From mrbm74 at gmail.com Sat Sep 1 03:47:04 2018 From: mrbm74 at gmail.com (Martin Bammer) Date: Sat, 1 Sep 2018 09:47:04 +0200 Subject: [Python-ideas] Add recordlcass to collections module Message-ID: Hi, what about adding recordclass (https://bitbucket.org/intellimath/recordclass) to the collections module It is like namedtuple, but elements are writable and it is written in C and thus much faster. And for convenience it could be named as namedlist. Regards, Martin From stefan_ml at behnel.de Sat Sep 1 04:01:06 2018 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sat, 1 Sep 2018 10:01:06 +0200 Subject: [Python-ideas] Why shouldn't Python be better at implementing Domain Specific Languages? In-Reply-To: References: Message-ID: Matthew Einhorn schrieb am 31.08.2018 um 20:57: > with model: > with Dense(): > units = 64 > activation = 'relu' > input_dim = 100 > > with Dense(): > units = 10 > activation = 'softmax' This looks like it could use 'class' instead of 'with'. Stefan From steve at pearwood.info Sat Sep 1 04:25:21 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Sat, 1 Sep 2018 18:25:21 +1000 Subject: [Python-ideas] Add recordlcass to collections module In-Reply-To: References: Message-ID: <20180901082516.GO27312@ando.pearwood.info> On Sat, Sep 01, 2018 at 09:47:04AM +0200, Martin Bammer wrote: > Hi, > > what about adding recordclass > (https://bitbucket.org/intellimath/recordclass) to the collections module The first thing you need to do is ask the author of that library whether or not he or she is willing to donate the library to the Python stdlib, which (among other things) means keeping to the same release schedule as the rest of the stdlib. > It is like namedtuple, but elements are writable and it is written in C > and thus much faster. Faster than what? > And for convenience it could be named as namedlist. Why? Is it a list? How or why is it better than dataclasses? -- Steve From jfine2358 at gmail.com Sat Sep 1 12:10:49 2018 From: jfine2358 at gmail.com (Jonathan Fine) Date: Sat, 1 Sep 2018 17:10:49 +0100 Subject: [Python-ideas] Add recordlcass to collections module In-Reply-To: References: Message-ID: Hi Martin Summary: Thank you. Your suggestion has good points. I suggest to advance it (i) provide a pure Python implementation of namedlist, and (ii) ask that the Python docs for namedtuple provide a link to namedlist. Thank you, Martin, for bringing https://bitbucket.org/intellimath/recordclass to this list's attention. Here's my first impressions. Here's the good things I've noticed (although not closely examined). 1. This is released software, available through pip. 2. There's a link on the page to an example in a Jupyter notebook. 3. That page gives performance statistics for the C-implementation. 4. The key idea is simple and well expressed. 5. The promoter (you) is not the package's author. Of all the suggestions made to this list, I'd say based on the above that this one is in the top quarter. The credit for this belong mostly, of course its author Zaur Shibzukhov. By the way, there's a mirror of the bitbucket repository here https://github.com/intellimath/recordclass. Here's my suggestions for going forward. They're based on my guess that there's some need for a mutable variant of named tuple, but not the same need for a C implementation. And they're based on what I like, rather than the opinions of many. 1. Produce a pure Python implementation of recordclass. 2. Instead, as you said, call it namedlist. 3. 
Write some docs for the new class, similar to https://docs.python.org/3/library/collections.html#collections.namedtuple 4. Once you've done 1-3 above, request that the Python docs reference the new class in the "See also" for named tuple. Mutable and immutable is, for me, a key concept in Python. Here's an easy way to 'modify' a tuple: >>> orig = tuple(range(5)); orig (0, 1, 2, 3, 4) >>> tmp = list(orig) >>> tmp = list(orig); tmp [0, 1, 2, 3, 4] >>> tmp[3] += tmp[1]; tmp[4] += tmp[2] >>> tmp [0, 1, 2, 4, 6] >>> result = tuple(tmp); result (0, 1, 2, 4, 6) Of course, 'modify' means create a new one, changed in some way. And if the original is a namedtuple, that it makes sense to use namedlist. Here are some final remarks. (All my own opinions, not deep truth.) 1. Focus on getting and meeting the expressed needs of users. A link from the Python docs will help here. 2. Don't worry about performance of the pure Python implementation. It won't hold back progress. 3. I'd personally like to see something like numpy, but for combinatorial rather than numerical computing. Perhaps the memoryslots.c (on which recordclass depends) might be useful here. But that's further in the future. Once again, thank you for Martin, for bringing this to our attention. And to Zaur for writing the software. -- best regards Jonathan From goosey15 at gmail.com Sat Sep 1 13:08:06 2018 From: goosey15 at gmail.com (Angus Hollands) Date: Sat, 1 Sep 2018 18:08:06 +0100 Subject: [Python-ideas] Add recordlcass to collections module In-Reply-To: References: Message-ID: > > From: "Steven D'Aprano" > To: python-ideas at python.org > Cc: > Bcc: > Date: Sat, 1 Sep 2018 18:25:21 +1000 > Subject: Re: [Python-ideas] Add recordlcass to collections module > On Sat, Sep 01, 2018 at 09:47:04AM +0200, Martin Bammer wrote: > > Hi, > > > > what about adding recordclass > > (https://bitbucket.org/intellimath/recordclass) to the collections > module > > The first thing you need to do is ask the author of that library whether > or not he or she is willing to donate the library to the Python stdlib, > which (among other things) means keeping to the same release schedule as > the rest of the stdlib. > > > > It is like namedtuple, but elements are writable and it is written in C > > and thus much faster. > > Faster than what? > > > > And for convenience it could be named as namedlist. > > Why? Is it a list? > > How or why is it better than dataclasses? > > > -- > Steve > > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas It would need to be a list of the members are mutable. As to the other questions, yes, do we need another module in the standard library?Angus > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jcgoble3 at gmail.com Sat Sep 1 13:32:44 2018 From: jcgoble3 at gmail.com (Jonathan Goble) Date: Sat, 1 Sep 2018 13:32:44 -0400 Subject: [Python-ideas] Add recordlcass to collections module In-Reply-To: References: Message-ID: On Sat, Sep 1, 2018 at 1:08 PM Angus Hollands wrote: > As to the other questions, yes, do we need another module in the standard > library? > Wouldn't need a new module. This would be a perfect fit for the existing collections module where namedtuple already resides. 
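
To make the idea concrete (and to show that a pure-Python version, as suggested
earlier in the thread, needs very little code), here is a rough sketch of such a
factory. It is illustrative only -- it is not the recordclass implementation, and
the `namedlist` name is just the placeholder being discussed:

    def namedlist(typename, field_names):
        "Mutable namedtuple-like factory (pure-Python sketch, not recordclass)."
        if isinstance(field_names, str):
            field_names = field_names.replace(',', ' ').split()
        fields = tuple(field_names)

        def __init__(self, *args):
            for name, value in zip(fields, args):
                setattr(self, name, value)

        def __iter__(self):
            return (getattr(self, name) for name in fields)

        def __getitem__(self, index):
            return getattr(self, fields[index])

        def __setitem__(self, index, value):
            setattr(self, fields[index], value)

        def __repr__(self):
            values = ', '.join('%s=%r' % (n, getattr(self, n)) for n in fields)
            return '%s(%s)' % (typename, values)

        namespace = {'__slots__': fields, '_fields': fields,
                     '__init__': __init__, '__iter__': __iter__,
                     '__getitem__': __getitem__, '__setitem__': __setitem__,
                     '__repr__': __repr__}
        return type(typename, (object,), namespace)

    Point = namedlist('Point', 'x y')
    p = Point(1, 2)
    p.x = 10                     # mutable, unlike namedtuple
    assert list(p) == [10, 2]    # still iterable and indexable
    assert p[1] == 2

Details such as keyword arguments, _asdict() and field-name validation are left
out deliberately; the point is only that a namedtuple-style mutable API does not
require C.
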
I Googled "pypi namedlist", and the top three results were all other implementations of named lists or something similar: - namedlist , which I have personally used and found extremely useful - list-property - mutabletuple Clearly the concept is useful enough to have several competing implementations on PyPI, and to me that is a point in favor of picking an implementation and adding it to the stdlib as the one obvious way to do it. So +1 from me. -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertve92 at gmail.com Sat Sep 1 13:37:49 2018 From: robertve92 at gmail.com (Robert Vanden Eynde) Date: Sat, 1 Sep 2018 19:37:49 +0200 Subject: [Python-ideas] Add recordlcass to collections module In-Reply-To: References: Message-ID: What's the difference between you proposition and dataclasses ? Introduced in Python 3.7 ? Le sam. 1 sept. 2018 ? 19:33, Jonathan Goble a ?crit : > On Sat, Sep 1, 2018 at 1:08 PM Angus Hollands wrote: > >> As to the other questions, yes, do we need another module in the standard >> library? >> > > Wouldn't need a new module. This would be a perfect fit for the existing > collections module where namedtuple already resides. > > I Googled "pypi namedlist", and the top three results were all other > implementations of named lists or something similar: > - namedlist , which I have > personally used and found extremely useful > - list-property > - mutabletuple > > Clearly the concept is useful enough to have several competing > implementations on PyPI, and to me that is a point in favor of picking an > implementation and adding it to the stdlib as the one obvious way to do it. > So +1 from me. > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jcgoble3 at gmail.com Sat Sep 1 13:44:18 2018 From: jcgoble3 at gmail.com (Jonathan Goble) Date: Sat, 1 Sep 2018 13:44:18 -0400 Subject: [Python-ideas] Add recordlcass to collections module In-Reply-To: References: Message-ID: On Sat, Sep 1, 2018 at 1:38 PM Robert Vanden Eynde wrote: > What's the difference between you proposition and dataclasses ? Introduced > in Python 3.7 ? > A named list would allow sequence operations such as iteration. Dataclasses, IIUC, do not support sequence operations. -------------- next part -------------- An HTML attachment was scrubbed... URL: From yaoxiansamma at gmail.com Sat Sep 1 14:22:11 2018 From: yaoxiansamma at gmail.com (Thautwarm Zhao) Date: Sun, 2 Sep 2018 02:22:11 +0800 Subject: [Python-ideas] Add recordlcass to collections module In-Reply-To: References: Message-ID: > > > ---------- Forwarded message ---------- > From: Martin Bammer > To: python-ideas at python.org > Cc: > Bcc: > Date: Sat, 1 Sep 2018 09:47:04 +0200 > Subject: [Python-ideas] Add recordlcass to collections module > Hi, > > what about adding recordclass > (https://bitbucket.org/intellimath/recordclass) to the collections module > > It is like namedtuple, but elements are writable and it is written in C > and thus much faster. > > And for convenience it could be named as namedlist. > > Regards, > > Martin > > There are a problem which prevent you to reach your goals. As list in Python is already that efficient, a wrapper of this list in C to supply so-called namedlist interface could not be that efficient. 
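
A rough way to see the attribute-access overhead in question (an illustrative
micro-benchmark only; absolute numbers vary by machine and Python version):

    from collections import namedtuple
    import timeit

    Point = namedtuple('Point', 'x y')
    p = Point(1, 2)

    # Plain indexing stays on the fast tuple path ...
    print(timeit.timeit('p[0]', globals={'p': p}))
    # ... while named access goes through LOAD_ATTR plus a descriptor,
    # which is typically noticeably slower.
    print(timeit.timeit('p.x', globals={'p': p}))
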
Member accessing in Python bytecode requires looking up corresponding attribute in a hashtable, if you want to access a list by attribute, actually there is an overhead. This is nothing to do with C implementation. See https://docs.python.org/3/library/dis.html#opcode-LOAD_ATTR -------------- next part -------------- An HTML attachment was scrubbed... URL: From jfine2358 at gmail.com Sat Sep 1 14:35:06 2018 From: jfine2358 at gmail.com (Jonathan Fine) Date: Sat, 1 Sep 2018 19:35:06 +0100 Subject: [Python-ideas] Fix some special cases in Fractions? In-Reply-To: References: <00acb756cab343ee80274f5de606572d@xmail103.UGent.be> <5B8786CE.4070604@UGent.be> <5B8789C3.6020803@canterbury.ac.nz> Message-ID: Greg Ewing and Jonathan Goble wrote >> Also, Fraction(1) for the second case would be flat-out wrong. > How? Raising something to the 2/3 power means squaring it and then taking > the cube root of it. -1 squared is 1, and the cube root of 1 is 1. Or am I > having a 2:30am brain fart? Let's see. What about computing the Fraction(2, 4) power by first squaring and then taking the fourth root. Let's start with (-16). Square to get +256. And then the fourth root is +4. I've just followed process Jonathan G suggested, without noticing that Fraction(2, 4) is equal to Fraction(1, 2). But Fraction(1, 2) is the square root. And -16 requires complex numbers for its square root. The problem, I think, may not be doing something sensible in any particular case. Rather, it could be doing something sensible and coherent in all cases. A bit like trying to fit a carpet that is cut to the wrong size for the room. -- Jonathan From steve at pearwood.info Sat Sep 1 20:18:30 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Sun, 2 Sep 2018 10:18:30 +1000 Subject: [Python-ideas] Add recordlcass to collections module In-Reply-To: References: Message-ID: <20180902001830.GQ27312@ando.pearwood.info> On Sat, Sep 01, 2018 at 05:10:49PM +0100, Jonathan Fine wrote: > Hi Martin > > Summary: Thank you. Your suggestion has good points. I suggest to > advance it (i) provide a pure Python implementation of namedlist, and > (ii) ask that the Python docs for namedtuple provide a link to > namedlist. Before Martin (and you) get carried away doing these things, there's a lot more to do first. For starters, how about answering the questions I asked? Recapping: - The package author describes this as a record class, not a list, and it doesn't seem to support any list operations, so why do you and Martin want to change the name to namedlist? - What would it mean to insert, sort, append etc named items in a list? When would you want to do it? - Have you asked the author what he thinks about putting it in the standard library? (The author calls it a "proof of concept", and it is version 0.5. That doesn't sound like the author considers this a mature product.) - How is this different from data classes? If the answer is that this supports iteration, why not add iteration to data classes? See this thread: https://mail.python.org/pipermail/python-ideas/2018-August/052683.html [...] > 1. Focus on getting and meeting the expressed needs of users. A link > from the Python docs will help here. It's not the job of the Python docs to link to every and any third-party package that somebody might find useful. It might -- perhaps -- make sense for the docs to mention or link to third-party libraries such as numpy which are widely recognised as "best of breed". (Not that numpy needs a link from the std lib.) 
But in general it is hardly fair for us to single out some arbitrary
third-party libraries for official recognition while other libraries, perhaps
better or more worthy, are ignored. Put yourself in the shoes of somebody who
has worked hard to get a package into a mature state, and then the Python docs
start linking to a competing alpha-quality package just because by pure
chance, that was the package that got mentioned on Python-Ideas first.


-- 
Steve

From szport at gmail.com Sun Sep 2 14:24:19 2018
From: szport at gmail.com (Zaur Shibzukhov)
Date: Sun, 2 Sep 2018 11:24:19 -0700 (PDT)
Subject: [Python-ideas] Add recordlcass to collections module
In-Reply-To: 
References: 
Message-ID: <98478a89-fc8a-4ae3-9810-bbacf9844938@googlegroups.com>

As the author of `recordclass` I would like to shed some light...

Recordclass originated as a response to the
[question](https://stackoverflow.com/questions/29290359/existence-of-mutable-named-tuple-in-python/29419745#29419745)
on stackoverflow.

`Recordclass` was conceived and implemented as a type that, by API, memory
and speed, would be completely identical to `namedtuple`, except that it
would support assignment, so that any element could be replaced without
creating a new instance, as is required with `namedtuple`. That is, it would
be almost identical to `namedtuple` but also support assignment
(`__setitem__` / `__setslice__`).

The efficiency of `namedtuple` is based on the efficiency of the `tuple`
type in Python. In order to achieve the same efficiency it was necessary to
create the type `memoryslots`. Its structure (`PyMemorySlotsObject`) is
identical to the structure of `tuple` (`PyTupleObject`) and therefore takes
up the same amount of memory as `tuple`.

`Recordclass` is defined on top of `memoryslots`, just like `namedtuple` on
top of `tuple`. Attributes are accessed via a descriptor (`itemgetset`),
which supports both `__get__` and `__set__` by the element index.

The class generated by `recordclass` is:

```
from recordclass import memoryslots, itemgetset

class C(memoryslots):
    __slots__ = ()

    _fields = ('attr_1', ..., 'attr_m')

    attr_1 = itemgetset(0)
    ...
    attr_m = itemgetset(m-1)

    def __new__(cls, attr_1, ..., attr_m):
        'Create new instance of {typename} ({arg_list})'
        return memoryslots.__new__(cls, attr_1, ..., attr_m)
```

etc., following the `namedtuple` definition scheme.

As a result, `recordclass` takes up as much memory as `namedtuple`, and it
supports quick access via `__getitem__` / `__setitem__` and by attribute
name via the descriptor protocol.

Regards,

Zaur

On Saturday, September 1, 2018 at 10:48:07 UTC+3, Martin Bammer wrote:
>
> Hi, 
>
> what about adding recordclass 
> (https://bitbucket.org/intellimath/recordclass) to the collections module 
>
> It is like namedtuple, but elements are writable and it is written in C 
> and thus much faster. 
>
> And for convenience it could be named as namedlist. 
>
> Regards, 
>
> Martin 
>
>
> _______________________________________________ 
> Python-ideas mailing list 
> Python... at python.org 
> https://mail.python.org/mailman/listinfo/python-ideas 
> Code of Conduct: http://python.org/psf/codeofconduct/ 
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mrbm74 at gmail.com Sun Sep 2 16:56:50 2018
From: mrbm74 at gmail.com (Martin Bammer)
Date: Sun, 2 Sep 2018 22:56:50 +0200
Subject: [Python-ideas] Add recordlcass to collections module
In-Reply-To: <20180902001830.GQ27312@ando.pearwood.info>
References: <20180902001830.GQ27312@ando.pearwood.info>
Message-ID: <13706067-2033-aec9-69df-966785bf3549@gmail.com>

Hi,

the intention of my first mail was to start a discussion about this topic,
about the pros and cons and possible alternatives. As long as it is not
clear that recordclass, or something like it, is accepted for the
collections module I do not want to spend any effort on this.

My wish that the collections module gets something like namedtuple, but
writable, is based on my personal experience: when projects become bigger
and data structures more complex, it is sometimes useful to name items
rather than just use an index. This improves readability and makes
development and maintenance of the code easier.

Another important topic for me is performance. When I write applications
they should finish their tasks quickly. The performance of recordclass was
one reason for me to use it (some benchmarks with Python 2 can be found here
https://gist.github.com/grantjenks/a06da0db18826be1176c31c95a6ee572).
I've done some more recent and additional benchmarks with Python 3.7 on
Linux which you can find here https://github.com/brmmm3/compare-recordclass.
These new benchmarks show that namedtuple is as fast as recordclass in all
cases except named attribute access, which is faster with recordclass.

Compared to dataclass:
dataclass wins only on the topic of object size. When it comes to speed and
functionality (indexing, sorting) dataclass would be my last choice. Yes, it
is possible to make dataclass fast by using __slots__, but that is always
extra programming effort. namedtuple and recordclass are easy to use with
small effort.

Adding new items:
This is not possible with namedtuple and also not possible with
recordclass. I see no reason why a namedlist should support this, because
with these object types you define new object types and these types should
not change.

I hope 3.8 will get a namedlist and maybe it will be the recordclass module
(currently my choice). As the author of this module has already responded to
this discussion I hope he is willing to contribute his code to the Python
project.

Best regards,

Martin


From wes.turner at gmail.com Sun Sep 2 18:02:12 2018
From: wes.turner at gmail.com (Wes Turner)
Date: Sun, 2 Sep 2018 18:02:12 -0400
Subject: [Python-ideas] Add recordlcass to collections module
In-Reply-To: 
References: <98478a89-fc8a-4ae3-9810-bbacf9844938@googlegroups.com>
Message-ID: 

On Sunday, September 2, 2018, Zaur Shibzukhov wrote:

>
>
> ---
> *Zaur Shibzukhov*
>
>
> 2018-09-02 22:11 GMT+03:00 Wes Turner :
>
>> Does the value of __hash__ change when attributes of a recordclass change?
>>
>
> Currently recordclass's __hash__ didn't implemented.
>

https://docs.python.org/3/glossary.html#term-hashable

https://docs.python.org/3/reference/datamodel.html#object.__hash__

http://www.attrs.org/en/stable/hashing.html

>
>> On Sunday, September 2, 2018, Zaur Shibzukhov wrote:
>>
>>> As the author of `recordclass` I would like to shed some light...
>>>
>>> Recorclass originated as a response to the [question](
>>> https://stackoverflow.com/questions/29290359/exis
>>> tence-of-mutable-named-tuple-in-python/29419745#29419745) on
>>> stackoverflow.
>>> >>> `Recordclass` was conceived and implemented as a type that, by api, >>> memory and speed, would be completely identical to` namedtuple`, except >>> that it would support an assignment in which any element could be replaced >>> without creating a new instance, as in ` namedtuple`. Those. would be >>> almost identical to `namedtuple` and support the assignment (` __setitem__` >>> / `setslice__`). >>> >>> The effectiveness of namedtuple is based on the effectiveness of the >>> `tuple` type in python. In order to achieve the same efficiency it was >>> necessary to create a type `memoryslots`. Its structure >>> (`PyMemorySlotsObject`) is identical to the structure of` tuple` >>> (`PyTupleObject`) and therefore takes up the same amount of memory as` >>> tuple`. >>> >>> `Recordclass` is defined on top of` memoryslots` just like `namedtuple` >>> above` tuple`. Attributes are accessed via a descriptor (`itemgetset`), >>> which supports both` __get__` and `__set__` by the element index. >>> >>> The class generated by `recordclass` is: >>> >>> `` ` >>> from recordclass import memoryslots, itemgetset >>> >>> class C (memoryslots): >>> __slots__ = () >>> >>> _fields = ('attr_1', ..., 'attr_m') >>> >>> attr_1 = itemgetset (0) >>> ... >>> attr_m = itemgetset (m-1) >>> >>> def __new __ (cls, attr_1, ..., attr_m): >>> 'Create new instance of {typename} ({arg_list})' >>> return memoryslots .__ new __ (cls, attr_1, ..., attr_m) >>> `` ` >>> etc. following the `namedtuple` definition scheme. >>> >>> As a result, `recordclass` takes up as much memory as` namedtuple`, it >>> supports quick access by `__getitem__` /` __setitem__` and by attribute >>> name via the protocol of the descriptors. >>> >>> Regards, >>> >>> Zaur >>> >>> ???????, 1 ???????? 2018 ?., 10:48:07 UTC+3 ???????????? Martin Bammer >>> ???????: >>>> >>>> Hi, >>>> >>>> what about adding recordclass >>>> (https://bitbucket.org/intellimath/recordclass) to the collections >>>> module >>>> >>>> It is like namedtuple, but elements are writable and it is written in C >>>> and thus much faster. >>>> >>>> And for convenience it could be named as namedlist. >>>> >>>> Regards, >>>> >>>> Martin >>>> >>>> >>>> _______________________________________________ >>>> Python-ideas mailing list >>>> Python... at python.org >>>> https://mail.python.org/mailman/listinfo/python-ideas >>>> Code of Conduct: http://python.org/psf/codeofconduct/ >>>> >>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg.ewing at canterbury.ac.nz Sun Sep 2 19:09:54 2018 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Mon, 03 Sep 2018 11:09:54 +1200 Subject: [Python-ideas] Add recordlcass to collections module In-Reply-To: <98478a89-fc8a-4ae3-9810-bbacf9844938@googlegroups.com> References: <98478a89-fc8a-4ae3-9810-bbacf9844938@googlegroups.com> Message-ID: <5B8C6DC2.3090503@canterbury.ac.nz> Zaur Shibzukhov wrote: > `Recordclass` is defined on top of` memoryslots` just like `namedtuple` > above` tuple`. Attributes are accessed via a descriptor (`itemgetset`), > which supports both` __get__` and `__set__` by the element index. > > As a result, `recordclass` takes up as much memory as` namedtuple`, it > supports quick access by `__getitem__` /` __setitem__` and by attribute > name via the protocol of the descriptors. I'm not sure why you need a new C-level type for this. Couldn't you get the same effect just by using __slots__? e.g. 
class C:

    __slots__ = ('attr_1', ..., 'attr_m')

    def __init__(self, attr_1, ..., attr_m):
        self.attr_1 = attr_1
        ...
        self.attr_m = attr_m

-- 
Greg

From steve at pearwood.info Sun Sep 2 19:49:42 2018
From: steve at pearwood.info (Steven D'Aprano)
Date: Mon, 3 Sep 2018 09:49:42 +1000
Subject: [Python-ideas] Add recordlcass to collections module
In-Reply-To: <13706067-2033-aec9-69df-966785bf3549@gmail.com>
References: <20180902001830.GQ27312@ando.pearwood.info>
 <13706067-2033-aec9-69df-966785bf3549@gmail.com>
Message-ID: <20180902234942.GT27312@ando.pearwood.info>

On Sun, Sep 02, 2018 at 10:56:50PM +0200, Martin Bammer wrote:

> Compared to dataclass:
> dataclass wins only on the topic object size. When it comes to speed and
> functionality (indexing, sorting) dataclass would be my last choice.

I see no sign that recordclass supports sorting. (But I admit that I
haven't tried it.) What would it mean to sort a recordclass?

    Person = recordclass('Person', 'personalname familyname address')
    fred = Person("Fred", "Anderson", "123 Main Street")
    fred.sort()
    print(fred)
    => output:
    Person(personalname='123 Main Street', familyname='Anderson', address='Fred')

[...]
> Adding new items:
> This is not possible with namedtuple and also not possible with
> recordclass. I see no reason why a namedlist should support this,

If you want to change the name and call it a "list", then it needs to
support the same things that lists support.

> because with these object types you define new object types and these
> types should not change.

Sorry, I don't understand that. How do you get "no insertions" from
"can't change the type"? A list remains a list when you insert into it.

In case it isn't clear, I think there is zero justification for
renaming recordclass to namedlist. I don't think "named list" makes
sense as a concept, and recordclass surely doesn't implement a
list-like interface.

As for the idea of adding a recordclass or mutable-namedtuple or
whatever to the stdlib, the idea seems reasonable but it's not clear to
me that dataclass wouldn't be suitable.

-- 
Steve

From arj.python at gmail.com Mon Sep 3 01:58:14 2018
From: arj.python at gmail.com (Abdur-Rahmaan Janhangeer)
Date: Mon, 3 Sep 2018 09:58:14 +0400
Subject: [Python-ideas] Why shouldn't Python be better at implementing Domain Specific Languages?
In-Reply-To: <20180831155433.GL27312@ando.pearwood.info>
References: <20180831155433.GL27312@ando.pearwood.info>
Message-ID: 

wrote a quick article here:
https://www.pythonmembers.club/2018/09/03/how-to-create-your-own-dsldomain-specific-language-in-python/

-- 
Abdur-Rahmaan Janhangeer
https://github.com/abdur-rahmaanj
Mauritius
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From wes.turner at gmail.com Mon Sep 3 03:00:14 2018
From: wes.turner at gmail.com (Wes Turner)
Date: Mon, 3 Sep 2018 03:00:14 -0400
Subject: [Python-ideas] Executable space protection: NX bit,
Message-ID: 

Rationale
=========
- Separation of executable code and non-executable data is a good thing.
- Additional security in Python is a good idea.
- Python should support things like the NX bit to separate code and
non-executable data.

Discussion
==========
How could Python implement support for the NX bit? (And/or additional
modern security measures; as appropriate).

What sort of an API would C extensions need?

Would this be easier in PyPy or in CPython?
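
As a starting point for discussion, here is a rough, Linux-only sketch (it
assumes /proc is available) that inspects the current process for pages that
are both writable and executable -- the combination that NX/W^X policies are
meant to rule out:

    def writable_executable_mappings():
        "Return /proc/self/maps lines whose pages are writable and executable."
        regions = []
        with open('/proc/self/maps') as maps:
            for line in maps:
                perms = line.split()[1]   # e.g. 'r-xp', 'rw-p', 'rwxp'
                if 'w' in perms and 'x' in perms:
                    regions.append(line.rstrip())
        return regions

    if __name__ == '__main__':
        bad = writable_executable_mappings()
        print('W+X mappings found:', len(bad))
        for region in bad:
            print(' ', region)

A stock CPython typically shows no such mappings, since the interpreter does
not generate machine code at runtime; a JIT such as PyPy's is where this gets
interesting.
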
- https://en.wikipedia.org/wiki/NX_bit - https://en.wikipedia.org/wiki/Executable_space_protection Here's one way to identify whether an executable supports NX: https://github.com/longld/peda/blob/e0eb0af4bcf3ee/peda.py#L2543 -------------- next part -------------- An HTML attachment was scrubbed... URL: From j.van.dorp at deonet.nl Mon Sep 3 03:00:59 2018 From: j.van.dorp at deonet.nl (Jacco van Dorp) Date: Mon, 3 Sep 2018 09:00:59 +0200 Subject: [Python-ideas] Fix some special cases in Fractions? In-Reply-To: References: <00acb756cab343ee80274f5de606572d@xmail103.UGent.be> <5B8786CE.4070604@UGent.be> <5B8789C3.6020803@canterbury.ac.nz> Message-ID: If we wanted to be mathematically correct, taking a 4th root should give you 4 answers. You could return a tuple of (4, -4, 4j, -4j) for a 4th root of 256. It actually makes power to a Fraction(2, 4) unequal with a Fraction(1, 2) calculating this way. (which, from what I can tell, is exactly your point - don't just take a power and a square root for a fractional power, reduce it to a float or whatever first to get well-defined behaviour. ) But full mathematical correctness is probably not what we want either. Current behaviour IMO is the only solution that returns only 1 answer but does something sensible in all cases. Op za 1 sep. 2018 om 20:35 schreef Jonathan Fine : > Greg Ewing and Jonathan Goble wrote > > >> Also, Fraction(1) for the second case would be flat-out wrong. > > > How? Raising something to the 2/3 power means squaring it and then taking > > the cube root of it. -1 squared is 1, and the cube root of 1 is 1. Or am > I > > having a 2:30am brain fart? > > Let's see. What about computing the Fraction(2, 4) power by first > squaring and then taking the fourth root. Let's start with (-16). > Square to get +256. And then the fourth root is +4. I've just followed > process Jonathan G suggested, without noticing that Fraction(2, 4) is > equal to Fraction(1, 2). > > But Fraction(1, 2) is the square root. And -16 requires complex > numbers for its square root. The problem, I think, may not be doing > something sensible in any particular case. Rather, it could be doing > something sensible and coherent in all cases. A bit like trying to fit > a carpet that is cut to the wrong size for the room. > > -- > Jonathan > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From j.van.dorp at deonet.nl Mon Sep 3 03:23:59 2018 From: j.van.dorp at deonet.nl (Jacco van Dorp) Date: Mon, 3 Sep 2018 09:23:59 +0200 Subject: [Python-ideas] Add recordlcass to collections module In-Reply-To: <20180902234942.GT27312@ando.pearwood.info> References: <20180902001830.GQ27312@ando.pearwood.info> <13706067-2033-aec9-69df-966785bf3549@gmail.com> <20180902234942.GT27312@ando.pearwood.info> Message-ID: This feels really useful to me to make some quick changes to a database - perhaps a database layer could return an class of type Recordclass, and then you just simply mutate it and shove it back into the database. 
Pseudocode: record = database.execute("SELECT * FROM mytable WHERE primary_key = 15") record.mostRecentLoggedInTime = time.time() database.execute(f"UPDATE mytable SET mostRecentLoggedInTime = {record.mostRecentLoggedInTime} WHERE primary_key = {record.primary_key}":) Or any smart database wrapper might just go: database.updateOrInsert(table = mytable, record = record) And be smart enough to figure out that we already have a primary key unequal to some sentinel value like None, and do an update, while it could do an insert if the primary key WAS some kind of sentinel value. which is something I really wanted to do in the past with namedTuples, but had to use dicts for instead. Also, it's rather clear that namedList is a really bad name for a Recordclass. It's cleary not intended to be a list. It's a record you can take out from somewhere, mutate, and push back in. We often use namedTuples as records now, but we can't just mutate those to shove them back in - you have to make new ones, and unless you write a smart wrapper for database handling yourself, you can't just shove them in either. Recordclass could be the gateway drug to a smart database access layer that reduces the amount of SQL we need to write - and that's a good thing in my opinion. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mistersheik at gmail.com Mon Sep 3 03:28:46 2018 From: mistersheik at gmail.com (Neil Girdhar) Date: Mon, 3 Sep 2018 00:28:46 -0700 (PDT) Subject: [Python-ideas] Why shouldn't Python be better at implementing Domain Specific Languages? In-Reply-To: References: Message-ID: I prefer Python's syntax. In the Keras example, Python pays off compared to the XML or YAML or whatever as soon as you need to define something programmatically. For example, if your model is generated based on some other input. Anyway, most of your time is not spent typing punctuation. Most of your time is spent debugging. I wish that LaTeX, the DSLs I use, were implemented as a Python package. There are a thousand terrible design decision in latex that could all be fixed if it had been done as one good Python package. Maybe LuaTex will end up fullfilling the dream. On Thursday, August 30, 2018 at 9:41:46 PM UTC-4, Guido van Rossum wrote: > > On Fri, Aug 31, 2018 at 3:19 AM, Michael Selik > wrote: > >> On Thu, Aug 30, 2018 at 5:31 PM James Lu > >> wrote: >> >>> It would be nice if there was a DSL for describing neural networks >>> (Keras). >>> >>> model.add(Dense(units=64, activation='relu', input_dim=100)) >>> model.add(Dense(units=10, activation='softmax')) >>> >>> >> Why not JSON or XML for cross-language compatibility? >> > > Presumably because those are even harder to read and write for humans. > > I believe that the key issue with using Python as a DSL has to do with its > insistence on punctuation -- the above example uses nested parentheses, > commas, equal signs, and quotation marks. Those are in general needed to > avoid ambiguities, but DSLs are often highly stylized, and a language that > doesn't need them has a certain advantage. For example if a shell-like > language was adopted, the above could probably be written with spaces > instead of commas, parentheses and equal signs, and dropping the quotes > (though perhaps it would be more readable if the equal signs were kept). > > I'm not sure how we would go about this though. IIRC there was a proposal > once to allow top-level function calls to be written without parentheses, > but it was too hard to make it unambiguous (e.g. 
would "foo +1" mean > "foo(+1)" or "foo + 1"?) > > -- > --Guido van Rossum (python.org/~guido) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jcgoble3 at gmail.com Mon Sep 3 03:41:10 2018 From: jcgoble3 at gmail.com (Jonathan Goble) Date: Mon, 3 Sep 2018 03:41:10 -0400 Subject: [Python-ideas] Add recordlcass to collections module In-Reply-To: References: <20180902001830.GQ27312@ando.pearwood.info> <13706067-2033-aec9-69df-966785bf3549@gmail.com> <20180902234942.GT27312@ando.pearwood.info> Message-ID: On Mon, Sep 3, 2018 at 3:25 AM Jacco van Dorp wrote: > Also, it's rather clear that namedList is a really bad name for a > Recordclass. It's cleary not intended to be a list. It's a record you can > take out from somewhere, mutate, and push back in. > So call it "namedrecord", perhaps? -------------- next part -------------- An HTML attachment was scrubbed... URL: From wes.turner at gmail.com Mon Sep 3 03:59:53 2018 From: wes.turner at gmail.com (Wes Turner) Date: Mon, 3 Sep 2018 03:59:53 -0400 Subject: [Python-ideas] Add recordlcass to collections module In-Reply-To: References: <20180902001830.GQ27312@ando.pearwood.info> <13706067-2033-aec9-69df-966785bf3549@gmail.com> <20180902234942.GT27312@ando.pearwood.info> Message-ID: On Mon, Sep 3, 2018 at 3:25 AM Jacco van Dorp wrote: > This feels really useful to me to make some quick changes to a database - > perhaps a database layer could return an class of type Recordclass, and > then you just simply mutate it and shove it back into the database. > Pseudocode: > > record = database.execute("SELECT * FROM mytable WHERE primary_key = 15") > record.mostRecentLoggedInTime = time.time() > database.execute(f"UPDATE mytable SET mostRecentLoggedInTime = > {record.mostRecentLoggedInTime} WHERE primary_key = {record.primary_key}":) > > Or any smart database wrapper might just go: > > database.updateOrInsert(table = mytable, record = record) > > And be smart enough to figure out that we already have a primary key > unequal to some sentinel value like None, and do an update, while it could > do an insert if the primary key WAS some kind of sentinel value. > SQLAlchemy.orm solves for this (with evented objects with evented attributes): http://docs.sqlalchemy.org/en/latest/orm/session_state_management.html#session-object-states - Transient, Pending, Persistent, Deleted, Detached http://docs.sqlalchemy.org/en/latest/orm/session_api.html#sqlalchemy.orm.attributes.flag_modified - flag_modified isn't necessary in most cases because attribute mutation on mapped classes deriving from Base(declarative_base()) is evented http://docs.sqlalchemy.org/en/latest/orm/session_events.html#attribute-change-events http://docs.sqlalchemy.org/en/latest/orm/tutorial.html There are packages for handling attribute states with the Django ORM, as well: - https://github.com/romgar/django-dirtyfields - https://github.com/Suor/django-dirty What would be the performance impact of instead subclassing from recordclass? IDK. pyrsistent.PRecord(PMap) is immutable and supports .attribute access: https://github.com/tobgu/pyrsistent#precord -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rosuav at gmail.com Mon Sep 3 04:16:45 2018 From: rosuav at gmail.com (Chris Angelico) Date: Mon, 3 Sep 2018 18:16:45 +1000 Subject: [Python-ideas] Add recordlcass to collections module In-Reply-To: References: <20180902001830.GQ27312@ando.pearwood.info> <13706067-2033-aec9-69df-966785bf3549@gmail.com> <20180902234942.GT27312@ando.pearwood.info> Message-ID: On Mon, Sep 3, 2018 at 5:23 PM, Jacco van Dorp wrote: > This feels really useful to me to make some quick changes to a database - > perhaps a database layer could return an class of type Recordclass, and then > you just simply mutate it and shove it back into the database. Pseudocode: > > record = database.execute("SELECT * FROM mytable WHERE primary_key = 15") > record.mostRecentLoggedInTime = time.time() > database.execute(f"UPDATE mytable SET mostRecentLoggedInTime = > {record.mostRecentLoggedInTime} WHERE primary_key = {record.primary_key}":) > > Or any smart database wrapper might just go: > > database.updateOrInsert(table = mytable, record = record) > > And be smart enough to figure out that we already have a primary key unequal > to some sentinel value like None, and do an update, while it could do an > insert if the primary key WAS some kind of sentinel value. In its purest form, what you're asking for is an "upsert" or "merge" operation: https://en.wikipedia.org/wiki/Merge_(SQL) In a multi-user transactional database, there are some fundamentally hard problems to implementing a merge. I'm not 100% certain, so I won't say "impossible", but it is certainly *extremely difficult* to implement an operation like this in application-level software without some form of race condition. ChrisA From wes.turner at gmail.com Mon Sep 3 04:31:51 2018 From: wes.turner at gmail.com (Wes Turner) Date: Mon, 3 Sep 2018 04:31:51 -0400 Subject: [Python-ideas] Add recordlcass to collections module In-Reply-To: References: <20180902001830.GQ27312@ando.pearwood.info> <13706067-2033-aec9-69df-966785bf3549@gmail.com> <20180902234942.GT27312@ando.pearwood.info> Message-ID: On Mon, Sep 3, 2018 at 4:17 AM Chris Angelico wrote: > On Mon, Sep 3, 2018 at 5:23 PM, Jacco van Dorp > wrote: > > This feels really useful to me to make some quick changes to a database - > > perhaps a database layer could return an class of type Recordclass, and > then > > you just simply mutate it and shove it back into the database. > Pseudocode: > > > > record = database.execute("SELECT * FROM mytable WHERE primary_key = 15") > > record.mostRecentLoggedInTime = time.time() > > database.execute(f"UPDATE mytable SET mostRecentLoggedInTime = > > {record.mostRecentLoggedInTime} WHERE primary_key = > {record.primary_key}":) > > > > Or any smart database wrapper might just go: > > > > database.updateOrInsert(table = mytable, record = record) > > > > And be smart enough to figure out that we already have a primary key > unequal > > to some sentinel value like None, and do an update, while it could do an > > insert if the primary key WAS some kind of sentinel value. > > In its purest form, what you're asking for is an "upsert" or "merge" > operation: > > https://en.wikipedia.org/wiki/Merge_(SQL) > > In a multi-user transactional database, there are some fundamentally > hard problems to implementing a merge. I'm not 100% certain, so I > won't say "impossible", but it is certainly *extremely difficult* to > implement an operation like this in application-level software without > some form of race condition. 
> http://docs.sqlalchemy.org/en/latest/orm/contextual.html#contextual-thread-local-sessions - scoped_session http://docs.sqlalchemy.org/en/latest/orm/session_state_management.html#merging http://docs.sqlalchemy.org/en/latest/orm/session_basics.html obj = ExampleObject(attr='value') assert obj.id is None session.add(obj) session.flush() assert obj.id is not None session.commit() -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosuav at gmail.com Mon Sep 3 04:39:21 2018 From: rosuav at gmail.com (Chris Angelico) Date: Mon, 3 Sep 2018 18:39:21 +1000 Subject: [Python-ideas] Add recordlcass to collections module In-Reply-To: References: <20180902001830.GQ27312@ando.pearwood.info> <13706067-2033-aec9-69df-966785bf3549@gmail.com> <20180902234942.GT27312@ando.pearwood.info> Message-ID: On Mon, Sep 3, 2018 at 6:31 PM, Wes Turner wrote: > > > On Mon, Sep 3, 2018 at 4:17 AM Chris Angelico wrote: >> >> On Mon, Sep 3, 2018 at 5:23 PM, Jacco van Dorp >> wrote: >> > This feels really useful to me to make some quick changes to a database >> > - >> > perhaps a database layer could return an class of type Recordclass, and >> > then >> > you just simply mutate it and shove it back into the database. >> > Pseudocode: >> > >> > record = database.execute("SELECT * FROM mytable WHERE primary_key = >> > 15") >> > record.mostRecentLoggedInTime = time.time() >> > database.execute(f"UPDATE mytable SET mostRecentLoggedInTime = >> > {record.mostRecentLoggedInTime} WHERE primary_key = >> > {record.primary_key}":) >> > >> > Or any smart database wrapper might just go: >> > >> > database.updateOrInsert(table = mytable, record = record) >> > >> > And be smart enough to figure out that we already have a primary key >> > unequal >> > to some sentinel value like None, and do an update, while it could do an >> > insert if the primary key WAS some kind of sentinel value. >> >> In its purest form, what you're asking for is an "upsert" or "merge" >> operation: >> >> https://en.wikipedia.org/wiki/Merge_(SQL) >> >> In a multi-user transactional database, there are some fundamentally >> hard problems to implementing a merge. I'm not 100% certain, so I >> won't say "impossible", but it is certainly *extremely difficult* to >> implement an operation like this in application-level software without >> some form of race condition. > > > http://docs.sqlalchemy.org/en/latest/orm/contextual.html#contextual-thread-local-sessions > - scoped_session > > http://docs.sqlalchemy.org/en/latest/orm/session_state_management.html#merging > > http://docs.sqlalchemy.org/en/latest/orm/session_basics.html > > obj = ExampleObject(attr='value') > assert obj.id is None > session.add(obj) > session.flush() > assert obj.id is not None > session.commit() Yep. What does it do if it's on a back-end database that doesn't provide a merge/upsort intrinsic? What if you have a multi-column primary key? There are, of course, easier sub-forms of this (eg you mandate that the PK be a single column and be immutable), but if there is any chance that any other client might simultaneously be changing the PK of your row, a perfectly reliable upsert/merge basically depends on the DB itself providing that functionality. 
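
For example, where the backend does provide it, the application-level
SELECT-then-UPDATE race goes away entirely. A sketch using the sqlite3 module
(this needs the bundled SQLite to be 3.24 or newer; PostgreSQL and MySQL spell
the same idea differently):

    import sqlite3
    import time

    conn = sqlite3.connect(':memory:')
    conn.execute('CREATE TABLE mytable ('
                 'primary_key INTEGER PRIMARY KEY, '
                 'mostRecentLoggedInTime REAL)')

    def record_login(conn, pk, when):
        # One atomic statement: insert the row, or update it if it exists.
        conn.execute(
            'INSERT INTO mytable (primary_key, mostRecentLoggedInTime) '
            'VALUES (?, ?) '
            'ON CONFLICT(primary_key) DO UPDATE SET '
            'mostRecentLoggedInTime = excluded.mostRecentLoggedInTime',
            (pk, when))

    record_login(conn, 15, time.time())   # first call inserts
    record_login(conn, 15, time.time())   # second call updates the same row
    conn.commit()
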
ChrisA From wes.turner at gmail.com Mon Sep 3 05:22:18 2018 From: wes.turner at gmail.com (Wes Turner) Date: Mon, 3 Sep 2018 05:22:18 -0400 Subject: [Python-ideas] Add recordlcass to collections module In-Reply-To: References: <20180902001830.GQ27312@ando.pearwood.info> <13706067-2033-aec9-69df-966785bf3549@gmail.com> <20180902234942.GT27312@ando.pearwood.info> Message-ID: On Mon, Sep 3, 2018 at 4:40 AM Chris Angelico wrote: > On Mon, Sep 3, 2018 at 6:31 PM, Wes Turner wrote: > > > > > > On Mon, Sep 3, 2018 at 4:17 AM Chris Angelico wrote: > >> > >> On Mon, Sep 3, 2018 at 5:23 PM, Jacco van Dorp > >> wrote: > >> > This feels really useful to me to make some quick changes to a > database > >> > - > >> > perhaps a database layer could return an class of type Recordclass, > and > >> > then > >> > you just simply mutate it and shove it back into the database. > >> > Pseudocode: > >> > > >> > record = database.execute("SELECT * FROM mytable WHERE primary_key = > >> > 15") > >> > record.mostRecentLoggedInTime = time.time() > >> > database.execute(f"UPDATE mytable SET mostRecentLoggedInTime = > >> > {record.mostRecentLoggedInTime} WHERE primary_key = > >> > {record.primary_key}":) > >> > > >> > Or any smart database wrapper might just go: > >> > > >> > database.updateOrInsert(table = mytable, record = record) > >> > > >> > And be smart enough to figure out that we already have a primary key > >> > unequal > >> > to some sentinel value like None, and do an update, while it could do > an > >> > insert if the primary key WAS some kind of sentinel value. > >> > >> In its purest form, what you're asking for is an "upsert" or "merge" > >> operation: > >> > >> https://en.wikipedia.org/wiki/Merge_(SQL) > >> > >> In a multi-user transactional database, there are some fundamentally > >> hard problems to implementing a merge. I'm not 100% certain, so I > >> won't say "impossible", but it is certainly *extremely difficult* to > >> implement an operation like this in application-level software without > >> some form of race condition. > > > > > > > http://docs.sqlalchemy.org/en/latest/orm/contextual.html#contextual-thread-local-sessions > > - scoped_session > > > > > http://docs.sqlalchemy.org/en/latest/orm/session_state_management.html#merging > > > > http://docs.sqlalchemy.org/en/latest/orm/session_basics.html > > > > obj = ExampleObject(attr='value') > > assert obj.id is None > > session.add(obj) > > session.flush() > > assert obj.id is not None > > session.commit() > > Yep. What does it do if it's on a back-end database that doesn't > provide a merge/upsort intrinsic? What if you have a multi-column > primary key? There are, of course, easier sub-forms of this (eg you > mandate that the PK be a single column and be immutable), but if there > is any chance that any other client might simultaneously be changing > the PK of your row, a perfectly reliable upsert/merge basically > depends on the DB itself providing that functionality. > There's yet another argument for indeed, immutable surrogate primary keys. With appropriate foreign key constraints, changing any part of the [composite] PK is a really expensive operation because all references must also be updated (w/ e.g. ON UPDATE CASCADE), and that doesn't fix e.g. existing URLs or serialized references in cached JSON documents. Far better, IMHO, to just enforce a UNIQUE constraint on those column(s). UUIDs don't require a central key allocation service (such as AUTOINCREMENT, which is now fixed in MySQL AFAIU);. 
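
A small sketch of that pattern -- an immutable uuid4 surrogate key plus a
UNIQUE constraint on the natural key (the table and column names here are made
up for illustration):

    import sqlite3
    import uuid

    conn = sqlite3.connect(':memory:')
    conn.execute('CREATE TABLE person ('
                 'id TEXT PRIMARY KEY, '          # surrogate key, never changes
                 'email TEXT NOT NULL UNIQUE, '   # natural key, enforced separately
                 'name TEXT)')

    def add_person(conn, email, name):
        pk = str(uuid.uuid4())   # no central key-allocation service required
        conn.execute('INSERT INTO person (id, email, name) VALUES (?, ?, ?)',
                     (pk, email, name))
        return pk

    pk = add_person(conn, 'fred@example.com', 'Fred')
    # Later changes never touch the primary key, so foreign keys and URLs stay valid.
    conn.execute('UPDATE person SET name = ? WHERE id = ?', ('Frederick', pk))
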
Should the __hash__() of a recordclass change when attributes are modified? http://www.attrs.org/en/stable/hashing.html has a good explanation. In general, neither .__hash__() nor id(obj) are good candidates for a database primary key because when/if there are collisions (birthday paradox) -- e.g. when an INSERT or UPSERT or INSERT OR REPLACE fails -- it has to change. Sorry getting OT, something like COW immutability is actually desirable with SQL databases, too. Database backups generally require offline intervention in order to rollback; if there's even a backup which contains those transactions. https://en.wikipedia.org/wiki/Temporal_database#Implementations_in_notable_products (SELECT, ) https://django-reversion.readthedocs.io/en/stable/ > ChrisA > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gjcarneiro at gmail.com Mon Sep 3 08:10:26 2018 From: gjcarneiro at gmail.com (Gustavo Carneiro) Date: Mon, 3 Sep 2018 13:10:26 +0100 Subject: [Python-ideas] Executable space protection: NX bit, In-Reply-To: References: Message-ID: I'm not a security expert, but I believe the NX bit is a hardware protection against a specific class of attack: buffer overflow attacks. These attacks are possible because of the lack of safety in the C programming language: it is very easy for a programmer to forget to check the bounds of a receiving buffer, properly, and sometimes data copied from the network receives machine code. Or the stack is overwritten with the return address pointing to some machine code previously injected. Python is intrinsically a safer programming language and requires no such hardware protection. At most, a C library used by a Python extension can still have such bugs, but then again the OS already sets the NX bit for data segments anyway, so Python doesn't need to do anything. On Mon, 3 Sep 2018 at 08:00, Wes Turner wrote: > Rationale > ========= > - Separation of executable code and non-executable data is a good thing. > - Additional security in Python is a good idea. > - Python should support things like the NX bit to separate code and > non-executable data. > > Discussion > ========== > How could Python implement support for the NX bit? (And/or additional > modern security measures; as appropriate). > > What sort of an API would C extensions need? > > Would this be easier in PyPy or in CPython? > > - https://en.wikipedia.org/wiki/NX_bit > - https://en.wikipedia.org/wiki/Executable_space_protection > > Here's one way to identify whether an executable supports NX: > https://github.com/longld/peda/blob/e0eb0af4bcf3ee/peda.py#L2543 > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -- Gustavo J. A. M. Carneiro Gambit Research "The universe is always one step beyond logic." -- Frank Herbert -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jfine2358 at gmail.com Mon Sep 3 08:24:48 2018 From: jfine2358 at gmail.com (Jonathan Fine) Date: Mon, 3 Sep 2018 13:24:48 +0100 Subject: [Python-ideas] Pre-conditions and post-conditions In-Reply-To: References: <5B875DDE.2030308@stoneleaf.us> <5B885B1E.30806@stoneleaf.us> Message-ID: I've just read and article which makes a good case for providing pre-conditions and post-conditions. http://pgbovine.net/python-unreadable.htm The main point is: "without proper comments and documentation, even the cleanest Python code is incomprehensible 'in-the-large'." I find the article to be thoughtful and well-written. By the way the author, Philip J Guo, is also the author of http://pgbovine.net/publications/non-native-english-speakers-learning-programming_CHI-2018.pdf http://pythontutor.com/ I recommend all of the above. -- Jonathan -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephanh42 at gmail.com Mon Sep 3 08:40:06 2018 From: stephanh42 at gmail.com (Stephan Houben) Date: Mon, 3 Sep 2018 14:40:06 +0200 Subject: [Python-ideas] Executable space protection: NX bit, In-Reply-To: References: Message-ID: I am pretty sure that on systems which support it, Python's stack and data are already NX. NX is basically the default on modern systems. Stephan Op ma 3 sep. 2018 09:00 schreef Wes Turner : > Rationale > ========= > - Separation of executable code and non-executable data is a good thing. > - Additional security in Python is a good idea. > - Python should support things like the NX bit to separate code and > non-executable data. > > Discussion > ========== > How could Python implement support for the NX bit? (And/or additional > modern security measures; as appropriate). > > What sort of an API would C extensions need? > > Would this be easier in PyPy or in CPython? > > - https://en.wikipedia.org/wiki/NX_bit > - https://en.wikipedia.org/wiki/Executable_space_protection > > Here's one way to identify whether an executable supports NX: > https://github.com/longld/peda/blob/e0eb0af4bcf3ee/peda.py#L2543 > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jfine2358 at gmail.com Mon Sep 3 09:08:38 2018 From: jfine2358 at gmail.com (Jonathan Fine) Date: Mon, 3 Sep 2018 14:08:38 +0100 Subject: [Python-ideas] Executable space protection: NX bit, In-Reply-To: References: Message-ID: Wes Turner wrote > - Separation of executable code and non-executable data is a good thing. > - Additional security in Python is a good idea. > - Python should support things like the NX bit to separate code and non-executable data. When I saw this, I thought at first it was about preventing tricks such as def ask_save(): print('Save all files?') def ask_delete(): print('Delete all files?') >>> ask_save() Save all files? >>> ask_delete() Delete all files? # Evil code! ask_delete.__code__, ask_save.__code__ = ask_save.__code__, ask_delete.__code__ >>> ask_save() Delete all files? >>> ask_delete() Save all files? Any code that can directly call fn() and gn() can play this trick! 
-- Jonathan From szport at gmail.com Mon Sep 3 14:17:13 2018 From: szport at gmail.com (Zaur Shibzukhov) Date: Mon, 3 Sep 2018 11:17:13 -0700 (PDT) Subject: [Python-ideas] Add recordlcass to collections module In-Reply-To: <5B8C6DC2.3090503@canterbury.ac.nz> References: <98478a89-fc8a-4ae3-9810-bbacf9844938@googlegroups.com> <5B8C6DC2.3090503@canterbury.ac.nz> Message-ID: <672191f8-f72e-48a1-a14c-beb56673f45f@googlegroups.com> ???????????, 3 ???????? 2018 ?., 2:11:06 UTC+3 ???????????? Greg Ewing ???????: > > Zaur Shibzukhov wrote: > > > `Recordclass` is defined on top of` memoryslots` just like `namedtuple` > > above` tuple`. Attributes are accessed via a descriptor (`itemgetset`), > > which supports both` __get__` and `__set__` by the element index. > > > > As a result, `recordclass` takes up as much memory as` namedtuple`, it > > supports quick access by `__getitem__` /` __setitem__` and by attribute > > name via the protocol of the descriptors. > > I'm not sure why you need a new C-level type for this. Couldn't you > get the same effect just by using __slots__? > > e.g. > > class C: > > __slots__ = ('attr_1', ..., 'attr_m') > > def __new __ (cls, attr_1, ..., attr_m): > self.attr_1 = attr_1 > ... > self.attt_m = attr_m > > Yes, you can. The only difference is that access by index to fields are slow. So if you don't need fast access by index but only by name then using __slots__ is enough. Recordclass is actually a fixed array with named access to the elements in the same manner as namedtuple is a actually a tuple with named access to it's elements. -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg.ewing at canterbury.ac.nz Mon Sep 3 18:50:27 2018 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Tue, 04 Sep 2018 10:50:27 +1200 Subject: [Python-ideas] Pre-conditions and post-conditions In-Reply-To: References: <5B875DDE.2030308@stoneleaf.us> <5B885B1E.30806@stoneleaf.us> Message-ID: <5B8DBAB3.9080501@canterbury.ac.nz> Jonathan Fine wrote: > I've just read and article which makes a good case for providing > pre-conditions and post-conditions. > > http://pgbovine.net/python-unreadable.htm There's nothing in there that talks about PBC-style executable preconditions and postconditions, it's all about documenting the large-scale intent and purpose of code. He doesn't put forward any argument why executable code should be a better way to do that than writing comments. Personally I don't think it is. E.g. def distim(doshes): for d in doshes: assert isinstance(d, Dosh) # do something here for d in doshes: assert is_distimmed(d) This ticks the precondition and postcondition boxes, but still doesn't give you any idea what a Dosh is and why you would want to distim it. -- Greg From greg.ewing at canterbury.ac.nz Mon Sep 3 18:54:22 2018 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Tue, 04 Sep 2018 10:54:22 +1200 Subject: [Python-ideas] Executable space protection: NX bit, In-Reply-To: References: Message-ID: <5B8DBB9E.4060201@canterbury.ac.nz> Jonathan Fine wrote: # Evil code! > ask_delete.__code__, ask_save.__code__ = ask_save.__code__, > ask_delete.__code__ If an attacker can trick you into executing that line of code, he can probably just delete your data directly. 
-- Greg From levkivskyi at gmail.com Mon Sep 3 19:08:31 2018 From: levkivskyi at gmail.com (Ivan Levkivskyi) Date: Tue, 4 Sep 2018 00:08:31 +0100 Subject: [Python-ideas] Pre-conditions and post-conditions In-Reply-To: <5B8DBAB3.9080501@canterbury.ac.nz> References: <5B875DDE.2030308@stoneleaf.us> <5B885B1E.30806@stoneleaf.us> <5B8DBAB3.9080501@canterbury.ac.nz> Message-ID: On Mon, 3 Sep 2018 at 23:51, Greg Ewing wrote: > Jonathan Fine wrote: > > I've just read and article which makes a good case for providing > > pre-conditions and post-conditions. > > > > http://pgbovine.net/python-unreadable.htm > > There's nothing in there that talks about PBC-style executable > preconditions and postconditions, it's all about documenting > the large-scale intent and purpose of code. He doesn't put > forward any argument why executable code should be a better > way to do that than writing comments. > FWIW this article looks more like a typical motivational intro to static types in Python :-) (Even his comment about types can be partially answered with e.g. Protocols.) -- Ivan -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at pearwood.info Mon Sep 3 20:46:18 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Tue, 4 Sep 2018 10:46:18 +1000 Subject: [Python-ideas] Pre-conditions and post-conditions In-Reply-To: References: <5B875DDE.2030308@stoneleaf.us> <5B885B1E.30806@stoneleaf.us> <5B8DBAB3.9080501@canterbury.ac.nz> Message-ID: <20180904004618.GU27312@ando.pearwood.info> On Tue, Sep 04, 2018 at 12:08:31AM +0100, Ivan Levkivskyi wrote: > On Mon, 3 Sep 2018 at 23:51, Greg Ewing wrote: > > > Jonathan Fine wrote: > > > I've just read and article which makes a good case for providing > > > pre-conditions and post-conditions. > > > > > > http://pgbovine.net/python-unreadable.htm > > > > There's nothing in there that talks about PBC-style executable > > preconditions and postconditions, it's all about documenting > > the large-scale intent and purpose of code. He doesn't put > > forward any argument why executable code should be a better > > way to do that than writing comments. > > > > FWIW this article looks more like a typical motivational intro to static > types in Python :-) Did we read the same article? This is no more about static typing than it is about contracts. It is about the need for documentation. The only connection here between either static typing or contracts is that both can be a form of very limited documentation: type declarations tell you the types of parameters and variables (but not what range of values they can take or what they represent) and contracts tell you the types and values (but not what they represent). The author explicitly states that statically typed languages have the same problem communicating the meaning of the program. Neither clean syntax (like Python) nor static types help the reader comprehend *what* the program is doing "in the large". He says: What's contained within allCovData and covMap (which I presume are both dicts)? What are the types of the keys? What are the types of the values? More importantly, what is the meaning of the keys, values, and their mapping? What do these objects represent in the grand scheme of the entire program, and how can I best leverage them to do what I want to do? Unfortunately, nothing short of having the programmer write high-level comments and/or personally explain the code to me can possibly provide me with such knowledge. 
It's not Python's fault, though; I would've faced the same comprehension barriers with analogous code written in Java or C++. Nothing is wrong with Python in this regard, but unfortunately its clear syntax cannot provide any advantages for me when trying to understand code 'in-the-large'. and later goes on to say: (To be fair, static types aren't a panacea either: If I showed you the same Java code filled with type definitions, then it would be easier to understand what this function is doing in terms of its concrete types, but without comments, you still won't be able to understand what this function is doing in terms of its actual underlying purpose, which inevitably involves programmer-intended 'abstract types'.) -- Steve From wes.turner at gmail.com Mon Sep 3 20:58:50 2018 From: wes.turner at gmail.com (Wes Turner) Date: Mon, 3 Sep 2018 20:58:50 -0400 Subject: [Python-ideas] Executable space protection: NX bit, In-Reply-To: <5B8DBB9E.4060201@canterbury.ac.nz> References: <5B8DBB9E.4060201@canterbury.ac.nz> Message-ID: So, if an application accepts user-supplied input (such as a JSON payload), is that data marked as non-executable? On Monday, September 3, 2018, Greg Ewing wrote: > Jonathan Fine wrote: > > # Evil code! > >> ask_delete.__code__, ask_save.__code__ = ask_save.__code__, >> ask_delete.__code__ >> > > If an attacker can trick you into executing that line of code, > he can probably just delete your data directly. > > -- soon > Greg > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at pearwood.info Mon Sep 3 21:12:03 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Tue, 4 Sep 2018 11:12:03 +1000 Subject: [Python-ideas] Pre-conditions and post-conditions In-Reply-To: <5B8DBAB3.9080501@canterbury.ac.nz> References: <5B875DDE.2030308@stoneleaf.us> <5B885B1E.30806@stoneleaf.us> <5B8DBAB3.9080501@canterbury.ac.nz> Message-ID: <20180904011203.GV27312@ando.pearwood.info> On Tue, Sep 04, 2018 at 10:50:27AM +1200, Greg Ewing wrote: > Jonathan Fine wrote: > >I've just read and article which makes a good case for providing > >pre-conditions and post-conditions. > > > >http://pgbovine.net/python-unreadable.htm > > There's nothing in there that talks about PBC-style executable > preconditions and postconditions, it's all about documenting > the large-scale intent and purpose of code. Indeed. Apart from a throw-away comment about using pre-conditions and post-conditions, that post has little or nothing to do with contracts specifically. > He doesn't put > forward any argument why executable code should be a better > way to do that than writing comments. That's because the article isn't about executable comments versus dumb comments. Its about the need for writing documentation and comments of any sort, so long as it helps people understand the purpose of the code, what and why it does what it does. If you read the author's other posts, he discusses the advantages of assertions over dumb comments in other places, such as here: http://pgbovine.net/programming-with-asserts.htm As far as dumb comments go, I'm reminded of this quote: "At Resolver we've found it useful to short-circuit any doubt and just refer to comments in code as 'lies'." 
--Michael Foord paraphrases Christian Muirhead on python-dev, 2009-03-22 But you're right that assertions can only give you limited assistence in understanding the large scale structure of code: - types tell you only what kind of thing a variable is; - assertions tell you both the kind of thing and the acceptible values it can take; - unlike dumb comments, assertions are checked, so they stay relevant longer and are less likely to become lies. These can help the reader understand the what and sometimes the how, but to understand the why you need to either be able to infer it from the code, or documentation (including comments). Given the choice between a comment and an assertion: # x must be between 11 and 17 assert 11 <= x <= 17 I think it should be obvious why the assertion is better. But neither explain *why* x must be within that range. Unless it is obvious from context (and often it is!) there should be a reason given, otherwise the reader has to just take it on faith. > Personally I don't think it is. E.g. > > def distim(doshes): > for d in doshes: > assert isinstance(d, Dosh) > # do something here > for d in doshes: > assert is_distimmed(d) > > This ticks the precondition and postcondition boxes, but > still doesn't give you any idea what a Dosh is and why > you would want to distim it. I think the author would agree with you 100%, given that his article is talking about the need to understand the *why* of code, not just what it does in small detail, but the large scale reasons for it. -- Steve From cs at cskk.id.au Mon Sep 3 21:20:40 2018 From: cs at cskk.id.au (Cameron Simpson) Date: Tue, 4 Sep 2018 11:20:40 +1000 Subject: [Python-ideas] Executable space protection: NX bit, In-Reply-To: References: Message-ID: <20180904012040.GA63330@cskk.homeip.net> On 03Sep2018 20:58, Wes Turner wrote: >So, if an application accepts user-supplied input (such as a JSON payload), >is that data marked as non-executable? Unless you've hacked the JSON decoder (I think you can supply a custom decoder for some things) all you're doing to get back is ints, strs, dicts and lists. And floats. None of those is executable. Cheers, Cameron Simpson From wes.turner at gmail.com Mon Sep 3 22:32:43 2018 From: wes.turner at gmail.com (Wes Turner) Date: Mon, 3 Sep 2018 22:32:43 -0400 Subject: [Python-ideas] Executable space protection: NX bit, In-Reply-To: <20180904012040.GA63330@cskk.homeip.net> References: <20180904012040.GA63330@cskk.homeip.net> Message-ID: On Monday, September 3, 2018, Cameron Simpson wrote: > On 03Sep2018 20:58, Wes Turner wrote: > >> So, if an application accepts user-supplied input (such as a JSON >> payload), >> is that data marked as non-executable? >> > > Unless you've hacked the JSON decoder (I think you can supply a custom > decoder for some things) all you're doing to get back is ints, strs, dicts > and lists. And floats. None of those is executable. Can another process or exploitable C extension JMP to that data or no? > > Cheers, > Cameron Simpson > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... 
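(Returning for a moment to the assertions-versus-comments subthread above: the kind of checked, executable documentation being discussed can be sketched today with two tiny decorators. The names requires and ensures are invented for this example; note that assert statements are skipped entirely under python -O, so this is documentation plus a debugging aid, not a security boundary.)

    import functools

    def requires(check, message='precondition failed'):
        # Precondition: 'check' is called with the same arguments as the
        # wrapped function and must return a true value.
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                assert check(*args, **kwargs), message
                return func(*args, **kwargs)
            return wrapper
        return decorator

    def ensures(check, message='postcondition failed'):
        # Postcondition: 'check' receives the return value.
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                result = func(*args, **kwargs)
                assert check(result), message
                return result
            return wrapper
        return decorator

    @requires(lambda x: 11 <= x <= 17, 'x must be between 11 and 17')
    @ensures(lambda r: r >= 0)
    def scale(x):
        return (x - 11) * 2

Like the plain assert example above, this states *what* must hold; a comment or docstring is still needed to say *why*.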
URL: From cs at cskk.id.au Mon Sep 3 23:26:48 2018 From: cs at cskk.id.au (Cameron Simpson) Date: Tue, 4 Sep 2018 13:26:48 +1000 Subject: [Python-ideas] Executable space protection: NX bit, In-Reply-To: References: Message-ID: <20180904032648.GA96523@cskk.homeip.net> On 03Sep2018 22:32, Wes Turner wrote: >On Monday, September 3, 2018, Cameron Simpson wrote: >> On 03Sep2018 20:58, Wes Turner wrote: >>> So, if an application accepts user-supplied input (such as a JSON >>> payload), >>> is that data marked as non-executable? >> >> Unless you've hacked the JSON decoder (I think you can supply a custom >> decoder for some things) all you're doing to get back is ints, strs, dicts >> and lists. And floats. None of those is executable. > >Can another process or exploitable C extension JMP to that data or no? See Stephan Houben's reply to your post: heap and stack on modern OSes are normally NX mode already, and CPython objects live on the stack. So in that circumstance, no. Cheers, Cameron Simpson From cs at cskk.id.au Mon Sep 3 23:32:36 2018 From: cs at cskk.id.au (Cameron Simpson) Date: Tue, 4 Sep 2018 13:32:36 +1000 Subject: [Python-ideas] Executable space protection: NX bit, In-Reply-To: <20180904032648.GA96523@cskk.homeip.net> References: <20180904032648.GA96523@cskk.homeip.net> Message-ID: <20180904033236.GA15373@cskk.homeip.net> On 04Sep2018 13:26, Cameron Simpson wrote: >On 03Sep2018 22:32, Wes Turner wrote: >>Can another process or exploitable C extension JMP to that data or no? > >See Stephan Houben's reply to your post: heap and stack on modern OSes >are normally NX mode already, and CPython objects live on the stack. So in >that circumstance, no. Pardon me, CPython objects live on the heap, not the stack. Cheers, Cameron Simpson From steve at pearwood.info Tue Sep 4 07:08:54 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Tue, 4 Sep 2018 21:08:54 +1000 Subject: [Python-ideas] Executable space protection: NX bit, In-Reply-To: <20180904012040.GA63330@cskk.homeip.net> References: <20180904012040.GA63330@cskk.homeip.net> Message-ID: <20180904110854.GW27312@ando.pearwood.info> On Tue, Sep 04, 2018 at 11:20:40AM +1000, Cameron Simpson wrote: > On 03Sep2018 20:58, Wes Turner wrote: > >So, if an application accepts user-supplied input (such as a JSON payload), > >is that data marked as non-executable? > > Unless you've hacked the JSON decoder (I think you can supply a custom > decoder for some things) all you're doing to get back is ints, strs, dicts > and lists. And floats. None of those is executable. Strings are executable with exec and eval, but if you're calling exec on untrusted strings, you've already lost. -- Steve From jfine2358 at gmail.com Tue Sep 4 07:40:40 2018 From: jfine2358 at gmail.com (Jonathan Fine) Date: Tue, 4 Sep 2018 12:40:40 +0100 Subject: [Python-ideas] Executable space protection: NX bit, In-Reply-To: <20180904110854.GW27312@ando.pearwood.info> References: <20180904012040.GA63330@cskk.homeip.net> <20180904110854.GW27312@ando.pearwood.info> Message-ID: This might be a bit off-topic. It's about the dangers of yaml.load. Cameron Simpson and Steve D'Aprano wrote >> So, if an application accepts user-supplied input (such as a JSON payload), >> is that data marked as non-executable? > Unless you've hacked the JSON decoder (I think you can supply a custom > decoder for some things) all you're doing to get back is ints, strs, dicts > and lists. And floats. None of those is executable. It's note the same with YAML. 
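(A hedged illustration of that difference, using the third-party PyYAML package: yaml.load with its legacy default loader will construct arbitrary Python objects from tags embedded in the input, while yaml.safe_load restricts itself to plain data types.)

    import yaml   # PyYAML, third-party

    print(yaml.safe_load("{a: 1, b: [2, 3]}"))    # {'a': 1, 'b': [2, 3]}

    payload = "!!python/object/apply:os.system ['echo pwned']"
    # yaml.load(payload) with the old default Loader would execute os.system here.
    try:
        yaml.safe_load(payload)
    except yaml.YAMLError as exc:                  # a ConstructorError in practice
        print('safe_load refused the payload:', exc)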
At last year's PyCon UK I went to Rae Knowler's talk about bad defaults. https://2017.pyconuk.org/sessions/keynotes/unsafe-at-any-speed/ https://speakerdeck.com/bellisk/unsafe-at-any-speed-pycon-uk-26th-october-2017 and saw, in a nutshell (slide 21) yaml.load is the obvious function to use but it is dangerous https://security.openstack.org/guidelines/dg_avoid-dangerous-input-parsing-libraries.html#incorrect Rae's talk also mentioned (slides 19 and 20) Enabling certificate verification by default for stdlib http clients https://www.python.org/dev/peps/pep-0476/ Following Rae, I consider the using name *yaml.load* for the *unsafe* load is already a security flaw! -- Jonathan From szport at gmail.com Tue Sep 4 08:03:26 2018 From: szport at gmail.com (Zaur Shibzukhov) Date: Tue, 4 Sep 2018 15:03:26 +0300 Subject: [Python-ideas] Add recordlcass to collections module In-Reply-To: References: <98478a89-fc8a-4ae3-9810-bbacf9844938@googlegroups.com> Message-ID: --- *Zaur Shibzukhov* 2018-09-03 1:02 GMT+03:00 Wes Turner : > > On Sunday, September 2, 2018, Zaur Shibzukhov wrote: > >> >> >> --- >> *Zaur Shibzukhov* >> >> >> 2018-09-02 22:11 GMT+03:00 Wes Turner : >> >>> Does the value of __hash__ change when attributes of a recordclass >>> change? >>> >> >> Currently recordclass's __hash__ didn't implemented. >> > > https://docs.python.org/3/glossary.html#term-hashable > > https://docs.python.org/3/reference/datamodel.html#object.__hash__ > > http://www.attrs.org/en/stable/hashing.html > There is correction: recordclass and it's base memoryslots didn't implement __hash__, but memoryslots implement richcompare (almost as python's list). > > >> >>> On Sunday, September 2, 2018, Zaur Shibzukhov wrote: >>> >>>> As the author of `recordclass` I would like to shed some light... >>>> >>>> Recorclass originated as a response to the [question]( >>>> https://stackoverflow.com/questions/29290359/exis >>>> tence-of-mutable-named-tuple-in-python/29419745#29419745) on >>>> stackoverflow. >>>> >>>> `Recordclass` was conceived and implemented as a type that, by api, >>>> memory and speed, would be completely identical to` namedtuple`, except >>>> that it would support an assignment in which any element could be replaced >>>> without creating a new instance, as in ` namedtuple`. Those. would be >>>> almost identical to `namedtuple` and support the assignment (` __setitem__` >>>> / `setslice__`). >>>> >>>> The effectiveness of namedtuple is based on the effectiveness of the >>>> `tuple` type in python. In order to achieve the same efficiency it was >>>> necessary to create a type `memoryslots`. Its structure >>>> (`PyMemorySlotsObject`) is identical to the structure of` tuple` >>>> (`PyTupleObject`) and therefore takes up the same amount of memory as` >>>> tuple`. >>>> >>>> `Recordclass` is defined on top of` memoryslots` just like `namedtuple` >>>> above` tuple`. Attributes are accessed via a descriptor >>>> (`itemgetset`), which supports both` __get__` and `__set__` by the element >>>> index. >>>> >>>> The class generated by `recordclass` is: >>>> >>>> `` ` >>>> from recordclass import memoryslots, itemgetset >>>> >>>> class C (memoryslots): >>>> __slots__ = () >>>> >>>> _fields = ('attr_1', ..., 'attr_m') >>>> >>>> attr_1 = itemgetset (0) >>>> ... >>>> attr_m = itemgetset (m-1) >>>> >>>> def __new __ (cls, attr_1, ..., attr_m): >>>> 'Create new instance of {typename} ({arg_list})' >>>> return memoryslots .__ new __ (cls, attr_1, ..., attr_m) >>>> `` ` >>>> etc. following the `namedtuple` definition scheme. 
>>>> >>>> As a result, `recordclass` takes up as much memory as` namedtuple`, it >>>> supports quick access by `__getitem__` /` __setitem__` and by attribute >>>> name via the protocol of the descriptors. >>>> >>>> Regards, >>>> >>>> Zaur >>>> >>>> ???????, 1 ???????? 2018 ?., 10:48:07 UTC+3 ???????????? Martin Bammer >>>> ???????: >>>>> >>>>> Hi, >>>>> >>>>> what about adding recordclass >>>>> (https://bitbucket.org/intellimath/recordclass) to the collections >>>>> module >>>>> >>>>> It is like namedtuple, but elements are writable and it is written in >>>>> C >>>>> and thus much faster. >>>>> >>>>> And for convenience it could be named as namedlist. >>>>> >>>>> Regards, >>>>> >>>>> Martin >>>>> >>>>> >>>>> _______________________________________________ >>>>> Python-ideas mailing list >>>>> Python... at python.org >>>>> https://mail.python.org/mailman/listinfo/python-ideas >>>>> Code of Conduct: http://python.org/psf/codeofconduct/ >>>>> >>>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From mehaase at gmail.com Tue Sep 4 09:55:16 2018 From: mehaase at gmail.com (Mark E. Haase) Date: Tue, 4 Sep 2018 09:55:16 -0400 Subject: [Python-ideas] Executable space protection: NX bit, In-Reply-To: References: Message-ID: Hey Wes, the checksec() function in PEDA that you cited has a standalone version as well: https://github.com/slimm609/checksec.sh Running this on my Python (installed from Ubuntu package): $ checksec --output json -f /usr/bin/python3.6 | python3 -m json.tool { "file": { "relro": "partial", "canary": "yes", "nx": "yes", "pie": "no", "rpath": "no", "runpath": "no", "fortify_source": "yes", "fortified": "17", "fortify-able": "41", "filename": "/usr/bin/python3.6" } } My Python has pretty typical security mitigations. Most of these features are determined at compile time, so you can try compiling Python yourself with different compiler flags and see what other configurations are possible. Some mitigations hurt performance and others may be incompatible with Python itself. If you search on bugs.python.org you'll find a few different issues on these topics. On Mon, Sep 3, 2018 at 3:01 AM Wes Turner wrote: > Rationale > ========= > - Separation of executable code and non-executable data is a good thing. > - Additional security in Python is a good idea. > - Python should support things like the NX bit to separate code and > non-executable data. > > Discussion > ========== > How could Python implement support for the NX bit? (And/or additional > modern security measures; as appropriate). > > What sort of an API would C extensions need? > > Would this be easier in PyPy or in CPython? > > - https://en.wikipedia.org/wiki/NX_bit > - https://en.wikipedia.org/wiki/Executable_space_protection > > Here's one way to identify whether an executable supports NX: > https://github.com/longld/peda/blob/e0eb0af4bcf3ee/peda.py#L2543 > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... 
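(Following on from the checksec output above, a small sketch that re-runs the same check from Python; it assumes the standalone checksec script is on PATH, Python 3.7+ for capture_output, and the JSON layout shown in the message.)

    import json
    import subprocess
    import sys

    result = subprocess.run(
        ['checksec', '--output', 'json', '-f', sys.executable],
        capture_output=True, text=True, check=True)
    info = json.loads(result.stdout)['file']
    for key in ('nx', 'relro', 'canary', 'pie'):
        print(key, '=', info.get(key))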
URL: From chris.barker at noaa.gov Tue Sep 4 15:03:23 2018 From: chris.barker at noaa.gov (Chris Barker) Date: Tue, 4 Sep 2018 21:03:23 +0200 Subject: [Python-ideas] Add recordlcass to collections module In-Reply-To: References: <98478a89-fc8a-4ae3-9810-bbacf9844938@googlegroups.com> Message-ID: Chiming in here: dataclasses was just added to the stdlib. I understand that record class is not the same thing, but the use cases do overlap a great deal. So I think the cord goal for anyone that wants to see this in the stdlib is to demonstrate tbat recordclass Adds significant enough value to justify something so similar. Personally, I don?t see it. -CHB On Tue, Sep 4, 2018 at 2:04 PM Zaur Shibzukhov wrote: > > > --- > *Zaur Shibzukhov* > > > 2018-09-03 1:02 GMT+03:00 Wes Turner : > >> >> On Sunday, September 2, 2018, Zaur Shibzukhov wrote: >> >>> >>> >>> --- >>> *Zaur Shibzukhov* >>> >>> >>> 2018-09-02 22:11 GMT+03:00 Wes Turner : >>> >>>> Does the value of __hash__ change when attributes of a recordclass >>>> change? >>>> >>> >>> Currently recordclass's __hash__ didn't implemented. >>> >> >> https://docs.python.org/3/glossary.html#term-hashable >> >> https://docs.python.org/3/reference/datamodel.html#object.__hash__ >> >> http://www.attrs.org/en/stable/hashing.html >> > > There is correction: > recordclass and it's base memoryslots didn't implement __hash__, but > memoryslots implement richcompare (almost as python's list). > >> >> >>> >>>> On Sunday, September 2, 2018, Zaur Shibzukhov wrote: >>>> >>>>> As the author of `recordclass` I would like to shed some light... >>>>> >>>>> Recorclass originated as a response to the [question]( >>>>> https://stackoverflow.com/questions/29290359/existence-of-mutable-named-tuple-in-python/29419745#29419745) >>>>> on stackoverflow. >>>>> >>>>> `Recordclass` was conceived and implemented as a type that, by api, >>>>> memory and speed, would be completely identical to` namedtuple`, except >>>>> that it would support an assignment in which any element could be replaced >>>>> without creating a new instance, as in ` namedtuple`. Those. would be >>>>> almost identical to `namedtuple` and support the assignment (` __setitem__` >>>>> / `setslice__`). >>>>> >>>>> The effectiveness of namedtuple is based on the effectiveness of the >>>>> `tuple` type in python. In order to achieve the same efficiency it >>>>> was necessary to create a type `memoryslots`. Its structure >>>>> (`PyMemorySlotsObject`) is identical to the structure of` tuple` >>>>> (`PyTupleObject`) and therefore takes up the same amount of memory as` >>>>> tuple`. >>>>> >>>>> `Recordclass` is defined on top of` memoryslots` just like >>>>> `namedtuple` above` tuple`. Attributes are accessed via a descriptor >>>>> (`itemgetset`), which supports both` __get__` and `__set__` by the element >>>>> index. >>>>> >>>>> The class generated by `recordclass` is: >>>>> >>>>> `` ` >>>>> from recordclass import memoryslots, itemgetset >>>>> >>>>> class C (memoryslots): >>>>> __slots__ = () >>>>> >>>>> _fields = ('attr_1', ..., 'attr_m') >>>>> >>>>> attr_1 = itemgetset (0) >>>>> ... >>>>> attr_m = itemgetset (m-1) >>>>> >>>>> def __new __ (cls, attr_1, ..., attr_m): >>>>> 'Create new instance of {typename} ({arg_list})' >>>>> return memoryslots .__ new __ (cls, attr_1, ..., attr_m) >>>>> `` ` >>>>> etc. following the `namedtuple` definition scheme. 
>>>>> >>>>> As a result, `recordclass` takes up as much memory as` namedtuple`, it >>>>> supports quick access by `__getitem__` /` __setitem__` and by attribute >>>>> name via the protocol of the descriptors. >>>>> >>>>> Regards, >>>>> >>>>> Zaur >>>>> >>>>> ???????, 1 ???????? 2018 ?., 10:48:07 UTC+3 ???????????? Martin Bammer >>>>> ???????: >>>>>> >>>>>> Hi, >>>>>> >>>>>> what about adding recordclass >>>>>> (https://bitbucket.org/intellimath/recordclass) to the collections >>>>>> module >>>>>> >>>>>> It is like namedtuple, but elements are writable and it is written in >>>>>> C >>>>>> and thus much faster. >>>>>> >>>>>> And for convenience it could be named as namedlist. >>>>>> >>>>>> Regards, >>>>>> >>>>>> Martin >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> Python-ideas mailing list >>>>>> Python... at python.org >>>>>> https://mail.python.org/mailman/listinfo/python-ideas >>>>>> Code of Conduct: http://python.org/psf/codeofconduct/ >>>>>> >>>>> >>> > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at trueblade.com Tue Sep 4 18:15:12 2018 From: eric at trueblade.com (Eric V. Smith) Date: Tue, 4 Sep 2018 18:15:12 -0400 Subject: [Python-ideas] Add recordlcass to collections module In-Reply-To: References: <98478a89-fc8a-4ae3-9810-bbacf9844938@googlegroups.com> Message-ID: <48a0468d-202a-d0a1-ee0c-49c2d8a0c75c@trueblade.com> On 9/4/2018 3:03 PM, Chris Barker via Python-ideas wrote: > Chiming in here: > > dataclasses was just added to the stdlib. > > I understand that record class is not the same thing, but the use cases > do overlap a great deal. > > So I think the cord goal for anyone that wants to see this in the stdlib > is to demonstrate tbat?recordclass > Adds significant enough value to justify something so similar. I've seen three things mentioned that might be different from dataclasses: - instance size - speed (not sure of what: instance creation? field access?) - iterating over fields But I've not seen concrete examples of the first two where dataclasses doesn't perform well enough. For the third one, there's already a thread on this mailing list: "Consider adding an iterable option to dataclass". I'm contemplating adding it. > Personally, I don?t see it. I'm skeptical, too. Eric > > -CHB > > On Tue, Sep 4, 2018 at 2:04 PM Zaur Shibzukhov > wrote: > > > > --- > /Zaur Shibzukhov/ > > > 2018-09-03 1:02 GMT+03:00 Wes Turner >: > > > On Sunday, September 2, 2018, Zaur Shibzukhov > wrote: > > > > --- > /Zaur Shibzukhov/ > > > 2018-09-02 22:11 GMT+03:00 Wes Turner >: > > Does the value of __hash__ change when attributes of a > recordclass change? > > > Currently recordclass's __hash__ didn't implemented. > > > https://docs.python.org/3/glossary.html#term-hashable > > https://docs.python.org/3/reference/datamodel.html#object.__hash__ > > http://www.attrs.org/en/stable/hashing.html > > > There is correction: > recordclass and it's base memoryslots didn't implement __hash__, but > memoryslots implement richcompare (almost as python's list). 
> > > On Sunday, September 2, 2018, Zaur Shibzukhov > > wrote: > > As the author of `recordclass` I would like to shed > some light... > > Recorclass originated as a response to the > [question](https://stackoverflow.com/questions/29290359/existence-of-mutable-named-tuple-in-python/29419745#29419745) > on stackoverflow. > > `Recordclass` was conceived and implemented as a > type that, by api, memory and speed, would be > completely identical to` namedtuple`, except that it > would support an assignment in which any element > could be replaced without creating a new instance, > as in ` namedtuple`. Those. would be almost > identical to `namedtuple` and support the assignment > (` __setitem__` / `setslice__`). > > The effectiveness of namedtuple is based on the > effectiveness of the `tuple` type in python. In > order to achieve the same efficiency it was > necessary to create a type `memoryslots`. Its > structure (`PyMemorySlotsObject`) is identical to > the structure of` tuple` (`PyTupleObject`) and > therefore takes up the same amount of memory as` tuple`. > > `Recordclass` is defined on top of` memoryslots` > just like `namedtuple` above` tuple`. Attributes are > accessed via a descriptor (`itemgetset`), which > supports both` __get__` and `__set__` by the element > index. > > The class generated by `recordclass` is: > > `` ` > from recordclass import memoryslots, itemgetset > > class C (memoryslots): > __slots__ = () > > _fields = ('attr_1', ..., 'attr_m') > > attr_1 = itemgetset (0) > ... > attr_m = itemgetset (m-1) > > def __new __ (cls, attr_1, ..., attr_m): > 'Create new instance of {typename} ({arg_list})' > return memoryslots .__ new __ (cls, attr_1, ..., attr_m) > `` ` > etc. following the `namedtuple` definition scheme. > > As a result, `recordclass` takes up as much memory > as` namedtuple`, it supports quick access by > `__getitem__` /` __setitem__` and by attribute name > via the protocol of the descriptors. > > Regards, > > Zaur > > ???????, 1 ???????? 2018 ?., 10:48:07 UTC+3 > ???????????? Martin Bammer ???????: > > Hi, > > what about adding recordclass > (https://bitbucket.org/intellimath/recordclass) > to the collections module > > It is like namedtuple, but elements are writable > and it is written in C > and thus much faster. > > And for convenience it could be named as namedlist. > > Regards, > > Martin > > > _______________________________________________ > Python-ideas mailing list > Python... at python.org > https://mail.python.org/mailman/listinfo/python-ideas > > Code of Conduct: > http://python.org/psf/codeofconduct/ > > > > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > > -- > > Christopher Barker, Ph.D. > Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R ? ? ? ? ? ?(206) 526-6959?? voice > 7600 Sand Point Way NE ??(206) 526-6329?? fax > Seattle, WA ?98115 ? ? ??(206) 526-6317?? 
main reception > > Chris.Barker at noaa.gov > > > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > From wes.turner at gmail.com Tue Sep 4 19:51:03 2018 From: wes.turner at gmail.com (Wes Turner) Date: Tue, 4 Sep 2018 19:51:03 -0400 Subject: [Python-ideas] Executable space protection: NX bit, In-Reply-To: References: Message-ID: What about ` -mindirect-branch=thunk -mindirect-branch-register `? Thanks! Looks like NX is on by default A quick search of the codebase doesn't find any mprotect() calls, so I'm assuming it's just the compiler flags defaulting to NX on for the main stack. This answer helped me understand a bit more; I'll take a look at the issue tracker as well: > From all of this we conclude that on at least some recent versions of Linux, the stacks (main and threads) will usually be marked as non-executable (i.e. NX bit set). However, if the executable code of the application, or some executable code somewhere in a DLL loaded by the application, contains a nested function or otherwise advertises a need for an executable stack, then all stacks in the application will be marked as executable (i.e. NX bit not set). In other words, a single loadable plugin or extension for an application may deactivate NX stack protection for all threads of the applications, simply by using a rarely used but legitimate and documented feature of GCC. > There is no need to panic, though, because, as @tylerl points out, protection afforded by the NX bit is not that great. It will make some exploits more awkward for the least competent of attackers; but good attackers will not be impeded. https://security.stackexchange.com/a/47825 So, a C extension with a nested function (e.g. a trampoline) causes the NX bit to be off? retpoline is a trampoline approach *partially mitigating* the Spectre (but not Meltdown?) vulns, right? https://security.googleblog.com/2018/01/more-details-about-mitigations-for-cpu_4.html "What is a retpoline and how does it work?" https://stackoverflow.com/a/48099456 https://github.com/speed47/spectre-meltdown-checker/issues/119 > mindirect-branch=thunk -mindirect-branch-register Here's a helpful table of the 'Speculative execution exploit variants' discovered as of yet: https://en.wikipedia.org/wiki/Speculative_Store_Bypass#Speculative_execution_exploit_variants Everything built - including the kernel - needs to be recompiled with new thunk switches, AFAIU? How can we tell whether a python binary or C extension has been rebuilt with which appropriate compiler flags? On Tuesday, September 4, 2018, Mark E. Haase wrote: > Hey Wes, the checksec() function in PEDA that you cited has a standalone > version as well: > > https://github.com/slimm609/checksec.sh > > Running this on my Python (installed from Ubuntu package): > > $ checksec --output json -f /usr/bin/python3.6 | python3 -m json.tool > { > "file": { > "relro": "partial", > "canary": "yes", > "nx": "yes", > "pie": "no", > "rpath": "no", > "runpath": "no", > "fortify_source": "yes", > "fortified": "17", > "fortify-able": "41", > "filename": "/usr/bin/python3.6" > } > } > > My Python has pretty typical security mitigations. Most of these features > are determined at compile time, so you can try compiling Python yourself > with different compiler flags and see what other configurations are > possible. 
Some mitigations hurt performance and others may be incompatible > with Python itself. If you search on bugs.python.org you'll find a few > different issues on these topics. > > On Mon, Sep 3, 2018 at 3:01 AM Wes Turner wrote: > >> Rationale >> ========= >> - Separation of executable code and non-executable data is a good thing. >> - Additional security in Python is a good idea. >> - Python should support things like the NX bit to separate code and >> non-executable data. >> >> Discussion >> ========== >> How could Python implement support for the NX bit? (And/or additional >> modern security measures; as appropriate). >> >> What sort of an API would C extensions need? >> >> Would this be easier in PyPy or in CPython? >> >> - https://en.wikipedia.org/wiki/NX_bit >> - https://en.wikipedia.org/wiki/Executable_space_protection >> >> Here's one way to identify whether an executable supports NX: >> https://github.com/longld/peda/blob/e0eb0af4bcf3ee/peda.py#L2543 >> _______________________________________________ >> Python-ideas mailing list >> Python-ideas at python.org >> https://mail.python.org/mailman/listinfo/python-ideas >> Code of Conduct: http://python.org/psf/codeofconduct/ >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From boxed at killingar.net Thu Sep 6 06:15:46 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Thu, 6 Sep 2018 12:15:46 +0200 Subject: [Python-ideas] Keyword only argument on function call Message-ID: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> I have a working implementation for a new syntax which would make using keyword arguments a lot nicer. Wouldn't it be awesome if instead of: foo(a=a, b=b, c=c, d=3, e=e) we could just write: foo(*, a, b, c, d=3, e) and it would mean the exact same thing? This would not just be shorter but would create an incentive for consistent naming across the code base. So the idea is to generalize the * keyword only marker from function to also have the same meaning at the call site: everything after * is a kwarg. With this feature we can now simplify keyword arguments making them more readable and concise. (This syntax does not conflict with existing Python code.) The full PEP-style suggestion is here: https://gist.github.com/boxed/f72221e7e77370be3e5703087c1ba54d I have also written an analysis tool you can use on your code base to see what kind of impact this suggestion might have. It's available at https://gist.github.com/boxed/610b2ba73066c96e9781aed7c0c0b25c . The results for django and twisted are posted as comments to the gist. We've run this on our two big code bases at work (both around 250kloc excluding comments and blank lines). The results show that ~30% of all arguments would benefit from this syntax. Me and my colleague Johan L?bcke have also written an implementation that is available at: https://github.com/boxed/cpython / Anders Hovm?ller From steve at pearwood.info Thu Sep 6 09:10:28 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Thu, 6 Sep 2018 23:10:28 +1000 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> Message-ID: <20180906131028.GB27312@ando.pearwood.info> On Thu, Sep 06, 2018 at 12:15:46PM +0200, Anders Hovm?ller wrote: > I have a working implementation for a new syntax which would make > using keyword arguments a lot nicer. 
Wouldn't it be awesome if instead > of: > > foo(a=a, b=b, c=c, d=3, e=e) > > we could just write: > > foo(*, a, b, c, d=3, e) > > and it would mean the exact same thing? No. > This would not just be shorter but would create an incentive for > consistent naming across the code base. You say that as if consistent naming is *in and of itself* a good thing, merely because it is consistent. I'm in favour of consistent naming when it helps the code, when the names are clear and relevant. But why should I feel bad about failing to use the same names as the functions I call? If some library author names the parameter to a function "a", why should I be encouraged to use that same name *just for the sake of consistency*? > So the idea is to generalize the * keyword only marker from function > to also have the same meaning at the call site: everything after * is > a kwarg. With this feature we can now simplify keyword arguments > making them more readable and concise. (This syntax does not conflict > with existing Python code.) It's certainly more concise, provided those named variables already exist, but how often does that happen? You say 30% in your code base. (By the way, well done for writing an analysis tool! I mean it, I'm not being sarcastic. We should have more of those.) I disagree that f(*, page) is more readable than an explicit named keyword argument f(page=page). My own feeling is that this feature would encourage what I consider a code-smell: function calls requiring large numbers of arguments. Your argument about being concise makes a certain amount of sense if you are frequently making calls like this: # chosing a real function, not a made-up example open(file, mode=mode, buffering=buffering, encoding=encoding, errors=errors, newline=newline, closefd=closefd, opener=opener) If 30% of your function calls look like that, I consider it a code-smell. The benefit is a lot smaller if your function calls look more like this: open(file, encoding=encoding) and even less here: open(file, 'r', encoding=self.encoding or self.default_encoding, errors=self.errors or self.default_error_handler) for example. To get benefit from your syntax, I would need to extract out the arguments into temporary variables: encoding = self.encoding or self.default_encoding errors = self.errors or self.default_error_handler open(file, 'r', *, encoding, errors) which completely cancels out the "conciseness" argument. First version, with in-place arguments: 1 statement 2 lines 120 characters including whitespace Second version, with temporary variables: 3 statements 3 lines 138 characters including whitespace However you look at it, it's longer and less concise if you have to create temporary variables to make use of this feature. -- Steve From cspealma at redhat.com Thu Sep 6 09:18:02 2018 From: cspealma at redhat.com (Calvin Spealman) Date: Thu, 6 Sep 2018 09:18:02 -0400 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <20180906131028.GB27312@ando.pearwood.info> References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> Message-ID: On Thu, Sep 6, 2018 at 9:11 AM Steven D'Aprano wrote: > On Thu, Sep 06, 2018 at 12:15:46PM +0200, Anders Hovm?ller wrote: > > > I have a working implementation for a new syntax which would make > > using keyword arguments a lot nicer. Wouldn't it be awesome if instead > > of: > > > > foo(a=a, b=b, c=c, d=3, e=e) > > > > we could just write: > > > > foo(*, a, b, c, d=3, e) > > > > and it would mean the exact same thing? 
> > No. > > > > This would not just be shorter but would create an incentive for > > consistent naming across the code base. > > You say that as if consistent naming is *in and of itself* a good thing, > merely because it is consistent. > > I'm in favour of consistent naming when it helps the code, when the > names are clear and relevant. But why should I feel bad about failing to > use the same names as the functions I call? If some library author names > the parameter to a function "a", why should I be encouraged to use > that same name *just for the sake of consistency*? > I've been asking this same question on the Javascript/ES6 side of my work ever since unpacking was introduced there which baked hash-lookup into the unpacking at a syntax level. In that world its impacted this same encouragement of "consistency" between local variable names and parameters of called functions and it certainly seems popular in that ecosystem. The practice still feels weird to me and I'm on the fence about it. Although, to be honest, I'm definitely leaning towards the "No, actually, it is a good thing." I grew up, development-speaking, in the Python world with a strong emphasis drilled into me that style constraints make better code and maybe this is just an extension of that. Of course, you might not always want the same name, but it is only encouraged not required. You can always rename variables. That said... I'm not actually a fan of the specific suggested syntax: > foo(*, a, b, c, d=3, e) I just wanted to give my two cents on the name consistency issue. > > So the idea is to generalize the * keyword only marker from function > > to also have the same meaning at the call site: everything after * is > > a kwarg. With this feature we can now simplify keyword arguments > > making them more readable and concise. (This syntax does not conflict > > with existing Python code.) > > It's certainly more concise, provided those named variables already > exist, but how often does that happen? You say 30% in your code base. > > (By the way, well done for writing an analysis tool! I mean it, I'm not > being sarcastic. We should have more of those.) > > I disagree that f(*, page) is more readable than an explicit named > keyword argument f(page=page). > > My own feeling is that this feature would encourage what I consider a > code-smell: function calls requiring large numbers of arguments. Your > argument about being concise makes a certain amount of sense if you are > frequently making calls like this: > > # chosing a real function, not a made-up example > open(file, mode=mode, buffering=buffering, encoding=encoding, > errors=errors, newline=newline, closefd=closefd, opener=opener) > > If 30% of your function calls look like that, I consider it a > code-smell. > > The benefit is a lot smaller if your function calls look more like this: > > open(file, encoding=encoding) > > and even less here: > > open(file, 'r', encoding=self.encoding or self.default_encoding, > errors=self.errors or self.default_error_handler) > > for example. To get benefit from your syntax, I would need to > extract out the arguments into temporary variables: > > encoding = self.encoding or self.default_encoding > errors = self.errors or self.default_error_handler > open(file, 'r', *, encoding, errors) > > which completely cancels out the "conciseness" argument. 
> > First version, with in-place arguments: > 1 statement > 2 lines > 120 characters including whitespace > > Second version, with temporary variables: > 3 statements > 3 lines > 138 characters including whitespace > > > However you look at it, it's longer and less concise if you have to > create temporary variables to make use of this feature. > > > -- > Steve > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertve92 at gmail.com Thu Sep 6 09:52:18 2018 From: robertve92 at gmail.com (Robert Vanden Eynde) Date: Thu, 6 Sep 2018 15:52:18 +0200 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> Message-ID: I'm trying to see how it can be done with current python. from somelib import auto auto(locals(), function, 'a', 'b', 'c', d=5) auto(locals(), function).call('a', 'b', 'c', d=5) auto(locals(), function)('a', 'b', 'c', d=5) auto(locals()).bind(function).call('a', 'b', 'c', d=5) One of those syntax for a class auto could be chosen but it allows you to give locals in the call. However, locals() gives a copy of the variables so it must be given as this code illustrates : def f(x): y = x+1 a = locals() g = 4 print(a) f(5) # {'y': 6, 'x': 5} Le jeu. 6 sept. 2018 ? 15:18, Calvin Spealman a ?crit : > > > On Thu, Sep 6, 2018 at 9:11 AM Steven D'Aprano > wrote: > >> On Thu, Sep 06, 2018 at 12:15:46PM +0200, Anders Hovm?ller wrote: >> >> > I have a working implementation for a new syntax which would make >> > using keyword arguments a lot nicer. Wouldn't it be awesome if instead >> > of: >> > >> > foo(a=a, b=b, c=c, d=3, e=e) >> > >> > we could just write: >> > >> > foo(*, a, b, c, d=3, e) >> > >> > and it would mean the exact same thing? >> >> No. >> >> >> > This would not just be shorter but would create an incentive for >> > consistent naming across the code base. >> >> You say that as if consistent naming is *in and of itself* a good thing, >> merely because it is consistent. >> >> I'm in favour of consistent naming when it helps the code, when the >> names are clear and relevant. But why should I feel bad about failing to >> use the same names as the functions I call? If some library author names >> the parameter to a function "a", why should I be encouraged to use >> that same name *just for the sake of consistency*? >> > > I've been asking this same question on the Javascript/ES6 side of my work > ever since unpacking was introduced there which baked hash-lookup into > the unpacking at a syntax level. > > In that world its impacted this same encouragement of "consistency" between > local variable names and parameters of called functions and it certainly > seems > popular in that ecosystem. The practice still feels weird to me and I'm on > the fence > about it. > > Although, to be honest, I'm definitely leaning towards the "No, actually, > it is a > good thing." I grew up, development-speaking, in the Python world with a > strong emphasis drilled into me that style constraints make better code and > maybe this is just an extension of that. > > Of course, you might not always want the same name, but it is only > encouraged > not required. You can always rename variables. > > That said... 
I'm not actually a fan of the specific suggested syntax: > > > foo(*, a, b, c, d=3, e) > > I just wanted to give my two cents on the name consistency issue. > > > >> > So the idea is to generalize the * keyword only marker from function >> > to also have the same meaning at the call site: everything after * is >> > a kwarg. With this feature we can now simplify keyword arguments >> > making them more readable and concise. (This syntax does not conflict >> > with existing Python code.) >> >> It's certainly more concise, provided those named variables already >> exist, but how often does that happen? You say 30% in your code base. >> >> (By the way, well done for writing an analysis tool! I mean it, I'm not >> being sarcastic. We should have more of those.) >> >> I disagree that f(*, page) is more readable than an explicit named >> keyword argument f(page=page). >> >> My own feeling is that this feature would encourage what I consider a >> code-smell: function calls requiring large numbers of arguments. Your >> argument about being concise makes a certain amount of sense if you are >> frequently making calls like this: >> >> # chosing a real function, not a made-up example >> open(file, mode=mode, buffering=buffering, encoding=encoding, >> errors=errors, newline=newline, closefd=closefd, opener=opener) >> >> If 30% of your function calls look like that, I consider it a >> code-smell. >> >> The benefit is a lot smaller if your function calls look more like this: >> >> open(file, encoding=encoding) >> >> and even less here: >> >> open(file, 'r', encoding=self.encoding or self.default_encoding, >> errors=self.errors or self.default_error_handler) >> >> for example. To get benefit from your syntax, I would need to >> extract out the arguments into temporary variables: >> >> encoding = self.encoding or self.default_encoding >> errors = self.errors or self.default_error_handler >> open(file, 'r', *, encoding, errors) >> >> which completely cancels out the "conciseness" argument. >> >> First version, with in-place arguments: >> 1 statement >> 2 lines >> 120 characters including whitespace >> >> Second version, with temporary variables: >> 3 statements >> 3 lines >> 138 characters including whitespace >> >> >> However you look at it, it's longer and less concise if you have to >> create temporary variables to make use of this feature. >> >> >> -- >> Steve >> _______________________________________________ >> Python-ideas mailing list >> Python-ideas at python.org >> https://mail.python.org/mailman/listinfo/python-ideas >> Code of Conduct: http://python.org/psf/codeofconduct/ >> > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From toddrjen at gmail.com Thu Sep 6 09:56:10 2018 From: toddrjen at gmail.com (Todd) Date: Thu, 6 Sep 2018 09:56:10 -0400 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> Message-ID: I have encountered situations like this, and generally I just use **kwargs for non-critical and handle the parameter management in the body of the function. This also makes it easier to pass the arguments to another function. 
You can use a dict comprehension to copy over the keys you want, then unpack them as arguments to the next function. On Thu, Sep 6, 2018 at 6:16 AM Anders Hovm?ller wrote: > I have a working implementation for a new syntax which would make using > keyword arguments a lot nicer. Wouldn't it be awesome if instead of: > > foo(a=a, b=b, c=c, d=3, e=e) > > we could just write: > > foo(*, a, b, c, d=3, e) > > and it would mean the exact same thing? This would not just be shorter but > would create an incentive for consistent naming across the code base. > > So the idea is to generalize the * keyword only marker from function to > also have the same meaning at the call site: everything after * is a kwarg. > With this feature we can now simplify keyword arguments making them more > readable and concise. (This syntax does not conflict with existing Python > code.) > > The full PEP-style suggestion is here: > https://gist.github.com/boxed/f72221e7e77370be3e5703087c1ba54d > > I have also written an analysis tool you can use on your code base to see > what kind of impact this suggestion might have. It's available at > https://gist.github.com/boxed/610b2ba73066c96e9781aed7c0c0b25c . The > results for django and twisted are posted as comments to the gist. > > We've run this on our two big code bases at work (both around 250kloc > excluding comments and blank lines). The results show that ~30% of all > arguments would benefit from this syntax. > > Me and my colleague Johan L?bcke have also written an implementation that > is available at: https://github.com/boxed/cpython > > / Anders Hovm?ller > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From toddrjen at gmail.com Thu Sep 6 09:58:03 2018 From: toddrjen at gmail.com (Todd) Date: Thu, 6 Sep 2018 09:58:03 -0400 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> Message-ID: Sorry, nevermind. I think I misunderstood the idea. On Thu, Sep 6, 2018 at 9:56 AM Todd wrote: > I have encountered situations like this, and generally I just use **kwargs > for non-critical and handle the parameter management in the body of the > function. > > This also makes it easier to pass the arguments to another function. You > can use a dict comprehension to copy over the keys you want, then unpack > them as arguments to the next function. > > On Thu, Sep 6, 2018 at 6:16 AM Anders Hovm?ller > wrote: > >> I have a working implementation for a new syntax which would make using >> keyword arguments a lot nicer. Wouldn't it be awesome if instead of: >> >> foo(a=a, b=b, c=c, d=3, e=e) >> >> we could just write: >> >> foo(*, a, b, c, d=3, e) >> >> and it would mean the exact same thing? This would not just be shorter >> but would create an incentive for consistent naming across the code base. >> >> So the idea is to generalize the * keyword only marker from function to >> also have the same meaning at the call site: everything after * is a kwarg. >> With this feature we can now simplify keyword arguments making them more >> readable and concise. (This syntax does not conflict with existing Python >> code.) 
>> >> The full PEP-style suggestion is here: >> https://gist.github.com/boxed/f72221e7e77370be3e5703087c1ba54d >> >> I have also written an analysis tool you can use on your code base to see >> what kind of impact this suggestion might have. It's available at >> https://gist.github.com/boxed/610b2ba73066c96e9781aed7c0c0b25c . The >> results for django and twisted are posted as comments to the gist. >> >> We've run this on our two big code bases at work (both around 250kloc >> excluding comments and blank lines). The results show that ~30% of all >> arguments would benefit from this syntax. >> >> Me and my colleague Johan L?bcke have also written an implementation that >> is available at: https://github.com/boxed/cpython >> >> / Anders Hovm?ller >> _______________________________________________ >> Python-ideas mailing list >> Python-ideas at python.org >> https://mail.python.org/mailman/listinfo/python-ideas >> Code of Conduct: http://python.org/psf/codeofconduct/ >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From boxed at killingar.net Thu Sep 6 10:05:57 2018 From: boxed at killingar.net (=?UTF-8?Q?Anders_Hovm=C3=B6ller?=) Date: Thu, 6 Sep 2018 07:05:57 -0700 (PDT) Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <20180906131028.GB27312@ando.pearwood.info> References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> Message-ID: <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> On Thursday, September 6, 2018 at 3:11:46 PM UTC+2, Steven D'Aprano wrote: > > On Thu, Sep 06, 2018 at 12:15:46PM +0200, Anders Hovm?ller wrote: > > > I have a working implementation for a new syntax which would make > > using keyword arguments a lot nicer. Wouldn't it be awesome if instead > > of: > > > > foo(a=a, b=b, c=c, d=3, e=e) > > > > we could just write: > > > > foo(*, a, b, c, d=3, e) > > > > and it would mean the exact same thing? > > No. Heh. I did expect the first mail to be uncivil :P > > > > This would not just be shorter but would create an incentive for > > consistent naming across the code base. > > You say that as if consistent naming is *in and of itself* a good thing, > merely because it is consistent. > If it's the same thing yes. Otherwise no. > I'm in favour of consistent naming when it helps the code, when the > names are clear and relevant. Which is what I'm saying. > But why should I feel bad about failing to > use the same names as the functions I call? Yea, why would you feel bad? If you should have different names, then do. Of course. > If some library author names > the parameter to a function "a", why should I be encouraged to use > that same name *just for the sake of consistency*? > It would encourage library authors to name their parameters well. It wouldn't do anything else. > > So the idea is to generalize the * keyword only marker from function > > to also have the same meaning at the call site: everything after * is > > a kwarg. With this feature we can now simplify keyword arguments > > making them more readable and concise. (This syntax does not conflict > > with existing Python code.) > > It's certainly more concise, provided those named variables already > exist, but how often does that happen? You say 30% in your code base. > (Caveat: 30% of the cases where my super simple and stupid tool can find.) It's similar for django btw. > I disagree that f(*, page) is more readable than an explicit named > keyword argument f(page=page). > People prefer f(page) today. 
For some reason. That might refute your statement or not, depending on why they do it. > > My own feeling is that this feature would encourage what I consider a > code-smell: function calls requiring large numbers of arguments. Your > argument about being concise makes a certain amount of sense if you are > frequently making calls like this: > I don't see how that's relevant (or true, but let's stick with relevant). There are actual APIs that have lots of arguments. GUI toolkits are a great example. Another great example is to send a context dict to a template engine. To get benefit from your syntax, I would need to > extract out the arguments into temporary variables: > which completely cancels out the "conciseness" argument. > > First version, with in-place arguments: > 1 statement > 2 lines > 120 characters including whitespace > > Second version, with temporary variables: > 3 statements > 3 lines > 138 characters including whitespace > > > However you look at it, it's longer and less concise if you have to > create temporary variables to make use of this feature. Ok. Sure, but that's a straw man.... / Anders -------------- next part -------------- An HTML attachment was scrubbed... URL: From mertz at gnosis.cx Thu Sep 6 10:12:26 2018 From: mertz at gnosis.cx (David Mertz) Date: Thu, 6 Sep 2018 10:12:26 -0400 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <20180906131028.GB27312@ando.pearwood.info> References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> Message-ID: Steven's point is the same as my impression. It's not terribly uncommon in code I write or read to use the same name for a formal parameter (whether keyword or positional) in the calling scope. But it's also far from universal. Almost all the time where it's not the case, it's for a very good reason. Functions by their nature are *generic* in some sense. That is, they allow themselves to be called from many other places. Each of those places has its own semantic context where different names are relevant to readers of the code in that other place. As a rule, the names used in function parameters are less specific or descriptive because they have to be neutral about that calling context. So e.g. a toy example: for record in ledger: if record.amount > 0: bank_transaction(currency=currencies[record.country], deposit=record.amount, account_number=record.id) Once in a while the names in the two scopes align, but it would be code obfuscation to *force* them to do so (either by actual requirement or because "it's shorter"). On Thu, Sep 6, 2018 at 9:11 AM Steven D'Aprano wrote: > > I have a working implementation for a new syntax which would make > > using keyword arguments a lot nicer. Wouldn't it be awesome if instead > > foo(a=a, b=b, c=c, d=3, e=e) > > we could just write: > > foo(*, a, b, c, d=3, e) > You say that as if consistent naming is *in and of itself* a good thing, > merely because it is consistent. > I'm in favour of consistent naming when it helps the code, when the > names are clear and relevant. But why should I feel bad about failing to > use the same names as the functions I call? -- Keeping medicines from the bloodstreams of the sick; food from the bellies of the hungry; books from the hands of the uneducated; technology from the underdeveloped; and putting advocates of freedom in prisons. Intellectual property is to the 21st century what the slave trade was to the 16th. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From rhodri at kynesim.co.uk Thu Sep 6 10:44:11 2018 From: rhodri at kynesim.co.uk (Rhodri James) Date: Thu, 6 Sep 2018 15:44:11 +0100 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> Message-ID: <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> On 06/09/18 15:05, Anders Hovm?ller wrote: > > > On Thursday, September 6, 2018 at 3:11:46 PM UTC+2, Steven D'Aprano wrote: >> >> On Thu, Sep 06, 2018 at 12:15:46PM +0200, Anders Hovm?ller wrote: >> >>> I have a working implementation for a new syntax which would make >>> using keyword arguments a lot nicer. Wouldn't it be awesome if instead >>> of: >>> >>> foo(a=a, b=b, c=c, d=3, e=e) >>> >>> we could just write: >>> >>> foo(*, a, b, c, d=3, e) >>> >>> and it would mean the exact same thing? >> >> No. > > > Heh. I did expect the first mail to be uncivil :P For comparison, my reaction did indeed involve awe. It was full of it, in fact :-p Sorry, but that syntax looks at best highly misleading -- how many parameters are we passing? I don't like it at all. >> I'm in favour of consistent naming when it helps the code, when the >> names are clear and relevant. > > > Which is what I'm saying. Actually you are not. Adding specific syntax support is a strong signal that you expect people to use it and (in this case) use consistent naming. Full stop. It's a much stronger statement than you seem to think. >> I disagree that f(*, page) is more readable than an explicit named >> keyword argument f(page=page). >> > > People prefer f(page) today. For some reason. That might refute your > statement or not, depending on why they do it. Evidence? -- Rhodri James *-* Kynesim Ltd From jfine2358 at gmail.com Thu Sep 6 12:40:38 2018 From: jfine2358 at gmail.com (Jonathan Fine) Date: Thu, 6 Sep 2018 17:40:38 +0100 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> Message-ID: Hi Anders Thank you for your interesting message. I'm sure it's based on a real need. You wrote: > I have a working implementation for a new syntax which would make using keyword arguments a lot nicer. Wouldn't it be awesome if instead of: > foo(a=a, b=b, c=c, d=3, e=e) > we could just write: > foo(*, a, b, c, d=3, e) > and it would mean the exact same thing? I assume you're talking about defining functions. Here's something that already works in Python. >>> def fn(*, a, b, c, d, e): return locals() >>> fn.__kwdefaults__ = dict(a=1, b=2, c=3, d=4, e=5) >>> fn() {'d': 4, 'b': 2, 'e': 5, 'c': 3, 'a': 1} And to pick up something from the namespace >>> eval('aaa', fn.__globals__) 'telltale' Aside: This is short, simple and unsafe. Here's a safer way >>> __name__ '__main__' >>> import sys >>> getattr(sys.modules[__name__], 'aaa') 'telltale' >From this, it should be easy to construct exactly the dict() that you want for the kwdefaults. 
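For illustration, that construction might look something like this (a sketch, not from the original mail; the helper name bind_kwdefaults is invented):

    def bind_kwdefaults(fn, namespace, *names):
        # Build fn's keyword-only defaults from names that are already
        # bound in the given namespace (e.g. locals() at the call site).
        fn.__kwdefaults__ = {name: namespace[name] for name in names}
        return fn

    def fn(*, a, b, c, d, e):
        return locals()

    a, b, c, d, e = 1, 2, 3, 4, 5
    bind_kwdefaults(fn, locals(), 'a', 'b', 'c', 'd', 'e')
    fn()   # -> {'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 5}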
-- Jonathan From ethan at stoneleaf.us Thu Sep 6 12:50:11 2018 From: ethan at stoneleaf.us (Ethan Furman) Date: Thu, 06 Sep 2018 09:50:11 -0700 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> Message-ID: <5B915AC3.8080003@stoneleaf.us> On 09/06/2018 07:05 AM, Anders Hovm?ller wrote: > On Thursday, September 6, 2018 at 3:11:46 PM UTC+2, Steven D'Aprano wrote: >> On Thu, Sep 06, 2018 at 12:15:46PM +0200, Anders Hovm?ller wrote: >>> Wouldn't it be awesome if [...] >> >> No. > > Heh. I did expect the first mail to be uncivil :P Direct disagreement is not uncivil, just direct. You asked a yes/no question and got a yes/no answer. D'Aprano's comments further down are also not uncivil, just explicative (not expletive ;) ) of his position. As for your proposal, I agree with D'Aprano -- this is a lot machinery to support a use-case that doesn't feel compelling to me, and I do tend to name my variables the same when I can. -- ~Ethan~ From jfine2358 at gmail.com Thu Sep 6 13:45:45 2018 From: jfine2358 at gmail.com (Jonathan Fine) Date: Thu, 6 Sep 2018 18:45:45 +0100 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> Message-ID: I missed an important line of code. Here it is: >>> aaa = 'telltale' Once you have that, these will work: >>> eval('aaa', fn.__globals__) 'telltale' >>> __name__ '__main__' >>> import sys >>> getattr(sys.modules[__name__], 'aaa') 'telltale' -- Jonathan From leewangzhong+python at gmail.com Thu Sep 6 14:11:02 2018 From: leewangzhong+python at gmail.com (Franklin? Lee) Date: Thu, 6 Sep 2018 14:11:02 -0400 Subject: [Python-ideas] On evaluating features [was: Unpacking iterables for augmented assignment] In-Reply-To: <5B85CE84.7080300@canterbury.ac.nz> References: <23429.581.322785.398472@turnbull.sk.tsukuba.ac.jp> <23429.31285.936745.529073@turnbull.sk.tsukuba.ac.jp> <5B85CE84.7080300@canterbury.ac.nz> Message-ID: On Tue, Aug 28, 2018 at 6:37 PM Greg Ewing wrote: > > Guido van Rossum wrote: > > we might propose (as the OP did) that this: > > > > a, b, c += x, y, z > > > > could be made equivalent to this: > > > > a += x > > b += y > > c += z > > But not without violating the principle that > > lhs += rhs > > is equivalent to > > lhs = lhs.__iadd__(lhs) (Corrected: lhs = lhs.__iadd__(rhs)) Since lhs here is neither a list nor a tuple, how is it violated? Or rather, how is it any more of a special case than in this syntax: # Neither name-binding or setitem/setattr. [a,b,c] = items If lhs is a Numpy array, then: a_b_c += x, y, z is equivalent to: a_b_c = a_b_c.__iadd__((x,y,z)) We can translate the original example: a, b, c += x, y, z to: a, b, c = target_list(a,b,c).__iadd__((x,y,z)) where `target_list` is a virtual (not as in "virtual function") type for target list constructs. From rosuav at gmail.com Thu Sep 6 14:23:02 2018 From: rosuav at gmail.com (Chris Angelico) Date: Fri, 7 Sep 2018 04:23:02 +1000 Subject: [Python-ideas] On evaluating features [was: Unpacking iterables for augmented assignment] In-Reply-To: References: <23429.581.322785.398472@turnbull.sk.tsukuba.ac.jp> <23429.31285.936745.529073@turnbull.sk.tsukuba.ac.jp> <5B85CE84.7080300@canterbury.ac.nz> Message-ID: On Fri, Sep 7, 2018 at 4:11 AM, Franklin? 
Lee wrote: > On Tue, Aug 28, 2018 at 6:37 PM Greg Ewing wrote: >> >> Guido van Rossum wrote: >> > we might propose (as the OP did) that this: >> > >> > a, b, c += x, y, z >> > >> > could be made equivalent to this: >> > >> > a += x >> > b += y >> > c += z >> >> But not without violating the principle that >> >> lhs += rhs >> >> is equivalent to >> >> lhs = lhs.__iadd__(lhs) > > (Corrected: lhs = lhs.__iadd__(rhs)) > > Since lhs here is neither a list nor a tuple, how is it violated? Or > rather, how is it any more of a special case than in this syntax: > > # Neither name-binding or setitem/setattr. > [a,b,c] = items > > If lhs is a Numpy array, then: > a_b_c += x, y, z > is equivalent to: > a_b_c = a_b_c.__iadd__((x,y,z)) > > We can translate the original example: > a, b, c += x, y, z > to: > a, b, c = target_list(a,b,c).__iadd__((x,y,z)) > where `target_list` is a virtual (not as in "virtual function") type > for target list constructs. What is the virtual type here, and what does its __iadd__ method do? I don't understand you here. Can you go into detail? Suppose I'm the author of the class that all six of these objects are instances of; can I customize the effect of __iadd__ here in some way, and if so, how? ChrisA From jfine2358 at gmail.com Thu Sep 6 14:33:46 2018 From: jfine2358 at gmail.com (Jonathan Fine) Date: Thu, 6 Sep 2018 19:33:46 +0100 Subject: [Python-ideas] On evaluating features [was: Unpacking iterables for augmented assignment] In-Reply-To: References: <23429.581.322785.398472@turnbull.sk.tsukuba.ac.jp> <23429.31285.936745.529073@turnbull.sk.tsukuba.ac.jp> <5B85CE84.7080300@canterbury.ac.nz> Message-ID: Hi Franklin Lee Thank you for your message. You wrote: > We can translate the original example: > a, b, c += x, y, z > to: > a, b, c = target_list(a,b,c).__iadd__((x,y,z)) > where `target_list` is a virtual (not as in "virtual function") type > for target list constructs. Yes, we can.I think all are agreed that that such semantics for a, b, c += x, y, z could be provided in a future version of Python. At present we get >>> a, b, c += [4, 5, 6] SyntaxError: illegal expression for augmented assignment Where we're not agreed, I think, is whether doing so would be a good idea. The proposers think it is a good idea. However, unless the proposers convince sufficient users that it is a good idea to do so, it probably won't be added to Python. By the way, I think it's easier to get users for a pure Python module (and hence perhaps get it into the standard library) than it is to make a language syntax and semantics change. And I also like that things are this way. -- Jonathan From leewangzhong+python at gmail.com Thu Sep 6 14:38:26 2018 From: leewangzhong+python at gmail.com (Franklin? Lee) Date: Thu, 6 Sep 2018 14:38:26 -0400 Subject: [Python-ideas] On evaluating features [was: Unpacking iterables for augmented assignment] In-Reply-To: References: <23429.581.322785.398472@turnbull.sk.tsukuba.ac.jp> <23429.31285.936745.529073@turnbull.sk.tsukuba.ac.jp> <5B85CE84.7080300@canterbury.ac.nz> Message-ID: On Thu, Sep 6, 2018 at 2:23 PM Chris Angelico wrote: > > On Fri, Sep 7, 2018 at 4:11 AM, Franklin? 
Lee > wrote: > > On Tue, Aug 28, 2018 at 6:37 PM Greg Ewing wrote: > >> > >> Guido van Rossum wrote: > >> > we might propose (as the OP did) that this: > >> > > >> > a, b, c += x, y, z > >> > > >> > could be made equivalent to this: > >> > > >> > a += x > >> > b += y > >> > c += z > >> > >> But not without violating the principle that > >> > >> lhs += rhs > >> > >> is equivalent to > >> > >> lhs = lhs.__iadd__(lhs) > > > > (Corrected: lhs = lhs.__iadd__(rhs)) > > > > Since lhs here is neither a list nor a tuple, how is it violated? Or > > rather, how is it any more of a special case than in this syntax: > > > > # Neither name-binding or setitem/setattr. > > [a,b,c] = items > > > > If lhs is a Numpy array, then: > > a_b_c += x, y, z > > is equivalent to: > > a_b_c = a_b_c.__iadd__((x,y,z)) > > > > We can translate the original example: > > a, b, c += x, y, z > > to: > > a, b, c = target_list(a,b,c).__iadd__((x,y,z)) > > where `target_list` is a virtual (not as in "virtual function") type > > for target list constructs. > > What is the virtual type here, and what does its __iadd__ method do? I > don't understand you here. Can you go into detail? Suppose I'm the > author of the class that all six of these objects are instances of; > can I customize the effect of __iadd__ here in some way, and if so, > how? I shouldn't have used jargon I had to look up myself. The following are equivalent and compile down to the same code: a, b, c = lst [a, b, c] = lst The left hand side is not an actual list (even though it looks like one). The brackets are optional. The docs call the left hand side a target list: https://docs.python.org/3/reference/simple_stmts.html#assignment-statements "Target list" is not a real type. You can't construct such an object, or hold one in memory. You can't make a class that emulates it (without interpreter-specific hacks), because it is a collection of its names, not a collection of values. target_list.__iadd__ also does not exist, because target_list does not exist. However, target_list can be thought of as a virtual type, a type that the compiler compiles away. We can then consider target_list.__iadd__ as a virtual operator, which the compiler will understand but hide from the runtime. I was making the point that, because the __iadd__ in the example does not refer to list.__iadd__, but rather a virtual target_list.__iadd__, there is not yet a violation of the rule. From rosuav at gmail.com Thu Sep 6 14:46:36 2018 From: rosuav at gmail.com (Chris Angelico) Date: Fri, 7 Sep 2018 04:46:36 +1000 Subject: [Python-ideas] On evaluating features [was: Unpacking iterables for augmented assignment] In-Reply-To: References: <23429.581.322785.398472@turnbull.sk.tsukuba.ac.jp> <23429.31285.936745.529073@turnbull.sk.tsukuba.ac.jp> <5B85CE84.7080300@canterbury.ac.nz> Message-ID: On Fri, Sep 7, 2018 at 4:38 AM, Franklin? Lee wrote: > The following are equivalent and compile down to the same code: > a, b, c = lst > [a, b, c] = lst > > The left hand side is not an actual list (even though it looks like > one). The brackets are optional. The docs call the left hand side a > target list: https://docs.python.org/3/reference/simple_stmts.html#assignment-statements > > "Target list" is not a real type. You can't construct such an object, > or hold one in memory. You can't make a class that emulates it > (without interpreter-specific hacks), because it is a collection of > its names, not a collection of values. 
A target list is a syntactic element, like a name, or an operator, or a "yield" statement. You can't construct one, because it isn't an object type. It's not a "virtual type". It's a completely different sort of thing. > target_list.__iadd__ also does not exist, because target_list does not > exist. However, target_list can be thought of as a virtual type, a > type that the compiler compiles away. We can then consider > target_list.__iadd__ as a virtual operator, which the compiler will > understand but hide from the runtime. > > I was making the point that, because the __iadd__ in the example does > not refer to list.__iadd__, but rather a virtual target_list.__iadd__, > there is not yet a violation of the rule. What you're suggesting is on par with trying to say that: for += 5 should be implemented as: current_loop.__iadd__(5) where "current_loop" doesn't really exist, but it's a virtual type that represents a 'for' loop. That doesn't make sense, because there is no object in Python to represent the loop. There is no class/type that represents all loops, on which a method like this could be added. The word 'for' is part of the grammar, not the object model. And "target list" is the same. There's no way to attach an __iadd__ method to something that doesn't exist. So for your proposal to work, you would need to break that rule, and give a *different* meaning to this. ChrisA From jfine2358 at gmail.com Thu Sep 6 15:10:49 2018 From: jfine2358 at gmail.com (Jonathan Fine) Date: Thu, 6 Sep 2018 20:10:49 +0100 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> Message-ID: Summary: I addressed the DEFINING problem. My mistake. Some rough ideas for the CALLING problem. Anders has kindly pointed out to me, off-list, that I solved the wrong problem. His problem is CALLING the function fn, not DEFINING fn. Thank you very much for this, Anders. For calling, we can use https://docs.python.org/3/library/functions.html#locals >>> lcls = locals() >>> a = 'apple' >>> b = 'banana' >>> c = 'cherry' >>> dict((k, lcls[k]) for k in ('a', 'b', 'c')) {'b': 'banana', 'c': 'cherry', 'a': 'apple'} So in his example foo(a=a, b=b, c=c, d=3, e=e) one could instead write foo(d=3, **helper(locals(), ('a', 'b', 'c', 'e'))) or perhaps better helper(locals(), 'a', 'b', 'c', 'e')(foo, d=3) where the helper() picks out items from the locals(). And in the second form, does the right thing with them. Finally, one might be able to use >>> def fn(*, a, b, c, d, e): f, g, h = 3, 4, 5 >>> fn.__code__.co_kwonlyargcount 5 >>> fn.__code__.co_varnames ('a', 'b', 'c', 'd', 'e', 'f', 'g', 'h') >>> fn.__code__.co_argcount 0 to identify the names of all keyword arguments of the function foo(), and they provide the values in locals() as the defaults. Of course, this is somewhat magical, and requires strict conformance to conventions. So might not be a good idea. The syntax could then be localmagic(foo, locals())(d=3) which, for magicians, might be easier. But rightly in my opinion, Python is reluctant to use magic. On the other hand, for a strictly controlled Domain Specific Language, it might, just might, be useful. And this list is for "speculative language ideas" (see https://mail.python.org/mailman/listinfo/python-ideas). -- Jonathan From leewangzhong+python at gmail.com Thu Sep 6 15:26:47 2018 From: leewangzhong+python at gmail.com (Franklin? 
Lee) Date: Thu, 6 Sep 2018 15:26:47 -0400 Subject: [Python-ideas] On evaluating features [was: Unpacking iterables for augmented assignment] In-Reply-To: References: <23429.581.322785.398472@turnbull.sk.tsukuba.ac.jp> <23429.31285.936745.529073@turnbull.sk.tsukuba.ac.jp> <5B85CE84.7080300@canterbury.ac.nz> Message-ID: n Thu, Sep 6, 2018 at 2:47 PM Chris Angelico wrote: > > On Fri, Sep 7, 2018 at 4:38 AM, Franklin? Lee > wrote: > > The following are equivalent and compile down to the same code: > > a, b, c = lst > > [a, b, c] = lst > > > > The left hand side is not an actual list (even though it looks like > > one). The brackets are optional. The docs call the left hand side a > > target list: https://docs.python.org/3/reference/simple_stmts.html#assignment-statements > > > > "Target list" is not a real type. You can't construct such an object, > > or hold one in memory. You can't make a class that emulates it > > (without interpreter-specific hacks), because it is a collection of > > its names, not a collection of values. > > A target list is a syntactic element, like a name, or an operator, or > a "yield" statement. You can't construct one, because it isn't an > object type. It's not a "virtual type". It's a completely different > sort of thing. I didn't think I gave the impression that I was complaining about not being able to construct it. I gave an explanation for how it isn't a real type, because you asked how you could modify the behavior, and because I wanted to give an explanation for more than just you. There are constructs that correspond to types (such as slices and functions). There are those that don't. We call `3:2` (in the right context) a slice, even though it's technically a construct which is compiled down to a `slice` object. I see no problem there. I called it a "virtual type" and explained why I called it that. You reject the use of that term, but you don't even acknowledge that I gave reasons for it. > > target_list.__iadd__ also does not exist, because target_list does not > > exist. However, target_list can be thought of as a virtual type, a > > type that the compiler compiles away. We can then consider > > target_list.__iadd__ as a virtual operator, which the compiler will > > understand but hide from the runtime. > > > > I was making the point that, because the __iadd__ in the example does > > not refer to list.__iadd__, but rather a virtual target_list.__iadd__, > > there is not yet a violation of the rule. > > What you're suggesting is on par with trying to say that: > > for += 5 > > should be implemented as: > > current_loop.__iadd__(5) > > where "current_loop" doesn't really exist, but it's a virtual type > that represents a 'for' loop. I explained how target_list could be thought of as a special imaginary type which only exists in the compiler's "mind", and then extended that to an imaginary method on that type. Of course your example shows absurdity: you didn't try to say how a for-loop is like an object in the first place. > That doesn't make sense, because there > is no object in Python to represent the loop. There is no class/type > that represents all loops, on which a method like this could be added. > The word 'for' is part of the grammar, not the object model. And > "target list" is the same. There's no way to attach an __iadd__ method > to something that doesn't exist. But I'm not using the word `for`. I am using constructs like `[a,b,c]` (where it is not a list). At least use `(for x in y: z) += 5` as your example. 
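For readers following this exchange, the existing behaviour both sides keep referring to (lhs += rhs calling lhs.__iadd__(rhs) and rebinding lhs) is easy to check in today's Python. A small illustration, not from the original mails:

    lst = [1, 2]
    alias = lst
    lst += (3, 4)        # list.__iadd__ extends in place, then rebinds lst
    assert lst is alias and lst == [1, 2, 3, 4]

    # The statement under discussion is currently rejected outright:
    #   a, b, c += x, y, z
    # SyntaxError: illegal expression for augmented assignment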
You're effectively accusing me of trying to make `[` (a single token, not a full construct) an object. Your argument here is that there is no Python object to represent a loop, but that really means there's no _runtime_ object to represent a loop. I already said that target lists don't exist in memory (i.e. runtime). "Target list" does exist, just not as a runtime type. It exists as an abstraction not available to the runtime, and we can extend that abstraction in ways not available to the runtime. That means that you can't attach it during the runtime. It does not mean you can't reason with it during compile-time. > So for your proposal to work, you would need to break that rule, and > give a *different* meaning to this. It is not my proposal. I was questioning how there was a rule violation about x+=y translating to `x = x.__iadd__(y)`. You're talking about a different, made-up rule about how syntactical constructs can't correspond to compile-time imaginary objects or runtime objects. But there are syntactical constructs that DO correspond to runtime types (slice, list, class), there are those which don't but can (let's not get into that), there are those which can stay compile-time (f-strings, target lists), and there are those which probably can't be thought of as types at all (import). From brett at python.org Thu Sep 6 15:57:21 2018 From: brett at python.org (Brett Cannon) Date: Thu, 6 Sep 2018 12:57:21 -0700 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <5B915AC3.8080003@stoneleaf.us> References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <5B915AC3.8080003@stoneleaf.us> Message-ID: On Thu, 6 Sep 2018 at 09:51 Ethan Furman wrote: > On 09/06/2018 07:05 AM, Anders Hovm?ller wrote: > > On Thursday, September 6, 2018 at 3:11:46 PM UTC+2, Steven D'Aprano > wrote: > >> On Thu, Sep 06, 2018 at 12:15:46PM +0200, Anders Hovm?ller wrote: > > >>> Wouldn't it be awesome if [...] > >> > >> No. > > > > Heh. I did expect the first mail to be uncivil :P > > Direct disagreement is not uncivil, just direct. You asked a yes/no > question and got a yes/no answer. D'Aprano's > comments further down are also not uncivil, just explicative (not > expletive ;) ) of his position. > It also wouldn't have hurt to say "I don't think so" versus the hard "no" as it means the same thing. You're right that blunt isn't necessarily uncivil, but bluntness is also interpreted differently in various cultures so it's something to avoid if possible. -Brett > > As for your proposal, I agree with D'Aprano -- this is a lot machinery to > support a use-case that doesn't feel > compelling to me, and I do tend to name my variables the same when I can. > > -- > ~Ethan~ > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From greg.ewing at canterbury.ac.nz Thu Sep 6 18:39:07 2018 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 07 Sep 2018 10:39:07 +1200 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> Message-ID: <5B91AC8B.9030909@canterbury.ac.nz> Rhodri James wrote: > that syntax looks at best highly misleading -- > how many parameters are we passing? I don't like it at all. Maybe something like this would be better: f(=a, =b, =c) Much more suggestive that you're passing a keyword argument. As for whether consistent naming is a good idea, seems to me it's the obvious thing to do when e.g. you're overriding a method, to keep the signature the same for people who want to pass arguments by keyword. You'd need to have a pretty strong reason *not* to keep the parameter names the same. Given that, it's natural to want a way to avoid repeating yourself so much when passing them on. So I think the underlying idea has merit, but the particular syntax proposed is not the best. -- Greg From boxed at killingar.net Thu Sep 6 22:30:35 2018 From: boxed at killingar.net (=?UTF-8?Q?Anders_Hovm=C3=B6ller?=) Date: Thu, 6 Sep 2018 19:30:35 -0700 (PDT) Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> Message-ID: <10421daf-51a6-4a71-a891-8fd2312d0dc4@googlegroups.com> On Thursday, September 6, 2018 at 4:13:45 PM UTC+2, David Mertz wrote: > > Steven's point is the same as my impression. It's not terribly uncommon in > code I write or read to use the same name for a formal parameter (whether > keyword or positional) in the calling scope. But it's also far from > universal. Almost all the time where it's not the case, it's for a very > good reason. > > Functions by their nature are *generic* in some sense. That is, they > allow themselves to be called from many other places. Each of those places > has its own semantic context where different names are relevant to readers > of the code in that other place. As a rule, the names used in function > parameters are less specific or descriptive because they have to be neutral > about that calling context. So e.g. a toy example: > > for record in ledger: > if record.amount > 0: > bank_transaction(currency=currencies[record.country], > deposit=record.amount, > account_number=record.id) > > Once in a while the names in the two scopes align, but it would be code > obfuscation to *force* them to do so (either by actual requirement or > because "it's shorter"). > Pythons normal arguments already gives people an option to write something else "because it's shorter" though: just use positional style. So your example is a bit dishonest because it would be: bank_transaction(currencies[record.country], record.amount, record.id) ...in many many or even most code bases. And I would urge you to try out my analysis tool on some large code base you have access to. I do have numbers to back up my claims. I don't have numbers on all the places where the names don't align but would be *better* if they did align though, because that's a huge manual task, but I think it's pretty obvious these places exists. 
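Spelled out, the comparison being argued over looks like this (an illustrative sketch only: the Record/currencies setup is invented to make the bank_transaction example quoted above runnable, and the final starred call is the proposed syntax, which is not valid in today's Python):

    from collections import namedtuple

    Record = namedtuple('Record', 'country amount id')
    record = Record(country='SE', amount=100, id=42)
    currencies = {'SE': 'SEK'}

    def bank_transaction(currency, deposit, account_number):
        print(currency, deposit, account_number)

    # The spelling Anders argues dominates real code bases:
    bank_transaction(currencies[record.country], record.amount, record.id)

    # The explicit keyword spelling from David's example:
    bank_transaction(currency=currencies[record.country],
                     deposit=record.amount,
                     account_number=record.id)

    # The proposed spelling (not valid today; it would also require the
    # caller's locals to be named currency, deposit and account_number):
    # bank_transaction(*, currency, deposit, account_number)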
-------------- next part -------------- An HTML attachment was scrubbed... URL: From boxed at killingar.net Thu Sep 6 22:38:02 2018 From: boxed at killingar.net (=?UTF-8?Q?Anders_Hovm=C3=B6ller?=) Date: Thu, 6 Sep 2018 19:38:02 -0700 (PDT) Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> Message-ID: <1c494333-3d3d-4f21-81eb-a984ec295ac5@googlegroups.com> > > > For comparison, my reaction did indeed involve awe. It was full of it, > in fact :-p Sorry, but that syntax looks at best highly misleading -- > how many parameters are we passing? I don't like it at all. > (nitpick: we're passing arguments, not parameters) I don't see how this could be confusing. Do you think it's confusing how many parameters a function has in python now because of the keyword only marker? This suggestion follows the same rules you should already be familiar with when counting parameters, why would you now have trouble counting when the line doesn't begin with "def " and end with ":"? > >> I'm in favour of consistent naming when it helps the code, when the > >> names are clear and relevant. > > > > > > Which is what I'm saying. > > Actually you are not. Adding specific syntax support is a strong signal > that you expect people to use it and (in this case) use consistent > naming. Full stop. It's a much stronger statement than you seem to think. > I expect this to be common enough to warrant nicer language constructs (like OCaml has). I expect people today to use positional arguments to get concise code, and I think python pushes people in this direction. This is a bad direction imo. > >> I disagree that f(*, page) is more readable than an explicit named > >> keyword argument f(page=page). > >> > > > > People prefer f(page) today. For some reason. That might refute your > > statement or not, depending on why they do it. > > Evidence? > Run my analysis tool. Check the numbers. It's certainly true at work, and it's true for Django for example. -------------- next part -------------- An HTML attachment was scrubbed... URL: From boxed at killingar.net Thu Sep 6 22:41:54 2018 From: boxed at killingar.net (=?UTF-8?Q?Anders_Hovm=C3=B6ller?=) Date: Thu, 6 Sep 2018 19:41:54 -0700 (PDT) Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <5B915AC3.8080003@stoneleaf.us> References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <5B915AC3.8080003@stoneleaf.us> Message-ID: <6511cc76-7ffd-4c92-9c6d-0509a6e93dd8@googlegroups.com> On Thursday, September 6, 2018 at 6:51:12 PM UTC+2, Ethan Furman wrote: > > On 09/06/2018 07:05 AM, Anders Hovm?ller wrote: > > On Thursday, September 6, 2018 at 3:11:46 PM UTC+2, Steven D'Aprano > wrote: > >> On Thu, Sep 06, 2018 at 12:15:46PM +0200, Anders Hovm?ller wrote: > > >>> Wouldn't it be awesome if [...] > >> > >> No. > > > > Heh. I did expect the first mail to be uncivil :P > > Direct disagreement is not uncivil, just direct. You asked a yes/no > question and got a yes/no answer. > It's a rhetorical question in a PR sense, not an actual yes/no question. > D'Aprano's > comments further down are also not uncivil, just explicative (not > expletive ;) ) of his position. 
> > As for your proposal, I agree with D'Aprano -- this is a lot machinery to > support a use-case that doesn't feel > compelling to me, and I do tend to name my variables the same when I can. > It's not a lot of machinery. It's super tiny. Look at my implementation. Generally these arguments against sound like the arguments against f-strings to me. I personally think f-strings are the one of the best things to happen to python in at least a decade, I don't know if people on this list agree? -------------- next part -------------- An HTML attachment was scrubbed... URL: From boxed at killingar.net Thu Sep 6 22:44:51 2018 From: boxed at killingar.net (=?UTF-8?Q?Anders_Hovm=C3=B6ller?=) Date: Thu, 6 Sep 2018 19:44:51 -0700 (PDT) Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> Message-ID: <6ea539ad-7474-4809-aec9-772abefa9b23@googlegroups.com> > For calling, we can use > https://docs.python.org/3/library/functions.html#locals > > >>> lcls = locals() > > >>> a = 'apple' > >>> b = 'banana' > >>> c = 'cherry' > > >>> dict((k, lcls[k]) for k in ('a', 'b', 'c')) > {'b': 'banana', 'c': 'cherry', 'a': 'apple'} > > So in his example > > foo(a=a, b=b, c=c, d=3, e=e) > > one could instead write > > foo(d=3, **helper(locals(), ('a', 'b', 'c', 'e'))) > > or perhaps better > > helper(locals(), 'a', 'b', 'c', 'e')(foo, d=3) > > where the helper() picks out items from the locals(). And in the > second form, does the right thing with them. > Sure. This was the argument against f-strings too. In any case I'm not trying to solve a problem of how to extract things from the local namespace anymore than "foo(a, b)" is. I'm trying to minimize the advantage positional arguments have over keyword arguments in brevity. If that makes sense? -------------- next part -------------- An HTML attachment was scrubbed... URL: From boxed at killingar.net Thu Sep 6 22:48:41 2018 From: boxed at killingar.net (=?UTF-8?Q?Anders_Hovm=C3=B6ller?=) Date: Thu, 6 Sep 2018 19:48:41 -0700 (PDT) Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <5B91AC8B.9030909@canterbury.ac.nz> References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <5B91AC8B.9030909@canterbury.ac.nz> Message-ID: <281760bf-2568-4030-87f0-0d8a2e9c2f12@googlegroups.com> > Maybe something like this would be better: > > f(=a, =b, =c) > Haha. Look at my PEP, it's under "rejected alternative syntax", because of the super angry replies I got on this very mailing list when I suggested this syntax a few years ago :P I think that syntax is pretty nice personally, but me and everyone at work I've discussed this with think that f(*, a, b, c) syntax is even nicer since it mirrors "def f(*, a, b, c)" so nicely. Most replies to my new syntax has been along the lines of "seems obvious" and "ooooh" :P -------------- next part -------------- An HTML attachment was scrubbed... URL: From boxed at killingar.net Thu Sep 6 22:57:19 2018 From: boxed at killingar.net (=?UTF-8?Q?Anders_Hovm=C3=B6ller?=) Date: Thu, 6 Sep 2018 19:57:19 -0700 (PDT) Subject: [Python-ideas] Positional-only parameters In-Reply-To: References: Message-ID: <30d64c29-0f06-4c96-afef-82c21f7ac6c2@googlegroups.com> I think it makes more sense to remove the concept of positional only parameters by slowly fixing the standard library. 
I've discussed the existence of positional only with a few people and their
response falls into some basic categories:

- disgust
- disbelief
- bargaining (it's not very common right?! in fact yes it is)

I don't think that's a good look for Python :P

From yselivanov.ml at gmail.com  Thu Sep  6 23:32:11 2018
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Thu, 6 Sep 2018 23:32:11 -0400
Subject: [Python-ideas] Positional-only parameters
In-Reply-To: <30d64c29-0f06-4c96-afef-82c21f7ac6c2@googlegroups.com>
References: <30d64c29-0f06-4c96-afef-82c21f7ac6c2@googlegroups.com>
Message-ID:

On Thu, Sep 6, 2018 at 10:57 PM Anders Hovmöller wrote:
[..]
> I don't think that's a good look for Python :P

Anders,

Discussing something privately with "a few people", posting snarky
conclusions, and giving baseless recommendations isn't how we strive to
make decisions in Python. Please refrain from posting in this manner to
python-ideas and python-dev, as emails written this way are simply
distracting and borderline disturbing.

Thanks,
Yury

From cs at cskk.id.au  Fri Sep  7 00:00:36 2018
From: cs at cskk.id.au (Cameron Simpson)
Date: Fri, 7 Sep 2018 14:00:36 +1000
Subject: [Python-ideas] Positional-only parameters
In-Reply-To:
References:
Message-ID: <20180907040036.GA95533@cskk.homeip.net>

On 01Mar2017 21:25, Serhiy Storchaka wrote:
>On 28.02.17 23:17, Victor Stinner wrote:
>>My question is: would it make sense to implement this feature in
>>Python directly? If yes, what should be the syntax? Use "/" marker?
>>Use the @positional() decorator?
>
>I'm strongly +1 for supporting positional-only parameters. The main
>benefit to me is that this allows to declare functions that takes
>arbitrary keyword arguments like Formatter.format() or
>MutableMapping.update(). Now we can't use even the "self" parameter
>and need to use a trick with parsing *args manually. This harms clearness and
>performance.

I was a mild +0.1 on this until I saw this argument; now I am +1 (unless
there's some horrible unforeseen performance penalty).

I've been writing quite a few functions lately where it is reasonable for a
caller to want to pass arbitrary keyword arguments, but where I also want
some additional parameters for control purposes. The most recent example
was database related: functions accepting arbitrary keyword arguments
indicating column values.

As a specific example, what I _want_ to write includes this method:

    def update(self, where, **column_values):

Now, because "where" happens to be an SQL keyword it is unlikely that there
will be a column of that name, _if_ the database is human designed by an
SQL person. I have other examples where picking a "safe" name is harder.

I can even describe scenarios where "where" is plausible: supposing the
database is generated from some input data, perhaps supplied by a CSV file
(worse, a CSV file that is an export of a human written spreadsheet with a
"Where" column header). That isn't really even made up: I've got functions
whose purpose is to import such spreadsheet exports, making namedtuple
subclasses automatically from the column headers.

In many of these situations I've had recently, positional-only arguments
would have been very helpful. I even had to bugfix a function recently
where a positional argument was being trounced by a keyword argument by a
caller.
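For illustration, the clash Cameron describes and the positional-only fix look like this (a sketch using the '/' marker this thread is about, available in Python 3.8+; the table and column names are invented):

    class Table:
        # 'self' and 'where' are positional-only, so **column_values is free
        # to hold a column of any name, even one literally called "where".
        def update(self, where, /, **column_values):
            print('UPDATE ... SET', column_values, 'WHERE', where)

    t = Table()
    t.update("id = 7", name="Cameron", city="Sydney")
    t.update("id = 7", where="somewhere")   # no clash with the parameter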
Cheers, Cameron Simpson From yselivanov.ml at gmail.com Fri Sep 7 01:00:06 2018 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Fri, 7 Sep 2018 01:00:06 -0400 Subject: [Python-ideas] Positional-only parameters In-Reply-To: <2A6ED8F3-D504-40C6-88DD-9E0361EC6881@killingar.net> References: <30d64c29-0f06-4c96-afef-82c21f7ac6c2@googlegroups.com> <2A6ED8F3-D504-40C6-88DD-9E0361EC6881@killingar.net> Message-ID: On Fri, Sep 7, 2018 at 12:31 AM Anders Hovm?ller wrote: > > Yury, > > I?m sorry if that came off badly, I was not attempting to be snarky. Text is hard and I know I?m not good in emails but rereading the text below I honestly can?t see why my honest attempt at describing my experience can be considered snarky. > > I haven?t sought out to discuss positional only parameters, this is something that has just come up in conversation from time to time over the last few years and this has been the response. > > If you would explain how you interpreted my mail in this way I would of course be thankful but I also don?t want to take more of your time. Sure. (If you choose to reply to this email please do that off-list.) IMHO your email lacks substance, uses rather strong words like "disgust" and "disbelief", and ends with "I don't think that's a good look for Python :P" phrase that doesn't help you to make any point. You re-surfaced a pretty old email thread where a number of core developers explained their position and listed quite a few arguments for having positional-only arguments. You, on the other hand, didn't add a lot to the discussion except your own opinion with no serious arguments to support it. Please don't feel discouraged from posting to python-ideas though, just try to keep a higher signal-to-noise ratio. ;) Yury From j.van.dorp at deonet.nl Fri Sep 7 02:59:44 2018 From: j.van.dorp at deonet.nl (Jacco van Dorp) Date: Fri, 7 Sep 2018 08:59:44 +0200 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <281760bf-2568-4030-87f0-0d8a2e9c2f12@googlegroups.com> References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <5B91AC8B.9030909@canterbury.ac.nz> <281760bf-2568-4030-87f0-0d8a2e9c2f12@googlegroups.com> Message-ID: Op vr 7 sep. 2018 om 04:49 schreef Anders Hovm?ller : > > Maybe something like this would be better: >> >> f(=a, =b, =c) >> > > Haha. Look at my PEP, it's under "rejected alternative syntax", because of > the super angry replies I got on this very mailing list when I suggested > this syntax a few years ago :P > > I think that syntax is pretty nice personally, but me and everyone at work > I've discussed this with think that f(*, a, b, c) syntax is even nicer > since it mirrors "def f(*, a, b, c)" so nicely. Most replies to my new > syntax has been along the lines of "seems obvious" and "ooooh" :P > I must say I like the idea of being able to write it the way you propose. Sometimes we make a function only to be called once at a specific location, more because of factoring out some functions for clarity. Been doing that myself lately for scripting, and I think it'd increase clarity. However, it's really alike to f(a, b, c), which does something totally different. It -might- become something of a newb trap, as myfunc(*, a, b, c) would be 100% equal to myfunc(*, c, a, b) but that's not true for the f(c, a, b) case. I dislike the f(=arg) syntax. 
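The trap Jacco describes can be spelled out like this (a small illustration, not from the original mail):

    def f(a, b, c):
        return (a - b) * c

    c, a, b = 1, 2, 3

    f(a, b, c)          # positional, order matters:       (2 - 3) * 1 == -1
    f(c, a, b)          # looks similar, silently wrong:    (1 - 2) * 3 == -3
    f(c=c, a=a, b=b)    # keywords, order irrelevant:       -1
    # Under the proposal, f(*, c, a, b) would mean f(c=c, a=a, b=b),
    # so it too would be order-insensitive.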
-------------- next part -------------- An HTML attachment was scrubbed... URL: From boxed at killingar.net Fri Sep 7 07:06:20 2018 From: boxed at killingar.net (=?UTF-8?Q?Anders_Hovm=C3=B6ller?=) Date: Fri, 7 Sep 2018 04:06:20 -0700 (PDT) Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <5B91AC8B.9030909@canterbury.ac.nz> <281760bf-2568-4030-87f0-0d8a2e9c2f12@googlegroups.com> Message-ID: > I must say I like the idea of being able to write it the way you propose. > Sometimes we make a function only to be called once at a specific location, > more because of factoring out some functions for clarity. Been doing that > myself lately for scripting, and I think it'd increase clarity. However, > it's really alike to f(a, b, c), which does something totally different. It > -might- become something of a newb trap, as myfunc(*, a, b, c) would be > 100% equal to myfunc(*, c, a, b) but that's not true for the f(c, a, b) > case. > I've seen beginners make the mistake of calling f(c, a, b) and being confused why it doesn't work the way they expected, so I think the newb trap might go in the other direction. If by "newb" one means "totally new to programming" then I think the keyword style is probably less confusing but if you come from a language with only positional arguments (admittedly most languages!) then the trap goes in the other direction. Of course, I don't have the resources or time to make a study about this to figure out which is which, but I agree it's an interesting question. -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertvandeneynde at hotmail.com Fri Sep 7 07:57:50 2018 From: robertvandeneynde at hotmail.com (Robert Vanden Eynde) Date: Fri, 7 Sep 2018 11:57:50 +0000 Subject: [Python-ideas] Python dialect that compiles into python Message-ID: Many features on this list propose different syntax to python, producing different python "dialects" that can statically be transformed to python : - a,b += f(x) ? _t = f(x); a += _t; b += _t; (augmented assignement unpacking) - a = 2x + 1 ? a = 2*x + 1 (juxtaposition is product) - f(*, x, y) ? f(x=x, y=y) (simplekwargs) - DSL specific language - all def become @partially def - etc... Using a modified version of ast, it is relatively easy to modifiy the syntax tree of a program to produce another program. So one could compile the "python dialect" into regular python. The last example with partially for example doesn't even need new syntax. Those solutions that are too specific would then be provided as a module on pip that has a common interface for "compiling" : $ cat test.dialect.py #! dialect: juxtaposition a = 2x + 1 $ python -m compile test.dialect.py $ cat test.py #! compiled with dialect juxtaposition a = 2x + 1 The generated file should also be read only if the filesystem provides the option. In the web world, it's very common to compile into html, css or js. One of the reason was that the web must be veeeery generic and will not try to fit everyone needs. - less compiles scss into css - coffeescript into js - source map provides a standard way to map each line of the new file into lines of the old files (useful for exceptions !) One useful feature of those compilers is the --watch Option that allows to avoid to launch the compilation manually. 
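To make the "$ python -m compile" workflow above concrete, a driver might look roughly like this (purely a sketch: the module layout, the DIALECTS registry and the regex-based 'simplekwargs' transform are all invented, and a real implementation would parse the source with ast or parso rather than use a regex):

    import re
    import sys
    from pathlib import Path

    def simplekwargs(source):
        """Toy transform: rewrite f(*, a, b, d=3) into f(a=a, b=b, d=3)."""
        def repl(match):
            args = [a.strip() for a in match.group(1).split(',')]
            return '(' + ', '.join(a if '=' in a else f'{a}={a}' for a in args) + ')'
        return re.sub(r'\(\s*\*\s*,([^)]*)\)', repl, source)

    DIALECTS = {'simplekwargs': simplekwargs}

    def compile_dialect(path):
        path = Path(path)
        header, _, body = path.read_text().partition('\n')
        match = re.match(r'#!\s*dialect:\s*(.+)', header)
        if not match:
            return
        # Apply each listed dialect transform in order.
        for name in (d.strip() for d in match.group(1).split(',')):
            body = DIALECTS[name](body)
        out = path.with_name(path.name.replace('.dialect.py', '.py'))
        out.write_text(f'#! compiled with dialect {match.group(1)}\n' + body)

    if __name__ == '__main__':
        compile_dialect(sys.argv[1])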
Of course, in the js world, the syntax was improved in the end after a long maturation of the compiling and not compiling libraries. In the java world, languages such as Scala compile into bytecode, that's another idea. If a standard module like "compile" is written, users can write their own module that will automatically be read by "compile" (for example, pip install compile_juxtaposition would allow the juxtaposition dialect). Compile doesn't even have to be on the standard python, it can be a lib. One could write a module using multiple dialect like dialect: juxtaposition, simplekwargs The order would be technically important but functionally non important. Actually, I might start to write this lib, that looks fun. -------------- next part -------------- An HTML attachment was scrubbed... URL: From boxed at killingar.net Fri Sep 7 08:38:48 2018 From: boxed at killingar.net (=?UTF-8?Q?Anders_Hovm=C3=B6ller?=) Date: Fri, 7 Sep 2018 05:38:48 -0700 (PDT) Subject: [Python-ideas] Python dialect that compiles into python In-Reply-To: References: Message-ID: <34c8b45c-8b15-4c9d-a37c-16288650ff01@googlegroups.com> Many features on this list propose different syntax to python, producing > different python "dialects" that can statically be transformed to python : > > - a,b += f(x) ? _t = f(x); a += _t; b += _t; (augmented assignement > unpacking) > - a = 2x + 1 ? a = 2*x + 1 (juxtaposition is product) > - f(*, x, y) ? f(x=x, y=y) (simplekwargs) > - DSL specific language > - all def become @partially def > - etc... > > Using a modified version of ast, it is relatively easy to modifiy the > syntax tree of a program to produce another program. So one could compile > the "python dialect" into regular python. The last example with partially > for example doesn't even need new syntax. > For my specific suggestion (simplekwargs), it's really not worth doing at all if it's not a part of standard Python in my opinion. The reason is tooling: you wouldn't get tools like Jedi and PyCharm to read these files and interpret them correctly. That being said, it could absolutely be a nice way to prototype things or for very edge cases. I'd recommend looking into parso (the AST lib that underlies Jedi). I used it for the analysis tool I wrote for simplekwargs and I've also used it to write my mutation tester mutmut. It has several advantages: - it's a roundtrip AST so you wouldn't lose any formatting or comments etc - it has an error recovery mode where parse errors are isolated in the AST as error nodes. You could probably use this to just enumerate error nodes and doing .get_code() on them and then have all your special parsing in there. - it is very liberal in what it accepts putting into an AST that it renders out to source again so it's quite easy to hack something together that works / Anders -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rhodri at kynesim.co.uk Fri Sep 7 09:13:17 2018 From: rhodri at kynesim.co.uk (Rhodri James) Date: Fri, 7 Sep 2018 14:13:17 +0100 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <1c494333-3d3d-4f21-81eb-a984ec295ac5@googlegroups.com> References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <1c494333-3d3d-4f21-81eb-a984ec295ac5@googlegroups.com> Message-ID: <1f47edf9-7e5e-7076-0a3b-d067747129af@kynesim.co.uk> On 07/09/18 03:38, Anders Hovm?ller wrote: >> For comparison, my reaction did indeed involve awe. It was full of it, >> in fact :-p Sorry, but that syntax looks at best highly misleading -- >> how many parameters are we passing? I don't like it at all. > > (nitpick: we're passing arguments, not parameters) potayto, potahto > I don't see how this could be confusing. Do you think it's confusing how > many parameters a function has in python now because of the keyword only > marker? This suggestion follows the same rules you should already be > familiar with when counting parameters, why would you now have trouble > counting when the line doesn't begin with "def " and end with ":"? I counted commas. I came up with the wrong number. Simple. For what it's worth, I don't like the keyword-only marker or the proposed positional-only marker for exactly the same reason. >>>> I'm in favour of consistent naming when it helps the code, when the >>>> names are clear and relevant. >>> >>> >>> Which is what I'm saying. >> >> Actually you are not. Adding specific syntax support is a strong signal >> that you expect people to use it and (in this case) use consistent >> naming. Full stop. It's a much stronger statement than you seem to think. >> > > I expect this to be common enough to warrant nicer language constructs > (like OCaml has). I expect people today to use positional arguments to get > concise code, and I think python pushes people in this direction. This is a > bad direction imo. I disagree. Keyword arguments are a fine and good thing, but they are best used for optional arguments IMHO. Verbosity for the sake of verbosity is not a good thing. > > >>>> I disagree that f(*, page) is more readable than an explicit named >>>> keyword argument f(page=page). >>>> >>> >>> People prefer f(page) today. For some reason. That might refute your >>> statement or not, depending on why they do it. >> >> Evidence? >> > > Run my analysis tool. Check the numbers. It's certainly true at work, and > it's true for Django for example. OK, then your assertion didn't mean what I thought it means, and I'm very confused about what it does mean. Could you try that again? -- Rhodri James *-* Kynesim Ltd From boxed at killingar.net Fri Sep 7 09:59:45 2018 From: boxed at killingar.net (=?UTF-8?Q?Anders_Hovm=C3=B6ller?=) Date: Fri, 7 Sep 2018 06:59:45 -0700 (PDT) Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <1f47edf9-7e5e-7076-0a3b-d067747129af@kynesim.co.uk> References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <1c494333-3d3d-4f21-81eb-a984ec295ac5@googlegroups.com> <1f47edf9-7e5e-7076-0a3b-d067747129af@kynesim.co.uk> Message-ID: > > I counted commas. I came up with the wrong number. Simple. 
> > For what it's worth, I don't like the keyword-only marker or the > proposed positional-only marker for exactly the same reason. > There's also potentially trailing commas to confuse you further :P I'm not a big fan of the keyword argument only syntax either, but that ship has sailed long ago, so now I think we should consider it Pythonic and judge future suggestions accordingly. I do like the feature of keyword only and understand the tradeoffs made to make the syntax work, so I'm quite happy overall. > I disagree. Keyword arguments are a fine and good thing, but they are > best used for optional arguments IMHO. Verbosity for the sake of > verbosity is not a good thing. > > Hmm.. it seems to me like there are some other caveats to your position here. Like "no functions with more than two arguments!" or similar? Personally I think readability suffers greatly already at two arguments if none of the parameters are named. Sometimes you can sort of fix the readability with function names like do_something_with_a_foo_and_bar(foo, bar), but that is usually more ugly than just using keyword arguments. > >>>> I disagree that f(*, page) is more readable than an explicit named > >>>> keyword argument f(page=page). > >>>> > >>> > >>> People prefer f(page) today. For some reason. That might refute your > >>> statement or not, depending on why they do it. > >> > >> Evidence? > >> > > > > Run my analysis tool. Check the numbers. It's certainly true at work, and > > it's true for Django for example. > > OK, then your assertion didn't mean what I thought it means, and I'm > very confused about what it does mean. Could you try that again? > Functions in real code have > 2 arguments. Often when reading the code the only way to know what those arguments are is by reading the names of the parameters on the way in, because it's positional arguments. But those aren't checked. To me it's similar to bracing for indent: you're telling the human one thing and the machine something else and no one is checking that those two are in sync. I have seen beginners try: def foo(b, a): pass a = 1 b = 2 foo(a, b) and then be confused because a and b are flipped. I have no idea if any of that made more sense :P Email is hard. / Anders -------------- next part -------------- An HTML attachment was scrubbed... URL: From rhodri at kynesim.co.uk Fri Sep 7 10:43:41 2018 From: rhodri at kynesim.co.uk (Rhodri James) Date: Fri, 7 Sep 2018 15:43:41 +0100 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <1c494333-3d3d-4f21-81eb-a984ec295ac5@googlegroups.com> <1f47edf9-7e5e-7076-0a3b-d067747129af@kynesim.co.uk> Message-ID: <6b7c671f-b1bc-1712-ed0b-c93502ffc82a@kynesim.co.uk> On 07/09/18 14:59, Anders Hovm?ller wrote: >> I disagree. Keyword arguments are a fine and good thing, but they are >> best used for optional arguments IMHO. Verbosity for the sake of >> verbosity is not a good thing. >> >> > Hmm.. it seems to me like there are some other caveats to your position > here. Like "no functions with more than two arguments!" or similar? No. > Personally I think readability suffers greatly already at two arguments if > none of the parameters are named. 
Sometimes you can sort of fix the > readability with function names like do_something_with_a_foo_and_bar(foo, > bar), but that is usually more ugly than just using keyword arguments. I'd have said three arguments in the general case, more if you've chosen your function name to make it obvious (*not* by that nasty foo_and_bar method!), though that's pretty rare. That said, I don't often find I need more than a few mandatory arguments. > Functions in real code have > 2 arguments. Often when reading the code the > only way to know what those arguments are is by reading the names of the > parameters on the way in, because it's positional arguments. But those > aren't checked. To me it's similar to bracing for indent: you're telling > the human one thing and the machine something else and no one is checking > that those two are in sync. I'll repeat; surprisingly few of my function have more than three mandatory (positional) arguments. Expecting to understand functions by just reading the function call and not the accompanying documentation (or code) is IMHO hopelessly optimistic, and just having keyword parameters will not save you from making mistaken assumptions. > I have seen beginners try: > > def foo(b, a): > pass > > a = 1 > b = 2 > foo(a, b) > > and then be confused because a and b are flipped. I have seen teachers get their students to do that deliberately, to give them practical experience that the variable names they use in function calls are not in any way related to the names used in the function definition. I've not seen those students make the same mistake twice :-) I wonder if part of my dislike of your proposal is that you are deliberately blurring that disconnect? -- Rhodri James *-* Kynesim Ltd From mike at selik.org Fri Sep 7 11:06:49 2018 From: mike at selik.org (Michael Selik) Date: Fri, 7 Sep 2018 08:06:49 -0700 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <5B91AC8B.9030909@canterbury.ac.nz> <281760bf-2568-4030-87f0-0d8a2e9c2f12@googlegroups.com> Message-ID: On Fri, Sep 7, 2018, 12:00 AM Jacco van Dorp wrote: > Sometimes we make a function only to be called once at a specific > location, more because of factoring out some functions for clarity. > I've found myself making the opposite refactoring recently, improving clarity by eliminating unnecessary extra functions, where the local scope is passed to the helper function. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertve92 at gmail.com Fri Sep 7 12:17:54 2018 From: robertve92 at gmail.com (Robert Vanden Eynde) Date: Fri, 7 Sep 2018 18:17:54 +0200 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <1f47edf9-7e5e-7076-0a3b-d067747129af@kynesim.co.uk> References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <1c494333-3d3d-4f21-81eb-a984ec295ac5@googlegroups.com> <1f47edf9-7e5e-7076-0a3b-d067747129af@kynesim.co.uk> Message-ID: > > > I disagree. Keyword arguments are a fine and good thing, but they are > best used for optional arguments IMHO. Verbosity for the sake of > verbosity is not a good thing. 
I disagree, when you have more than one parameter it's sometimes complicated to remember the order. Therefore, when you name your args, you have way less probability of passing the wrong variable, even with only one arg. Verbosity adds redundancy, so that both caller and callee are sure they mean the same thing. That's why Java has types everywhere, such that the "declaration part" and the "use" part agree on the same idea (same type). -------------- next part -------------- An HTML attachment was scrubbed... URL: From mertz at gnosis.cx Fri Sep 7 12:54:04 2018 From: mertz at gnosis.cx (David Mertz) Date: Fri, 7 Sep 2018 12:54:04 -0400 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <1c494333-3d3d-4f21-81eb-a984ec295ac5@googlegroups.com> <1f47edf9-7e5e-7076-0a3b-d067747129af@kynesim.co.uk> Message-ID: Here's a function found online (I'm too lazy to write my own, but it would be mostly the same). Tell me how keyword arguments could help this... Or WHAT names you'd give. 1. def quad(a,b,c): 2. """solves quadratic equations of the form 3. aX^2+bX+c, inputs a,b,c, 4. works for all roots(real or complex)""" 5. root=b**2-4*a*c 6. if root <0: 7. root=abs(complex(root)) 8. j=complex(0,1) 9. x1=(-b+j+sqrt(root))/2*a 10. x2=(-b-j+sqrt(root))/2*a 11. return x1,x2 12. else: 13. x1=(-b+sqrt(root))/2*a 14. x2=(-b-sqrt(root))/2*a 15. return x1,x2 After that, explain why forcing all callers to name their local variables a, b, c would be a good thing. On Fri, Sep 7, 2018, 12:18 PM Robert Vanden Eynde wrote: > >> I disagree. Keyword arguments are a fine and good thing, but they are >> best used for optional arguments IMHO. Verbosity for the sake of >> verbosity is not a good thing. > > > I disagree, when you have more than one parameter it's sometimes > complicated to remember the order. Therefore, when you name your args, you > have way less probability of passing the wrong variable, even with only one > arg. > > Verbosity adds redundancy, so that both caller and callee are sure they > mean the same thing. > > That's why Java has types everywhere, such that the "declaration part" and > the "use" part agree on the same idea (same type). > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertve92 at gmail.com Fri Sep 7 13:17:30 2018 From: robertve92 at gmail.com (Robert Vanden Eynde) Date: Fri, 7 Sep 2018 19:17:30 +0200 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <1c494333-3d3d-4f21-81eb-a984ec295ac5@googlegroups.com> <1f47edf9-7e5e-7076-0a3b-d067747129af@kynesim.co.uk> Message-ID: If you want to force using pos args, go ahead and use Python docstring notation we'd write def quad(a,b,c, /) The names should not be renamed because they already have a normal ordering x ** n. This notation is standard, so it would be a shame to use something people don't use. 
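A minimal sketch of what that "/" marker buys, assuming Python 3.8+ where PEP 570 turned it into real syntax (at the time of this thread it was only the documentation convention borrowed from Argument Clinic), with the formula written out for completeness:

    from math import sqrt

    def quad(a, b, c, /):
        # a, b and c are positional-only: callers cannot spell them as keywords,
        # so the parameter names stay an implementation detail of quad().
        d = b**2 - 4*a*c
        r = sqrt(d) if d >= 0 else sqrt(-d) * 1j
        return (-b + r) / (2*a), (-b - r) / (2*a)

    quad(1, -3, 2)           # fine: returns (2.0, 1.0)
    # quad(a=1, b=-3, c=2)   # TypeError: a, b and c are positional-only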
However, I recently used a quad function in one of my uni course where the different factors are computed with a long expression, so keyword arguments, so I'd call: Vout = quad( a=... Some long expression spanning a lot of lines ..., b=... Same thing ..., c=... Same thing...) Without the a= reminder, one could count the indentation. And if you'd think it's a good idea to refactor it like that ... a = ... Some long expression spanning a lot of lines ... b = ... Same thing ... c = ... Same thing... Vout = quad(a,b,c) Then you're in the case of quad(*, a, b, c) (even if here, one would never def quad(c,b,a)). Wheter or not this refactor is more clear is a matter of "do you like functional programming". However, kwargs arz more useful in context where some parameters are optional or less frequentely used. But it makes sense (see Pep about mandatory kwargs). Kwargs is a wonderful invention in Python (or, lisp). Le ven. 7 sept. 2018 ? 18:54, David Mertz a ?crit : > Here's a function found online (I'm too lazy to write my own, but it would > be mostly the same). Tell me how keyword arguments could help this... Or > WHAT names you'd give. > > > 1. def quad(a,b,c): > 2. """solves quadratic equations of the form > 3. aX^2+bX+c, inputs a,b,c, > 4. works for all roots(real or complex)""" > 5. root=b**2-4*a*c > 6. if root <0: > 7. root=abs(complex(root)) > 8. j=complex(0,1) > 9. x1=(-b+j+sqrt(root))/2*a > 10. x2=(-b-j+sqrt(root))/2*a > 11. return x1,x2 > 12. else: > 13. x1=(-b+sqrt(root))/2*a > 14. x2=(-b-sqrt(root))/2*a > 15. return x1,x2 > > > After that, explain why forcing all callers to name their local variables > a, b, c would be a good thing. > > On Fri, Sep 7, 2018, 12:18 PM Robert Vanden Eynde > wrote: > >> >>> I disagree. Keyword arguments are a fine and good thing, but they are >>> best used for optional arguments IMHO. Verbosity for the sake of >>> verbosity is not a good thing. >> >> >> I disagree, when you have more than one parameter it's sometimes >> complicated to remember the order. Therefore, when you name your args, you >> have way less probability of passing the wrong variable, even with only one >> arg. >> >> Verbosity adds redundancy, so that both caller and callee are sure they >> mean the same thing. >> >> That's why Java has types everywhere, such that the "declaration part" >> and the "use" part agree on the same idea (same type). >> _______________________________________________ >> Python-ideas mailing list >> Python-ideas at python.org >> https://mail.python.org/mailman/listinfo/python-ideas >> Code of Conduct: http://python.org/psf/codeofconduct/ >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From boxed at killingar.net Fri Sep 7 14:14:34 2018 From: boxed at killingar.net (=?UTF-8?Q?Anders_Hovm=C3=B6ller?=) Date: Fri, 7 Sep 2018 11:14:34 -0700 (PDT) Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <1c494333-3d3d-4f21-81eb-a984ec295ac5@googlegroups.com> <1f47edf9-7e5e-7076-0a3b-d067747129af@kynesim.co.uk> Message-ID: <1c2701e2-cca6-4375-bb77-149e8c818abc@googlegroups.com> Do you want to change my PEP suggestion to be about forcing stuff? Because otherwise I don?t see why you keep being that up. 
We?ve explained to you two times (three counting the original mail) that no one is saying anything about forcing anything. From rhodri at kynesim.co.uk Fri Sep 7 14:21:59 2018 From: rhodri at kynesim.co.uk (Rhodri James) Date: Fri, 7 Sep 2018 19:21:59 +0100 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <1c494333-3d3d-4f21-81eb-a984ec295ac5@googlegroups.com> <1f47edf9-7e5e-7076-0a3b-d067747129af@kynesim.co.uk> Message-ID: Top posting for once, since no one is quoting well in this thread: Does this in any way answer David's question? I'm serious; you've spent a lot of words that, as best I can tell, say exactly nothing about how keyword arguments would help that quadratic function. If I'm missing something, please tell me. On 07/09/18 18:17, Robert Vanden Eynde wrote: > If you want to force using pos args, go ahead and use Python docstring > notation we'd write def quad(a,b,c, /) > > The names should not be renamed because they already have a normal ordering > x ** n. > > This notation is standard, so it would be a shame to use something people > don't use. > > However, I recently used a quad function in one of my uni course where the > different factors are computed with a long expression, so keyword > arguments, so I'd call: > > Vout = quad( > a=... Some long expression > spanning a lot of lines ..., > b=... Same thing ..., > c=... Same thing...) > > Without the a= reminder, one could count the indentation. > > And if you'd think it's a good idea to refactor it like that ... > > a = ... Some long expression > spanning a lot of lines ... > b = ... Same thing ... > c = ... Same thing... > > Vout = quad(a,b,c) > > Then you're in the case of quad(*, a, b, c) (even if here, one would never > def quad(c,b,a)). > > Wheter or not this refactor is more clear is a matter of "do you like > functional programming". > > However, kwargs arz more useful in context where some parameters are > optional or less frequentely used. But it makes sense (see Pep about > mandatory kwargs). > > Kwargs is a wonderful invention in Python (or, lisp). > > Le ven. 7 sept. 2018 ? 18:54, David Mertz a ?crit : > >> Here's a function found online (I'm too lazy to write my own, but it would >> be mostly the same). Tell me how keyword arguments could help this... Or >> WHAT names you'd give. >> >> >> 1. def quad(a,b,c): >> 2. """solves quadratic equations of the form >> 3. aX^2+bX+c, inputs a,b,c, >> 4. works for all roots(real or complex)""" >> 5. root=b**2-4*a*c >> 6. if root <0: >> 7. root=abs(complex(root)) >> 8. j=complex(0,1) >> 9. x1=(-b+j+sqrt(root))/2*a >> 10. x2=(-b-j+sqrt(root))/2*a >> 11. return x1,x2 >> 12. else: >> 13. x1=(-b+sqrt(root))/2*a >> 14. x2=(-b-sqrt(root))/2*a >> 15. return x1,x2 >> >> >> After that, explain why forcing all callers to name their local variables >> a, b, c would be a good thing. >> >> On Fri, Sep 7, 2018, 12:18 PM Robert Vanden Eynde >> wrote: >> >>> >>>> I disagree. Keyword arguments are a fine and good thing, but they are >>>> best used for optional arguments IMHO. Verbosity for the sake of >>>> verbosity is not a good thing. >>> >>> >>> I disagree, when you have more than one parameter it's sometimes >>> complicated to remember the order. 
Therefore, when you name your args, you >>> have way less probability of passing the wrong variable, even with only one >>> arg. >>> >>> Verbosity adds redundancy, so that both caller and callee are sure they >>> mean the same thing. >>> >>> That's why Java has types everywhere, such that the "declaration part" >>> and the "use" part agree on the same idea (same type). >>> _______________________________________________ >>> Python-ideas mailing list >>> Python-ideas at python.org >>> https://mail.python.org/mailman/listinfo/python-ideas >>> Code of Conduct: http://python.org/psf/codeofconduct/ >>> >> > -- Rhodri James *-* Kynesim Ltd From jamtlu at gmail.com Fri Sep 7 15:09:01 2018 From: jamtlu at gmail.com (James Lu) Date: Fri, 7 Sep 2018 15:09:01 -0400 Subject: [Python-ideas] Python-ideas Digest, Vol 142, Issue 22 In-Reply-To: References: Message-ID: <08DDEC5B-C1E0-4C8E-9045-31E50570F050@gmail.com> What if * and ** forwarded all unnamed arguments to a function? Example: import traceback def print_http_response(request, color=True): ... def print_invalid_api_response(error, *, show_traceback=False, **): print_http_response(*, **) if show_traceback: traceback.print_last() else: print(error) This would essentially allow * and ** to be used to call a function without having to give a name: *args or **kwargs. However in this scenario, the client function is more likely to be ?inheriting from? the behavior of the inner function, in a way where all or most of the arguments of the inner function are valid on the client function. Example: requests.get creates a Request object and immediately sends the response while blocking for it. > On Sep 6, 2018, at 3:27 PM, python-ideas-request at python.org wrote: > > Send Python-ideas mailing list submissions to > python-ideas at python.org > > To subscribe or unsubscribe via the World Wide Web, visit > https://mail.python.org/mailman/listinfo/python-ideas > or, via email, send a message with subject or body 'help' to > python-ideas-request at python.org > > You can reach the person managing the list at > python-ideas-owner at python.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of Python-ideas digest..." > > > Today's Topics: > > 1. Re: On evaluating features [was: Unpacking iterables for > augmented assignment] (Franklin? Lee) > 2. Re: On evaluating features [was: Unpacking iterables for > augmented assignment] (Chris Angelico) > 3. Re: Keyword only argument on function call (Jonathan Fine) > 4. Re: On evaluating features [was: Unpacking iterables for > augmented assignment] (Franklin? Lee) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Thu, 6 Sep 2018 14:38:26 -0400 > From: "Franklin? Lee" > To: Chris Angelico > Cc: Python-Ideas > Subject: Re: [Python-ideas] On evaluating features [was: Unpacking > iterables for augmented assignment] > Message-ID: > > Content-Type: text/plain; charset="UTF-8" > >> On Thu, Sep 6, 2018 at 2:23 PM Chris Angelico wrote: >> >> On Fri, Sep 7, 2018 at 4:11 AM, Franklin? 
Lee >> wrote: >>>> On Tue, Aug 28, 2018 at 6:37 PM Greg Ewing wrote: >>>> >>>> Guido van Rossum wrote: >>>>> we might propose (as the OP did) that this: >>>>> >>>>> a, b, c += x, y, z >>>>> >>>>> could be made equivalent to this: >>>>> >>>>> a += x >>>>> b += y >>>>> c += z >>>> >>>> But not without violating the principle that >>>> >>>> lhs += rhs >>>> >>>> is equivalent to >>>> >>>> lhs = lhs.__iadd__(lhs) >>> >>> (Corrected: lhs = lhs.__iadd__(rhs)) >>> >>> Since lhs here is neither a list nor a tuple, how is it violated? Or >>> rather, how is it any more of a special case than in this syntax: >>> >>> # Neither name-binding or setitem/setattr. >>> [a,b,c] = items >>> >>> If lhs is a Numpy array, then: >>> a_b_c += x, y, z >>> is equivalent to: >>> a_b_c = a_b_c.__iadd__((x,y,z)) >>> >>> We can translate the original example: >>> a, b, c += x, y, z >>> to: >>> a, b, c = target_list(a,b,c).__iadd__((x,y,z)) >>> where `target_list` is a virtual (not as in "virtual function") type >>> for target list constructs. >> >> What is the virtual type here, and what does its __iadd__ method do? I >> don't understand you here. Can you go into detail? Suppose I'm the >> author of the class that all six of these objects are instances of; >> can I customize the effect of __iadd__ here in some way, and if so, >> how? > > I shouldn't have used jargon I had to look up myself. > > The following are equivalent and compile down to the same code: > a, b, c = lst > [a, b, c] = lst > > The left hand side is not an actual list (even though it looks like > one). The brackets are optional. The docs call the left hand side a > target list: https://docs.python.org/3/reference/simple_stmts.html#assignment-statements > > "Target list" is not a real type. You can't construct such an object, > or hold one in memory. You can't make a class that emulates it > (without interpreter-specific hacks), because it is a collection of > its names, not a collection of values. > > target_list.__iadd__ also does not exist, because target_list does not > exist. However, target_list can be thought of as a virtual type, a > type that the compiler compiles away. We can then consider > target_list.__iadd__ as a virtual operator, which the compiler will > understand but hide from the runtime. > > I was making the point that, because the __iadd__ in the example does > not refer to list.__iadd__, but rather a virtual target_list.__iadd__, > there is not yet a violation of the rule. > > > ------------------------------ > > Message: 2 > Date: Fri, 7 Sep 2018 04:46:36 +1000 > From: Chris Angelico > To: Python-Ideas > Subject: Re: [Python-ideas] On evaluating features [was: Unpacking > iterables for augmented assignment] > Message-ID: > > Content-Type: text/plain; charset="UTF-8" > > On Fri, Sep 7, 2018 at 4:38 AM, Franklin? Lee > wrote: >> The following are equivalent and compile down to the same code: >> a, b, c = lst >> [a, b, c] = lst >> >> The left hand side is not an actual list (even though it looks like >> one). The brackets are optional. The docs call the left hand side a >> target list: https://docs.python.org/3/reference/simple_stmts.html#assignment-statements >> >> "Target list" is not a real type. You can't construct such an object, >> or hold one in memory. You can't make a class that emulates it >> (without interpreter-specific hacks), because it is a collection of >> its names, not a collection of values. > > A target list is a syntactic element, like a name, or an operator, or > a "yield" statement. 
You can't construct one, because it isn't an > object type. It's not a "virtual type". It's a completely different > sort of thing. > >> target_list.__iadd__ also does not exist, because target_list does not >> exist. However, target_list can be thought of as a virtual type, a >> type that the compiler compiles away. We can then consider >> target_list.__iadd__ as a virtual operator, which the compiler will >> understand but hide from the runtime. >> >> I was making the point that, because the __iadd__ in the example does >> not refer to list.__iadd__, but rather a virtual target_list.__iadd__, >> there is not yet a violation of the rule. > > What you're suggesting is on par with trying to say that: > > for += 5 > > should be implemented as: > > current_loop.__iadd__(5) > > where "current_loop" doesn't really exist, but it's a virtual type > that represents a 'for' loop. That doesn't make sense, because there > is no object in Python to represent the loop. There is no class/type > that represents all loops, on which a method like this could be added. > The word 'for' is part of the grammar, not the object model. And > "target list" is the same. There's no way to attach an __iadd__ method > to something that doesn't exist. > > So for your proposal to work, you would need to break that rule, and > give a *different* meaning to this. > > ChrisA > > > ------------------------------ > > Message: 3 > Date: Thu, 6 Sep 2018 20:10:49 +0100 > From: Jonathan Fine > To: Anders Hovm?ller > Cc: python-ideas > Subject: Re: [Python-ideas] Keyword only argument on function call > Message-ID: > > Content-Type: text/plain; charset="UTF-8" > > Summary: I addressed the DEFINING problem. My mistake. Some rough > ideas for the CALLING problem. > > Anders has kindly pointed out to me, off-list, that I solved the wrong > problem. His problem is CALLING the function fn, not DEFINING fn. > Thank you very much for this, Anders. > > For calling, we can use https://docs.python.org/3/library/functions.html#locals > >>>> lcls = locals() > >>>> a = 'apple' >>>> b = 'banana' >>>> c = 'cherry' > >>>> dict((k, lcls[k]) for k in ('a', 'b', 'c')) > {'b': 'banana', 'c': 'cherry', 'a': 'apple'} > > So in his example > > foo(a=a, b=b, c=c, d=3, e=e) > > one could instead write > > foo(d=3, **helper(locals(), ('a', 'b', 'c', 'e'))) > > or perhaps better > > helper(locals(), 'a', 'b', 'c', 'e')(foo, d=3) > > where the helper() picks out items from the locals(). And in the > second form, does the right thing with them. > > Finally, one might be able to use > >>>> def fn(*, a, b, c, d, e): f, g, h = 3, 4, 5 >>>> fn.__code__.co_kwonlyargcount > 5 >>>> fn.__code__.co_varnames > ('a', 'b', 'c', 'd', 'e', 'f', 'g', 'h') >>>> fn.__code__.co_argcount > 0 > > to identify the names of all keyword arguments of the function foo(), > and they provide the values in locals() as the defaults. Of course, > this is somewhat magical, and requires strict conformance to > conventions. So might not be a good idea. > > The syntax could then be > > localmagic(foo, locals())(d=3) > > which, for magicians, might be easier. But rightly in my opinion, > Python is reluctant to use magic. > > On the other hand, for a strictly controlled Domain Specific Language, > it might, just might, be useful. And this list is for "speculative > language ideas" (see > https://mail.python.org/mailman/listinfo/python-ideas). > > -- > Jonathan > > > ------------------------------ > > Message: 4 > Date: Thu, 6 Sep 2018 15:26:47 -0400 > From: "Franklin? 
Lee" > To: Chris Angelico > Cc: Python-Ideas > Subject: Re: [Python-ideas] On evaluating features [was: Unpacking > iterables for augmented assignment] > Message-ID: > > Content-Type: text/plain; charset="UTF-8" > > n Thu, Sep 6, 2018 at 2:47 PM Chris Angelico wrote: >> >> On Fri, Sep 7, 2018 at 4:38 AM, Franklin? Lee >> wrote: >>> The following are equivalent and compile down to the same code: >>> a, b, c = lst >>> [a, b, c] = lst >>> >>> The left hand side is not an actual list (even though it looks like >>> one). The brackets are optional. The docs call the left hand side a >>> target list: https://docs.python.org/3/reference/simple_stmts.html#assignment-statements >>> >>> "Target list" is not a real type. You can't construct such an object, >>> or hold one in memory. You can't make a class that emulates it >>> (without interpreter-specific hacks), because it is a collection of >>> its names, not a collection of values. >> >> A target list is a syntactic element, like a name, or an operator, or >> a "yield" statement. You can't construct one, because it isn't an >> object type. It's not a "virtual type". It's a completely different >> sort of thing. > > I didn't think I gave the impression that I was complaining about not > being able to construct it. I gave an explanation for how it isn't a > real type, because you asked how you could modify the behavior, and > because I wanted to give an explanation for more than just you. > > There are constructs that correspond to types (such as slices and > functions). There are those that don't. We call `3:2` (in the right > context) a slice, even though it's technically a construct which is > compiled down to a `slice` object. I see no problem there. > > I called it a "virtual type" and explained why I called it that. You > reject the use of that term, but you don't even acknowledge that I > gave reasons for it. > >>> target_list.__iadd__ also does not exist, because target_list does not >>> exist. However, target_list can be thought of as a virtual type, a >>> type that the compiler compiles away. We can then consider >>> target_list.__iadd__ as a virtual operator, which the compiler will >>> understand but hide from the runtime. >>> >>> I was making the point that, because the __iadd__ in the example does >>> not refer to list.__iadd__, but rather a virtual target_list.__iadd__, >>> there is not yet a violation of the rule. >> >> What you're suggesting is on par with trying to say that: >> >> for += 5 >> >> should be implemented as: >> >> current_loop.__iadd__(5) >> >> where "current_loop" doesn't really exist, but it's a virtual type >> that represents a 'for' loop. > > I explained how target_list could be thought of as a special imaginary > type which only exists in the compiler's "mind", and then extended > that to an imaginary method on that type. Of course your example shows > absurdity: you didn't try to say how a for-loop is like an object in > the first place. > >> That doesn't make sense, because there >> is no object in Python to represent the loop. There is no class/type >> that represents all loops, on which a method like this could be added. >> The word 'for' is part of the grammar, not the object model. And >> "target list" is the same. There's no way to attach an __iadd__ method >> to something that doesn't exist. > > But I'm not using the word `for`. I am using constructs like `[a,b,c]` > (where it is not a list). At least use `(for x in y: z) += 5` as your > example. 
You're effectively accusing me of trying to make `[` (a > single token, not a full construct) an object. > > Your argument here is that there is no Python object to represent a > loop, but that really means there's no _runtime_ object to represent a > loop. I already said that target lists don't exist in memory (i.e. > runtime). > > "Target list" does exist, just not as a runtime type. It exists as an > abstraction not available to the runtime, and we can extend that > abstraction in ways not available to the runtime. That means that you > can't attach it during the runtime. It does not mean you can't reason > with it during compile-time. > >> So for your proposal to work, you would need to break that rule, and >> give a *different* meaning to this. > > It is not my proposal. I was questioning how there was a rule > violation about x+=y translating to `x = x.__iadd__(y)`. You're > talking about a different, made-up rule about how syntactical > constructs can't correspond to compile-time imaginary objects or > runtime objects. But there are syntactical constructs that DO > correspond to runtime types (slice, list, class), there are those > which don't but can (let's not get into that), there are those which > can stay compile-time (f-strings, target lists), and there are those > which probably can't be thought of as types at all (import). > > > ------------------------------ > > Subject: Digest Footer > > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > > > ------------------------------ > > End of Python-ideas Digest, Vol 142, Issue 22 > ********************************************* From ethan at stoneleaf.us Fri Sep 7 15:21:23 2018 From: ethan at stoneleaf.us (Ethan Furman) Date: Fri, 07 Sep 2018 12:21:23 -0700 Subject: [Python-ideas] Python-ideas Digest, Vol 142, Issue 22 In-Reply-To: <08DDEC5B-C1E0-4C8E-9045-31E50570F050@gmail.com> References: <08DDEC5B-C1E0-4C8E-9045-31E50570F050@gmail.com> Message-ID: <5B92CFB3.60408@stoneleaf.us> On 09/07/2018 12:09 PM, James Lu wrote: [stuff] James, the digest you replied to had four different topics, and I have no idea how many individual messages. You didn't change the subject line, and you didn't trim the text you were not replying to. Which thread/message were you replying to? -- ~Ethan~ From leewangzhong+python at gmail.com Fri Sep 7 18:54:36 2018 From: leewangzhong+python at gmail.com (Franklin?
Lee) Date: Fri, 7 Sep 2018 18:54:36 -0400 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <1c494333-3d3d-4f21-81eb-a984ec295ac5@googlegroups.com> <1f47edf9-7e5e-7076-0a3b-d067747129af@kynesim.co.uk> Message-ID: On Fri, Sep 7, 2018 at 2:22 PM Rhodri James wrote: > > Top posting for once, since no one is quoting well in this thread: > > Does this in any way answer David's question? I'm serious; you've spent > a lot of words that, as best I can tell, say exactly nothing about how > keyword arguments would help that quadratic function. If I'm missing > something, please tell me. I read Robert's response as saying, 1. The quadratic formula and its parameter list are well-known enough that you shouldn't use different names or orders. 2. Even still, there are cases where the argument expressions are long enough that you might want to bind them to local variable names. However, I don't think David's example/question is fair in the first place. Robert said that passing as keywords can be useful in cases where the order is hard to remember, and David responded with an example where the argument order is standardized (so you wouldn't forget order), then talked about "forcing" callers to use certain variable names (which I don't think is warranted). On Fri, Sep 7, 2018 at 2:22 PM Rhodri James wrote: > > Top posting for once, since no one is quoting well in this thread: > > Does this in any way answer David's question? I'm serious; you've spent > a lot of words that, as best I can tell, say exactly nothing about how > keyword arguments would help that quadratic function. If I'm missing > something, please tell me. > > On 07/09/18 18:17, Robert Vanden Eynde wrote: > > If you want to force using pos args, go ahead and use Python docstring > > notation we'd write def quad(a,b,c, /) > > > > The names should not be renamed because they already have a normal ordering > > x ** n. > > > > This notation is standard, so it would be a shame to use something people > > don't use. > > > > However, I recently used a quad function in one of my uni course where the > > different factors are computed with a long expression, so keyword > > arguments, so I'd call: > > > > Vout = quad( > > a=... Some long expression > > spanning a lot of lines ..., > > b=... Same thing ..., > > c=... Same thing...) > > > > Without the a= reminder, one could count the indentation. > > > > And if you'd think it's a good idea to refactor it like that ... > > > > a = ... Some long expression > > spanning a lot of lines ... > > b = ... Same thing ... > > c = ... Same thing... > > > > Vout = quad(a,b,c) > > > > Then you're in the case of quad(*, a, b, c) (even if here, one would never > > def quad(c,b,a)). > > > > Wheter or not this refactor is more clear is a matter of "do you like > > functional programming". > > > > However, kwargs arz more useful in context where some parameters are > > optional or less frequentely used. But it makes sense (see Pep about > > mandatory kwargs). > > > > Kwargs is a wonderful invention in Python (or, lisp). > > > > Le ven. 7 sept. 2018 ? 18:54, David Mertz a ?crit : > > > >> Here's a function found online (I'm too lazy to write my own, but it would > >> be mostly the same). Tell me how keyword arguments could help this... Or > >> WHAT names you'd give. > >> > >> > >> 1. 
def quad(a,b,c): > >> 2. """solves quadratic equations of the form > >> 3. aX^2+bX+c, inputs a,b,c, > >> 4. works for all roots(real or complex)""" > >> 5. root=b**2-4*a*c > >> 6. if root <0: > >> 7. root=abs(complex(root)) > >> 8. j=complex(0,1) > >> 9. x1=(-b+j+sqrt(root))/2*a > >> 10. x2=(-b-j+sqrt(root))/2*a > >> 11. return x1,x2 > >> 12. else: > >> 13. x1=(-b+sqrt(root))/2*a > >> 14. x2=(-b-sqrt(root))/2*a > >> 15. return x1,x2 > >> > >> > >> After that, explain why forcing all callers to name their local variables > >> a, b, c would be a good thing. > >> > >> On Fri, Sep 7, 2018, 12:18 PM Robert Vanden Eynde > >> wrote: > >> > >>> > >>>> I disagree. Keyword arguments are a fine and good thing, but they are > >>>> best used for optional arguments IMHO. Verbosity for the sake of > >>>> verbosity is not a good thing. > >>> > >>> > >>> I disagree, when you have more than one parameter it's sometimes > >>> complicated to remember the order. Therefore, when you name your args, you > >>> have way less probability of passing the wrong variable, even with only one > >>> arg. > >>> > >>> Verbosity adds redundancy, so that both caller and callee are sure they > >>> mean the same thing. > >>> > >>> That's why Java has types everywhere, such that the "declaration part" > >>> and the "use" part agree on the same idea (same type). > >>> _______________________________________________ > >>> Python-ideas mailing list > >>> Python-ideas at python.org > >>> https://mail.python.org/mailman/listinfo/python-ideas > >>> Code of Conduct: http://python.org/psf/codeofconduct/ > >>> > >> > > > > > -- > Rhodri James *-* Kynesim Ltd > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ From steve at pearwood.info Fri Sep 7 20:28:29 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Sat, 8 Sep 2018 10:28:29 +1000 Subject: [Python-ideas] Python dialect that compiles into python In-Reply-To: References: Message-ID: <20180908002829.GG27312@ando.pearwood.info> On Fri, Sep 07, 2018 at 11:57:50AM +0000, Robert Vanden Eynde wrote: > Many features on this list propose different syntax to python, > producing different python "dialects" that can statically be > transformed to python : [...] > Using a modified version of ast, it is relatively easy to modifiy the > syntax tree of a program to produce another program. So one could > compile the "python dialect" into regular python. The last example > with partially for example doesn't even need new syntax. [...] > Actually, I might start to write this lib, that looks fun. I encourage you to do so! It would be great for non-C coders to be able to prototype proposed syntax changes to get a feel for what works and what doesn't. There are already a few joke Python transpilers around, such as "Like, Python": https://jon.how/likepython/ but I think this is a promising technique that could be used more to keep the core Python language simple while not *entirely* closing the door to people using domain-specific (or project-specific) syntax. 
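A minimal sketch of the ast round trip being discussed (an illustration added here, not Robert's actual library): parse source text, rewrite one construct, emit plain Python again. ast.unparse() needs Python 3.9 or later; on older versions a third-party package such as astor provides the same service, and a dialect with genuinely new surface syntax would also need the modified parser Robert mentions.

    import ast

    class PrintToLog(ast.NodeTransformer):
        """Toy 'dialect' rule: every print(...) call becomes log(...)."""
        def visit_Call(self, node):
            self.generic_visit(node)
            if isinstance(node.func, ast.Name) and node.func.id == "print":
                node.func = ast.Name(id="log", ctx=ast.Load())
            return node

    tree = PrintToLog().visit(ast.parse("print('hello', 42)"))
    ast.fix_missing_locations(tree)
    print(ast.unparse(tree))        # -> log('hello', 42)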
-- Steve From ethan at stoneleaf.us Fri Sep 7 21:12:30 2018 From: ethan at stoneleaf.us (Ethan Furman) Date: Fri, 07 Sep 2018 18:12:30 -0700 Subject: [Python-ideas] Python dialect that compiles into python In-Reply-To: References: Message-ID: <5B9321FE.20800@stoneleaf.us> On 09/07/2018 04:57 AM, Robert Vanden Eynde wrote: > Actually, I might start to write this lib, that looks fun. You should also check out MacroPy: https://pypi.org/project/MacroPy/ Although I freely admit I don't know if does what you are talking about. -- ~Ethan~ From steve at pearwood.info Sat Sep 8 05:04:06 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Sat, 8 Sep 2018 19:04:06 +1000 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <1c494333-3d3d-4f21-81eb-a984ec295ac5@googlegroups.com> <1f47edf9-7e5e-7076-0a3b-d067747129af@kynesim.co.uk> Message-ID: <20180908090406.GJ27312@ando.pearwood.info> On Fri, Sep 07, 2018 at 06:59:45AM -0700, Anders Hovm?ller wrote: > Personally I think readability suffers greatly already at two arguments if > none of the parameters are named. *At* two arguments? As in this example? map(len, sequence) I'll admit that I struggle to remember the calling order of list.insert, I never know which of these I ought to write: mylist.insert(0, 1) mylist.insert(1, 0) but *in general* I don't think two positional arguments is confusing. > Sometimes you can sort of fix the > readability with function names like do_something_with_a_foo_and_bar(foo, > bar), but that is usually more ugly than just using keyword arguments. It is difficult to judge the merit of that made-up example. Real examples are much more convincing and informative. > Functions in real code have > 2 arguments. Functions in real code also have <= 2 arguments. > Often when reading the code the > only way to know what those arguments are is by reading the names of the > parameters on the way in, because it's positional arguments. I don't understand that sentence. If taken literally, the way to tell what the arguments are is to look at the arguments. I think you might mean the only way to tell the mapping from arguments supplied by the caller to the parameters expected by the called function is to look at the called function's signature. If so, then yes, I agree. But why is this relevent? You don't have to convince us that for large, complex signatures (a hint that you may have excessively complex, highly coupled code!) keyword arguments are preferable to opaque positional arguments. That debate was won long ago. If a complex calling signature is unavoidable, keyword args are nicer. > But those aren't checked. I don't understand this either. Excess positional arguments aren't silently dropped, and missing ones are an error. > To me it's similar to bracing for indent: you're telling > the human one thing and the machine something else and no one is checking > that those two are in sync. No, you're telling the reader and the machine the same thing. func(a, b, c) tells both that the first parameter is given the argument a, the second is given argument b, and the third is given argument c. What's not checked is the *intention* of the writer, because it can't be. 
Neither the machine nor the reader has any insight into what I meant when I wrote the code (not even if I am the reader, six weeks after I wrote the code). Keywords help a bit with that... it's harder to screw up open(filename, 'r', buffering=-1, encoding='utf-8', errors='strict') than: open(filename, 'r', -1, 'utf-8', 'strict') but not impossible. But again, this proposal isn't for keyword arguments. You don't need to convince us that keyword arguments are good. > I have seen beginners try: > > def foo(b, a): > pass > > a = 1 > b = 2 > foo(a, b) > > and then be confused because a and b are flipped. How would they know? Beginners are confused by many things. Coming from a background in Pascal, which has no keyword arguments, it took me a while to get to grips with keyword arguments: def spam(a, b): print("a is", a) print("b is", b) a = 1 b = 2 spam(a=b, b=a) print(a, b) The effect of this, and the difference between the global a, b and local a, b, is not intuitively obvious. -- Steve From steve at pearwood.info Sat Sep 8 05:41:50 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Sat, 8 Sep 2018 19:41:50 +1000 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <5B91AC8B.9030909@canterbury.ac.nz> References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <5B91AC8B.9030909@canterbury.ac.nz> Message-ID: <20180908094150.GK27312@ando.pearwood.info> On Fri, Sep 07, 2018 at 10:39:07AM +1200, Greg Ewing wrote: > As for whether consistent naming is a good idea, seems to > me it's the obvious thing to do when e.g. you're overriding > a method, to keep the signature the same for people who want > to pass arguments by keyword. You'd need to have a pretty > strong reason *not* to keep the parameter names the same. > > Given that, it's natural to want a way to avoid repeating > yourself so much when passing them on. > > So I think the underlying idea has merit, but the particular > syntax proposed is not the best. But the proposal isn't just for a way to avoid repeating oneself when overriding methods: class Parent: def spam(self, spam, eggs, cheese): ... class Child(Parent): def spam(self, foo, bar, baz): # why the change in names? ... I agree that inconsistency here is a strange thing to do, and its a minor annoyance to have to manually repeat the names each time you override a class. Especially during rapid development, when the method signatures haven't yet reached a stable API. (But I don't know of any alternative which isn't worse, given that code is read far more often than its written and we don't design our language to only be usable for people using IntelliSense.) The proposal is for syntax to make one specific pattern shorter and more concise when *calling arbitrary functions*. Nothing to do with inheritance at all, except as a special case. It is pure syntactic sugar for one specific case, "name=name" when calling a function. Syntactic sugar is great, in moderation. I think this is too much sugar for not enough benefit. But I acknowledge that's because little of my code uses that name=name idiom. (Most of my functions take no more than three arguments, I rarely need to use keywords, but when I do, they hardly ever end up looking like name=name. A quick and dirty manual search of my code suggests this would be useful to me in less than 1% of function calls.) 
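For anyone who wants to repeat that kind of count less manually, a quick sketch (a hypothetical script, not Anders' actual analysis tool) that tallies keyword arguments already written in the name=name form:

    import ast
    import sys

    def count_name_eq_name(source):
        same = total = 0
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.Call):
                for kw in node.keywords:
                    if kw.arg is None:       # a **mapping expansion, not a keyword
                        continue
                    total += 1
                    if isinstance(kw.value, ast.Name) and kw.value.id == kw.arg:
                        same += 1
        return same, total

    if __name__ == "__main__":
        same, total = count_name_eq_name(open(sys.argv[1]).read())
        print("%d of %d keyword arguments are of the name=name form" % (same, total))

It only sees explicit keyword arguments; positional arguments whose expression happens to match the parameter name would additionally need the callee's signature.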
But for those who use that idiom a lot, this may seem more appealing. With the usual disclaimer that I understand it will never be manditory to use this syntax, nevertheless I can see it leading to the "foolish consistency" quote from PEP 8. "We have syntax to write shorter code, shorter code is better, so if we want to be Pythonic we must design our functions to use the same names for local variables as the functions we call." -- hypothetical blog post, Stackoverflow answer, opinionated tutorial, etc. I don't think this is a pattern we want to encourage. We have a confluence of a few code smells, each of which in isolation are not *necessarily* bad but often represent poor code: - complex function signatures; - function calls needing lots of arguments; - needing to use keyword arguments (as otherwise the function call is too hard to read); - a one-to-one correspondence between local variables and arguments; and syntax designed to make this case easier to use, and hence discourage people from refactoring to remove the pain. (If they can.) I stress that none of these are necessarily poor code, but they are frequently seen in poor code. As a simplified example: def function(alpha, beta, gamma): ... # later, perhaps another module def do_something_useful(spam, eggs, cheese): result = function(alpha=eggs, beta=spam, gamma=cheese) ... In this case, the proposed syntax cannot be applied, but the argument from consistency would suggest that I ought change the signature of do_something_useful to this so I can use the syntax: # consistency is good, m'kay? def do_something_useful(beta, alpha, gamma): result = function(*, alpha, beta, gamma) ... Alternatively, I could keep the existing signature: def do_something_useful(spam, eggs, cheese): alpha, beta, gamma = eggs, spam, cheese result = function(*, alpha, beta, gamma) ... To save seventeen characters on one line, the function call, we add an extra line and thirty-nine characters. We haven't really ended up with more concise code. In practice, I think the number of cases where people *actually can* take advantage of this feature by renaming their own local variables or function parameters will be pretty small. (Aside from inheritance.) But given the "consistency is good" meme, I reckon people would be always looking for opportunities to use it, and sad when they can't. (I know that *I* would, if I believed that consistency was a virtue for its own sake. I think that DRY is a virtue, and I'm sad when I have to repeat myself.) We know from other proposals [don't mention assignment expressions...] that syntax changes can be accepted even when they have limited applicability and can be misused. It comes down to a value judgement as to whether the pros are sufficiently pro and the cons insufficiently con. I don't think they do: Pros: - makes one specific, and (probably unusual) pain-point slightly less painful; - rewards consistency in naming when consistency in naming is justified. 
Cons: - creates yet another special meaning for * symbol; - implicit name binding instead of explicit; - discourages useful refactoring; - potentially encourages a bogus idea that consistency is a virtue for its own sake, regardless of whether it makes the code better or not; - similarly, it rewards consistency in naming even when consistency in naming is not needed or justified; - it's another thing for people to learn, more documentation needed, extra complexity in the parser, etc; - it may simply *shift* complexity, being even more verbose than the status quo under some circumstances. -- Steve From boxed at killingar.net Sat Sep 8 06:33:26 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Sat, 8 Sep 2018 12:33:26 +0200 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <20180908090406.GJ27312@ando.pearwood.info> References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <1c494333-3d3d-4f21-81eb-a984ec295ac5@googlegroups.com> <1f47edf9-7e5e-7076-0a3b-d067747129af@kynesim.co.uk> <20180908090406.GJ27312@ando.pearwood.info> Message-ID: <069072E7-7AA4-447B-9A68-6854C33F1C3D@killingar.net> > *At* two arguments? As in this example? > > map(len, sequence) > > > I'll admit that I struggle to remember the calling order of list.insert, > I never know which of these I ought to write: > > mylist.insert(0, 1) > mylist.insert(1, 0) > > but *in general* I don't think two positional arguments is confusing. It?s often enough. But yes, map seems logical positional to me too but I can?t tell if it?s because I?ve programmed in positional languages for many years, or that I?m a Swedish and English native speaker. I don?t see why map would be clear and insert not so I?m guessing it has to do with language somehow. I think it?s a good thing to be more explicit in border cases. I don?t know what the intuitions of future readers are. > It is difficult to judge the merit of that made-up example. Real > examples are much more convincing and informative. Agreed. I just could only vaguely remember doing this sometimes but I had no idea what to grep for so couldn?t find a real example :P >> Functions in real code have > 2 arguments. > > Functions in real code also have <= 2 arguments. Yea and they are ok as is. >> Often when reading the code the >> only way to know what those arguments are is by reading the names of the >> parameters on the way in, because it's positional arguments. > > I don't understand that sentence. If taken literally, the way to tell > what the arguments are is to look at the arguments. > > I think you might mean the only way to tell the mapping from arguments > supplied by the caller to the parameters expected by the called function > is to look at the called function's signature. > > If so, then yes, I agree. But why is this relevent? You don't have to > convince us that for large, complex signatures (a hint that you may > have excessively complex, highly coupled code!) keyword arguments are > preferable to opaque positional arguments. That debate was won long ago. > If a complex calling signature is unavoidable, keyword args are nicer. Good to see we have common ground here. I won?t try to claim the code base at work doesn?t have way too many functions with way too many parameters :P It?s a problem that we are working to ameliorate but it?s also a problem my suggested feature would help with. 
I think we should accept that such code bases exists even when managed by competent teams. Adding one parameter is often ok but only over time you can create a problem. Refactoring to remove a substantial amount of parameters is also not always feasible or with the effort. I think we should expect such code bases to be fairly common and be more common in closed source big business line apps. I think it?s important to help for these uses, but I?m biased since it?s my job :P ?We? did add @ for numerical work after all and that?s way more niche than the types of code bases I?m discussing here. I think you?d also agree on that point? >> But those aren't checked. > > I don't understand this either. Excess positional arguments aren't > silently dropped, and missing ones are an error. Yea the arity is checked but if a refactor removes one parameter and adds another all the existing call sites are super obviously wrong if you look at the definition and the call at the same time, but Python doesn?t know. > >> To me it's similar to bracing for indent: you're telling >> the human one thing and the machine something else and no one is checking >> that those two are in sync. > > No, you're telling the reader and the machine the same thing. Just like with bracing and misleading indents yes. It blames the user for a design flaw of the language. > What's not checked is the *intention* of the writer, because it can't > be. That?s my point yes. And of course it can be. With keyword arguments it is. Today. If people used them drastically more the computer would check intention more. > But again, this proposal isn't for keyword > arguments. You don't need to convince us that keyword arguments are > good. I?m not convinced I?m not in fact arguing this point :P There is a big and unfair advantage positional has over kw today due to the conciseness of one over the other. My suggestion cuts down this advantage somewhat, or drastically in some cases. >> and then be confused because a and b are flipped. > > How would they know? How would they know what? They know it?s broken because their program doesn?t work. How would they know the computer didn?t understand a is a and b is b when it?s blatantly obvious to a human? That?s my argument isn?t it? :P / Anders From jfine2358 at gmail.com Sat Sep 8 07:05:33 2018 From: jfine2358 at gmail.com (Jonathan Fine) Date: Sat, 8 Sep 2018 12:05:33 +0100 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <20180908094150.GK27312@ando.pearwood.info> References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <5B91AC8B.9030909@canterbury.ac.nz> <20180908094150.GK27312@ando.pearwood.info> Message-ID: Steve wrote: > With the usual disclaimer that I understand it will never be manditory > to use this syntax, nevertheless I can see it leading to the "foolish > consistency" quote from PEP 8. > "We have syntax to write shorter code, shorter code is better, > so if we want to be Pythonic we must design our functions to use > the same names for local variables as the functions we call." > -- hypothetical blog post, Stackoverflow answer, > opinionated tutorial, etc. > I don't think this is a pattern we want to encourage. Steve's "hypothetical blog post" is a pattern he doesn't like, and he said that it's not a pattern we want to encourage. And he proceeds to demolish this pattern, in the rest of his post. 
According to https://en.wikipedia.org/wiki/Straw_man The typical straw man argument creates the illusion of having completely refuted or defeated an opponent's proposition through the covert replacement of it with a different proposition (i.e., "stand up a straw man") and the subsequent refutation of that false argument ("knock down a straw man") instead of the opponent's proposition. So what was the original proposition. I summarise from the original post. It was to allow foo(*, a, b, c, d=3, e) as a shorthand for foo(a=a, b=b, c=c, d=3, e=e) And also that on two big code bases about 30% of all arguments would benefit from this syntax. And also that it would create an incentive for consistent naming across the code base. To me, the "30% of all arguments" deserves more careful examination. Does the proposal significant improve the reading and writing of this code? And are there other, perhaps better, ways of improving this code? I'm very keen to dig into this. I'll start a new thread for this very topic. -- Jonathan From jfine2358 at gmail.com Sat Sep 8 07:17:38 2018 From: jfine2358 at gmail.com (Jonathan Fine) Date: Sat, 8 Sep 2018 12:17:38 +0100 Subject: [Python-ideas] Improving fn(arg=arg, name=name, wibble=wibble) code Message-ID: I thank Steve D'Aprano for pointing me to this real-life (although perhaps extreme) code example https://github.com/Tinche/aiofiles/blob/master/aiofiles/threadpool/__init__.py#L17-L37 def open(file, mode='r', buffering=-1, encoding=None, errors=None, newline=None, closefd=True, opener=None, *, loop=None, executor=None): return AiofilesContextManager(_open(file, mode=mode, buffering=buffering, encoding=encoding, errors=errors, newline=newline, closefd=closefd, opener=opener, loop=loop, executor=executor)) @asyncio.coroutine def _open(file, mode='r', buffering=-1, encoding=None, errors=None, newline=None, closefd=True, opener=None, *, loop=None, executor=None): """Open an asyncio file.""" if loop is None: loop = asyncio.get_event_loop() cb = partial(sync_open, file, mode=mode, buffering=buffering, encoding=encoding, errors=errors, newline=newline, closefd=closefd, opener=opener) f = yield from loop.run_in_executor(executor, cb) return wrap(f, loop=loop, executor=executor) Anders Hovm?ller has proposed a Python syntax extension to improve this code. It provides, for example return wrap(f, *, loop, executor) as a shorthand for return wrap(f, loop=loop, executor=executor) See: https://mail.python.org/pipermail/python-ideas/2018-September/053207.html I'd like us, in this thread, to discuss OTHER possible ways of improving this code. This could include refactoring, and the introduction of tools. I'm particularly interested in gathering alternatives, and at this time not much interesting in "knowing which one is best". -- Jonathan From paddy3118 at gmail.com Sat Sep 8 07:33:07 2018 From: paddy3118 at gmail.com (Paddy3118) Date: Sat, 8 Sep 2018 04:33:07 -0700 (PDT) Subject: [Python-ideas] Add Unicode-aware str.reverse() function? Message-ID: I wrote a blog post nearly a decade ago on extending a Rosetta Code task example to handle the correct reversal of strings with combining characters. On checking my blog statistics today I found that it still had a readership and revisited the code (and updated it to Python3.6).. I found that amongst the nearly 200 languages that complete the RC task,there were a smattering of languages that correctly handled reversing strings having Unicode combining characters, including Perl 6 which uses flip. 
I would like to propose that Python add a Unicode-aware *str.reverse *method.
The problem is, I'm a Brit, who only speaks English and only very rarely
dips into Unicode.* I don't know how useful this would be!*

Cheers, Paddy.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From jfine2358 at gmail.com Sat Sep 8 08:12:00 2018
From: jfine2358 at gmail.com (Jonathan Fine)
Date: Sat, 8 Sep 2018 13:12:00 +0100
Subject: [Python-ideas] Fwd: Add Unicode-aware str.reverse() function?
In-Reply-To:
References:
Message-ID:

Paddy wrote

> I would like to propose that Python add a Unicode-aware str.reverse method.
> The problem is, I'm a Brit, who only speaks English and only very rarely
> dips into Unicode. I don't know how useful this would be!

Excellent post and piece of work. Well done!

Here's someone who might know not only how useful, but also the
wrinkles in doing it correctly:
https://www.telecom-bretagne.eu/studies/msc/professors/haralambous/

Yannis Haralambous received his Ph.D. in Pure Mathematics from the
Université de Sciences et Techniques de Lille-Flandre-Artois, Lille,
France in 1990. He is currently working as a full-time Professor at
Institut Mines-Telecom/Telecom Bretagne, Brest, in the Computer
Science Department. His research areas include digital typography and
representation of text, electronic documents, internationalization of
documents, character encodings and the preservation of the cultural
heritage of the book in the digital era. He is the author of Fonts &
Encodings, to be published by O'Reilly in 2007 (French version :
Fontes & codages, O'Reilly France, 2003).

I know Yannis, so could approach him on behalf of this list.

--
Jonathan
From boxed at killingar.net Sat Sep 8 08:21:56 2018
From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=)
Date: Sat, 8 Sep 2018 14:21:56 +0200
Subject: [Python-ideas] Improving fn(arg=arg, name=name, wibble=wibble) code
In-Reply-To:
References:
Message-ID:

It's obvious but there is one easy way to shorten the code: using **kwargs.
It's way shorter but the downsides are:

- the "real" function signature gets hidden, so IDEs for example won't
  pick it up
- the error when you make a mistake when calling is not in your code
  anymore but one level down. This is confusing.

One could imagine solving this specific case by having a type annotation
of "this function has the types of that function". Maybe:

    def _open(*args: args_of_(sync_open),
              **kwargs: kwargs_of(sync_open)) -> return_of(sync_open):

But of course this only solves the case where there is a 1:1 mapping.
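For what it's worth, here is a minimal runnable sketch of that forwarding
pattern, combined with functools.wraps (which comes up a little later in
the thread). The names _open and sync_open are taken from the aiofiles
example quoted above, but sync_open below is only a stand-in, not the real
function:

    import functools
    import inspect

    def sync_open(file, mode='r', buffering=-1, encoding=None):
        # Stand-in for the real synchronous opener; the body is irrelevant here.
        return (file, mode, buffering, encoding)

    @functools.wraps(sync_open)
    def _open(*args, **kwargs):
        # Everything is forwarded blindly, so a wrong keyword only fails
        # inside sync_open, one level down from the caller. wraps() copies
        # sync_open's metadata and sets __wrapped__, so introspection can
        # still recover the real signature.
        return sync_open(*args, **kwargs)

    print(inspect.signature(_open))
    # prints: (file, mode='r', buffering=-1, encoding=None)

This recovers the signature for tools that use inspect.signature(), though
editors that only read the source text will still see just *args and **kwargs.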
/ Anders > On 8 Sep 2018, at 13:17, Jonathan Fine wrote: > > I thank Steve D'Aprano for pointing me to this real-life (although > perhaps extreme) code example > > https://github.com/Tinche/aiofiles/blob/master/aiofiles/threadpool/__init__.py#L17-L37 > > def open(file, mode='r', buffering=-1, encoding=None, errors=None, newline=None, > closefd=True, opener=None, *, loop=None, executor=None): > return AiofilesContextManager(_open(file, mode=mode, buffering=buffering, > encoding=encoding, errors=errors, > newline=newline, closefd=closefd, > opener=opener, loop=loop, > executor=executor)) > > > @asyncio.coroutine > def _open(file, mode='r', buffering=-1, encoding=None, errors=None, > newline=None, > closefd=True, opener=None, *, loop=None, executor=None): > """Open an asyncio file.""" > if loop is None: > loop = asyncio.get_event_loop() > cb = partial(sync_open, file, mode=mode, buffering=buffering, > encoding=encoding, errors=errors, newline=newline, > closefd=closefd, opener=opener) > f = yield from loop.run_in_executor(executor, cb) > > return wrap(f, loop=loop, executor=executor) > > > > Anders Hovm?ller has proposed a Python syntax extension to improve > this code. It provides, for example > return wrap(f, *, loop, executor) > as a shorthand for > return wrap(f, loop=loop, executor=executor) > > See: https://mail.python.org/pipermail/python-ideas/2018-September/053207.html > > I'd like us, in this thread, to discuss OTHER possible ways of > improving this code. This could include refactoring, and the > introduction of tools. I'm particularly interested in gathering > alternatives, and at this time not much interesting in "knowing which > one is best". > > -- > Jonathan > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ From stephanh42 at gmail.com Sat Sep 8 08:22:00 2018 From: stephanh42 at gmail.com (Stephan Houben) Date: Sat, 8 Sep 2018 14:22:00 +0200 Subject: [Python-ideas] Fwd: Add Unicode-aware str.reverse() function? In-Reply-To: References: Message-ID: Op za 8 sep. 2018 13:33 schreef Paddy3118 : > > I would like to propose that Python add a Unicode-aware *str.reverse *method. > The problem is, I'm a Brit, who only speaks English and only very rarely > dips into Unicode.* I don't know how useful this would be!* > To be honest, quite apart from the Unicode issue, I never had a need to reverse a string in real code. .ytilibigel edepmi ot sdnet yllareneg tI Stephan > Cheers, Paddy. > > > > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From boxed at killingar.net Sat Sep 8 08:23:24 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Sat, 8 Sep 2018 14:23:24 +0200 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <5B91AC8B.9030909@canterbury.ac.nz> <20180908094150.GK27312@ando.pearwood.info> Message-ID: <1ABCDBAD-D477-40F2-AC1B-27E50F921E43@killingar.net> > To me, the "30% of all arguments" deserves more careful examination. > Does the proposal significant improve the reading and writing of this > code? And are there other, perhaps better, ways of improving this > code? Maybe my tool should be expanded to produce more nuanced data? Like how many of those 30% are: - arity 1,2,3, etc? (Arity 1 maybe should be discarded as being counted unfairly? I don?t think so but some clearly do) - matches 1 argument, 2,3,4 etc? Matching just one is of less value than matching 5. Maybe some other statistics? / Anders -------------- next part -------------- An HTML attachment was scrubbed... URL: From mertz at gnosis.cx Sat Sep 8 08:45:11 2018 From: mertz at gnosis.cx (David Mertz) Date: Sat, 8 Sep 2018 08:45:11 -0400 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <1ABCDBAD-D477-40F2-AC1B-27E50F921E43@killingar.net> References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <5B91AC8B.9030909@canterbury.ac.nz> <20180908094150.GK27312@ando.pearwood.info> <1ABCDBAD-D477-40F2-AC1B-27E50F921E43@killingar.net> Message-ID: A finer grained analysis tool would be helpful. I'm -0 on the idea because I believe it would discourage more expressive names in calling contexts in order to enable the proposed syntax. But I also see a big difference between cases where all keywords match calling names and cases where only a few of them do. I.e. this is probably a small win: # function (a=a, b=b, c=c, d=d) function(*, a, b, c, d) But this feels like it invites confusion and bugs: # function (a=my_a, b=b, c=my_c, d=d) function(*, a=my_a, b, c=my_c, d) I recognize that if the syntax were added it wouldn't force anyone to use the second version... But that means no one who WRITES the code. As a reader I would certainly have to parse some of the bad uses along with the good ones. I know these examples use simplified and artificial names, but I think the case is even stronger with more realistic names or expressions. On Sat, Sep 8, 2018, 8:24 AM Anders Hovm?ller wrote: > To me, the "30% of all arguments" deserves more careful examination. > > Does the proposal significant improve the reading and writing of this > > code? And are there other, perhaps better, ways of improving this > > code? > > > Maybe my tool should be expanded to produce more nuanced data? Like how > many of those 30% are: > > - arity 1,2,3, etc? (Arity 1 maybe should be discarded as being counted > unfairly? I don?t think so but some clearly do) > - matches 1 argument, 2,3,4 etc? Matching just one is of less value than > matching 5. > > Maybe some other statistics? 
> > / Anders > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosuav at gmail.com Sat Sep 8 08:51:02 2018 From: rosuav at gmail.com (Chris Angelico) Date: Sat, 8 Sep 2018 22:51:02 +1000 Subject: [Python-ideas] Improving fn(arg=arg, name=name, wibble=wibble) code In-Reply-To: References: Message-ID: On Sat, Sep 8, 2018 at 10:21 PM, Anders Hovm?ller wrote: > It?s obvious but there is one easy way to shorten the code: using **kwargs. It?s way shorter but the down sides are: > > - the ?real? function signature gets hidden so IDEs for example won?t pick it up > - the error when you make a mistake when calling is not in your code anymore but one level down. This is confusing. > > One could imagine solving this specific case by having a type annotation of ?this function has the types of that function?. Maybe: > > def _open(*args: args_of_(sync_open), **kwargs: kwargs_of(sync_open) -> return_of(sync_open): > > But of course this only solves the case where there is a 1:1 mapping. That can be done with functools.wraps(). ChrisA From jfine2358 at gmail.com Sat Sep 8 09:00:17 2018 From: jfine2358 at gmail.com (Jonathan Fine) Date: Sat, 8 Sep 2018 14:00:17 +0100 Subject: [Python-ideas] Fwd: Add Unicode-aware str.reverse() function? In-Reply-To: References: Message-ID: Stephan Houben wrote: > To be honest, quite apart from the Unicode issue, I never had a need to > reverse a string in real code. > > .ytilibigel edepmi ot sdnet yllareneg tI Sometimes we have to write 'backwards' to improve legibility. Odd though that may sound. Some languages are written from left to right. Some from right to left. And some ancient writing alternates, line to line. https://en.wikipedia.org/wiki/Right-to-left https://www.andiamo.co.uk/resources/right-left-languages https://en.wikipedia.org/wiki/Boustrophedon Users of modern rendering systems, such as in modern browsers, don't have to worry about this. This is because the renderer will handle LTR and RTL switches based on the language attribute. (Alway, text should be encoded in reading order.) But those implementing a bidectional rendering system might have to worry about such things. So what does that have to do with us, Python developers and users. According to the web: Arabic, Hebrew, Persian, and Urdu are the most widespread RTL writing systems in modern times. To provide legible localised (translated) help messages at the interactive Python interpreter, the system somewhere will have to correctly reverse Unicode strings, either before or after processing combining characters. There are about 422 million Arabic speakers, 110 million Persian, 5 million Hebrew and 100 million Urdu. Definitely worth doing, in my opinion. Otherwise the help message will look TO THEM like this: daer ot drah yrev si siht instead of this is very hard to read -- Jonathan From mal at egenix.com Sat Sep 8 09:08:55 2018 From: mal at egenix.com (M.-A. Lemburg) Date: Sat, 8 Sep 2018 15:08:55 +0200 Subject: [Python-ideas] Add Unicode-aware str.reverse() function? In-Reply-To: References: Message-ID: <602aa953-3e46-d03e-caf6-55ce520348bf@egenix.com> On 08.09.2018 13:33, Paddy3118 wrote: > I wrote a blog post > nearly > a decade ago on extending a Rosetta Code task example > to handle the correct > reversal of strings with combining characters. 
> On checking my blog statistics today I found that it still had a > readership and revisited the code > > (and updated it to Python3.6).. > > I found that amongst the nearly 200 languages that complete the RC > task,there were a smattering of languages that correctly handled > reversing strings having Unicode combining characters, > including Perl 6 > which uses flip. > > I would like to propose that Python add a Unicode-aware *str.reverse > *method. The problem is, I'm a Brit, who only speaks English and only > very rarely dips into Unicode./I don't know how useful this would be!/ I've been using Unicode for quite a while and so far never had a need to reverse a string in real life. This sometimes comes up as coding challenge and perhaps in language classes as exercise, but I can hardly imaging a use case where we'd need a builtin method for this. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Experts (#1, Sep 08 2018) >>> Python Projects, Coaching and Consulting ... http://www.egenix.com/ >>> Python Database Interfaces ... http://products.egenix.com/ >>> Plone/Zope Database Interfaces ... http://zope.egenix.com/ ________________________________________________________________________ ::: We implement business ideas - efficiently in both time and costs ::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ http://www.malemburg.com/ From mal at egenix.com Sat Sep 8 09:22:49 2018 From: mal at egenix.com (M.-A. Lemburg) Date: Sat, 8 Sep 2018 15:22:49 +0200 Subject: [Python-ideas] Fwd: Add Unicode-aware str.reverse() function? In-Reply-To: References: Message-ID: <31d53c32-956d-c1a6-13c2-835e1bc4b7bc@egenix.com> On 08.09.2018 15:00, Jonathan Fine wrote: > Stephan Houben wrote: > >> To be honest, quite apart from the Unicode issue, I never had a need to >> reverse a string in real code. >> >> .ytilibigel edepmi ot sdnet yllareneg tI > > Sometimes we have to write 'backwards' to improve legibility. Odd > though that may sound. > > Some languages are written from left to right. Some from right to > left. And some ancient writing alternates, line to line. > > https://en.wikipedia.org/wiki/Right-to-left > https://www.andiamo.co.uk/resources/right-left-languages > https://en.wikipedia.org/wiki/Boustrophedon > > Users of modern rendering systems, such as in modern browsers, don't > have to worry about this. This is because the renderer will handle LTR > and RTL switches based on the language attribute. (Alway, text should > be encoded in reading order.) > > But those implementing a bidectional rendering system might have to > worry about such things. Most likely yes, but they would not render RTL text by first switching the direction and then printing them LTR again. Please also note that switching from LTR to RTL and back again is possible within a Unicode string, so applying str.reverse() would actually make things worse and not better :-) Processing in Unicode is always left to right, even if the resulting text may actually be rendered right to left or top to bottom. 
See UAX #9 for more details: http://www.unicode.org/reports/tr9/ Here's a document outlining how to render scripts which are LTR, RTL or TTB: https://www.w3.org/International/questions/qa-scripts -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Experts (#1, Sep 08 2018) >>> Python Projects, Coaching and Consulting ... http://www.egenix.com/ >>> Python Database Interfaces ... http://products.egenix.com/ >>> Plone/Zope Database Interfaces ... http://zope.egenix.com/ ________________________________________________________________________ ::: We implement business ideas - efficiently in both time and costs ::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ http://www.malemburg.com/ From boxed at killingar.net Sat Sep 8 09:34:37 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Sat, 8 Sep 2018 15:34:37 +0200 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <5B91AC8B.9030909@canterbury.ac.nz> <20180908094150.GK27312@ando.pearwood.info> <1ABCDBAD-D477-40F2-AC1B-27E50F921E43@killingar.net> Message-ID: <9F645217-9A74-4EFD-9AA9-2D090634C107@killingar.net> > A finer grained analysis tool would be helpful. I'm -0 on the idea because I believe it would discourage more expressive names in calling contexts in order to enable the proposed syntax. But I also see a big difference between cases where all keywords match calling names and cases where only a few of them do. I?ll try to find some time to tune it when I get back to work then. > I.e. this is probably a small win: > > # function (a=a, b=b, c=c, d=d) > function(*, a, b, c, d) > > But this feels like it invites confusion and bugs: > > # function (a=my_a, b=b, c=my_c, d=d) > function(*, a=my_a, b, c=my_c, d) That example could also be rewritten as function(a=my_a, c=my_c, *, b, d) or function(*, b, c, d, a=my_a, c=my_c) Both are much nicer imo. Hmmm... maybe my suggestion is actually better if the special case is only after * so the first of those is legal and the rest not. Hadn?t considered that option before now. > I know these examples use simplified and artificial names, but I think the case is even stronger with more realistic names or expressions. Stronger in what direction? :P / Anders From tjreedy at udel.edu Sat Sep 8 09:38:17 2018 From: tjreedy at udel.edu (Terry Reedy) Date: Sat, 8 Sep 2018 09:38:17 -0400 Subject: [Python-ideas] Add Unicode-aware str.reverse() function? In-Reply-To: References: Message-ID: On 9/8/2018 7:33 AM, Paddy3118 wrote: > I wrote a blog post > nearly > a decade ago on extending a Rosetta Code task example > to handle the correct > reversal of strings with combining characters. The problem statement gives one Latin string example: "as?df?" (combining circle between 's' and 'd') should be "f?ds?a", not "?fd?sa". Note that Thunderbird combines the overbar '\u0305' with 'f' but does *not* combine the ? '\u20dd' with anything, because ? does not have the 'combining' property. >>> import unicodedata >>> unicodedata.combining('\u20dd') 0 Firefox garbles the problem statement by putting the following char, not the preceeding char, inside the circle. 
What is the 'correct reversal' of '\u301' or '\u301a'? > On checking my blog statistics today I found that it still had a > readership and revisited the code > > (and updated it to Python3.6).. Your code raises IndexError on the strings above. If the intended domain of your function is 'all Python strings' (sequences of unicode codepoints), it is buggy. If the intended domain is some 'properly formed' subset of strings, then IndexError should be caught and replaced with ValueError('string starts with combining character'). Your code uses another latin string, ?str?m, as an example because it gives the 'incorrect' answer "?fd?sa" for the reverse of "as?df?". > I found that amongst the nearly 200 languages that complete the RC > task,there were a smattering of languages that correctly handled > reversing strings having Unicode combining characters, At least Python is one that can do so, at least for latin chars. > I would like to propose that Python add a Unicode-aware *str.reverse > *method. A python string is a sequence of unicode codepoints. String methods operate on the string as such. We intentionally leave higher level methods to third parties. One reason is the problem of getting such things 'right' for all strings. What do we do with a leading combining char? Do combining characters always combine with the preceding char, as your code assumes? Do all languages treat all combining characters the same? (Pretty sure not.) Does .combining() encompass all order dependencies that should considered in a higher level reverse function? (According the the page you reference, no.) -- Terry Jan Reedy From jfine2358 at gmail.com Sat Sep 8 09:41:27 2018 From: jfine2358 at gmail.com (Jonathan Fine) Date: Sat, 8 Sep 2018 14:41:27 +0100 Subject: [Python-ideas] Fwd: Add Unicode-aware str.reverse() function? In-Reply-To: <31d53c32-956d-c1a6-13c2-835e1bc4b7bc@egenix.com> References: <31d53c32-956d-c1a6-13c2-835e1bc4b7bc@egenix.com> Message-ID: M.-A. Lemburg wrote: > Most likely yes, but they would not render RTL text by first > switching the direction and then printing them LTR again. > > Please also note that switching from LTR to RTL and back again > is possible within a Unicode string, so applying str.reverse() > would actually make things worse and not better :-) > > Processing in Unicode is always left to right, even if the resulting > text may actually be rendered right to left or top to bottom. > http://www.unicode.org/reports/tr9/ > https://www.w3.org/International/questions/qa-scripts Your reminder of the difficulties in Unicode, and the URLs are much appreciated. In particular, the keyword 'while' in Arabic should be written Left-To-Right, even though the ambient text is Left-To-Right. I've found these URLs, which suggests that there's a still a problem to be solved. https://www.linkedin.com/pulse/fix-rtl-right-left-support-persian-arabic-text-ubuntu-ghorbani/ https://askubuntu.com/questions/983480/showing-text-file-content-right-to-left-in-the-terminal https://github.com/behdad/bicon My understanding is that at present it's not straightforward to provide legible localised text at the Python console, when the locale language is Arabic, Persian, Hebrew or Urdu. (And another problem is allow copy and paste of such text.) If it is straightforward to provide RTL localisation at the Python interpreter, I'd very much appreciate being pointed to such a solution. 
-- Jonathan From jfine2358 at gmail.com Sat Sep 8 09:53:05 2018 From: jfine2358 at gmail.com (Jonathan Fine) Date: Sat, 8 Sep 2018 14:53:05 +0100 Subject: [Python-ideas] Add Unicode-aware str.reverse() function? In-Reply-To: References: Message-ID: Terry Ready wrote: > A python string is a sequence of unicode codepoints. String methods operate > on the string as such. We intentionally leave higher level methods to third > parties. One reason is the problem of getting such things 'right' for all > strings. What do we do with a leading combining char? Do combining > characters always combine with the preceding char, as your code assumes? Do > all languages treat all combining characters the same? (Pretty sure not.) > Does .combining() encompass all order dependencies that should considered in > a higher level reverse function? (According the the page you reference, no.) I've already mentioned Yannis Haralambous (in this thread). He's something of an expert on these matters. And also the author of Fonts & Encodings: From Advanced Typography to Unicode and Everything in Between http://shop.oreilly.com/product/9780596102425.do He's likely to know how to get things right for users for all (or at least many) languages and strings. I've let him know about this discussion. -- Jonathan From tjreedy at udel.edu Sat Sep 8 09:57:21 2018 From: tjreedy at udel.edu (Terry Reedy) Date: Sat, 8 Sep 2018 09:57:21 -0400 Subject: [Python-ideas] Improving fn(arg=arg, name=name, wibble=wibble) code In-Reply-To: References: Message-ID: On 9/8/2018 7:17 AM, Jonathan Fine wrote: > I thank Steve D'Aprano for pointing me to this real-life (although > perhaps extreme) code example > > https://github.com/Tinche/aiofiles/blob/master/aiofiles/threadpool/__init__.py#L17-L37 > > def open(file, mode='r', buffering=-1, encoding=None, errors=None, newline=None, > closefd=True, opener=None, *, loop=None, executor=None): > return AiofilesContextManager(_open(file, mode=mode, buffering=buffering, > encoding=encoding, errors=errors, > newline=newline, closefd=closefd, > opener=opener, loop=loop, > executor=executor)) Given that open and _open, likely written at the same time, have the same signature, I would have written the above as the slightly faster call return AiofilesContextManager(_open( file, mode, buffering, encoding, errors, newline, closefd, opener, loop=loop, executor=executor)) > @asyncio.coroutine > def _open(file, mode='r', buffering=-1, encoding=None, errors=None, > newline=None, > closefd=True, opener=None, *, loop=None, executor=None): > """Open an asyncio file.""" > if loop is None: > loop = asyncio.get_event_loop() > cb = partial(sync_open, file, mode=mode, buffering=buffering, > encoding=encoding, errors=errors, newline=newline, > closefd=closefd, opener=opener) > f = yield from loop.run_in_executor(executor, cb) > > return wrap(f, loop=loop, executor=executor) > -- Terry Jan Reedy From mertz at gnosis.cx Sat Sep 8 10:05:40 2018 From: mertz at gnosis.cx (David Mertz) Date: Sat, 8 Sep 2018 10:05:40 -0400 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <9F645217-9A74-4EFD-9AA9-2D090634C107@killingar.net> References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <5B91AC8B.9030909@canterbury.ac.nz> <20180908094150.GK27312@ando.pearwood.info> <1ABCDBAD-D477-40F2-AC1B-27E50F921E43@killingar.net> 
	<9F645217-9A74-4EFD-9AA9-2D090634C107@killingar.net>
Message-ID:

On Sat, Sep 8, 2018 at 9:34 AM Anders Hovmöller wrote:

> function(a=my_a, c=my_c, *, b, d)
> function(*, b, c, d, a=my_a, c=my_c)
>

Yes, those look less bad. They also almost certainly should get this
message rather than working:

TypeError: function() got multiple values for keyword argument 'c'

But they also force changing the order of keyword arguments in the call.
That doesn't do anything to the *behavior* of the call, but it often
affects readability.

For functions with lots of keyword arguments there is often a certain
convention about the order they are passed in that readers expect to see.
Those examples of opening and reading files that several people have given
are good examples of this. I.e. most optional arguments are not used, but
when they are used they have certain relationships among them that lead
readers to expect them in a certain order.

Here's a counter-proposal that does not require any new syntax. Is there
ANYTHING your new syntax would really get you that this solution does not
accomplish?! (other than save 4 characters; fewer if you came up with a
one character name for the helper)

>>> def function(a=11, b=22, c=33, d=44):
...     print(a, b, c, d)
...
>>> a, b, c = 1, 2, 3
>>> function(a=77, **use('b d'))
77 2 33 None

We could implement this helper function like this:

>>> def use(names):
...     kws = {}
...     for name in names.split():
...         try:
...             val = eval(name)
...         except:
...             val = None
...         kws[name] = val
...     return kws

--
Keeping medicines from the bloodstreams of the sick; food
from the bellies of the hungry; books from the hands of the
uneducated; technology from the underdeveloped; and putting
advocates of freedom in prisons. Intellectual property is
to the 21st century what the slave trade was to the 16th.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From mertz at gnosis.cx Sat Sep 8 10:14:40 2018
From: mertz at gnosis.cx (David Mertz)
Date: Sat, 8 Sep 2018 10:14:40 -0400
Subject: [Python-ideas] Keyword only argument on function call
In-Reply-To:
References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com>
	<20180906131028.GB27312@ando.pearwood.info>
	<79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com>
	<69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk>
	<5B91AC8B.9030909@canterbury.ac.nz>
	<20180908094150.GK27312@ando.pearwood.info>
	<1ABCDBAD-D477-40F2-AC1B-27E50F921E43@killingar.net>
	<9F645217-9A74-4EFD-9AA9-2D090634C107@killingar.net>
Message-ID:

I'm not sure whether my toy function is better to assume None for a name
that is "used" but does not exist, or to raise a NameError. I can see
arguments in both directions, but either behavior is a very small number
of lines (and the same decision exists for the proposed syntax).

You might also allow the `use()` function to take some argument(s) other
than a space-separated string, but that's futzing with a demonstration API.

On Sat, Sep 8, 2018 at 10:05 AM David Mertz wrote:

> On Sat, Sep 8, 2018 at 9:34 AM Anders Hovmöller
> wrote:
>
>> function(a=my_a, c=my_c, *, b, d)
>> function(*, b, c, d, a=my_a, c=my_c)
>>
>
> Yes, those look less bad. They also almost certainly should get this
> message rather than working:
>
> TypeError: function() got multiple values for keyword argument 'c'
>
> But they also force changing the order of keyword arguments in the call.
> That doesn't do anything to the *behavior* of the call, but it often
> affects readability.
> > For functions with lots of keyword arguments there is often a certain > convention about the order they are passed in that readers expect to see. > Those examples of opening and reading files that several people have given > are good examples of this. I.e. most optional arguments are not used, but > when they are used they have certain relationships among them that lead > readers to expect them in a certain order. > > Here's a counter-proposal that does not require any new syntax. Is there > ANYTHING your new syntax would really get you that this solution does not > accomplish?! (other than save 4 characters; fewer if you came of with a one > character name for the helper) > > >>> def function(a=11, b=22, c=33, d=44): > ... print(a, b, c, d) > ... > >>> a, b, c = 1, 2, 3 > >>> function(a=77, **use('b d')) > 77 2 33 None > > > We could implement this helper function like this: > > >>> def use(names): > ... kws = {} > ... for name in names.split(): > ... try: > ... val = eval(name) > ... except: > ... val = None > ... kws[name] = val > ... return kws > > > > -- > Keeping medicines from the bloodstreams of the sick; food > from the bellies of the hungry; books from the hands of the > uneducated; technology from the underdeveloped; and putting > advocates of freedom in prisons. Intellectual property is > to the 21st century what the slave trade was to the 16th. > -- Keeping medicines from the bloodstreams of the sick; food from the bellies of the hungry; books from the hands of the uneducated; technology from the underdeveloped; and putting advocates of freedom in prisons. Intellectual property is to the 21st century what the slave trade was to the 16th. -------------- next part -------------- An HTML attachment was scrubbed... URL: From julien at palard.fr Sat Sep 8 10:26:20 2018 From: julien at palard.fr (Julien Palard) Date: Sat, 08 Sep 2018 14:26:20 +0000 Subject: [Python-ideas] Does jargon make learning more difficult? In-Reply-To: References: <1b8e0506-3414-4fb9-9a0c-c913c3074309@googlegroups.com> <5B71FF06.2070407@canterbury.ac.nz> <23417.33056.874196.815862@turnbull.sk.tsukuba.ac.jp> Message-ID: o/ > IMHO a better usage of the PSF funding would be to organize some local > sprints to translate the Python documentation. That's what we're already doing here in France for the french translation, and the PSF is already fouding them (thanks!) in Paris [1] and the AFPy [2] is founding them in Lyon. [1]: https://www.meetup.com/fr-FR/Python-AFPY-Paris [2]: https://www.afpy.org/ --? Julien Palard https://mdk.fr From mike at selik.org Sat Sep 8 10:26:41 2018 From: mike at selik.org (Michael Selik) Date: Sat, 8 Sep 2018 07:26:41 -0700 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <9F645217-9A74-4EFD-9AA9-2D090634C107@killingar.net> References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <5B91AC8B.9030909@canterbury.ac.nz> <20180908094150.GK27312@ando.pearwood.info> <1ABCDBAD-D477-40F2-AC1B-27E50F921E43@killingar.net> <9F645217-9A74-4EFD-9AA9-2D090634C107@killingar.net> Message-ID: On Sat, Sep 8, 2018, 6:34 AM Anders Hovm?ller wrote: > > A finer grained analysis tool would be helpful. I'm -0 on the idea > because I believe it would discourage more expressive names in calling > contexts in order to enable the proposed syntax. 
But I also see a big > difference between cases where all keywords match calling names and cases > where only a few of them do. > > I?ll try to find some time to tune it when I get back to work then. > Even better would be to show full context on one or a few cases where this syntax helps. I've found that many proposals in this mailing list have better solutions when one can see the complete code. If your proposal seems like the best solution after seeing the context, that can be more compelling than some assertion about 30% of parameters. If you can't share proprietary code, why not link to a good example in the Django project? If nothing else, maybe Django could get a pull request out of this. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From paddy3118 at gmail.com Sat Sep 8 12:41:33 2018 From: paddy3118 at gmail.com (Paddy3118) Date: Sat, 8 Sep 2018 09:41:33 -0700 (PDT) Subject: [Python-ideas] Add Unicode-aware str.reverse() function? In-Reply-To: References: Message-ID: <70ccf762-8fcb-4bf5-9f63-284373d5829e@googlegroups.com> Thanks for your replies. After reading them,, although I seem to have a brain freeze at the moment and cannot think of an algorithm; I think it plausible, just in the ASCII world for someone to want to iterate through characters in a string in reverse order - maybe to zip with another existing iterable that would otherwise need to be reversed? If it shifted from ASCII to unicode then letters with their combining characters would have to be reversed as a single character; but also iterated over as a single unicode "character" - another problem! I thought so, I scratch the surface of unicode, and find a deep chasm awaits. On Saturday, 8 September 2018 12:33:07 UTC+1, Paddy3118 wrote: > > I wrote a blog post > nearly > a decade ago on extending a Rosetta Code task example > to handle the correct > reversal of strings with combining characters. > On checking my blog statistics today I found that it still had a > readership and revisited the code > > (and updated it to Python3.6).. > > I found that amongst the nearly 200 languages that complete the RC > task,there were a smattering of languages that correctly handled reversing > strings having Unicode combining characters, > including Perl 6 > which uses flip. > > I would like to propose that Python add a Unicode-aware *str.reverse *method. > The problem is, I'm a Brit, who only speaks English and only very rarely > dips into Unicode.* I don't know how useful this would be!* > > Cheers, Paddy. > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From paddy3118 at gmail.com Sat Sep 8 12:43:07 2018 From: paddy3118 at gmail.com (Paddy3118) Date: Sat, 8 Sep 2018 09:43:07 -0700 (PDT) Subject: [Python-ideas] Fwd: Add Unicode-aware str.reverse() function? In-Reply-To: References: Message-ID: Please involve those with more knowledge on the subject. Thanks. On Saturday, 8 September 2018 13:13:05 UTC+1, Jonathan Fine wrote: > > Paddy wrote > > > I would like to propose that Python add a Unicode-aware str.reverse > method. > > The problem is, I'm a Brit, who only speaks English and only very rarely > > dips into Unicode. I don't know how useful this would be! > > Excellent post and piece of work. Well done! > > Here's someone who might know not only how useful, but also the > wrinkles in doing it correctly: > > https://www.telecom-bretagne.eu/studies/msc/professors/haralambous/ > > Yannis Haralambous received his Ph.D. in Pure Mathematics from the > Universit? 
de Sciences et Techniques de Lille-Flandre-Artois, Lille, > France in 1990. He is currently working as a full-time Professor at > Institut Mines-Telecom/Telecom Bretagne, Brest, in the Computer > Science Department. His research areas include digital typography and > representation of text, electronic documents, internationalization of > documents, character encodings and the preservation of the cultural > heritage of the book in the digital era. He is the author of Fonts & > Encodings, to be published by O'Reilly in 2007 (French version : > Fontes & codages, O'Reilly France, 2003). > > > I know Yannis, so could approach him on behalf of this list. > > -- > Jonathan > _______________________________________________ > Python-ideas mailing list > Python... at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at pearwood.info Sat Sep 8 13:02:46 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Sun, 9 Sep 2018 03:02:46 +1000 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> Message-ID: <20180908170246.GL27312@ando.pearwood.info> On Thu, Sep 06, 2018 at 07:05:57AM -0700, Anders Hovm?ller wrote: > On Thursday, September 6, 2018 at 3:11:46 PM UTC+2, Steven D'Aprano wrote: [...] > > But why should I feel bad about failing to > > use the same names as the functions I call? > > Yea, why would you feel bad? If you should have different names, then do. > Of course. You are suggesting special syntax which encourages people to name their local variables the same as the parameters to functions which they call. That makes a value judgement that it is not just a good thing to match those names, but that it is *such* a good thing that the language ought to provide syntax to make it easier. If we make this judgement that consistency of names is Good, then naturally *inconsistency* of names is, if not outright Bad, at least *Less* Good and therefore to be avoided. If this suggestion is accepted, it's likely that there will be peer pressure to treat this as more Pythonic (i.e. better quality code) than the older explicit name=name style, which will quickly become unPythonic. See, for example, how quickly people have moved to the implicit f-strings over the explicit string.format form. Laziness and conciseness trumps the Zen. Whether this is a good thing or a bad thing, I leave to people to make up their own mind. If we believe that this consistency is desirable then maybe this would be a good thing. Linters could warn when you use "name=spam" instead of "*, name"; style guides can demand that code always uses this idiom whenever practical, tutorials and blog posts will encourage it, and the peer pressure to rename variables to match the called function's parameters would be a good thing too. But if consistency for consistency's sake is not generally a good thing, then we ought not to add such syntax just for conciseness. > > If some library author names > > the parameter to a function "a", why should I be encouraged to use > > that same name *just for the sake of consistency*? > > It would encourage library authors to name their parameters well. It > wouldn't do anything else. 
If library authors are choosing bad names for their parameters, how would this syntax change that practice? If they care so little for their callers that they choose poorly-named parameters, I doubt this will change their practice. But I'm not actually talking about library authors choosing bad names. I only used "a" as the name following your example. I presumed it was a stand-in for a more realistic name. There's no reason to expect that there's only one good name that works equally well as a formal parameter and as a local argument. Formal parameters are often more generic, local arguments can be more specific to the caller's context. Of course I understand that with this proposal, there's nothing *forcing* people to use it. But it shifts the *preferred* idiom from explicit "name=spam" to implicit "*, name" and puts the onus on people to justify why they aren't naming their local variables the same as the function parameter, instead of treating "the same name" as just another name. [...] > > My own feeling is that this feature would encourage what I consider a > > code-smell: function calls requiring large numbers of arguments. Your > > argument about being concise makes a certain amount of sense if you are > > frequently making calls like this: > > > > I don't see how that's relevant (or true, but let's stick with relevant). Let's not :-) Regarding it being a code-smell: https://refactoring.guru/smells/long-parameter-list http://wiki.c2.com/?TooManyParameters For a defence of long parameter lists, see the first answer here: http://wiki.c2.com/?LongParameterList but that active preference for long parameter lists seems to be a very rare, more common is the view that *at best* long parameter lists is a necessary evil that needs mitigation. I think this is an extreme position to take: https://www.matheus.ro/2018/01/29/clean-code-avoid-many-arguments-functions/ and I certainly wouldn't want to put a hard limit on the number of parameters allowed. But in general, I think it is unquestionable that long parameter lists are a code-smell. It is also relevant in this sense. Large, complex function calls are undoubtably painful. We have mitigated that pain somewhat by various means, probably the best of which are named keyword arguments, and sensible default values. The unintended consequence of this is that it has reduced the pressure on developers to redesign their code to avoid long function signatures, leading to more technical debt in the long run. Your suggestion would also reduce the pain of functions that require many arguments. That is certainly good news if the long argument list is *truly necessary* but it does nothing to reduce the amount of complexity or technical debt. The unintended consequence is likewise that it reduces the pressure on developers to avoid designing such functions in the first place. This might sound like I am a proponent of hair-shirt programming where everything is made as painful as possible so as to force people to program the One True Way. That's not my intention at all. I love my syntactic sugar as much as the next guy. But I'd rather deal with the trap of technical debt and excessive complexity by avoiding it in the first place, not by making it easier to fall into. 
The issue I have is that the problem you are solving is *too narrow*: it singles out a specific special case of "function call is too complex with too many keyword arguments", namely the one where the arguments are simple names which duplicate the parameter exactly, but without actually reducing or mitigating the underlying problems with such code. (On the contrary, I fear it will *encourage* such code.) So I believe this feature would add complexity to the language, making keyword arguments implicit instead of explicit, for very little benefit. (Not withstanding your statement that 30% of function calls would benefit. That doesn't match my experience, but we're looking at different code bases.) > There are actual APIs that have lots of arguments. GUI toolkits are a great > example. Another great example is to send a context dict to a template > engine. Indeed. And I'm sympathetic that some tasks are inherently complex and require many arguments. Its a matter of finding a balance between being able to use them, without encouraging them. > To get benefit from your syntax, I would need to > > extract out the arguments into temporary variables: > > which completely cancels out the "conciseness" argument. [...] > > However you look at it, it's longer and less concise if you have to > > create temporary variables to make use of this feature. > > > Ok. Sure, but that's a straw man.... You claimed the benefit of "conciseness", but that doesn't actually exist unless your arguments are already local variables named the same as the parameters of the function you are calling. Getting those local variables is not always free: sometimes they're natually part of your function anyway, and then your syntax would be a genuine win for conciseness. But often they're not, and you have to either forgo the benefit of your syntax, or add complexity to your function in order to gain that benefit. Pointing out that weakness in your argument is not a straw man. -- Steve From desmoulinmichel at gmail.com Sat Sep 8 13:05:09 2018 From: desmoulinmichel at gmail.com (Michel Desmoulin) Date: Sat, 8 Sep 2018 10:05:09 -0700 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> Message-ID: <94daa9d3-0cff-c5ff-0ed3-1d552cbc256e@gmail.com> Le 06/09/2018 ? 03:15, Anders Hovm?ller a ?crit?: > I have a working implementation for a new syntax which would make using keyword arguments a lot nicer. Wouldn't it be awesome if instead of: > > foo(a=a, b=b, c=c, d=3, e=e) > > we could just write: > > foo(*, a, b, c, d=3, e) > It will make code harder to read. Indeed, now your brain has to make the distinction between: foo(a, *, b, c) and: foo(a, b, *, c) Which is very subtle, yet not at all the same thing. All in all, this means: - you have to stop to get the meaning of this. Scanning the lines doesn't work anymore. - this is a great opportunity for mistakes, and hence bugs. - the combination of the two makes bugs that are hard to spot and fix. -1 > and it would mean the exact same thing? This would not just be shorter but would create an incentive for consistent naming across the code base. > > So the idea is to generalize the * keyword only marker from function to also have the same meaning at the call site: everything after * is a kwarg. With this feature we can now simplify keyword arguments making them more readable and concise. (This syntax does not conflict with existing Python code.) 
> > The full PEP-style suggestion is here: https://gist.github.com/boxed/f72221e7e77370be3e5703087c1ba54d > > I have also written an analysis tool you can use on your code base to see what kind of impact this suggestion might have. It's available at https://gist.github.com/boxed/610b2ba73066c96e9781aed7c0c0b25c . The results for django and twisted are posted as comments to the gist. > > We've run this on our two big code bases at work (both around 250kloc excluding comments and blank lines). The results show that ~30% of all arguments would benefit from this syntax. > > Me and my colleague Johan L?bcke have also written an implementation that is available at: https://github.com/boxed/cpython > > / Anders Hovm?ller > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > From klahnakoski at mozilla.com Sat Sep 8 13:33:22 2018 From: klahnakoski at mozilla.com (Kyle Lahnakoski) Date: Sat, 8 Sep 2018 13:33:22 -0400 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> Message-ID: <5329d375-e366-91a9-4b85-de8af6fa5761@mozilla.com> I agree that this is a familiar pattern, but I long since forgot the specifics of the domain it happens in.? I borrowed your code, and added filename tracking to see what source files had high `could_have_been_a_matched_kwarg`.? Here is the top one: https://github.com/django/django/blob/master/tests/migrations/test_autodetector.py The argument-name-matches-the-local-variable-name pattern does appear to happen in many test files. I assume programmers are more agnostic about variable names in a test because they have limited impact on the rest of the program; matching the argument names makes sense. There are plenty of non-test files that can use this pattern, here are two intense ones: https://github.com/django/django/blob/master/django/contrib/admin/options.py (212 call parameters match) https://github.com/django/django/blob/master/django/db/backends/base/schema.py (69 call parameters match) Opening these in an IDE, and looking at the function definitions, there is a good chance you find a call where the local variable and argument names match.? It is interesting to see this match, but I not sure how I feel about it.? For example, the options.py has a lot of small methods that deal with (request, obj) pairs: eg? `has_view_or_change_permission(self, request, obj=None)`? Does that mean there should be a namedtuple("request_on_object", ["request", "obj"]) to "simplify" all these calls?? There are also many methods that accept a single `request` argument; but I doubt they would benefit from the new syntax. On 2018-09-06 06:15, Anders Hovm?ller wrote: > I have a working implementation for a new syntax which would make using keyword arguments a lot nicer. 
Wouldn't it be awesome if instead of: > > foo(a=a, b=b, c=c, d=3, e=e) > > we could just write: > > foo(*, a, b, c, d=3, e) > From mertz at gnosis.cx Sat Sep 8 13:41:40 2018 From: mertz at gnosis.cx (David Mertz) Date: Sat, 8 Sep 2018 13:41:40 -0400 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <56D8E592-9A88-4FCF-809E-9484B89441BE@killingar.net> References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <5B91AC8B.9030909@canterbury.ac.nz> <20180908094150.GK27312@ando.pearwood.info> <1ABCDBAD-D477-40F2-AC1B-27E50F921E43@killingar.net> <9F645217-9A74-4EFD-9AA9-2D090634C107@killingar.net> <56D8E592-9A88-4FCF-809E-9484B89441BE@killingar.net> Message-ID: > > I disagree. Those are examples of people being used to *positional > arguments* and this expecting that order to carry over. I don?t think > that?s a good argument because it presupposes that a habit of positional > arguments is good. > If 99% of existing code uses: pd.read_csv(fname, parse_dates=True, day_first=True) In preference to: pd.read_csv(fname, day_first=True, parse_dates=True) It seems pretty absurd to say that readability isn't harmed by someone choosing the "non-standard" order. Of course it still works the same for the library; but it's not the same for humans reading the code. There is one thing: my proposal wouldn?t result in a NameError at the > eval() call. You tried it out in a console but didn?t think about the > scoping properly. For your suggestion to work you have to copy paste that > helper function into the code at all scopes you want to use it. > This is just wrong. Assuming there is no 'd' available in the current scope, what could this POSSIBLY do other than raise a NameError: function(a=77, *, b, d) My little utility function decided to convert the NameError into a None value for the missing variable; but I mentioned that I'm not sure whether that's more useful behavior, and I'm not much attached to one or the other. function(a=77, **use('b d')) To make that helper function work you need to grab the stack frame and > extract the variables from there. You could absolutely do that though. > That?s pretty evil though. > Nope, there's absolutely no need to poke into the stack frame. It's just a regular closure over any variables that might exist in surrounding scopes. *Exactly* the same thing that your proposal would have to do. It makes absolutely no difference how deeply or shallowly nested the call to `use()` might be... A name like `d` simply is or is not available. Since the utility function doesn't have its own locals or formal parameters, nothing is changed by being one level deeper.[*] [*] Actually, that's not true. My implementation potentially steps on the three names `names`, `name` and `kws`. Perhaps those should be called `__names`, `__name`, and `__kws` to avoid that issue. If so, that's an easy change. -- Keeping medicines from the bloodstreams of the sick; food from the bellies of the hungry; books from the hands of the uneducated; technology from the underdeveloped; and putting advocates of freedom in prisons. Intellectual property is to the 21st century what the slave trade was to the 16th. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jfine2358 at gmail.com Sat Sep 8 13:52:28 2018 From: jfine2358 at gmail.com (Jonathan Fine) Date: Sat, 8 Sep 2018 18:52:28 +0100 Subject: [Python-ideas] Improving fn(arg=arg, name=name, wibble=wibble) code In-Reply-To: References: Message-ID: OK. Here's another piece of code to look at, from a URL that Kyle Lahnakoski posted to a different thread. https://github.com/django/django/blob/master/django/contrib/admin/options.py#L1477-L1493 def get_inline_formsets(self, request, formsets, inline_instances, obj=None): inline_admin_formsets = [] for inline, formset in zip(inline_instances, formsets): fieldsets = list(inline.get_fieldsets(request, obj)) readonly = list(inline.get_readonly_fields(request, obj)) has_add_permission = inline._has_add_permission(request, obj) has_change_permission = inline.has_change_permission(request, obj) has_delete_permission = inline.has_delete_permission(request, obj) has_view_permission = inline.has_view_permission(request, obj) prepopulated = dict(inline.get_prepopulated_fields(request, obj)) inline_admin_formset = helpers.InlineAdminFormSet( inline, formset, fieldsets, prepopulated, readonly, model_admin=self, has_add_permission=has_add_permission, has_change_permission=has_change_permission, has_delete_permission=has_delete_permission, has_view_permission=has_view_permission, ) inline_admin_formsets.append(inline_admin_formset) return inline_admin_formsets How can we make this code better? Again, ideas please, and no discussion of "which is best". (We can get to that later.) -- Jonathan From desmoulinmichel at gmail.com Sat Sep 8 14:09:16 2018 From: desmoulinmichel at gmail.com (Michel Desmoulin) Date: Sat, 8 Sep 2018 11:09:16 -0700 Subject: [Python-ideas] Pre-conditions and post-conditions In-Reply-To: References: Message-ID: <140891b8-3aef-0991-9421-7479e6a63eb6@gmail.com> Isn't the purpose of "assert" to be able to do design by contract ? assert test, "error message is the test fail" I mean, you just write your test, dev get a feedback on problems, and prod can remove all assert using -o. What more do you need ? Le 15/08/2018 ? 23:06, Marko Ristin-Kaufmann a ?crit?: > Hi, > > I would be very interested to bring design-by-contract into python 3. I > find design-by-contract particularly interesting and indispensable for > larger projects and automatic generation of unit tests. > > I looked at some of the packages found on pypi and also we rolled our > own solution (https://github.com/Parquery/icontract/ > ). I also looked into > https://www.python.org/dev/peps/pep-0316/ > . > > However, all the current solutions seem quite clunky to me. The > decorators involve an unnecessary computational overhead and the > implementation of icontract became quite tricky once we wanted to get > the default values of the decorated function. > > Could somebody update me on the state of the discussion on this matter? > > I'm very grateful for any feedback on this! > > > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > From jfine2358 at gmail.com Sat Sep 8 14:12:55 2018 From: jfine2358 at gmail.com (Jonathan Fine) Date: Sat, 8 Sep 2018 19:12:55 +0100 Subject: [Python-ideas] Improving fn(arg=arg, name=name, wibble=wibble) code In-Reply-To: References: Message-ID: The problem is to improve https://github.com/django/django/blob/master/django/contrib/admin/options.py#L1477-L1493 Here's a suggestion. 
For each of four permissions, the code has an assignment such as has_add_permission = inline._has_add_permission(request, obj) and the function call has four named parameters such as has_add_permission=has_add_permission where the permissions are add, change, deleted and view. Browsing the file, I see something similar in several places, such as https://github.com/django/django/blob/master/django/contrib/admin/options.py#L1136-L1139 'has_view_permission': self.has_view_permission(request, obj), 'has_add_permission': self.has_add_permission(request), 'has_change_permission': self.has_change_permission(request, obj), 'has_delete_permission': self.has_delete_permission(request, obj), So, off the top of my head, have a function get_permissions(item, request, obj) and then simply write ** get_permissions(item, request, obj) in the function call, to pass the permissions to the called function. By the way, for ease of use this is relying on https://www.python.org/dev/peps/pep-0448/ # Additional Unpacking Generalizations which allows multiple **kwargs in a function call. It was implemented in Python 3.5. By the way, Django 1.11 and 2.0 are still supported by Django, and they both support Python 3.4. So we'll have to wait for a bit before Django could accept this suggestion. https://docs.djangoproject.com/en/2.1/faq/install/#faq-python-version-support -- Jonathan From rosuav at gmail.com Sat Sep 8 15:19:31 2018 From: rosuav at gmail.com (Chris Angelico) Date: Sun, 9 Sep 2018 05:19:31 +1000 Subject: [Python-ideas] Fwd: Add Unicode-aware str.reverse() function? In-Reply-To: References: <31d53c32-956d-c1a6-13c2-835e1bc4b7bc@egenix.com> Message-ID: On Sat, Sep 8, 2018 at 11:41 PM, Jonathan Fine wrote: > M.-A. Lemburg wrote: > >> Most likely yes, but they would not render RTL text by first >> switching the direction and then printing them LTR again. >> >> Please also note that switching from LTR to RTL and back again >> is possible within a Unicode string, so applying str.reverse() >> would actually make things worse and not better :-) >> >> Processing in Unicode is always left to right, even if the resulting >> text may actually be rendered right to left or top to bottom. > >> http://www.unicode.org/reports/tr9/ >> https://www.w3.org/International/questions/qa-scripts > > Your reminder of the difficulties in Unicode, and the URLs are much > appreciated. In particular, the keyword 'while' in Arabic should be > written Left-To-Right, even though the ambient text is Left-To-Right. > > I've found these URLs, which suggests that there's a still a problem > to be solved. > > https://www.linkedin.com/pulse/fix-rtl-right-left-support-persian-arabic-text-ubuntu-ghorbani/ > https://askubuntu.com/questions/983480/showing-text-file-content-right-to-left-in-the-terminal > https://github.com/behdad/bicon > > My understanding is that at present it's not straightforward to > provide legible localised text at the Python console, when the locale > language is Arabic, Persian, Hebrew or Urdu. (And another problem is > allow copy and paste of such text.) > > If it is straightforward to provide RTL localisation at the Python > interpreter, I'd very much appreciate being pointed to such a > solution. > Generally, problems with RTL text are *display* problems, and are not solved by reversing strings. I've hardly ever needed to reverse a string, and when it does happen, it's generally for the sake of *parsing*. 
You reverse a string, parse it from left to right, then reverse the result, in order to do a "parse from the right" operation. Never done that in Python, because the situations where that's necessary are (a) rare, and (b) generally suitable for a regex anyway. Does anyone have a really complex parsing job that absolutely cannot be done with a regex, and benefits from reversal? ChrisA From jfine2358 at gmail.com Sat Sep 8 15:34:36 2018 From: jfine2358 at gmail.com (Jonathan Fine) Date: Sat, 8 Sep 2018 20:34:36 +0100 Subject: [Python-ideas] Pre-conditions and post-conditions In-Reply-To: <140891b8-3aef-0991-9421-7479e6a63eb6@gmail.com> References: <140891b8-3aef-0991-9421-7479e6a63eb6@gmail.com> Message-ID: Michel Desmoulin wrote: > Isn't the purpose of "assert" to be able to do design by contract ? > > assert test, "error message is the test fail" > > I mean, you just write your test, dev get a feedback on problems, and > prod can remove all assert using -o. > > What more do you need ? Good question. My opinion is that assert statements are good. I like them. But wait, more is possible. Here are some ideas. 1. Checking the return value (or exception). This is a post-condition. 2. Checking return value, knowing the input values. This is a more sophisticated post-condition. 3. Adding checks around an untrusted function - possibly third party, possibly written in C. 4. Selective turning on and off of checking. The last two, selective checks around untrusted functions, I find particularly interesting. Suppose you have a solid, trusted, well-tested and reliable system. And you add, or change, a function called wibble(). In this situation, errors are most likely to be in wibble(), or in the interface to wibble(). So which checks are most valuable? I suggest the answer is 1. Checks internal to wibble. 2. Pre-conditions and post-conditions for wibble 3. Pre-conditions for any function called by wibble. Suppose wibble calls wobble. We should certainly have the system check wobble's preconditions, in this situation. But we don't need wobble to run checks all the time. Only when the immediate caller is wibble. I think assertions and design-by-contract point in similar directions. But design-by-contract takes you further, and is I suspect more valuable when the system being built is large. Thank you, Michel, for your good question. -- Jonathan From boxed at killingar.net Sat Sep 8 16:01:36 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Sat, 8 Sep 2018 22:01:36 +0200 Subject: [Python-ideas] Improving fn(arg=arg, name=name, wibble=wibble) code In-Reply-To: References: Message-ID: > get_permissions(item, request, obj) > > and then simply write > > ** get_permissions(item, request, obj) > > in the function call, to pass the permissions to the called function. > By the way, for ease of use this is relying on > > https://www.python.org/dev/peps/pep-0448/ # Additional Unpacking > Generalizations > > which allows multiple **kwargs in a function call. It was implemented > in Python 3.5. > > By the way, Django 1.11 and 2.0 are still supported by Django, and > they both support Python 3.4. So we'll have to wait for a bit before > Django could accept this suggestion. A dict merge is fairly trivial to implement to get even 2.7 support so no need to be that restrictive. This is a good fix for this case. 
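For concreteness, here is a minimal sketch of the helper outlined above, reusing the names from the Django snippet (an illustration only, not a tested patch):

    def get_permissions(inline, request, obj):
        # Build the four permission flags once, keyed by the parameter
        # names that helpers.InlineAdminFormSet expects.
        return {
            'has_add_permission': inline._has_add_permission(request, obj),
            'has_change_permission': inline.has_change_permission(request, obj),
            'has_delete_permission': inline.has_delete_permission(request, obj),
            'has_view_permission': inline.has_view_permission(request, obj),
        }

    # The call site then shrinks to a single unpacking:
    inline_admin_formset = helpers.InlineAdminFormSet(
        inline, formset, fieldsets, prepopulated, readonly,
        model_admin=self,
        **get_permissions(inline, request, obj)
    )

Because only a single ** unpacking appears in the call, this particular rewrite does not even need PEP 448's generalizations; returning a plain dict from the helper works on older Pythons as well.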
From jfine2358 at gmail.com Sat Sep 8 16:08:48 2018 From: jfine2358 at gmail.com (Jonathan Fine) Date: Sat, 8 Sep 2018 21:08:48 +0100 Subject: [Python-ideas] Fwd: Add Unicode-aware str.reverse() function? In-Reply-To: References: <31d53c32-956d-c1a6-13c2-835e1bc4b7bc@egenix.com> Message-ID: Chris Angelico wrote: > Generally, problems with RTL text are *display* problems, and are not > solved by reversing strings. I very much agree with this statement, with one exception. If you wish to display RTL text on a LTR display, then a suitable reversing of strings is probably part of the solution. Using Google translate and a Python console I get English: integer Arabic: ??? ???? # Copy and paste from Google translate into gmail Python: ??? ???? The Python output shown here is obtained by 1. Copy and paste from Google translate into Python console. 2. And copied back again into gmail Notice how it looks just the same as the direct translation. So what's the problem? In the Python console, it doesn't look right. Here's what you should get. >>> '??? ????' '\xd8\xb9\xd8\xaf\xd8\xaf \xd8\xb5\xd8\xad\xd9\x8a\xd8\xad' See, the arabic is exactly the same. But when I paste the arabic string into the Python console, I get something that looks quite different, and sort of backwards. The problem, I think, is that the Python console is outputting the arabic glyphs from left to right. By the way, I get the same problem in the bash shell. -- Jonathan From jfine2358 at gmail.com Sat Sep 8 16:11:54 2018 From: jfine2358 at gmail.com (Jonathan Fine) Date: Sat, 8 Sep 2018 21:11:54 +0100 Subject: [Python-ideas] Improving fn(arg=arg, name=name, wibble=wibble) code In-Reply-To: References: Message-ID: Hi Anders You wrote: > A dict merge is fairly trivial to implement to get even 2.7 support so no need to be that restrictive. > This is a good fix for this case. I very much appreciate your openness to solutions other than the one you proposed. I like experts who are willing to change their view, when presented with new evidence. Thank you. -- Jonathan From rosuav at gmail.com Sat Sep 8 16:27:08 2018 From: rosuav at gmail.com (Chris Angelico) Date: Sun, 9 Sep 2018 06:27:08 +1000 Subject: [Python-ideas] Fwd: Add Unicode-aware str.reverse() function? In-Reply-To: References: <31d53c32-956d-c1a6-13c2-835e1bc4b7bc@egenix.com> Message-ID: On Sun, Sep 9, 2018 at 6:08 AM, Jonathan Fine wrote: > Chris Angelico wrote: > >> Generally, problems with RTL text are *display* problems, and are not >> solved by reversing strings. > > I very much agree with this statement, with one exception. If you wish > to display RTL text on a LTR display, then a suitable reversing of > strings is probably part of the solution. That assumes that there is such a thing as an "LTR display". I disagree. :) There are "buggy displays" and there are "flawed displays" and there are "simplistic and naive displays", any or all of which could be limited in what they're able to render (and for the record, there's nothing inherently wrong with a simplistic display); but for those, Arabic text simply won't display correctly. RTL text is just one such problem (other examples include the way that different characters affect each other - an Arabic word is not the same as the abuttal of its individual characters - and the correct wrapping of text that uses joiners and spacers), and perfect Unicode display is *hard*. Improving a rendering engine or console so it's capable of correct RTL display is outside the scope of Python code, generally. 
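A small sketch of that distinction: the codepoints of an RTL string sit in logical (first-to-last) order no matter how a console draws them, and unicodedata exposes the bidirectional class that the display layer's bidi algorithm is supposed to act on. The string below is the Arabic sample from Jonathan's message, reconstructed from the byte escapes he showed and spelled with \u escapes so it survives any terminal:

    import unicodedata

    s = '\u0639\u062f\u062f \u0635\u062d\u064a\u062d'
    for ch in s:
        # 'AL' marks an Arabic letter, 'WS' whitespace; reordering these
        # for display is the renderer's job, the str itself never changes.
        print(hex(ord(ch)), unicodedata.bidirectional(ch), unicodedata.name(ch))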
ChrisA From jfine2358 at gmail.com Sat Sep 8 16:55:26 2018 From: jfine2358 at gmail.com (Jonathan Fine) Date: Sat, 8 Sep 2018 21:55:26 +0100 Subject: [Python-ideas] Fwd: Add Unicode-aware str.reverse() function? In-Reply-To: References: <31d53c32-956d-c1a6-13c2-835e1bc4b7bc@egenix.com> Message-ID: Chris Angelico wrote: > Improving a rendering engine or console so it's capable of correct RTL > display is outside the scope of Python code, generally. I agree with you, generally. But there are over 600 million people who speak a RTL language. About 12% of the world's population. I'd like Python's command line console to work for them. It may be worth making a special effort, and breaking a general rule, here. But we'd have to think carefully about it, and have expert help. I'm beginning to think that, as well as (instead of?) IDLE, a browser based Python command line console might be a good idea. For example, I'm getting reasonable results from using https://brython.info/tests/editor.html?lang=en Perhaps RTL and LTR problems by themselves are not sufficient reason to make a browser-based IDLE. But they should be a significant influence. Something to think about. By the way, IDLE has the same problem. -- Jonathan From rosuav at gmail.com Sat Sep 8 17:01:48 2018 From: rosuav at gmail.com (Chris Angelico) Date: Sun, 9 Sep 2018 07:01:48 +1000 Subject: [Python-ideas] Fwd: Add Unicode-aware str.reverse() function? In-Reply-To: References: <31d53c32-956d-c1a6-13c2-835e1bc4b7bc@egenix.com> Message-ID: On Sun, Sep 9, 2018 at 6:55 AM, Jonathan Fine wrote: > Chris Angelico wrote: > >> Improving a rendering engine or console so it's capable of correct RTL >> display is outside the scope of Python code, generally. > > I agree with you, generally. > > But there are over 600 million people who speak a RTL language. About > 12% of the world's population. I'd like Python's command line console > to work for them. That's fine - but adding methods to Python won't change it. It's a console change, not a language change. > I'm beginning to think that, as well as (instead of?) IDLE, a browser > based Python command line console might be a good idea. For example, > I'm getting reasonable results from using > > https://brython.info/tests/editor.html?lang=en > > Perhaps RTL and LTR problems by themselves are not sufficient reason > to make a browser-based IDLE. But they should be a significant > influence. Something to think about. > > By the way, IDLE has the same problem. Have you tried out ipython / Jupyter? It might be what you're looking for. (I haven't tried it on this.) ChrisA From Richard at Damon-Family.org Sat Sep 8 17:20:17 2018 From: Richard at Damon-Family.org (Richard Damon) Date: Sat, 8 Sep 2018 17:20:17 -0400 Subject: [Python-ideas] Fwd: Add Unicode-aware str.reverse() function? In-Reply-To: References: <31d53c32-956d-c1a6-13c2-835e1bc4b7bc@egenix.com> Message-ID: <8eb0b8f8-6039-0cd9-7614-866736b00d64@Damon-Family.org> On 9/8/18 4:55 PM, Jonathan Fine wrote: > Chris Angelico wrote: > >> Improving a rendering engine or console so it's capable of correct RTL >> display is outside the scope of Python code, generally. > I agree with you, generally. > > But there are over 600 million people who speak a RTL language. About > 12% of the world's population. I'd like Python's command line console > to work for them. > > It may be worth making a special effort, and breaking a general rule, > here. But we'd have to think carefully about it, and have expert help. 
> > I'm beginning to think that, as well as (instead of?) IDLE, a browser > based Python command line console might be a good idea. For example, > I'm getting reasonable results from using > > https://brython.info/tests/editor.html?lang=en > > Perhaps RTL and LTR problems by themselves are not sufficient reason > to make a browser-based IDLE. But they should be a significant > influence. Something to think about. > > By the way, IDLE has the same problem. > I would say that this shows that the problem isn't a need for a Unicode-aware string reverse, as that won't handle the problem (and is in someways the easiest part of the problem). The issue is that the string is quite likely a combination of LTR and RTL codes, so you perhaps want a functions to convert a Unicode string and process it so the requested glyphs are now all in a LTR order (perhaps even adding the override codes to the string so if the display DOES know how to handle RTL text knows it isn't supposed to change the order). Unicode is complicated, and one big question is how much support for its complexity should be built into the language and the basic types. Currently it is a fairly basic support (mostly just for codepoints). It could make sense to have a Unicode package that knows a lot more of the complexity of Unicode, doing things like extraction a code point package that represents a full glyph knowing all the combining rules, and maybe processing directional rendering like the above problem. -- Richard Damon From greg.ewing at canterbury.ac.nz Sat Sep 8 19:08:19 2018 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sun, 09 Sep 2018 11:08:19 +1200 Subject: [Python-ideas] Fwd: Add Unicode-aware str.reverse() function? In-Reply-To: References: Message-ID: <5B945663.9010204@canterbury.ac.nz> Stephan Houben wrote: > To be honest, quite apart from the Unicode issue, I never had a need to > reverse a string in real code. Yeah, seems to me it would only be useful if you were working on some kind of word game such as a palindrome generator, or if your string represents something other than natural language text (in which case all the tricky unicode stuff probably doesn't apply anyway). For such a rare requirement, maybe a module on PyPI would be a better solution than adding a string method. -- Greg From mertz at gnosis.cx Sat Sep 8 19:23:11 2018 From: mertz at gnosis.cx (David Mertz) Date: Sat, 8 Sep 2018 19:23:11 -0400 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <50CCE12B-2A9A-4EC9-82B2-96865E624654@killingar.net> References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <5B91AC8B.9030909@canterbury.ac.nz> <20180908094150.GK27312@ando.pearwood.info> <1ABCDBAD-D477-40F2-AC1B-27E50F921E43@killingar.net> <9F645217-9A74-4EFD-9AA9-2D090634C107@killingar.net> <56D8E592-9A88-4FCF-809E-9484B89441BE@killingar.net> <50CCE12B-2A9A-4EC9-82B2-96865E624654@killingar.net> Message-ID: > > def foo(): > a, b, c = 1, 2, 3 > function(a=77, **use('b d')) > > foo() > You get the output ?77 None 33 None?. So basically it doesn?t work at all. > For the reason I wrote clearly in my last mail. You have to dig through the > stack frames to make use() work. > OK, you are right. Improved implementation. Still not very hard. In any case, I'm concerned with the API to *use* the `use()` function, not how it's implemented. 
The point is really just that we can accomplish the same thing you want without syntax added. >>> import inspect >>> def reach(name): ... for f in inspect.stack(): ... if name in f[0].f_locals: ... return f[0].f_locals[name] ... return None ... >>> def use(names): ... kws = {} ... for name in names.split(): ... kws[name] = reach(name) ... return kws ... >>> def function(a=11, b=22, c=33, d=44): ... print(a, b, c, d) ... >>> function(a=77, **use('b d')) 77 None 33 None >>> def foo(): ... a, b, c = 1, 2, 3 ... function(a=77, **use('b d')) ... >>> foo() 77 2 33 None -------------- next part -------------- An HTML attachment was scrubbed... URL: From benlewisj at gmail.com Sat Sep 8 19:40:23 2018 From: benlewisj at gmail.com (Ben Lewis) Date: Sun, 9 Sep 2018 11:40:23 +1200 Subject: [Python-ideas] Improving fn(arg=arg, name=name, wibble=wibble) code In-Reply-To: References: Message-ID: > > It?s obvious but there is one easy way to shorten the code: using > **kwargs. It?s way shorter but the down sides are: - the ?real? function signature gets hidden so IDEs for example won?t pick > it up > - the error when you make a mistake when calling is not in your code > anymore but one level down. This is confusing. One could imagine solving this specific case by having a > type annotation of ?this function has the types of that function?. Maybe: > def _open(*args: args_of_(sync_open), **kwargs: kwargs_of(sync_open) -> > return_of(sync_open): But of course this only solves the case where there > is a 1:1 mapping. / Anders These problems could be solved by a decorator that accepts string representation of the signature. The decorator would then have to parse the signature at importing time and set it to the __signature__ attribute on the resultant function. This decorator would also need to bind the arguments e.g. sig.bind(*args, **kwargs), to handle out of order positional arguments. Therefore this would raise an error in the decorator, essentially solving your second point. This would make the example look like this, a lot clearer in my opionion: @signature('''(file, mode='r', buffering=-1, encoding=None, errors=None, newline=None, closefd=True, opener=None, *, loop=None, executor=None)''') def open(*args, **kwargs): return AiofilesContextManager(_open(*args, **kwargs)) @asyncio.coroutine def _open(*args, loop=None, executor=None, **kwargs): """Open an asyncio file.""" if loop is None: loop = asyncio.get_event_loop() cb = partial(sync_open, *args, **kwargs) f = yield from loop.run_in_executor(executor, cb) return wrap(f, loop=loop, executor=executor) Ben Lewis -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at pearwood.info Sun Sep 9 01:19:01 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Sun, 9 Sep 2018 15:19:01 +1000 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <5B91AC8B.9030909@canterbury.ac.nz> <20180908094150.GK27312@ando.pearwood.info> Message-ID: <20180909051901.GP27312@ando.pearwood.info> On Sat, Sep 08, 2018 at 12:05:33PM +0100, Jonathan Fine wrote: > Steve wrote: > > > With the usual disclaimer that I understand it will never be manditory > > to use this syntax, nevertheless I can see it leading to the "foolish > > consistency" quote from PEP 8. 
> > > "We have syntax to write shorter code, shorter code is better, > > so if we want to be Pythonic we must design our functions to use > > the same names for local variables as the functions we call." > > > -- hypothetical blog post, Stackoverflow answer, > > opinionated tutorial, etc. > > > I don't think this is a pattern we want to encourage. > > Steve's "hypothetical blog post" is a pattern he doesn't like, and he > said that it's not a pattern we want to encourage. And he proceeds to > demolish this pattern, in the rest of his post. > > According to https://en.wikipedia.org/wiki/Straw_man This is called Poisoning the Well. You have carefully avoided explicitly accusing me of making a straw man argument while nevertheless making a completely irrelevant mention of it, associating me with the fallacy. That is not part of an honest or open discussion. Anders made a proposal for a change in syntax. I made a prediction of the possible unwelcome consequences of that suggested syntax. In no way, shape or form is that a straw man. To give an analogy: Politician A: "We ought to invade Iranistan, because reasons." Politician B: "If we do that, it will cost a lot of money, people will die, we'll bring chaos to the region leading to more terrorism, we might not even accomplish our aims, and our international reputation will be harmed." Politician A: "That's a straw-man! I never argued for those bad things. I just want to invade Iranistan." Pointing out unwelcome consequences of a proposal is not a Straw Man. -- Steve From steve at pearwood.info Sun Sep 9 01:14:19 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Sun, 9 Sep 2018 15:14:19 +1000 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <23721268-885B-45EF-8ACF-5F9E22FAC905@killingar.net> References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <20180908170246.GL27312@ando.pearwood.info> <23721268-885B-45EF-8ACF-5F9E22FAC905@killingar.net> Message-ID: <20180909051419.GO27312@ando.pearwood.info> On Sat, Sep 08, 2018 at 09:54:59PM +0200, Anders Hovm?ller wrote: [...] [Steven (me)] > > If we believe that this consistency is desirable then maybe this would > > be a good thing. Linters could warn when you use "name=spam" instead of > > "*, name"; style guides can demand that code always uses this idiom > > whenever practical, tutorials and blog posts will encourage it, and the > > peer pressure to rename variables to match the called function's > > parameters would be a good thing too. > > That?s the same straw man argument as before as far as I can tell. [...] Enough with the false accusations of straw-manning. I am stating my prediction of the consequence of your proposal. Just because you dislike the conclusion I draw doesn't make it a straw-man, and your repeated false accusations (whether intentional or not) amount to Poisoning The Well. "Steven's arguments are straw-men, so we don't need to address them or pay attention to him." Except it isn't a straw-man. You might disagree, which is your right, you might even just dismiss them as "well that's your opinion" (it *is* my opinion, but I hope a reasoned, logical one), you might even think that the consequences I state are a good thing. But stop trying to poison my reputation by labelling me a straw-manner. > > But if consistency for consistency's sake is not generally a good thing, > > then we ought not to add such syntax just for conciseness. 
> > But conciseness for conciseness sake is just as not-good and we do > have special syntax for that: positional arguments. I?m proposing to > level the playing field. For short argument lists, there's little or nothing wrong with positional arguments. In fact, I would say that positional arguments are, in some circumstances, far better: len(obj) versus len(target=obj) mylist.append(item) versus mylist.append(obj=item) My argument isn't about making absolute judgements of what is good or bad in all cases. It is about weighing up the benefit in some cases (which I have acknowledged) against the disadvantage in other cases. [...] > > Of course I understand that with this proposal, there's nothing > > *forcing* people to use it. But it shifts the *preferred* idiom from > > explicit "name=spam" to implicit "*, name" and puts the onus on people > > to justify why they aren't naming their local variables the same as the > > function parameter, instead of treating "the same name" as just another > > name. > > Did you just argue that my proposed syntax is so great it?s going to > become the preferred idiom? :) No. I argued that your proposed syntax could become the perferred idiom because it is short and concise and people mistake that for "great". I'm not blind to the advantage. If I had to make many function calls that looked like this: result = function(spam=spam, eggs=eggs, cheese=cheese, foo=foo, bar=bar, baz=baz, fe=fe, fi=fi, fo=fo, fum=fum) I'd be sick of it too and want something better. But we have to balance many competing and often contradictory interests, and I don't think you have made your case that the pain of the above is worse than the (alleged) benefit of being able to write it in a more terse, implicit way: result = function(*, spam, eggs, cheese, foo, bar, baz, fe, fi, fo, fum) is shorter, but is it really better? I don't think you've made that case. But many people do seem to think shorter is better as a general rule. > > but that active preference for long parameter lists seems to be a very > > rare, more common is the view that *at best* long parameter lists is a > > necessary evil that needs mitigation. > > I think that?s overly doom and gloomy. Just look at all the arguments > for open(). Yes, open() is a good example of how long parameter lists are not necessarily a bad thing. With sensible defaults, probably 95% of calls to open need no more than one or two arguments. But can you remember the Bad Old Days before programming languages supported default values? I do. Now imagine making a call to open() with eight required arguments. > In any case I?m proposing a way to mitigate the pain in a subset of > cases (probably not the case of open()!). Yes, I acknowledge that. Just because I sympathise with your pain doesn't mean I want to see your syntax added to the language. > > certainly wouldn't want to put a hard limit on the number of > > parameters allowed. But in general, I think it is unquestionable that > > long parameter lists are a code-smell. > > Or a domain smell. Some domains are smelly. Indeed. > > It is also relevant in this sense. Large, complex function calls are > > undoubtably painful. We have mitigated that pain somewhat by various > > means, probably the best of which are named keyword arguments, and > > sensible default values. The unintended consequence of this is that it > > has reduced the pressure on developers to redesign their code to avoid > > long function signatures, leading to more technical debt in the long > > run. > > So... 
you?re arguing Python would have been better off without keyword > arguments and default values? Not at all. As I already stated, keyword arguments are great, and I'm not arguing for hair-shirt programming where we intentionally make things as painful as possible. I'll make an analogy here. Pain, real physical pain, is bad, and it is both kind and medically advantagous to reduce it when necessary. But *necessary* is the key word, because sometimes pain is an important biological signal that says Don't Do That. People with no pain receptors tend to die young because they repeatedly injure themselves and don't even know it. Good doctors make careful judgement about the minimum amount of painkillers needed, and don't hand out morphine for stubbed toes because the consequences will be worse than the benefit gained. The analogy in programming follows: reducing immediate pain can, sometimes, cause long-term pain in the form of technical debt, which is worse. It takes a careful balancing act to decide when it is appropriate to take on technical debt in order to avoid short-term annoyance. We cannot hope to make such a sensible decision if we focus only on the immediate relief and not on the long term consequences. > > Your suggestion would also reduce the pain of functions that require > > many arguments. That is certainly good news if the long argument list is > > *truly necessary* but it does nothing to reduce the amount of complexity > > or technical debt. > > The unintended consequence is likewise that it > > reduces the pressure on developers to avoid designing such functions in > > the first place. > > > > This might sound like I am a proponent of hair-shirt programming where > > everything is made as painful as possible so as to force people to > > program the One True Way. That's not my intention at all. I love my > > syntactic sugar as much as the next guy. But I'd rather deal with the > > trap of technical debt and excessive complexity by avoiding it in the > > first place, not by making it easier to fall into. > > But you?ve only proposed or implied ways to avoid it by force and > pain. I don't think the pain is all that great. Its a minor annoyance to write keyword arguments like parameter=parameter, and a good IDE or editor can help there. But I acknowledge there is some pain, and the more you need to do this, the more you feel it. But in any case, you made a proposal. I don't have to come up with a better proposal before I am permitted to argue against yours. "We must do something. This is something, therefore we must do it." is never a valid argument. > And you?re arguing my suggestion is bad because it lowers pain. > So I think you?re contradicting yourself here. That?s fine but you > should admit it and call it a trade off at least. I have done nothing but call it a trade off! I've repeatedly argued that in my opinion the benefit (which I have repeatedly acknowledged!) doesn't outweigh the (predicted) costs as I see them. [...] > >> There are actual APIs that have lots of arguments. GUI toolkits are a great > >> example. Another great example is to send a context dict to a template > >> engine. > > > > Indeed. And I'm sympathetic that some tasks are inherently complex and > > require many arguments. Its a matter of finding a balance between being > > able to use them, without encouraging them. > > What does that mean? We shouldn?t encourage GUI toolkits for Python? > Or we shouldn?t encourage working with complex domains? 
You probably > meant to say something else but it came across weirdly. Sorry for being unclear. I meant that we shouldn't encourage complex, multi-argument functions, while still accepting that sometimes they are unavoidable (and even occasionally a good thing). > > You claimed the benefit of "conciseness", but that doesn't actually > > exist unless your arguments are already local variables named the same > > as the parameters of the function you are calling. > > Which they often already are. Again: I have numbers. You do not. I don't dispute the numbers you get from your code base. You should be careful about generalising from your code to other people's, especially if you haven't seen their code. > > Pointing out that weakness in your argument is not a straw man. > > You did notice that not just me called it a straw man on this list right? Yes I did. Many people call the world flat too. What's your point? -- Steve From steve at pearwood.info Sun Sep 9 01:29:07 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Sun, 9 Sep 2018 15:29:07 +1000 Subject: [Python-ideas] Add Unicode-aware str.reverse() function? In-Reply-To: References: Message-ID: <20180909052907.GQ27312@ando.pearwood.info> On Sat, Sep 08, 2018 at 04:33:07AM -0700, Paddy3118 wrote: > I wrote a blog post > nearly > a decade ago on extending a Rosetta Code task example > to handle the correct > reversal of strings with combining characters. I wouldn't care too much about a dedicated "reverse" method that handled combining characters. I think that's just a special case of iterating over graphemes. If we can iterate over graphemes, then reversing because trivial: ''.join(reversed(mystring.graphemes())) The Unicode Consortium offer an algorithm for identifying grapheme clusters in text strings, and there's at least three requests on the tracker (one closed, two open). https://bugs.python.org/issue30717 https://bugs.python.org/issue18406 https://bugs.python.org/issue12733 -- Steve From bruce at leban.us Sun Sep 9 01:30:33 2018 From: bruce at leban.us (Bruce Leban) Date: Sat, 8 Sep 2018 22:30:33 -0700 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <20180909051419.GO27312@ando.pearwood.info> References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <20180908170246.GL27312@ando.pearwood.info> <23721268-885B-45EF-8ACF-5F9E22FAC905@killingar.net> <20180909051419.GO27312@ando.pearwood.info> Message-ID: The proposal is to eliminate the redundancy of writing name=name repeatedly. But IMHO it doesn't do that consistently. That is it allows both forms mixed together, e.g., f(*, a, b, c=x, d, e) I believe this is confusing in part because the * need not be near the arguments it affects. Consider open(*, name='temp.txt', mode='r', buffering=-1, encoding='utf-8', errors) By the time you get to the last arg, you'll have forgotten the * at the beginning. If Python were to adopt a syntax to address this, I think it should be something like f(=a, =b, c=x, =d, =e) open(name='temp.txt', mode='r', buffering=-1, encoding='utf-8', =errors) Where an = in an argument list without a name in front of it uses the symbol on the right hand side as the name. --- Bruce -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From boxed at killingar.net Sun Sep 9 01:37:21 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Sun, 9 Sep 2018 07:37:21 +0200 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <20180909051901.GP27312@ando.pearwood.info> References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <5B91AC8B.9030909@canterbury.ac.nz> <20180908094150.GK27312@ando.pearwood.info> <20180909051901.GP27312@ando.pearwood.info> Message-ID: > You have carefully avoided explicitly accusing me of making a straw man > argument while nevertheless making a completely irrelevant mention of > it, associating me with the fallacy. I read that as him accusing you very directly. > That is not part of an honest or open discussion. > > Anders made a proposal for a change in syntax. I made a prediction of > the possible unwelcome consequences of that suggested syntax. In no way, > shape or form is that a straw man. You kept saying I was ?forcing? to use the new syntax. You said it over and over even after we pointed out this was not the actual suggestion. This is classic straw man. But ok, let?s be more charitable and interpret it as you wrote it later: that it won?t be forcing per se, but that the feature will be *so compelling* it will be preferred at all times over both normal keyword arguments *and* positional arguments. For someone who doesn?t like the proposal you seem extremely convinced that everyone else will think it?s so super awesome they will actually try to force it on their colleagues etc. I like my proposal obviously but even I don?t think it?s *that* great. It would almost certainly become the strongly preferred way to do it for some cases like .format() and sending a context to a template renderer in web apps. But that?s because in those cases it is very important to match the names. / Anders From rosuav at gmail.com Sun Sep 9 02:15:06 2018 From: rosuav at gmail.com (Chris Angelico) Date: Sun, 9 Sep 2018 16:15:06 +1000 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <5B91AC8B.9030909@canterbury.ac.nz> <20180908094150.GK27312@ando.pearwood.info> <20180909051901.GP27312@ando.pearwood.info> Message-ID: On Sun, Sep 9, 2018 at 3:37 PM, Anders Hovm?ller wrote: > >> You have carefully avoided explicitly accusing me of making a straw man >> argument while nevertheless making a completely irrelevant mention of >> it, associating me with the fallacy. > > I read that as him accusing you very directly. > >> That is not part of an honest or open discussion. >> >> Anders made a proposal for a change in syntax. I made a prediction of >> the possible unwelcome consequences of that suggested syntax. In no way, >> shape or form is that a straw man. > > You kept saying I was ?forcing? to use the new syntax. You said it over and over even after we pointed out this was not the actual suggestion. This is classic straw man. > Creating a new and briefer syntax for something is not actually *forcing* people to use it, but it is an extremely strong encouragement. It's the language syntax yelling "HERE! DO THIS!". 
I see it all the time in JavaScript, where ES2015 introduced a new syntax {name} equivalent to {"name":name} - people will deliberately change their variable names to match the desired object keys. So saying "forcing" is an exaggeration, but a very slight one. ChrisA From boxed at killingar.net Sun Sep 9 03:32:12 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Sun, 9 Sep 2018 09:32:12 +0200 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <5B91AC8B.9030909@canterbury.ac.nz> <20180908094150.GK27312@ando.pearwood.info> <20180909051901.GP27312@ando.pearwood.info> Message-ID: > I see it all the time in JavaScript, where ES2015 introduced a new > syntax {name} equivalent to {"name":name} - people will deliberately > change their variable names to match the desired object keys. So > saying "forcing" is an exaggeration, but a very slight one. Do you have an opinion or feeling about if those synchronizations are for good, neutral or evil? / Anders From rosuav at gmail.com Sun Sep 9 05:26:25 2018 From: rosuav at gmail.com (Chris Angelico) Date: Sun, 9 Sep 2018 19:26:25 +1000 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <5B91AC8B.9030909@canterbury.ac.nz> <20180908094150.GK27312@ando.pearwood.info> <20180909051901.GP27312@ando.pearwood.info> Message-ID: On Sun, Sep 9, 2018 at 5:32 PM, Anders Hovm?ller wrote: > >> I see it all the time in JavaScript, where ES2015 introduced a new >> syntax {name} equivalent to {"name":name} - people will deliberately >> change their variable names to match the desired object keys. So >> saying "forcing" is an exaggeration, but a very slight one. > > Do you have an opinion or feeling about if those synchronizations are for good, neutral or evil? Often neutral, sometimes definitely evil. Pretty much never good. That said, my analysis is skewed towards the times when (as an instructor) I am asked to assist - the times when a student has run into trouble. But even compensating for that, I would say that the balance still tips towards the bad. ChrisA From steve at pearwood.info Sun Sep 9 08:51:29 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Sun, 9 Sep 2018 22:51:29 +1000 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <5B91AC8B.9030909@canterbury.ac.nz> <20180908094150.GK27312@ando.pearwood.info> <20180909051901.GP27312@ando.pearwood.info> Message-ID: <20180909125129.GR27312@ando.pearwood.info> On Sun, Sep 09, 2018 at 07:37:21AM +0200, Anders Hovm?ller wrote: > > > You have carefully avoided explicitly accusing me of making a straw man > > argument while nevertheless making a completely irrelevant mention of > > it, associating me with the fallacy. > > I read that as him accusing you very directly. Okay. > > That is not part of an honest or open discussion. > > > > Anders made a proposal for a change in syntax. 
I made a prediction of > > the possible unwelcome consequences of that suggested syntax. In no way, > > shape or form is that a straw man. > > You kept saying I was ?forcing? to use the new syntax. You said it > over and over even after we pointed out this was not the actual > suggestion. This is classic straw man. Over and over again, you say. Then it should be really easy for you to link to a post from me saying that. I've only made six posts in this thread (seven including this one) so it should only take you a minute to justify (or retract) your accusation: https://mail.python.org/pipermail/python-ideas/2018-September/author.html Here are a couple of quotes to get you started: Of course I understand that with this proposal, there's nothing *forcing* people to use it. https://mail.python.org/pipermail/python-ideas/2018-September/053282.html With the usual disclaimer that I understand it will never be manditory [sic] to use this syntax ... https://mail.python.org/pipermail/python-ideas/2018-September/053257.html > But ok, let?s be more charitable and interpret it as you wrote it > later: that it won?t be forcing per se, but that the feature will be > *so compelling* it will be preferred at all times over both normal > keyword arguments *and* positional arguments. Vigorous debate is one thing. Misrepresenting my position is not. This isn't debate club where the idea is to win by any means, including by ridiculing exaggerated versions of the other side's argument. (There's a name for that fallacy, you might have heard of it.) We're supposed to be on the same side, trying to determine what is the best features for the language. We don't have to agree on what those features are, but we do have to agree to treat each other's position with fairness. -- Steve From mertz at gnosis.cx Sun Sep 9 08:57:29 2018 From: mertz at gnosis.cx (David Mertz) Date: Sun, 9 Sep 2018 08:57:29 -0400 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <20180909125129.GR27312@ando.pearwood.info> References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <5B91AC8B.9030909@canterbury.ac.nz> <20180908094150.GK27312@ando.pearwood.info> <20180909051901.GP27312@ando.pearwood.info> <20180909125129.GR27312@ando.pearwood.info> Message-ID: Can we all just PLEASE stop the meta-arguments enumerating logical fallacies and recriminating about who made it personal first?! Yes, let's discuss specific proposals and alternatives, and so on. If someone steps out of line of being polite and professional, just ignore it. On Sun, Sep 9, 2018, 8:52 AM Steven D'Aprano wrote: > On Sun, Sep 09, 2018 at 07:37:21AM +0200, Anders Hovm?ller wrote: > > > > > You have carefully avoided explicitly accusing me of making a straw > man > > > argument while nevertheless making a completely irrelevant mention of > > > it, associating me with the fallacy. > > > > I read that as him accusing you very directly. > > Okay. > > > > > That is not part of an honest or open discussion. > > > > > > Anders made a proposal for a change in syntax. I made a prediction of > > > the possible unwelcome consequences of that suggested syntax. In no > way, > > > shape or form is that a straw man. > > > > You kept saying I was ?forcing? to use the new syntax. You said it > > over and over even after we pointed out this was not the actual > > suggestion. This is classic straw man. 
> > Over and over again, you say. Then it should be really easy for you to > link to a post from me saying that. I've only made six posts in this > thread (seven including this one) so it should only take you a minute to > justify (or retract) your accusation: > > https://mail.python.org/pipermail/python-ideas/2018-September/author.html > > Here are a couple of quotes to get you started: > > Of course I understand that with this proposal, there's nothing > *forcing* people to use it. > > https://mail.python.org/pipermail/python-ideas/2018-September/053282.html > > > With the usual disclaimer that I understand it will never be > manditory [sic] to use this syntax ... > > https://mail.python.org/pipermail/python-ideas/2018-September/053257.html > > > > But ok, let?s be more charitable and interpret it as you wrote it > > later: that it won?t be forcing per se, but that the feature will be > > *so compelling* it will be preferred at all times over both normal > > keyword arguments *and* positional arguments. > > Vigorous debate is one thing. Misrepresenting my position is not. > > This isn't debate club where the idea is to win by any means, including > by ridiculing exaggerated versions of the other side's argument. > (There's a name for that fallacy, you might have heard of it.) > > We're supposed to be on the same side, trying to determine what is the > best features for the language. We don't have to agree on what those > features are, but we do have to agree to treat each other's position > with fairness. > > > > -- > Steve > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From boxed at killingar.net Mon Sep 10 02:05:22 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Mon, 10 Sep 2018 08:05:22 +0200 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <20180909051419.GO27312@ando.pearwood.info> References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <20180908170246.GL27312@ando.pearwood.info> <23721268-885B-45EF-8ACF-5F9E22FAC905@killingar.net> <20180909051419.GO27312@ando.pearwood.info> Message-ID: <61661E38-1B84-4194-B67D-F41D959E89DC@killingar.net> I just realized I have another question for you: If you had to chose which one would you prefer: f(*, a b, c) or: f(=a, =b, =c) ? I know you?re clearly against the entire idea but it seems we should prefer the least disliked alternative in such a scenario. From paddy3118 at gmail.com Mon Sep 10 03:15:48 2018 From: paddy3118 at gmail.com (Paddy3118) Date: Mon, 10 Sep 2018 00:15:48 -0700 (PDT) Subject: [Python-ideas] Add Unicode-aware str.reverse() function? In-Reply-To: <20180909052907.GQ27312@ando.pearwood.info> References: <20180909052907.GQ27312@ando.pearwood.info> Message-ID: <5ac32004-7525-4f48-848c-65d1e53ab20c@googlegroups.com> On Sunday, 9 September 2018 06:30:19 UTC+1, Steven D'Aprano wrote: > > On Sat, Sep 08, 2018 at 04:33:07AM -0700, Paddy3118 wrote: > > I wrote a blog post > > < > http://paddy3118.blogspot.com/2009/07/case-of-disappearing-over-bar.html>nearly > > > a decade ago on extending a Rosetta Code task example > > to handle the correct > > reversal of strings with combining characters. 
> > I wouldn't care too much about a dedicated "reverse" method that handled > combining characters. I think that's just a special case of iterating > over graphemes. If we can iterate over graphemes, then reversing because > trivial: > > ''.join(reversed(mystring.graphemes())) > > The Unicode Consortium offer an algorithm for identifying grapheme > clusters in text strings, and there's at least three requests on the > tracker (one closed, two open). > > https://bugs.python.org/issue30717 > > https://bugs.python.org/issue18406 > > https://bugs.python.org/issue12733 > Well that ends this idea! Thanks Steve :-) -------------- next part -------------- An HTML attachment was scrubbed... URL: From marko.ristin at gmail.com Mon Sep 10 03:29:48 2018 From: marko.ristin at gmail.com (Marko Ristin-Kaufmann) Date: Mon, 10 Sep 2018 09:29:48 +0200 Subject: [Python-ideas] Pre-conditions and post-conditions In-Reply-To: References: <140891b8-3aef-0991-9421-7479e6a63eb6@gmail.com> Message-ID: Hi, I implemented the inheritance via meta classes and function and class attributes for pre/postconditions and invariants, respectively. Unless I missed something, this is as far as we can go without the proper language support with a library based on decorators: https://github.com/Parquery/icontract (version 1.5.0) Note that it is actually a complete implementation of design-by-contract that supports both weakening of the preconditions and strengthening of the postconditions and invariants. Could you please have a look and let me know what you think about the current implementation? Once we are sure that there is nothing obvious missing, I'd like to move forward and discuss whether we could add this library (or rewrite it) into the standard Python libraries and what needs to be all fixed till to make it that far. Cheers, Marko On Sat, 8 Sep 2018 at 21:34, Jonathan Fine wrote: > Michel Desmoulin wrote: > > > Isn't the purpose of "assert" to be able to do design by contract ? > > > > assert test, "error message is the test fail" > > > > I mean, you just write your test, dev get a feedback on problems, and > > prod can remove all assert using -o. > > > > What more do you need ? > > Good question. My opinion is that assert statements are good. I like them. > > But wait, more is possible. Here are some ideas. > > 1. Checking the return value (or exception). This is a post-condition. > > 2. Checking return value, knowing the input values. This is a more > sophisticated post-condition. > > 3. Adding checks around an untrusted function - possibly third party, > possibly written in C. > > 4. Selective turning on and off of checking. > > The last two, selective checks around untrusted functions, I find > particularly interesting. > > Suppose you have a solid, trusted, well-tested and reliable system. > And you add, or change, a function called wibble(). In this situation, > errors are most likely to be in wibble(), or in the interface to > wibble(). > > So which checks are most valuable? I suggest the answer is > > 1. Checks internal to wibble. > > 2. Pre-conditions and post-conditions for wibble > > 3. Pre-conditions for any function called by wibble. > > Suppose wibble calls wobble. We should certainly have the system check > wobble's preconditions, in this situation. But we don't need wobble to > run checks all the time. Only when the immediate caller is wibble. > > I think assertions and design-by-contract point in similar directions. 
> But design-by-contract takes you further, and is I suspect more > valuable when the system being built is large. > > Thank you, Michel, for your good question. > > -- > Jonathan > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at pearwood.info Mon Sep 10 08:24:10 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Mon, 10 Sep 2018 22:24:10 +1000 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <61661E38-1B84-4194-B67D-F41D959E89DC@killingar.net> References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <20180908170246.GL27312@ando.pearwood.info> <23721268-885B-45EF-8ACF-5F9E22FAC905@killingar.net> <20180909051419.GO27312@ando.pearwood.info> <61661E38-1B84-4194-B67D-F41D959E89DC@killingar.net> Message-ID: <20180910122410.GX27312@ando.pearwood.info> On Mon, Sep 10, 2018 at 08:05:22AM +0200, Anders Hovm?ller wrote: > I just realized I have another question for you: Is that a generic you or just me personally? :-) > If you had to chose which one would you prefer: > > f(*, a b, c) > > or: > > f(=a, =b, =c) > > ? I can't say I particularly like either syntax, but if I had a gun pointed at my head and had to choose one, I'd *probably* prefer something like this: open("myfile.txt", 'r', =buffering, =encoding, =errors, newline='\r', =closefd, opener=get_opener()) over this: open("myfile.txt", 'r', *, buffering, encoding, errors, newline='\r', closefd, opener=get_opener()) In the second case, the * is too easy to miss, leaving us with what looks like a confusing mix of positional and keyword arguments. Hmmm... a thought comes to mind... Suppose we had a wild-card symbol that simply told the parser to use the same string as that on the left hand side of the = sign, that might be promising. For the sake of illustration, I'm going to use the Unicode Snowman symbol, but that's just a place-holder. ? copies the token from the left hand side of the = to the right hand side. It is a syntax error if it doesn't follow an = sign. Then we could be (slightly) concise AND explicit: open("myfile.txt", 'r', buffering=?, encoding=?, errors=?, newline='\r', closefd=?, opener=get_opener()) would expand to: open("myfile.txt", 'r', buffering=buffering, encoding=encoding, errors=errors, newline='\r', closefd=closefd, opener=get_opener()) But honestly, this feels Perlish to me, or something that belongs in a personal pre-processor rather than the language. (Maybe we should embrace the concept of a Python pre-processor?) > I know you?re clearly against the entire idea but it seems we should > prefer the least disliked alternative in such a scenario. I accept this is a pain point for some people and some code bases. I'm not convinced it is painful enough to need a language-wide syntactic solution (but I don't have any better solution except "refactor until the pain goes away"), or that the benefit outweighs the real and potential costs. I wouldn't quite describe it as being "against the entire idea". It's not that I'm against the very idea of improving parameter handling in Python. I just don't think this is the way. 
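For comparison, about the closest one can get today without new syntax is a variation on David's use() helper that passes the namespace explicitly instead of walking the stack. A rough sketch (grab() and the variable names are invented purely for illustration):

    def grab(namespace, *names):
        # Pick the requested names out of an explicit mapping such as
        # locals(), so each name is spelled once at the call site.
        return {name: namespace[name] for name in names}

    buffering, encoding, errors, closefd = -1, 'utf-8', 'strict', True
    kwargs = grab(locals(), 'buffering', 'encoding', 'errors', 'closefd')
    # open("myfile.txt", "r", newline="\r", **kwargs) then mirrors the call
    # above without repeating any of the four names.

It is explicit about where the values come from, but it gives up the static checking that spelling out buffering=buffering keeps: a typo in one of the strings only surfaces at run time.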
-- Steve From chris.barker at noaa.gov Mon Sep 10 15:52:14 2018 From: chris.barker at noaa.gov (Chris Barker) Date: Mon, 10 Sep 2018 21:52:14 +0200 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <5B91AC8B.9030909@canterbury.ac.nz> <20180908094150.GK27312@ando.pearwood.info> <20180909051901.GP27312@ando.pearwood.info> Message-ID: On Sun, Sep 9, 2018 at 7:37 AM, Anders Hovm?ller wrote: I've spent this whole thread thinking: "who in the world is writing code with a lot of spam=spam arguments? If you are transferring that much state in a function call, maybe you should have a class that holds that state? Or pass in a **kwargs dict? Note: I write a lot of methods (mostly __init__) with a lot of keyword parameters -- but they all tend have sensible defaults, and/or will have many values specified by literals. Then this: > It would almost certainly become the strongly preferred way to do it for > some cases like .format() and sending a context to a template renderer in > web apps. But that?s because in those cases it is very important to match > the names. OK -- those are indeed good use cases, but: for .format() -- that's why we now have f-strings -- done. for templates -- are you really passing all that data in from a bunch of variables?? as opposed to, say, a dict? That strikes me as getting code and data confused (which is sometimes hard not to do...) So still looking for a compelling use-case -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From ethan at stoneleaf.us Mon Sep 10 17:00:58 2018 From: ethan at stoneleaf.us (Ethan Furman) Date: Mon, 10 Sep 2018 14:00:58 -0700 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <5B91AC8B.9030909@canterbury.ac.nz> <20180908094150.GK27312@ando.pearwood.info> <20180909051901.GP27312@ando.pearwood.info> Message-ID: <8ad99c14-2712-8b89-0fc0-d6a4955eaa2d@stoneleaf.us> On 09/10/2018 12:52 PM, Chris Barker via Python-ideas wrote: > I've spent this whole thread thinking: "who in the world is writing code > with a lot of spam=spam arguments? If you are transferring that much > state in a function call, maybe you should have a class that holds that > state? Or pass in a **kwargs dict? > So still looking for a compelling use-case In my day job I spend a lot of time writing/customizing modules for a framework called OpenERP (now Odoo*). Those modules are all subclasses, and most work will require updating at least a couple parent metheds -- so most calls look something like: def a_method(self, cr, uid, ids, values, context=None): ... super(self, parent).a_method(cr, uid, ids, values, context=context) Not a perfect example as these can all be positional, but it's the type of code where this syntax would shine. 
I think, however, that we shouldn't worry about a lead * to activate it, just use a leading '=' and let it show up anywhere and it follows the same semantics/restrictions as current positional vs keyword args: def example(filename, mode, spin, color, charge, orientation): pass example('a name', 'ro', =spin, =color, charge=last, =orientation) So +0 with the above proposal. -- ~Ethan~ From abedillon at gmail.com Mon Sep 10 20:15:11 2018 From: abedillon at gmail.com (Abe Dillon) Date: Mon, 10 Sep 2018 19:15:11 -0500 Subject: [Python-ideas] Python dialect that compiles into python In-Reply-To: <20180908002829.GG27312@ando.pearwood.info> References: <20180908002829.GG27312@ando.pearwood.info> Message-ID: [Steven D'Aprano] > It would be great for non-C coders to be able to prototype proposed > syntax changes to get a feel for what works and what doesn't. I think it would be great in general for the community to be able to try out ideas and mull things over. If there was something like a Python Feature Index (PyFI) and you could install mods to the language, it would allow people to try out ideas before rejecting them or incorporating them into the language (or putting them on hold until someone suggests a better implementation). I could even see features that never make it into the language, but stick around PyFI and get regular maintenance because: A) they're controversial changes that some love and some hate B) they make things easier in some domain but otherwise don't warrant adoption It would have to be made clear from the start that Python can't guarantee backward compatibility with any mods, which should prevent excessive fragmentation (if you want your code to be portable, don't use mod). On Fri, Sep 7, 2018 at 7:30 PM Steven D'Aprano wrote: > On Fri, Sep 07, 2018 at 11:57:50AM +0000, Robert Vanden Eynde wrote: > > > Many features on this list propose different syntax to python, > > producing different python "dialects" that can statically be > > transformed to python : > > [...] > > Using a modified version of ast, it is relatively easy to modifiy the > > syntax tree of a program to produce another program. So one could > > compile the "python dialect" into regular python. The last example > > with partially for example doesn't even need new syntax. > > [...] > > Actually, I might start to write this lib, that looks fun. > > I encourage you to do so! It would be great for non-C coders to be able > to prototype proposed syntax changes to get a feel for what works and > what doesn't. > > There are already a few joke Python transpilers around, such as > "Like, Python": > > https://jon.how/likepython/ > > but I think this is a promising technique that could be used more to > keep the core Python language simple while not *entirely* closing the > door to people using domain-specific (or project-specific) syntax. > > > -- > Steve > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From abedillon at gmail.com Mon Sep 10 20:44:39 2018 From: abedillon at gmail.com (Abe Dillon) Date: Mon, 10 Sep 2018 19:44:39 -0500 Subject: [Python-ideas] Positional-only parameters In-Reply-To: References: <58B7DE7F.70505@stoneleaf.us> Message-ID: That looks great to me! I also think the '/' syntax looks fine and the pun works. 
If part of the motivation for position-only arguments was better performance and that motivation still holds water, then it makes sense to allow Python to support that optimization, but I would be happy with just a decorator too. I definitely DON'T like the double-underscore. On top of all the other complaints, I think it's more prone to break code. It's also more ugly than '/' IMHO. ?On Thu, Mar 2, 2017 at 5:10 AM ???????? wrote:? > Here's a proof-of-concept for the decorator. It does not address the issue > of passing aliases to positional arguments to **kwargs - I guess this > requires changes in the CPython's core. > > (Sorry about the coloring, that's how it's pasted) > > from inspect import signature, Parameter > from functools import wraps > > > def positional_only(n): > def wrap(f): > s = signature(f) > params = list(s.parameters.values()) > for i in range(n): > if params[i].kind != Parameter.POSITIONAL_OR_KEYWORD: > raise TypeError('{} has less than {} positional arguments'.format(f.__name__, n)) > params[i] = params[i].replace(kind=Parameter.POSITIONAL_ONLY) > f.__signature__ = s.replace(parameters=params) > @wraps(f) > def inner(*args, **kwargs): > if len(args) < n: > raise TypeError('{} takes at least {} positional arguments'.format(f.__name__, n)) > return f(*args, **kwargs) > return inner > return wrap > > > @positional_only(2) > def f(a, b, c): > print(a, b, c) > > > help(f) > # f(a, b, /, c, **kwargs) > > f(1, 2, c=2) > > # f(1, b=2, c=3) > # TypeError: f takes at least 2 positional arguments > > > @positional_only(3) > def g(a, b, *, c): > print(a, b, c) > > # TypeError: g has less than 3 positional arguments > > Elazar > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From abedillon at gmail.com Mon Sep 10 20:54:37 2018 From: abedillon at gmail.com (Abe Dillon) Date: Mon, 10 Sep 2018 19:54:37 -0500 Subject: [Python-ideas] Positional-only parameters In-Reply-To: <20180907040036.GA95533@cskk.homeip.net> References: <20180907040036.GA95533@cskk.homeip.net> Message-ID: [Cameron Simpson] > I've been writing quite a few functions lately where it is reasonable for > a > caller to want to pass arbitrary keyword arguments, but where I also want > some > additional parameters for control purposes. I've run into this before and use the trailing '_' convention for names: def update(self, where_, **column_vals): ... Because such names are **probably** never going to show up. But, of course; if such a name were actually used for a column, it would be a fantastically hard bug to find! On Thu, Sep 6, 2018 at 11:01 PM Cameron Simpson wrote: > On 01Mar2017 21:25, Serhiy Storchaka wrote: > >On 28.02.17 23:17, Victor Stinner wrote: > >>My question is: would it make sense to implement this feature in > >>Python directly? If yes, what should be the syntax? Use "/" marker? > >>Use the @positional() decorator? > > > >I'm strongly +1 for supporting positional-only parameters. The main > >benefit to me is that this allows to declare functions that takes > >arbitrary keyword arguments like Formatter.format() or > >MutableMapping.update(). Now we can't use even the "self" parameter > >and need to use a trick with parsing *args manually. This harms clearness > and > >performance. 
> > I was a mild +0.1 on this until I saw this argument; now I am +1 (unless > there's some horrible unforseen performance penalty). > > I've been writing quite a few functions lately where it is reasonable for > a > caller to want to pass arbitrary keyword arguments, but where I also want > some > additional parameters for control purposes. The most recent example was > database related: functions accepting arbitrary keyword arguments > indicating > column values. > > As a specific example, what I _want_ to write includes this method: > > def update(self, where, **column_values): > > Now, because "where" happens to be an SQL keyword it is unlikely that > there > will be a column of that name, _if_ the database is human designed by an > SQL > person. I have other examples where picking a "safe" name is harder. I can > even > describe scenarios where "where" is plausible: supposing the the database > is > generated from some input data, perhaps supplied by a CSV file (worse, a > CSV > file that is an export of a human written spreadsheet with a "Where" > column > header). That isn't really even made up: I've got functions whose purpose > is > to import such spreadsheet exports, making namedtuple subclasses > automatically > from the column headers. > > In many of these situations I've had recently positional-only arguments > would > have been very helpful. I even had to bugfix a function recently where a > positional argument was being trouced by a keyword argument by a caller. > > Cheers, > Cameron Simpson > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eltrhn at gmail.com Mon Sep 10 22:04:28 2018 From: eltrhn at gmail.com (Elias Tarhini) Date: Mon, 10 Sep 2018 19:04:28 -0700 Subject: [Python-ideas] __iter__(), keys(), and the mapping protocol Message-ID: This has been bouncing around in my head for a while regarding the requisite keys() method on mappings: How come the ** unpacking operator, a built-in language feature, relies on a non-dunder to operate? To me, I mean to say, requiring that classes implement keys() ? a method whose name is totally undistinguished ? in order to conform to the mapping protocol feels like a design running counter to Python's norm of using dunders for everything "hidden". I am not sure if it feels dirty to anybody else, however. Interestingly, the docs already say that *[f]or mappings, [__iter__()] should iterate over the keys of the container*, but it of course is not enforced in any way at present. So, then ? how about enforcing it? Should __iter__(), for the reasons above, replace the current purpose of keys() in mappings? I'm not properly equipped at the moment to mess around with CPython (sorry), but I assume at a minimum this would entail either replacing all instances of PyMapping_Keys() with PyObject_GetIter() or alternatively changing PyMapping_Keys() to call the latter. Does it sound like a reasonable change overall? Eli -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tritium-list at sdamon.com Mon Sep 10 23:50:24 2018 From: tritium-list at sdamon.com (Alex Walters) Date: Mon, 10 Sep 2018 23:50:24 -0400 Subject: [Python-ideas] __iter__(), keys(), and the mapping protocol In-Reply-To: References: Message-ID: <02d401d44982$925bc190$b71344b0$@sdamon.com> I can see custom mapping types where iterating the keys() would be trivial, but items() could be expensive. I could use that as an argument, but I don't have to. The keys() method is part of the API, just like index() and count() are part of the sequence API. To be treated like a mapping everywhere, python requires that you define* a keys() method, so why not use it? I don't see anything wrong with python using "public" methods, in this context. * If you use ABCs, then you don't need to define keys(), but that?s a tangent. > -----Original Message----- > From: Python-ideas list=sdamon.com at python.org> On Behalf Of Elias Tarhini > Sent: Monday, September 10, 2018 10:04 PM > To: Python-Ideas > Subject: [Python-ideas] __iter__(), keys(), and the mapping protocol > > This has been bouncing around in my head for a while regarding the requisite > keys() method on mappings: > > How come the ** unpacking operator, a built-in language feature, relies on a > non-dunder to operate? > > To me, I mean to say, requiring that classes implement keys() ? a method > whose name is totally undistinguished ? in order to conform to the mapping > protocol feels like a design running counter to Python's norm of using > dunders for everything "hidden". I am not sure if it feels dirty to anybody > else, however. Interestingly, the docs already say > > that [f]or mappings, [__iter__()] should iterate over the keys of the > container, but it of course is not enforced in any way at present. > > > So, then ? how about enforcing it? Should __iter__(), for the reasons > above, replace the current purpose of keys() in mappings? > > > I'm not properly equipped at the moment to mess around with CPython > (sorry), but I assume at a minimum this would entail either replacing all > instances of PyMapping_Keys() with PyObject_GetIter() or alternatively > changing PyMapping_Keys() to call the latter. > > > Does it sound like a reasonable change overall? > > > Eli From gadgetsteve at live.co.uk Tue Sep 11 00:47:37 2018 From: gadgetsteve at live.co.uk (Steve Barnes) Date: Tue, 11 Sep 2018 04:47:37 +0000 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <8ad99c14-2712-8b89-0fc0-d6a4955eaa2d@stoneleaf.us> References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <5B91AC8B.9030909@canterbury.ac.nz> <20180908094150.GK27312@ando.pearwood.info> <20180909051901.GP27312@ando.pearwood.info> <8ad99c14-2712-8b89-0fc0-d6a4955eaa2d@stoneleaf.us> Message-ID: On 10/09/2018 22:00, Ethan Furman wrote: > On 09/10/2018 12:52 PM, Chris Barker via Python-ideas wrote: > >> I've spent this whole thread thinking: "who in the world is writing >> code with a lot of spam=spam arguments? If you are transferring that >> much state in a function call, maybe you should have a class that >> holds that state? Or pass in a **kwargs dict? > >> So still looking for a compelling use-case > > In my day job I spend a lot of time writing/customizing modules for a > framework called OpenERP (now Odoo*).? 
Those modules are all subclasses, > and most work will require updating at least a couple parent metheds -- > so most calls look something like: > > ? def a_method(self, cr, uid, ids, values, context=None): > ??? ... > ??? super(self, parent).a_method(cr, uid, ids, values, context=context) > > Not a perfect example as these can all be positional, but it's the type > of code where this syntax would shine. > > I think, however, that we shouldn't worry about a lead * to activate it, > just use a leading '=' and let it show up anywhere and it follows the > same semantics/restrictions as current positional vs keyword args: > > ? def example(filename, mode, spin, color, charge, orientation): > ????? pass > > ? example('a name', 'ro', =spin, =color, charge=last, =orientation) > > So +0 with the above proposal. > > -- > ~Ethan~ > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ Couldn't just about all of the use cases mentioned so far be met in quite a neat manner by providing access to a method, or dictionary, called __params__ which would give access, as a dictionary, to the parameters as supplied in the call, (or filled in by the defaults). If this was accessible externally, as fn.__defaults__ is then examples such as: > def a_method(self, cr, uid, ids, values, context=None): > ... > super(self, parent).a_method(cr, uid, ids, values, context=context) would become: def a_method(self, cr, uid, ids, values, context=None): ... params = {k:v for k,v in __params__ if k in parent.a_method.keys()} # Possibly add some additional entries here! super(self, parent).a_method(**params) -- Steve (Gadget) Barnes Any opinions in this message are my personal opinions and do not reflect those of my employer. --- This email has been checked for viruses by AVG. https://www.avg.com From storchaka at gmail.com Tue Sep 11 02:53:42 2018 From: storchaka at gmail.com (Serhiy Storchaka) Date: Tue, 11 Sep 2018 09:53:42 +0300 Subject: [Python-ideas] __iter__(), keys(), and the mapping protocol In-Reply-To: References: Message-ID: 11.09.18 05:04, Elias Tarhini ????: > This has been bouncing around in my head for a while regarding the > requisite keys() method on mappings: > > How come the ** unpacking operator, a built-in language feature, relies > on a non-dunder to operate? > > To me, I mean to say, requiring that classes implement keys()?? a method > whose name is totally undistinguished ? in order to conform to the > mapping protocol feels like a design running counter to Python's norm of > using dunders for everything "hidden". I am not sure if it feels dirty > to anybody else, however. Interestingly, the docs already say > > that /[f]or mappings, [__iter__()] should iterate over the keys of the > container/, but it of course is not enforced in any way at present. > > So, then ? how about enforcing it? Should __iter__(), for the reasons > above, replace the current purpose of keys()?in mappings? > > I'm not properly equipped at the moment to mess around with CPython > (sorry), but I assume at a minimum this would entail either replacing > all instances of PyMapping_Keys() with PyObject_GetIter()or > alternatively changing PyMapping_Keys() to call the latter. > > Does it sound like a reasonable change overall? Dict has keys(), while list doesn't. 
If use __iter__() instead of keys(), {**m} or dict(m) will give unexpected result for some non-mappings, like m = [0, 2, 1]: >>> {k: m[k] for k in m} {0: 0, 2: 1, 1: 2} From j.van.dorp at deonet.nl Tue Sep 11 02:54:33 2018 From: j.van.dorp at deonet.nl (Jacco van Dorp) Date: Tue, 11 Sep 2018 08:54:33 +0200 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <5B91AC8B.9030909@canterbury.ac.nz> <20180908094150.GK27312@ando.pearwood.info> <20180909051901.GP27312@ando.pearwood.info> <8ad99c14-2712-8b89-0fc0-d6a4955eaa2d@stoneleaf.us> Message-ID: Op di 11 sep. 2018 om 06:48 schreef Steve Barnes : > > > On 10/09/2018 22:00, Ethan Furman wrote: > > On 09/10/2018 12:52 PM, Chris Barker via Python-ideas wrote: > > > >> I've spent this whole thread thinking: "who in the world is writing > >> code with a lot of spam=spam arguments? If you are transferring that > >> much state in a function call, maybe you should have a class that > >> holds that state? Or pass in a **kwargs dict? > > > >> So still looking for a compelling use-case > > > > In my day job I spend a lot of time writing/customizing modules for a > > framework called OpenERP (now Odoo*). Those modules are all subclasses, > > and most work will require updating at least a couple parent metheds -- > > so most calls look something like: > > > > def a_method(self, cr, uid, ids, values, context=None): > > ... > > super(self, parent).a_method(cr, uid, ids, values, context=context) > > > > Not a perfect example as these can all be positional, but it's the type > > of code where this syntax would shine. > > > > I think, however, that we shouldn't worry about a lead * to activate it, > > just use a leading '=' and let it show up anywhere and it follows the > > same semantics/restrictions as current positional vs keyword args: > > > > def example(filename, mode, spin, color, charge, orientation): > > pass > > > > example('a name', 'ro', =spin, =color, charge=last, =orientation) > > > > So +0 with the above proposal. > > > > -- > > ~Ethan~ > > _______________________________________________ > > Python-ideas mailing list > > Python-ideas at python.org > > https://mail.python.org/mailman/listinfo/python-ideas > > Code of Conduct: http://python.org/psf/codeofconduct/ > > Couldn't just about all of the use cases mentioned so far be met in > quite a neat manner by providing access to a method, or dictionary, > called __params__ which would give access, as a dictionary, to the > parameters as supplied in the call, (or filled in by the defaults). > > If this was accessible externally, as fn.__defaults__ is then examples > such as: > > > def a_method(self, cr, uid, ids, values, context=None): > > ... > > super(self, parent).a_method(cr, uid, ids, values, context=context) > > would become: > > > def a_method(self, cr, uid, ids, values, context=None): > ... > params = {k:v for k,v in __params__ if k in parent.a_method.keys()} > # Possibly add some additional entries here! > super(self, parent).a_method(**params) So...deep black magic ? That's what this looks like. 
Having =spam for same-named kwargs sounds easier to comprehend for new people than a __magic__ object you can only access in function bodies and will give headaches if you have to write decorators: def other_function_defaults(*args, **kwargs): outer_params = __params__.copy() def deco(func): def inner(self, yo_momma): return func(self, **outer_params, **__params__) # overwrite with specifically provided arguments return deco I think that magic objects like that aren't really pythonic - if it were, "self" would be the same kind of magic, instead of us having to name it on every function call (A decision im really a fan of, tbh) -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephanh42 at gmail.com Tue Sep 11 03:28:46 2018 From: stephanh42 at gmail.com (Stephan Houben) Date: Tue, 11 Sep 2018 09:28:46 +0200 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <5B91AC8B.9030909@canterbury.ac.nz> <20180908094150.GK27312@ando.pearwood.info> <20180909051901.GP27312@ando.pearwood.info> <8ad99c14-2712-8b89-0fc0-d6a4955eaa2d@stoneleaf.us> Message-ID: My 3 cents: 1. My most objective objection against the f(*, foo, bar, baz) syntax is that it looks like positional arguments, and the syntactic marker * which dissuades you of that can be arbitrarily far apart from the keyword. 2. The syntax f(=foo, =bar, =baz) at least solves that problem. Otherwise I find it quite ugly with the unbalanced = but that is obviously more subjective. 3. I still am not convinced it is needed at all. IMHO, if your code is filled with f(foo=foo, bar=bar, baz=baz) then perhaps Python is telling you that foo, bar and baz want to become fields in a new object which you should pass around. 4. (Bonus cent) Somewhat tongue-in-cheek I offer the following Vim mapping for those who find themselves typing longword=longword all the time. :inoremap =hyiwt=lpa Now you can just do longword. Stephan Op di 11 sep. 2018 om 08:55 schreef Jacco van Dorp : > > > Op di 11 sep. 2018 om 06:48 schreef Steve Barnes : > >> >> >> On 10/09/2018 22:00, Ethan Furman wrote: >> > On 09/10/2018 12:52 PM, Chris Barker via Python-ideas wrote: >> > >> >> I've spent this whole thread thinking: "who in the world is writing >> >> code with a lot of spam=spam arguments? If you are transferring that >> >> much state in a function call, maybe you should have a class that >> >> holds that state? Or pass in a **kwargs dict? >> > >> >> So still looking for a compelling use-case >> > >> > In my day job I spend a lot of time writing/customizing modules for a >> > framework called OpenERP (now Odoo*). Those modules are all >> subclasses, >> > and most work will require updating at least a couple parent metheds -- >> > so most calls look something like: >> > >> > def a_method(self, cr, uid, ids, values, context=None): >> > ... >> > super(self, parent).a_method(cr, uid, ids, values, context=context) >> > >> > Not a perfect example as these can all be positional, but it's the type >> > of code where this syntax would shine. 
>> > >> > I think, however, that we shouldn't worry about a lead * to activate >> it, >> > just use a leading '=' and let it show up anywhere and it follows the >> > same semantics/restrictions as current positional vs keyword args: >> > >> > def example(filename, mode, spin, color, charge, orientation): >> > pass >> > >> > example('a name', 'ro', =spin, =color, charge=last, =orientation) >> > >> > So +0 with the above proposal. >> > >> > -- >> > ~Ethan~ >> > _______________________________________________ >> > Python-ideas mailing list >> > Python-ideas at python.org >> > https://mail.python.org/mailman/listinfo/python-ideas >> > Code of Conduct: http://python.org/psf/codeofconduct/ >> >> Couldn't just about all of the use cases mentioned so far be met in >> quite a neat manner by providing access to a method, or dictionary, >> called __params__ which would give access, as a dictionary, to the >> parameters as supplied in the call, (or filled in by the defaults). >> >> If this was accessible externally, as fn.__defaults__ is then examples >> such as: >> >> > def a_method(self, cr, uid, ids, values, context=None): >> > ... >> > super(self, parent).a_method(cr, uid, ids, values, >> context=context) >> >> would become: >> >> >> def a_method(self, cr, uid, ids, values, context=None): >> ... >> params = {k:v for k,v in __params__ if k in parent.a_method.keys()} >> # Possibly add some additional entries here! >> super(self, parent).a_method(**params) > > > So...deep black magic ? That's what this looks like. Having =spam for > same-named kwargs sounds easier to comprehend for new people than a > __magic__ object you can only access in function bodies and will give > headaches if you have to write decorators: > > def other_function_defaults(*args, **kwargs): > outer_params = __params__.copy() > def deco(func): > def inner(self, yo_momma): > return func(self, **outer_params, **__params__) # overwrite with > specifically provided arguments > return deco > > > I think that magic objects like that aren't really pythonic - if it were, > "self" would be the same kind of magic, instead of us having to name it on > every function call (A decision im really a fan of, tbh) > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Tue Sep 11 04:12:56 2018 From: chris.barker at noaa.gov (Chris Barker) Date: Tue, 11 Sep 2018 10:12:56 +0200 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <8ad99c14-2712-8b89-0fc0-d6a4955eaa2d@stoneleaf.us> References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <5B91AC8B.9030909@canterbury.ac.nz> <20180908094150.GK27312@ando.pearwood.info> <20180909051901.GP27312@ando.pearwood.info> <8ad99c14-2712-8b89-0fc0-d6a4955eaa2d@stoneleaf.us> Message-ID: On Mon, Sep 10, 2018 at 11:00 PM, Ethan Furman wrote: > > In my day job I spend a lot of time writing/customizing modules for a > framework called OpenERP (now Odoo*). Those modules are all subclasses, > and most work will require updating at least a couple parent methods -- so > most calls look something like: > > def a_method(self, cr, uid, ids, values, context=None): > ... 
> super(self, parent).a_method(cr, uid, ids, values, context=context) > hmm -- this is a trick -- in those cases, I find myself using *args, **kwargs when overloading methods. But that does hide the method signature, which is really unfortunate. IT works pretty well for things like GUI toolkits, where you might be subclassing a wx.Window, and the docs for wx.Window are pretty easy to find, but for you own custom classes with nested subclassing, it does get tricky. For this case, I kinda like Steve Barnes idea (I think it is his) to have a "magic object of some type, so you can have BOTH specified parameters, and easy access to the *args, **kwargs objects. Though I'm also wary of the magic... Perhaps there's some way to make it explicit, like "self": def fun(a, b, c, d=something, e=something, &args, &&kwargs): (I'm not sure I like the &, so think of it as a placeholder) In this case, then &args would be the *args tuple, and &&kwargs would be the **kwargs dict (as passed in) -- completely redundant with the position and keyword parameters. So the above could be: def a_method(self, cr, uid, ids, values, context=None, &args, &&kwargs): super(self, parent).a_method(*args, **kwargs) do_things_with(cr, uid, ...) So you now have a clear function signature, access to the parameters, and also a clear an easy way to pass the whole batch on to the superclass' method. I just came up with this off teh top of my head, so Im sure there are big issues, but maybe it can steer us in a useful direction. -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Tue Sep 11 04:20:23 2018 From: chris.barker at noaa.gov (Chris Barker) Date: Tue, 11 Sep 2018 10:20:23 +0200 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <5B91AC8B.9030909@canterbury.ac.nz> <20180908094150.GK27312@ando.pearwood.info> <20180909051901.GP27312@ando.pearwood.info> Message-ID: Did you mean to take this off-list? I hope not, as I'm bringing it back on. On Tue, Sep 11, 2018 at 8:09 AM, Anders Hovm?ller wrote: > I've spent this whole thread thinking: "who in the world is writing code > with a lot of spam=spam arguments? If you are transferring that much state > in a function call, maybe you should have a class that holds that state? Or > pass in a **kwargs dict? > > > Kwargs isn?t good because it breaks static analysis which we really want. > well, Python isn't a static language, and I personally have my doubts about trying to make it more so -- but that makes it sound like we need some solution for type annotations, rather than executable code. But see my other note -- I do agree that a well specified function signature is a good thing. > for .format() -- that's why we now have f-strings -- done. > > > Not really no. F-strings can?t be used for many strings if you need to > localize your app. If you don?t have that need then yes f-strings are great > and I?m fortunate to work on an app where we don?t need to but we shouldn?t > dismiss people who have this need. 
> Darn -- I hope this was brought up in the original f-string conversation. > for templates -- are you really passing all that data in from a bunch of > variables?? as opposed to, say, a dict? That strikes me as getting code and > data confused (which is sometimes hard not to do...) > > Yes. For example we have decorators on our views that fetch up objects > from our DB based on url fragments or query parameters and then pass to the > function. This means we have all access control in our decorators and you > can?t forget to do this because a view without access control decorators > are stopped by a middleware. So several of variables are there before we > even start the actual view function. > hmm -- this is a bit of what I mean by mixing data and code -- in my mind a database record is more data than code, so better to use a dict than a class with attributes matching teh fields. However, I also see the advantage of mixing the code -- providing additional logic, and pre and post processing, like it sounds like you are doing. But python is a nifty dynamic language -- could your views have an "as_dict" property that provided just the fields in the existing instance? If you were in Stockholm I?d offer you to come by our office and I could > show some code behind closed doors :) > As it happens, I am much closer than usual -- in the Netherlands, but still not Stockholm :-) - CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at brice.xyz Tue Sep 11 04:41:41 2018 From: contact at brice.xyz (Brice Parent) Date: Tue, 11 Sep 2018 10:41:41 +0200 Subject: [Python-ideas] Python dialect that compiles into python In-Reply-To: References: <20180908002829.GG27312@ando.pearwood.info> Message-ID: Le 11/09/2018 ? 02:15, Abe Dillon a ?crit?: > [Steven D'Aprano] > > It would be great for non-C coders to be able?to prototype proposed > syntax changes to get a feel for what works and?what doesn't. > > > I think it would be great in general for the community to be able to > try out ideas and mull things over. > > If there was something like a Python Feature Index (PyFI) and you > could install mods to the language, > it would allow people to try out ideas before rejecting them or > incorporating them into the language > (or putting them on hold until someone suggests a better implementation). > That would be an almost-Python to Python transpiler, I love it! It would surely help a lot to explain and try out new ideas, as well as for domain specific needs. And having an index of those features could help a lot. It would surely also help a lot with backward compatibility of new functionalities. Someone who would want to use a Python 3.9 functionality in 3.8 (whatever his reasons) could use a shim from the index. Such shim wouldn't have to be as optimal or fast as the version that would be used in 3.9, but it could be functionally equivalent to it. I'm just a bit afraid of the popularity that some of these experiments could get (like `from __future__ import braces` -> No problem!) and the pressure that could be made upon core devs to push the most popular changes into Python (popularity not being equivalent to sanity for the language and its future. 
There are many devs that want to code in one language the same way they do in others, which is often wrong, or at least not optimal). -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at brice.xyz Tue Sep 11 05:13:37 2018 From: contact at brice.xyz (Brice Parent) Date: Tue, 11 Sep 2018 11:13:37 +0200 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <5B91AC8B.9030909@canterbury.ac.nz> <20180908094150.GK27312@ando.pearwood.info> <20180909051901.GP27312@ando.pearwood.info> <8ad99c14-2712-8b89-0fc0-d6a4955eaa2d@stoneleaf.us> Message-ID: > > For this case, I kinda like Steve Barnes idea (I think it is his) to > have a "magic object of some type, so you can have BOTH specified > parameters, and easy access to the *args, **kwargs objects. Though I'm > also wary of the magic... > > Perhaps there's some way to make it explicit, like "self": > > def fun(a, b, c, d=something, e=something, &args, &&kwargs): Another possiblity would be to be able to have alternative signatures for a single function, the first being the one shown in inspection and for auto-completion, the other one(s?) just creating new references to the same variables. Like this: def fun(a, b, c, d=something1, e=something2, f=something3)(_, *args, e=something2, **kwargs): ??? # do whatever you need ??? assert args[0] == b ??? assert kwargs["d"] == something1 ??? super().fun("foo", *args, e="bar", **kwargs) I'm not sure what would happen if we didn't provide the same defaults for `e` in the two signatures (probably an exception). -------------- next part -------------- An HTML attachment was scrubbed... URL: From jfine2358 at gmail.com Tue Sep 11 06:03:02 2018 From: jfine2358 at gmail.com (Jonathan Fine) Date: Tue, 11 Sep 2018 11:03:02 +0100 Subject: [Python-ideas] On the Python function call interface Message-ID: I hope that in this thread we will share and develop our understanding of how Python handles the interface between defining a function and calling a function. In this message, I describe a tension between the caller and implementer of a function. I intend in further messages to cover Elias Tarhini's post on __iter__(), keys(), and the mapping protocol [This affects the behaviour of fn(**kwargs), perhaps to our disadvantage.] https://mail.python.org/pipermail/python-ideas/2018-September/053320.html Marko Ristin-Kaufmann post on Pre-conditions and post-conditions [This is, in part, about wrapping a function, so it can be monitored.] https://mail.python.org/pipermail/python-ideas/2018-August/052781.html and probably some other topics. My main concern is that we know what choices are already available, and the human forces that good design decisions will balance. The signature of a function has at least three purposes. First, to provide the CALLER of the function with guidance as to how the function should be used. Second, to provide the IMPLEMENTER of the function with already initialised variables, local to the function body. Third, to provide both caller and implementer with visible default values. When there are many arguments, these two purposes can be opposed. Here's an example. The IMPLEMENTER might want to write (not tested) def method(self, **kwargs): # Do something with kwargs. # Possibly mutate kwargs. # Change values, remove items, add items. 
# And now pass the method up. super().method(**kwargs) In some case, the implementer might prefer def method(self, aaa, bbb, **kwargs): # Do something with aaa and bbb. super().method(**kwargs) However, the CALLER might wish that the implementer has a signature def method(self, aaa, bbb, ccc, ddd, eee, fff, ggg, hhh): and this encourages the implementer to write super().method(aaa=aaa, ...). However, there is an alternative: def method(self, aaa, bbb, ccc, ddd, eee, fff, ggg, hhh): lcls = dict(locals()) lcls.pop('aaa') lcls.pop('bbb') # Do something with aaa and bbb. super().method(**lcls) This implementation bring benefits to both the user and the implementer. But it's decidedly odd that the local variables ccc through to hhh are initialised, but not explicitly used. I think it would help to dig deeper into this, via some well-chosen examples, taken for real code. By the way, Steve Barnes has suggested Python be extended to provide "access to a method, or dictionary, called __params__ which would give access, as a dictionary, to the parameters as supplied in the call, (or filled in by the defaults)." [See: https://mail.python.org/pipermail/python-ideas/2018-September/053322.html] Such a thing already exists, and I've just showed how it might be used. https://docs.python.org/3/library/functions.html#locals >>> def fn(a, b, c, d=4, e=5): return locals() >>> fn(1, 2, 3, e=6) {'e': 6, 'd': 4, 'c': 3, 'a': 1, 'b': 2} I think, once we understand Python function and code objects better, we can make progress without extending Python, and know better what extensions are needed. Much of what we need to know is contained in the inspect module: https://docs.python.org/3/library/inspect.html -- Jonathan From stephanh42 at gmail.com Tue Sep 11 06:24:27 2018 From: stephanh42 at gmail.com (Stephan Houben) Date: Tue, 11 Sep 2018 12:24:27 +0200 Subject: [Python-ideas] Python dialect that compiles into python In-Reply-To: References: <20180908002829.GG27312@ando.pearwood.info> Message-ID: Op di 11 sep. 2018 om 10:42 schreef Brice Parent : > Le 11/09/2018 ? 02:15, Abe Dillon a ?crit : > > [Steven D'Aprano] > >> It would be great for non-C coders to be able to prototype proposed >> syntax changes to get a feel for what works and what doesn't. > > > I think it would be great in general for the community to be able to try > out ideas and mull things over. > > If there was something like a Python Feature Index (PyFI) and you could > install mods to the language, > it would allow people to try out ideas before rejecting them or > incorporating them into the language > (or putting them on hold until someone suggests a better implementation). > > That would be an almost-Python to Python transpiler, I love it! It would > surely help a lot to explain and try out new ideas, as well as for domain > specific needs. And having an index of those features could help a lot. > It would surely also help a lot with backward compatibility of new > functionalities. Someone who would want to use a Python 3.9 functionality > in 3.8 (whatever his reasons) could use a shim from the index. Such shim > wouldn't have to be as optimal or fast as the version that would be used in > 3.9, but it could be functionally equivalent to it. > > For what it's worth, the Ocaml community has something like that: Campl5 https://camlp5.github.io/doc/html/ Despite the name "preprocessor" this actually communicates with the Ocaml compiler proper through an AST. So you get proper source code location, etc. 
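The AST-to-AST step itself is just as accessible from Python; a toy
sketch of the idea (the transformation -- rewriting every + into - -- is
chosen purely for illustration):

    import ast

    class AddToSub(ast.NodeTransformer):
        def visit_BinOp(self, node):
            self.generic_visit(node)          # transform nested expressions first
            if isinstance(node.op, ast.Add):
                node.op = ast.Sub()           # replace the + operator with -
            return node

    tree = ast.parse("print(10 + 3)")
    tree = AddToSub().visit(tree)
    ast.fix_missing_locations(tree)
    exec(compile(tree, "<transformed>", "exec"))   # prints 7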
I think you could actually already hack something like this in Python today, by creating custom import hooks, which then run your own compile step on a file, which produces an AST, which is then passed to compile(). However keeping your custom Python++ parser in sync with Python is probably a pain. > I'm just a bit afraid of the popularity that some of these experiments > could get (like `from __future__ import braces` -> No problem!) and the > pressure that could be made upon core devs to push the most popular changes > into Python (popularity not being equivalent to sanity for the language and > its future. There are many devs that want to code in one language the same > way they do in others, which is often wrong, or at least not optimal). > I fully expect the core devs to resist such pressure, especially for the braces ;-) Stephan > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at pearwood.info Tue Sep 11 06:53:55 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Tue, 11 Sep 2018 20:53:55 +1000 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: References: <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <5B91AC8B.9030909@canterbury.ac.nz> <20180908094150.GK27312@ando.pearwood.info> <20180909051901.GP27312@ando.pearwood.info> <8ad99c14-2712-8b89-0fc0-d6a4955eaa2d@stoneleaf.us> Message-ID: <20180911105355.GH1596@ando.pearwood.info> On Tue, Sep 11, 2018 at 10:12:56AM +0200, Chris Barker via Python-ideas wrote: > hmm -- this is a trick -- in those cases, I find myself using *args, > **kwargs when overloading methods. But that does hide the method signature, > which is really unfortunate. IT works pretty well for things like GUI > toolkits, where you might be subclassing a wx.Window, and the docs for > wx.Window are pretty easy to find, but for you own custom classes with > nested subclassing, it does get tricky. Do we need to solve this in the interpreter? Surely this is an argument for better tooling. A sophisticated IDE should never be a *requirement* for coding in Python, but good tools can make a big difference in the pleasantness or otherwise of coding. Those tools don't have to be part of the language. At least for methods, code completers ought to be able to search the MRO for the first non-**kwargs signature and display parameters from further up the MRO: class Parent: def method(self, spam): pass class Child(Parent): def method(self, **kwargs): pass Now when I type Child().method() the IDE could search the MRO and find "spam" is the parameter. That becomes a "quality of IDE" issue, and various editors and IDEs can compete to have the best implementation. Or perhaps we could have an officially blessed way to give tools a hint as to what the real signature is. class Child(Parent): @signature_hint(Parent.method) def method(self, **kwargs): pass Statically, that tells the IDE that "true" signature of Child.method can be found from Parent.method; dynamically, the decorator might copy that signature into Child.method.__signature_hint__ for runtime introspection by tools like help(). The beauty of this is that it is independent of inheritance. 
We could apply this decorator to any function, and point it to any other function or method, or even a signature object. @signature_hint(open) def my_open(*args, **kwargs): ... And being optional, it won't increase the size of any functions unless you specifically decorate them. -- Steve From jamtlu at gmail.com Tue Sep 11 07:38:58 2018 From: jamtlu at gmail.com (James Lu) Date: Tue, 11 Sep 2018 07:38:58 -0400 Subject: [Python-ideas] Python dialect that compiles into python In-Reply-To: References: Message-ID: I wholly support this proposal. From steve at pearwood.info Tue Sep 11 07:34:22 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Tue, 11 Sep 2018 21:34:22 +1000 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: References: <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <5B91AC8B.9030909@canterbury.ac.nz> <20180908094150.GK27312@ando.pearwood.info> <20180909051901.GP27312@ando.pearwood.info> <8ad99c14-2712-8b89-0fc0-d6a4955eaa2d@stoneleaf.us> Message-ID: <20180911113422.GI1596@ando.pearwood.info> On Tue, Sep 11, 2018 at 04:47:37AM +0000, Steve Barnes wrote: > Couldn't just about all of the use cases mentioned so far be met in > quite a neat manner by providing access to a method, or dictionary, > called __params__ which would give access, as a dictionary, to the > parameters as supplied in the call, (or filled in by the defaults). I imagine it would be fairly easy to fill in such a special __params__ local variable when the function is called. The interpreter already has to process the positional and keyword arguments, it probably wouldn't be that hard to add one more implicitly declared local and fill it in: def function(spam, eggs, *args): print( __params__ ) function(2, 6, 99, 100) # prints {'spam': 2, 'eggs': 6, '*args': (99, 100)} But this has some problems: (1) It might be cheap, but it's not free. Function calling in Python is already a minor bottleneck, having to populate one more local whether it is needed or not can only make it slower, not faster. (2) It leads to the same gotchas as locals(). What happens if you assign to the __params__ dict? What happens when the parameters change their local value? The __param__ dict probably won't change. (Like locals(), I expect that will depend on the interpreter.) > If this was accessible externally, as fn.__defaults__ is then examples > such as: Defaults are part of the function definition and are fixed when the function is created. The values assigned to parameters change every time you call the function, whether you need them or not. For non-trivial applications with many function calls, that's likely to add up to a measurable slow-down. Its also going to suffer from race conditions, unless someone much cleverer than me can think of a way to avoid them which doesn't slow down function calls even more. - I call function(a=1, b=2); - function.__params__ is set to {'a': 1, 'b': 2} - meanwhile another thread calls function(a=98, b=99); - setting function.__params__ to {'a': 98, 'b': 99} - and I then access function.__params__, getting the wrong values. I think that __params__ as an implicitly created local variable is just barely justifiable, if you don't care about slowing down all function calls for the benefit of a tiny number of them. But exposing that information as an externally visible attribute of the function object is probably unworkable and unnecessary. 
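For comparison, a per-call mapping like that can already be captured
today, opt-in and without storing anything on the function object, by
using inspect. A rough sketch (the decorator name capture_params is
invented):

    import functools
    import inspect

    def capture_params(func):
        sig = inspect.signature(func)
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            bound = sig.bind(*args, **kwargs)
            bound.apply_defaults()
            print(dict(bound.arguments))      # the parameters for *this* call
            return func(*args, **kwargs)
        return wrapper

    @capture_params
    def function(spam, eggs, *args):
        pass

    function(2, 6, 99, 100)
    # prints {'spam': 2, 'eggs': 6, 'args': (99, 100)}

Each call builds its own dict, so there is nothing shared for two threads
to race on, and the cost is paid only by decorated functions.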
-- Steve From rosuav at gmail.com Tue Sep 11 07:58:18 2018 From: rosuav at gmail.com (Chris Angelico) Date: Tue, 11 Sep 2018 21:58:18 +1000 Subject: [Python-ideas] On the Python function call interface In-Reply-To: References: Message-ID: On Tue, Sep 11, 2018 at 8:03 PM, Jonathan Fine wrote: > In some case, the implementer might prefer > > def method(self, aaa, bbb, **kwargs): > > # Do something with aaa and bbb. > super().method(**kwargs) > > > However, the CALLER might wish that the implementer has a signature > > def method(self, aaa, bbb, ccc, ddd, eee, fff, ggg, hhh): > > and this encourages the implementer to write super().method(aaa=aaa, ...). > > > However, there is an alternative: > > def method(self, aaa, bbb, ccc, ddd, eee, fff, ggg, hhh): > > lcls = dict(locals()) > lcls.pop('aaa') > lcls.pop('bbb') > > # Do something with aaa and bbb. > super().method(**lcls) What about this alternative: @override def method(self, aaa, bbb, **kwargs): # ... stuff with aaa and bbb super().method(**kwargs) The decorator would do something akin to functools.wraps(), but assuming that it's the class's immediate parent (not sure how you'd determine that at function definition time, but let's assume that can be done), and intelligently handling the added args. Basically, it would replace kwargs with every legal keyword argument of the corresponding method on the class's immediate parent. Yes, I know that super() can pass a function call sideways, but if that happens, the only flaw is in the signature, not the implementation (ie tab completion might fail but the code still runs). ChrisA From rosuav at gmail.com Tue Sep 11 08:02:15 2018 From: rosuav at gmail.com (Chris Angelico) Date: Tue, 11 Sep 2018 22:02:15 +1000 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <20180911113422.GI1596@ando.pearwood.info> References: <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <5B91AC8B.9030909@canterbury.ac.nz> <20180908094150.GK27312@ando.pearwood.info> <20180909051901.GP27312@ando.pearwood.info> <8ad99c14-2712-8b89-0fc0-d6a4955eaa2d@stoneleaf.us> <20180911113422.GI1596@ando.pearwood.info> Message-ID: On Tue, Sep 11, 2018 at 9:34 PM, Steven D'Aprano wrote: > I think that __params__ as an implicitly created local variable is > just barely justifiable, if you don't care about slowing down all > function calls for the benefit of a tiny number of them. But exposing > that information as an externally visible attribute of the function > object is probably unworkable and unnecessary. Rather than slowing down ALL function calls, you could slow down only those that use it. The interpreter could notice the use of the name __params__ inside a function and go "oh, then I need to include the bytecode to create that". It'd probably need to be made a keyword, or at least unassignable, to ensure that you never try to close over the __params__ of another function, or declare "global __params__", or anything silly like that. I'm still -1 on adding it, though. 
ChrisA From steve at pearwood.info Tue Sep 11 08:20:39 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Tue, 11 Sep 2018 22:20:39 +1000 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <20180911105355.GH1596@ando.pearwood.info> References: <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <5B91AC8B.9030909@canterbury.ac.nz> <20180908094150.GK27312@ando.pearwood.info> <20180909051901.GP27312@ando.pearwood.info> <8ad99c14-2712-8b89-0fc0-d6a4955eaa2d@stoneleaf.us> <20180911105355.GH1596@ando.pearwood.info> Message-ID: <20180911122038.GJ1596@ando.pearwood.info> On Tue, Sep 11, 2018 at 08:53:55PM +1000, Steven D'Aprano wrote: [...] > Or perhaps we could have an officially blessed way to give tools a hint > as to what the real signature is. > > class Child(Parent): > @signature_hint(Parent.method) > def method(self, **kwargs): > pass Here's an untested implementation: import inspect def signature_hint(callable_or_sig, *, follow_wrapped=True): if isinstance(callable_or_sig, inspect.Signature): sig = callable_or_sig else: sig = inspect.signature(callable_or_sig, follow_wrapped=follow_wrapped) def decorator(func): func.__signature_hint__ = sig return func return decorator inspect.signature would need to become aware of these hints too: def f(a, b=1, c=2): pass @signature_hint(f) def g(*args): pass @signature_hint(g) def h(*args): pass At this point h.__signature_hint__ ought to give (Note that this is not quite the same as the existing follow_wrapped argument of inspect.signature.) This doesn't directly help Ander's problem of having to make calls like func(a=a, b=b, c=c) # apologies for the toy example but at least it reduces the pain of needing to Repeat Yourself when overriding methods, which indirectly may help in some (but not all) of Ander's cases. -- Steve From J.Demeyer at UGent.be Tue Sep 11 08:38:29 2018 From: J.Demeyer at UGent.be (Jeroen Demeyer) Date: Tue, 11 Sep 2018 14:38:29 +0200 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <7e87ef7a76844062aae3e56b07386402@xmail103.UGent.be> References: <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <5B91AC8B.9030909@canterbury.ac.nz> <20180908094150.GK27312@ando.pearwood.info> <20180909051901.GP27312@ando.pearwood.info> <8ad99c14-2712-8b89-0fc0-d6a4955eaa2d@stoneleaf.us> <20180911105355.GH1596@ando.pearwood.info> <7e87ef7a76844062aae3e56b07386402@xmail103.UGent.be> Message-ID: <5B97B745.6050800@UGent.be> On 2018-09-11 14:20, Steven D'Aprano wrote: > On Tue, Sep 11, 2018 at 08:53:55PM +1000, Steven D'Aprano wrote: > [...] >> Or perhaps we could have an officially blessed way to give tools a hint >> as to what the real signature is. >> >> class Child(Parent): >> @signature_hint(Parent.method) >> def method(self, **kwargs): >> pass I was planning to submit a PEP to support a __signature__ attribute (without the _hint) in inspect.signature(), see PEP 576 and PEP 579 for some context. The reason is very different from what is suggested here: to allow implementing custom function-like classes that inspect should treat as functions. 
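For what it's worth, inspect.signature() already honours a __signature__
attribute on ordinary functions (PEP 362), so a hand-written hint along
those lines works today. A small sketch with invented class names:

    import inspect

    class Parent:
        def method(self, spam, eggs=1):
            pass

    class Child(Parent):
        def method(self, **kwargs):
            return super().method(**kwargs)
        method.__signature__ = inspect.signature(Parent.method)

    print(inspect.signature(Child.method))   # (self, spam, eggs=1)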
From jfine2358 at gmail.com Tue Sep 11 11:57:16 2018 From: jfine2358 at gmail.com (Jonathan Fine) Date: Tue, 11 Sep 2018 16:57:16 +0100 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <20180911113422.GI1596@ando.pearwood.info> References: <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <5B91AC8B.9030909@canterbury.ac.nz> <20180908094150.GK27312@ando.pearwood.info> <20180909051901.GP27312@ando.pearwood.info> <8ad99c14-2712-8b89-0fc0-d6a4955eaa2d@stoneleaf.us> <20180911113422.GI1596@ando.pearwood.info> Message-ID: Summary: locals() and suggestion __params__ are similar, and roughly speaking each can be implemented from the other. Experts / pedants would prefer not to use the name __params__ for this purpose. Steve D'Aprano wrote: > I imagine it would be fairly easy to fill in such a special __params__ > local variable when the function is called. The interpreter already has > to process the positional and keyword arguments, it probably wouldn't be > that hard to add one more implicitly declared local and fill it in: [snip] > Its also going to suffer from race conditions, unless someone much > cleverer than me can think of a way to avoid them which doesn't slow > down function calls even more. As far as I know, locals() does not suffer from a race condition. But it's not a local variable. Rather, it's a function that returns a dict. Hence avoiding the race condition. Python has some keyword identifiers. Here's one >>> __debug__ = 1 SyntaxError: assignment to keyword Notice that this is a SYNTAX error. If __params__ were similarly a keyword identifier, then it would avoid the race condition. It would simply be a handle that allows, for example, key-value access to the state of the frame on the execution stack. In other words, a lower-level object from which locals() could be built. By the way, according to https://www.quora.com/What-is-the-difference-between-parameters-and-arguments-in-Python A parameter is a variable in a method definition. When a method is called, the arguments are the data you pass into the method's parameters. Parameter is variable in the declaration of function. Argument is the actual value of this variable that gets passed to function. In my opinion, the technically well-informed would prefer something like __args__ or __locals__ instead of __params__, for the current purpose. Finally, __params__ would simply be the value of __locals__ before any assignment has been done. Here's an example >>> def fn(a, b, c): ... lcls = locals() ... return lcls ... >>> fn(1, 2, 3) {'c': 3, 'b': 2, 'a': 1} Note: Even though lcls is the identifier for a local variable, at the time locals() is called the lcls identifier is unassigned, so not picked up by locals(). So far as I can tell, __params__ and locals() can be implemented in terms of each other. There could be practical performance benefits in providing the lower-level command __params__ (but with the name __locals__ or the like). 
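Here is a working version of the locals()-snapshot forwarding sketched
earlier in the thread (class names invented). Note that 'self' also has
to be dropped before forwarding, since super().method is already bound:

    class Base:
        def method(self, ccc=3, ddd=4, eee=5):
            return (ccc, ddd, eee)

    class Derived(Base):
        def method(self, aaa=1, bbb=2, ccc=3, ddd=4, eee=5):
            params = dict(locals())           # snapshot before any new locals appear
            for name in ('self', 'aaa', 'bbb'):
                params.pop(name)              # keep only what Base.method expects
            # ... do something with aaa and bbb here ...
            return super().method(**params)

    print(Derived().method(bbb=20, eee=50))   # (3, 4, 50)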
-- Jonathan From jfine2358 at gmail.com Tue Sep 11 14:53:34 2018 From: jfine2358 at gmail.com (Jonathan Fine) Date: Tue, 11 Sep 2018 19:53:34 +0100 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: References: <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <5B91AC8B.9030909@canterbury.ac.nz> <20180908094150.GK27312@ando.pearwood.info> <20180909051901.GP27312@ando.pearwood.info> <8ad99c14-2712-8b89-0fc0-d6a4955eaa2d@stoneleaf.us> <20180911113422.GI1596@ando.pearwood.info> Message-ID: I wrote: > In my opinion, the technically well-informed would prefer something > like __args__ or __locals__ instead of __params__, for the current > purpose. > > Finally, __params__ would simply be the value of __locals__ before any > assignment has been done. Following this up, I did a search for "__locals__" Python. The most interesting link I found was Implement PEP 422: Simple class initialisation hook https://bugs.python.org/issue17044#msg184195 Nick Coghlan wrote: Oh, that's bizarre - the presence of __locals__ is a side effect of calling locals() in the class body. So perhaps passing the namespace as a separate __init_class__ parameter is a better option. So it looks like (i) there's some complexity associated with locals(), and (ii) if we wish, it seems that __locals__ is available as a keyword identifier. Finally, another way to see that there's no race condition. The Python debugger supports inspection of stack frames. And it's a pure Python module. https://docs.python.org/3/library/pdb.html https://github.com/python/cpython/tree/3.7/Lib/pdb.py -- Jonathan From barry at python.org Tue Sep 11 20:33:00 2018 From: barry at python.org (Barry Warsaw) Date: Tue, 11 Sep 2018 17:33:00 -0700 Subject: [Python-ideas] PEP 420: implicit namespace sub-package In-Reply-To: <5B840160.6080703@hiphen-plant.com> References: <5B840160.6080703@hiphen-plant.com> Message-ID: Gallian Colombeau wrote on 8/27/18 06:49: > As I understand, for a package to allow being extended in this way, it > must be a namespace package and not contain a marker file. As a matter > of fact, no sub-package until the top level package can have a marker file: No, that's not true. > However, what is not discussed is "implicit namespace sub-package". In There really is no such thing. Packages are either PEP 420 style namespace packages, or regular packages. The latter contain __init__.py files. The language reference goes into quite a bit of detail on the matter. https://docs.python.org/3/reference/import.html#packages > Python 3.6 (I guess since the first implementation), if you have this > layout:
> Lib/test/namespace_pkgs
> ├── project1
> │   └── parent                 # Regular package
> │       ├── __init__.py
> │       └── child              # Namespace package
> │           └── one.py
> > you get "parent" as a regular package and "parent.child" as a namespace > package and it works (although now, every package data directory became > namespace packages and are importable, which may or may not be > desirable). The point is, does that add any value? Personally, I don't think so. You can do it, but it's not the intended purpose, so you're on your own. > I wasn't able to find > any discussion about this and, as far as I can see, there is actually no > use case for this as there is no possible way to contribute to the > "parent.child" namespace. Is that an intended behavior of PEP 420?
There can be use cases for subpackage namespace packages, although they are definitely more rare than top-level namespace packages. One possibility would be a plugin system, say for application 'foo', where they reserve a subpackage for separate-distribution plugins, E.g. foo.plugins.ext where foo/plugins/ext has no __init__.py file. > Wouldn't it be more appropriate to enforce a sub-package to be a regular > package if the parent package is a regular package? As Brett says, it's probably way too late to change this. -Barry From tritium-list at sdamon.com Wed Sep 12 06:44:37 2018 From: tritium-list at sdamon.com (Alex Walters) Date: Wed, 12 Sep 2018 06:44:37 -0400 Subject: [Python-ideas] __iter__(), keys(), and the mapping protocol In-Reply-To: References: Message-ID: <005101d44a85$9aac5cc0$d0051640$@sdamon.com> > -----Original Message----- > From: Python-ideas list=sdamon.com at python.org> On Behalf Of Serhiy Storchaka > Sent: Tuesday, September 11, 2018 2:54 AM > To: python-ideas at python.org > Subject: Re: [Python-ideas] __iter__(), keys(), and the mapping protocol > > 11.09.18 05:04, Elias Tarhini ????: > > This has been bouncing around in my head for a while regarding the > > requisite keys() method on mappings: > > > > How come the ** unpacking operator, a built-in language feature, relies > > on a non-dunder to operate? > > > > To me, I mean to say, requiring that classes implement keys() ? a method > > whose name is totally undistinguished ? in order to conform to the > > mapping protocol feels like a design running counter to Python's norm of > > using dunders for everything "hidden". I am not sure if it feels dirty > > to anybody else, however. Interestingly, the docs already say > > > > that /[f]or mappings, [__iter__()] should iterate over the keys of the > > container/, but it of course is not enforced in any way at present. > > > > So, then ? how about enforcing it? Should __iter__(), for the reasons > > above, replace the current purpose of keys() in mappings? > > > > I'm not properly equipped at the moment to mess around with CPython > > (sorry), but I assume at a minimum this would entail either replacing > > all instances of PyMapping_Keys() with PyObject_GetIter()or > > alternatively changing PyMapping_Keys() to call the latter. > > > > Does it sound like a reasonable change overall? > > Dict has keys(), while list doesn't. If use __iter__() instead of > keys(), {**m} or dict(m) will give unexpected result for some > non-mappings, like m = [0, 2, 1]: > > >>> {k: m[k] for k in m} > {0: 0, 2: 1, 1: 2} > I don't think switching to __iter__ will cause dict(a_list) to produce anything other than what it does now - a traceback if the list is anything but a list of pairs. I think if we were to go forward with switching map-unpacking to __iter__, it would produce confusing mappings like you show in your example. I don't think it?s a good idea to switch to __iter__, or even make a dunder method for keys. The dunder methods are dunder methods because they are not generally directly useful. I don't see a major problem with having the mapping api call keys() - this is not a next()/__next__() situation, where the method is not generally directly useful. 
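For what it's worth, the protocol under discussion is small: for ** unpacking, an object only needs keys() and __getitem__. A toy sketch (the class and its names are invented purely for illustration):

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y
    def keys(self):
        # ** unpacking asks for this, then indexes with each key.
        return ['x', 'y']
    def __getitem__(self, key):
        return getattr(self, key)

def show(**kwargs):
    return kwargs

print(show(**Point(1, 2)))    # {'x': 1, 'y': 2}
print({**Point(1, 2)})        # {'x': 1, 'y': 2}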
> _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ From steve at pearwood.info Wed Sep 12 08:17:49 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Wed, 12 Sep 2018 22:17:49 +1000 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: References: <5B91AC8B.9030909@canterbury.ac.nz> <20180908094150.GK27312@ando.pearwood.info> <20180909051901.GP27312@ando.pearwood.info> <8ad99c14-2712-8b89-0fc0-d6a4955eaa2d@stoneleaf.us> <20180911113422.GI1596@ando.pearwood.info> Message-ID: <20180912121748.GO1596@ando.pearwood.info> On Tue, Sep 11, 2018 at 04:57:16PM +0100, Jonathan Fine wrote: > Summary: locals() and suggestion __params__ are similar, and roughly > speaking each can be implemented from the other. You cannot get a snapshot of the current locals just from the function parameters, since the current locals will include variables which aren't parameters. Likewise you cannot get references to the original function parameters from the current local variables, since the params may have been re-bound since the call was made. (Unless you can guarantee that locals() is immediately called before any new local variables were created, i.e. on entry to the function, before any other code can run. As you point out further below.) There's a similarity only in the sense that parameters of a function are included as local variables, but the semantics of __params__ as proposed and locals() are quite different. They might even share some parts of implementation, but I don't think that really matters one way or another. Whether they do or don't is a mere implementation detail. > Experts / pedants > would prefer not to use the name __params__ for this purpose. I consider myself a pedant (and on a good day I might pass as something close to an expert on some limited parts of Python) and I don't have any objection to the *name* __params__. From the perspective of *inside* a function, it is a matter of personal taste whether you refer to parameter or argument: def func(a): # in the declaration, "a" is a parameter # inside the running function, once "a" has a value set, # its a matter of taste whether you call it a parameter # or an argument or both; I suppose it depends on whether # you are referring to the *variable* or its *value* # but here 1 is the argument bound to the parameter "a" result = func(1) It is the semantics that I think are problematic, not the choice of name. > Steve D'Aprano wrote: > > Its also going to suffer from race conditions, unless someone much > > cleverer than me can think of a way to avoid them which doesn't slow > > down function calls even more. > > As far as I know, locals() does not suffer from a race condition. But > it's not a local variable. Rather, it's a function that returns a > dict. Hence avoiding the race condition. Indeed. Each time you call locals(), it returns a new dict with a snapshot of the current local namespace. Because it all happens inside the same function call, no external thread can poke inside your current call to mess with your local variables. But that's different from setting function.__params__ to passed in arguments. By definition, each external caller is passing in its own set of arguments. 
If you have three calls to the function: function(a=1, b=2) # called by A function(a=5, b=8) # called by B function(a=3, b=4) # called by C In single-threaded code, there's no problem here: A makes the first call; the interpreter sets function.__params__ to A's arguments; the function runs with A's arguments and returns; only then can B make its call; the interpreter sets function.__params__ to B's arguments; the function runs with B's arguments and returns; only then can C make its call; the interpreter sets function.__params__ to C's arguments; the function runs with C's arguments and returns but in multi-threaded code, unless there's some form of locking, the three sets can interleave in any unpredictable order, e.g.: A makes its call; B makes its call; the interpreter sets function.__params__ to B's arguments; the interpreter sets function.__params__ to A's arguments; the function runs with B's arguments and returns; C make its call; the interpreter sets function.__params__ to C's arguments; the function runs with A's arguments and returns; the function runs with C's arguments and returns. We could solve this race condition with locking, or by making the pair of steps: the interpreter sets function.__params__ the function runs and returns a single atomic step. But that introduces a deadlock: once A calls function(), threads B and C will pause (potentially for a very long time) waiting for A's call to complete, before they can call the same function. I'm not an expert on threaded code, so it is possible I've missed some non-obvious fix for this, but I expect not. In general, solving race conditions without deadlocks is a hard problem. > Python has some keyword identifiers. Here's one > > >>> __debug__ = 1 > SyntaxError: assignment to keyword > > > Notice that this is a SYNTAX error. If __params__ were similarly a > keyword identifier, then it would avoid the race condition. The problem isn't because the caller assigns to __params__ manually. At no stage does Python code need to try setting "__params__ = x", in fact that ought to be quite safe because it would only be a local variable. The race condition problem comes from trying to set function.__params__ on each call, even if its the interpreter doing the setting. > It would > simply be a handle that allows, for example, key-value access to the > state of the frame on the execution stack. In other words, a > lower-level object from which locals() could be built. That wouldn't have the proposed semantics. __params__ is supposed to be a dict showing the initial values of the arguments passed in to the function, not merely a reference to the current frame. [...] > In my opinion, the technically well-informed would prefer something > like __args__ or __locals__ instead of __params__, for the current > purpose. Oh well, that puts me in my place :-) I have no objection to __args__, but __locals__ would be very inappropriate, as locals refers to *all* the local variables, not just those which are declared as parameters. (Parameters are a *subset* of locals.) > Finally, __params__ would simply be the value of __locals__ before any > assignment has been done. Indeed. As Chris (I think it was) pointed out, we could reduce the cost of this with a bit of compiler magic. A function that never refers to __params__ would run just as it does today: def func(a): print(a) might look something like this: 2 0 LOAD_GLOBAL 0 (print) 2 LOAD_FAST 0 (a) 4 CALL_FUNCTION 1 6 POP_TOP 8 LOAD_CONST 0 (None) 10 RETURN_VALUE just as it does now. 
But if the compiler sees a reference to __params__ in the body, it could compile in special code like this: def func(a): print(a, __params__) 2 0 LOAD_GLOBAL 0 (locals) 2 CALL_FUNCTION 0 4 STORE_FAST 1 (__params__) 3 6 LOAD_GLOBAL 1 (print) 8 LOAD_FAST 0 (a) 10 LOAD_FAST 1 (__params__) 12 CALL_FUNCTION 2 14 POP_TOP 16 LOAD_CONST 0 (None) 18 RETURN_VALUE Although more likely we'd want a special op-code to populate __params__, rather than calling the built-in locals() function. I don't think that's a bad idea, but it does add more compiler magic, and I'm not sure that there is sufficient justification for it. -- Steve From boxed at killingar.net Wed Sep 12 09:15:24 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Wed, 12 Sep 2018 15:15:24 +0200 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <1ABCDBAD-D477-40F2-AC1B-27E50F921E43@killingar.net> References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <20180906131028.GB27312@ando.pearwood.info> <79134417-9a24-4605-85ed-2d2e7b6c88b8@googlegroups.com> <69fdde9f-d661-b901-5eb2-4c1c8f08a946@kynesim.co.uk> <5B91AC8B.9030909@canterbury.ac.nz> <20180908094150.GK27312@ando.pearwood.info> <1ABCDBAD-D477-40F2-AC1B-27E50F921E43@killingar.net> Message-ID: <536CC768-909E-426D-9F81-C694935D0BA3@killingar.net> > On 8 Sep 2018, at 14:23, Anders Hovm?ller wrote: > >> To me, the "30% of all arguments" deserves more careful examination. >> Does the proposal significant improve the reading and writing of this >> code? And are there other, perhaps better, ways of improving this >> code? > > Maybe my tool should be expanded to produce more nuanced data? Like how many of those 30% are: > > - arity 1,2,3, etc? (Arity 1 maybe should be discarded as being counted unfairly? I don?t think so but some clearly do) > - matches 1 argument, 2,3,4 etc? Matching just one is of less value than matching 5. > > Maybe some other statistics? I've updated the tool to also print statistics on how many arguments there are for the places where it can perform the analysis. I also added statistics for how long variable names it finds. I'm pretty sure almost all places with the length 1 or 2 for variable names passed would be better if they had been synchronized. Those places are also an argument for my suggestion I think, because if you gain something to synchronize then that will make you less likely to shorten variable names down to 1 or 2 characters to get brevity. Maybe... If you exclude calls to functions with just one argument (not parameters) then the hit percentage on the code base at work drops from ~36% to ~31%. Not a big difference overall. I've updated the gist: https://gist.github.com/boxed/610b2ba73066c96e9781aed7c0c0b25c / Anders -------------- next part -------------- An HTML attachment was scrubbed... URL: From jfine2358 at gmail.com Wed Sep 12 09:23:34 2018 From: jfine2358 at gmail.com (Jonathan Fine) Date: Wed, 12 Sep 2018 14:23:34 +0100 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <20180912121748.GO1596@ando.pearwood.info> References: <5B91AC8B.9030909@canterbury.ac.nz> <20180908094150.GK27312@ando.pearwood.info> <20180909051901.GP27312@ando.pearwood.info> <8ad99c14-2712-8b89-0fc0-d6a4955eaa2d@stoneleaf.us> <20180911113422.GI1596@ando.pearwood.info> <20180912121748.GO1596@ando.pearwood.info> Message-ID: Steve Barnes suggested adding __params__, as in > def a_method(self, cr, uid, ids, values, context=None): > ... 
> params = {k:v for k,v in __params__ if k in parent.a_method.keys()} > # Possibly add some additional entries here! > super(self, parent).a_method(**params) Steve D'Aprano commented > In single-threaded code, there's no problem here: > > A makes the first call; > the interpreter sets function.__params__ to A's arguments; > the function runs with A's arguments and returns; I'm puzzled here. Steve B provided code fragment for k,v in __params__ while Steve D provided code fragment function.__params__ by which I think he meant in terms of Steve B's example a_method.__params__ Perhaps Steve D thought Steve B wrote def a_method(self, cr, uid, ids, values, context=None): ... params = {k:v for k,v in a_method.__params__ # Is this what Steve D thought Steve B wrote? if k in parent.a_method.keys() } # Possibly add some additional entries here! super(self, parent).a_method(**params) If Steve B had written this, then I would agree with Steve D's comment. But as it is, I see no race condition problem, should __params__ be properly implemented as a keyword identifier. Steve D: Please clarify or explain you use of function.__params__ Perhaps it was a misunderstanding. By the way: I've made a similar mistake, on this very thread. So I hope no great shame is attached to such errors. https://mail.python.org/pipermail/python-ideas/2018-September/053224.html Summary: I addressed the DEFINING problem. My mistake. Some rough ideas for the CALLING problem. Anders has kindly pointed out to me, off-list, that I solved the wrong problem. His problem is CALLING the function fn, not DEFINING fn. Thank you very much for this, Anders. -- Jonathan From ethan at stoneleaf.us Wed Sep 12 09:59:44 2018 From: ethan at stoneleaf.us (Ethan Furman) Date: Wed, 12 Sep 2018 06:59:44 -0700 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <20180912121748.GO1596@ando.pearwood.info> References: <5B91AC8B.9030909@canterbury.ac.nz> <20180908094150.GK27312@ando.pearwood.info> <20180909051901.GP27312@ando.pearwood.info> <8ad99c14-2712-8b89-0fc0-d6a4955eaa2d@stoneleaf.us> <20180911113422.GI1596@ando.pearwood.info> <20180912121748.GO1596@ando.pearwood.info> Message-ID: <3cca0175-054b-bb81-7e4e-df7f0773bdd6@stoneleaf.us> On 09/12/2018 05:17 AM, Steven D'Aprano wrote: > Indeed. Each time you call locals(), it returns a new dict with a > snapshot of the current local namespace. Because it all happens inside > the same function call, no external thread can poke inside your current > call to mess with your local variables. > > But that's different from setting function.__params__ to passed in > arguments. By definition, each external caller is passing in its own set > of arguments. 
If you have three calls to the function: > > function(a=1, b=2) # called by A > function(a=5, b=8) # called by B > function(a=3, b=4) # called by C > > In single-threaded code, there's no problem here: > > A makes the first call; > the interpreter sets function.__params__ to A's arguments; > the function runs with A's arguments and returns; > > only then can B make its call; > the interpreter sets function.__params__ to B's arguments; > the function runs with B's arguments and returns; > > only then can C make its call; > the interpreter sets function.__params__ to C's arguments; > the function runs with C's arguments and returns > > > but in multi-threaded code, unless there's some form of locking, the > three sets can interleave in any unpredictable order, e.g.: > > A makes its call; > B makes its call; > the interpreter sets function.__params__ to B's arguments; > the interpreter sets function.__params__ to A's arguments; > the function runs with B's arguments and returns; > C make its call; > the interpreter sets function.__params__ to C's arguments; > the function runs with A's arguments and returns; > the function runs with C's arguments and returns. > > > We could solve this race condition with locking, or by making the pair > of steps: > > the interpreter sets function.__params__ > the function runs and returns > > a single atomic step. But that introduces a deadlock: once A calls > function(), threads B and C will pause (potentially for a very long > time) waiting for A's call to complete, before they can call the same > function. > > I'm not an expert on threaded code, so it is possible I've missed some > non-obvious fix for this, but I expect not. In general, solving race > conditions without deadlocks is a hard problem. I believe the solution is `threading.local()`, and Python would automatically use it in these situations. -- ~Ethan~ From jfine2358 at gmail.com Wed Sep 12 10:15:51 2018 From: jfine2358 at gmail.com (Jonathan Fine) Date: Wed, 12 Sep 2018 15:15:51 +0100 Subject: [Python-ideas] Off-topic: Email with content: Cannot read property 'get' of null Message-ID: This is off-topic, but you might enjoy reading it. I got an email today, whose entire body text was Cannot read property 'get' of null It reminded me of our discussions of None-aware operators. And also Errors should never pass silently. Unless explicitly silenced. -- Jonathan From steve at pearwood.info Wed Sep 12 10:39:18 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Thu, 13 Sep 2018 00:39:18 +1000 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: References: <20180909051901.GP27312@ando.pearwood.info> <8ad99c14-2712-8b89-0fc0-d6a4955eaa2d@stoneleaf.us> <20180911113422.GI1596@ando.pearwood.info> <20180912121748.GO1596@ando.pearwood.info> Message-ID: <20180912143918.GP1596@ando.pearwood.info> On Wed, Sep 12, 2018 at 02:23:34PM +0100, Jonathan Fine wrote: > Steve Barnes suggested adding __params__, as in > > > def a_method(self, cr, uid, ids, values, context=None): > > ... > > params = {k:v for k,v in __params__ if k in parent.a_method.keys()} > > # Possibly add some additional entries here! > > super(self, parent).a_method(**params) In context, what Steve Barnes said was If this [__params__] was accessible externally, as fn.__defaults__ is [...] https://mail.python.org/pipermail/python-ideas/2018-September/053322.html Here is the behaviour of fn.__defaults__: py> def fn(a=1, b=2, c=3): ... pass ... 
py> fn.__defaults__ (1, 2, 3) Notice that it is an externally acessible attribute of the function object. If that's not what Steve Barnes meant, then I have no idea why fn.__defaults__ is relevant or what he meant. I'll confess that I couldn't work out what Steve's code snippet was supposed to mean: params = {k:v for k,v in __params__ if k in parent.a_method.keys()} Does __params__ refer to the currently executing a_method, or the superclass method being called later on in the line? Why doesn't parent.a_method have parens? Since parent.a_method probably isn't a dict, why are we calling keys() on a method object? The whole snippet was too hard for me to comprehend, so I went by the plain meaning of the words he used to describe the desired semantics. If __params__ is like fn.__defaults__, then that would require setting fn.__params__ on each call. Perhaps I'm reading too much into the "accessible externally" part, since Steve's example doesn't seem to actually be accessing it externally. -- Steve From jfine2358 at gmail.com Wed Sep 12 10:58:25 2018 From: jfine2358 at gmail.com (Jonathan Fine) Date: Wed, 12 Sep 2018 15:58:25 +0100 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <20180912143918.GP1596@ando.pearwood.info> References: <20180909051901.GP27312@ando.pearwood.info> <8ad99c14-2712-8b89-0fc0-d6a4955eaa2d@stoneleaf.us> <20180911113422.GI1596@ando.pearwood.info> <20180912121748.GO1596@ando.pearwood.info> <20180912143918.GP1596@ando.pearwood.info> Message-ID: Hi Steve Thank you for your prompt reply. You wrote: > I'll confess that I couldn't work out what Steve B's code snippet was > supposed to mean: > params = {k:v for k,v in __params__ if k in parent.a_method.keys()} The Zen of Python (which might not apply here) says: In the face of ambiguity, refuse the temptation to guess. Now that we have more clarity, Steve D'A, please let me ask you a direct question. My question is about correctly implementing of __params__ as a keyword identifier, with semantics as in Steve B's code snippet above. Here's my question: Do you think implementing this requires the avoidance of a race hazard? Or perhaps it can be done, as I suggested, entirely within the execution frame on the stack? -- Jonathan From steve at pearwood.info Wed Sep 12 11:16:37 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Thu, 13 Sep 2018 01:16:37 +1000 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <3cca0175-054b-bb81-7e4e-df7f0773bdd6@stoneleaf.us> References: <20180909051901.GP27312@ando.pearwood.info> <8ad99c14-2712-8b89-0fc0-d6a4955eaa2d@stoneleaf.us> <20180911113422.GI1596@ando.pearwood.info> <20180912121748.GO1596@ando.pearwood.info> <3cca0175-054b-bb81-7e4e-df7f0773bdd6@stoneleaf.us> Message-ID: <20180912151637.GQ1596@ando.pearwood.info> On Wed, Sep 12, 2018 at 06:59:44AM -0700, Ethan Furman wrote: > On 09/12/2018 05:17 AM, Steven D'Aprano wrote: [...] > >We could solve this race condition with locking, or by making the pair > >of steps: [...] > >I'm not an expert on threaded code, so it is possible I've missed some > >non-obvious fix for this, but I expect not. In general, solving race > >conditions without deadlocks is a hard problem. > > I believe the solution is `threading.local()`, and Python would > automatically use it in these situations. 
I'm finding it hard to understand the documentation for threading.local(): https://docs.python.org/3/library/threading.html#threading.local as there isn't any *wink* although it does refer to the docstring of a private implementation module. But I can't get it to work. Perhaps I'm doing something wrong: import time from threading import Thread, local def func(): pass def attach(value): func.__params__ = local() func.__params__.value = value def worker(i): print("called from thread %s" % i) attach(i) assert func.__params__.value == i time.sleep(3) value = func.__params__.value if value != i: print("mismatch", i, value) for i in range(5): t = Thread(target=worker, args=(i,)) t.start() print() When I run that, each of the threads print their "called from ..." message, the assertions all pass, then a couple of seconds later they consistently all raise exceptions: Exception in thread Thread-1: Traceback (most recent call last): File "/usr/local/lib/python3.5/threading.py", line 914, in _bootstrap_inner self.run() File "/usr/local/lib/python3.5/threading.py", line 862, in run self._target(*self._args, **self._kwargs) File "", line 5, in worker AttributeError: '_thread._local' object has no attribute 'value' In any case, if Steve Barnes didn't actually intend for the __params__ to be attached to the function object as an externally visible attribute, the whole point is moot. -- Steve From steve at pearwood.info Wed Sep 12 11:38:08 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Thu, 13 Sep 2018 01:38:08 +1000 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: References: <8ad99c14-2712-8b89-0fc0-d6a4955eaa2d@stoneleaf.us> <20180911113422.GI1596@ando.pearwood.info> <20180912121748.GO1596@ando.pearwood.info> <20180912143918.GP1596@ando.pearwood.info> Message-ID: <20180912153808.GR1596@ando.pearwood.info> On Wed, Sep 12, 2018 at 03:58:25PM +0100, Jonathan Fine wrote: > My question is about correctly implementing of __params__ as a keyword > identifier, with semantics as in Steve B's code snippet above. The semantics of Steve's code snippet are ambiguous. > Here's my question: Do you think implementing this requires the > avoidance of a race hazard? I don't know what "this" is any more. I thought Steve wanted an externally accessible fn.__params__ dict, as that's what he said he wanted, but his code snippet doesn't show that. If there is no externally accessible fn.__params__ dict, then there's no race hazard. I see no reason why a __params__ local variable would be subject to race conditions. But as you so rightly quoted the Zen at me for guessing in the face of ambiguity, without knowing what Steve intends, I can't answer your question. As a purely internal local variable, it would still have the annoyance that writing to the dict might not actually effect the local values, the same issue that locals() has. But if we cared enough, we could make the dict a proxy rather than a real dict. I see no reason why __params__ must be treated as special keyword, like __debug__, although given that it is involved in special compiler magic, that might be prudent. (Although, in sufficient old versions of Python, even __debug__ was just a regular name.) > Or perhaps it can be done, as I suggested, > entirely within the execution frame on the stack? Indeed. Like I said right at the start, there shouldn't be any problem for the compiler adding a local variable to each function (or just when required) containing the initial arguments bound to the function parameters. 
*How* the compiler does it, whether it is done during compilation or on entry to the function call, or something else, is an implementation detail which presumably each Python interpreter can choose for itself. All of this presumes that it is a desirable feature. -- Steve From gadgetsteve at live.co.uk Wed Sep 12 16:14:37 2018 From: gadgetsteve at live.co.uk (Steve Barnes) Date: Wed, 12 Sep 2018 20:14:37 +0000 Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <20180912153808.GR1596@ando.pearwood.info> References: <8ad99c14-2712-8b89-0fc0-d6a4955eaa2d@stoneleaf.us> <20180911113422.GI1596@ando.pearwood.info> <20180912121748.GO1596@ando.pearwood.info> <20180912143918.GP1596@ando.pearwood.info> <20180912153808.GR1596@ando.pearwood.info> Message-ID: On 12/09/2018 16:38, Steven D'Aprano wrote: > On Wed, Sep 12, 2018 at 03:58:25PM +0100, Jonathan Fine wrote: > >> My question is about correctly implementing of __params__ as a keyword >> identifier, with semantics as in Steve B's code snippet above. > > The semantics of Steve's code snippet are ambiguous. > > >> Here's my question: Do you think implementing this requires the >> avoidance of a race hazard? > > I don't know what "this" is any more. I thought Steve wanted an > externally accessible fn.__params__ dict, as that's what he said he > wanted, but his code snippet doesn't show that. > > If there is no externally accessible fn.__params__ dict, then there's no > race hazard. I see no reason why a __params__ local variable would be > subject to race conditions. But as you so rightly quoted the Zen at me > for guessing in the face of ambiguity, without knowing what Steve > intends, I can't answer your question. > > As a purely internal local variable, it would still have the annoyance > that writing to the dict might not actually effect the local values, the > same issue that locals() has. But if we cared enough, we could make the > dict a proxy rather than a real dict. > > I see no reason why __params__ must be treated as special keyword, like > __debug__, although given that it is involved in special compiler magic, > that might be prudent. > > (Although, in sufficient old versions of Python, even __debug__ was just > a regular name.) > > >> Or perhaps it can be done, as I suggested, >> entirely within the execution frame on the stack? > > Indeed. > > Like I said right at the start, there shouldn't be any problem for the > compiler adding a local variable to each function (or just when > required) containing the initial arguments bound to the function > parameters. *How* the compiler does it, whether it is done during > compilation or on entry to the function call, or something else, is an > implementation detail which presumably each Python interpreter can > choose for itself. > > All of this presumes that it is a desirable feature. > > > Hi, My intent with the __params__, (or whatever it might end up being called), was to provide a mechanism whereby we could: a) find out, before calling, which parameters a function/method accepts, (just as __defaults__ gives us which values the function/method has default values for so does not require in every call with its defaults). Since this would normally be a compile time operation I do not anticipate any race conditions. I suspect that this would also be of great use to IDE authors and others as well as the use case on this thread. 
b) a convenient mechanism for accessing all of the supplied parameters/arguments, (whether actually given or from defaults), from within the function/method both the parameter names and the values supplied at the time of the specific call. The example I gave was a rough and ready filtering of the outer functions parameters down to those that are accepted by the function that is about to be called, (I suspect that __locals__() might have been a better choice here). I don't anticipate race conditions here either as the values would be local at this point. The idea was to provide a similar mechanism to the examples of functions that accept a list and dictionary in addition to the parameters that they do consume so as to be able to work with parameter lists/dictionaries that exceed the requirements. The difference is that, since we can query the function/method for what parameters it accepts and filter what we have to match, we do not need to alter the signature of the called item. This is important when providing wrappers for code that we do not have the freedom to alter. I have done a little testing and found that: a) if we have a fn(a, b, c) and call it with fn(b=2, c=3, a=1) it is quite happy and assigns the correct values so constructing a dictionary that satisfies all of the required parameters and calling with fn(**the_dict) is fine. b) Calling dir() or __locals__() on the first line of the function gives the required information (but blocks the docstring which would be a bad idea). The one worry is how to get the required parameter/argument list for overloaded functions or methods but AFAIK these are all calls to wrapped C/C++/other items so already take (*arg, **argv) inputs. I would guess that we would need some sort of indicator for this type of function. I hope I have made my thoughts clearer rather than muddier :-) thank you all for taking the time to think about this. -- Steve (Gadget) Barnes Any opinions in this message are my personal opinions and do not reflect those of my employer. --- This email has been checked for viruses by AVG. https://www.avg.com From jamtlu at gmail.com Wed Sep 12 17:15:51 2018 From: jamtlu at gmail.com (James Lu) Date: Wed, 12 Sep 2018 17:15:51 -0400 Subject: [Python-ideas] PTPython REPL in IDLE In-Reply-To: References: Message-ID: Have y?all seen ptpython?s autocomplete and syntax highlighting features? Ptpython, usually used as a cli application, might be worth integrating into IDLE. From steve at pearwood.info Wed Sep 12 19:19:46 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Thu, 13 Sep 2018 09:19:46 +1000 Subject: [Python-ideas] PTPython REPL in IDLE In-Reply-To: References: Message-ID: <20180912231946.GB19437@ando.pearwood.info> On Wed, Sep 12, 2018 at 05:15:51PM -0400, James Lu wrote: > Have y?all seen ptpython?s autocomplete and syntax highlighting > features? No. Do you have a link? What specific features have excited you? The standard Python REPL now comes with autocomplete turned on by default. How does that compare? > Ptpython, usually used as a cli application, might be worth > integrating into IDLE. Is there a reason why Ptpython should be preferred over, say, bpython? https://bpython-interpreter.org/ Not that I'm suggesting bpython, its just that I've used bpython. 
-- Steve From mike at selik.org Wed Sep 12 19:42:41 2018 From: mike at selik.org (Michael Selik) Date: Wed, 12 Sep 2018 16:42:41 -0700 Subject: [Python-ideas] __iter__(), keys(), and the mapping protocol In-Reply-To: References: Message-ID: Elias, I'm a little confused about what you're suggesting. You want to have a Mapping that does not supply a keys method? What use case motivated your proposal? On Mon, Sep 10, 2018, 7:04 PM Elias Tarhini wrote: > This has been bouncing around in my head for a while regarding the > requisite keys() method on mappings: > > How come the ** unpacking operator, a built-in language feature, relies on > a non-dunder to operate? > > To me, I mean to say, requiring that classes implement keys() ? a method > whose name is totally undistinguished ? in order to conform to the mapping > protocol feels like a design running counter to Python's norm of using > dunders for everything "hidden". I am not sure if it feels dirty to anybody > else, however. Interestingly, the docs already say > that *[f]or > mappings, [__iter__()] should iterate over the keys of the container*, > but it of course is not enforced in any way at present. > > So, then ? how about enforcing it? Should __iter__(), for the reasons > above, replace the current purpose of keys() in mappings? > > I'm not properly equipped at the moment to mess around with CPython > (sorry), but I assume at a minimum this would entail either replacing all > instances of PyMapping_Keys() with PyObject_GetIter() or alternatively > changing PyMapping_Keys() to call the latter. > > Does it sound like a reasonable change overall? > > Eli > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike at selik.org Wed Sep 12 21:21:32 2018 From: mike at selik.org (Michael Selik) Date: Wed, 12 Sep 2018 18:21:32 -0700 Subject: [Python-ideas] Improving fn(arg=arg, name=name, wibble=wibble) code In-Reply-To: References: Message-ID: On Sat, Sep 8, 2018 at 4:17 AM Jonathan Fine wrote: > I thank Steve D'Aprano for pointing me to this real-life (although > perhaps extreme) code example > > > https://github.com/Tinche/aiofiles/blob/master/aiofiles/threadpool/__init__.py#L17-L37 It's hard to know from just a brief glance how the "private" _open function will be used. Using GitHub's repo search, it looks like it's only called once and only in this module. Since it has essentially the same signature as the "public" open function, why not just use *args and **kwds for the private _open? Here's a quick refactor that emphasizes the pass-through relationship of open and _open: def open(file, mode='r', buffering=-1, encoding=None, errors=None, newline=None, closefd=True, opener=None, *, loop=None, executor=None): return AiofilesContextManager(_open(**locals())) @asyncio.coroutine def _open(file, *args, **kwds): """Open an asyncio file.""" executor = kwds.pop('executor') loop = kwds.pop('loop') if loop is None: loop = asyncio.get_event_loop() callback = partial(sync_open, file, *args, **kwds) f = yield from loop.run_in_executor(executor, callback) return wrap(f, loop=loop, executor=executor) I normally dislike the use of **locals(), but here it helps make explicit that the "public" open function passes all arguments through to its helper. 
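A stripped-down sketch of the same pass-through shape, with invented names so that it runs on its own (the aiofiles helpers above are not importable here):

def public_open(path, mode='r', encoding=None, *, retries=3):
    # locals() here holds exactly the parameters, so every argument
    # is forwarded to the helper by name.
    return _helper(**locals())

def _helper(path, mode, encoding, retries):
    return (path, mode, encoding, retries)

print(public_open('data.txt', encoding='utf-8'))
# ('data.txt', 'r', 'utf-8', 3)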
-------------- next part -------------- An HTML attachment was scrubbed... URL: From dkteresi at gmail.com Wed Sep 12 23:45:33 2018 From: dkteresi at gmail.com (David Teresi) Date: Wed, 12 Sep 2018 23:45:33 -0400 Subject: [Python-ideas] Python dialect that compiles into python Message-ID: <5b99dd60.1c69fb81.33777.5d83@mx.google.com> Not totally convinced about this. It would require PEP writers? to fully implement their proposed feature, people can usually get a pretty good idea of how a feature is supposed to work through PEPs (most of which are extremely well written), and these "mods" wouldn't be used in production code anyway since they're not backwards compatible with the rest of the language.On Sep 10, 2018 8:15 PM, Abe Dillon wrote: > > [Steven D'Aprano] >> >> It would be great for non-C coders to be able?to prototype proposed >> syntax changes to get a feel for what works and?what doesn't. > > > I think it would be great in general for the community to be able to try out ideas and mull things over. > > If there was something like a Python Feature Index (PyFI) and you could install mods to the language, > it would allow people to try out ideas before rejecting them or incorporating them into the language > (or putting them on hold until someone suggests a better implementation). > > I could even see features that never make it into the language, but stick around PyFI and get regular > maintenance because: > ? ? A) they're controversial changes that some love and some hate > ? ? B) they make things easier in some domain but otherwise don't warrant adoption > > It would have to be made clear from the start that Python can't guarantee backward compatibility with > any mods, which should prevent excessive fragmentation (if you want your code to be portable, don't > use mod). > > On Fri, Sep 7, 2018 at 7:30 PM Steven D'Aprano wrote: >> >> On Fri, Sep 07, 2018 at 11:57:50AM +0000, Robert Vanden Eynde wrote: >> >> > Many features on this list propose different syntax to python, >> > producing different python "dialects" that can statically be >> > transformed to python : >> >> [...] >> > Using a modified version of ast, it is relatively easy to modifiy the >> > syntax tree of a program to produce another program. So one could >> > compile the "python dialect" into regular python. The last example >> > with partially for example doesn't even need new syntax. >> >> [...] >> > Actually, I might start to write this lib, that looks fun. >> >> I encourage you to do so! It would be great for non-C coders to be able >> to prototype proposed syntax changes to get a feel for what works and >> what doesn't. >> >> There are already a few joke Python transpilers around, such as >> "Like, Python": >> >> https://jon.how/likepython/ >> >> but I think this is a promising technique that could be used more to >> keep the core Python language simple while not *entirely* closing the >> door to people using domain-specific (or project-specific) syntax. 
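As a rough illustration of the ast-based rewriting being discussed (a deliberately silly toy transform, not a proposal): the "dialect" below treats + as *, and is compiled back into ordinary Python.

import ast

class PlusBecomesTimes(ast.NodeTransformer):
    # Rewrite every binary '+' into '*' before compiling.
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Mult()
        return node

source = "result = 3 + 4"
tree = PlusBecomesTimes().visit(ast.parse(source))
ast.fix_missing_locations(tree)
ns = {}
exec(compile(tree, "<dialect>", "exec"), ns)
print(ns["result"])    # 12, because + was rewritten to *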
>> >> >> -- >> Steve >> _______________________________________________ >> Python-ideas mailing list >> Python-ideas at python.org >> https://mail.python.org/mailman/listinfo/python-ideas >> Code of Conduct: http://python.org/psf/codeofconduct/ From boxed at killingar.net Thu Sep 13 00:58:36 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Thu, 13 Sep 2018 06:58:36 +0200 Subject: [Python-ideas] PTPython REPL in IDLE In-Reply-To: <20180912231946.GB19437@ando.pearwood.info> References: <20180912231946.GB19437@ando.pearwood.info> Message-ID: <4D77F64A-A68F-4E07-96DA-2E8A7DD923DD@killingar.net> >> Have y?all seen ptpython?s autocomplete and syntax highlighting >> features? > > No. Do you have a link? What specific features have excited you? IPython autocomplete is now powered by ptpython and Jedi. So if you?ve used it recently you?re already familiar with it. In the context of IDLE it seems like Jedi is the relevant piece of technology. Ptpython is short for ?prompt toolkit Python?, ie command line/text mode. / Anders From eltrhn at gmail.com Thu Sep 13 01:50:03 2018 From: eltrhn at gmail.com (Elias Tarhini) Date: Wed, 12 Sep 2018 22:50:03 -0700 Subject: [Python-ideas] __iter__(), keys(), and the mapping protocol In-Reply-To: References: Message-ID: On Wed, Sep 12, 2018 at 4:42 PM Michael Selik wrote: > You want to have a Mapping that does not supply a keys method? What use > case motivated your proposal? > Yes, my proposal was to consider allowing __iter__() to subsume keys() entirely, for the reasons outlined in my second email -- which I'm just realizing was an accidental "reply one" to Alex Walters rather than a "reply all", yikes! Here it is, duplicated: Ahh, no, I phrased my question a bit badly -- I'm not proposing that the > keys() method be *removed* from what it exists on currently, but rather > that this: > > > To be treated like a mapping everywhere, python requires that you > define* a keys() method, so why not use it? > > be changed, such that only __iter__() and __getitem__() indicate a mapping > rather than keys() and the same. The latter method would then be > optional, but not necessarily removed from the front-facing API. > > Granted, my only strong argument is that the ** unpacking operator depends > on this method to do its job, and it's currently alone amongst Python's > operators in depending on a non-dunder to do so; that, combined with the > point heading the below paragraph, is also why I'm only going after keys() > ? not items() or sequences' index() or any of the other > normally-named-but-important methods. > > And given that __iter__() is already recommended to iterate over a > mapping's keys, it doesn't seem illogical to me to just let it take that > job officially (and in turn, I guess, to make keys() on stdlib/built-in > types just return __iter__()). I don't believe there would be another > expected use for it in a mapping, anyway, correct? (I haven't done the > research here, but there could be third-party authors who use it to iterate > over items(), given that ?why does it iterate over just keys? a > somewhat-recognizable discussion topic on such beginner forums as > /r/learnpython... if so, this could be a good opportunity to > force-standardize that behavior as well.) > > Does this clarify my intent? ...I don't doubt that there are roadblocks > I'm still ignoring, though. 
> > Eli > However, Serihy's example of {**[0, 2, 1]} is so-plainly irreconcilable -- somewhat embarrassing for me to have missed it -- that I'm now reluctantly no longer in favor. (Well, really, I'm tempted to say *why not?*, but I do see that it wouldn't be a good thing overall.) And I still kind of feel that there should be a dunder involved somewhere in this, but nowhere near strongly enough to dispute that *"[t]he dunder methods are dunder methods because they are not generally directly useful. [There doesn't seem to be] a major problem with having the mapping api call keys() [...]"*, as it's reasonable and rationalizes the current system well enough. Thank you for bearing with ;) Eli On Wed, Sep 12, 2018 at 4:42 PM, Michael Selik wrote: > Elias, > I'm a little confused about what you're suggesting. You want to have a > Mapping that does not supply a keys method? What use case motivated your > proposal? > > > On Mon, Sep 10, 2018, 7:04 PM Elias Tarhini wrote: > >> This has been bouncing around in my head for a while regarding the >> requisite keys() method on mappings: >> >> How come the ** unpacking operator, a built-in language feature, relies >> on a non-dunder to operate? >> >> To me, I mean to say, requiring that classes implement keys() ? a method >> whose name is totally undistinguished ? in order to conform to the mapping >> protocol feels like a design running counter to Python's norm of using >> dunders for everything "hidden". I am not sure if it feels dirty to anybody >> else, however. Interestingly, the docs already say >> >> that *[f]or mappings, [__iter__()] should iterate over the keys of the >> container*, but it of course is not enforced in any way at present. >> >> So, then ? how about enforcing it? Should __iter__(), for the reasons >> above, replace the current purpose of keys() in mappings? >> >> I'm not properly equipped at the moment to mess around with CPython >> (sorry), but I assume at a minimum this would entail either replacing all >> instances of PyMapping_Keys() with PyObject_GetIter() or alternatively >> changing PyMapping_Keys() to call the latter. >> >> Does it sound like a reasonable change overall? >> >> Eli >> _______________________________________________ >> Python-ideas mailing list >> Python-ideas at python.org >> https://mail.python.org/mailman/listinfo/python-ideas >> Code of Conduct: http://python.org/psf/codeofconduct/ >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike at selik.org Thu Sep 13 01:55:55 2018 From: mike at selik.org (Michael Selik) Date: Wed, 12 Sep 2018 22:55:55 -0700 Subject: [Python-ideas] __iter__(), keys(), and the mapping protocol In-Reply-To: References: Message-ID: The dict keys method has other benefits beyond iteration. For example, it provides a set-like interface. On Wed, Sep 12, 2018, 10:50 PM Elias Tarhini wrote: > On Wed, Sep 12, 2018 at 4:42 PM Michael Selik wrote: > >> You want to have a Mapping that does not supply a keys method? What use >> case motivated your proposal? >> > > Yes, my proposal was to consider allowing __iter__() to subsume keys() > entirely, for the reasons outlined in my second email -- which I'm just > realizing was an accidental "reply one" to Alex Walters rather than a > "reply all", yikes! 
Here it is, duplicated: > > Ahh, no, I phrased my question a bit badly -- I'm not proposing that the >> keys() method be *removed* from what it exists on currently, but rather >> that this: >> >> > To be treated like a mapping everywhere, python requires that you >> define* a keys() method, so why not use it? >> >> be changed, such that only __iter__() and __getitem__() indicate a >> mapping rather than keys() and the same. The latter method would then be >> optional, but not necessarily removed from the front-facing API. >> >> Granted, my only strong argument is that the ** unpacking operator >> depends on this method to do its job, and it's currently alone amongst >> Python's operators in depending on a non-dunder to do so; that, combined >> with the point heading the below paragraph, is also why I'm only going >> after keys() ? not items() or sequences' index() or any of the other >> normally-named-but-important methods. >> >> And given that __iter__() is already recommended to iterate over a >> mapping's keys, it doesn't seem illogical to me to just let it take that >> job officially (and in turn, I guess, to make keys() on stdlib/built-in >> types just return __iter__()). I don't believe there would be another >> expected use for it in a mapping, anyway, correct? (I haven't done the >> research here, but there could be third-party authors who use it to iterate >> over items(), given that ?why does it iterate over just keys? a >> somewhat-recognizable discussion topic on such beginner forums as >> /r/learnpython... if so, this could be a good opportunity to >> force-standardize that behavior as well.) >> >> Does this clarify my intent? ...I don't doubt that there are roadblocks >> I'm still ignoring, though. >> >> Eli >> > > However, Serihy's example of {**[0, 2, 1]} is so-plainly irreconcilable > -- somewhat embarrassing for me to have missed it -- that I'm now > reluctantly no longer in favor. (Well, really, I'm tempted to say *why > not?*, but I do see that it wouldn't be a good thing overall.) > And I still kind of feel that there should be a dunder involved somewhere > in this, but nowhere near strongly enough to dispute that *"[t]he dunder > methods are dunder methods because they are not generally directly useful. > [There doesn't seem to be] a major problem with having the mapping api call > keys() [...]"*, as it's reasonable and rationalizes the current system > well enough. Thank you for bearing with ;) > > > Eli > > On Wed, Sep 12, 2018 at 4:42 PM, Michael Selik wrote: > >> Elias, >> I'm a little confused about what you're suggesting. You want to have a >> Mapping that does not supply a keys method? What use case motivated your >> proposal? >> >> >> On Mon, Sep 10, 2018, 7:04 PM Elias Tarhini wrote: >> >>> This has been bouncing around in my head for a while regarding the >>> requisite keys() method on mappings: >>> >>> How come the ** unpacking operator, a built-in language feature, relies >>> on a non-dunder to operate? >>> >>> To me, I mean to say, requiring that classes implement keys() ? a >>> method whose name is totally undistinguished ? in order to conform to the >>> mapping protocol feels like a design running counter to Python's norm of >>> using dunders for everything "hidden". I am not sure if it feels dirty to >>> anybody else, however. Interestingly, the docs already say >>> >>> that *[f]or mappings, [__iter__()] should iterate over the keys of the >>> container*, but it of course is not enforced in any way at present. >>> >>> So, then ? how about enforcing it? 
Should __iter__(), for the reasons >>> above, replace the current purpose of keys() in mappings? >>> >>> I'm not properly equipped at the moment to mess around with CPython >>> (sorry), but I assume at a minimum this would entail either replacing all >>> instances of PyMapping_Keys() with PyObject_GetIter() or alternatively >>> changing PyMapping_Keys() to call the latter. >>> >>> Does it sound like a reasonable change overall? >>> >>> Eli >>> _______________________________________________ >>> Python-ideas mailing list >>> Python-ideas at python.org >>> https://mail.python.org/mailman/listinfo/python-ideas >>> Code of Conduct: http://python.org/psf/codeofconduct/ >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jfine2358 at gmail.com Thu Sep 13 03:21:18 2018 From: jfine2358 at gmail.com (Jonathan Fine) Date: Thu, 13 Sep 2018 08:21:18 +0100 Subject: [Python-ideas] Improving fn(arg=arg, name=name, wibble=wibble) code In-Reply-To: References: Message-ID: Summary: Michael Selik has produced a nice refactoring of an example. I suggest further refactoring, to create a function decorator that does the job. This might be useful if the example is an instance of a common use pattern. Michael Selik wrote > def open(file, mode='r', buffering=-1, encoding=None, errors=None, newline=None, > closefd=True, opener=None, *, loop=None, executor=None): > return AiofilesContextManager(_open(**locals())) > I normally dislike the use of **locals(), but here it helps make explicit > that the "public" open function passes all arguments through to its helper. Here's the call signature for the built-in open >>> help(open) open(file, mode='r', buffering=-1, encoding=None, errors=None, newline=None, closefd=True, opener=None) And for the new open Same as builtin open, with (keyword-only) loop=None, executor=None added. To explore, let's refactor, as if this were a common use pattern. Your code, Michael, (almost) reduces the definition of the new open function to A signature A function to call on the resulting locals() So how would we refactor this. I suggest (not tested) @wibble(sig) def new_open(kwargs): return AiofilesContextManager(_open(**kwargs)) The sig argument to the decorator wibble could be an instance of https://docs.python.org/3/library/inspect.html#inspect.Signature The semantics of wibble should be such that the above is equivalent to Michael's code, quoted at the top of this post. Finally, we'd like some easy way of combining the signature of (the built-in) open with the additional loop and executor parameters. I see two questions arising. First, can this conveniently implemented using Python as it is? Second, does the decorator wibble (perhaps enhanced) help with common use patterns. I'm grateful to you, Michael, for your nice clear example of refactoring. Given the narrow context of the problem, I don't see how it could be bettered. But if it's a common use pattern, refactoring into a decorator might be good. Would anyone here like to try coding up a proof-of-concept decorator? Or explore code-bases for this and similar use patterns? I'm busy to the end of this month, so have little time myself. 
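As a first, untested cut at such a proof-of-concept (every name below is a placeholder), wibble could be built on inspect.Signature.bind:

import inspect
from functools import wraps

def wibble(sig):
    # The decorated function receives a single dict holding the
    # arguments bound (and default-filled) against the given signature.
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            bound = sig.bind(*args, **kwargs)
            bound.apply_defaults()
            return func(dict(bound.arguments))
        wrapper.__signature__ = sig    # so help()/inspect report sig
        return wrapper
    return decorator

def base(file, mode='r', buffering=-1):
    "Stand-in whose signature we borrow (think: the built-in open)."

sig = inspect.signature(base)

@wibble(sig)
def new_open(kwargs):
    return kwargs    # a real version would forward, e.g. _open(**kwargs)

print(new_open('data.txt', mode='rb'))
# {'file': 'data.txt', 'mode': 'rb', 'buffering': -1}

Combining the borrowed signature with the extra keyword-only loop and executor parameters could perhaps be done with sig.replace(parameters=...), but that is the enhancement left open above.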
-- Jonathan From boxed at killingar.net Thu Sep 13 03:39:01 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Thu, 13 Sep 2018 09:39:01 +0200 Subject: [Python-ideas] Improving fn(arg=arg, name=name, wibble=wibble) code In-Reply-To: References: Message-ID: <3FD652E4-F6CD-4CA7-85FE-BC83E21201F1@killingar.net> > Summary: Michael Selik has produced a nice refactoring of an example. > I suggest further refactoring, to create a function decorator that > does the job. This might be useful if the example is an instance of a > common use pattern. It seems to me this discussion has drifted away from the original discussion toward one where you have a chain of functions with the same or almost the same signature. This is interesting for sure but we shouldn?t forget about the original topic: how can we make it less painful to use keyword arguments? Just my two cents. / Anders From jfine2358 at gmail.com Thu Sep 13 04:07:33 2018 From: jfine2358 at gmail.com (Jonathan Fine) Date: Thu, 13 Sep 2018 09:07:33 +0100 Subject: [Python-ideas] __iter__(), keys(), and the mapping protocol In-Reply-To: References: Message-ID: Someone wrote: Granted, my only strong argument is that the ** unpacking operator depends on this method to do its job, and it's currently alone amongst Python's operators in depending on a non-dunder to do so I like this argument. And I think it's important. Here's some background facts class dict(object) | dict() -> new empty dictionary | dict(mapping) -> new dictionary initialized from a mapping object's | (key, value) pairs | dict(iterable) -> new dictionary initialized as if via: | d = {} | for k, v in iterable: | d[k] = v | dict(**kwargs) -> new dictionary initialized with the name=value pairs | in the keyword argument list. For example: dict(one=1, two=2) >>> list(zip('abc', range(3))) [('a', 0), ('b', 1), ('c', 2)] >>> dict(list(zip('abc', range(3)))) {'b': 1, 'a': 0, 'c': 2} >>> dict(zip('abc', range(3))) {'b': 1, 'a': 0, 'c': 2} >>> dict(**zip('abc', range(3))) TypeError: type object argument after ** must be a mapping, not zip >>> dict(**list(zip('abc', range(3)))) TypeError: type object argument after ** must be a mapping, not list Now for my opinions. (Yours might be different.) First, it is my opinion that it is not reasonable to insist that the argument after ** must be a mapping. All that is required to construct a dictionary is a sequence of (key, value) pairs. The dict(iterable) construction proves that point. Second, relaxing the ** condition is useful. Consider the following. >>> class NS: pass >>> ns = NS() >>> ns.a = 3 >>> ns.b = 5 >>> ns.__dict__ {'b': 5, 'a': 3} >>> def fn(**kwargs): return kwargs >>> fn(**ns) TypeError: fn() argument after ** must be a mapping, not NS >>> fn(**ns.__dict__) {'b': 5, 'a': 3} The Zen of Python says Namespaces are one honking great idea -- let's do more of those! I see many advantages in using a namespace to build up the keyword arguments for a function call. For example, it could do data validation (of both keys/names and values). And once we have the namespace, used for this purpose, I find it natural to call it like so >>> fn(**ns) I don't see any way to do this, other than defining NS.keys and NS.__getitem__. But why should Python itself force me to expose ns.__dict__ in that way. I don't want my users getting a value via ns[key]. By the way, in JavaScript the constructs obj.aaa and obj['aaa'] are always equivalent. POSTSCRIPT: Here are some additional relevant facts. 
>>> fn(**dict(ns)) TypeError: 'NS' object is not iterable >>> def tmp(self): return iter(self.__dict__.items()) >>> NS.__iter__ = tmp >>> fn(**dict(ns)) {'b': 5, 'a': 3} >>> list(ns) [('b', 5), ('a', 3)] I find allowing f(**dict(ns)) but forbidding f(**ns) to be a restriction of functionality removes, rather than adds, values. Perhaps (I've not thought it through), *args and **kwargs should be treated as special contexts. Just as bool(obj) calls obj.__bool__ if available. https://docs.python.org/3.3/reference/datamodel.html#object.__bool__ In other words, have *args call __star__ if available, and **kwargs call __starstar__ if available. But I've not thought this through. -- Jonathan From sammiequan at yandex.com Thu Sep 13 04:36:42 2018 From: sammiequan at yandex.com (Samantha Quan) Date: Thu, 13 Sep 2018 09:36:42 +0100 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause Message-ID: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> An HTML attachment was scrubbed... URL: From j.van.dorp at deonet.nl Thu Sep 13 05:05:53 2018 From: j.van.dorp at deonet.nl (Jacco van Dorp) Date: Thu, 13 Sep 2018 11:05:53 +0200 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> Message-ID: -1. The concept of ugly code is everywhere on the internet. Everyone on this planet has either written ugly code or no code at all. Some have also written beautiful code. People aren't code, and code isn't people. I can't see this becoming a problem until we have an AI that can feel insulted because someone tells it it's code looks ugly, and that's waaaaay off. Don't conflate code with people, please. At the risk of politicizing (which you also took), I'd like to add that diversity and inclusivity of thought is more important than that of whatever arbitrary beauty standard. The Zen is clear and not about people. Don't try to make it to be so. Op do 13 sep. 2018 om 10:38 schreef Samantha Quan : > First, I'd like to express how grateful I am to see more and more > technical communities embrace diversity and inclusivity, particularly big > tech communities like Python, Redis, and Django. > > In the spirit of the big recent terminology change, I propose retiring or > rewording the "Beautiful is better than ugly" Zen clause for perpetuating > beauty bias and containing lookist slur. I realize that Zen is old, but you > can't argue that the word "ugly" is harmless, now that society condemns > body shaming, and instead promotes body acceptance and self-love. One > alternative to that clause I could think of is "Clean is better than > dirty", but please do speak up if you have better ideas. > > I ask you to give this change serious consideration, even if it seems > over-the-top to you now, because times change, and this will be of great > help in the battle for the more tolerant and less judgemental society. > > I understand that this topic may seem controversial to some, so please be > open-minded and take extra care to respect the PSF Code Of Conduct when > replying. > > Thank you! 
> > - Sam > > Some references: > > https://www.urbandictionary.com/define.php?term=Lookism > https://en.m.wikipedia.org/wiki/Lookism > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From phd at phdru.name Thu Sep 13 05:53:11 2018 From: phd at phdru.name (Oleg Broytman) Date: Thu, 13 Sep 2018 11:53:11 +0200 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> Message-ID: <20180913095311.z66wn34vbv37s2zq@phdru.name> On Thu, Sep 13, 2018 at 09:36:42AM +0100, Samantha Quan wrote: > First, I'd like to express how grateful I am to see more and more > technical communities embrace diversity and inclusivity, particularly big > tech communities like Python, Redis, and Django. > In the spirit of the big recent terminology change, I propose retiring or > rewording the "Beautiful is better than ugly" Zen clause for perpetuating > beauty bias and containing lookist slur. I realize that Zen is old, but > you can't argue that the word "ugly" is harmless, now that society > condemns body shaming, and instead promotes body acceptance and self-love. > One alternative to that clause I could think of is "Clean is better than > dirty", but please do speak up if you have better ideas. > I ask you to give this change serious consideration, even if it seems > over-the-top to you now, because times change, and this will be of great > help in the battle for the more tolerant and less judgemental society. > I understand that this topic may seem controversial to some, so please be > open-minded and take extra care to respect the PSF Code Of Conduct when > replying. > Thank you! > ?? - Sam > Some references: > https://www.urbandictionary.com/define.php?term=Lookism > https://en.m.wikipedia.org/wiki/Lookism Nice trolling, go on! :-D PS. But please can you configure your mail to send text, not HTML? Oleg. -- Oleg Broytman https://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. From g.rodola at gmail.com Thu Sep 13 05:55:40 2018 From: g.rodola at gmail.com (Giampaolo Rodola') Date: Thu, 13 Sep 2018 11:55:40 +0200 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> Message-ID: On Thu, Sep 13, 2018 at 10:38 AM Samantha Quan wrote: > > First, I'd like to express how grateful I am to see more and more technical communities embrace diversity and inclusivity, particularly big tech communities like Python, Redis, and Django. > > In the spirit of the big recent terminology change, I propose retiring or rewording the "Beautiful is better than ugly" Zen clause for perpetuating beauty bias and containing lookist slur. I realize that Zen is old, but you can't argue that the word "ugly" is harmless, now that society condemns body shaming, and instead promotes body acceptance and self-love. One alternative to that clause I could think of is "Clean is better than dirty", but please do speak up if you have better ideas. 
> > I ask you to give this change serious consideration, even if it seems over-the-top to you now, because times change, and this will be of great help in the battle for the more tolerant and less judgemental society. > > I understand that this topic may seem controversial to some, so please be open-minded and take extra care to respect the PSF Code Of Conduct when replying. > > Thank you! > > - Sam > > Some references: > > https://www.urbandictionary.com/define.php?term=Lookism > https://en.m.wikipedia.org/wiki/Lookism > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ This is simply ridiculous. I'm not sure if this is political correctness pushed to its limits or just trolling. -- Giampaolo - http://grodola.blogspot.com From phd at phdru.name Thu Sep 13 05:51:58 2018 From: phd at phdru.name (Oleg Broytman) Date: Thu, 13 Sep 2018 11:51:58 +0200 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> Message-ID: <20180913095158.pcwoklhrkub7jkvg@phdru.name> On Thu, Sep 13, 2018 at 11:05:53AM +0200, Jacco van Dorp wrote: > -1. The concept of ugly code is everywhere on the internet. Everyone on > this planet has either written ugly code or no code at all. Some have also > written beautiful code. > > People aren't code, and code isn't people. I can't see this becoming a > problem until we have an AI that can feel insulted because someone tells it > it's code looks ugly, and that's waaaaay off. Don't conflate code with > people, please. > > At the risk of politicizing (which you also took), I'd like to add that > diversity and inclusivity of thought is more important than that of > whatever arbitrary beauty standard. > > The Zen is clear and not about people. Don't try to make it to be so. "Master/slave" technical term is also not about people but some think the term must be changed: https://www.theregister.co.uk/2018/09/11/python_purges_master_and_slave_in_political_pogrom/ I'm against the move and like the PRs to be reverted. > Op do 13 sep. 2018 om 10:38 schreef Samantha Quan : > > > First, I'd like to express how grateful I am to see more and more > > technical communities embrace diversity and inclusivity, particularly big > > tech communities like Python, Redis, and Django. > > > > In the spirit of the big recent terminology change, I propose retiring or > > rewording the "Beautiful is better than ugly" Zen clause for perpetuating > > beauty bias and containing lookist slur. I realize that Zen is old, but you > > can't argue that the word "ugly" is harmless, now that society condemns > > body shaming, and instead promotes body acceptance and self-love. One > > alternative to that clause I could think of is "Clean is better than > > dirty", but please do speak up if you have better ideas. > > > > I ask you to give this change serious consideration, even if it seems > > over-the-top to you now, because times change, and this will be of great > > help in the battle for the more tolerant and less judgemental society. > > > > I understand that this topic may seem controversial to some, so please be > > open-minded and take extra care to respect the PSF Code Of Conduct when > > replying. > > > > Thank you! 
> > > > - Sam > > > > Some references: > > > > https://www.urbandictionary.com/define.php?term=Lookism > > https://en.m.wikipedia.org/wiki/Lookism Oleg. -- Oleg Broytman https://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. From solipsis at pitrou.net Thu Sep 13 06:03:13 2018 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 13 Sep 2018 12:03:13 +0200 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> Message-ID: <20180913120313.167c74a3@fsol> On Thu, 13 Sep 2018 11:55:40 +0200 "Giampaolo Rodola'" wrote: > > This is simply ridiculous. I'm not sure if this is political > correctness pushed to its limits or just trolling. Indeed she might be trolling. Though the fact we're hesitating on the diagnosis shows how far reality has come on the matter... Regards Antoine. From stephanh42 at gmail.com Thu Sep 13 06:05:15 2018 From: stephanh42 at gmail.com (Stephan Houben) Date: Thu, 13 Sep 2018 12:05:15 +0200 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: <20180913120313.167c74a3@fsol> References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> <20180913120313.167c74a3@fsol> Message-ID: Op do 13 sep. 2018 12:03 schreef Antoine Pitrou : > On Thu, 13 Sep 2018 11:55:40 +0200 > "Giampaolo Rodola'" > wrote: > > > > This is simply ridiculous. I'm not sure if this is political > > correctness pushed to its limits or just trolling. > > Indeed she might be trolling. Though the fact we're hesitating on the > diagnosis shows how far reality has come on the matter... > Poe's Law? https://en.m.wikipedia.org/wiki/Poe%27s_law Stephan > > Regards > > Antoine. > > > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at brice.xyz Thu Sep 13 06:12:06 2018 From: contact at brice.xyz (Brice Parent) Date: Thu, 13 Sep 2018 12:12:06 +0200 Subject: [Python-ideas] __iter__(), keys(), and the mapping protocol In-Reply-To: References: Message-ID: <00702965-e891-ffde-6562-4239f9d07a3d@brice.xyz> Le 13/09/2018 ? 10:07, Jonathan Fine a ?crit?: > Now for my opinions. (Yours might be different.) > > First, it is my opinion that it is not reasonable to insist that the > argument after ** must be a mapping. All that is required to construct > a dictionary is a sequence of (key, value) pairs. The dict(iterable) > construction proves that point. > > Second, relaxing the ** condition is useful. Consider the following. > > >>> class NS: pass > >>> ns = NS() > > >>> ns.a = 3 > >>> ns.b = 5 > > >>> ns.__dict__ > {'b': 5, 'a': 3} > > >>> def fn(**kwargs): return kwargs > > >>> fn(**ns) > TypeError: fn() argument after ** must be a mapping, not NS > > >>> fn(**ns.__dict__) > {'b': 5, 'a': 3} I don't know about namespaces. It's probably been a concept I've often used without putting a name on it. But for dataclasses, I'd find it quite useful to have {**my_data_class} be a shortcut to {**dataclasses.asdict(my_data_class)} It's most likely what we'd want to achieve by unpacking a dataclass (or at least, to my opinion). 
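For what it's worth, under the current rules an object only has to supply
keys() and __getitem__ for ** unpacking to accept it, so a small mixin
already gives a dataclass something close to that shortcut today. A rough
sketch (the name Unpackable is made up, and unlike dataclasses.asdict()
it does not recurse into nested dataclasses):

    from dataclasses import dataclass, fields

    class Unpackable:
        """Mixin that lets ** unpacking treat a dataclass as a mapping."""
        def keys(self):
            # ** unpacking asks the object for keys() first ...
            return [f.name for f in fields(self)]
        def __getitem__(self, key):
            # ... and then looks each key up with [].
            return getattr(self, key)

    @dataclass
    class Point(Unpackable):
        x: int
        y: int

    def fn(**kwargs):
        return kwargs

    p = Point(1, 2)
    print({**p})    # {'x': 1, 'y': 2}
    print(fn(**p))  # {'x': 1, 'y': 2}

Building the behaviour into ** itself would of course save every dataclass
author from having to opt in.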
I'm not sure about the internals and the weight of such a feature, but I guess a toy implementation would just be, whenever we should raise a TypeError because the variable is not a mapping, to check whether it's a dataclass instance, and if so, call asdict on it, and return its result. I'm not sure I'm not off-topic though... From solipsis at pitrou.net Thu Sep 13 06:07:40 2018 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 13 Sep 2018 12:07:40 +0200 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> Message-ID: <20180913120740.2b742dd8@fsol> Hi Samantha, You ask others to be open-minded, but fail to show such an attitude yourself. Beauty is a very old and important concept in the history of human societies, present in most or all of them, and has been the subject of a wide range of interpretations, studies and theories. And, as a French person, I have to notice this is yet another attempt to impose reactionary, intolerant American politics on the rest of the world (or of the Python community). Regards, Antoine. On Thu, 13 Sep 2018 09:36:42 +0100 Samantha Quan wrote: > First, I'd like to express how grateful I am to see more and more technical communities embrace diversity and inclusivity, particularly big tech communities like Python, Redis, and Django. > > In the spirit of the big recent terminology change, I propose retiring or rewording the "Beautiful is better than ugly" Zen clause for perpetuating beauty bias and containing lookist slur. I realize that Zen is old, but you can't argue that the word "ugly" is harmless, now that society condemns body shaming, and instead promotes body acceptance and self-love. One alternative to that clause I could think of is "Clean is better than dirty", but please do speak up if you have better ideas. > > I ask you to give this change serious consideration, even if it seems over-the-top to you now, because times change, and this will be of great help in the battle for the more tolerant and less judgemental society. > > I understand that this topic may seem controversial to some, so please be open-minded and take extra care to respect the PSF Code Of Conduct when replying. > > Thank you! > > ? - Sam > > Some references: > > https://www.urbandictionary.com/define.php?term=Lookism > https://en.m.wikipedia.org/wiki/Lookism > From mertz at gnosis.cx Thu Sep 13 06:17:22 2018 From: mertz at gnosis.cx (David Mertz) Date: Thu, 13 Sep 2018 06:17:22 -0400 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> Message-ID: Some books we should burn include: Title Beautiful Evidence Author Edward R. Tufte Edition illustrated Publisher Graphics Press, 2006 ISBN 1930824165, 9781930824164 Length 213 pages Title Beautiful Code: Leading Programmers Explain How They Think *Theory in practice* Editors Andy Oram , Greg Wilson Edition illustrated Publisher O'Reilly Media, 2007 ISBN 0596510047, 9780596510046 Length 593 pages Subjects Computers ? Programming ? General Title Beautiful Visualization: Looking at Data Through the Eyes of Experts *Theory in practice* Editors Julia S. Steele , Noah P. N. 
Iliinsky Publisher O'Reilly, 2010 ISBN 1449379885, 9781449379889 Length 397 pages Thu, Sep 13, 2018, 4:39 AM Samantha Quan wrote: > > First, I'd like to express how grateful I am to see more and more > technical communities embrace diversity and inclusivity, particularly big > tech communities like Python, Redis, and Django. > > In the spirit of the big recent terminology change, I propose retiring or > rewording the "Beautiful is better than ugly" Zen clause for perpetuating > beauty bias and containing lookist slur. I realize that Zen is old, but you > can't argue that the word "ugly" is harmless, now that society condemns > body shaming, and instead promotes body acceptance and self-love. One > alternative to that clause I could think of is "Clean is better than > dirty", but please do speak up if you have better ideas. > > I ask you to give this change serious consideration, even if it seems > over-the-top to you now, because times change, and this will be of great > help in the battle for the more tolerant and less judgemental society. > > I understand that this topic may seem controversial to some, so please be > open-minded and take extra care to respect the PSF Code Of Conduct when > replying. > > Thank you! > > - Sam > > Some references: > > https://www.urbandictionary.com/define.php?term=Lookism > https://en.m.wikipedia.org/wiki/Lookism > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From j.van.dorp at deonet.nl Thu Sep 13 06:22:14 2018 From: j.van.dorp at deonet.nl (Jacco van Dorp) Date: Thu, 13 Sep 2018 12:22:14 +0200 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> Message-ID: > I'm pleasantly surprised by the general response here. I was taking it > seriously because, well, that's how far it's going everywhere. -------------- next part -------------- An HTML attachment was scrubbed... URL: From phd at phdru.name Thu Sep 13 06:32:43 2018 From: phd at phdru.name (Oleg Broytman) Date: Thu, 13 Sep 2018 12:32:43 +0200 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> Message-ID: <20180913103243.oegwnauiscf2ryfg@phdru.name> On Thu, Sep 13, 2018 at 12:22:14PM +0200, Jacco van Dorp wrote: > I'm pleasantly surprised by the general response here. I was taking it > seriously because, well, that's how far it's going everywhere. 1. I couldn't believe it could be serious. 2. I was sure it was trolling on the trail of https://bugs.python.org/issue34605 3. The name of a Canadian actress combined with Russian free email service made the suspicion more obvious. Oleg. -- Oleg Broytman https://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. 
From jfine2358 at gmail.com  Thu Sep 13 07:03:58 2018
From: jfine2358 at gmail.com (Jonathan Fine)
Date: Thu, 13 Sep 2018 12:03:58 +0100
Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause
In-Reply-To: <20180913103243.oegwnauiscf2ryfg@phdru.name>
References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> <20180913103243.oegwnauiscf2ryfg@phdru.name>
Message-ID: 

The first line from "import this" is

    The Zen of Python, by Tim Peters

I suggest we put this discussion on hold, until Tim Peters (copied)
has had a chance to respond.

-- Jonathan

From solipsis at pitrou.net  Thu Sep 13 07:16:00 2018
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Thu, 13 Sep 2018 13:16:00 +0200
Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause
References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> <20180913103243.oegwnauiscf2ryfg@phdru.name>
Message-ID: <20180913131600.1d89182e@fsol>

On Thu, 13 Sep 2018 12:32:43 +0200
Oleg Broytman wrote:
> On Thu, Sep 13, 2018 at 12:22:14PM +0200, Jacco van Dorp wrote:
> > I'm pleasantly surprised by the general response here. I was taking it
> > seriously because, well, that's how far it's going everywhere.
>
> 1. I couldn't believe it could be serious.
>
> 2. I was sure it was trolling on the trail of
> https://bugs.python.org/issue34605
>
> 3. The name of a Canadian actress combined with Russian free email
> service made the suspicion more obvious.

Good point.  I don't know much about actresses and gmane hides the
provider part of e-mail addresses.

Regards

Antoine.

From jfine2358 at gmail.com  Thu Sep 13 07:18:07 2018
From: jfine2358 at gmail.com (Jonathan Fine)
Date: Thu, 13 Sep 2018 12:18:07 +0100
Subject: [Python-ideas] __iter__(), keys(), and the mapping protocol
In-Reply-To: <00702965-e891-ffde-6562-4239f9d07a3d@brice.xyz>
References: <00702965-e891-ffde-6562-4239f9d07a3d@brice.xyz>
Message-ID: 

Hi Brice

Good comment. I liked it. Not badly off-topic I think, because it
looks to be an interesting work-around for the original problem.

You wrote
> But for dataclasses, I'd find it quite useful to have
>     {**my_data_class}
> be a shortcut to
>     {**dataclasses.asdict(my_data_class)}

How about writing

    fn( ** stst( data_class_obj ) )

where stst() does whatever it is you consider to be the right thing.
My suggestion would be something like

    def stst(obj):
        method = getattr(obj, '__stst', None)
        if method:
            return method()
        else:
            return obj

And then it's your responsibility to add an '__stst' attribute to
your data classes.

-- Jonathan

From jmcs at jsantos.eu  Thu Sep 13 08:16:44 2018
From: jmcs at jsantos.eu (=?UTF-8?B?Sm/Do28gU2FudG9z?=)
Date: Thu, 13 Sep 2018 14:16:44 +0200
Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause
In-Reply-To: <20180913131600.1d89182e@fsol>
References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> <20180913103243.oegwnauiscf2ryfg@phdru.name> <20180913131600.1d89182e@fsol>
Message-ID: 

One important difference between master/slave and beautiful/ugly is that
the first pair are concrete concepts that typically apply to people, and
the second are abstract concepts that have always applied to objects and
abstract ideas as well.

On Thu, 13 Sep 2018 at 13:16, Antoine Pitrou wrote:
> On Thu, 13 Sep 2018 12:32:43 +0200
> Oleg Broytman wrote:
> > On Thu, Sep 13, 2018 at 12:22:14PM +0200, Jacco van Dorp < j.van.dorp at deonet.nl> wrote:
> > > I'm pleasantly surprised by the general response here.
I was taking it > > > seriously because, well, that's how far it's going everywhere. > > > > 1. I couldn't believe it could be serious. > > > > 2. I was sure it was trolling on the trail of > > https://bugs.python.org/issue34605 > > > > 3. The name of a Canadian actress combined with Russian free email > > service made the suspicion more obvious. > > Good point. I don't know much about actresses and gmane hides the > provider part of e-mail addresses. > > Regards > > Antoine. > > > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosuav at gmail.com Thu Sep 13 08:21:45 2018 From: rosuav at gmail.com (Chris Angelico) Date: Thu, 13 Sep 2018 22:21:45 +1000 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> <20180913103243.oegwnauiscf2ryfg@phdru.name> <20180913131600.1d89182e@fsol> Message-ID: On Thu, Sep 13, 2018 at 10:16 PM, Jo?o Santos wrote: > One important difference between master/slave and beautiful/ugly is that the > first pair are concrete concepts that typically applies to people, and the > second are abstract concepts that always applied also to objects and > abstract concepts. You may or may not be right about "slave", but "master" is frequently applied to objects - the document from which other copies are taken, the template from which a cast is formed, etc. Even when applied to people, it doesn't have to be paired with slavery - a "master" of a skill is, well, someone who has mastered it. Excising the word master from all documentation is likely impossible, and pointless. And yes, I'm probably going to be slaughtered for saying this. But I grew up around photocopiers, so to me, the "master" was the good quality print-out that we stuck into the top of the copier, as opposed to the "copies" that came out the front of it. Not everyone assumes the worst about words. ChrisA From j.van.dorp at deonet.nl Thu Sep 13 08:29:14 2018 From: j.van.dorp at deonet.nl (Jacco van Dorp) Date: Thu, 13 Sep 2018 14:29:14 +0200 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> <20180913103243.oegwnauiscf2ryfg@phdru.name> <20180913131600.1d89182e@fsol> Message-ID: Op do 13 sep. 2018 om 14:22 schreef Chris Angelico : > On Thu, Sep 13, 2018 at 10:16 PM, Jo?o Santos wrote: > > One important difference between master/slave and beautiful/ugly is that > the > > first pair are concrete concepts that typically applies to people, and > the > > second are abstract concepts that always applied also to objects and > > abstract concepts. > > You may or may not be right about "slave", but "master" is frequently > applied to objects - the document from which other copies are taken, > the template from which a cast is formed, etc. Even when applied to > people, it doesn't have to be paired with slavery - a "master" of a > skill is, well, someone who has mastered it. Excising the word master > from all documentation is likely impossible, and pointless. > > And yes, I'm probably going to be slaughtered for saying this. 
But I > grew up around photocopiers, so to me, the "master" was the good > quality print-out that we stuck into the top of the copier, as opposed > to the "copies" that came out the front of it. Not everyone assumes > the worst about words. > > ChrisA > Nah, you're pretty right. Removing master/slave is almost as stupid as ugly/beautiful. You can have master and slave devices - for example, if I have a PC that tells a robot what to do, my PC is the master and the robot the slave. Nothing wrong there either. It's just what the words mean. People shouldn't try and take personal offense to things that haven't been applied to them personally, or, even worse, complain about a term applied to anything/anyone else in a way they perceive to be offensive. Perception is different between people. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mal at egenix.com Thu Sep 13 09:14:10 2018 From: mal at egenix.com (M.-A. Lemburg) Date: Thu, 13 Sep 2018 22:14:10 +0900 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> Message-ID: I just want to add my -1 to the list of others who have already expressed similar opinions. Please keep the meaning of language associated with the respective context. Language is always open to interpretation. It doesn't imply that one particular interpretation is more right or more wrong, nor does it imply that a meaning in one context can be equally applied to other contexts. The Zen of Python clearly applies to programming and language design. Beauty in design is very similar to beauty in flora. For me, it refers to a general feeling of consistency, pureness and standing out on its own. It's abstract and doesn't have anything to do with humans. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Experts >>> Python Projects, Coaching and Consulting ... http://www.egenix.com/ >>> Python Database Interfaces ... http://products.egenix.com/ >>> Plone/Zope Database Interfaces ... http://zope.egenix.com/ ________________________________________________________________________ ::: We implement business ideas - efficiently in both time and costs ::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ http://www.malemburg.com/ From jmcs at jsantos.eu Thu Sep 13 09:15:27 2018 From: jmcs at jsantos.eu (=?UTF-8?B?Sm/Do28gU2FudG9z?=) Date: Thu, 13 Sep 2018 15:15:27 +0200 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> <20180913103243.oegwnauiscf2ryfg@phdru.name> <20180913131600.1d89182e@fsol> Message-ID: That's why I focused on pairs. I understand why some people might feel offended by the term slave (and master in opposition to it), despite personally feeling the concepts are detached. I never saw anyone oppose using the terms master/copy. Trying to tie something as abstract and general as ugly/beautiful to body shaming is a considerably bigger stretch. 
On Thu, 13 Sep 2018 at 14:22, Chris Angelico wrote: > On Thu, Sep 13, 2018 at 10:16 PM, Jo?o Santos wrote: > > One important difference between master/slave and beautiful/ugly is that > the > > first pair are concrete concepts that typically applies to people, and > the > > second are abstract concepts that always applied also to objects and > > abstract concepts. > > You may or may not be right about "slave", but "master" is frequently > applied to objects - the document from which other copies are taken, > the template from which a cast is formed, etc. Even when applied to > people, it doesn't have to be paired with slavery - a "master" of a > skill is, well, someone who has mastered it. Excising the word master > from all documentation is likely impossible, and pointless. > > And yes, I'm probably going to be slaughtered for saying this. But I > grew up around photocopiers, so to me, the "master" was the good > quality print-out that we stuck into the top of the copier, as opposed > to the "copies" that came out the front of it. Not everyone assumes > the worst about words. > > ChrisA > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cspealma at redhat.com Thu Sep 13 09:16:17 2018 From: cspealma at redhat.com (Calvin Spealman) Date: Thu, 13 Sep 2018 09:16:17 -0400 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> Message-ID: Samantha, I came into this thread reading the subject and thinking "over my dead body!" until I read your well-thought reasoning and gave even a little bit of thought to the idea. You're absolutely right and while I think its very unlikely to get enough support I do think it is a very good suggestion, totally reasonable, and that we *should* change it. I ask everyone on this thread being rude to please step back and try to look at the issue without your bias and knee-jerk reactions. Even if you can't change your minds, at least be more civil about it. On Thu, Sep 13, 2018 at 4:38 AM Samantha Quan wrote: > First, I'd like to express how grateful I am to see more and more > technical communities embrace diversity and inclusivity, particularly big > tech communities like Python, Redis, and Django. > > In the spirit of the big recent terminology change, I propose retiring or > rewording the "Beautiful is better than ugly" Zen clause for perpetuating > beauty bias and containing lookist slur. I realize that Zen is old, but you > can't argue that the word "ugly" is harmless, now that society condemns > body shaming, and instead promotes body acceptance and self-love. One > alternative to that clause I could think of is "Clean is better than > dirty", but please do speak up if you have better ideas. > > I ask you to give this change serious consideration, even if it seems > over-the-top to you now, because times change, and this will be of great > help in the battle for the more tolerant and less judgemental society. > > I understand that this topic may seem controversial to some, so please be > open-minded and take extra care to respect the PSF Code Of Conduct when > replying. > > Thank you! 
> > - Sam > > Some references: > > https://www.urbandictionary.com/define.php?term=Lookism > https://en.m.wikipedia.org/wiki/Lookism > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From g.rodola at gmail.com Thu Sep 13 09:43:30 2018 From: g.rodola at gmail.com (Giampaolo Rodola') Date: Thu, 13 Sep 2018 15:43:30 +0200 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: <20180913131600.1d89182e@fsol> References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> <20180913103243.oegwnauiscf2ryfg@phdru.name> <20180913131600.1d89182e@fsol> Message-ID: On Thu, Sep 13, 2018 at 12:51 PM Oleg Broytman wrote: > 2. I was sure it was trolling on the trail of > https://bugs.python.org/issue34605 Wow! I find it a bit excessive that #34605 was not discussed first and got checked in so quickly. I hope there won't be similar initiatives about terms such as killing, abortion, daemon, termination, disabled, etc. If somebody gets offended about these terms being used in computer science it's entirely their problem. Trying to make such individuals happy is useless and a waste of python-dev time. -- Giampaolo - http://grodola.blogspot.com -- Giampaolo - http://grodola.blogspot.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Thu Sep 13 09:52:30 2018 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 13 Sep 2018 15:52:30 +0200 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> Message-ID: <20180913155230.4ac0d3fd@fsol> On Thu, 13 Sep 2018 09:16:17 -0400 Calvin Spealman wrote: > > I came into this thread reading the subject and thinking "over my dead > body!" until I read your well-thought reasoning and gave even a little bit > of thought to the idea. > > You're absolutely right and while I think its very unlikely to get enough > support I do think it is a very good suggestion, totally reasonable, and > that we *should* change it. > > I ask everyone on this thread being rude to please step back and try to > look at the issue without your bias and knee-jerk reactions. If you want to call other people rude, at least show a bit of courage and spell their names clearly instead of casting mass ad hominem attacks. And of course you are not free of bias yourself, and it seems you have knee-jerk reactions of your own, so perhaps you could have avoided posting this entirely. Attack the arguments, not the people. Regards Antoine. From rhodri at kynesim.co.uk Thu Sep 13 10:48:17 2018 From: rhodri at kynesim.co.uk (Rhodri James) Date: Thu, 13 Sep 2018 15:48:17 +0100 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> Message-ID: <738b924c-8711-ab90-245c-a58e6724c908@kynesim.co.uk> On 13/09/18 14:16, Calvin Spealman wrote: > Samantha, > > I came into this thread reading the subject and thinking "over my dead > body!" until I read your well-thought reasoning and gave even a little bit > of thought to the idea. 
> > You're absolutely right and while I think its very unlikely to get enough > support I do think it is a very good suggestion, totally reasonable, and > that we *should* change it. > > I ask everyone on this thread being rude to please step back and try to > look at the issue without your bias and knee-jerk reactions. Even if you > can't change your minds, at least be more civil about it. I couldn't disagree more, and I say that as a card-carrying liberal. First, did you check out Oleg's post about the likelihood that this is a troll? More importantly, this whole idea of banning and/or changing terminology is psychologically and sociologically wrong-headed. The moment you say "You may not use that word" you create a taboo, and give the word a power that it did not have before. It actually becomes more destructive when it is (inevitably) wheeled out, not less. You may claim that it stops everyday usage of the word, and to an extent that's true, but if people want to use the concept as an insult they will just load that intent onto some other previously innocent word. I got to watch this happen when I was growing up. My father was a Disablement Resettlement Officer, which means he found jobs for people with a wide variety of disabilities. I watched as the words that could be used for disabled people changed as the current word was deemed insulting, and even as a youngster I was boggled that no one seemed to notice or care that exactly the same thing happened every time. For a brief moment the new terminology would be all novel and different (and sometimes laughable), but after a short while all the connotations of the previous term would catch up with the new term and bring some new friends they had made on the way (see "laughable" above). So no, I'm not changing my mind. The suggestion to change is the knee-jerk reaction, and we shouldn't fall for it. > On Thu, Sep 13, 2018 at 4:38 AM Samantha Quan wrote: > >> First, I'd like to express how grateful I am to see more and more >> technical communities embrace diversity and inclusivity, particularly big >> tech communities like Python, Redis, and Django. >> >> In the spirit of the big recent terminology change, I propose retiring or >> rewording the "Beautiful is better than ugly" Zen clause for perpetuating >> beauty bias and containing lookist slur. I realize that Zen is old, but you >> can't argue that the word "ugly" is harmless, now that society condemns >> body shaming, and instead promotes body acceptance and self-love. One >> alternative to that clause I could think of is "Clean is better than >> dirty", but please do speak up if you have better ideas. >> >> I ask you to give this change serious consideration, even if it seems >> over-the-top to you now, because times change, and this will be of great >> help in the battle for the more tolerant and less judgemental society. >> >> I understand that this topic may seem controversial to some, so please be >> open-minded and take extra care to respect the PSF Code Of Conduct when >> replying. >> >> Thank you! 
>> >> - Sam >> >> Some references: >> >> https://www.urbandictionary.com/define.php?term=Lookism >> https://en.m.wikipedia.org/wiki/Lookism >> _______________________________________________ >> Python-ideas mailing list >> Python-ideas at python.org >> https://mail.python.org/mailman/listinfo/python-ideas >> Code of Conduct: http://python.org/psf/codeofconduct/ >> > > > > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -- Rhodri James *-* Kynesim Ltd From mike at selik.org Thu Sep 13 11:12:53 2018 From: mike at selik.org (Michael Selik) Date: Thu, 13 Sep 2018 08:12:53 -0700 Subject: [Python-ideas] Improving fn(arg=arg, name=name, wibble=wibble) code In-Reply-To: <3FD652E4-F6CD-4CA7-85FE-BC83E21201F1@killingar.net> References: <3FD652E4-F6CD-4CA7-85FE-BC83E21201F1@killingar.net> Message-ID: On Thu, Sep 13, 2018, 12:39 AM Anders Hovm?ller wrote: > It seems to me this discussion has drifted away from the original > discussion toward one where you have a chain of functions with the same or > almost the same signature. This is interesting for sure but we shouldn?t > forget about the original topic: how can we make it less painful to use > keyword arguments? > Using keyword arguments is not painful. It's ugly in some unusual cases, such as creating helper functions with nearly the same signature. I try to avoid that situation in my own code. It sometimes requires a significant design change, but I'm usually pleased with the results. If you don't have the inclination to reconsider the design, it could be nice to create a standard code pattern for these pass-through functions to make them more beautiful. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tim.peters at gmail.com Thu Sep 13 11:52:21 2018 From: tim.peters at gmail.com (Tim Peters) Date: Thu, 13 Sep 2018 10:52:21 -0500 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> <20180913103243.oegwnauiscf2ryfg@phdru.name> Message-ID: [Jonathan Fine ] > The first line from "import this" is > > The Zen of Python, by Tim Peters > > I suggest we put this discussion on hold, until Tim Peters (copied) > has had a chance to respond. > > Don't look at me - I was merely channeling Guido ;-) That said, "beautiful v. ugly" in this context has nothing to do with human appearance. It's in the same general sense as in other technical fields: there's beautiful & ugly physics, beautiful & ugly mathematics, beautiful & ugly computer code. And not all people agree on which is which, and that's fine. Whatever _your_ aesthetic standards, you almost certainly prefer what you perceive to be beautiful than what you perceive to be ugly. It's as neutral, to me, as "good is better than evil". So I oppose changing it. If I were to change anything, I'd drop the reference to "Zen". That wasn't part of the original, and was added by someone else. If, e.g., a Zen Buddhist objected that this use trivializes their beliefs, I'd have real sympathy with _that_. But I'd be greatly surprised if a Zen Buddhist exists who objected to wordplay ;-) -------------- next part -------------- An HTML attachment was scrubbed... URL: From mehaase at gmail.com Thu Sep 13 12:13:58 2018 From: mehaase at gmail.com (Mark E. 
Haase) Date: Thu, 13 Sep 2018 12:13:58 -0400 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: <738b924c-8711-ab90-245c-a58e6724c908@kynesim.co.uk> References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> <738b924c-8711-ab90-245c-a58e6724c908@kynesim.co.uk> Message-ID: On Thu, Sep 13, 2018 at 10:49 AM Rhodri James wrote: More importantly, this whole idea of banning and/or changing terminology is > psychologically and sociologically wrong-headed. The moment you say "You > may not use that word" you create a taboo, and give the word a power that > it did not have before. Samantha posted this as a *proposal* to python-*ideas*, the mailing list where we purportedly discuss... umm... ideas. Samantha has not banned any words from Python, so let's tone down the hyperbole. These responses that assume Samantha is a troll are based on... what? Other posters on this list use Yandex e-mails, and nobody called those people trolls. And there are a lot of disagreements about ideas, and most of those people don't get called trolls, either. The Python CoC calls for *respect*, and I posit that the majority reaction to Samantha's first post has been disrespectful. Engage the post on the ideas?or ignore it altogether?but please don't automatically label newcomers with controversial ideas as trolls. Let's assume her proposal was made in good faith. -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Thu Sep 13 12:44:54 2018 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 13 Sep 2018 17:44:54 +0100 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> <738b924c-8711-ab90-245c-a58e6724c908@kynesim.co.uk> Message-ID: On Thu, 13 Sep 2018 at 17:15, Mark E. Haase wrote: > Let's assume her proposal was made in good faith. Certainly. I'm opposed to any proposal to change long-established and common usage wording on the basis that it has the *potential* to cause offense. If anyone is *actually* offended by the wording, let them speak up, and explain why they find it offensive. Otherwise, I'd prefer to assume that people are sensible, and have a certain level of willingness to take others' words in good faith, rather than assuming offense where none is intended. It would be easy for me to claim that the culture of assuming offense where none was intended is itself a divisive and corrosive factor in society at the moment. But if I did so, that in itself would be making unfair assumptions of the intention of people making proposals like Samantha's, so I won't - I'll merely say that I'd like any proposal such as this to be backed by specific evidence of real-world cases that demonstrate that the change is needed, exactly the same criteria as we would use for a proposal for a technical change[1]. For what it's worth, I'd also have preferred it if the recent change to eliminate the (pretty standard) master/slave terminology from the documentation had been subject to the same requirement for evidence of need. Words have multiple meanings. Assuming that a word used in one context automatically brings along context and connotations from a totally unrelated area seems silly to me, to be honest. Language isn't that black and white[2]. Paul [1] I appreciate that questions of what makes good, or even acceptable, prose are very subjective. So concrete evidence is harder to produce. 
But nevertheless, at least an honest attempt to produce *something* would be better than simple unsubstantiated statements like "you can't argue that the word "ugly" is harmless" (yes I can - and I will, if you insist), or references like "In the spirit of the big recent terminology change" to other controversial changes as if they offered unqualified justification for more of the same. [2] There's an example - in case anyone thought otherwise, the phrase "black and white" referred to contrast between opposites, and not racial stereotyping, or indeed any reference to people as opposed to abstract concepts... From jfine2358 at gmail.com Thu Sep 13 13:19:05 2018 From: jfine2358 at gmail.com (Jonathan Fine) Date: Thu, 13 Sep 2018 18:19:05 +0100 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> <738b924c-8711-ab90-245c-a58e6724c908@kynesim.co.uk> Message-ID: The line in question is from Tim Peters' "The Zen of Python" Beautiful is better than . where at present is "ugly". My opinion, based on my present experience and knowledge, is that it is reasonable to consider asking Tim to change . Also, I suggest that in the context of Python and its Zen Beautiful is better than cryptic. may work better. I don't wish to change Beautiful is better than but I think in the Python context "beautiful" might have a better opposing idea. Hence my suggestion of "cryptic". By the way, https://www.thesaurus.com/browse/beautiful gives 20 'equal first' antonyms for beautiful, starting with awkward and ending with unrefined. You might also want to look at https://www.thesaurus.com/browse/cryptic https://www.thesaurus.com/browse/ugly At present I'm neither for or against making any change. I am in favour of having a respectful discussion, where we learn from each other. -- Jonathan From boxed at killingar.net Thu Sep 13 13:19:43 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Thu, 13 Sep 2018 19:19:43 +0200 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> <20180913103243.oegwnauiscf2ryfg@phdru.name> Message-ID: <0DC51CD0-46DD-400C-AD96-77B496215086@killingar.net> > If I were to change anything, I'd drop the reference to "Zen". That wasn't part of the original, and was added by someone else. If, e.g., a Zen Buddhist objected that this use trivializes their beliefs, I'd have real sympathy with _that_. But I'd be greatly surprised if a Zen Buddhist exists who objected to wordplay ;-) I just happen to be a Zen Buddhist! And you?re right. The worst reaction you are likely to get is an eye roll. / Kankyo (aka Anders) From rymg19 at gmail.com Thu Sep 13 13:27:08 2018 From: rymg19 at gmail.com (Ryan Gonzalez) Date: Thu, 13 Sep 2018 12:27:08 -0500 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: <0DC51CD0-46DD-400C-AD96-77B496215086@killingar.net> References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> <20180913103243.oegwnauiscf2ryfg@phdru.name> <0DC51CD0-46DD-400C-AD96-77B496215086@killingar.net> Message-ID: FWIW a big flag to me was putting an Urban Dictionary link under "references"... On Thu, Sep 13, 2018, 12:21 PM Anders Hovm?ller wrote: > > > If I were to change anything, I'd drop the reference to "Zen". That > wasn't part of the original, and was added by someone else. 
If, e.g., a > Zen Buddhist objected that this use trivializes their beliefs, I'd have > real sympathy with _that_. But I'd be greatly surprised if a Zen Buddhist > exists who objected to wordplay ;-) > > I just happen to be a Zen Buddhist! And you?re right. The worst reaction > you are likely to get is an eye roll. > > / Kankyo (aka Anders) > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -- Ryan (????) Yoko Shimomura, ryo (supercell/EGOIST), Hiroyuki Sawano >> everyone else https://refi64.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jfine2358 at gmail.com Thu Sep 13 13:31:51 2018 From: jfine2358 at gmail.com (Jonathan Fine) Date: Thu, 13 Sep 2018 18:31:51 +0100 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: <0DC51CD0-46DD-400C-AD96-77B496215086@killingar.net> References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> <20180913103243.oegwnauiscf2ryfg@phdru.name> <0DC51CD0-46DD-400C-AD96-77B496215086@killingar.net> Message-ID: Possibly off topic - but it is about beauty. Anders wrote > I just happen to be a Zen Buddhist! And you?re right. The worst reaction you are likely to get is an eye roll. https://en.wikipedia.org/wiki/Thich_Naht_Hahn is a Zen Buddhist. He has said To be beautiful means to be yourself. You don't need to be accepted by others. You need to accept yourself. I wonder if this is related to the beauty in "The Zen of Python". -- Jonathan From mikhailwas at gmail.com Thu Sep 13 13:57:30 2018 From: mikhailwas at gmail.com (Mikhail V) Date: Thu, 13 Sep 2018 20:57:30 +0300 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> Message-ID: On Thu, Sep 13, 2018 at 11:39 AM Samantha Quan wrote: > > One alternative to that clause I could think of is "Clean is better than dirty", > but please do speak up if you have better ideas. "Clean is better than hairy!" :-D > I ask you to give this change serious consideration, On a serious note: 1. Even if this slogan will be changed - it's an old folklore and it's written in many places, so at best it can be changed only in few places. Therefore your wish will not be satisfied anyway. And why care? I've started using Python long before I even knew there is such a thing called "Zen of Python". 2. Trying to take the word "ugly" out of the context and pretend it's offensive or sexism or something - are you serious about that? If so - then sorry, it borders with absurd. I could understand possible bad associations with the word "slave", but "ugly" is an general purpose word like e.g. big, small, tall, heavy, etc. 3. As for the wording - TBH, I think "Beautiful is better than ugly" sounds slightly crude, not because of possible associations, but for a topmost slogan - I find it not the most elegant wording to be honest. And I don't know what can be really 'beautiful' in _any_ code. Ugly - yes, it's often can be said about the code which is full of redundant punctuation, bad formatting, etc. But this sound strange: "this code is beautiful". Do people really say like that? I think "clean" is a better adjective for the code. 
I'd say: "Clean is better than untidy" more elegant wording, but anyway, it would not really change anything. From rosuav at gmail.com Thu Sep 13 14:14:47 2018 From: rosuav at gmail.com (Chris Angelico) Date: Fri, 14 Sep 2018 04:14:47 +1000 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> Message-ID: On Fri, Sep 14, 2018 at 3:57 AM, Mikhail V wrote: > And I don't know what can be really 'beautiful' > in _any_ code. > Ugly - yes, it's often can be said about the code which is full of > redundant punctuation, bad formatting, etc. > But this sound strange: "this code is beautiful". Do people really > say like that? > Yes. Yes, I do. Not often, because code is seldom beautiful enough to warrant comment, but it definitely does happen. ChrisA From boxed at killingar.net Thu Sep 13 14:35:02 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Thu, 13 Sep 2018 20:35:02 +0200 Subject: [Python-ideas] Improving fn(arg=arg, name=name, wibble=wibble) code In-Reply-To: References: <3FD652E4-F6CD-4CA7-85FE-BC83E21201F1@killingar.net> Message-ID: <4A878DC6-DD74-4D8B-9614-0785BC008C03@killingar.net> > Using keyword arguments is not painful. It's ugly in some unusual cases, such as creating helper functions with nearly the same signature. It?s more painful than positional. To me the fact that everyone who works on code bases that are of non-trivial size see positional arguments being used for calls with more than 3 arguments is a pretty obvious proof. I know I write them myself because it?s nicer even though you?ll have a hard time finding someone who is more of a proponent for kwargs everywhere than me! I feel this pain daily. You aren?t me so you can?t say if I feel this or not. > I try to avoid that situation in my own code. It sometimes requires a significant design change, but I'm usually pleased with the results. If you don't have the inclination to reconsider the design, it could be nice to create a standard code pattern for these pass-through functions to make them more beautiful. Not talking about those. But I agree that fixing those might be a good first step. Unless it hinders the progress of the bigger issue of course. I?ll repeat myself: what about .format()? If you localize you can?t use f-strings. What about templates in web apps? Obviously f-strings won?t do. What about json blobs in REST APIs? Again no help from f-strings. What about functions with more than 3 arguments generally? / Anders -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Thu Sep 13 14:36:08 2018 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 13 Sep 2018 11:36:08 -0700 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> <738b924c-8711-ab90-245c-a58e6724c908@kynesim.co.uk> Message-ID: On Thu, Sep 13, 2018 at 9:13 AM, Mark E. Haase wrote: > On Thu, Sep 13, 2018 at 10:49 AM Rhodri James wrote: > >> More importantly, this whole idea of banning and/or changing terminology >> is psychologically and sociologically wrong-headed. The moment you say "You >> may not use that word" you create a taboo, and give the word a power that it >> did not have before. > > > Samantha posted this as a *proposal* to python-*ideas*, the mailing list > where we purportedly discuss... umm... ideas. 
Samantha has not banned any > words from Python, so let's tone down the hyperbole. > > These responses that assume Samantha is a troll are based on... what? Other > posters on this list use Yandex e-mails, and nobody called those people > trolls. And there are a lot of disagreements about ideas, and most of those > people don't get called trolls, either. The Python CoC calls for *respect*, > and I posit that the majority reaction to Samantha's first post has been > disrespectful. > > Engage the post on the ideas?or ignore it altogether?but please don't > automatically label newcomers with controversial ideas as trolls. Let's > assume her proposal was made in good faith. It's not just automatically labeling newcomers with controversial ideas ? This is a very common tactic that online organized bigotry groups use: invent fake "socially progressive" personas, and use them to stir up arguments, undermine trust, split communities, etc. The larger campaigns are pretty well documented: http://www.slate.com/blogs/xx_factor/2014/06/16/_endfathersday_is_a_hoax_fox_news_claims_feminists_want_to_get_rid_of_father.html https://www.buzzfeednews.com/article/ryanhatesthis/your-slip-is-showing-4chan-trolls-operation-lollipop https://birdeemag.com/free-bleeding-thing/ https://www.dailydot.com/parsec/femcon-4chan-convention-scam/ http://www.newnownext.com/clovergender-hoax-fake-prank-pharma-bro-martin-shkreli-4chan-troll/01/2017/ Smaller-scale versions are also common ? these people love to jump into difficult conversations and try to make them more difficult. That said, in OP's case we don't actually know either way, and even trolls can inadvertently suggest good ideas, so we should consider the proposal on its merits. Applied to people, lookism is a real and honestly kind of horrifying thing: humans who happen to be born with less symmetric faces get paid worse, receive worse health care, all kinds of unfair things. It wasn't too long ago that being sufficiently ugly in public was actually illegal in many places: https://en.wikipedia.org/wiki/Ugly_law But even if we all agree that beautiful and ugly people should be treated equally, I don't see how it follows that beautiful and ugly buildings should be treated equally, or beautiful and ugly music should be treated equally, or beautiful and ugly code should be treated equally. The situations are totally different. Maybe there's some connection I'm missing, and if anyone (Samantha?) has links to deeper discussion then I'll happily take a look. But until then I'm totally comfortable with keeping the Zen as-is. (And I'm someone pretty far on the "SJW" side of the spectrum, and 100% in favor of Victor's original PR.) -n -- Nathaniel J. Smith -- https://vorpus.org From jfine2358 at gmail.com Thu Sep 13 15:34:41 2018 From: jfine2358 at gmail.com (Jonathan Fine) Date: Thu, 13 Sep 2018 20:34:41 +0100 Subject: [Python-ideas] Improving fn(arg=arg, name=name, wibble=wibble) code In-Reply-To: <4A878DC6-DD74-4D8B-9614-0785BC008C03@killingar.net> References: <3FD652E4-F6CD-4CA7-85FE-BC83E21201F1@killingar.net> <4A878DC6-DD74-4D8B-9614-0785BC008C03@killingar.net> Message-ID: Hi Anders and Michael I think it would be great if plugging existing functions into our code was as easy as, well, plugging an electrical appliance into a wall socket. However, even this ease of use has taken time. https://en.wikipedia.org/wiki/AC_power_plugs_and_sockets#History And travellers know that adapters are required. And for some equipment, non-trivial transformers. 
Even though there's been much progress, such as multiple inheritance,
super() and iterators, there's still nuisance, pain and errors. The good
news is that we've identified some ways of improving the situation.
Although there are the usual contending views and differences of opinion,
there's also positive energy and shared values. So I'm optimistic.

Anders, you wrote:

> I'll repeat myself: what about .format()? If you localize you can't use
> f-strings. What about templates in web apps? Obviously f-strings won't do.
> What about json blobs in REST APIs? Again no help from f-strings. What about
> functions with more than 3 arguments generally?

For f-strings, can't we turn the dictionary into a namespace, like so

Python 3.6.2 (default, Jul 29 2017, 00:00:00)
>>> d = dict(a='apple', b='banana', c='cherry', d=137)
>>> class A: pass
>>> ns = A()
>>> ns.__dict__ = d # Yes, we can do this!
>>> ns.a
'apple'
>>> f'The a-fruit is {ns.a}, and d is {ns.d}.'
'The a-fruit is apple, and d is 137.'

I don't see why this doesn't help reduce the pain.

For functions with more than 3 arguments, perhaps you could give some
examples where you'd like improvement.

best wishes

Jonathan

From mike at selik.org Thu Sep 13 15:34:46 2018
From: mike at selik.org (Michael Selik)
Date: Thu, 13 Sep 2018 12:34:46 -0700
Subject: [Python-ideas] Improving fn(arg=arg, name=name, wibble=wibble) code
In-Reply-To: <4A878DC6-DD74-4D8B-9614-0785BC008C03@killingar.net>
References: <3FD652E4-F6CD-4CA7-85FE-BC83E21201F1@killingar.net>
 <4A878DC6-DD74-4D8B-9614-0785BC008C03@killingar.net>
Message-ID:

On Thu, Sep 13, 2018 at 11:35 AM Anders Hovmöller wrote:

> Using keyword arguments is not painful. It's ugly in some unusual cases,
> such as creating helper functions with nearly the same signature.
>
> It's more painful than positional. To me the fact that everyone who works
> on code bases that are of non-trivial size see positional arguments being
> used for calls with more than 3 arguments is a pretty obvious proof.
>

Looking through my recent code, I find I rarely call functions passing
more than 3 arguments, unless using * or ** unpacking. This might be part
of the difference in our styles.

I feel this pain daily. You aren't me so you can't say if I feel this or
> not.
>

Yes, of course. Pain is subjective. Are you aware that changing Python
syntax is painful for not only the implementers, but also the users?
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From k7hoven at gmail.com Thu Sep 13 16:06:29 2018
From: k7hoven at gmail.com (Koos Zevenhoven)
Date: Thu, 13 Sep 2018 23:06:29 +0300
Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause
In-Reply-To:
References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net>
 <20180913103243.oegwnauiscf2ryfg@phdru.name>
 <0DC51CD0-46DD-400C-AD96-77B496215086@killingar.net>
Message-ID:

On Thu, Sep 13, 2018 at 8:33 PM Jonathan Fine wrote:
>
> To be beautiful means to be yourself. You don't need
> to be accepted by others. You need to accept yourself.
>
> I wonder if this is related to the beauty in "The Zen of Python".
>

Different people may need different advice. Python and open source have
always been more about things like inclusivity and diversity than about
excessive political correctness. I'm sure the historical concepts of
master/slave were so distant to the handsome young men that developed the
computer science concepts that they didn't expect to cause any naming
conflicts.
To avoid forming any unnecessary taboos and making things more fragile
than they actually are, I'd like to point out that a computer is the
perfect slave, and that code that causes harm to others is the perfect .
Don't judge the use of a single word. Look at the words next to it, and
the wider context. If you still think the wording itself is harmful,
consider doing something about it. And if you think the background of some
people makes it difficult for them to understand the context of a word,
consider helping them out.

If you can't tell inclusivity/diversity from political correctness, or
dirty words from dirty bytes or from unfriendliness and intolerance, you'd
better go fuck yourself. There's nothing interesting here, anyway ;-). Or
at least it may be a good idea to be somewhat careful.

Tim or others would probably have a better and more relevant response,
though. Sorry if you have to settle for mine.

-Koos
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sammiequan at yandex.com Thu Sep 13 16:06:53 2018
From: sammiequan at yandex.com (Samantha Quan)
Date: Thu, 13 Sep 2018 21:06:53 +0100
Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause
In-Reply-To:
References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net>
 <738b924c-8711-ab90-245c-a58e6724c908@kynesim.co.uk>
Message-ID: <3376791536869213@iva4-406defa25fee.qloud-c.yandex.net>

An HTML attachment was scrubbed...
URL:

From xrmxrm at gmail.com Thu Sep 13 16:27:28 2018
From: xrmxrm at gmail.com (Richard Mateosian)
Date: Thu, 13 Sep 2018 13:27:28 -0700
Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause
In-Reply-To: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net>
References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net>
Message-ID:

Body shaming is bad. Don't call people "ugly," regardless of how they
look. Code shaming, on the other hand, can be productive. Nothing wrong
with calling ugly code ugly.    ...RM

On Thu, Sep 13, 2018 at 1:38 AM Samantha Quan wrote:

You can't argue that the word "ugly" is harmless, now that society
condemns body shaming, and instead promotes body acceptance and self-love.

--
Richard Mateosian Berkeley, California

From phd at phdru.name Thu Sep 13 17:28:25 2018
From: phd at phdru.name (Oleg Broytman)
Date: Thu, 13 Sep 2018 23:28:25 +0200
Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause
In-Reply-To: <3376791536869213@iva4-406defa25fee.qloud-c.yandex.net>
References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net>
 <738b924c-8711-ab90-245c-a58e6724c908@kynesim.co.uk>
 <3376791536869213@iva4-406defa25fee.qloud-c.yandex.net>
Message-ID: <20180913212825.y5vx7yif3km7dzdm@phdru.name>

On Thu, Sep 13, 2018 at 09:06:53PM +0100, Samantha Quan wrote:
> It's my understanding that master/slave terminology is now deprecated,
> because these words carry dark meanings, too, and further alienate folks
> who feel uncomfortable being reminded of them everywhere. That's the idea
> of inclusivity: to make other people (usually from marginalized groups)
> feel welcome and safe. By removing/replacing the word "ugly", we could
> make one additional step towards being more inclusive.

I also propose to ban the following technical terms that carry dark
meanings: "abort", "kill" and "execute" (stop the genocide!) Not sure
about "terminate".

There are also nationalist jokes about the Dutch. That also must be
stopped!
Let's decide how to replace them and who'll send pull requests about what. Oleg. -- Oleg Broytman https://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. From marko.ristin at gmail.com Thu Sep 13 18:24:11 2018 From: marko.ristin at gmail.com (Marko Ristin-Kaufmann) Date: Fri, 14 Sep 2018 00:24:11 +0200 Subject: [Python-ideas] Pre-conditions and post-conditions In-Reply-To: References: <140891b8-3aef-0991-9421-7479e6a63eb6@gmail.com> Message-ID: Hi, A brief follow-up (latest version 1.5.3): I removed the dependency on meta package so that now all comprehensions and generator expressions work. I still had to depend on asttokens in order to get the source code of the condition function. Is there maybe an alternative solution which uses only standard libraries? Any thoughts or feedback on the icontract library in general? Cheers, Marko On Mon, 10 Sep 2018 at 09:29, Marko Ristin-Kaufmann wrote: > Hi, > > I implemented the inheritance via meta classes and function and class > attributes for pre/postconditions and invariants, respectively. Unless I > missed something, this is as far as we can go without the proper language > support with a library based on decorators: > https://github.com/Parquery/icontract (version 1.5.0) > > Note that it is actually a complete implementation of design-by-contract > that supports both weakening of the preconditions and strengthening of the > postconditions and invariants. > > Could you please have a look and let me know what you think about the > current implementation? > > Once we are sure that there is nothing obvious missing, I'd like to move > forward and discuss whether we could add this library (or rewrite it) into > the standard Python libraries and what needs to be all fixed till to make > it that far. > > Cheers, > Marko > > On Sat, 8 Sep 2018 at 21:34, Jonathan Fine wrote: > >> Michel Desmoulin wrote: >> >> > Isn't the purpose of "assert" to be able to do design by contract ? >> > >> > assert test, "error message is the test fail" >> > >> > I mean, you just write your test, dev get a feedback on problems, and >> > prod can remove all assert using -o. >> > >> > What more do you need ? >> >> Good question. My opinion is that assert statements are good. I like them. >> >> But wait, more is possible. Here are some ideas. >> >> 1. Checking the return value (or exception). This is a post-condition. >> >> 2. Checking return value, knowing the input values. This is a more >> sophisticated post-condition. >> >> 3. Adding checks around an untrusted function - possibly third party, >> possibly written in C. >> >> 4. Selective turning on and off of checking. >> >> The last two, selective checks around untrusted functions, I find >> particularly interesting. >> >> Suppose you have a solid, trusted, well-tested and reliable system. >> And you add, or change, a function called wibble(). In this situation, >> errors are most likely to be in wibble(), or in the interface to >> wibble(). >> >> So which checks are most valuable? I suggest the answer is >> >> 1. Checks internal to wibble. >> >> 2. Pre-conditions and post-conditions for wibble >> >> 3. Pre-conditions for any function called by wibble. >> >> Suppose wibble calls wobble. We should certainly have the system check >> wobble's preconditions, in this situation. But we don't need wobble to >> run checks all the time. Only when the immediate caller is wibble. >> >> I think assertions and design-by-contract point in similar directions. 
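To make the wibble/wobble point concrete, here is a rough sketch of what
assert-based pre- and post-conditions can look like as decorators. It is
illustrative only: the decorator names are invented here, and this is not
the icontract API.

import functools

def precondition(check):
    # Evaluate `check` against the arguments before the call.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            assert check(*args, **kwargs), f"precondition failed for {func.__name__}"
            return func(*args, **kwargs)
        return wrapper
    return decorator

def postcondition(check):
    # Evaluate `check` against the return value after the call.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            result = func(*args, **kwargs)
            assert check(result), f"postcondition failed for {func.__name__}"
            return result
        return wrapper
    return decorator

@precondition(lambda x: x >= 0)
@postcondition(lambda result: result >= 0)
def wibble(x):
    return x * 2

# As with plain assert statements, the checks are skipped under `python -O`;
# the wrappers remain, but the conditions are not evaluated.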
>> But design-by-contract takes you further, and is I suspect more >> valuable when the system being built is large. >> >> Thank you, Michel, for your good question. >> >> -- >> Jonathan >> _______________________________________________ >> Python-ideas mailing list >> Python-ideas at python.org >> https://mail.python.org/mailman/listinfo/python-ideas >> Code of Conduct: http://python.org/psf/codeofconduct/ >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg.ewing at canterbury.ac.nz Thu Sep 13 18:41:31 2018 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 14 Sep 2018 10:41:31 +1200 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> <20180913103243.oegwnauiscf2ryfg@phdru.name> <20180913131600.1d89182e@fsol> Message-ID: <5B9AE79B.1070908@canterbury.ac.nz> Jacco van Dorp wrote: > You can have master and slave devices - for example, if > I have a PC that tells a robot what to do, my PC is the master and the > robot the slave. If we're going to object to "slave", we should object to "robot" as well, since it's derived from a Czech word meaning "forced worker". https://www.etymonline.com/word/robot -- Greg From greg.ewing at canterbury.ac.nz Thu Sep 13 18:47:07 2018 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 14 Sep 2018 10:47:07 +1200 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> Message-ID: <5B9AE8EB.9040504@canterbury.ac.nz> M.-A. Lemburg wrote: > For > me, it refers to a general feeling of consistency, pureness and > standing out on its own. It's abstract and doesn't have > anything to do with humans. Yep. And the proposed replacement "clean/dirty" doesn't even mean the same thing. It's entirely possible for a thing to be spotlessly clean without being beautiful or elegant. -- Greg From timothy.c.delaney at gmail.com Thu Sep 13 18:55:15 2018 From: timothy.c.delaney at gmail.com (Tim Delaney) Date: Fri, 14 Sep 2018 08:55:15 +1000 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: <5B9AE8EB.9040504@canterbury.ac.nz> References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> <5B9AE8EB.9040504@canterbury.ac.nz> Message-ID: On Fri, 14 Sep 2018 at 08:48, Greg Ewing wrote: > M.-A. Lemburg wrote: > > For > > me, it refers to a general feeling of consistency, pureness and > > standing out on its own. It's abstract and doesn't have > > anything to do with humans. > > Yep. And the proposed replacement "clean/dirty" doesn't even > mean the same thing. It's entirely possible for a thing to > be spotlessly clean without being beautiful or elegant. > "Elegant" is the *only* word I think it would be appropriate to replace "beautiful" with. And I can't think of an elegant replacement for "ugly" to pair with "elegant". "Awkward" would probably be the best I can think of, and "Elegant is better than awkward" just feels kinda awkward ... Tim Delaney -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From guido at python.org Thu Sep 13 18:56:55 2018 From: guido at python.org (Guido van Rossum) Date: Thu, 13 Sep 2018 15:56:55 -0700 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: <5B9AE8EB.9040504@canterbury.ac.nz> References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> <5B9AE8EB.9040504@canterbury.ac.nz> Message-ID: Everyone who still wants to reply to this thread: please decide for yourself whether the OP, "Samantha Quan" who started it could be a Russian troll. Facts to consider: (a) the OP's address is ... at yandex.com, a well-known Russian website (similar to Google); (b) there's a Canadian actress named Samantha Quan. -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg.ewing at canterbury.ac.nz Thu Sep 13 19:02:51 2018 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 14 Sep 2018 11:02:51 +1200 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> Message-ID: <5B9AEC9B.2000104@canterbury.ac.nz> Calvin Spealman wrote: > I ask everyone on this thread being rude to please step back and try to > look at the issue without your bias and knee-jerk reactions. I've given it some thought, and this is what I think: As has been pointed out, context is important. The reason that shunning people for not having beautiful bodies is distasteful is that people don't have much choice about their physical appearance. But we *do* have a choice about which non-human things we surround ourselves with, especially those things that we make ourselves. Calling for the words "beautiful" and "ugly" to be expunged from the language is saying that we shouldn't be allowed to choose *anything* based on how it affects us aesthetically. That, I think, would make the world a rather miserable place to live in. -- Greg From tim.peters at gmail.com Thu Sep 13 19:34:07 2018 From: tim.peters at gmail.com (Tim Peters) Date: Thu, 13 Sep 2018 18:34:07 -0500 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> <5B9AE8EB.9040504@canterbury.ac.nz> Message-ID: [Tim Delaney ] > "Elegant" is the *only* word I think it would be appropriate to replace > "beautiful" with. > And I can't think of an elegant replacement for "ugly" to pair with > "elegant". "Awkward" would probably be the best I can think of, and > "Elegant is better than awkward" just feels kinda awkward ... > > I already made clear that I'm opposed to changing it. But, if fashion dictates it must change, then the only worthy alternative would be: Elegant is better than inelegant. Which illustrates all by itself why inelegant sucks ;-) -------------- next part -------------- An HTML attachment was scrubbed... URL: From kulakov.ilya at gmail.com Thu Sep 13 20:18:01 2018 From: kulakov.ilya at gmail.com (Ilya Kulakov) Date: Thu, 13 Sep 2018 17:18:01 -0700 Subject: [Python-ideas] Deprecation utilities for the warnings module Message-ID: <3E800307-2D67-4E8E-B2C3-4C83875F0226@gmail.com> (Apologies if it's a duplicate. I originally posted to python-ideas at googlegroups.com) I've been recently working on an internal library written entirely in Python. The library while being under development was as actively used by other teams. 
The product was under pressure and hasty design decisions were made.
Certain interfaces were exposed before the scope of the problem was
entirely clear.

Didn't take long before the code started to beg for refactoring.
But compatibility needed to be preserved. As a result a set of utilities,
which I'd like to share, came to life.

---

The warnings module is a robust foundation. Authors can warn
what should be avoided and users can choose how to act.
However in its current form it still requires utilities
to be useful. E.g. it's non-trivial to properly replace
a deprecated class.

I'd like to propose an extension for the warnings module
to address this problem.

The extension consists of 4 decorators:
- @deprecated
- @obsolete
- @deprecated_arg
- @obsolete_arg

The @deprecated decorator marks an object to issue a warning
if it's used:
- Callable objects issue a warning upon a call
- Property attributes issue a warning upon an access
- Classes issue a warning upon instantiation and subclassing
  (directly or via a metaclass)

The @obsolete decorator marks an object in the same way
as @deprecated does but forwards usage to the replacement:
- Callable objects redirect the call
- Property attributes redirect the access (get / set / del)
- The class is replaced in a way that during both instantiation
  and subclassing the replacement is used

In case of classes extra care is taken to preserve validity
of existing isinstance and issubclass checks.

The @deprecated_arg and @obsolete_arg work with signatures
of callable objects. Upon a call either a warning is issued
or an argument mapping is performed.

Please take a look at the implementation and a few examples:
https://gist.github.com/Kentzo/53df97c7a54609d3febf5f8eb6b67118

Why I think it should be a part of stdlib:
- Library authors are reluctant to add dependencies,
  especially when it's for internal usage
- Ease of use will reduce compatibility problems
  and ease the migration burden since the solution will be
  readily available
- IDEs and static analyzers are more likely
  to support it

---

Joshua Harlow shared a link to OpenStack's debtcollector:
https://docs.openstack.org/debtcollector/latest/reference/index.html
Certain aspects of my implementation are inspired by it.

Please let me know what you think about the idea in general
and implementation in particular. If that's something the community
is interested in, I'm willing to work on it.

From boxed at killingar.net Thu Sep 13 20:22:44 2018
From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=)
Date: Fri, 14 Sep 2018 02:22:44 +0200
Subject: [Python-ideas] Improving fn(arg=arg, name=name, wibble=wibble) code
In-Reply-To:
References: <3FD652E4-F6CD-4CA7-85FE-BC83E21201F1@killingar.net>
 <4A878DC6-DD74-4D8B-9614-0785BC008C03@killingar.net>
Message-ID:

>> I'll repeat myself: what about .format()? If you localize you can't use
>> f-strings. What about templates in web apps? Obviously f-strings won't do.
>> What about json blobs in REST APIs? Again no help from f-strings. What about
>> functions with more than 3 arguments generally?
>
> For f-strings, can't we turn the dictionary into a namespace, like so

I say I'm talking about cases where you can't use f-strings and you
directly talk about f-strings? I think you might have missed the point.

> For functions with more than 3 arguments, perhaps you could give some
> examples where you'd like improvement.

Sure.
Run this script against django: https://gist.github.com/boxed/e60e3e19967385dc2c7f0de483723502 It will print all function calls that are positional and have > 2 arguments. Not a single one is good as is, all would be better with keyword arguments. A few of them are pass through as discussed before, but far from all. For example: django-master/django/http/multipartparser.py 225 handler.new_file( field_name, file_name, content_type, content_length, charset, content_type_extra, ) That?s positional because keyword is more painful. / Anders -------------- next part -------------- An HTML attachment was scrubbed... URL: From boxed at killingar.net Thu Sep 13 20:34:28 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Fri, 14 Sep 2018 02:34:28 +0200 Subject: [Python-ideas] Improving fn(arg=arg, name=name, wibble=wibble) code In-Reply-To: References: <3FD652E4-F6CD-4CA7-85FE-BC83E21201F1@killingar.net> <4A878DC6-DD74-4D8B-9614-0785BC008C03@killingar.net> Message-ID: > On 13 Sep 2018, at 21:34, Michael Selik wrote: > > On Thu, Sep 13, 2018 at 11:35 AM Anders Hovm?ller > wrote: >> Using keyword arguments is not painful. It's ugly in some unusual cases, such as creating helper functions with nearly the same signature. > > It?s more painful than positional. To me the fact that everyone who works on code bases that are of non-trivial size see positional arguments being used for calls with more than 3 arguments is a pretty obvious proof. > > Looking through my recent code, I find I rarely call functions passing more than 3 arguments, unless using * or ** unpacking. This might be part of the difference in our styles. I wrote a script so you can get a list of them all in big code bases without looking through the code randomly. https://gist.github.com/boxed/e60e3e19967385dc2c7f0de483723502 > > I feel this pain daily. You aren?t me so you can?t say if I feel this or not. > > Yes, of course. Pain is subjective. Are you aware that changing Python syntax is painful for not only the implementers, but also the users? Let?s not exaggerate here. I?ve already implemented my original proposed syntax, and I?m pretty sure the =foo syntax is similarly easy (if not easier!) to implement. It took me something like one hour and I haven?t coded C for 20 years, 7 years if you count C++, and I?ve never looked at the CPython code base before. It?s not that painful to implement. As for users I don?t buy that argument much either. We aren?t optimizing for not changing the language just to make people not learn new things. See f-strings, assignment expressions, new except syntax, matrix multiplication operator. We are optimizing for the value of python. Or, that?s how I see it anyway. / Anders -------------- next part -------------- An HTML attachment was scrubbed... URL: From boxed at killingar.net Thu Sep 13 20:41:49 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Fri, 14 Sep 2018 02:41:49 +0200 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: <3376791536869213@iva4-406defa25fee.qloud-c.yandex.net> References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> <738b924c-8711-ab90-245c-a58e6724c908@kynesim.co.uk> <3376791536869213@iva4-406defa25fee.qloud-c.yandex.net> Message-ID: <96CA5781-D71A-42B6-884B-AEDD4A243F37@killingar.net> > "Ugly" is very obviously a slur. It carries a dark meaning *and* it's still being actively used towards people. 
Honestly, I can't imagine someone cheering up when they see that word, especially if they're self-conscious about their appearance or were told they were "ugly" at some point of their life. Many things are slurs for this reason. This thread has suggested ?hairy? which will have the exact same problem. ?Smell? as in ?code smell? has bad connotations for every man who has ever been a teenager and I?m guessing for many women too. At least we use ?ugly? for cars or trees, but ?hairy? and ?smelly? not so much. I?d like to see some better suggestions for replacements here. The Zen is trying to express what the Python community feels about how code looks and feels, and just removing this point would make the Zen less reflective of the actual values we share. / Anders From mike at selik.org Thu Sep 13 21:35:20 2018 From: mike at selik.org (Michael Selik) Date: Thu, 13 Sep 2018 18:35:20 -0700 Subject: [Python-ideas] Improving fn(arg=arg, name=name, wibble=wibble) code In-Reply-To: References: <3FD652E4-F6CD-4CA7-85FE-BC83E21201F1@killingar.net> <4A878DC6-DD74-4D8B-9614-0785BC008C03@killingar.net> Message-ID: On Thu, Sep 13, 2018 at 5:34 PM Anders Hovm?ller wrote: > I wrote a script so you can get a list of [good use cases] in big code > bases without looking through the code randomly. > https://gist.github.com/boxed/e60e3e19967385dc2c7f0de483723502 > In that case, you should be able to link to a compelling example. If you go to the trouble of finding one, I'll take time to try to refactor it. Let?s not exaggerate [the pain of implementation]. > The pain is not just time to implement, but also code review, more surface area for bugs, increased maintenance burden, and teaching users about the new feature. We aren?t optimizing for not changing the language just to make people not > learn new things. > On the contrary, I think we should be. Ceteris paribus, it's not good to churn the language. It's not some startup hoping to find a business model. The more techniques available, the less consistent Pythonic style will be. There are always trade-offs. In this case, I don't think the benefit is worth the cost. -------------- next part -------------- An HTML attachment was scrubbed... URL: From boxed at killingar.net Thu Sep 13 21:49:58 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Fri, 14 Sep 2018 03:49:58 +0200 Subject: [Python-ideas] Improving fn(arg=arg, name=name, wibble=wibble) code In-Reply-To: References: <3FD652E4-F6CD-4CA7-85FE-BC83E21201F1@killingar.net> <4A878DC6-DD74-4D8B-9614-0785BC008C03@killingar.net> Message-ID: <81EE93A7-C0C4-4ADA-9E66-99E4387A4ADE@killingar.net> > On 14 Sep 2018, at 03:35, Michael Selik wrote: > > On Thu, Sep 13, 2018 at 5:34 PM Anders Hovm?ller > wrote: > I wrote a script so you can get a list of [good use cases] in big code bases without looking through the code randomly. https://gist.github.com/boxed/e60e3e19967385dc2c7f0de483723502 > > In that case, you should be able to link to a compelling example. If you go to the trouble of finding one, I'll take time to try to refactor it. https://github.com/django/django/blob/master/django/db/models/sql/compiler.py#L707 Is a pretty typical one. 
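The call on that line is self.find_ordering_name(item, opts, alias, order,
already_seen); spelled with keywords it would read roughly like the
standalone toy below. Apart from default_order, the parameter names are
invented for illustration and are not Django's actual signature.

# Toy stand-in for the method behind compiler.py line 707; only
# default_order is known to match the real parameter name.
def find_ordering_name(name, opts, alias=None, default_order='ASC',
                       already_seen=None):
    return name, opts, alias, default_order, already_seen

item, opts, alias, order, already_seen = 'pub_date', 'book_meta', 'T1', 'DESC', set()

# Positional, as the call site reads today:
find_ordering_name(item, opts, alias, order, already_seen)

# Keyword spelling; a later reshuffle of the signature turns this into an
# immediate TypeError instead of a silent argument swap:
find_ordering_name(name=item, opts=opts, alias=alias,
                   default_order=order, already_seen=already_seen)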
The full list for django looks like this: django-master/tests/model_forms/tests.py 2837 super().__new__(cls, name, bases, attrs) django-master/tests/middleware/tests.py 374 super().is_ignorable_request(request, uri, domain, referer) django-master/tests/migrations/test_operations.py 1321 operation.database_forwards(app_label, editor, project_state, first_state) django-master/tests/migrations/test_operations.py 1332 operation.database_forwards(app_label, editor, first_state, second_state) django-master/tests/migrations/test_operations.py 1337 operation.database_backwards(app_label, editor, second_state, first_state) django-master/tests/migrations/test_operations.py 2669 operation.database_forwards(app_label, editor, project_state, new_state) django-master/tests/migrations/test_operations.py 2674 operation.database_backwards(app_label, editor, new_state, project_state) django-master/tests/migrations/test_multidb.py 65 operation.database_forwards(app_label, editor, project_state, new_state) django-master/tests/migrations/test_multidb.py 72 operation.database_backwards(app_label, editor, new_state, project_state) django-master/tests/migrations/test_multidb.py 124 operation.database_forwards(app_label, editor, project_state, new_state) django-master/tests/migrations/test_multidb.py 160 operation.database_forwards(app_label, editor, project_state, new_state) django-master/tests/raw_query/tests.py 54 self.assertProcessed(model, results, expected_results, expected_annotations) django-master/tests/raw_query/tests.py 255 self.assertSuccessfulRawQuery(Author, query, authors, expected_annotations) django-master/tests/forms_tests/widget_tests/test_multiwidget.py 45 super().__init__(fields, required, widget, label, initial) django-master/tests/contenttypes_tests/test_models.py 52 ContentType.objects.get_for_models(ContentType, FooWithUrl, ProxyModel, ConcreteModel) django-master/tests/contenttypes_tests/test_models.py 63 ContentType.objects.get_for_models(ContentType, FooWithUrl, ProxyModel, ConcreteModel) django-master/tests/admin_views/admin.py 154 super().save_model(request, obj, form, change) django-master/tests/admin_views/admin.py 335 super().save_related(request, form, formsets, change) django-master/django/templatetags/i18n.py 142 translation.npgettext(message_context, singular, plural, count) django-master/django/middleware/common.py 126 self.is_ignorable_request(request, path, domain, referer) django-master/django/forms/models.py 216 super(ModelFormMetaclass, mcs).__new__(mcs, name, bases, attrs) django-master/django/forms/widgets.py 174 super(MediaDefiningClass, mcs).__new__(mcs, name, bases, attrs) django-master/django/forms/widgets.py 899 super().__init__(attrs, date_format, time_format, date_attrs, time_attrs) django-master/django/forms/forms.py 36 super(DeclarativeFieldsMetaclass, mcs).__new__(mcs, name, bases, attrs) django-master/django/core/cache/backends/filebased.py 28 self.set(key, value, timeout, version) django-master/django/core/mail/message.py 430 super().__init__( subject, body, from_email, to, bcc, connection, attachments, headers, cc, reply_to, ) django-master/django/core/management/sql.py 17 connection.ops.sql_flush(style, tables, seqs, allow_cascade) django-master/django/core/management/commands/inspectdb.py 163 self.get_meta(table_name, constraints, column_to_field_name, is_view) django-master/django/core/serializers/python.py 123 base.deserialize_m2m_values(field, field_value, using, handle_forward_references) django-master/django/core/serializers/python.py 133 
base.deserialize_fk_value(field, field_value, using, handle_forward_references) django-master/django/core/files/uploadedfile.py 62 super().__init__(file, name, content_type, size, charset, content_type_extra) django-master/django/core/files/uploadedfile.py 83 super().__init__(file, name, content_type, size, charset, content_type_extra) django-master/django/test/selenium.py 21 super().__new__(cls, name, bases, attrs) django-master/django/test/testcases.py 381 self._assert_contains( response, text, status_code, msg_prefix, html) django-master/django/test/testcases.py 398 self._assert_contains( response, text, status_code, msg_prefix, html) django-master/django/test/testcases.py 619 self._assert_raises_or_warns_cm(func, cm_attr, expected_exception, expected_message) django-master/django/template/library.py 187 super().__init__(func, takes_context, args, kwargs) django-master/django/template/library.py 204 super().__init__(func, takes_context, args, kwargs) django-master/django/template/response.py 144 super().__init__(template, context, content_type, status, charset, using) django-master/django/utils/deprecation.py 48 super().__new__(cls, name, bases, attrs) django-master/django/utils/duration.py 40 '{}P{}DT{:02d}H{:02d}M{:02d}{}S'.format(sign, days, hours, minutes, seconds, ms) django-master/django/utils/http.py 176 datetime.datetime(year, month, day, hour, min, sec) django-master/django/utils/text.py 99 self._text_chars(length, truncate, text, truncate_len) django-master/django/utils/decorators.py 138 middleware.process_view(request, view_func, args, kwargs) django-master/django/utils/translation/__init__.py 95 _trans.npgettext(context, singular, plural, number) django-master/django/contrib/admin/options.py 796 self.paginator(queryset, per_page, orphans, allow_empty_first_page) django-master/django/contrib/admin/options.py 1526 self._changeform_view(request, object_id, form_url, extra_context) django-master/django/contrib/admin/options.py 1567 self.construct_change_message(request, form, formsets, add) django-master/django/contrib/admin/options.py 1597 self.get_inline_formsets(request, formsets, inline_instances, obj) django-master/django/contrib/admin/options.py 1640 self.changeform_view(request, object_id, form_url, extra_context) django-master/django/contrib/admin/widgets.py 436 self.create_option(name, option_value, option_label, selected_choices, index) django-master/django/contrib/admin/helpers.py 333 super().__init__(form, fieldsets, prepopulated_fields, readonly_fields, model_admin) django-master/django/contrib/admin/filters.py 67 super().__init__(request, params, model, model_admin) django-master/django/contrib/admin/filters.py 126 super().__init__(request, params, model, model_admin) django-master/django/contrib/admin/filters.py 169 super().__init__(field, request, params, model, model_admin, field_path) django-master/django/contrib/admin/filters.py 228 super().__init__(field, request, params, model, model_admin, field_path) django-master/django/contrib/admin/filters.py 263 super().__init__(field, request, params, model, model_admin, field_path) django-master/django/contrib/admin/filters.py 344 super().__init__(field, request, params, model, model_admin, field_path) django-master/django/contrib/admin/filters.py 382 super().__init__(field, request, params, model, model_admin, field_path) django-master/django/contrib/postgres/search.py 62 super().resolve_expression(query, allow_joins, reuse, summarize, for_save) django-master/django/contrib/postgres/search.py 65 
Value(self.config).resolve_expression(query, allow_joins, reuse, summarize, for_save) django-master/django/contrib/postgres/search.py 67 self.config.resolve_expression(query, allow_joins, reuse, summarize, for_save) django-master/django/contrib/postgres/search.py 89 super().__init__(lhs, connector, rhs, output_field) django-master/django/contrib/postgres/search.py 133 super().resolve_expression(query, allow_joins, reuse, summarize, for_save) django-master/django/contrib/postgres/search.py 136 Value(self.config).resolve_expression(query, allow_joins, reuse, summarize, for_save) django-master/django/contrib/postgres/search.py 138 self.config.resolve_expression(query, allow_joins, reuse, summarize, for_save) django-master/django/contrib/postgres/search.py 169 super().__init__(lhs, connector, rhs, output_field) django-master/django/contrib/postgres/aggregates/statistics.py 19 super().resolve_expression(query, allow_joins, reuse, summarize) django-master/django/contrib/gis/geos/mutable_list.py 269 self._assign_extended_slice(start, stop, step, valueList) django-master/django/contrib/gis/db/backends/oracle/operations.py 49 super().as_sql(connection, lookup, template_params, sql_params) django-master/django/contrib/gis/db/backends/postgis/schema.py 48 super()._alter_column_type_sql(table, old_field, new_field, new_type) django-master/django/contrib/gis/db/backends/spatialite/operations.py 22 super().as_sql(connection, lookup, template_params, sql_params) django-master/django/contrib/gis/db/backends/spatialite/schema.py 138 # Alter table super().alter_db_table(model, old_db_table, new_db_table, disable_constraints) django-master/django/contrib/gis/db/models/lookups.py 83 rhs_op.as_sql(connection, self, template_params, sql_params) django-master/django/contrib/gis/db/models/aggregates.py 35 super().resolve_expression(query, allow_joins, reuse, summarize, for_save) django-master/django/http/multipartparser.py 225 handler.new_file( field_name, file_name, content_type, content_length, charset, content_type_extra, ) django-master/django/db/migrations/autodetector.py 827 self.questioner.ask_rename(model_name, rem_field_name, field_name, field) django-master/django/db/migrations/operations/models.py 400 self.database_forwards(app_label, schema_editor, from_state, to_state) django-master/django/db/migrations/operations/models.py 472 self.database_forwards(app_label, schema_editor, from_state, to_state) django-master/django/db/migrations/operations/models.py 534 self.database_forwards(app_label, schema_editor, from_state, to_state) django-master/django/db/migrations/operations/models.py 615 self.database_forwards(app_label, schema_editor, from_state, to_state) django-master/django/db/migrations/operations/fields.py 254 self.database_forwards(app_label, schema_editor, from_state, to_state) django-master/django/db/migrations/operations/special.py 41 database_operation.database_forwards(app_label, schema_editor, from_state, to_state) django-master/django/db/migrations/operations/special.py 57 database_operation.database_backwards(app_label, schema_editor, from_state, to_state) django-master/django/db/backends/ddl_references.py 109 super().__init__(table, columns, quote_name, col_suffixes) django-master/django/db/backends/postgresql/schema.py 105 super()._alter_column_type_sql(model, old_field, new_field, new_type) django-master/django/db/backends/postgresql/schema.py 119 super()._alter_field( model, old_field, new_field, old_type, new_type, old_db_params, new_db_params, strict, ) 
django-master/django/db/backends/postgresql/schema.py 138 super()._index_columns(table, columns, col_suffixes, opclasses) django-master/django/db/backends/oracle/creation.py 35 self._execute_test_db_creation(cursor, parameters, verbosity, keepdb) django-master/django/db/backends/oracle/creation.py 52 self._handle_objects_preventing_db_destruction(cursor, parameters, verbosity, autoclobber) django-master/django/db/backends/oracle/creation.py 62 self._execute_test_db_creation(cursor, parameters, verbosity, keepdb) django-master/django/db/backends/oracle/creation.py 74 self._create_test_user(cursor, parameters, verbosity, keepdb) django-master/django/db/backends/oracle/creation.py 91 self._create_test_user(cursor, parameters, verbosity, keepdb) django-master/django/db/backends/oracle/creation.py 202 self._execute_allow_fail_statements(cursor, statements, parameters, verbosity, acceptable_ora_err) django-master/django/db/backends/oracle/creation.py 223 self._execute_allow_fail_statements(cursor, statements, parameters, verbosity, acceptable_ora_err) django-master/django/db/backends/oracle/creation.py 241 self._execute_statements(cursor, statements, parameters, verbosity) django-master/django/db/backends/oracle/creation.py 250 self._execute_statements(cursor, statements, parameters, verbosity) django-master/django/db/backends/oracle/schema.py 57 super().alter_field(model, old_field, new_field, strict) django-master/django/db/backends/oracle/schema.py 68 self.alter_field(model, old_field, new_field, strict) django-master/django/db/backends/mysql/schema.py 97 super()._alter_column_type_sql(model, old_field, new_field, new_type) django-master/django/db/backends/mysql/schema.py 101 super()._rename_field_sql(table, old_field, new_field, new_type) django-master/django/db/backends/base/schema.py 518 self._alter_many_to_many(model, old_field, new_field, strict) django-master/django/db/backends/base/schema.py 532 self._alter_field(model, old_field, new_field, old_type, new_type, old_db_params, new_db_params, strict) django-master/django/db/backends/base/schema.py 626 self._alter_column_type_sql(model, old_field, new_field, new_type) django-master/django/db/backends/base/schema.py 943 self._index_columns(table, columns, col_suffixes, opclasses) django-master/django/db/models/query.py 1041 clone.query.add_extra(select, select_params, where, params, tables, order_by) django-master/django/db/models/expressions.py 239 expr.resolve_expression(query, allow_joins, reuse, summarize) django-master/django/db/models/expressions.py 445 c.lhs.resolve_expression(query, allow_joins, reuse, summarize, for_save) django-master/django/db/models/expressions.py 446 c.rhs.resolve_expression(query, allow_joins, reuse, summarize, for_save) django-master/django/db/models/expressions.py 599 arg.resolve_expression(query, allow_joins, reuse, summarize, for_save) django-master/django/db/models/expressions.py 666 super().resolve_expression(query, allow_joins, reuse, summarize, for_save) django-master/django/db/models/expressions.py 893 c.result.resolve_expression(query, allow_joins, reuse, summarize, for_save) django-master/django/db/models/expressions.py 957 case.resolve_expression(query, allow_joins, reuse, summarize, for_save) django-master/django/db/models/expressions.py 958 c.default.resolve_expression(query, allow_joins, reuse, summarize, for_save) django-master/django/db/models/lookups.py 237 self.resolve_expression_parameter(compiler, connection, sql, param) django-master/django/db/models/aggregates.py 39 
super().resolve_expression(query, allow_joins, reuse, summarize) django-master/django/db/models/aggregates.py 40 c.filter.resolve_expression(query, allow_joins, reuse, summarize) django-master/django/db/models/base.py 829 self._do_update(base_qs, using, pk_val, values, update_fields, forced_update) django-master/django/db/models/functions/datetime.py 64 super().resolve_expression(query, allow_joins, reuse, summarize, for_save) django-master/django/db/models/functions/datetime.py 191 super().resolve_expression(query, allow_joins, reuse, summarize, for_save) django-master/django/db/models/sql/compiler.py 652 self.query.join_parent_model(opts, model, start_alias, seen_models) django-master/django/db/models/sql/compiler.py 707 self.find_ordering_name(item, opts, alias, order, already_seen) django-master/django/db/models/sql/query.py 1197 self.resolve_lookup_value(value, can_reuse, allow_joins, simple_col) django-master/django/db/models/sql/query.py 1305 self._add_q( child, used_aliases, branch_negated, current_negated, allow_joins, split_subq) django-master/django/views/decorators/csrf.py 35 super().process_view(request, callback, callback_args, callback_kwargs) / Anders -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike at selik.org Thu Sep 13 22:07:33 2018 From: mike at selik.org (Michael Selik) Date: Thu, 13 Sep 2018 19:07:33 -0700 Subject: [Python-ideas] Improving fn(arg=arg, name=name, wibble=wibble) code In-Reply-To: References: <3FD652E4-F6CD-4CA7-85FE-BC83E21201F1@killingar.net> <4A878DC6-DD74-4D8B-9614-0785BC008C03@killingar.net> Message-ID: On Thu, Sep 13, 2018 at 5:22 PM Anders Hovm?ller wrote: > For example: > django-master/django/http/multipartparser.py 225 > Sorry, I didn't recognize this as a link on first read. I'll provide a link here to the code in context. https://github.com/django/django/blob/e7a0a5c8b21f5ad1a0066bd0dfab84466b474e15/django/http/multipartparser.py#L225 This is a fairly large function that might benefit from being refactored to clarify the code. On the other hand, I don't advocate creating too many helper functions with only one callsite. The one thing that catches my eye is the repeated use of ``exhaust`` in the else clause to several layers of the nested try and if blocks. It's a bit too large for me to make sense of it quickly. My apologies for not offering a holistic refactor. That?s positional because keyword is more painful. > Why would keyword arguments be more painful here? They've already split the call across 4 lines. Why not go a bit further and use keyword args to make it 6 or 7 lines? Maybe they decided it reads fine as it is. Sure. Run this script against django: > https://gist.github.com/boxed/e60e3e19967385dc2c7f0de483723502 > > It will print all function calls that are positional and have > 2 > arguments. Not a single one is good as is, all would be better with keyword > arguments. > I disagree. Please expand your assertion by explaining why an example is not good as-is and would be better with keyword arguments. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mike at selik.org Thu Sep 13 22:12:26 2018 From: mike at selik.org (Michael Selik) Date: Thu, 13 Sep 2018 19:12:26 -0700 Subject: [Python-ideas] Improving fn(arg=arg, name=name, wibble=wibble) code In-Reply-To: <81EE93A7-C0C4-4ADA-9E66-99E4387A4ADE@killingar.net> References: <3FD652E4-F6CD-4CA7-85FE-BC83E21201F1@killingar.net> <4A878DC6-DD74-4D8B-9614-0785BC008C03@killingar.net> <81EE93A7-C0C4-4ADA-9E66-99E4387A4ADE@killingar.net> Message-ID: On Thu, Sep 13, 2018 at 6:50 PM Anders Hovm?ller wrote: > On 14 Sep 2018, at 03:35, Michael Selik wrote: > In that case, you should be able to link to a compelling example. If you > go to the trouble of finding one, I'll take time to try to refactor it. > > > https://github.com/django/django/blob/master/django/db/models/sql/compiler.py#L707 > > Is a pretty typical one. > That call is recursive, so it's unlikely that the author would shift the parameters around without testing the call and changing the argument positions appropriately. The signature uses the parameter "default_order" and the call uses the argument "order". It seems that was a deliberate choice that wouldn't conform to the `f(a=a)` pattern. The call is oddly split across lines 707 and 708, despite nearby lines being much longer. it could easily have been written as a single line. I don't find this a compelling example. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjreedy at udel.edu Thu Sep 13 23:02:14 2018 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 13 Sep 2018 23:02:14 -0400 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> <5B9AE8EB.9040504@canterbury.ac.nz> Message-ID: On 9/13/2018 7:34 PM, Tim Peters wrote: > I already made clear that I'm opposed to changing it.\ To me, this settles the issues. As author, you own the copyright on your work. The CLA allows revision of contributions, but I don't think that contributed poetry should be treated the same as code and docs. The free verse form reminds me more of Hindu-Jain-Buddhist sutras, with a bit of Monty Python tossed in, rather than of Zen writing. I presume that 'Zen' refers more to the method of composition, and the lack of post-production editing, than to the content. If the text were up for grabs, I would want to change some periods to semi-colons and reconsider some of the other lines. The 'beauty' line is one of multiple contrasts, and should be judged in that context, not in isolation. -- Terry Jan Reedy From rymg19 at gmail.com Thu Sep 13 23:32:24 2018 From: rymg19 at gmail.com (Ryan Gonzalez) Date: Thu, 13 Sep 2018 22:32:24 -0500 Subject: [Python-ideas] Deprecation utilities for the warnings module In-Reply-To: <3E800307-2D67-4E8E-B2C3-4C83875F0226@gmail.com> References: <3E800307-2D67-4E8E-B2C3-4C83875F0226@gmail.com> Message-ID: I have to say, this would be amazing! I've basically had to create many of these by hand over time, and I doubt I'm the only person who's wondered how this isn't in the stdlib! On Thu, Sep 13, 2018, 7:18 PM Ilya Kulakov wrote: > (Apologies if it's a duplicate. I originally posted to > python-ideas at googlegroups.com) > > I've been recently working on an internal library written > entirely in Python. The library while being under development > was as actively used by other teams. The product was under > pressure and hasty design decisions were made. 
> Certain interfaces were exposed before scope of the problem > was entirely clear. > > Didn't take long before the code started to beg for refactoring. > But compatibility needed to be preserved. As a result a set of utilities, > which I'd like to share, came to life. > > --- > > The warnings module is a robust foundation. Authors can warn > what should be avoided and users can choose how to act. > However in its current form it still requires utilities > to be useful. E.g. it's non-trivial to properly replace > a deprecated class. > > I'd like to propose an extension for the warnings module > to address this problem. > > The extensions consists of 4 decorators: > - @deprecated > - @obsolete > - @deprecated_arg > - @obsolete_arg > > The @deprecated decorator marks an object to issue a warning > if it's used: > - Callable objects issue a warning upon a call > - Property attributes issue a warning upon an access > - Classes issue a warning upon instantiation and subclassing > (directly or via a metaclass) > > The @obsolete decorator marks an object in the same way > as @deprecated does but forwards usage to the replacement: > - Callable object redirect the call > - Property attribute redirect the access (get / set / del) > - Class is replaced in a way that during both instantiation > and subclassing replacement is used > > In case of classes extra care is taken to preserve validity > of existing isinstance and issubclass checks. > > The @deprecated_arg and @obsolete_arg work with signatures > of callable objects. Upon a call either a warning is issued > or an argument mapping is performed. > > Please take a look at the implementation and a few examples: > https://gist.github.com/Kentzo/53df97c7a54609d3febf5f8eb6b67118 > > Why I think it should be a part of stdlib: > - Library authors are reluctant to add dependencies > especially when it's for internal usage > - Ease of use will reduce compatibility problems > and ease migration burden since the soltion will be > readily available > - IDEs and static analyzers are more likely > to support it > > --- > > Joshua Harlow shared a link to OpenStack's debtcollector: > https://docs.openstack.org/debtcollector/latest/reference/index.html > Certain aspects of my implementation are inspired by it. > > Please let me know what you think about the idea in general > and implementation in particular. If that's something the community > is interested in, I'm willing to work on it. > > > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -- Ryan (????) Yoko Shimomura, ryo (supercell/EGOIST), Hiroyuki Sawano >> everyone else https://refi64.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From tim.peters at gmail.com Thu Sep 13 23:41:40 2018 From: tim.peters at gmail.com (Tim Peters) Date: Thu, 13 Sep 2018 22:41:40 -0500 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> <5B9AE8EB.9040504@canterbury.ac.nz> Message-ID: [Tim] > > > I already made clear that I'm opposed to changing it. > [Terry Reedy ] > To me, this settles the issues. As author, you own the copyright on > your work. The CLA allows revision of contributions, but I don't think > that contributed poetry should be treated the same as code and docs. 
> I don't care about legalities here. If people want to change it into something it never intended to say, so it goes. It wouldn't be the first time A Prophet's words were bastardized to suit political fashion ;-) > The free verse form reminds me more of Hindu-Jain-Buddhist sutras, with > a bit of Monty Python tossed in, rather than of Zen writing. I presume > that 'Zen' refers more to the method of composition, and the lack of > post-production editing, than to the content. > As I noted before, "Zen" wasn't my word. Somebody else dreamed up that to give it "a title". In real life, it was originally buried in a comp.lang.python post talking about what guided Guido's _language_ design decisions. I presume "Zen" came to their mind because it's brief, and a critical reading reveals a number of seeming ambiguities and contradictions, yet it nevertheless _appears_ to say _something_ ;-) It has those aspects in common with any number of (English translations of) Zen koans. > If the text were up for grabs, I would want to change some periods to > semi-colons and reconsider some of the other lines. > While I would not ;-) > The 'beauty' line is one of multiple contrasts, and should be judged in > that context, not in isolation. > FYI, that line came first because I channeled that what it said was truly fundamental to Python's design: Guido's ineffable sense of aesthetics. Language design isn't a purely deductive science, and Guido never pretended it was. Back then, various proposals elicited encouragement or visceral disgust very quickly. Beautiful or ugly? Indeed, the rest of the aphorisms can be viewed as elaborating on aspects of what "beautiful" and "ugly" _mean_ in this context. That "beautiful" and "ugly" are subjective is essential to the point it intended. Any objectively definable terms instead would miss that point entirely. At heart, Python's design emerged from Guido's sense of beauty (and of its opposite in ordinary language: ugliness). -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosuav at gmail.com Thu Sep 13 23:53:50 2018 From: rosuav at gmail.com (Chris Angelico) Date: Fri, 14 Sep 2018 13:53:50 +1000 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> <5B9AE8EB.9040504@canterbury.ac.nz> Message-ID: On Fri, Sep 14, 2018 at 1:41 PM, Tim Peters wrote: > I presume "Zen" came to their mind because it's brief, and a critical > reading reveals a number of seeming ambiguities and contradictions, yet it > nevertheless _appears_ to say _something_ ;-) "Somehow it seems to fill my head with ideas - only I don't exactly know what they are! However, SOMEBODY killed SOMETHING: that's clear, at any rate..." -- Alice Liddell, regarding "Jabberwocky" ChrisA From rainventions at gmail.com Fri Sep 14 00:37:11 2018 From: rainventions at gmail.com (Ryan Birmingham) Date: Fri, 14 Sep 2018 00:37:11 -0400 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> <5B9AE8EB.9040504@canterbury.ac.nz> Message-ID: Discussions like this are always difficult and charged, but I think there's a good opportunity for growth here. I love being involved with the Python community for, among many other reasons, I think Python is quite inclusive, especially as a technical community. 
However, I know that people often feel excluded from technical communities, and I don't think that Python is an exception to that. Referring to code as ugly doesn't bother me, but I could see how it could bother others. I do also see how it's useful shorthand. I think that there's some sort of balancing test that corresponds to making the decision on not just this case, but others like it. I know that this kind of decision is made all the time, however, I don't see a PEP which seems to touch on anything like this. I think we need to document it for the sake of transparency and consistency. Forgetting for a moment the charged context of the conversation itself, does anyone have any opinions on how this would come to be? Thank you for reading and hopefully listening, -Ryan Birmingham On Thu, 13 Sep 2018 at 23:03, Terry Reedy wrote: > On 9/13/2018 7:34 PM, Tim Peters wrote: > > > I already made clear that I'm opposed to changing it.\ > > To me, this settles the issues. As author, you own the copyright on > your work. The CLA allows revision of contributions, but I don't think > that contributed poetry should be treated the same as code and docs. > > The free verse form reminds me more of Hindu-Jain-Buddhist sutras, with > a bit of Monty Python tossed in, rather than of Zen writing. I presume > that 'Zen' refers more to the method of composition, and the lack of > post-production editing, than to the content. > > If the text were up for grabs, I would want to change some periods to > semi-colons and reconsider some of the other lines. > > The 'beauty' line is one of multiple contrasts, and should be judged in > that context, not in isolation. > > -- > Terry Jan Reedy > > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From boxed at killingar.net Fri Sep 14 02:07:00 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Fri, 14 Sep 2018 08:07:00 +0200 Subject: [Python-ideas] Deprecation utilities for the warnings module In-Reply-To: <3E800307-2D67-4E8E-B2C3-4C83875F0226@gmail.com> References: <3E800307-2D67-4E8E-B2C3-4C83875F0226@gmail.com> Message-ID: <7F6B5676-91D9-4FBE-98B8-501BA77D6897@killingar.net> > I'd like to propose an extension for the warnings module > to address this problem. I like all of that. The only issue I have with it is that the warnings module is designed to namespace depredations so you can turn them on per library and this code doesn?t seem to handle that. We really want to avoid libraries using these convenience functions instead of creating their own warning that can be properly filtered. I?m not sure what the solution to this would be. Maybe just accessing .DeprecationWarning dynamically? Seems a bit magical though. / Anders From boxed at killingar.net Fri Sep 14 02:48:42 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Fri, 14 Sep 2018 08:48:42 +0200 Subject: [Python-ideas] Improving fn(arg=arg, name=name, wibble=wibble) code In-Reply-To: References: <3FD652E4-F6CD-4CA7-85FE-BC83E21201F1@killingar.net> <4A878DC6-DD74-4D8B-9614-0785BC008C03@killingar.net> Message-ID: > It's a bit too large for me to make sense of it quickly. My apologies for not offering a holistic refactor. My tool will print plenty of other examples. You can pick anyone really... 
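For readers following along, a rough sketch of what a checker like that can
look like, using only the ast module. This is an illustrative approximation
and not the actual gist linked in the exchange below; the threshold and the
output format are made up.

import ast
import sys


def call_name(func):
    """Best-effort readable name for the callable in a Call node."""
    if isinstance(func, ast.Attribute):
        return func.attr
    if isinstance(func, ast.Name):
        return func.id
    return "<call>"


def positional_calls(source, min_args=3):
    """Yield (lineno, name, count) for calls with min_args+ positional args and no keywords."""
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and not node.keywords:
            positional = [a for a in node.args if not isinstance(a, ast.Starred)]
            if len(positional) >= min_args:
                yield node.lineno, call_name(node.func), len(positional)


if __name__ == "__main__":
    for path in sys.argv[1:]:
        with open(path) as handle:
            for lineno, name, count in positional_calls(handle.read()):
                print("{}:{}: {}() called with {} positional arguments".format(
                    path, lineno, name, count))

Running it over a package directory (for example with a shell glob) lists the
call sites that the discussion below is about.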
> > That?s positional because keyword is more painful. > > Why would keyword arguments be more painful here? They've already split the call across 4 lines. Why not go a bit further and use keyword args to make it 6 or 7 lines? Maybe they decided it reads fine as it is. Yes, exactly. Why not go further? This is exactly my point. > Sure. Run this script against django: https://gist.github.com/boxed/e60e3e19967385dc2c7f0de483723502 > > It will print all function calls that are positional and have > 2 arguments. Not a single one is good as is, all would be better with keyword arguments. > > I disagree. Please expand your assertion by explaining why an example is not good as-is and would be better with keyword arguments. Because keyword arguments are checked to correspond to the parameters. The example I gave was: handler.new_file( field_name, file_name, content_type, content_length, charset, content_type_extra, ) it reads fine precisely because the variables names match with the signature of new_file(). But if new_file() is changed they won't match up anymore and it will still read fine and look ok, but now the parameters don't line up and it's broken in potentially very subtle ways. For example if content_type and file_name switch places. Those are the same time but the consequences for the mixup are fairly large and potentially very annoying to track down. Do you disagree on this point? / Anders -------------- next part -------------- An HTML attachment was scrubbed... URL: From boxed at killingar.net Fri Sep 14 02:58:23 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Fri, 14 Sep 2018 08:58:23 +0200 Subject: [Python-ideas] Improving fn(arg=arg, name=name, wibble=wibble) code In-Reply-To: References: <3FD652E4-F6CD-4CA7-85FE-BC83E21201F1@killingar.net> <4A878DC6-DD74-4D8B-9614-0785BC008C03@killingar.net> <81EE93A7-C0C4-4ADA-9E66-99E4387A4ADE@killingar.net> Message-ID: <02A356CE-B716-47CD-949F-2CEC64D72886@killingar.net> >> In that case, you should be able to link to a compelling example. If you go to the trouble of finding one, I'll take time to try to refactor it. > > https://github.com/django/django/blob/master/django/db/models/sql/compiler.py#L707 > > Is a pretty typical one. > > That call is recursive, so it's unlikely that the author would shift the parameters around without testing the call and changing the argument positions appropriately. Maybe. Still it would be better with keyword arguments. Testing is one thing, quickly finding problems is another. Keyword arguments fail fast and cleanly with signature changes, positional arguments only do when you add or remove parameters at the very end, all other changes are potentially very annoying to debug because you can get a long way past the problem call site before hitting a type or behavior error. I'm not sure we even agree on this basic point though. Do you agree on that? > The signature uses the parameter "default_order" and the call uses the argument "order". It seems that was a deliberate choice that wouldn't conform to the `f(a=a)` pattern. Maybe. It's a bit weirdly named quite frankly. Do you set the default_order by passing that argument? Or do you set the order? The code below sounds like it's order, but the signature sounds like it's default order. It can't be both, can it? 
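A minimal, self-contained sketch of the fail-fast point made a few paragraphs
up. The function and variable names are hypothetical, not Django's actual
code.

def new_file(field_name, file_name, content_type):
    # Stand-in for a callee whose signature may later be refactored.
    return {"field": field_name, "name": file_name, "type": content_type}


field_name, file_name, content_type = "avatar", "me.png", "image/png"

# Positional call: if file_name and content_type ever swap places in the
# signature, this line still runs and feeds values into the wrong slots;
# the breakage only shows up somewhere downstream.
new_file(field_name, file_name, content_type)

# Keyword call: a reorder of parameters keeps working correctly, and a
# rename raises TypeError right at the call site instead of silently
# passing the wrong values along.
new_file(field_name=field_name, file_name=file_name, content_type=content_type)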
I don't know the specifics but it might just be that this code is hard to read precisely because it doesn't conform to the pattern, and if it *did* use the suggested feature the mismatch in names would stand out and I would believe that it was intentional and not a mistake at the call site, because it would look like: results.extend(self.find_ordering_name( =item, =opts, =alias, default_order=order, =already_seen, )) now the odd ball out stands out instead of hiding. Different things should look different, and similar things should look similar (which would have been a good addition to the Zen of Python imo). > The call is oddly split across lines 707 and 708, despite nearby lines being much longer. it could easily have been written as a single line. Agreed. It's the ghost of PEP8, but that's a totally different morass! Let's keep to one extremely controversial topic at a time :) / Anders -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike at selik.org Fri Sep 14 03:04:40 2018 From: mike at selik.org (Michael Selik) Date: Fri, 14 Sep 2018 00:04:40 -0700 Subject: [Python-ideas] Improving fn(arg=arg, name=name, wibble=wibble) code In-Reply-To: References: <3FD652E4-F6CD-4CA7-85FE-BC83E21201F1@killingar.net> <4A878DC6-DD74-4D8B-9614-0785BC008C03@killingar.net> Message-ID: On Thu, Sep 13, 2018, 11:48 PM Anders Hovm?ller wrote: > > It's a bit too large for me to make sense of it quickly. My apologies for > not offering a holistic refactor. > > > My tool will print plenty of other examples. You can pick anyone really... > > > That?s positional because keyword is more painful. >> > > Why would keyword arguments be more painful here? They've already split > the call across 4 lines. Why not go a bit further and use keyword args to > make it 6 or 7 lines? Maybe they decided it reads fine as it is. > > > Yes, exactly. Why not go further? This is exactly my point. > > > Sure. Run this script against django: >> https://gist.github.com/boxed/e60e3e19967385dc2c7f0de483723502 >> > >> It will print all function calls that are positional and have > 2 >> arguments. Not a single one is good as is, all would be better with keyword >> arguments. >> > > I disagree. Please expand your assertion by explaining why an example is > not good as-is and would be better with keyword arguments. > > > Because keyword arguments are checked to correspond to the parameters. The > example I gave was: > > > handler.new_file( > field_name, file_name, content_type, > content_length, charset, content_type_extra, > ) > > > it reads fine precisely because the variables names match with the > signature of new_file(). But if new_file() is changed they won't match up > anymore and it will still read fine and look ok, but now the parameters > don't line up and it's broken in potentially very subtle ways. For example > if content_type and file_name switch places. Those are the same time but > the consequences for the mixup are fairly large and potentially very > annoying to track down. > > Do you disagree on this point? > The mixup you describe would probably cause an immediate error as the filename would not be a valid content type. I suspect it'd be easy to track down. Keyword arguments are especially important when the type and value of two arguments are hard to distinguish, like a height and width. If one is an int and the other is a str, or if it must be a str with only a handful of valid values, I'm not so worried about making mistakes. 
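A tiny illustration of that point; make_thumbnail and its parameters are
hypothetical. When two parameters share a type, a positional swap runs
without complaint, while the keyword spelling makes the mix-up impossible.

def make_thumbnail(height, width):
    # Build a height x width grid of zeros as a stand-in for real work.
    return [[0] * width for _ in range(height)]


height, width = 90, 160

make_thumbnail(width, height)               # wrong, but runs silently
make_thumbnail(height=height, width=width)  # the swap cannot happen here

# Mixing an int and a str, by contrast, usually fails immediately:
# make_thumbnail("90", width) raises TypeError as soon as range() sees it.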
Further, I think it's reasonable to use keyword arguments here and no more "painful" than the use of positional arguments in this case. Both are annoyingly verbose. I'd prefer refactoring the new_file method, but I didn't spot where it's defined. -------------- next part -------------- An HTML attachment was scrubbed... URL: From j.van.dorp at deonet.nl Fri Sep 14 03:13:06 2018 From: j.van.dorp at deonet.nl (Jacco van Dorp) Date: Fri, 14 Sep 2018 09:13:06 +0200 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> <5B9AE8EB.9040504@canterbury.ac.nz> Message-ID: My mom is the only one who ever called me any shade of beautiful. I think we all know what that means. However, if merely the word ugly being on a page can be "harmful", what you really need is professional help, not a change to Python. Because there's obviously been some things in your past you need to work through. > Python and open source have always been more about things like inclusivity and diversity than about excessive political correctness. I'm sure the historical concepts of master/slave were so distant to the handsome young men that developed the computer > science concepts that they didn't expect to cause any naming conflicts. And not the kind of inclusivity you hear about in [current year]. Open source and related cultures never cared about diversity, they always didn't care about who you were or how you looked, and solely judged you by your work or contributions. I don't use python because Guido or whoever is such a great guy and really worked hard, I use Python because Python is a very useful tool for me. Inclusivity for inclusivity's sake is a bad thing and kills communities and companies. Any charged context is not our problem - it's societies' problem. I'll admit it's a bigger problem, but it's one we need to address through elections, demonstrations, or other political means. Not by self-censorship. Even aside, Python is a world-wide language. Not american or even european. If we have to ban "Ugly" for american sensitivities, then perhaps we need to ban a number of others for china's sensitivities. Where will it end ? Nowhere, it'll keep going on forever. That's why i'm +1 for reverting the master/slave removal PR's. > There are also nationalist jokes about Dutchs. That also must be stopped! Well, we just are superior ;P (If I wanted to whine, though...."Dutch" isn't what we call ourselves. Etymological, it's root is the same as our word "Duits" or the German word "Deutz" (or something close), which means...German (Duitsland / Germany). Since we had a war a bit ago where we were occupied, this makes Dutch sound a lot like we'd still be under foreign rule by the nazi's. Terrible, huh ? We call ourself "Nederlanders", which can't really be translated, as the country name, translated as "The Netherlands", is already a multiple, so the normal transformation like america -> american doesn't work very well. Strangly, the Germans don't do this - they call us "Niederlanders", while referring to themselves as "Deutschland".) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From boxed at killingar.net Fri Sep 14 03:17:09 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Fri, 14 Sep 2018 09:17:09 +0200 Subject: [Python-ideas] Improving fn(arg=arg, name=name, wibble=wibble) code In-Reply-To: References: <3FD652E4-F6CD-4CA7-85FE-BC83E21201F1@killingar.net> <4A878DC6-DD74-4D8B-9614-0785BC008C03@killingar.net> Message-ID: > it reads fine precisely because the variables names match with the signature of new_file(). But if new_file() is changed they won't match up anymore and it will still read fine and look ok, but now the parameters don't line up and it's broken in potentially very subtle ways. For example if content_type and file_name switch places. Those are the same time but the consequences for the mixup are fairly large and potentially very annoying to track down. > > Do you disagree on this point? > > The mixup you describe would probably cause an immediate error as the filename would not be a valid content type. I suspect it'd be easy to track down. So no you don't agree? > Keyword arguments are especially important when the type and value of two arguments are hard to distinguish, like a height and width. If one is an int and the other is a str, or if it must be a str with only a handful of valid values, I'm not so worried about making mistakes. > > Further, I think it's reasonable to use keyword arguments [...] ...now I'm confused. Now it sounds like you do agree? > [...] here and no more "painful" than the use of positional arguments in this case. Both are annoyingly verbose. That's a bit of a dodge. There is a huge difference in verbosity between handler.new_file(field_name, file_name, content_type, content_length, charset, content_type_extra) and handler.new_file(field_name=field_name, file_name=file_name, content_type=content_type, content_length=content_length, charset=charset, content_type_extra=content_type_extra) and it's pretty obvious when it's spelled out. Now compare to my two suggested syntaxes: handler.new_file(*, field_name, file_name, content_type, content_length, charset, content_type_extra) handler.new_file(=field_name, =file_name, =content_type, =content_length, =charset, =content_type_extra) > I'd prefer refactoring the new_file method, but I didn't spot where it's defined. People keep saying that, but I don't buy it. Refactoring isn't magic, it won't just make the data required to a function go away. I've been pressed to get hard numbers, I have. I've been pressed to come up with actual examples, I have. Now when I have a point you're just waving your hand and saying "refactoring". I don't think that's an actual argument. / Anders -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike at selik.org Fri Sep 14 03:19:01 2018 From: mike at selik.org (Michael Selik) Date: Fri, 14 Sep 2018 00:19:01 -0700 Subject: [Python-ideas] Improving fn(arg=arg, name=name, wibble=wibble) code In-Reply-To: <02A356CE-B716-47CD-949F-2CEC64D72886@killingar.net> References: <3FD652E4-F6CD-4CA7-85FE-BC83E21201F1@killingar.net> <4A878DC6-DD74-4D8B-9614-0785BC008C03@killingar.net> <81EE93A7-C0C4-4ADA-9E66-99E4387A4ADE@killingar.net> <02A356CE-B716-47CD-949F-2CEC64D72886@killingar.net> Message-ID: On Thu, Sep 13, 2018, 11:58 PM Anders Hovm?ller wrote: > In that case, you should be able to link to a compelling example. If you >> go to the trouble of finding one, I'll take time to try to refactor it. 
>> >> >> https://github.com/django/django/blob/master/django/db/models/sql/compiler.py#L707 >> >> Is a pretty typical one. >> > > That call is recursive, so it's unlikely that the author would shift the > parameters around without testing the call and changing the argument > positions appropriately. > > > Maybe. Still it would be better with keyword arguments. Testing is one > thing, quickly finding problems is another. Keyword arguments fail fast and > cleanly with signature changes, positional arguments only do when you add > or remove parameters at the very end, all other changes are potentially > very annoying to debug because you can get a long way past the problem call > site before hitting a type or behavior error. > > I'm not sure we even agree on this basic point though. Do you agree on > that? > I agree those are benefits to keyword arguments, but I disagree that those benefits accrue in all cases. I do not believe that keyword arguments are strictly better than positional. Maybe our difference of opinion stems from tooling and the way others refactor the code we work on. I enjoy using keywords to explain the values I'm passing. If I already have a well-named variable, I'm less keen on using a keyword. Here's a possible refactor. I didn't bother with keyword arguments, because the variable names are easy to match up with arguments positionally. My screen was large enough that I could read the signature at the same time as I wrote the call. def recurse(name): return self.find_ordering_name(name, opts, alias, order, already_seen) return itertools.chain.from_iterable(map(recurse, opts.ordering)) -------------- next part -------------- An HTML attachment was scrubbed... URL: From j.van.dorp at deonet.nl Fri Sep 14 03:19:22 2018 From: j.van.dorp at deonet.nl (Jacco van Dorp) Date: Fri, 14 Sep 2018 09:19:22 +0200 Subject: [Python-ideas] Deprecation utilities for the warnings module In-Reply-To: <7F6B5676-91D9-4FBE-98B8-501BA77D6897@killingar.net> References: <3E800307-2D67-4E8E-B2C3-4C83875F0226@gmail.com> <7F6B5676-91D9-4FBE-98B8-501BA77D6897@killingar.net> Message-ID: Op vr 14 sep. 2018 om 08:07 schreef Anders Hovm?ller : > > > I'd like to propose an extension for the warnings module > > to address this problem. > > I like all of that. The only issue I have with it is that the warnings > module is designed to namespace depredations so you can turn them on per > library and this code doesn?t seem to handle that. We really want to avoid > libraries using these convenience functions instead of creating their own > warning that can be properly filtered. > I feel there could be solutions. Either module.__getattribute__ (which, IIRC, was implemented recently), or just using a proxy/wrapper class around non-Union[class, function] objects. Those are rare, though, in imported modules. And I don't think this library will see much use without the subjects being imported first - at least I know I do my refactors on a per-file basis. And if you can't, you might want to split your files first. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike at selik.org Fri Sep 14 03:26:13 2018 From: mike at selik.org (Michael Selik) Date: Fri, 14 Sep 2018 00:26:13 -0700 Subject: [Python-ideas] Improving fn(arg=arg, name=name, wibble=wibble) code In-Reply-To: References: <3FD652E4-F6CD-4CA7-85FE-BC83E21201F1@killingar.net> <4A878DC6-DD74-4D8B-9614-0785BC008C03@killingar.net> Message-ID: On Fri, Sep 14, 2018, 12:17 AM Anders Hovm?ller wrote: > > That's a bit of a dodge. 
There is a huge difference in verbosity between > > handler.new_file(field_name, file_name, content_type, content_length, > charset, content_type_extra) > > and > > handler.new_file(field_name=field_name, file_name=file_name, > content_type=content_type, content_length=content_length, charset=charset, > content_type_extra=content_type_extra) > Since neither version fits well on one line or even three, I'd have written each of those on a separate line, indented nicely to emphasize the repetition. Seems fine to me. The real problem with this code section wasn't the positional arguments, but that it was so heavily indented. Whenever I have code so nested, I put in a great deal of effort to refactor so that I can avoid the nesting of many try/except, for, and if/else statements. -------------- next part -------------- An HTML attachment was scrubbed... URL: From boxed at killingar.net Fri Sep 14 03:27:46 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Fri, 14 Sep 2018 09:27:46 +0200 Subject: [Python-ideas] Improving fn(arg=arg, name=name, wibble=wibble) code In-Reply-To: References: <3FD652E4-F6CD-4CA7-85FE-BC83E21201F1@killingar.net> <4A878DC6-DD74-4D8B-9614-0785BC008C03@killingar.net> <81EE93A7-C0C4-4ADA-9E66-99E4387A4ADE@killingar.net> <02A356CE-B716-47CD-949F-2CEC64D72886@killingar.net> Message-ID: <1D77080C-0E93-4101-A9D8-23B14DC93379@killingar.net> > Maybe our difference of opinion stems from tooling and the way others refactor the code we work on. Maybe. Or the code we have to refactor that others have written. Or both. > I enjoy using keywords to explain the values I'm passing. If I already have a well-named variable, I'm less keen on using a keyword. Here lies the crux for me. You're describing telling the next reader what the arguments are. But without keyword arguments you aren't telling the computer what the arguments are, not really. The position and the names become disconnected when they should be connected. It's very similar to how in C you use {} for the computer but indent for the human, and no one checks that they are the same (not true anymore strictly because I believe clang has an option you can turn off to validate this but it's default off). In python you tell the computer and the human the same thing with the same language. This is robust and clear. I think this situation is the same. > > Here's a possible refactor. I didn't bother with keyword arguments, because the variable names are easy to match up with arguments positionally. My screen was large enough that I could read the signature at the same time as I wrote the call. > > def recurse(name): > return self.find_ordering_name(name, opts, alias, order, already_seen) > return itertools.chain.from_iterable(map(recurse, opts.ordering)) I don't see how that changed anything. That just changes it to a functional style, but otherwise it's identical. / Anders From boxed at killingar.net Fri Sep 14 03:30:47 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Fri, 14 Sep 2018 09:30:47 +0200 Subject: [Python-ideas] Improving fn(arg=arg, name=name, wibble=wibble) code In-Reply-To: References: <3FD652E4-F6CD-4CA7-85FE-BC83E21201F1@killingar.net> <4A878DC6-DD74-4D8B-9614-0785BC008C03@killingar.net> Message-ID: <722967B2-42E2-4FCC-ACD2-E16124DB75AE@killingar.net> > On 14 Sep 2018, at 09:26, Michael Selik wrote: > > > > On Fri, Sep 14, 2018, 12:17 AM Anders Hovm?ller > wrote: > > That's a bit of a dodge. 
There is a huge difference in verbosity between > > handler.new_file(field_name, file_name, content_type, content_length, charset, content_type_extra) > > and > > handler.new_file(field_name=field_name, file_name=file_name, content_type=content_type, content_length=content_length, charset=charset, content_type_extra=content_type_extra) > > Since neither version fits well on one line or even three, I'd have written each of those on a separate line, indented nicely to emphasize the repetition. Seems fine to me. Sure. As would I. Doesn't change anything: handler.new_file( field_name=field_name, file_name=file_name, content_type=content_type, content_length=content_length, charset=charset, content_type_extra=content_type_extra, ) > The real problem with this code section wasn't the positional arguments, but that it was so heavily indented. Whenever I have code so nested, I put in a great deal of effort to refactor so that I can avoid the nesting of many try/except, for, and if/else statements. Most code has many problems. Bringing up random other problems instead of sticking to the topic doesn't seem very helpful. Or did you want to change the topic to generally be about how to clean up code? If so I'd like to know so I can stop responding, because that's not what I am trying to discuss. / Anders -------------- next part -------------- An HTML attachment was scrubbed... URL: From szport at gmail.com Fri Sep 14 03:47:39 2018 From: szport at gmail.com (Zaur Shibzukhov) Date: Fri, 14 Sep 2018 00:47:39 -0700 (PDT) Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> <5B9AE8EB.9040504@canterbury.ac.nz> Message-ID: ???????, 14 ???????? 2018 ?., 1:56:58 UTC+3 ???????????? Guido van Rossum ???????: > > Everyone who still wants to reply to this thread: please decide for > yourself whether the OP, "Samantha Quan" who started it could be a Russian > troll. Facts to consider: (a) the OP's address is ... at yandex.com, a > well-known Russian website (similar to Google); (b) there's a Canadian > actress named Samantha Quan. > I completely agree with the fact that this discussion should be stopped without starting it. I just want to note that anyone (with this it does not have to be Russian at all) can create an account on Yandex. I would not want this trolling to be considered in the spirit of forging a negative attitude towards the Russians. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike at selik.org Fri Sep 14 04:08:45 2018 From: mike at selik.org (Michael Selik) Date: Fri, 14 Sep 2018 01:08:45 -0700 Subject: [Python-ideas] Improving fn(arg=arg, name=name, wibble=wibble) code In-Reply-To: <722967B2-42E2-4FCC-ACD2-E16124DB75AE@killingar.net> References: <3FD652E4-F6CD-4CA7-85FE-BC83E21201F1@killingar.net> <4A878DC6-DD74-4D8B-9614-0785BC008C03@killingar.net> <722967B2-42E2-4FCC-ACD2-E16124DB75AE@killingar.net> Message-ID: On Fri, Sep 14, 2018 at 12:30 AM Anders Hovm?ller wrote: > Since neither version fits well on one line or even three, I'd have > written each of those on a separate line, indented nicely to emphasize the > repetition. Seems fine to me. > > Sure. As would I. Doesn't change anything[.] > Our aesthetics are different. Some people like Rothko, others don't. Since that pattern is rare for me, I don't wish for a better way. The real problem with this code section wasn't the positional arguments, > but that it was so heavily indented. 
Whenever I have code so nested, I put > in a great deal of effort to refactor so that I can avoid the nesting of > many try/except, for, and if/else statements. > > Most code has many problems. Bringing up random other problems instead of > sticking to the topic doesn't seem very helpful. Or did you want to change > the topic to generally be about how to clean up code? If so I'd like to > know so I can stop responding, because that's not what I am trying to > discuss. > I don't think it's off-topic. The ugliness of the line you're talking about stems in large part from that it's in a heavily indented complex section. The distinction between positional and keyword args in this case is superficial. On Fri, Sep 14, 2018 at 12:27 AM Anders Hovm?ller wrote: > > I enjoy using keywords to explain the values I'm passing. If I already > have a well-named variable, I'm less keen on using a keyword. > > Here lies the crux for me. You're describing telling the next reader what > the arguments are. But without keyword arguments you aren't telling the > computer what the arguments are, not really. The position and the names > become disconnected when they should be connected. > > It's very similar to how in C you use {} for the computer but indent for > the human, and no one checks that they are the same (not true anymore > strictly because I believe clang has an option you can turn off to validate > this but it's default off). In python you tell the computer and the human > the same thing with the same language. This is robust and clear. I think > this situation is the same. > It may be similar in theme, but in my experience it's different in practice -- how often the problem arises and how difficult it is to detect and fix. Remembering your comments about string interpolation, it sounds like you write a lot of templating code and craft large JSON blobs for REST APIs. I agree that keyword arguments are critical in those situations for avoiding tedious and error-prone positional matching. When I have similar situations I tend to avoid the `f(a=a)` annoyance by creating kwds dicts and passing them around with ** unpacking. > Here's a possible refactor. I didn't bother with keyword arguments, > because the variable names are easy to match up with arguments > positionally. My screen was large enough that I could read the signature at > the same time as I wrote the call. > > > > def recurse(name): > > return self.find_ordering_name(name, opts, alias, order, > already_seen) > > return itertools.chain.from_iterable(map(recurse, opts.ordering)) > > I don't see how that changed anything. That just changes it to a > functional style, but otherwise it's identical. It avoids creating extra lists, which might make it more efficient. It also emphasizes the recursion, which wasn't obvious to me when I first glanced at the code. No, I didn't bother switching to keyword arguments, because I didn't feel it helped. My screen was big enough to verify the order as I wrote the arguments. On Thu, Sep 13, 2018 at 11:35 AM Anders Hovm?ller wrote: > I?ll repeat myself [...] > If you feel like you're repeating yourself, the easiest way to avoid frustration is to simply stop. Don't fall for the sunk cost fallacy, despite the effort you've put into this proposal. Have a good evening. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From greg.ewing at canterbury.ac.nz Fri Sep 14 04:43:16 2018 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 14 Sep 2018 20:43:16 +1200 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> <5B9AE8EB.9040504@canterbury.ac.nz> Message-ID: <5B9B74A4.1030002@canterbury.ac.nz> Tim Delaney wrote: > And I can't think of an elegant replacement for "ugly" to pair with > "elegant". There's "inelegant", but it doesn't have the same punch as "ugly". And I think Tim deliberately chose a very punchy word for that line, to reflect that we care a *lot* about aesthetics in Python. -- Greg From boxed at killingar.net Fri Sep 14 05:04:14 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Fri, 14 Sep 2018 11:04:14 +0200 Subject: [Python-ideas] Improving fn(arg=arg, name=name, wibble=wibble) code In-Reply-To: References: <3FD652E4-F6CD-4CA7-85FE-BC83E21201F1@killingar.net> <4A878DC6-DD74-4D8B-9614-0785BC008C03@killingar.net> <722967B2-42E2-4FCC-ACD2-E16124DB75AE@killingar.net> Message-ID: <5F4006B9-FC91-41FF-976C-CA6A0B9E4609@killingar.net> > I don't think it's off-topic. The ugliness of the line you're talking about stems in large part from that it's in a heavily indented complex section. The distinction between positional and keyword args in this case is superficial. Indenting 4 levels (32 spaces) is a problem, but adding an extra 77 non-space characters is superficial? Is this your position? > Remembering your comments about string interpolation, it sounds like you write a lot of templating code and craft large JSON blobs for REST APIs. I agree that keyword arguments are critical in those situations for avoiding tedious and error-prone positional matching. Well no they are not. You wouldn't have positional arguments because you'd create a dict and pass it. > When I have similar situations I tend to avoid the `f(a=a)` annoyance by creating kwds dicts and passing them around with ** unpacking. Which everyone does. But how would you create that dict? > It avoids creating extra lists, which might make it more efficient. It also emphasizes the recursion, which wasn't obvious to me when I first glanced at the code. Sure. All good improvements. No argument from me. > No, I didn't bother switching to keyword arguments, because I didn't feel it helped. My screen was big enough to verify the order as I wrote the arguments. This makes me very frustrated. You could rely on the machine to verify the arguments, but you spend extra time to avoid it. You're explicitly avoiding to send a machine to do a machines job. Why? I'm arguing you're doing it *because it produces shorter code*. This is exactly my point. We should send a machine to do a machines job, and if the language pushes people to send a man to do a machines job then this is bad. > If you feel like you're repeating yourself, the easiest way to avoid frustration is to simply stop. Don't fall for the sunk cost fallacy, despite the effort you've put into this proposal. Well ok, you should have started with that. 
I'll just send what I've written so far :)

/ Anders

From hpolak at polak.es  Fri Sep 14 05:02:44 2018
From: hpolak at polak.es (Hans Polak)
Date: Fri, 14 Sep 2018 11:02:44 +0200
Subject: [Python-ideas] Combine f-strings with i18n
Message-ID: 

I have recently updated my code to use the more pythonic f-string instead
of '{}'.format()

Now, I want to start on the road to multilingual internationalization, and
I run into two problems.

The first problem is that f-strings do not combine with i18n. I have to
revert to the '{}'.format() style.

The second problem is that I need to translate strings on the fly. I
propose to add an f''.language() method to the f-string format.

Example:

user = 'Pedro'

f'Hi {user}' would be translated to 'Hola Pedro' if the locale were set to
Spanish.

f'Hi {user}'.language('es_ES') would be translated in the same way.

To extract translatable strings from the source code, the source code could
contain a 'HAS_LOCALES' flag (or something similar) at the top of the code.
This way, the pygettext.py program would know that translatable f-strings
are within the code.

Rationale:
More pythonic. At this moment, _('').format() is the way to go, so I would
need to wrap another call around that: T(_(''), args, 'es_ES') <=== This is
an ugly hack.

import gettext

# Set the _() function to return the same string
_ = lambda s: s
es = gettext.translation('myapplication', languages=['es_ES'])

def T(translatable_string, args_dictionary=None, language=None):
    if 'es_ES' == language:
        # Return translated, formatted string
        return es.gettext(translatable_string).format(**(args_dictionary or {}))
    # Default, return formatted string
    return translatable_string.format(**(args_dictionary or {}))

From rosuav at gmail.com  Fri Sep 14 05:33:24 2018
From: rosuav at gmail.com (Chris Angelico)
Date: Fri, 14 Sep 2018 19:33:24 +1000
Subject: [Python-ideas] Combine f-strings with i18n
In-Reply-To: 
References: 
Message-ID: 

On Fri, Sep 14, 2018 at 7:02 PM, Hans Polak wrote:
> I have recently updated my code to use the more pythonic f-string instead of
> '{}'.format()

Well there's your problem right there. Don't change your string
formatting choice on that basis. F-strings aren't "more Pythonic" than
either .format() or percent-formatting; all three of them are
supported for good reasons.

For i18n, I think .format() is probably your best bet. Trying to mess
with f-strings to give them methods is a path of great hairiness, as
they are not actually objects (they're expressions).

ChrisA

From solipsis at pitrou.net  Fri Sep 14 05:35:46 2018
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Fri, 14 Sep 2018 11:35:46 +0200
Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause
References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net>
 <5B9AE8EB.9040504@canterbury.ac.nz>
Message-ID: <20180914113546.7408afa7@fsol>

On Fri, 14 Sep 2018 10:47:07 +1200
Greg Ewing wrote:
> M.-A. Lemburg wrote:
> > For
> > me, it refers to a general feeling of consistency, pureness and
> > standing out on its own. It's abstract and doesn't have
> > anything to do with humans.
>
> Yep. And the proposed replacement "clean/dirty" doesn't even
> mean the same thing. It's entirely possible for a thing to
> be spotlessly clean without being beautiful or elegant.

Well, not to mention that if you care about discrimination of people
(assuming one doesn't understand what polysemy is :-)), then I'm not
sure that clean/dirty is much better than beautiful/ugly (see e.g.
Norbert Elias "The Civilizing Process" about how cleanliness norms historically developed - at least in the Western world - in the upper classes of pacified European kingdoms), while elegant/inelegant may even be worse. Regards Antoine. From phd at phdru.name Fri Sep 14 05:36:38 2018 From: phd at phdru.name (Oleg Broytman) Date: Fri, 14 Sep 2018 11:36:38 +0200 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> <5B9AE8EB.9040504@canterbury.ac.nz> Message-ID: <20180914093638.5clw27oighlwjkow@phdru.name> On Fri, Sep 14, 2018 at 12:47:39AM -0700, Zaur Shibzukhov wrote: > I completely agree with the fact that this discussion should be stopped The discussion should be stopped before those 3 pull requests. Now they should be reverted. Or more discussion will be sparked and more PRs created. Oleg. -- Oleg Broytman https://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. From hpolak at polak.es Fri Sep 14 05:39:42 2018 From: hpolak at polak.es (Hans Polak) Date: Fri, 14 Sep 2018 11:39:42 +0200 Subject: [Python-ideas] Combine f-strings with i18n In-Reply-To: References: Message-ID: <130b4ed3-0730-ee39-78fe-5066e8f3fc06@polak.es> f-strings & pythonic ?- Simple is better than complex. ?- Flat is better than nested. (I think this applies) ?- Readability counts. ?- If the implementation is easy to explain, it may be a good idea. Just saying. Cheers, Hans On 14/09/18 11:33, Chris Angelico wrote: > On Fri, Sep 14, 2018 at 7:02 PM, Hans Polak wrote: >> I have recently updated my code to use the more pythonic f-string instead of >> '{}'.format() > Well there's your problem right there. Don't change your string > formatting choice on that basis. F-strings aren't "more Pythonic" than > either .format() or percent-formatting; all three of them are > supported for good reasons. > > For i18n, I think .format() is probably your best bet. Trying to mess > with f-strings to give them methods is a path of great hairiness, as > they are not actually objects (they're expressions). > > ChrisA > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ From greg.ewing at canterbury.ac.nz Fri Sep 14 04:43:20 2018 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 14 Sep 2018 20:43:20 +1200 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> <5B9AE8EB.9040504@canterbury.ac.nz> Message-ID: <5B9B74A8.1040204@canterbury.ac.nz> Guido van Rossum wrote: > Facts to consider: (a) the OP's address is ... at yandex.com > , a well-known Russian website (similar to Google); > (b) there's a Canadian actress named Samantha Quan. Now I'm waiting for the Kremlin to deny rumours that the Canadian actress Samantha Quan is a russian spy... 
-- Greg From brett at python.org Fri Sep 14 12:23:55 2018 From: brett at python.org (Brett Cannon) Date: Fri, 14 Sep 2018 09:23:55 -0700 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> <20180913103243.oegwnauiscf2ryfg@phdru.name> <0DC51CD0-46DD-400C-AD96-77B496215086@killingar.net> Message-ID: On Thu, 13 Sep 2018 at 13:10 Koos Zevenhoven wrote: > > If you can't tell inclusivity/diversity from political correctness, or > dirty words from dirty bytes or from unfriendliness and intolerance, you'd > better go fuck yourself. > That language and tone is entirely uncalled for and you have been participating here long enough to know that it isn't. Due to the severity of the language and the fact that I have received previous reports of negative interactions I am implementing a 2 month ban for you, Koos. After two months you can submit a request to have your posting abilities restored. -------------- next part -------------- An HTML attachment was scrubbed... URL: From keats.kelleher at gmail.com Fri Sep 14 12:56:17 2018 From: keats.kelleher at gmail.com (Keats Kelleher) Date: Fri, 14 Sep 2018 12:56:17 -0400 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> <20180913103243.oegwnauiscf2ryfg@phdru.name> <0DC51CD0-46DD-400C-AD96-77B496215086@killingar.net> Message-ID: I don't think additional replies on this thread are really constructive. If you aren't contributing any new thoughts on the original message consider not replying at all. On Fri, Sep 14, 2018 at 12:24 PM Brett Cannon wrote: > > > On Thu, 13 Sep 2018 at 13:10 Koos Zevenhoven wrote: > >> >> If you can't tell inclusivity/diversity from political correctness, or >> dirty words from dirty bytes or from unfriendliness and intolerance, you'd >> better go fuck yourself. >> > > That language and tone is entirely uncalled for and you have been > participating here long enough to know that it isn't. > > Due to the severity of the language and the fact that I have received > previous reports of negative interactions I am implementing a 2 month ban > for you, Koos. After two months you can submit a request to have your > posting abilities restored. > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -- Andrew K Kelleher Brooklyn, NY -------------- next part -------------- An HTML attachment was scrubbed... URL: From kulakov.ilya at gmail.com Fri Sep 14 15:24:17 2018 From: kulakov.ilya at gmail.com (Ilya Kulakov) Date: Fri, 14 Sep 2018 12:24:17 -0700 Subject: [Python-ideas] Deprecation utilities for the warnings module In-Reply-To: <7F6B5676-91D9-4FBE-98B8-501BA77D6897@killingar.net> References: <3E800307-2D67-4E8E-B2C3-4C83875F0226@gmail.com> <7F6B5676-91D9-4FBE-98B8-501BA77D6897@killingar.net> Message-ID: Hi Anders, If correctly understood your concern, it's about usage of stdlib's *Warning classes directly that makes all warnings coming from different libraries indistinguishable. I think that's not the case, since warnings.filterwarnings allows to specify custom filter using a regular expression to match module names. Therefore it's not redundant to subclass *Warning for namespacing alone. 
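For concreteness, a minimal sketch of the kind of decorator proposed at the
top of this thread combined with the module-based filtering described above.
This is illustrative only: the decorator is not the API from the linked
gist, and legacy_pkg is a made-up package name.

import functools
import warnings


def deprecated(replacement=""):
    """Warn on every call to the decorated function, keeping its behaviour."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            message = f"{func.__qualname__} is deprecated"
            if replacement:
                message += f"; use {replacement} instead"
            # stacklevel=2 attributes the warning to the caller, not the wrapper.
            warnings.warn(message, DeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)
        return wrapper
    return decorator


@deprecated(replacement="new_parse")
def old_parse(text):
    return text.split()


# The module argument is a regex matched against the name of the module the
# warning is attributed to, so per-module filtering is possible without
# defining a custom Warning subclass.
warnings.filterwarnings("ignore", category=DeprecationWarning,
                        module=r"legacy_pkg\..*")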
> On Sep 13, 2018, at 11:07 PM, Anders Hovm?ller wrote: > > >> I'd like to propose an extension for the warnings module >> to address this problem. > > I like all of that. The only issue I have with it is that the warnings module is designed to namespace depredations so you can turn them on per library and this code doesn?t seem to handle that. We really want to avoid libraries using these convenience functions instead of creating their own warning that can be properly filtered. > > I?m not sure what the solution to this would be. Maybe just accessing .DeprecationWarning dynamically? Seems a bit magical though. > > / Anders From sorcio at gmail.com Fri Sep 14 16:09:06 2018 From: sorcio at gmail.com (Davide Rizzo) Date: Fri, 14 Sep 2018 22:09:06 +0200 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause Message-ID: Regardless of whether the original posting is a trolling attempt or not, the argument does have value that I want to corroborate with an anecdote. At one Python conference in Italy, participants were given an elegantly designed and well-crafted t-shirt with a writing in large characters that read "Beautiful is better than ugly" in reference to the Zen of Python. Back home, a close person who is not a Python programmer nor familiar with PEP 20 saw the t-shirt. They were horrified by the out-of-context sentence for reasons similar to what has been already stated in support of this argument. It prompted them of lookism and judgmentality, and found the message to be disturbing in its suggestion to compare by some standard of beauty and to discriminate. Let me add some context: this person is socially and politically active (maybe what someone would call a "SJW"; definitely not what anyone would call "politically correct"), and is specially sensitive to issues of discrimination and sexism. This was enough, though, for me to wonder what kind of message I would be projecting by wearing that writing on me. I've been since then discouraged to ever wear the t-shirt in any public context. This story might have limited value because it's one anecdote, and because the central point is the impact of the clause when taken out of its original context. I don't want to construct this as an argument in favor of removal of the clause, but I want to mention this as evidence that it does carry emotionally (negatively) charged content. If this content can be avoided without compromising the purpose and function of the message, than by all means I would welcome and support the change. It's meaningful, as a community, to show willingness to respond to discomfort of any kind. In this case, I even see the potential to convey the original message in a more powerful way than the current formulation does. I'm not a good candidate for this, as the chosen language for this community is English, which is not my native language nor a language I feel very good at. I appreciate the poetic style of the original, and I think that Tim Peters has done an outstanding job at capturing these ideas into easy and humor-rich language. The opportunity would be to express the idea of aesthetic appeal of code in some way beyond the simplistic judgmental labelling of "beautiful" vs "ugly". To be fair, in my experience this has been a source of confusion to many Python newcomers, as the notion of "beauty", as with any other value judgment, is highly relative to the subject evaluating it. 
I've seen people think of the Python community as conceited because they would think they possess some absolute metric of beauty. One way out of the impasse is to draw upon the feeling behind the adjective. We call "beautiful" something that appeals to us, makes us comfortable, or inspires us awe. Ugly is something that makes us uncomfortable, repels us, disconcerts us. "Let awe and disconcert drive you"? "Attraction and repulsion are important"? "If it disturbs you, it's probably wrong"? I know these are terrible and will all fail the spirit and the style of the original, but I'm dropping suggestions with the hope to stimulate some constructive thought on the matter. I'm fine with PEP 20 being unchanged; and my goal is not to find a replacement or urge for a change, but rather to be willing to think about it. Cheers, Davide From rosuav at gmail.com Fri Sep 14 17:29:08 2018 From: rosuav at gmail.com (Chris Angelico) Date: Sat, 15 Sep 2018 07:29:08 +1000 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: Message-ID: On Sat, Sep 15, 2018 at 6:09 AM, Davide Rizzo wrote: > One way out of the impasse is to draw upon the feeling behind the > adjective. We call "beautiful" something that appeals to us, makes us > comfortable, or inspires us awe. Ugly is something that makes us > uncomfortable, repels us, disconcerts us. "Let awe and disconcert > drive you"? "Attraction and repulsion are important"? Oooh. This is a good one! Let's start using more electromagnets in our source code. ChrisA From boxed at killingar.net Sat Sep 15 01:55:24 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Sat, 15 Sep 2018 07:55:24 +0200 Subject: [Python-ideas] Deprecation utilities for the warnings module In-Reply-To: References: <3E800307-2D67-4E8E-B2C3-4C83875F0226@gmail.com> <7F6B5676-91D9-4FBE-98B8-501BA77D6897@killingar.net> Message-ID: <0E10AE5C-E650-42D8-9BCA-B42D9F5902B7@killingar.net> > If correctly understood your concern, it's about usage of stdlib's *Warning classes directly > that makes all warnings coming from different libraries indistinguishable. That was my concern yes. > I think that's not the case, since warnings.filterwarnings allows > to specify custom filter using a regular expression to match module names. And what does that match against? The module name of the exception type right? > Therefore it's not redundant to subclass *Warning for namespacing alone. Not redundant? You mean you must subclass? In that case my concern stands. / Anders From marko.ristin at gmail.com Sat Sep 15 02:51:59 2018 From: marko.ristin at gmail.com (Marko Ristin-Kaufmann) Date: Sat, 15 Sep 2018 08:51:59 +0200 Subject: [Python-ideas] Pre-conditions and post-conditions In-Reply-To: References: <140891b8-3aef-0991-9421-7479e6a63eb6@gmail.com> Message-ID: Hi, Let me make a couple of practical examples from the work-in-progress ( https://github.com/Parquery/pypackagery, branch mristin/initial-version) to illustrate again the usefulness of the contracts and why they are, in my opinion, superior to assertions and unit tests. What follows is a list of function signatures decorated with contracts from pypackagery library preceded by a human-readable description of the contracts. The invariants tell us what format to expect from the related string properties. 
@icontract.inv(lambda self: self.name.strip() == self.name) @icontract.inv(lambda self: self.line.endswith("\n")) class Requirement: """Represent a requirement in requirements.txt.""" def __init__(self, name: str, line: str) -> None: """ Initialize. :param name: package name :param line: line in the requirements.txt file """ ... The postcondition tells us that the resulting map keys the values on their name property. @icontract.post(lambda result: all(val.name == key for key, val in result.items())) def parse_requirements(text: str, filename: str = '') -> Mapping[str, Requirement]: """ Parse requirements file and return package name -> package requirement as in requirements.txt :param text: content of the ``requirements.txt`` :param filename: where we got the ``requirements.txt`` from (URL or path) :return: name of the requirement (*i.e.* pip package) -> parsed requirement """ ... The postcondition ensures that the resulting list contains only unique elements. Mind that if you returned a set, the order would have been lost. @icontract.post(lambda result: len(result) == len(set(result)), enabled=icontract.SLOW) def missing_requirements(module_to_requirement: Mapping[str, str], requirements: Mapping[str, Requirement]) -> List[str]: """ List requirements from module_to_requirement missing in the ``requirements``. :param module_to_requirement: parsed ``module_to_requiremnt.tsv`` :param requirements: parsed ``requirements.txt`` :return: list of requirement names """ ... Here is a bit more complex example. - The precondition A requires that all the supplied relative paths (rel_paths) are indeed relative (as opposed to absolute). - The postcondition B ensures that the initial set of paths (given in rel_paths) is included in the results. - The postcondition C ensures that the requirements in the results are the subset of the given requirements. - The precondition D requires that there are no missing requirements (*i.e. *that each requirement in the given module_to_requirement is also defined in the given requirements). @icontract.pre(lambda rel_paths: all(rel_pth.root == "" for rel_pth in rel_paths)) # A @icontract.post( lambda rel_paths, result: all(pth in result.rel_paths for pth in rel_paths), enabled=icontract.SLOW, description="Initial relative paths included") # B @icontract.post( lambda requirements, result: all(req.name in requirements for req in result.requirements), enabled=icontract.SLOW) # C @icontract.pre( lambda requirements, module_to_requirement: missing_requirements(module_to_requirement, requirements) == [], enabled=icontract.SLOW) # D def collect_dependency_graph(root_dir: pathlib.Path, rel_paths: List[pathlib.Path], requirements: Mapping[str, Requirement], module_to_requirement: Mapping[str, str]) -> Package: """ Collect the dependency graph of the initial set of python files from the code base. :param root_dir: root directory of the codebase such as "/home/marko/workspace/pqry/production/src/py" :param rel_paths: initial set of python files that we want to package. These paths are relative to root_dir. :param requirements: requirements of the whole code base, mapped by package name :param module_to_requirement: module to requirement correspondence of the whole code base :return: resolved depedendency graph including the given initial relative paths, """ I hope these examples convince you (at least a little bit :-)) that contracts are easier and clearer to write than asserts. 
As noted before in this thread, you can have the same *behavior* with asserts as long as you don't need to inherit the contracts. But the contract decorators make it very explicit what conditions should hold *without* having to look into the implementation. Moreover, it is very hard to ensure the postconditions with asserts as soon as you have a complex control flow since you would need to duplicate the assert at every return statement. (You could implement a context manager that ensures the postconditions, but a context manager is not more readable than decorators and you have to duplicate them as documentation in the docstring). In my view, contracts are also superior to many kinds of tests. As the contracts are *always* enforced, they also enforce the correctness throughout the program execution whereas the unit tests and doctests only cover a list of selected cases. Furthermore, writing the contracts in these examples as doctests or unit tests would escape the attention of most less experienced programmers which are not used to read unit tests as documentation. Finally, these unit tests would be much harder to read than the decorators (*e.g.*, the unit test would supply invalid arguments and then check for ValueError which is already a much more convoluted piece of code than the preconditions and postconditions as decorators. Such testing code also lives in a file separate from the original implementation making it much harder to locate and maintain). Mind that the contracts *do not* *replace* the unit tests or the doctests. The contracts make merely tests obsolete that test that the function or class actually observes the contracts. Design-by-contract helps you skip those tests and focus on the more complex ones that test the behavior. Another positive effect of the contracts is that they make your tests deeper: if you specified the contracts throughout the code base, a test of a function that calls other functions in its implementation will also make sure that all the contracts of that other functions hold. This can be difficult to implement with standard unit test frameworks. Another aspect of the design-by-contract, which is IMO ignored quite often, is the educational one. Contracts force the programmer to actually sit down and think *formally* about the inputs and the outputs (hopefully?) *before* she starts to implement a function. Since many schools use Python to teach programming (especially at high school level), I imagine writing contracts of a function to be a very good exercise in formal thinking for the students. Please let me know what points *do not *convince you that Python needs contracts (in whatever form -- be it as a standard library, be it as a language construct, be it as a widely adopted and collectively maintained third-party library). I would be very glad to address these points in my next message(s). Cheers, Marko -------------- next part -------------- An HTML attachment was scrubbed... URL: From mertz at gnosis.cx Sat Sep 15 04:29:11 2018 From: mertz at gnosis.cx (David Mertz) Date: Sat, 15 Sep 2018 04:29:11 -0400 Subject: [Python-ideas] Pre-conditions and post-conditions In-Reply-To: References: <140891b8-3aef-0991-9421-7479e6a63eb6@gmail.com> Message-ID: I'm afraid that in reading the examples provided it is difficulties for me not simply to think that EVERY SINGLE ONE of them would be FAR easier to read if it were an `assert` instead. The API of the library is a bit noisy, but I think the obstacle it's more in the higher level design for me. 
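For reference, a minimal side-by-side sketch of the two styles being
compared. It uses the icontract decorators exactly as they appear in Marko's
examples (icontract is the third-party library under discussion, installable
with pip); the function itself is a toy.

import icontract  # pip install icontract


# Contract style: the predicate sits next to the signature and is checked
# on every call.
@icontract.post(lambda result: all(x >= 0 for x in result))
def absolutes(values):
    return [abs(v) for v in values]


# Assert style: the same predicate lives in the body and has to be restated
# before every return statement.
def absolutes_with_assert(values):
    result = [abs(v) for v in values]
    assert all(x >= 0 for x in result)
    return result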
Adding many layers of expensive runtime checks and many lines of code in order to assure simple predicates that a glance at the code or unit tests would do better seems wasteful. I just cannot imagine wanting to write or work on the kind of codebase that is down here. If some people or organizations want to come in this manner, sure a library is great. But I definitely don't want it in the syntax, nor even in the standard library. On Sat, Sep 15, 2018, 2:53 AM Marko Ristin-Kaufmann wrote: > Hi, > Let me make a couple of practical examples from the work-in-progress ( > https://github.com/Parquery/pypackagery, branch mristin/initial-version) > to illustrate again the usefulness of the contracts and why they are, in my > opinion, superior to assertions and unit tests. > > What follows is a list of function signatures decorated with contracts > from pypackagery library preceded by a human-readable description of the > contracts. > > The invariants tell us what format to expect from the related string > properties. > > @icontract.inv(lambda self: self.name.strip() == self.name) > @icontract.inv(lambda self: self.line.endswith("\n")) > class Requirement: > """Represent a requirement in requirements.txt.""" > > def __init__(self, name: str, line: str) -> None: > """ > Initialize. > > :param name: package name > :param line: line in the requirements.txt file > """ > ... > > The postcondition tells us that the resulting map keys the values on their > name property. > > @icontract.post(lambda result: all(val.name == key for key, val in result.items())) > def parse_requirements(text: str, filename: str = '') -> Mapping[str, Requirement]: > """ > Parse requirements file and return package name -> package requirement as in requirements.txt > > :param text: content of the ``requirements.txt`` > :param filename: where we got the ``requirements.txt`` from (URL or path) > :return: name of the requirement (*i.e.* pip package) -> parsed requirement > """ > ... > > > The postcondition ensures that the resulting list contains only unique > elements. Mind that if you returned a set, the order would have been lost. > > @icontract.post(lambda result: len(result) == len(set(result)), enabled=icontract.SLOW) > def missing_requirements(module_to_requirement: Mapping[str, str], > requirements: Mapping[str, Requirement]) -> List[str]: > """ > List requirements from module_to_requirement missing in the ``requirements``. > > :param module_to_requirement: parsed ``module_to_requiremnt.tsv`` > :param requirements: parsed ``requirements.txt`` > :return: list of requirement names > """ > ... > > Here is a bit more complex example. > - The precondition A requires that all the supplied relative paths > (rel_paths) are indeed relative (as opposed to absolute). > - The postcondition B ensures that the initial set of paths (given in > rel_paths) is included in the results. > - The postcondition C ensures that the requirements in the results are the > subset of the given requirements. > - The precondition D requires that there are no missing requirements (*i.e. > *that each requirement in the given module_to_requirement is also defined > in the given requirements). 
> > @icontract.pre(lambda rel_paths: all(rel_pth.root == "" for rel_pth in rel_paths)) # A > @icontract.post( > lambda rel_paths, result: all(pth in result.rel_paths for pth in rel_paths), > enabled=icontract.SLOW, > description="Initial relative paths included") # B > @icontract.post( > lambda requirements, result: all(req.name in requirements for req in result.requirements), > enabled=icontract.SLOW) # C > @icontract.pre( > lambda requirements, module_to_requirement: missing_requirements(module_to_requirement, requirements) == [], > enabled=icontract.SLOW) # D > def collect_dependency_graph(root_dir: pathlib.Path, rel_paths: List[pathlib.Path], > requirements: Mapping[str, Requirement], > module_to_requirement: Mapping[str, str]) -> Package: > > """ > Collect the dependency graph of the initial set of python files from the code base. > > :param root_dir: root directory of the codebase such as "/home/marko/workspace/pqry/production/src/py" > :param rel_paths: initial set of python files that we want to package. These paths are relative to root_dir. > :param requirements: requirements of the whole code base, mapped by package name > :param module_to_requirement: module to requirement correspondence of the whole code base > :return: resolved depedendency graph including the given initial relative paths, > """ > > I hope these examples convince you (at least a little bit :-)) that > contracts are easier and clearer to write than asserts. As noted before in > this thread, you can have the same *behavior* with asserts as long as you > don't need to inherit the contracts. But the contract decorators make it > very explicit what conditions should hold *without* having to look into > the implementation. Moreover, it is very hard to ensure the postconditions > with asserts as soon as you have a complex control flow since you would > need to duplicate the assert at every return statement. (You could > implement a context manager that ensures the postconditions, but a context > manager is not more readable than decorators and you have to duplicate them > as documentation in the docstring). > > In my view, contracts are also superior to many kinds of tests. As the > contracts are *always* enforced, they also enforce the correctness > throughout the program execution whereas the unit tests and doctests only > cover a list of selected cases. Furthermore, writing the contracts in these > examples as doctests or unit tests would escape the attention of most less > experienced programmers which are not used to read unit tests as > documentation. Finally, these unit tests would be much harder to read than > the decorators (*e.g.*, the unit test would supply invalid arguments and > then check for ValueError which is already a much more convoluted piece of > code than the preconditions and postconditions as decorators. Such testing > code also lives in a file separate from the original implementation making > it much harder to locate and maintain). > > Mind that the contracts *do not* *replace* the unit tests or the > doctests. The contracts make merely tests obsolete that test that the > function or class actually observes the contracts. Design-by-contract helps > you skip those tests and focus on the more complex ones that test the > behavior. 
Another positive effect of the contracts is that they make your > tests deeper: if you specified the contracts throughout the code base, a > test of a function that calls other functions in its implementation will > also make sure that all the contracts of that other functions hold. This > can be difficult to implement with standard unit test frameworks. > > Another aspect of the design-by-contract, which is IMO ignored quite > often, is the educational one. Contracts force the programmer to actually > sit down and think *formally* about the inputs and the outputs > (hopefully?) *before* she starts to implement a function. Since many > schools use Python to teach programming (especially at high school level), > I imagine writing contracts of a function to be a very good exercise in > formal thinking for the students. > > Please let me know what points *do not *convince you that Python needs > contracts (in whatever form -- be it as a standard library, be it as a > language construct, be it as a widely adopted and collectively maintained > third-party library). I would be very glad to address these points in my > next message(s). > > Cheers, > Marko > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ >
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From michael.lee.0x2a at gmail.com Sat Sep 15 04:42:00 2018
From: michael.lee.0x2a at gmail.com (Michael Lee)
Date: Sat, 15 Sep 2018 01:42:00 -0700
Subject: [Python-ideas] Pre-conditions and post-conditions
In-Reply-To: 
References: <140891b8-3aef-0991-9421-7479e6a63eb6@gmail.com>
Message-ID: 

I just want to point out that you don't need permission from anybody to start a library. I think developing and popularizing a contracts library is a reasonable goal -- but that's something you can start doing at any time without waiting for consensus. And if it gets popular enough, maybe it'll be added to the standard library in some form. That's what happened with attrs, iirc -- it got fairly popular and demonstrated there was an unfilled niche, and so Python acquired dataclasses.. The contracts make merely tests obsolete that test that the function or > class actually observes the contracts. > Is this actually the case? Your contracts are only checked when the function is evaluated, so you'd still need to write that unit test that confirms the function actually observes the contract. I don't think you necessarily get to reduce the number of tests you'd need to write. Please let me know what points *do not *convince you that Python needs > contracts > While I agree that contracts are a useful tool, I don't think they're going to be necessarily useful for *all* Python programmers. For example, contracts aren't particularly useful if you're writing fairly straightforward code with relatively simple invariants. I'm also not convinced that libraries where contracts are checked specifically *at runtime* actually give you that much added power and impact. For example, you still need to write a decent number of unit tests to make sure your contracts are being upheld (unless you plan on checking this by just deploying your code and letting it run, which seems suboptimal).
There's also no guarantee that your contracts will necessarily be *accurate*. It's entirely possible that your preconditions/postconditions might hold for every test case you can think of, but end up failing when running in production due to some edge case that you missed. (And if you decide to disable those pre/post conditions to avoid the efficiency hit, you're back to square zero.) Or I guess to put it another way -- it seems what all of these contract libraries are doing is basically adding syntax to try and make adding asserts in various places more ergonomic, and not much else. I agree those kinds of libraries can be useful, but I don't think they're necessarily useful enough to be part of the standard library or to be a technique Python programmers should automatically use by default. What might be interesting is somebody wrote a library that does something more then just adding asserts. For example, one idea might be to try hooking up a contracts library to hypothesis (or any other library that does quickcheck-style testing). That might be a good way of partially addressing the problems up above -- you write out your invariants, and a testing library extracts that information and uses it to automatically synthesize interesting test cases. (And of course, what would be very cool is if the contracts could be verified statically like you can do in languages like dafny -- that way, you genuinely would be able to avoid writing many kinds of tests and could have confidence your contracts are upheld. But I understanding implementing such verifiers are extremely challenging and would probably have too-steep of a learning curve to be usable by most people anyways.) -- Michael On Fri, Sep 14, 2018 at 11:51 PM, Marko Ristin-Kaufmann < marko.ristin at gmail.com> wrote: > Hi, > Let me make a couple of practical examples from the work-in-progress ( > https://github.com/Parquery/pypackagery, branch mristin/initial-version) > to illustrate again the usefulness of the contracts and why they are, in my > opinion, superior to assertions and unit tests. > > What follows is a list of function signatures decorated with contracts > from pypackagery library preceded by a human-readable description of the > contracts. > > The invariants tell us what format to expect from the related string > properties. > > @icontract.inv(lambda self: self.name.strip() == self.name) > @icontract.inv(lambda self: self.line.endswith("\n")) > class Requirement: > """Represent a requirement in requirements.txt.""" > > def __init__(self, name: str, line: str) -> None: > """ > Initialize. > > :param name: package name > :param line: line in the requirements.txt file > """ > ... > > The postcondition tells us that the resulting map keys the values on their > name property. > > @icontract.post(lambda result: all(val.name == key for key, val in result.items())) > def parse_requirements(text: str, filename: str = '') -> Mapping[str, Requirement]: > """ > Parse requirements file and return package name -> package requirement as in requirements.txt > > :param text: content of the ``requirements.txt`` > :param filename: where we got the ``requirements.txt`` from (URL or path) > :return: name of the requirement (*i.e.* pip package) -> parsed requirement > """ > ... > > > The postcondition ensures that the resulting list contains only unique > elements. Mind that if you returned a set, the order would have been lost. 
> > @icontract.post(lambda result: len(result) == len(set(result)), enabled=icontract.SLOW) > def missing_requirements(module_to_requirement: Mapping[str, str], > requirements: Mapping[str, Requirement]) -> List[str]: > """ > List requirements from module_to_requirement missing in the ``requirements``. > > :param module_to_requirement: parsed ``module_to_requiremnt.tsv`` > :param requirements: parsed ``requirements.txt`` > :return: list of requirement names > """ > ... > > Here is a bit more complex example. > - The precondition A requires that all the supplied relative paths > (rel_paths) are indeed relative (as opposed to absolute). > - The postcondition B ensures that the initial set of paths (given in > rel_paths) is included in the results. > - The postcondition C ensures that the requirements in the results are the > subset of the given requirements. > - The precondition D requires that there are no missing requirements (*i.e. > *that each requirement in the given module_to_requirement is also defined > in the given requirements). > > @icontract.pre(lambda rel_paths: all(rel_pth.root == "" for rel_pth in rel_paths)) # A > @icontract.post( > lambda rel_paths, result: all(pth in result.rel_paths for pth in rel_paths), > enabled=icontract.SLOW, > description="Initial relative paths included") # B > @icontract.post( > lambda requirements, result: all(req.name in requirements for req in result.requirements), > enabled=icontract.SLOW) # C > @icontract.pre( > lambda requirements, module_to_requirement: missing_requirements(module_to_requirement, requirements) == [], > enabled=icontract.SLOW) # D > def collect_dependency_graph(root_dir: pathlib.Path, rel_paths: List[pathlib.Path], > requirements: Mapping[str, Requirement], > module_to_requirement: Mapping[str, str]) -> Package: > > """ > Collect the dependency graph of the initial set of python files from the code base. > > :param root_dir: root directory of the codebase such as "/home/marko/workspace/pqry/production/src/py" > :param rel_paths: initial set of python files that we want to package. These paths are relative to root_dir. > :param requirements: requirements of the whole code base, mapped by package name > :param module_to_requirement: module to requirement correspondence of the whole code base > :return: resolved depedendency graph including the given initial relative paths, > """ > > I hope these examples convince you (at least a little bit :-)) that > contracts are easier and clearer to write than asserts. As noted before in > this thread, you can have the same *behavior* with asserts as long as you > don't need to inherit the contracts. But the contract decorators make it > very explicit what conditions should hold *without* having to look into > the implementation. Moreover, it is very hard to ensure the postconditions > with asserts as soon as you have a complex control flow since you would > need to duplicate the assert at every return statement. (You could > implement a context manager that ensures the postconditions, but a context > manager is not more readable than decorators and you have to duplicate them > as documentation in the docstring). > > In my view, contracts are also superior to many kinds of tests. As the > contracts are *always* enforced, they also enforce the correctness > throughout the program execution whereas the unit tests and doctests only > cover a list of selected cases. 
Furthermore, writing the contracts in these > examples as doctests or unit tests would escape the attention of most less > experienced programmers which are not used to read unit tests as > documentation. Finally, these unit tests would be much harder to read than > the decorators (*e.g.*, the unit test would supply invalid arguments and > then check for ValueError which is already a much more convoluted piece of > code than the preconditions and postconditions as decorators. Such testing > code also lives in a file separate from the original implementation making > it much harder to locate and maintain). > > Mind that the contracts *do not* *replace* the unit tests or the > doctests. The contracts make merely tests obsolete that test that the > function or class actually observes the contracts. Design-by-contract helps > you skip those tests and focus on the more complex ones that test the > behavior. Another positive effect of the contracts is that they make your > tests deeper: if you specified the contracts throughout the code base, a > test of a function that calls other functions in its implementation will > also make sure that all the contracts of that other functions hold. This > can be difficult to implement with standard unit test frameworks. > > Another aspect of the design-by-contract, which is IMO ignored quite > often, is the educational one. Contracts force the programmer to actually > sit down and think *formally* about the inputs and the outputs > (hopefully?) *before* she starts to implement a function. Since many > schools use Python to teach programming (especially at high school level), > I imagine writing contracts of a function to be a very good exercise in > formal thinking for the students. > > Please let me know what points *do not *convince you that Python needs > contracts (in whatever form -- be it as a standard library, be it as a > language construct, be it as a widely adopted and collectively maintained > third-party library). I would be very glad to address these points in my > next message(s). > > Cheers, > Marko > > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Sat Sep 15 13:38:48 2018 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 15 Sep 2018 19:38:48 +0200 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause References: Message-ID: <20180915193848.749c1e03@fsol> On Fri, 14 Sep 2018 22:09:06 +0200 Davide Rizzo wrote: > > In this case, I even see the potential to convey the original message > in a more powerful way than the current formulation does. I'm not a > good candidate for this, as the chosen language for this community is > English, which is not my native language nor a language I feel very > good at. I appreciate the poetic style of the original, and I think > that Tim Peters has done an outstanding job at capturing these ideas > into easy and humor-rich language. The opportunity would be to express > the idea of aesthetic appeal of code in some way beyond the simplistic > judgmental labelling of "beautiful" vs "ugly". To be fair, in my > experience this has been a source of confusion to many Python > newcomers, as the notion of "beauty", as with any other value > judgment, is highly relative to the subject evaluating it. 
I've seen > people think of the Python community as conceited because they would > think they possess some absolute metric of beauty.

What this merely shows, IMHO, is that writing programming slogans or jokes on clothing you wear in public is stupid. Most people who see them won't understand a word of them, and in some cases may badly misinterpret them as your example shows.

I used to think I was the only one for whom conference t-shirts could only serve as pyjamas, but then I read online that others feel the same... That was quite reassuring: there are other sane people out there! ;-)

Regards

Antoine.

From marko.ristin at gmail.com Sat Sep 15 16:14:43 2018
From: marko.ristin at gmail.com (Marko Ristin-Kaufmann)
Date: Sat, 15 Sep 2018 22:14:43 +0200
Subject: [Python-ideas] Pre-conditions and post-conditions
In-Reply-To: 
References: <140891b8-3aef-0991-9421-7479e6a63eb6@gmail.com>
Message-ID: 

Hi David Mertz and Michael Lee,
Thank you for raising the points. Please let me respond to your comments separately. Please let me know if I missed or misunderstood anything.

*Assertions versus contracts.* David wrote:

> I'm afraid that in reading the examples provided it is difficulties for me
> not simply to think that EVERY SINGLE ONE of them would be FAR easier to
> read if it were an `assert` instead.

I think there are two misunderstandings on the role of the contracts. First, they are part of the function signature, and not of the implementation. In contrast, the assertions are part of the implementation and are completely obscured in the signature. To see the contracts of a function or a class written as assertions, you need to visually inspect the implementation. The contracts are instead engraved in the signature and immediately visible. For example, you can test the distinction by pressing Ctrl+Q in PyCharm.

Second, assertions are only suitable for preconditions. Postconditions are practically unmaintainable as assertions as soon as you have multiple early returns in a function. The invariants implemented as assertions are always unmaintainable in practice (except for very, very small classes) -- you need to inspect each function of the class and all their return statements and manually add assertions for each invariant. Removing or changing invariants manually is totally impractical in my view.

*Efficiency and obviousness. *David wrote:

> The API of the library is a bit noisy, but I think the obstacle it's more
> in the higher level design for me. Adding many layers of expensive runtime
> checks and many lines of code in order to assure simple predicates that a
> glance at the code or unit tests would do better seems wasteful.

I'm not very sure what you mean by expensive runtime checks -- every single contract can be disabled at any point. Once a contract is disabled, there is literally no runtime computational cost incurred. The complexity of a contract during testing is also exactly the same as if you wrote it in the unit test. There is a constant overhead due to the extra function call to check the condition, but there's no more time complexity to it. The overhead of an additional function call is negligible in most practical test cases.

When you say "a glance at the code", this implies to me that you are referring to your own code and not to legacy code. In my experience, even simple predicates are often not as obvious to see in other people's code as one might think (*e.g.
*I had to struggle with even most simple ones like whether the result ends in a newline or not -- often having to actually run the code to check experimentally what happens with different inputs). Postconditions prove very useful in such situations: they let us know that whenever a function returns, the result must satisfy its postconditions. They are formal and obvious to read in the function signature, and hence spare us the need to parse the function's implementation or run it. Contracts in the unit tests. > The API of the library is a bit noisy, but I think the obstacle it's more > in the higher level design for me. Adding many layers of expensive runtime > checks and many lines of code in order to assure simple predicates that a > glance at the code or *unit tests would do better* seems wasteful. > (emphasis mine) Defining contracts in a unit test is, as I already mentioned in my previous message, problematic due to two reasons. First, the contract resides in a place far away from the function definition which might make it hard to find and maintain. Second, defining the contract in the unit test makes it impossible to put the contract in the production or test it in a call from a different function. In contrast, introducing the contract as a decorator works perfectly fine in all the three above-mentioned cases (smoke unit test, production, deeper testing). *Library. *Michael wrote: > I just want to point out that you don't need permission from anybody to > start a library. I think developing and popularizing a contracts library is > a reasonable goal -- but that's something you can start doing at any time > without waiting for consensus. As a matter of fact, I already implemented the library which covers most of the design-by-contract including the inheritance of the contracts. (The only missing parts are retrieval of "old" values in postconditions and loop invariants.) It's published on pypi as "icontract" package (the website is https://github.com/Parquery/icontract/). I'd like to gauge the interest before I/we even try to make a proposal to make it into the standard library. The discussions in this thread are an immense help for me to crystallize the points that would need to be addressed explicitly in such a proposal. If the proposal never comes about, it would at least flow into the documentation of the library and help me identify and explain better the important points. *Observation of contracts. *Michael wrote: > Your contracts are only checked when the function is evaluated, so you'd > still need to write that unit test that confirms the function actually > observes the contract. I don't think you necessarily get to reduce the > number of tests you'd need to write. Assuming that a contracts library is working correctly, there is no need to test whether a contract is observed or not -- you assume it is. The same applies to any testing library -- otherwise, you would have to test the tester, and so on *ad infinitum.* You still need to evaluate the function during testing, of course. But you don't need to document the contracts in your tests nor check that the postconditions are enforced -- you assume that they hold. For example, if you introduce a postcondition that the result of a function ends in a newline, there is no point of making a unit test, passing it some value and then checking that the result value ends in a newline in the test. Normally, it is sufficient to smoke-test the function. 
For example, you write a smoke unit test that gives a range of inputs to the function by using the hypothesis library and lets the postconditions be checked automatically. You can view each postcondition as an additional test case in this scenario -- but one that is also embedded in the function signature and also applicable in production. Not all tests can be written like this, of course. Dealing with a complex function involves writing testing logic which is too complex to fit in postconditions. Contracts are not a panacea, but they absolve us from implementing trivial testing logic while keeping the important bits of the documentation close to the function and allowing for deeper tests.

*Accurate contracts. *Michael wrote:

> There's also no guarantee that your contracts will necessarily be
> *accurate*. It's entirely possible that your preconditions/postconditions
> might hold for every test case you can think of, but end up failing when
> running in production due to some edge case that you missed.

Unfortunately, there is no practical exit from this dilemma -- and it applies all the same to the tests. Who guarantees that the testing logic of the unit tests is correct? Unless you can formally prove that the code does what it should, there is no way around it. Whether you write contracts in the tests or in the decorators, it makes no difference to accuracy. If you missed an edge case in your tests, well, you missed it :). Design-by-contract does not make the code bug-free, but it makes the bugs *much less likely* and *easier* to detect *early*. In practice, if there is a complex contract, I encapsulate its complex parts in separate functions (often with their own contracts), test these functions in isolation and then, once the tests pass and I'm confident about their correctness, put them into contracts.

> (And if you decide to disable those pre/post conditions to avoid the
> efficiency hit, you're back to square zero.)

In practice, we at Parquery AG let the critical contracts run in production to ensure that the program blows up before it exercises undefined behavior in a critical situation. The informative violation errors of the icontract library help us to trace the bugs more easily since the relevant values are part of the error log. However, if some of the contracts are too inefficient to check in production, alas you have to turn them off and they can't be checked since they are inefficient. This seems like a tautology to me -- could you please clarify a bit what you meant? If a check is critical and inefficient at the same time, then your problem is unsolvable (or at least ill-defined); contracts, as well as any other approach, cannot solve it.

*Ergonomic assertions. *Michael wrote:

> Or I guess to put it another way -- it seems what all of these contract
> libraries are doing is basically adding syntax to try and make adding
> asserts in various places more ergonomic, and not much else. I agree those
> kinds of libraries can be useful, but I don't think they're necessarily
> useful enough to be part of the standard library or to be a technique
> Python programmers should automatically use by default.

From the point of view of the *behavior*, that is exactly the case. The contracts (*e.g.* as function decorators) make postconditions and invariants possible in practice. As I already noted above, postconditions are very hard and invariants almost impossible to maintain manually without the contracts. This is even more so when contracts are inherited in a class hierarchy.
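To illustrate the mechanism with a simplified sketch (this is *not* how icontract is actually implemented; the names post, SLOW and missing_names are made up for illustration): a contract decorator can simply return the original function unchanged when the contract is disabled, so a disabled contract costs nothing at run time, while an enabled one wraps the function and checks the condition on every call:

    import functools

    SLOW = False  # analogous to icontract.SLOW: switched on for testing, off in production

    def post(condition, enabled=True, description=""):
        # Simplified post-condition decorator, for illustration only.
        def decorate(func):
            if not enabled:
                # Disabled contract: the function is returned unchanged,
                # so there is no wrapper and no run-time overhead at all.
                return func

            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                result = func(*args, **kwargs)
                if not condition(result):
                    raise AssertionError(
                        "Post-condition violated in {}: {}".format(func.__name__, description))
                return result
            return wrapper
        return decorate

    @post(lambda result: len(result) == len(set(result)), enabled=SLOW,
          description="result contains only unique elements")
    def missing_names(names):
        return sorted(set(names))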
Please do not underestimate another aspect of the contracts, namely the value of contracts as verifiable documentation. Please note that the only alternative that I observe in practice without design-by-contract is to write contracts in docstrings in *natural language*. Most often, they are just assumed, so the next programmer burns her fingers expecting the contracts to hold when they actually differ from the class or function description, but nobody bothered to update the docstrings (which is a common pitfall in any code base over a longer period of time). *Automatic generation of tests.* Michael wrote: > What might be interesting is somebody wrote a library that does something > more then just adding asserts. For example, one idea might be to try > hooking up a contracts library to hypothesis (or any other library that > does quickcheck-style testing). That might be a good way of partially > addressing the problems up above -- you write out your invariants, and a > testing library extracts that information and uses it to automatically > synthesize interesting test cases. This is the final goal and my main motivation to push for design-by-contract in Python :). There is a whole research community that tries to come up with automatic test generations, and contracts are of great utility there. Mind that generating the tests based on contracts is not trivial: hypothesis just picks elements for each input independently which is a much easier problem. However, preconditions can define how the arguments are *related*. Assume a function takes two numbers as arguments, x and y. If the precondition is y < x < (y + x) * 10, it is not trivial even for this simple example to come up with concrete samples of x and y unless you simply brute-force the problem by densely sampling all the numbers and checking the precondition. I see a chicken-and-egg problem here. If design-by-contract is not widely adopted, there will also be fewer or no libraries for automatic test generation. Honestly, I have absolutely no idea how you could approach automatic generation of test cases without contracts (in one form or the other). For example, how could you automatically mock a class without knowing its invariants? Since generating test cases for functions with non-trivial contracts is hard (and involves collaboration of many people), I don't expect anybody to start even thinking about it if the tool can only be applied to almost anywhere due to lack of contracts. Formal proofs and static analysis are even harder beasts to tame -- and I'd say the argument holds true for them even more. David and Michael, thank you again for your comments! I welcome very much your opinion and any follow-ups as well as from other participants on this mail list. Cheers, Marko On Sat, 15 Sep 2018 at 10:42, Michael Lee wrote: > I just want to point out that you don't need permission from anybody to > start a library. I think developing and popularizing a contracts library is > a reasonable goal -- but that's something you can start doing at any time > without waiting for consensus. > > And if it gets popular enough, maybe it'll be added to the standard > library in some form. That's what happened with attrs, iirc -- it got > fairly popular and demonstrated there was an unfilled niche, and so Python > acquired dataclasses.. > > > The contracts make merely tests obsolete that test that the function or >> class actually observes the contracts. >> > > Is this actually the case? 
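For what it's worth, the brute-force filtering can already be wired up by hand with hypothesis' assume(); the function scaled_difference below is made up for illustration. Every sample that violates the precondition is simply thrown away, which is exactly why this approach does not scale to more restrictive preconditions:

    from hypothesis import assume, given, strategies as st

    def scaled_difference(x: int, y: int) -> int:
        # Toy function with the precondition y < x < (y + x) * 10, for illustration.
        return (x - y) * 10

    @given(st.integers(min_value=-1000, max_value=1000),
           st.integers(min_value=-1000, max_value=1000))
    def test_scaled_difference(x: int, y: int) -> None:
        assume(y < x < (y + x) * 10)  # samples violating the precondition are discarded
        assert scaled_difference(x, y) > 0  # post-condition: strictly positive since y < x

A contracts-aware test generator would instead have to construct x and y jointly so that the precondition holds by construction.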
Your contracts are only checked when the > function is evaluated, so you'd still need to write that unit test that > confirms the function actually observes the contract. I don't think you > necessarily get to reduce the number of tests you'd need to write. > > > Please let me know what points *do not *convince you that Python needs >> contracts >> > > While I agree that contracts are a useful tool, I don't think they're > going to be necessarily useful for *all* Python programmers. For example, > contracts aren't particularly useful if you're writing fairly > straightforward code with relatively simple invariants. > > I'm also not convinced that libraries where contracts are checked > specifically *at runtime* actually give you that much added power and > impact. For example, you still need to write a decent number of unit tests > to make sure your contracts are being upheld (unless you plan on checking > this by just deploying your code and letting it run, which seems > suboptimal). There's also no guarantee that your contracts will necessarily > be *accurate*. It's entirely possible that your > preconditions/postconditions might hold for every test case you can think > of, but end up failing when running in production due to some edge case > that you missed. (And if you decide to disable those pre/post conditions to > avoid the efficiency hit, you're back to square zero.) > > Or I guess to put it another way -- it seems what all of these contract > libraries are doing is basically adding syntax to try and make adding > asserts in various places more ergonomic, and not much else. I agree those > kinds of libraries can be useful, but I don't think they're necessarily > useful enough to be part of the standard library or to be a technique > Python programmers should automatically use by default. > > What might be interesting is somebody wrote a library that does something > more then just adding asserts. For example, one idea might be to try > hooking up a contracts library to hypothesis (or any other library that > does quickcheck-style testing). That might be a good way of partially > addressing the problems up above -- you write out your invariants, and a > testing library extracts that information and uses it to automatically > synthesize interesting test cases. > > (And of course, what would be very cool is if the contracts could be > verified statically like you can do in languages like dafny -- that way, > you genuinely would be able to avoid writing many kinds of tests and could > have confidence your contracts are upheld. But I understanding implementing > such verifiers are extremely challenging and would probably have too-steep > of a learning curve to be usable by most people anyways.) > > -- Michael > > > > On Fri, Sep 14, 2018 at 11:51 PM, Marko Ristin-Kaufmann < > marko.ristin at gmail.com> wrote: > >> Hi, >> Let me make a couple of practical examples from the work-in-progress ( >> https://github.com/Parquery/pypackagery, branch mristin/initial-version) >> to illustrate again the usefulness of the contracts and why they are, in my >> opinion, superior to assertions and unit tests. >> >> What follows is a list of function signatures decorated with contracts >> from pypackagery library preceded by a human-readable description of the >> contracts. >> >> The invariants tell us what format to expect from the related string >> properties. 
>> >> @icontract.inv(lambda self: self.name.strip() == self.name) >> @icontract.inv(lambda self: self.line.endswith("\n")) >> class Requirement: >> """Represent a requirement in requirements.txt.""" >> >> def __init__(self, name: str, line: str) -> None: >> """ >> Initialize. >> >> :param name: package name >> :param line: line in the requirements.txt file >> """ >> ... >> >> The postcondition tells us that the resulting map keys the values on >> their name property. >> >> @icontract.post(lambda result: all(val.name == key for key, val in result.items())) >> def parse_requirements(text: str, filename: str = '') -> Mapping[str, Requirement]: >> """ >> Parse requirements file and return package name -> package requirement as in requirements.txt >> >> :param text: content of the ``requirements.txt`` >> :param filename: where we got the ``requirements.txt`` from (URL or path) >> :return: name of the requirement (*i.e.* pip package) -> parsed requirement >> """ >> ... >> >> >> The postcondition ensures that the resulting list contains only unique >> elements. Mind that if you returned a set, the order would have been lost. >> >> @icontract.post(lambda result: len(result) == len(set(result)), enabled=icontract.SLOW) >> def missing_requirements(module_to_requirement: Mapping[str, str], >> requirements: Mapping[str, Requirement]) -> List[str]: >> """ >> List requirements from module_to_requirement missing in the ``requirements``. >> >> :param module_to_requirement: parsed ``module_to_requiremnt.tsv`` >> :param requirements: parsed ``requirements.txt`` >> :return: list of requirement names >> """ >> ... >> >> Here is a bit more complex example. >> - The precondition A requires that all the supplied relative paths >> (rel_paths) are indeed relative (as opposed to absolute). >> - The postcondition B ensures that the initial set of paths (given in >> rel_paths) is included in the results. >> - The postcondition C ensures that the requirements in the results are >> the subset of the given requirements. >> - The precondition D requires that there are no missing requirements (*i.e. >> *that each requirement in the given module_to_requirement is also >> defined in the given requirements). >> >> @icontract.pre(lambda rel_paths: all(rel_pth.root == "" for rel_pth in rel_paths)) # A >> @icontract.post( >> lambda rel_paths, result: all(pth in result.rel_paths for pth in rel_paths), >> enabled=icontract.SLOW, >> description="Initial relative paths included") # B >> @icontract.post( >> lambda requirements, result: all(req.name in requirements for req in result.requirements), >> enabled=icontract.SLOW) # C >> @icontract.pre( >> lambda requirements, module_to_requirement: missing_requirements(module_to_requirement, requirements) == [], >> enabled=icontract.SLOW) # D >> def collect_dependency_graph(root_dir: pathlib.Path, rel_paths: List[pathlib.Path], >> requirements: Mapping[str, Requirement], >> module_to_requirement: Mapping[str, str]) -> Package: >> >> """ >> Collect the dependency graph of the initial set of python files from the code base. >> >> :param root_dir: root directory of the codebase such as "/home/marko/workspace/pqry/production/src/py" >> :param rel_paths: initial set of python files that we want to package. These paths are relative to root_dir. 
>> :param requirements: requirements of the whole code base, mapped by package name >> :param module_to_requirement: module to requirement correspondence of the whole code base >> :return: resolved depedendency graph including the given initial relative paths, >> """ >> >> I hope these examples convince you (at least a little bit :-)) that >> contracts are easier and clearer to write than asserts. As noted before in >> this thread, you can have the same *behavior* with asserts as long as >> you don't need to inherit the contracts. But the contract decorators make >> it very explicit what conditions should hold *without* having to look >> into the implementation. Moreover, it is very hard to ensure the >> postconditions with asserts as soon as you have a complex control flow since >> you would need to duplicate the assert at every return statement. (You >> could implement a context manager that ensures the postconditions, but a >> context manager is not more readable than decorators and you have to >> duplicate them as documentation in the docstring). >> >> In my view, contracts are also superior to many kinds of tests. As the >> contracts are *always* enforced, they also enforce the correctness >> throughout the program execution whereas the unit tests and doctests only >> cover a list of selected cases. Furthermore, writing the contracts in these >> examples as doctests or unit tests would escape the attention of most less >> experienced programmers which are not used to read unit tests as >> documentation. Finally, these unit tests would be much harder to read than >> the decorators (*e.g.*, the unit test would supply invalid arguments and >> then check for ValueError which is already a much more convoluted piece of >> code than the preconditions and postconditions as decorators. Such testing >> code also lives in a file separate from the original implementation making >> it much harder to locate and maintain). >> >> Mind that the contracts *do not* *replace* the unit tests or the >> doctests. The contracts make merely tests obsolete that test that the >> function or class actually observes the contracts. Design-by-contract helps >> you skip those tests and focus on the more complex ones that test the >> behavior. Another positive effect of the contracts is that they make your >> tests deeper: if you specified the contracts throughout the code base, a >> test of a function that calls other functions in its implementation will >> also make sure that all the contracts of that other functions hold. This >> can be difficult to implement with standard unit test frameworks. >> >> Another aspect of the design-by-contract, which is IMO ignored quite >> often, is the educational one. Contracts force the programmer to actually >> sit down and think *formally* about the inputs and the outputs >> (hopefully?) *before* she starts to implement a function. Since many >> schools use Python to teach programming (especially at high school level), >> I imagine writing contracts of a function to be a very good exercise in >> formal thinking for the students. >> >> Please let me know what points *do not *convince you that Python needs >> contracts (in whatever form -- be it as a standard library, be it as a >> language construct, be it as a widely adopted and collectively maintained >> third-party library). I would be very glad to address these points in my >> next message(s). 
>> >> Cheers, >> Marko >> >> _______________________________________________ >> Python-ideas mailing list >> Python-ideas at python.org >> https://mail.python.org/mailman/listinfo/python-ideas >> Code of Conduct: http://python.org/psf/codeofconduct/ >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Sat Sep 15 16:39:21 2018 From: chris.barker at noaa.gov (Chris Barker) Date: Sat, 15 Sep 2018 22:39:21 +0200 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: <20180915193848.749c1e03@fsol> References: <20180915193848.749c1e03@fsol> Message-ID: On Sat, Sep 15, 2018 at 7:38 PM, Antoine Pitrou wrote: > To be fair, in my > > experience this has been a source of confusion to many Python > > newcomers, as the notion of "beauty", as with any other value > > judgment, is highly relative to the subject evaluating it. Indeed is *is* subjective -- as is "Pythonic", or "elegant", or other concept of that nature -- that is intentional. "efficient is better than inefficient" kind of goes without saying... What this merely shows, IMHO, is that writing programming slogans or > jokes on clothing you wear in public is stupid. Most people who see > them won't understand a word of them, and in some cases may badly > misinterpret them as your example shows. > > I used to think I was the only one for whom conference t-shirts could > only serve as pyjamas, well, I see them as my "geek cred" t-shirts, and part of the point is that only those those "in the know" will get it. So I don't think this says anything about wearing clothing that refers to a particular group is bad, but that one shoudl be caefule about whicj slogans you display out of context. If teh shirt said" "beuatiful code is better than ugly code" I don't think there would be an issue. As to the OP's point: We now have anecdotal evidence that "beautiful is better than ugly" can be offensive out of context. Other than that, we have people "suspecting" or "imagining" that some people "may" find it offensive in context. I try never to speak for others when saying whether something is troublesome to a community, but if we have exactly zero actual cases of someone finding it personally offensive (in context), I think we'd be going a bit overboard in making any changes. Is it any better to make a change that has not been asked for by imagining other's sensitivities than it is to ignore others' sensitivities? -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From kulakov.ilya at gmail.com Sat Sep 15 16:45:27 2018 From: kulakov.ilya at gmail.com (Ilya Kulakov) Date: Sat, 15 Sep 2018 13:45:27 -0700 Subject: [Python-ideas] Deprecation utilities for the warnings module In-Reply-To: <0E10AE5C-E650-42D8-9BCA-B42D9F5902B7@killingar.net> References: <3E800307-2D67-4E8E-B2C3-4C83875F0226@gmail.com> <7F6B5676-91D9-4FBE-98B8-501BA77D6897@killingar.net> <0E10AE5C-E650-42D8-9BCA-B42D9F5902B7@killingar.net> Message-ID: <81C43951-ECFE-470E-98D4-2D62CDE2CBD7@gmail.com> >> Therefore it's not redundant to subclass *Warning for namespacing alone. > > Not redundant? You mean you must subclass? In that case my concern stands. An unfortunate typo, meant "it's redundant". > And what does that match against? 
The module name of the exception type right? It matches agains a location where warn is called after taking stacklevel into account. Consider the following example: test.py: import warnings warnings.warn("test") warnings.warn("__main__ from test", stacklevel=2) $ python -c "import warnings, test; warnings.warn('__main__')" test.py:2: UserWarning: test warnings.warn("test") -c:1: UserWarning: __main__ from test -c:1: UserWarning: __main__ $ python -W "ignore:::test:" -c "import warnings, test; warnings.warn('__main__')" -c:1: UserWarning: __main__ from test -c:1: UserWarning: __main__ $ python -W "ignore:::__main__:" -c "import warnings, test; warnings.warn('__main__')" test.py:2: UserWarning: test warnings.warn("test") End-user can distinguish warnings of the same category by specifying their origin (where warning is issued in runtime). From greg.ewing at canterbury.ac.nz Sat Sep 15 23:23:01 2018 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sun, 16 Sep 2018 15:23:01 +1200 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <20180915193848.749c1e03@fsol> Message-ID: <5B9DCC95.7050109@canterbury.ac.nz> Chris Barker via Python-ideas wrote: > "efficient is better than inefficient" kind of goes without saying... Perhaps we should just replace the entire Zen with "Good is better than bad." Insert your own subjective ideas on what constitutes "good" and "bad" and you're set to go. :-) -- Greg From leewangzhong+python at gmail.com Sat Sep 15 23:39:22 2018 From: leewangzhong+python at gmail.com (Franklin? Lee) Date: Sat, 15 Sep 2018 23:39:22 -0400 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <20180915193848.749c1e03@fsol> Message-ID: I am very disappointed with the responses to this thread. We have mockery, dismissiveness, and even insinuations about OP's psychological health. Whether or not OP is a troll, and whether or not OP's idea has merit, that kind of response is unnecessary and unhelpful. (While I lean toward OP being a troll, the fact that the OP's name is the same as a Canadian actress is insignificant. Chinese surnames are single-syllable, there are only so many one-syllable surnames, and "Samantha" is a common-enough name.) Since Antoine challenged Calvin to name names, I will name names. If the thread devolves into one-on-one fights, then you'll know why Calvin didn't do it. Antoine: - Accusing the OP of not being open-minded for proposing (not "insisting on"!) the idea at all. "You ask others to be open-minded, but fail to show such an attitude yourself." - Labeling the OP's position as reactionary, and intolerant. "And, as a French person, I have to notice this is yet another attempt to impose reactionary, intolerant American politics on the rest of the world (or of the Python community)." David Mertz: Sarcastically suggesting that we burn programming books if they use "beautiful" in their titles. Chris Angelico: This implied accusation: "Not everyone assumes the worst about words." Oleg: - Dismissing the whole post as a troll.* "Nice trolling, go on! :-D" - Calling the OP's idea stupid, and calling a different (settled) decision stupid. (One can argue Oleg isn't really calling anything stupid, but I preemptively say that's a stupid argument.) "Removing master/slave is almost as stupid as ugly/beautiful." - Dismissing the stance as oversensitive offense-taking. 
"People shouldn't try and take personal offense to things that haven't been applied to them personally, or, even worse, complain about a term applied to anything/anyone else in a way they perceive to be offensive." - Mockery: The entire email with this line is spent on mockery: 'I also propose to ban the following technical terms that carry dark meanings: "abort", "kill" and "execute" (stop the genocide!) ...' Greg: Another email spent entirely on mockery: """If we're going to object to "slave", we should object to "robot" as well, since it's derived from a Czech word meaning "forced worker".""" * There is a difference between discussing whether it is a troll post and flippantly stating it as fact. The first brings up a relevant concern. The second says, "No one can reasonably believe what you claim to believe, so I won't treat you as a rational person." Jacco: - This is completely disrespectful and way over the line. Don't try to make a psychological evaluation from two emails, especially when it's just someone having an idea you don't like. """However, if merely the word ugly being on a page can be "harmful", what you really need is professional help, not a change to Python. Because there's obviously been some things in your past you need to work through.""" - Mockery. """If we have to ban "Ugly" for american sensitivities, then perhaps we need to ban a number of others for china's sensitivities. Where will it end ?""" There are people making serious arguments against the idea, including the people above. But those arguments could have been made without the above examples. The above quotes don't treat the OP or the OP's ideas as worthy of a serious and mature response. P.S.: I read Poe's Law not as a warning against falling for trolls, but as a warning about confirmation bias. If I keep falling for poes of group G, it's probably because I'm too far too willing to believe negative things about G, and don't care to understand them. From solipsis at pitrou.net Sun Sep 16 04:13:45 2018 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 16 Sep 2018 10:13:45 +0200 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause References: <20180915193848.749c1e03@fsol> Message-ID: <20180916101345.092ddf9a@fsol> Yeah, right. You know, when I was pointing out Calvin not being very brave by attacking a bunch of people without giving names, my aim was to merely point out how dishonest and disrespectful his attitude his. *Not* to encourage someone to turn his post into more of a clusterfuck of personal attacks. Regards Antoine. On Sat, 15 Sep 2018 23:39:22 -0400 "Franklin? Lee" wrote: > I am very disappointed with the responses to this thread. We have > mockery, dismissiveness, and even insinuations about OP's > psychological health. Whether or not OP is a troll, and whether or not > OP's idea has merit, that kind of response is unnecessary and > unhelpful. > > (While I lean toward OP being a troll, the fact that the OP's name is > the same as a Canadian actress is insignificant. Chinese surnames are > single-syllable, there are only so many one-syllable surnames, and > "Samantha" is a common-enough name.) > > Since Antoine challenged Calvin to name names, I will name names. If > the thread devolves into one-on-one fights, then you'll know why > Calvin didn't do it. > > Antoine: > - Accusing the OP of not being open-minded for proposing (not > "insisting on"!) the idea at all. > "You ask others to be open-minded, but fail to show such an > attitude yourself." 
> - Labeling the OP's position as reactionary, and intolerant. > "And, as a French person, I have to notice this is yet another > attempt to impose reactionary, intolerant American politics on the > rest of the world (or of the Python community)." > > David Mertz: Sarcastically suggesting that we burn programming books > if they use "beautiful" in their titles. > > Chris Angelico: This implied accusation: > "Not everyone assumes the worst about words." > > Oleg: > - Dismissing the whole post as a troll.* > "Nice trolling, go on! :-D" > - Calling the OP's idea stupid, and calling a different (settled) > decision stupid. (One can argue Oleg isn't really calling anything > stupid, but I preemptively say that's a stupid argument.) > "Removing master/slave is almost as stupid as ugly/beautiful." > - Dismissing the stance as oversensitive offense-taking. > "People shouldn't try and take personal offense to things that > haven't been applied to them personally, or, even worse, complain > about a term applied to anything/anyone else in a way they perceive to > be offensive." > - Mockery: The entire email with this line is spent on mockery: > 'I also propose to ban the following technical terms that carry > dark meanings: "abort", "kill" and "execute" (stop the genocide!) ...' > > Greg: Another email spent entirely on mockery: > """If we're going to object to "slave", we should object to > "robot" as well, since it's derived from a Czech word meaning "forced > worker".""" > > * There is a difference between discussing whether it is a troll post > and flippantly stating it as fact. The first brings up a relevant > concern. The second says, "No one can reasonably believe what you > claim to believe, so I won't treat you as a rational person." > > Jacco: > - This is completely disrespectful and way over the line. Don't try to > make a psychological evaluation from two emails, especially when it's > just someone having an idea you don't like. > """However, if merely the word ugly being on a page can be > "harmful", what you really need is professional help, not a change to > Python. Because there's obviously been some things in your past you > need to work through.""" > - Mockery. > """If we have to ban "Ugly" for american sensitivities, then > perhaps we need to ban a number of others for china's sensitivities. > Where will it end ?""" > > There are people making serious arguments against the idea, including > the people above. But those arguments could have been made without the > above examples. The above quotes don't treat the OP or the OP's ideas > as worthy of a serious and mature response. > > > P.S.: I read Poe's Law not as a warning against falling for trolls, > but as a warning about confirmation bias. If I keep falling for poes > of group G, it's probably because I'm too far too willing to believe > negative things about G, and don't care to understand them. > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > From turnbull.stephen.fw at u.tsukuba.ac.jp Sun Sep 16 04:33:45 2018 From: turnbull.stephen.fw at u.tsukuba.ac.jp (Stephen J. 
Turnbull) Date: Sun, 16 Sep 2018 17:33:45 +0900 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <20180915193848.749c1e03@fsol> Message-ID: <23454.5481.769440.235222@turnbull.sk.tsukuba.ac.jp> Chris Barker via Python-ideas writes: > We now have anecdotal evidence that "beautiful is better than ugly" > can be offensive out of context. Other than that, we have people > "suspecting" or "imagining" that some people "may" find it > offensive in context. "Sam" at yandex.ru did not even do that. She just took it out of context. The post was a troll, whether she is or not. So put it back in context. PyCon 2017 had the whole python -m this on the back of the shirts. If somebody *ever* complains about that, I'll bite my tongue and ignore them. > Is it any better to make a change that has not been asked for by > imagining other's sensitivities than it is to ignore others' > sensitivities? Either way you're ignoring their actual sensitivities, so it's at root the same (the former manifests as patronizing, the latter as rude). On the other hand, sometimes there are better terms to use. It's one thing to pull "beautiful is better than ugly" out of a poem in which most of the lines follow that same pattern of " is better than ", breaking the symmetry. It's another when replacing "master/slave" comes up, and it's pointed out that there *are* more precise terms, such as "original/replica", in some contexts. I'm of two minds as whether it's worth the churn, but if others are willing to do the work ;-) of finding all the uses, proposing replacements, and submitting the PRs, I'd be willing to review and add my $.02 as to whether there's actually an improvement. I would also disagree with Greg Ewing's take on "robot". It may have meant "slave" in the original Czech, but in English it has strong connotations of "automaton" and an inherent lack of autonomy, quite different from a human slave's flexibility to perform any command, and the way a human slave's autonomy is stripped by force, respectively. If Czech-speakers want to offer their opinions, I'm listening, but I wouldn't be surprised to find that their consensus opinion in 2018 to be that the English usage of robot is more prevalent than the Czech original meaning. Steve From turnbull.stephen.fw at u.tsukuba.ac.jp Sun Sep 16 04:35:09 2018 From: turnbull.stephen.fw at u.tsukuba.ac.jp (Stephen J. Turnbull) Date: Sun, 16 Sep 2018 17:35:09 +0900 Subject: [Python-ideas] Combine f-strings with i18n In-Reply-To: References: Message-ID: <23454.5565.705303.360300@turnbull.sk.tsukuba.ac.jp> Hans Polak writes: > The second problem is that I need to translate strings on the fly. I don't understand what that means. gettext() does exactly that. Do you mean you need to switch languages on the fly? > I propose to add a f''.language() method to the f-string format. > > Rationale: > > More pythonic. I don't think so, since as Chris points out, an f-string is an expression whose value is a str, with the values of the locals already interpolated in the string. You'd need a lot of magic in the compiler to make this work. > At this moment, _('').format() is the way to go, so I > would need to wrap another call around that: T(_(''), args, 'es_ES') > <===This is an ugly hack. 
> > # Set the _() function to return the same string > > _ = lambda s: s > > es = gettext.translation('myapplication', languages=['es_ES']) If, as I guessed, you want to change languages on the fly, I would advise to make a hash table of languages used so far, and add translation tables to it on the fly as new languages are requested. Or, if space is at a premium, a LRU cache of tables. > def T(translatable_string, args_dictionary = None, language = None) > > ??? if 'es_ES' == language: > > ??? ??? # Return translated, formatted string > > ??? ??? return es.gettext(translatable_string).format(args) > > > ??? # Default, return formatted string > > ??? return translatable_string.format(args) Then you can replace this with # Use duck-typing of gettext.translation objects class NullTranslation: def __init__(self): self.gettext = lambda s: s def get_gettext(language, translation={'C': NullTranslation()}): if language not in translation: translation[language] = \ gettext.translation('myapplication', languages=[language]) return translation[language].gettext and # This could be one line, but I guess in many cases you're likely # to use use the gettext function repeatedly. Also, use of the # _() idiom marks translatable string for translators. _ = get_gettext(language) _(translatable_string).format(key=value...) instead of T(translatable_string, args, language) which isn't great style (the names "T" and "args" are not very evocative). It's a little more code, but it doesn't require changing the semantics of existing Python code, and is prettier than your T() function IMO. Steve From mertz at gnosis.cx Sun Sep 16 05:45:18 2018 From: mertz at gnosis.cx (David Mertz) Date: Sun, 16 Sep 2018 05:45:18 -0400 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <20180915193848.749c1e03@fsol> Message-ID: You have missed the use of *reductio ad absurdum* in my comment and several others. This argument structure is one of the fundamental forms of good logical reasoning, and shows nothing dismissive or insulting. The specifics book titles I used were carefully chosen, and you'd do well to think about why those specific books (and read all of them, if you haven't). On Sat, Sep 15, 2018, 11:40 PM Franklin? Lee wrote: > I am very disappointed with the responses to this thread. We have > mockery, dismissiveness, and even insinuations about OP's > psychological health. Whether or not OP is a troll, and whether or not > OP's idea has merit, that kind of response is unnecessary and > unhelpful. > > (While I lean toward OP being a troll, the fact that the OP's name is > the same as a Canadian actress is insignificant. Chinese surnames are > single-syllable, there are only so many one-syllable surnames, and > "Samantha" is a common-enough name.) > > Since Antoine challenged Calvin to name names, I will name names. If > the thread devolves into one-on-one fights, then you'll know why > Calvin didn't do it. > > Antoine: > - Accusing the OP of not being open-minded for proposing (not > "insisting on"!) the idea at all. > "You ask others to be open-minded, but fail to show such an > attitude yourself." > - Labeling the OP's position as reactionary, and intolerant. > "And, as a French person, I have to notice this is yet another > attempt to impose reactionary, intolerant American politics on the > rest of the world (or of the Python community)." > > David Mertz: Sarcastically suggesting that we burn programming books > if they use "beautiful" in their titles. 
> > Chris Angelico: This implied accusation: > "Not everyone assumes the worst about words." > > Oleg: > - Dismissing the whole post as a troll.* > "Nice trolling, go on! :-D" > - Calling the OP's idea stupid, and calling a different (settled) > decision stupid. (One can argue Oleg isn't really calling anything > stupid, but I preemptively say that's a stupid argument.) > "Removing master/slave is almost as stupid as ugly/beautiful." > - Dismissing the stance as oversensitive offense-taking. > "People shouldn't try and take personal offense to things that > haven't been applied to them personally, or, even worse, complain > about a term applied to anything/anyone else in a way they perceive to > be offensive." > - Mockery: The entire email with this line is spent on mockery: > 'I also propose to ban the following technical terms that carry > dark meanings: "abort", "kill" and "execute" (stop the genocide!) ...' > > Greg: Another email spent entirely on mockery: > """If we're going to object to "slave", we should object to > "robot" as well, since it's derived from a Czech word meaning "forced > worker".""" > > * There is a difference between discussing whether it is a troll post > and flippantly stating it as fact. The first brings up a relevant > concern. The second says, "No one can reasonably believe what you > claim to believe, so I won't treat you as a rational person." > > Jacco: > - This is completely disrespectful and way over the line. Don't try to > make a psychological evaluation from two emails, especially when it's > just someone having an idea you don't like. > """However, if merely the word ugly being on a page can be > "harmful", what you really need is professional help, not a change to > Python. Because there's obviously been some things in your past you > need to work through.""" > - Mockery. > """If we have to ban "Ugly" for american sensitivities, then > perhaps we need to ban a number of others for china's sensitivities. > Where will it end ?""" > > There are people making serious arguments against the idea, including > the people above. But those arguments could have been made without the > above examples. The above quotes don't treat the OP or the OP's ideas > as worthy of a serious and mature response. > > > P.S.: I read Poe's Law not as a warning against falling for trolls, > but as a warning about confirmation bias. If I keep falling for poes > of group G, it's probably because I'm too far too willing to believe > negative things about G, and don't care to understand them. > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mertz at gnosis.cx Sun Sep 16 05:57:58 2018 From: mertz at gnosis.cx (David Mertz) Date: Sun, 16 Sep 2018 05:57:58 -0400 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: <23454.5481.769440.235222@turnbull.sk.tsukuba.ac.jp> References: <20180915193848.749c1e03@fsol> <23454.5481.769440.235222@turnbull.sk.tsukuba.ac.jp> Message-ID: On Sun, Sep 16, 2018, 4:34 AM Stephen J. Turnbull < turnbull.stephen.fw at u.tsukuba.ac.jp> wrote: > I would also disagree with Greg Ewing's take on "robot". 
It may have > meant "slave" in the original Czech, but in English it has strong > connotations of "automaton" and an inherent lack of autonomy, quite > different from a human slave's flexibility to perform any command, Robot doesn't mean "slave" in Czech, but rather "serf." Serfdom was/is a terrible institution, but nothing best so terrible as the Atlantic slave trade of the 15th-19th C which is what modern usage tends to indicates. Moreover, the morpheme "r?b" is commonplace in Slavic languages to mean "work" in a more general sense. Wikipedia: Karl ?apek's fictional story postulated the technological creation of artificial human bodies without souls, and the old theme of the feudal robota class eloquently fit the imagination of a new class of manufactured, artificial workers. -------------- next part -------------- An HTML attachment was scrubbed... URL: From wofthep at gmail.com Sun Sep 16 06:14:03 2018 From: wofthep at gmail.com (Widom PsychoPath) Date: Sun, 16 Sep 2018 15:44:03 +0530 Subject: [Python-ideas] Retire or reword the namesake of the Language Message-ID: Guten Tag, I am Jack and I am grateful to see the efficiency of scientific computing in Python. However, What deeply saddens me is that the namesake "Python" has unfortunately been derived from the title of the uncivilised British jester troupe "Monty Python". This is something that deeply infuriates me and is against the morals of my culture. Although humor is an integral aspect of the life of an Untermensch, I believe that Python, A language used as an interface to majority of Scientific computing software should be renamed to something more suitable. I hereby propose that the Language should be renamed to Cobra, after the brilliant military strategist Cobra Commander , who has been history's most efficient and brilliant strategist. I hope that my message is not mistaken for an attempt to humor. I am physically unable to experience the same. Please revert back to the mail with your thoughts and constructive criticism. Yours forever, Jack Daniels From rosuav at gmail.com Sun Sep 16 07:40:02 2018 From: rosuav at gmail.com (Chris Angelico) Date: Sun, 16 Sep 2018 21:40:02 +1000 Subject: [Python-ideas] Retire or reword the namesake of the Language In-Reply-To: References: Message-ID: On Sun, Sep 16, 2018 at 8:14 PM, Widom PsychoPath wrote: > I hereby propose that the Language should be renamed to Cobra, after > the brilliant military strategist Cobra Commander , who has been > history's most efficient and brilliant strategist. > > Yours forever, > Jack Daniels Now THAT is the *true* spirit of professionalism. Here we have proof that the scientific community cares about the language. About 80 proof, I think. ChrisA From gadgetsteve at live.co.uk Sun Sep 16 08:05:53 2018 From: gadgetsteve at live.co.uk (Steve Barnes) Date: Sun, 16 Sep 2018 12:05:53 +0000 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <20180915193848.749c1e03@fsol> Message-ID: On 16/09/2018 10:45, David Mertz wrote: > You have missed the use of *reductio ad absurdum* in my comment and > several others. This argument structure is one of the fundamental forms > of good logical reasoning, and shows nothing dismissive or insulting. > The specifics book titles I used were carefully chosen, and you'd do > well to think about why those specific books (and read all of them, if > you haven't). 
> For the ultimate "reductio as absurdum" it is possible to argue that one (1) is elitist and zero (0) is nihilist therefore to avoid offence we should scrap the entire binary system and with it digital computers - then the whole argument becomes moot until someone implements python on an analogue computer. ;-) -- Steve (Gadget) Barnes Any opinions in this message are my personal opinions and do not reflect those of my employer. --- This email has been checked for viruses by AVG. https://www.avg.com From wes.turner at gmail.com Sun Sep 16 09:25:03 2018 From: wes.turner at gmail.com (Wes Turner) Date: Sun, 16 Sep 2018 09:25:03 -0400 Subject: [Python-ideas] Retire or reword the namesake of the Language In-Reply-To: References: Message-ID: There's already a thing named Cobra. https://github.com/opencobra/cobrapy "Python (mythology)" https://en.wikipedia.org/wiki/Python_(mythology) ... Serpent/Dragon guarding the omphalos. "Ouroboros" https://en.wikipedia.org/wiki/Ouroboros Monty Python themed Python language things: - The Cheese Shop https://en.wikipedia.org/wiki/Cheese_Shop_sketch https://wiki.python.org/moin/CheeseShop https://pypi.org/ - The Knights Who Say Ni -- https://en.wikipedia.org/wiki/Knights_Who_Say_Ni https://github.com/python/the-knights-who-say-ni - Miss Islington https://github.com/python/miss-islington On Sunday, September 16, 2018, Chris Angelico wrote: > On Sun, Sep 16, 2018 at 8:14 PM, Widom PsychoPath > wrote: > > I hereby propose that the Language should be renamed to Cobra, after > > the brilliant military strategist Cobra Commander , who has been > > history's most efficient and brilliant strategist. > > > > Yours forever, > > Jack Daniels > > Now THAT is the *true* spirit of professionalism. Here we have proof > that the scientific community cares about the language. > > About 80 proof, I think. > > ChrisA > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at janc.be Sun Sep 16 09:40:37 2018 From: lists at janc.be (Jan Claeys) Date: Sun, 16 Sep 2018 15:40:37 +0200 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: Message-ID: On Fri, 2018-09-14 at 22:09 +0200, Davide Rizzo wrote: > At one Python conference in Italy, participants were given an > elegantly designed and well-crafted t-shirt with a writing in large > characters that read "Beautiful is better than ugly" in reference to > the Zen of Python. Back home, a close person who is not a Python > programmer nor familiar with PEP 20 saw the t-shirt. They were > horrified by the out-of-context sentence for reasons similar to what > has been already stated in support of this argument. It prompted them > of lookism and judgmentality, and found the message to be disturbing > in its suggestion to compare by some standard of beauty and to > discriminate. Let me add some context: this person is socially and > politically active (maybe what someone would call a "SJW"; definitely > not what anyone would call "politically correct"), and is specially > sensitive to issues of discrimination and sexism. This was enough, > though, for me to wonder what kind of message I would be projecting > by wearing that writing on me. I've been since then discouraged to > ever wear the t-shirt in any public context. 
This illustrates that by taking something out of context, it can (appear to) get an entirely different meaning. This can happen on purpose or (as in this case, I assume) by accident. It doesn't say anything about the complete text of the ?Zen of Python? (which to any layperson probably looks quite like unintelligible gibberish). The lesson to be learned is: ?be careful when taking something out of context?. -- Jan Claeys From wes.turner at gmail.com Sun Sep 16 09:48:47 2018 From: wes.turner at gmail.com (Wes Turner) Date: Sun, 16 Sep 2018 09:48:47 -0400 Subject: [Python-ideas] Retire or reword the namesake of the Language In-Reply-To: References: Message-ID: Anyways, speaking of dragons, here are some ideas for new logos: "Strong Bad Email #58: Dragon" https://youtu.be/90X5NJleYJQ https://en.wikipedia.org/wiki/Monty_Python On Sunday, September 16, 2018, Wes Turner wrote: > There's already a thing named Cobra. > > https://github.com/opencobra/cobrapy > > > "Python (mythology)" > https://en.wikipedia.org/wiki/Python_(mythology) > ... Serpent/Dragon guarding the omphalos. > > "Ouroboros" > https://en.wikipedia.org/wiki/Ouroboros > > > Monty Python themed Python language things: > > - The Cheese Shop > https://en.wikipedia.org/wiki/Cheese_Shop_sketch > https://wiki.python.org/moin/CheeseShop > https://pypi.org/ > > - The Knights Who Say Ni -- > https://en.wikipedia.org/wiki/Knights_Who_Say_Ni > https://github.com/python/the-knights-who-say-ni > > - Miss Islington > https://github.com/python/miss-islington > > On Sunday, September 16, 2018, Chris Angelico wrote: > >> On Sun, Sep 16, 2018 at 8:14 PM, Widom PsychoPath >> wrote: >> > I hereby propose that the Language should be renamed to Cobra, after >> > the brilliant military strategist Cobra Commander , who has been >> > history's most efficient and brilliant strategist. >> > >> > Yours forever, >> > Jack Daniels >> >> Now THAT is the *true* spirit of professionalism. Here we have proof >> that the scientific community cares about the language. >> >> About 80 proof, I think. >> >> ChrisA >> _______________________________________________ >> Python-ideas mailing list >> Python-ideas at python.org >> https://mail.python.org/mailman/listinfo/python-ideas >> Code of Conduct: http://python.org/psf/codeofconduct/ >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wes.turner at gmail.com Sun Sep 16 10:07:09 2018 From: wes.turner at gmail.com (Wes Turner) Date: Sun, 16 Sep 2018 10:07:09 -0400 Subject: [Python-ideas] SEC: Spectre variant 2: GCC: -mindirect-branch=thunk -mindirect-branch-register In-Reply-To: References: Message-ID: Should Python builds add `-mindirect-branch=thunk -mindirect-branch-register` to CFLAGS? Where would this be to be added in the build scripts with which architectures? /QSpectre is the MSVC build flag for Spectre Variant 1: > The /Qspectre option is available in Visual Studio 2017 version 15.7 and later. https://docs.microsoft.com/en-us/cpp/build/reference/qspectre?view=vs-2017 security@ directed me to the issue tracker / lists, so I'm forwarding this to python-dev and python-ideas, as well. # Forwarded message From: *Wes Turner* Date: Wednesday, September 12, 2018 Subject: SEC: Spectre variant 2: GCC: -mindirect-branch=thunk -mindirect-branch-register To: distutils-sig Should C extensions that compile all add `-mindirect-branch=thunk -mindirect-branch-register` [1] to mitigate the risk of Spectre variant 2 (which does indeed affect user space applications as well as kernels)? 
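As one concrete illustration of where such flags could be attached for a C extension (a minimal sketch, assuming a GCC new enough to know these options; the package, extension, and file names below are made up and are not taken from this thread): setuptools lets a project pass per-extension compiler options through extra_compile_args, so the retpoline flags could be listed there.

    # Hypothetical setup.py sketch: pass the retpoline flags discussed above
    # to one extension's compile step via extra_compile_args.
    # Older toolchains (for example the manylinux1 GCC mentioned later in
    # this thread) will reject these options, so a real build should probe
    # the compiler first.
    from setuptools import setup, Extension

    SPECTRE_V2_CFLAGS = ["-mindirect-branch=thunk", "-mindirect-branch-register"]

    ext = Extension(
        "example._speedups",           # hypothetical extension name
        sources=["src/_speedups.c"],   # hypothetical source file
        extra_compile_args=SPECTRE_V2_CFLAGS,
    )

    setup(name="example", version="0.0.1", ext_modules=[ext])

For a build of the interpreter itself, the conventional route would be to put the options in CFLAGS before running configure, as raised further down in this thread. The references for the flags follow: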
[1] https://github.com/speed47/spectre-meltdown-checker/ issues/119#issuecomment-361432244 [2] https://en.wikipedia.org/wiki/Spectre_(security_vulnerability) [3] https://en.wikipedia.org/wiki/Speculative_Store_Bypass# Speculative_execution_exploit_variants On Wednesday, September 12, 2018, Wes Turner wrote: > >> On Wednesday, September 12, 2018, Joni Orponen >> wrote: >> >>> On Wed, Sep 12, 2018 at 8:48 PM Wes Turner wrote: >>> >>>> Should C extensions that compile all add >>>> `-mindirect-branch=thunk -mindirect-branch-register` [1] to mitigate >>>> the risk of Spectre variant 2 (which does indeed affect user space >>>> applications as well as kernels)? >>>> >>> >>> Are those available on GCC <= 4.2.0 as per PEP 513? >>> >> >> AFAIU, only >> GCC 7.3 and 8 have the retpoline (indirect-branch=thunk) support enabled >> by the `-mindirect-branch=thunk -mindirect-branch-register` CFLAGS. >> > On Wednesday, September 12, 2018, Wes Turner wrote: > "What is a retpoline and how does it work?" > https://stackoverflow.com/questions/48089426/what-is-a- > retpoline-and-how-does-it-work > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wes.turner at gmail.com Sun Sep 16 10:16:19 2018 From: wes.turner at gmail.com (Wes Turner) Date: Sun, 16 Sep 2018 10:16:19 -0400 Subject: [Python-ideas] SEC: Spectre variant 2: GCC: -mindirect-branch=thunk -mindirect-branch-register In-Reply-To: References: Message-ID: On Sunday, September 16, 2018, Wes Turner wrote: > Should Python builds add `-mindirect-branch=thunk > -mindirect-branch-register` to CFLAGS? > > Where would this be to be added in the build scripts with which > architectures? > > /QSpectre is the MSVC build flag for Spectre Variant 1: > > > The /Qspectre option is available in Visual Studio 2017 version 15.7 and > later. > > https://docs.microsoft.com/en-us/cpp/build/reference/qspectre?view=vs-2017 > > security@ directed me to the issue tracker / lists, > so I'm forwarding this to python-dev and python-ideas, as well. > > # Forwarded message > From: *Wes Turner* > Date: Wednesday, September 12, 2018 > Subject: SEC: Spectre variant 2: GCC: -mindirect-branch=thunk > -mindirect-branch-register > To: distutils-sig > > > Should C extensions that compile all add > `-mindirect-branch=thunk -mindirect-branch-register` [1] to mitigate the > risk of Spectre variant 2 (which does indeed affect user space applications > as well as kernels)? > > [1] https://github.com/speed47/spectre-meltdown-checker/issues/ > 119#issuecomment-361432244 > [2] https://en.wikipedia.org/wiki/Spectre_(security_vulnerability) > [3] https://en.wikipedia.org/wiki/Speculative_Store_Bypass#Specu > lative_execution_exploit_variants > > On Wednesday, September 12, 2018, Wes Turner wrote: >> >>> On Wednesday, September 12, 2018, Joni Orponen >>> wrote: >>> >>>> On Wed, Sep 12, 2018 at 8:48 PM Wes Turner >>>> wrote: >>>> >>>>> Should C extensions that compile all add >>>>> `-mindirect-branch=thunk -mindirect-branch-register` [1] to mitigate >>>>> the risk of Spectre variant 2 (which does indeed affect user space >>>>> applications as well as kernels)? >>>>> >>>> >>>> Are those available on GCC <= 4.2.0 as per PEP 513? >>>> >>> >>> AFAIU, only >>> GCC 7.3 and 8 have the retpoline (indirect-branch=thunk) support enabled >>> by the `-mindirect-branch=thunk -mindirect-branch-register` CFLAGS. >>> >> > On Wednesday, September 12, 2018, Wes Turner > wrote: > >> "What is a retpoline and how does it work?" 
>> https://stackoverflow.com/questions/48089426/what-is-a-retpo >> line-and-how-does-it-work >> >> There's probably already been an ANN announce about this? If not, someone with appropriate security posture and syntax could address: Whether python.org binaries are already rebuilt Whether OS package binaries are already rebuilt Whether anaconda binaries are already rebuilt Whether C extension binaries on pypi are already rebuilt -------------- next part -------------- An HTML attachment was scrubbed... URL: From jamtlu at gmail.com Sat Sep 15 22:12:07 2018 From: jamtlu at gmail.com (James Lu) Date: Sat, 15 Sep 2018 22:12:07 -0400 Subject: [Python-ideas] +1 Pre-conditions and post-conditions by Kaufmann In-Reply-To: References: Message-ID: In response to your Sat, 15 Sep 2018 22:14:43: A good and thoughtful read. I agree with all your points. +1. From wes.turner at gmail.com Sun Sep 16 11:48:11 2018 From: wes.turner at gmail.com (Wes Turner) Date: Sun, 16 Sep 2018 11:48:11 -0400 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <20180915193848.749c1e03@fsol> Message-ID: On Saturday, September 15, 2018, Franklin? Lee < leewangzhong+python at gmail.com> wrote: > I am very disappointed with the responses to this thread. We have > mockery, dismissiveness, and even insinuations about OP's > psychological health. Whether or not OP is a troll, and whether or not > OP's idea has merit, that kind of response is unnecessary and > unhelpful. > > (While I lean toward OP being a troll, the fact that the OP's name is > the same as a Canadian actress is insignificant. Chinese surnames are > single-syllable, there are only so many one-syllable surnames, and > "Samantha" is a common-enough name.) > > Since Antoine challenged Calvin to name names, I will name names. If > the thread devolves into one-on-one fights, then you'll know why > Calvin didn't do it. > > Antoine: > - Accusing the OP of not being open-minded for proposing (not > "insisting on"!) the idea at all. > "You ask others to be open-minded, but fail to show such an > attitude yourself." > - Labeling the OP's position as reactionary, and intolerant. > "And, as a French person, I have to notice this is yet another > attempt to impose reactionary, intolerant American politics on the > rest of the world (or of the Python community)." > > David Mertz: Sarcastically suggesting that we burn programming books > if they use "beautiful" in their titles. > > Chris Angelico: This implied accusation: > "Not everyone assumes the worst about words." > > Oleg: > - Dismissing the whole post as a troll.* > "Nice trolling, go on! :-D" > - Calling the OP's idea stupid, and calling a different (settled) > decision stupid. (One can argue Oleg isn't really calling anything > stupid, but I preemptively say that's a stupid argument.) > "Removing master/slave is almost as stupid as ugly/beautiful." > - Dismissing the stance as oversensitive offense-taking. > "People shouldn't try and take personal offense to things that > haven't been applied to them personally, or, even worse, complain > about a term applied to anything/anyone else in a way they perceive to > be offensive." > - Mockery: The entire email with this line is spent on mockery: > 'I also propose to ban the following technical terms that carry > dark meanings: "abort", "kill" and "execute" (stop the genocide!) ...' 
> > Greg: Another email spent entirely on mockery: > """If we're going to object to "slave", we should object to > "robot" as well, since it's derived from a Czech word meaning "forced > worker".""" > > * There is a difference between discussing whether it is a troll post > and flippantly stating it as fact. The first brings up a relevant > concern. The second says, "No one can reasonably believe what you > claim to believe, so I won't treat you as a rational person." > > Jacco: > - This is completely disrespectful and way over the line. Don't try to > make a psychological evaluation from two emails, especially when it's > just someone having an idea you don't like. > """However, if merely the word ugly being on a page can be > "harmful", what you really need is professional help, not a change to > Python. Because there's obviously been some things in your past you > need to work through.""" > - Mockery. > """If we have to ban "Ugly" for american sensitivities, then > perhaps we need to ban a number of others for china's sensitivities. > Where will it end ?""" > > There are people making serious arguments against the idea, including > the people above. But those arguments could have been made without the > above examples. The above quotes don't treat the OP or the OP's ideas > as worthy of a serious and mature response. > > > P.S.: I read Poe's Law not as a warning against falling for trolls, > but as a warning about confirmation bias. If I keep falling for poes > of group G, it's probably because I'm too far too willing to believe > negative things about G, and don't care to understand them. It may be most relevant to interpret the poem as it is: culled from various writings of the community. What do we need to remember? Our criticism can hurt fragile feelings and egos; which we need to check at the door. https://en.wikipedia.org/wiki/Aesthetics https://en.wikipedia.org/wiki/Body_dysmorphic_disorder https://en.wikipedia.org/wiki/Compensation (time spent disambiguating) Python is primarily an online community; where words are our appearance. "Most reasonable people would understand that" we're *clearly* talking about engineering design aesthetic. Not body dysmorphia. Objectively, Compared to C, Python is fat and slow. It's not fast, but it's pretty, and that's all it has going for it, In this crazy world. Mean losers, T-shirts. Code of Conduct: http://python.org/psf/codeofconduct/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From wes.turner at gmail.com Sun Sep 16 12:52:11 2018 From: wes.turner at gmail.com (Wes Turner) Date: Sun, 16 Sep 2018 12:52:11 -0400 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <20180915193848.749c1e03@fsol> Message-ID: On Sunday, September 16, 2018, Wes Turner wrote: > > It may be most relevant to interpret the poem as it is: culled from > various writings of the community. > > What do we need to remember? Our criticism can hurt fragile feelings and > egos; which we need to check at the door. > Dear Python community, > > https://en.wikipedia.org/wiki/Aesthetics > https://en.wikipedia.org/wiki/Body_dysmorphic_disorder > https://en.wikipedia.org/wiki/Compensation (time spent disambiguating) > > Python is primarily an online community; where words are our appearance. > We should strive to be concise. "I don't like it because it's ugly" is not a helpful code review. Subjective assertions of superiority are only so useful in context to the objectives (e.g. 
reducing complexity) > > "Most reasonable people would understand that" we're *clearly* talking > about engineering design aesthetic. > > Not body dysmorphia. > Objectively, > That's subjective Compared to C, > C has different objectives for a different market. > Python is fat and slow. > Python is plenty fast for people all over the world who are solving problems for others. > It's not fast, but it's pretty, > and that's all it has going for it, > Python has lots of things to be proud of and confident about (i.e. helping others and paying bills). > In this crazy world. > > > Mean losers, > T-shirts. > Attention seeking AND problem solving > > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From leewangzhong+python at gmail.com Sun Sep 16 13:32:26 2018 From: leewangzhong+python at gmail.com (Franklin? Lee) Date: Sun, 16 Sep 2018 13:32:26 -0400 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <20180915193848.749c1e03@fsol> Message-ID: On Sun, Sep 16, 2018 at 4:14 AM Antoine Pitrou wrote: > > Yeah, right. > > You know, when I was pointing out Calvin not being very brave by > attacking a bunch of people without giving names, my aim was to merely > point out how dishonest and disrespectful his attitude his. *Not* to > encourage someone to turn his post into more of a clusterfuck of > personal attacks. Please give an example of an attack I made above. I see accusations, made against adults, regarding actions. On Sun, Sep 16, 2018, 05:45 David Mertz wrote: > > You have missed the use of *reductio ad absurdum* in my comment and several others. This argument structure is one of the fundamental forms of good logical reasoning, and shows nothing dismissive or insulting. The specifics book titles I used were carefully chosen, and you'd do well to think about why those specific books (and read all of them, if you haven't). Reductio ad absurdum and mockery are not mutually exclusive. Mockery can be thought of as a natural (though often fallacious) form of reductio ad absurdum: the position (or person) should not be taken seriously because the consequences are absurd. The examples I chose were not simply coldly rational arguments, so we can look at the extra choices made. I'm sure you can see the difference between these two logically-equivalent arguments: - "Assume there is a largest prime. Then we can construct a large number which is not divisible by any prime. But that's impossible, so there is no largest prime." - "Since you believe there is a largest prime, we have a large number not divisible by any prime. I'll start on that pull request to change INT_MAX to the largest prime." For granularity, let: "disrespectful" := Negative respect, such as an insult. "unrespectful" := Without proper respect, but not as bad as "disrespectful". Other than mockery, there can be disrespectful argumentum ad absurdum. Often, the difference between a respectful and an unrespectful argument is how much logical effort is needed to reach the absurdity, because that is the effort that wasn't put in, or incorrectly put in. (Thinking off the top of my head, a slippery slope argument is often unrespectful.) 
While people often make small logical mistakes when new to a subject or idea, or miss immediate consequences, there are many cases where the person has clearly thought about their position before, and a logically-obvious one-line counter is an insult to their effort, if not their intelligence. If you do think you have an obvious one-line counter, even after considering whether you misunderstood the original argument, then putting it as a question is more respectful than stating it conclusively, which is more respectful than making it sarcastically. More generally, a respectful argument aims to convince your opponents, while an argument made with the audience in mind can be unrespectful, and an argument which mostly appeals to those that already agree is usually disrespectful. In your specific case, you used sarcasm, a one-liner argument (listing a few titles without elaborating on their relevance), slippery slope, talked about book-burning when the OP was suggesting a change for future work, and focused on the word "beautiful" where the OP focused on "ugly". Was your post crafted to convince the OP, or for the sake of a laugh? Do you believe that your post could convince any of your opponents? Would you have said it that way in a room where no one was already on your side? From solipsis at pitrou.net Sun Sep 16 14:03:24 2018 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 16 Sep 2018 20:03:24 +0200 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause References: <20180915193848.749c1e03@fsol> Message-ID: <20180916200324.044e5503@fsol> On Sun, 16 Sep 2018 13:32:26 -0400 "Franklin? Lee" wrote: > On Sun, Sep 16, 2018 at 4:14 AM Antoine Pitrou wrote: > > > > Yeah, right. > > > > You know, when I was pointing out Calvin not being very brave by > > attacking a bunch of people without giving names, my aim was to merely > > point out how dishonest and disrespectful his attitude his. *Not* to > > encourage someone to turn his post into more of a clusterfuck of > > personal attacks. > > Please give an example of an attack I made above. I see accusations, > made against adults, regarding actions. Why would I? As you admit, you are not countering arguments, but accusing people. I have better to do than to waste my time in this kind of game. This mailing-list is not supposed to be a playground for angry people. Regards Antoine. From wes.turner at gmail.com Sun Sep 16 20:29:06 2018 From: wes.turner at gmail.com (Wes Turner) Date: Sun, 16 Sep 2018 20:29:06 -0400 Subject: [Python-ideas] SEC: Spectre variant 2: GCC: -mindirect-branch=thunk -mindirect-branch-register In-Reply-To: References: Message-ID: Are all current Python builds and C extensions vulnerable to Spectre variants {1, 2, *}? There are now multiple threads: "SEC: Spectre variant 2: GCC: -mindirect-branch=thunk -mindirect-branch-register" - https://mail.python.org/mm3/archives/list/distutils-sig at python.org/thread/4BGE226DB5EWIAT5VCJ75QD5ASOVJZCM/ - https://mail.python.org/pipermail/python-ideas/2018-September/053473.html - https://mail.python.org/pipermail/python-dev/2018-September/155199.html Original thread (that I forwarded to security@): "[Python-ideas] Executable space protection: NX bit," https://mail.python.org/pipermail/python-ideas/2018-September/053175.html > ~ Do trampolines / nested functions in C extensions switch off the NX bit? 
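Since the replies quoted below turn on whether a given toolchain even understands these options (the manylinux1 compiler predates them), a build script could test the active compiler before adding the flags. A minimal sketch, assuming only a cc-like compiler on PATH; nothing in it is taken from the thread:

    # Hypothetical helper: probe whether the C compiler accepts the
    # retpoline flags before adding them to a build. This mirrors the
    # usual autoconf-style "try to compile a trivial file" check.
    import os
    import subprocess
    import tempfile

    RETPOLINE_FLAGS = ["-mindirect-branch=thunk", "-mindirect-branch-register"]

    def compiler_accepts(flags, cc="cc"):
        """Return True if `cc` compiles a trivial C file with `flags`."""
        with tempfile.TemporaryDirectory() as tmp:
            src = os.path.join(tmp, "probe.c")
            with open(src, "w") as f:
                f.write("int main(void) { return 0; }\n")
            cmd = [cc] + list(flags) + ["-c", src, "-o", os.path.join(tmp, "probe.o")]
            try:
                result = subprocess.run(cmd, stdout=subprocess.DEVNULL,
                                        stderr=subprocess.DEVNULL)
            except OSError:
                return False
            return result.returncode == 0

    extra_cflags = RETPOLINE_FLAGS if compiler_accepts(RETPOLINE_FLAGS) else []

The result could then feed extra_compile_args or CFLAGS as sketched earlier in this thread, falling back to an unmitigated build on toolchains that lack the options.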
On Sunday, September 16, 2018, Nathaniel Smith wrote: > On Wed, Sep 12, 2018, 12:29 Joni Orponen wrote: > >> On Wed, Sep 12, 2018 at 8:48 PM Wes Turner wrote: >> >>> Should C extensions that compile all add >>> `-mindirect-branch=thunk -mindirect-branch-register` [1] to mitigate the >>> risk of Spectre variant 2 (which does indeed affect user space applications >>> as well as kernels)? >>> >> >> Are those available on GCC <= 4.2.0 as per PEP 513? >> > > Pretty sure no manylinux1 compiler is ever going to get these mitigations. > > For manylinux2010 on x86-64, we can easily use a much newer compiler: RH > maintains a recent compiler, currently gcc 7.3, or if that doesn't work for > some reason then the conda folks have be apparently figured out how to > build the equivalent from gcc upstream releases. > Are there different CFLAGS and/or gcc compatibility flags in conda builds of Python and C extensions? Where are those set in conda builds? What's the best way to set CFLAGS in Python builds and C extensions? export CFLAGS="-mindirect-branch=thunk -mindirect-branch-register" ./configure make ? Why are we supposed to use an old version of GCC that doesn't have the retpoline patches that only mitigate Spectre variant 2? > > Unfortunately, the manylinux2010 infrastructure is not quite ready... I'm > pretty sure it needs some volunteers to push it to the finish line, though > unfortunately I haven't had enough time to keep track. > "PEP 571 -- The manylinux2010 Platform Tag" https://www.python.org/dev/peps/pep-0571/ "Tracking issue for manylinux2010 rollout" https://github.com/pypa/manylinux/issues/179 Are all current Python builds and C extensions vulnerable to Spectre variants {1, 2, *}? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From j.van.dorp at deonet.nl Mon Sep 17 02:53:29 2018 From: j.van.dorp at deonet.nl (Jacco van Dorp) Date: Mon, 17 Sep 2018 08:53:29 +0200 Subject: [Python-ideas] Retire or reword the namesake of the Language In-Reply-To: References: Message-ID: Yeah, sounds about as sensible as the recent "ban ugly" campaign. +1. Op zo 16 sep. 2018 om 15:49 schreef Wes Turner : > Anyways, speaking of dragons, here are some ideas for new logos: > > "Strong Bad Email #58: Dragon" > https://youtu.be/90X5NJleYJQ > > https://en.wikipedia.org/wiki/Monty_Python > > On Sunday, September 16, 2018, Wes Turner wrote: > >> There's already a thing named Cobra. >> >> https://github.com/opencobra/cobrapy >> >> >> "Python (mythology)" >> https://en.wikipedia.org/wiki/Python_(mythology) >> ... Serpent/Dragon guarding the omphalos. >> >> "Ouroboros" >> https://en.wikipedia.org/wiki/Ouroboros >> >> >> Monty Python themed Python language things: >> >> - The Cheese Shop >> https://en.wikipedia.org/wiki/Cheese_Shop_sketch >> https://wiki.python.org/moin/CheeseShop >> https://pypi.org/ >> >> - The Knights Who Say Ni -- >> https://en.wikipedia.org/wiki/Knights_Who_Say_Ni >> https://github.com/python/the-knights-who-say-ni >> >> - Miss Islington >> https://github.com/python/miss-islington >> >> On Sunday, September 16, 2018, Chris Angelico wrote: >> >>> On Sun, Sep 16, 2018 at 8:14 PM, Widom PsychoPath >>> wrote: >>> > I hereby propose that the Language should be renamed to Cobra, after >>> > the brilliant military strategist Cobra Commander , who has been >>> > history's most efficient and brilliant strategist. >>> > >>> > Yours forever, >>> > Jack Daniels >>> >>> Now THAT is the *true* spirit of professionalism. 
Here we have proof >>> that the scientific community cares about the language. >>> >>> About 80 proof, I think. >>> >>> ChrisA >>> _______________________________________________ >>> Python-ideas mailing list >>> Python-ideas at python.org >>> https://mail.python.org/mailman/listinfo/python-ideas >>> Code of Conduct: http://python.org/psf/codeofconduct/ >>> >> _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From j.van.dorp at deonet.nl Mon Sep 17 03:16:48 2018 From: j.van.dorp at deonet.nl (Jacco van Dorp) Date: Mon, 17 Sep 2018 09:16:48 +0200 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <20180915193848.749c1e03@fsol> Message-ID: Op zo 16 sep. 2018 om 05:40 schreef Franklin? Lee < leewangzhong+python at gmail.com>: > I am very disappointed with the responses to this thread. We have > mockery, dismissiveness, and even insinuations about OP's > psychological health. Whether or not OP is a troll, and whether or not > OP's idea has merit, that kind of response is unnecessary and > unhelpful. Sure, I'll take your bait. > Jacco: > - This is completely disrespectful and way over the line. Don't try to > make a psychological evaluation from two emails, especially when it's > just someone having an idea you don't like. > """However, if merely the word ugly being on a page can be > "harmful", what you really need is professional help, not a change to > Python. Because there's obviously been some things in your past you > need to work through.""" > Is it, though ? Even more because in order for it to apply to any one person's aesthetics, you need to pull it out of context first. You need to be looking for it. Being triggered by a word this simple is not exactly a sign of mental stability. I know a girl who's been raped more than she can count - but the word doesn't trigger her like this(only makes her want to beat up rapists). If people can do that, then surely a playground insult wont reduce you to tears, right ? > - Mockery. > """If we have to ban "Ugly" for american sensitivities, then > perhaps we need to ban a number of others for china's sensitivities. > Where will it end ?""" > Well, on the internet, the word "nigger" is already basically banned for american sensibilities, while the version in dutch, my language, is "neger", which doesn't really have any racist connotation, probably because the amount of slaves that have ever been in what's currently the netherlands, has been negligible. However, it's use is effectively banned because some other culture considers it offensive to use. Why should your culture be my censorship ? And it's no coincidence I used china there - it's notorious for it's censorship. If merely labeling a word as "offensive" is sufficient to ban it, I daresay they'd mark a whole lot more words as offensive. And why would their opinion be any less valid than yours ? Don't think you're special - you're not. If you want to give yourself the power to ban words for offensive, you're giving that same power to everyone. And since offensive is subjective, it means anybody could ban any word, since you couldn't tell the difference between real or fake offense. Therefore, it is a disastrous idea and I'll predict the end of Python if we go down that route. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From niki.spahiev at gmail.com Mon Sep 17 03:53:08 2018 From: niki.spahiev at gmail.com (Niki Spahiev) Date: Mon, 17 Sep 2018 10:53:08 +0300 Subject: [Python-ideas] Combine f-strings with i18n In-Reply-To: References: Message-ID: On 14.09.2018 12:33, Chris Angelico wrote: > On Fri, Sep 14, 2018 at 7:02 PM, Hans Polak wrote: >> I have recently updated my code to use the more pythonic f-string instead of >> '{}'.format() > > Well there's your problem right there. Don't change your string > formatting choice on that basis. F-strings aren't "more Pythonic" than > either .format() or percent-formatting; all three of them are > supported for good reasons. > > For i18n, I think .format() is probably your best bet. Trying to mess > with f-strings to give them methods is a path of great hairiness, as > they are not actually objects (they're expressions). Is it possible to use f-strings when making multilingual software? When i write non-hobby software translation is hard requirement. Niki From arj.python at gmail.com Mon Sep 17 04:03:15 2018 From: arj.python at gmail.com (Abdur-Rahmaan Janhangeer) Date: Mon, 17 Sep 2018 12:03:15 +0400 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> Message-ID: @SamanthaQuan Beautiful is a degree above the good. Beautiful in the context applied defines the refined, with a nuance of excellence. As the definition of beauty is not standard, it is what appeals to an individual. It is not only about passing the functional quality test, it is about perfection. Fields of application varies. A mathematician might, after going through some derivations express : "Beautiful !" The dictionary here https://en.oxforddictionaries.com/definition/beautiful defines it as --- 1Pleasing the senses or mind aesthetically. ?beautiful poetry? ?a beautiful young woman? 1.1 Of a very high standard; excellent ?she spoke in beautiful English? --- The word beautiful hints to the fact that code authorship or software craftmanship is an art, a science. It presumes that as time unfolds itself, masters must be produced to further excellence. It encourages high standards. Abdur-Rahmaan Janhangeer Mauritius -------------- next part -------------- An HTML attachment was scrubbed... URL: From hpolak at polak.es Mon Sep 17 04:11:18 2018 From: hpolak at polak.es (Hans Polak) Date: Mon, 17 Sep 2018 10:11:18 +0200 Subject: [Python-ideas] Combine f-strings with i18n In-Reply-To: References: Message-ID: <874ed867-6d68-b775-4331-ec62499cf366@polak.es> On 17/09/18 09:53, Niki Spahiev wrote: > > Is it possible to use f-strings when making multilingual software? > When i write non-hobby software translation is hard requirement. > At this moment, it seems that this is not possible. My use case is not very unique and that's why I wrote the proposal in the first place. I'm working on a web server / application server. On the web, you have to take the users preferences into account, including language. If a user has the navigator configured for English, I have to return English (if I am doing i18n). That's why I would like to see a parameter that can be passed to the f-string. I don't think this should be too problematic, really. pygettext.py extracts strings surrounded by _('') My proposal would be to do that with f-strings. Let pygettext.py extract f-strings. 
The compiler can then rewrite these to normal unicode strings. For instance: f'Hi {user}'.language('es') would become T(_('Hi {user}'), 'es', user=user) My first email had pseudo-code. This is my working function. def T(translatable_string, language=None, *args, **kwargs): if args: print(args) if 'es' == language: # Return translated, formatted string return es.gettext(translatable_string).format(**kwargs) # Default, return formatted string return translatable_string.format(**kwargs) -------------- next part -------------- An HTML attachment was scrubbed... URL: From leewangzhong+python at gmail.com Mon Sep 17 04:38:58 2018 From: leewangzhong+python at gmail.com (Franklin? Lee) Date: Mon, 17 Sep 2018 04:38:58 -0400 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: <20180916200324.044e5503@fsol> References: <20180915193848.749c1e03@fsol> <20180916200324.044e5503@fsol> Message-ID: On Sun, Sep 16, 2018 at 2:04 PM Antoine Pitrou wrote: > > On Sun, 16 Sep 2018 13:32:26 -0400 > "Franklin? Lee" > wrote: > > On Sun, Sep 16, 2018 at 4:14 AM Antoine Pitrou wrote: > > > > > > Yeah, right. > > > > > > You know, when I was pointing out Calvin not being very brave by > > > attacking a bunch of people without giving names, my aim was to merely > > > point out how dishonest and disrespectful his attitude his. *Not* to > > > encourage someone to turn his post into more of a clusterfuck of > > > personal attacks. > > > > Please give an example of an attack I made above. I see accusations, > > made against adults, regarding actions. > > Why would I? As you admit, you are not countering arguments, but > accusing people. I have better to do than to waste my time in this > kind of game. This mailing-list is not supposed to be a playground for > angry people. I made accusations and explained them. You accused me of making personal attacks, but won't back it up when challenged, saying that it'd be a waste of time. If it's not important, why make the accusation in the first place? I have thought about making arguments and explanations, but erased them because it could distract from this point. I'm far less concerned about the proposal (in either direction) than about the treatment of a new user and a controversial proposal. The harm of that is here and now. From rosuav at gmail.com Mon Sep 17 04:59:54 2018 From: rosuav at gmail.com (Chris Angelico) Date: Mon, 17 Sep 2018 18:59:54 +1000 Subject: [Python-ideas] Combine f-strings with i18n In-Reply-To: References: Message-ID: On Mon, Sep 17, 2018 at 5:53 PM, Niki Spahiev wrote: > On 14.09.2018 12:33, Chris Angelico wrote: >> >> On Fri, Sep 14, 2018 at 7:02 PM, Hans Polak wrote: >>> >>> I have recently updated my code to use the more pythonic f-string instead >>> of >>> '{}'.format() >> >> >> Well there's your problem right there. Don't change your string >> formatting choice on that basis. F-strings aren't "more Pythonic" than >> either .format() or percent-formatting; all three of them are >> supported for good reasons. >> >> For i18n, I think .format() is probably your best bet. Trying to mess >> with f-strings to give them methods is a path of great hairiness, as >> they are not actually objects (they're expressions). > > > Is it possible to use f-strings when making multilingual software? > When i write non-hobby software translation is hard requirement. I won't say it's *impossible*, but it's certainly not what I would recommend. 
Use one of the other formatting methods (percent formatting or the .format() method), since they start with a single string object rather than an expression. Don't assume that f-strings should be used for everything just because they're newer. They have their place, and that place is NOT translation. ChrisA From p.f.moore at gmail.com Mon Sep 17 05:33:13 2018 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 17 Sep 2018 10:33:13 +0100 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <20180915193848.749c1e03@fsol> <20180916200324.044e5503@fsol> Message-ID: On Mon, 17 Sep 2018 at 09:40, Franklin? Lee wrote: > > On Sun, Sep 16, 2018 at 2:04 PM Antoine Pitrou wrote: > > > > On Sun, 16 Sep 2018 13:32:26 -0400 > > "Franklin? Lee" > > wrote: > > > On Sun, Sep 16, 2018 at 4:14 AM Antoine Pitrou wrote: > > > > > > > > Yeah, right. > > > > > > > > You know, when I was pointing out Calvin not being very brave by > > > > attacking a bunch of people without giving names, my aim was to merely > > > > point out how dishonest and disrespectful his attitude his. *Not* to > > > > encourage someone to turn his post into more of a clusterfuck of > > > > personal attacks. > > > > > > Please give an example of an attack I made above. I see accusations, > > > made against adults, regarding actions. > > > > Why would I? As you admit, you are not countering arguments, but > > accusing people. I have better to do than to waste my time in this > > kind of game. This mailing-list is not supposed to be a playground for > > angry people. > > I made accusations and explained them. You accused me of making > personal attacks, but won't back it up when challenged, saying that > it'd be a waste of time. If it's not important, why make the > accusation in the first place? > > I have thought about making arguments and explanations, but erased > them because it could distract from this point. I'm far less concerned > about the proposal (in either direction) than about the treatment of a > new user and a controversial proposal. The harm of that is here and > now. Please can we drop this line of discussion. It's neither productive nor helpful, and it's skirting very close to what I'd consider to be in violation of the CoC. Paul From leewangzhong+python at gmail.com Mon Sep 17 06:09:37 2018 From: leewangzhong+python at gmail.com (Franklin? Lee) Date: Mon, 17 Sep 2018 06:09:37 -0400 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <20180915193848.749c1e03@fsol> Message-ID: On Mon, Sep 17, 2018 at 3:17 AM Jacco van Dorp wrote: > > Op zo 16 sep. 2018 om 05:40 schreef Franklin? Lee : >> >> Jacco: >> - This is completely disrespectful and way over the line. Don't try to >> make a psychological evaluation from two emails, especially when it's >> just someone having an idea you don't like. >> """However, if merely the word ugly being on a page can be >> "harmful", what you really need is professional help, not a change to >> Python. Because there's obviously been some things in your past you >> need to work through.""" > > > Is it, though ? Even more because in order for it to apply to any one person's aesthetics, you need to pull it out of context first. You need to be looking for it. Being triggered by a word this simple is not exactly a sign of mental stability. Is it disrespectful to give a psychological diagnosis in a discussion? Usually. 
It is dismissive (it's inarguably an ad hominem), potentially insulting (because it's often intentionally used that way, so the listener might interpret it that way even when it isn't intended), and it is based on very little information. I think it's safe to assume that you're not a trained professional, or even a well-read amateur, since you would otherwise know how much information is fed into a proper diagnosis, so it is also as inappropriate as giving lawyer advice without a disclaimer, or stating your quantum telepathy ideas as scientific "fact". But unlike the other fields, making psychology claims about your opponents during an argument is harmful, and not just theoretically harmful. It's a personal attack, and tries to invalidate their right to even be in the discussion by saying that they're fundamentally irrational. That's harmful to both people and the discussion. Look at how upset people get in any argument where someone accuses them of bias, which is a weaker claim than mentally unstable. We can go deeper. Your diagnosis is based on a single factor: If a person is harmed by the use of the word "ugly", they need psychological help. Even if that were true, you go further: Don't change Python for those people. Python should not accommodate them. We should not be inconvenienced by the needs of the mentally ill. Let's say something in Python harms people with a certain mental illness. They are getting treatment, or they don't know they need treatment, or they declined treatment. Should Python change (patch, document, maintain a change) to accommodate them? Or, on the other extreme, should Python tell them, "Go away, and come back when you're healthy"? What should they do in the meantime? (On-topic: I think the only reasonable answer is, "It depends." There's no slippery slope. There should be a weighing of the chance of harm, the amount of potential harm, and the cost of the change. I think it is not reasonable to accommodate everyone no matter the cost (and literally no one here has argued for that; I checked), and I think it is not reasonable to reject any accommodation if it's of a certain type (which people here have at least argued for, though I believe they're just not bothering to state their nuances).) I don't think you are unsympathetic to the mentally ill. I think the people you don't want to accommodate are actually the people who _claim to fight_ for the mentally ill. Question: If someone proposes an alternative that people think is better than the original, would you still be against making a change? Will you think of it as giving in to censorship or to the PC/SJW group? > I know a girl who's been raped more than she can count - but the word doesn't trigger her like this(only makes her want to beat up rapists). If people can do that, then surely a playground insult wont reduce you to tears, right ? It's complicated. Different people respond differently to different situations. And people have differing experiences, even if those experiences have the same label. It is hard to extrapolate from a handful of examples, because the mind and the real world are both complicated. >> - Mockery. >> """If we have to ban "Ugly" for american sensitivities, then >> perhaps we need to ban a number of others for china's sensitivities. 
>> Where will it end ?""" > > Well, on the internet, the word "nigger" is already basically banned for american sensibilities, while the version in dutch, my language, is "neger", which doesn't really have any racist connotation, probably because the amount of slaves that have ever been in what's currently the netherlands, has been negligible. However, it's use is effectively banned because some other culture considers it offensive to use. Why should your culture be my censorship ? And it's no coincidence I used china there - it's notorious for it's censorship. If merely labeling a word as "offensive" is sufficient to ban it, I daresay they'd mark a whole lot more words as offensive. And why would their opinion be any less valid than yours ? > > Don't think you're special - you're not. If you want to give yourself the power to ban words for offensive, you're giving that same power to everyone. And since offensive is subjective, it means anybody could ban any word, since you couldn't tell the difference between real or fake offense. (Arguably, the equivalent of "neger" is "Negro", which is today considered somewhat offensive in America, but is still used on official forms because it's preferred by some older black Americans. That's an interesting example of human culture.) No one argued that others can't also object, so I don't know if they'll see a problem with your slope. (I know of an example: "Laputa" was a prominent name in a Miyazaki film, but it was derived (through Jonathan Swift) from an offensive Spanish word, which the filmmakers didn't know. One could ask if we should censor the word in foreign localizations of the film, whether they land in Spanish-speaking countries or not.) But my objection wasn't that the argument was invalid, but that you wrote it as mockery. Let me try to rewrite the same argument without mockery. """You want us to remove "ugly", but isn't that only a problem to Americans? I am not an American. Do you believe Python should accommodate non-Americans objecting to common American words? What groups should we listen to, and which ones can we ignore? One group that comes to my mind is the Chinese government.""" This invites the speaker to outline the limits of their slopes, and explain themselves further. It opens discussion, instead of trying to close it. It respects their input, and asks for more. From arj.python at gmail.com Mon Sep 17 06:16:14 2018 From: arj.python at gmail.com (Abdur-Rahmaan Janhangeer) Date: Mon, 17 Sep 2018 14:16:14 +0400 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <20180915193848.749c1e03@fsol> Message-ID: if (out.of.subject).pingpong: time to let the thread go Abdur-Rahmaan Janhangeer Mauritius -------------- next part -------------- An HTML attachment was scrubbed... URL: From cspealma at redhat.com Mon Sep 17 08:25:27 2018 From: cspealma at redhat.com (Calvin Spealman) Date: Mon, 17 Sep 2018 08:25:27 -0400 Subject: [Python-ideas] Retire or reword the namesake of the Language In-Reply-To: References: Message-ID: I am very disappointed in the existence of this thread. Mocking discourse is extremely unpythonic. On Mon, Sep 17, 2018 at 2:54 AM Jacco van Dorp wrote: > Yeah, sounds about as sensible as the recent "ban ugly" campaign. > +1. > > Op zo 16 sep. 
2018 om 15:49 schreef Wes Turner : > >> Anyways, speaking of dragons, here are some ideas for new logos: >> >> "Strong Bad Email #58: Dragon" >> https://youtu.be/90X5NJleYJQ >> >> https://en.wikipedia.org/wiki/Monty_Python >> >> On Sunday, September 16, 2018, Wes Turner wrote: >> >>> There's already a thing named Cobra. >>> >>> https://github.com/opencobra/cobrapy >>> >>> >>> "Python (mythology)" >>> https://en.wikipedia.org/wiki/Python_(mythology) >>> ... Serpent/Dragon guarding the omphalos. >>> >>> "Ouroboros" >>> https://en.wikipedia.org/wiki/Ouroboros >>> >>> >>> Monty Python themed Python language things: >>> >>> - The Cheese Shop >>> https://en.wikipedia.org/wiki/Cheese_Shop_sketch >>> https://wiki.python.org/moin/CheeseShop >>> https://pypi.org/ >>> >>> - The Knights Who Say Ni -- >>> https://en.wikipedia.org/wiki/Knights_Who_Say_Ni >>> https://github.com/python/the-knights-who-say-ni >>> >>> - Miss Islington >>> https://github.com/python/miss-islington >>> >>> On Sunday, September 16, 2018, Chris Angelico wrote: >>> >>>> On Sun, Sep 16, 2018 at 8:14 PM, Widom PsychoPath >>>> wrote: >>>> > I hereby propose that the Language should be renamed to Cobra, after >>>> > the brilliant military strategist Cobra Commander , who has been >>>> > history's most efficient and brilliant strategist. >>>> > >>>> > Yours forever, >>>> > Jack Daniels >>>> >>>> Now THAT is the *true* spirit of professionalism. Here we have proof >>>> that the scientific community cares about the language. >>>> >>>> About 80 proof, I think. >>>> >>>> ChrisA >>>> _______________________________________________ >>>> Python-ideas mailing list >>>> Python-ideas at python.org >>>> https://mail.python.org/mailman/listinfo/python-ideas >>>> Code of Conduct: http://python.org/psf/codeofconduct/ >>>> >>> _______________________________________________ >> Python-ideas mailing list >> Python-ideas at python.org >> https://mail.python.org/mailman/listinfo/python-ideas >> Code of Conduct: http://python.org/psf/codeofconduct/ >> > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Mon Sep 17 08:38:54 2018 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 17 Sep 2018 14:38:54 +0200 Subject: [Python-ideas] Retire or reword the namesake of the Language References: Message-ID: <20180917143854.341ff6ce@fsol> It's not like the Monty Python (whom the language was named after) would have dared mocking the discourse and manners of all kinds of social groups, let alone have a laugh at the expense of beliefs and ideologies. Regards Antoine. On Mon, 17 Sep 2018 08:25:27 -0400 Calvin Spealman wrote: > I am very disappointed in the existence of this thread. Mocking discourse > is extremely unpythonic. > > On Mon, Sep 17, 2018 at 2:54 AM Jacco van Dorp wrote: > > > Yeah, sounds about as sensible as the recent "ban ugly" campaign. > > +1. > > > > Op zo 16 sep. 2018 om 15:49 schreef Wes Turner : > > > >> Anyways, speaking of dragons, here are some ideas for new logos: > >> > >> "Strong Bad Email #58: Dragon" > >> https://youtu.be/90X5NJleYJQ > >> > >> https://en.wikipedia.org/wiki/Monty_Python > >> > >> On Sunday, September 16, 2018, Wes Turner wrote: > >> > >>> There's already a thing named Cobra. 
> >>> > >>> https://github.com/opencobra/cobrapy > >>> > >>> > >>> "Python (mythology)" > >>> https://en.wikipedia.org/wiki/Python_(mythology) > >>> ... Serpent/Dragon guarding the omphalos. > >>> > >>> "Ouroboros" > >>> https://en.wikipedia.org/wiki/Ouroboros > >>> > >>> > >>> Monty Python themed Python language things: > >>> > >>> - The Cheese Shop > >>> https://en.wikipedia.org/wiki/Cheese_Shop_sketch > >>> https://wiki.python.org/moin/CheeseShop > >>> https://pypi.org/ > >>> > >>> - The Knights Who Say Ni -- > >>> https://en.wikipedia.org/wiki/Knights_Who_Say_Ni > >>> https://github.com/python/the-knights-who-say-ni > >>> > >>> - Miss Islington > >>> https://github.com/python/miss-islington > >>> > >>> On Sunday, September 16, 2018, Chris Angelico wrote: > >>> > >>>> On Sun, Sep 16, 2018 at 8:14 PM, Widom PsychoPath > >>>> wrote: > >>>> > I hereby propose that the Language should be renamed to Cobra, after > >>>> > the brilliant military strategist Cobra Commander , who has been > >>>> > history's most efficient and brilliant strategist. > >>>> > > >>>> > Yours forever, > >>>> > Jack Daniels > >>>> > >>>> Now THAT is the *true* spirit of professionalism. Here we have proof > >>>> that the scientific community cares about the language. > >>>> > >>>> About 80 proof, I think. > >>>> > >>>> ChrisA > >>>> _______________________________________________ > >>>> Python-ideas mailing list > >>>> Python-ideas at python.org > >>>> https://mail.python.org/mailman/listinfo/python-ideas > >>>> Code of Conduct: http://python.org/psf/codeofconduct/ > >>>> > >>> _______________________________________________ > >> Python-ideas mailing list > >> Python-ideas at python.org > >> https://mail.python.org/mailman/listinfo/python-ideas > >> Code of Conduct: http://python.org/psf/codeofconduct/ > >> > > _______________________________________________ > > Python-ideas mailing list > > Python-ideas at python.org > > https://mail.python.org/mailman/listinfo/python-ideas > > Code of Conduct: http://python.org/psf/codeofconduct/ > > > From ctbrown at ucdavis.edu Mon Sep 17 08:50:02 2018 From: ctbrown at ucdavis.edu (C. Titus Brown) Date: Mon, 17 Sep 2018 05:50:02 -0700 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <20180915193848.749c1e03@fsol> Message-ID: <85350DA5-B2F2-4D2B-A2F0-3BA0ED4E157D@ucdavis.edu> Hi everyone, on behalf of the moderators? please, let?s stop discussing who accused whom of what, and either stick to the discussion at hand or be silent. If you can?t make a point without aggression or name calling, then it?s not a point you should be making. (That?s a general statement about this list and this thread, not about any one particular recent e-mail.) 
thanks, ?titus > On Sep 17, 2018, at 3:16 AM, Abdur-Rahmaan Janhangeer wrote: > > if (out.of.subject).pingpong: > time to let the thread go > > Abdur-Rahmaan Janhangeer > Mauritius > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ From Omar.Balbuena at scm.ca Mon Sep 17 09:27:58 2018 From: Omar.Balbuena at scm.ca (Omar Balbuena) Date: Mon, 17 Sep 2018 13:27:58 +0000 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: <85350DA5-B2F2-4D2B-A2F0-3BA0ED4E157D@ucdavis.edu> References: <20180915193848.749c1e03@fsol> <85350DA5-B2F2-4D2B-A2F0-3BA0ED4E157D@ucdavis.edu> Message-ID: I love the Zen of Python and I occasionally cite one in a commit. Usually it's either flat/nested or the one about namespaces. I have never used beautiful/ugly. I think it would be incredibly conceited to cite it at any commit or code review. I don't think it serves anything. However I would welcome a serious attempt of rewording it (I don't have any suggestions myself though). -----Original Message----- From: Python-ideas [mailto:python-ideas-bounces+omar.balbuena=scm.ca at python.org] On Behalf Of C. Titus Brown Sent: Monday, September 17, 2018 08:50 To: Abdur-Rahmaan Janhangeer Cc: python-ideas Subject: Re: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause Hi everyone, on behalf of the moderators? please, let?s stop discussing who accused whom of what, and either stick to the discussion at hand or be silent. If you can?t make a point without aggression or name calling, then it?s not a point you should be making. (That?s a general statement about this list and this thread, not about any one particular recent e-mail.) thanks, ?titus > On Sep 17, 2018, at 3:16 AM, Abdur-Rahmaan Janhangeer wrote: > > if (out.of.subject).pingpong: > time to let the thread go > > Abdur-Rahmaan Janhangeer > Mauritius > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ _______________________________________________ Python-ideas mailing list Python-ideas at python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/ From sf at fermigier.com Mon Sep 17 10:22:31 2018 From: sf at fermigier.com (=?UTF-8?Q?St=C3=A9fane_Fermigier?=) Date: Mon, 17 Sep 2018 16:22:31 +0200 Subject: [Python-ideas] Retire or reword the namesake of the Language In-Reply-To: References: Message-ID: This proposal unfortunately carries the risk of reminding people of "CORBA", a technology that was once regarded as the highest pinnacle of the development of distributed systems, before it was sidelined by XMLRPC, then SOAP, then Docker containers exchanging Protobuf messages over WebSockets. My two cents (at being gently sarcastic). S. On Sun, Sep 16, 2018 at 12:14 PM Widom PsychoPath wrote: > Guten Tag, > > I am Jack and I am grateful to see the efficiency of scientific > computing in Python. > > However, What deeply saddens me is that the namesake "Python" has > unfortunately been derived from the title of the uncivilised British > jester troupe "Monty Python". This is something that deeply infuriates > me and is against the morals of my culture. 
Although humor is an > integral aspect of the life of an Untermensch, I believe that Python, > A language used as an interface to majority of Scientific computing > software should be renamed to something more suitable. > > I hereby propose that the Language should be renamed to Cobra, after > the brilliant military strategist Cobra Commander , who has been > history's most efficient and brilliant strategist. > > I hope that my message is not mistaken for an attempt to humor. I am > physically unable to experience the same. Please revert back to the > mail with your thoughts and constructive criticism. > > Yours forever, > Jack Daniels > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -- Stefane Fermigier - http://fermigier.com/ - http://twitter.com/sfermigier - http://linkedin.com/in/sfermigier Founder & CEO, Abilian - Enterprise Social Software - http://www.abilian.com/ Chairman, Free&OSS Group @ Systematic Cluster - http://www.gt-logiciel-libre.org/ Co-Chairman, National Council for Free & Open Source Software (CNLL) - http://cnll.fr/ Founder & Organiser, PyParis & PyData Paris - http://pyparis.org/ & http://pydata.fr/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From wes.turner at gmail.com Mon Sep 17 10:39:14 2018 From: wes.turner at gmail.com (Wes Turner) Date: Mon, 17 Sep 2018 10:39:14 -0400 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <20180915193848.749c1e03@fsol> Message-ID: I think it's meant to be ironic? Why would that be the first sentence of a poem about software and the Python newsgroup/mailing list community? A certain percentage of people might be offended by changing the first line (the frame of) of said poem; to "I'm better than you". Dominance and arrogance are upsetting to a certain percentage, so that shouldn't occur. (Though arrogance tends to be the norm in many open source communities which are necessarily discerning and selective; in order to avoid amateurish mediocrity). So, in a way, "Beautiful is better than ugly" was the CoC in the Python community for many years; so, now that the CoC is in place, the best thing to do may be to just remove the Zen of Python entirely; rather than dominate the authors' sarcastic poem until it's devoid of its intentional tone. On Sunday, September 16, 2018, Wes Turner wrote: > > > On Sunday, September 16, 2018, Wes Turner wrote: > >> >> It may be most relevant to interpret the poem as it is: culled from >> various writings of the community. >> >> What do we need to remember? Our criticism can hurt fragile feelings and >> egos; which we need to check at the door. >> > > Dear Python community, > > >> >> https://en.wikipedia.org/wiki/Aesthetics >> https://en.wikipedia.org/wiki/Body_dysmorphic_disorder >> https://en.wikipedia.org/wiki/Compensation (time spent disambiguating) >> >> Python is primarily an online community; where words are our appearance. >> > > We should strive to be concise. > > "I don't like it because it's ugly" is not a helpful code review. > > Subjective assertions of superiority are only so useful in context to the > objectives (e.g. reducing complexity) > > >> >> "Most reasonable people would understand that" we're *clearly* talking >> about engineering design aesthetic. >> >> Not body dysmorphia. 
>> Objectively, >> > > That's subjective > > Compared to C, >> > > C has different objectives for a different market. > > >> Python is fat and slow. >> > > Python is plenty fast for people all over the world who are solving > problems for others. > > >> It's not fast, but it's pretty, >> and that's all it has going for it, >> > > Python has lots of things to be proud of and confident about (i.e. helping > others and paying bills). > > >> In this crazy world. >> >> >> Mean losers, >> T-shirts. >> > > Attention seeking AND problem solving > > >> >> Code of Conduct: http://python.org/psf/codeofconduct/ >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From j.van.dorp at deonet.nl Mon Sep 17 10:49:30 2018 From: j.van.dorp at deonet.nl (Jacco van Dorp) Date: Mon, 17 Sep 2018 16:49:30 +0200 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <20180915193848.749c1e03@fsol> Message-ID: Op ma 17 sep. 2018 om 16:40 schreef Wes Turner : > I think it's meant to be ironic? > > Why would that be the first sentence of a poem about software and the > Python newsgroup/mailing list community? > > A certain percentage of people might be offended by changing the first > line (the frame of) of said poem; to "I'm better than you". > > Dominance and arrogance are upsetting to a certain percentage, so that > shouldn't occur. (Though arrogance tends to be the norm in many open source > communities which are necessarily discerning and selective; in order to > avoid amateurish mediocrity). > > So, in a way, "Beautiful is better than ugly" was the CoC in the Python > community for many years; so, now that the CoC is in place, the best thing > to do may be to just remove the Zen of Python entirely; rather than > dominate the authors' sarcastic poem until it's devoid of its intentional > tone. > > I always considered the Zen to be about code only. I don't think I ever read the CoC, and just assumed "at least pretend to be a decent person". I don't think the CoC has anything to do with your code, right ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From arj.python at gmail.com Mon Sep 17 11:18:07 2018 From: arj.python at gmail.com (Abdur-Rahmaan Janhangeer) Date: Mon, 17 Sep 2018 19:18:07 +0400 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> References: <8408891536827802@sas1-23a37bc8251c.qloud-c.yandex.net> Message-ID: Zen of The Python Mailing List >>> import that on topic is better than off topic the dish of toxicity is made up of opinion attacking irony is it's fine herbs top posting should be counselled homeworks are not to be done mail clients are the tastes and colours of life a mailing list serves it's purpose, unless specified ideas are the flagship of focus balazing pingponging is sign of Zen explosion RTFM has kinder alternatives good english is preferred, but makes you not a better programmer ... Abdur-Rahmaan Janhangeer Mauritius -------------- next part -------------- An HTML attachment was scrubbed... URL: From jamtlu at gmail.com Mon Sep 17 11:20:17 2018 From: jamtlu at gmail.com (James Lu) Date: Mon, 17 Sep 2018 11:20:17 -0400 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause Message-ID: It?s been almost a week since this ?discussion? first started. Can we please stop this in the name of productive work on python-ideas? 
Frankly, you don?t need to reply just because you can point out something wrong with someone else?s argument. Post because it?s worthwhile to hear, not because you have something to say. This mindless and combative attitude is a big reason why Guido was motivated to suspend himself. From boxed at killingar.net Mon Sep 17 13:16:24 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Mon, 17 Sep 2018 19:16:24 +0200 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: Message-ID: <7885E131-C94C-44A6-82B1-2E157C744655@killingar.net> > It?s been almost a week since this ?discussion? first started. Can we please stop this in the name of productive work on python-ideas? A better use of time might be to discuss moving to a better forum system where moderation is easier/possible. Email somehow has a shape that makes those things 100% probable and you can?t easily silence discussions that are uninteresting. / Anders From turnbull.stephen.fw at u.tsukuba.ac.jp Mon Sep 17 13:42:21 2018 From: turnbull.stephen.fw at u.tsukuba.ac.jp (Stephen J. Turnbull) Date: Tue, 18 Sep 2018 02:42:21 +0900 Subject: [Python-ideas] Combine f-strings with i18n In-Reply-To: <874ed867-6d68-b775-4331-ec62499cf366@polak.es> References: <874ed867-6d68-b775-4331-ec62499cf366@polak.es> Message-ID: <23455.59261.224419.818192@turnbull.sk.tsukuba.ac.jp> Hans Polak writes: > On 17/09/18 09:53, Niki Spahiev wrote: > > > > Is it possible to use f-strings when making multilingual software? > > When i write non-hobby software translation is hard requirement. > > At this moment, it seems that this is not possible. No, it's not possible. > If a user has the navigator configured for English, I have to > return English (if I am doing i18n). This is understood. Nobody is telling you what you want is an unreasonable desire. I'm telling you that I'm pretty sure your proposed syntax isn't going to happen, because it requires deep changes in the way Python evaluates expressions as far as I can see. > That's why I would like to see a parameter that can be passed to > the f-string. This doesn't make sense to me. Such configurations are long-lasting. In this context, the POSIX model (where the target language, or priority list of languages, is configured in the environment) is reasonable. There's no good reason for passing the language every time a string is formatted throughout an interaction with such a user. What we want is a way to tell the f-string to translate itself, and optionally specify a language. > I don't think this should be too problematic, really. Your proposal to use method syntax is *definitely* problematic. The f-string is an expression, and must be evaluated to a str first according to the language definition. > The compiler can then rewrite these to normal unicode strings. For > instance: f'Hi {user}'.language('es') would become T(_('Hi {user}'), > 'es', user=user) It could, but it's not going to. Implementing that with a reasonable amount of backward compatibility requires two tokens of lookahead and a new keyword as far as I can see. The problem is that unless the .language method is invoked on the f-string in the same expression, the f-string needs to be converted to a string immediately, as it is in Python 3.6 and 3.7. To decide whether to do this in the case where there is a method invoked, the parser needs to read the f-string (lookahead = 0), the "." 
(lookahead = 1), and the token "language" (lookahead = 2), and then for the parser to know that this is the special construct, it needs to know the special token. "Keyword" in this context simply means "a token that the parser knows about", but in general in Python we want keywords to be reserved to the language, which is considered a very high cost. What could work is an extension to the formatting language. I suggest abusing the *conversion flag*. (It's an abuse because I'm going to apply it to the whole f-string, while the current Language Reference says it's applied to the value being formatted.[1]) This flag would only be allowed as the first item in the string. The idea is that `f"{lang!g}Hello, {user}!"` would be interpreted as _ = get_gettext(lang) _("Hello, {user}!").format(user=user) and `f"{!g}Hello, {user}!"` as `_("Hello, {user}!").format(user=user)`, reusing the the most recent value of `_`. The "g" in "!g" stands for "gettext", of course. GNU xgettext can be taught to recognize things like `f"{...!g}` as translatable string markers; I'm sure pygettext can too. I'm assuming the implementation of get_gettext from my earlier post, reproduced at the end for reader convenience. One warning about this syntax: I think the gettext module is a pretty popular way to implement message localization. However, I'm not sure it's the only way, and I would guess that folks who use something else would want to be able to use f-strings with that package, too. So there may need to be a way to configure the translation engine. > # Use duck-typing of gettext.translation objects > class NullTranslation: > def __init__(self): > self.gettext = lambda s: s > > def get_gettext(language, translation={'C': NullTranslation()}): > if language not in translation: > translation[language] = \ > gettext.translation('myapplication', languages=[language]) > return translation[language].gettext > > and > > # This could be one line, but I guess in many cases you're likely > # to use use the gettext function repeatedly. Also, use of the > # _() idiom marks translatable string for translators. > _ = get_gettext(language) > _(translatable_string).format(key=value...) Footnotes: [1] I'm not sure whether "the g conversion is applied to the 'lang' variable, and the effect is to set the global 'translate' function" is pure sophistry or not, but I don't find it convincing. YMMV From danilo.bellini at gmail.com Mon Sep 17 14:06:31 2018 From: danilo.bellini at gmail.com (Danilo J. S. Bellini) Date: Mon, 17 Sep 2018 15:06:31 -0300 Subject: [Python-ideas] Revert "RuntimeError: generator raised StopIteration" Message-ID: Hi, The idea is simple: restore the "next" built-in and the "StopIteration" propagation behavior from Python 3.6. I'm using Python 3.7 for a while (as it's the default in Arch Linux), and there's one single backwards incompatible change from Python 3.6 that is breaking the code of some packages from PyPI. The reason is always the same: a "next" called inside a generator function was expected to propagate the StopIteration, but that no longer happens. As an example of something that had been made public (in PyPI), I've tried to run: from articlemeta.client import RestfulClient journals = list(RestfulClient().journals(collection="ecu")) This breaks with "RuntimeError: generator raised StopIteration". I already warned the maintainers of that project, it probably will be fixed. 
Another example (disclaimer: this time it's a package I've created): a simple "pip install dose" only works on Python<3.7, since some reStructuredText processing functions rely on the StopIteration propagation from calls to "next". I needed this package last saturday for a Coding Dojo, but I had to download the package from the repository and change some stuff before starting the Dojo (during the time reserved for it). I can fix this in the packages I'm maintaining by creating a new "fix_python37_next_runtime_error" decorator to restore the old behavior on every generator function that uses "next" in its body. But I can't do that to all packages from other people, and having to change/monkeypatch imported stuff in order to keep it working in this new Python version is getting annoying already. Perhaps adding a new kwarg to the "next" built-in to choose between a "propagate" default or a "error" alternative would avoid this. -- Danilo J. S. Bellini --------------- "*It is not our business to set up prohibitions, but to arrive at conventions.*" (R. Carnap) -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at trueblade.com Mon Sep 17 14:10:52 2018 From: eric at trueblade.com (Eric V. Smith) Date: Mon, 17 Sep 2018 14:10:52 -0400 Subject: [Python-ideas] Combine f-strings with i18n In-Reply-To: <23455.59261.224419.818192@turnbull.sk.tsukuba.ac.jp> References: <874ed867-6d68-b775-4331-ec62499cf366@polak.es> <23455.59261.224419.818192@turnbull.sk.tsukuba.ac.jp> Message-ID: See also PEP 501, which could be used for i18n. Eric On 9/17/2018 1:42 PM, Stephen J. Turnbull wrote: > Hans Polak writes: > > On 17/09/18 09:53, Niki Spahiev wrote: > > > > > > Is it possible to use f-strings when making multilingual software? > > > When i write non-hobby software translation is hard requirement. > > > > At this moment, it seems that this is not possible. > > No, it's not possible. > > > If a user has the navigator configured for English, I have to > > return English (if I am doing i18n). > > This is understood. Nobody is telling you what you want is an > unreasonable desire. I'm telling you that I'm pretty sure your > proposed syntax isn't going to happen, because it requires deep > changes in the way Python evaluates expressions as far as I can see. > > > That's why I would like to see a parameter that can be passed to > > the f-string. > > This doesn't make sense to me. Such configurations are long-lasting. > In this context, the POSIX model (where the target language, or > priority list of languages, is configured in the environment) is > reasonable. There's no good reason for passing the language every > time a string is formatted throughout an interaction with such a user. > > What we want is a way to tell the f-string to translate itself, and > optionally specify a language. > > > I don't think this should be too problematic, really. > > Your proposal to use method syntax is *definitely* problematic. The > f-string is an expression, and must be evaluated to a str first > according to the language definition. > > > The compiler can then rewrite these to normal unicode strings. For > > instance: f'Hi {user}'.language('es') would become T(_('Hi {user}'), > > 'es', user=user) > > It could, but it's not going to. Implementing that with a reasonable > amount of backward compatibility requires two tokens of lookahead and > a new keyword as far as I can see. 
The problem is that unless the > .language method is invoked on the f-string in the same expression, > the f-string needs to be converted to a string immediately, as it is > in Python 3.6 and 3.7. To decide whether to do this in the case where > there is a method invoked, the parser needs to read the f-string > (lookahead = 0), the "." (lookahead = 1), and the token "language" > (lookahead = 2), and then for the parser to know that this is the > special construct, it needs to know the special token. "Keyword" in > this context simply means "a token that the parser knows about", but > in general in Python we want keywords to be reserved to the language, > which is considered a very high cost. > > What could work is an extension to the formatting language. I suggest > abusing the *conversion flag*. (It's an abuse because I'm going to > apply it to the whole f-string, while the current Language Reference > says it's applied to the value being formatted.[1]) This flag would only > be allowed as the first item in the string. The idea is that > `f"{lang!g}Hello, {user}!"` would be interpreted as > > _ = get_gettext(lang) > _("Hello, {user}!").format(user=user) > > and `f"{!g}Hello, {user}!"` as `_("Hello, {user}!").format(user=user)`, > reusing the the most recent value of `_`. The "g" in "!g" stands for > "gettext", of course. GNU xgettext can be taught to recognize things > like `f"{...!g}` as translatable string markers; I'm sure pygettext > can too. > > I'm assuming the implementation of get_gettext from my earlier post, > reproduced at the end for reader convenience. > > One warning about this syntax: I think the gettext module is a pretty > popular way to implement message localization. However, I'm not sure > it's the only way, and I would guess that folks who use something else > would want to be able to use f-strings with that package, too. So > there may need to be a way to configure the translation engine. > >> # Use duck-typing of gettext.translation objects >> class NullTranslation: >> def __init__(self): >> self.gettext = lambda s: s >> >> def get_gettext(language, translation={'C': NullTranslation()}): >> if language not in translation: >> translation[language] = \ >> gettext.translation('myapplication', languages=[language]) >> return translation[language].gettext >> >> and >> >> # This could be one line, but I guess in many cases you're likely >> # to use use the gettext function repeatedly. Also, use of the >> # _() idiom marks translatable string for translators. >> _ = get_gettext(language) >> _(translatable_string).format(key=value...) > > > Footnotes: > [1] I'm not sure whether "the g conversion is applied to the 'lang' > variable, and the effect is to set the global 'translate' function" is > pure sophistry or not, but I don't find it convincing. YMMV > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > From ethan at stoneleaf.us Mon Sep 17 14:24:22 2018 From: ethan at stoneleaf.us (Ethan Furman) Date: Mon, 17 Sep 2018 11:24:22 -0700 Subject: [Python-ideas] Revert "RuntimeError: generator raised StopIteration" In-Reply-To: References: Message-ID: <5B9FF156.6040202@stoneleaf.us> On 09/17/2018 11:06 AM, Danilo J. S. Bellini wrote: > The idea is simple: restore the "next" built-in and the "StopIteration" propagation behavior from Python 3.6. Unlikely to happen. 
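(An illustrative aside for readers who have not hit this yet: the pattern that breaks under PEP 479 and the usual fix look roughly like the sketch below; the function names and data are made up.)

    def first_items(iterables):
        for it in iterables:
            # next() on an empty iterable raises StopIteration inside the generator;
            # before 3.7 that silently ended the generator, under PEP 479 it is
            # re-raised as RuntimeError.
            yield next(iter(it))

    def first_items_fixed(iterables):
        for it in iterables:
            try:
                yield next(iter(it))
            except StopIteration:
                return          # the PEP 479-safe way to end the generator

    print(list(first_items_fixed([[1, 2], [], [3]])))   # [1] -- same as the old silent behaviour
    try:
        list(first_items([[1, 2], [], [3]]))
    except RuntimeError as exc:
        print(exc)              # "generator raised StopIteration" on Python 3.7+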
In 3.6 a deprecation warning started being issued for next inside generators that said it would raise a different exception in 3.7 [1]. -- ~Ethan~ [1] PEP 479: https://docs.python.org/3/whatsnew/3.5.html#whatsnew-pep-479 From jamtlu at gmail.com Mon Sep 17 14:49:26 2018 From: jamtlu at gmail.com (James Lu) Date: Mon, 17 Sep 2018 14:49:26 -0400 Subject: [Python-ideas] Moving to another forum system where moderation is possible Message-ID: I agree completely. I propose Python register a trial of Stack Overflow Teams. Stack Overflow Teams is essentially your own private Stack Overflow. (I will address the private part later.) Proposals would be questions and additions or criticism would be answers. You can express your support or dissent of a proposal using the voting. Flags and reviews can be used to moderate. Stack Overflow Chat can be used for quick and casual discussion, and also to move irregular or irrelevant discussions away from the main site. Although the Stack Overflow platform is typically used for technical Q&A, there is precedent for using it as a way to discuss proposals: this is precisely what Meta Stack Overflow dies and it?s seen decent success. Anyone can register a @python.org email. Stack Overflow Teams can be configured to allow anyone with a @python.org email join the python-ideas team. I?m sure Stack Overflow Inc. is willing to grant Stack Overflow Teams to the PSF pro bono after the trial period expires. You can configure stack overflow to get email notifications as well. > On Sep 17, 2018, at 1:16 PM, Anders Hovm?ller wrote: > > >> It?s been almost a week since this ?discussion? first started. Can we please stop this in the name of productive work on python-ideas? > > A better use of time might be to discuss moving to a better forum system where moderation is easier/possible. Email somehow has a shape that makes those things 100% probable and you can?t easily silence discussions that are uninteresting. > > / Anders From arj.python at gmail.com Mon Sep 17 15:22:26 2018 From: arj.python at gmail.com (Abdur-Rahmaan Janhangeer) Date: Mon, 17 Sep 2018 23:22:26 +0400 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: References: Message-ID: py already has a Zulip chat Abdur-Rahmaan Janhangeer Mauritius -------------- next part -------------- An HTML attachment was scrubbed... URL: From yselivanov.ml at gmail.com Mon Sep 17 15:34:23 2018 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Mon, 17 Sep 2018 15:34:23 -0400 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: References: Message-ID: It was decided to try https://www.discourse.org at the core dev sprints. We'll likely try it for the upcoming governance model/vote discussions. If it works well we'll consider using it for other discussions in the future. Let's table this topic for now as we're unlikely to (a) try anything else but Discource; (b) not to try Discource for governance discussions; (c) AFAIK we already have people who will set it up for us, so no help is needed. 
Yury On Mon, Sep 17, 2018 at 3:23 PM Abdur-Rahmaan Janhangeer wrote: > > py already has a Zulip chat > > Abdur-Rahmaan Janhangeer > Mauritius > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ -- Yury From psyker156 at gmail.com Mon Sep 17 15:35:27 2018 From: psyker156 at gmail.com (Philippe Godbout) Date: Mon, 17 Sep 2018 15:35:27 -0400 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: References: Message-ID: Also, by restricting to python.org email address, do we not run the risk of cutting off a lot of would be contributor? Le lun. 17 sept. 2018 ? 15:23, Abdur-Rahmaan Janhangeer < arj.python at gmail.com> a ?crit : > py already has a Zulip chat > > Abdur-Rahmaan Janhangeer > Mauritius > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jamtlu at gmail.com Mon Sep 17 16:16:56 2018 From: jamtlu at gmail.com (James Lu) Date: Mon, 17 Sep 2018 16:16:56 -0400 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: References: Message-ID: > It was decided to try https://www.discourse.org at the core dev > sprints. We'll likely try it for the upcoming governance model/vote > discussions. If it works well we'll consider using it for other > discussions in the future. > > Let's table this topic for now as we're unlikely to So... we?re going to be using discourse instead of Python-ideas mailing list? Or will we only try that until Discourse works for ?core sprints?? > On Sep 17, 2018, at 3:34 PM, Yury Selivanov wrote: > > It was decided to try https://www.discourse.org at the core dev > sprints. We'll likely try it for the upcoming governance model/vote > discussions. If it works well we'll consider using it for other > discussions in the future. > > Let's table this topic for now as we're unlikely to -------------- next part -------------- An HTML attachment was scrubbed... URL: From jamtlu at gmail.com Mon Sep 17 16:20:39 2018 From: jamtlu at gmail.com (James Lu) Date: Mon, 17 Sep 2018 16:20:39 -0400 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: References: Message-ID: <4EB50BF8-5A75-4E15-B4A9-BA3C6D1EC2B1@gmail.com> How can the Zulip chat be joined? Im interested in consolidating all the discussion into one centralized forum. Sent from my iPhone > On Sep 17, 2018, at 3:35 PM, Philippe Godbout wrote: > > Also, by restricting to python.org email address, do we not run the risk of cutting off a lot of would be contributor? > >> Le lun. 17 sept. 2018 ? 15:23, Abdur-Rahmaan Janhangeer a ?crit : >> py already has a Zulip chat >> >> Abdur-Rahmaan Janhangeer >> Mauritius >> _______________________________________________ >> Python-ideas mailing list >> Python-ideas at python.org >> https://mail.python.org/mailman/listinfo/python-ideas >> Code of Conduct: http://python.org/psf/codeofconduct/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From psyker156 at gmail.com Mon Sep 17 16:21:27 2018 From: psyker156 at gmail.com (Philippe Godbout) Date: Mon, 17 Sep 2018 16:21:27 -0400 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: <4EB50BF8-5A75-4E15-B4A9-BA3C6D1EC2B1@gmail.com> References: <4EB50BF8-5A75-4E15-B4A9-BA3C6D1EC2B1@gmail.com> Message-ID: Simply use: https://python.zulipchat.com/login/ Le lun. 17 sept. 2018 ? 16:20, James Lu a ?crit : > How can the Zulip chat be joined? Im interested in consolidating all the > discussion into one centralized forum. > > Sent from my iPhone > > On Sep 17, 2018, at 3:35 PM, Philippe Godbout wrote: > > Also, by restricting to python.org email address, do we not run the risk > of cutting off a lot of would be contributor? > > Le lun. 17 sept. 2018 ? 15:23, Abdur-Rahmaan Janhangeer < > arj.python at gmail.com> a ?crit : > >> py already has a Zulip chat >> >> Abdur-Rahmaan Janhangeer >> Mauritius >> _______________________________________________ >> Python-ideas mailing list >> Python-ideas at python.org >> https://mail.python.org/mailman/listinfo/python-ideas >> Code of Conduct: http://python.org/psf/codeofconduct/ >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yselivanov.ml at gmail.com Mon Sep 17 16:22:08 2018 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Mon, 17 Sep 2018 16:22:08 -0400 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: References: Message-ID: On Mon, Sep 17, 2018 at 4:16 PM James Lu wrote: [..] > So... we?re going to be using discourse instead of Python-ideas mailing list? Or will we only try that until Discourse works for ?core sprints?? Well, as I said: "If it works well we'll consider using it for other discussions in the future." We are do not know (right now) how exactly and for what exactly we use it. Using it for python-dev and python-ideas is one possible outcome. Yury From ethan at stoneleaf.us Mon Sep 17 16:22:26 2018 From: ethan at stoneleaf.us (Ethan Furman) Date: Mon, 17 Sep 2018 13:22:26 -0700 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: References: Message-ID: <5BA00D02.2010204@stoneleaf.us> On 09/17/2018 01:16 PM, James Lu wrote: > So... we?re going to be using discourse instead of Python-ideas mailing list? No. None of the mailing lists will be migrated at this time. The plan is to get a test instance set up, tried for a while on a specific issue or two, and evaluate our experiences then. We are also investigating ways to make the mailing lists themselves more manageable. -- ~Ethan~ From brett at python.org Mon Sep 17 21:22:38 2018 From: brett at python.org (Brett Cannon) Date: Mon, 17 Sep 2018 18:22:38 -0700 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: <4EB50BF8-5A75-4E15-B4A9-BA3C6D1EC2B1@gmail.com> References: <4EB50BF8-5A75-4E15-B4A9-BA3C6D1EC2B1@gmail.com> Message-ID: On Mon., Sep. 17, 2018, 13:21 James Lu, wrote: > How can the Zulip chat be joined? Im interested in consolidating all the > discussion into one centralized forum. > No consolidation is happening yet. We're testing out mailing list alternatives on smaller, more manageable lists first before we try to migrate something as large as python-ideas. In other words please be patient as we try to figure this out while knowing we are looking into this. 
-Brett > Sent from my iPhone > > On Sep 17, 2018, at 3:35 PM, Philippe Godbout wrote: > > Also, by restricting to python.org email address, do we not run the risk > of cutting off a lot of would be contributor? > > Le lun. 17 sept. 2018 ? 15:23, Abdur-Rahmaan Janhangeer < > arj.python at gmail.com> a ?crit : > >> py already has a Zulip chat >> >> Abdur-Rahmaan Janhangeer >> Mauritius >> _______________________________________________ >> Python-ideas mailing list >> Python-ideas at python.org >> https://mail.python.org/mailman/listinfo/python-ideas >> Code of Conduct: http://python.org/psf/codeofconduct/ >> > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From leewangzhong+python at gmail.com Mon Sep 17 21:42:30 2018 From: leewangzhong+python at gmail.com (Franklin? Lee) Date: Mon, 17 Sep 2018 21:42:30 -0400 Subject: [Python-ideas] Retire or reword the "Beautiful is better than ugly" Zen clause In-Reply-To: References: <20180915193848.749c1e03@fsol> Message-ID: On Mon, Sep 17, 2018 at 10:50 AM Jacco van Dorp wrote: > > Op ma 17 sep. 2018 om 16:40 schreef Wes Turner : >> >> I think it's meant to be ironic? >> >> Why would that be the first sentence of a poem about software and the Python newsgroup/mailing list community? >> >> A certain percentage of people might be offended by changing the first line (the frame of) of said poem; to "I'm better than you". >> >> Dominance and arrogance are upsetting to a certain percentage, so that shouldn't occur. (Though arrogance tends to be the norm in many open source communities which are necessarily discerning and selective; in order to avoid amateurish mediocrity). >> >> So, in a way, "Beautiful is better than ugly" was the CoC in the Python community for many years; so, now that the CoC is in place, the best thing to do may be to just remove the Zen of Python entirely; rather than dominate the authors' sarcastic poem until it's devoid of its intentional tone. >> > > I always considered the Zen to be about code only. I don't think I ever read the CoC, and just assumed "at least pretend to be a decent person". I don't think the CoC has anything to do with your code, right ? PEP 20 makes it sound like it's for the design of Python itself. The original post was from a metadiscussion about Python's design, not about its users' code. https://mail.python.org/pipermail/python-list/1999-June/001951.html By "CoC", Wes is referring to the Zen. The official Community Code of Conduct is not just about being nice, but about being nice for the purpose of improving Python. https://www.python.org/psf/codeofconduct/ If you're a jerk to Python users in other contexts (maybe even in python-list), the Code doesn't care. Your code is (usually) also outside of the community. The Code of Conduct is about the people, while the Zen is about the design, and PEP 8 is about the style, but all three are about potential improvements to Python, not your personal/professional life (though you could still apply them if you want). (It irks me for no real reason that something called the _Code_ of Conduct has nothing to do with code, but that's the fault of the mathematicians and early computer scientists for overloading an existing word.) 
From leewangzhong+python at gmail.com Mon Sep 17 22:06:28 2018 From: leewangzhong+python at gmail.com (Franklin? Lee) Date: Mon, 17 Sep 2018 22:06:28 -0400 Subject: [Python-ideas] Retire or reword the namesake of the Language In-Reply-To: <20180917143854.341ff6ce@fsol> References: <20180917143854.341ff6ce@fsol> Message-ID: Monty Python had the goal of making people laugh, while python-ideas has the goal of improving Python. With those priorities, we can have fun, but not at the expense of potential contributions and contributors. Other people aren't perfect, but sometimes you have to adapt to them for the sake of other goals. It may be easier if you think of it as writing a nasty workaround to an unmaintained or wontfix API, or, back in the 00's, doing anything at all to make a site work on Internet Explorer. On Mon, Sep 17, 2018 at 8:39 AM Antoine Pitrou wrote: > > It's not like the Monty Python (whom the language was named after) > would have dared mocking the discourse and manners of all kinds of > social groups, let alone have a laugh at the expense of beliefs and > ideologies. > > Regards > > Antoine. > > On Mon, 17 Sep 2018 08:25:27 -0400 > Calvin Spealman > wrote: > > I am very disappointed in the existence of this thread. Mocking discourse > > is extremely unpythonic. From python-ideas at mgmiller.net Mon Sep 17 23:24:22 2018 From: python-ideas at mgmiller.net (Mike Miller) Date: Mon, 17 Sep 2018 20:24:22 -0700 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: References: Message-ID: <78a98351-b4e2-0096-2554-ac00238a1b30@mgmiller.net> On 2018-09-17 11:49, James Lu wrote: > I agree completely. > >> On Sep 17, 2018, at 1:16 PM, Anders Hovm?ller wrote: >>> It?s been almost a week since this ?discussion? first started. Can we please stop this in the name of productive work on python-ideas? >> >> A better use of time might be to discuss moving to a better forum system where moderation is easier/possible. Email somehow has a shape that makes those things 100% probable and you can?t easily silence discussions that are uninteresting. A decent mail program can thread discussions and ignore the boring ones. I use Thunderbird, where the "k" key will easily silence a thread. Though I rarely use it in practice, favoring the delete key instead. -Mike From kulakov.ilya at gmail.com Mon Sep 17 23:43:00 2018 From: kulakov.ilya at gmail.com (Ilya Kulakov) Date: Mon, 17 Sep 2018 20:43:00 -0700 Subject: [Python-ideas] Deprecation utilities for the warnings module In-Reply-To: <3E800307-2D67-4E8E-B2C3-4C83875F0226@gmail.com> References: <3E800307-2D67-4E8E-B2C3-4C83875F0226@gmail.com> Message-ID: <8227973E-0E6C-4DDA-947A-5FD799A88473@gmail.com> After spending more time thinking about the implementation I came to a conclusion that it's not easy to generalize replacement of classes. Yes, with some work it's possible to ensure that old name references a new one. But that's not sufficient. If new class has different interface then user's code will fail. Library's author could provide a mapping via a dict or by manually implementing methods but that'd be only half of the problem. User's code would still pass "old" objects back to the library. So effectively the library would have to support both old and new making all work meaningless. I think it's possible to design a smart wrapper that behaves as an old class in user's code, but as a new class inside library. E.g. by analyzing traceback.extract_stack. 
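(An illustrative aside: a very rough sketch of that kind of traceback-based dispatch is below. The class names and the "is the caller inside the library?" heuristic are invented here for illustration, not taken from the proposal.)

    import os
    import traceback
    import warnings

    LIBRARY_DIR = os.path.dirname(os.path.abspath(__file__))   # assumed: the library's package dir

    class NewThing:                        # the replacement API
        def compute(self, value):
            return value * 2

    class OldThing:                        # kept only as a compatibility shim
        def __init__(self):
            self._new = NewThing()

        def old_compute(self, value):      # old spelling, still accepted from user code
            caller = traceback.extract_stack()[-2]              # the frame that called old_compute()
            inside_library = os.path.dirname(caller.filename) == LIBRARY_DIR
            if not inside_library:
                warnings.warn('OldThing.old_compute() is deprecated; use NewThing.compute()',
                              DeprecationWarning, stacklevel=2)
            return self._new.compute(value)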
But that is too clever and almost certainly comes with its own bag of flaws. I'm confident that this problem leaves the "obsolete" decorators out of scope of this enhancement. Would anyone be interested in using plain deprecation decorators? They are still quite useful: - Can warn when class is subclassed or class-level attributes are accessed (vs warn inside __init__/__new__) - Much easier to be seen by static analyzers Best Regards, Ilya Kulakov -------------- next part -------------- An HTML attachment was scrubbed... URL: From turnbull.stephen.fw at u.tsukuba.ac.jp Tue Sep 18 00:23:26 2018 From: turnbull.stephen.fw at u.tsukuba.ac.jp (Stephen J. Turnbull) Date: Tue, 18 Sep 2018 13:23:26 +0900 Subject: [Python-ideas] Combine f-strings with i18n - How about using PEP 501? In-Reply-To: References: <874ed867-6d68-b775-4331-ec62499cf366@polak.es> <23455.59261.224419.818192@turnbull.sk.tsukuba.ac.jp> Message-ID: <23456.32190.131662.996934@turnbull.sk.tsukuba.ac.jp> Eric V. Smith writes: > See also PEP 501, which could be used for i18n. I don't see how this immediately helps the OP, who wants a *literal* expression that automatically invokes the translation machinery as well as the interpolation machinery. The translation machinery needs access to the raw template string, which is what (human) translators will be provided by pygettext. But as far as I can see there is nothing in i-string processing that provides a hook for this. It presumably wouldn't be impossible to provide a class or factory function derived from InterpolationTemplate that transparently does the lookup, and then constructs an InterpolationTemplate (duplicating the compiler's work?) But a direct compilation to InterpolationTemplate is hard-wired for (literal) i-strings. AFAICS this is the *only* benefit of PEP 501 over simply defining the InterpolationTemplate class in a library module, and it's not usable by gettext-style I18N! We could add a `translate` method, to completely replace the raw_template *before* the compiler parses it into parsed_template. But this gets complicated in the sense of the Zen, because we want to be able to change the translate method on the fly (if the target language changes, or to change the gettext domain, or whatever), or if we have non-I18N uses for i-strings in our application. I hope I'm missing something! Steve From boxed at killingar.net Tue Sep 18 00:31:42 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Tue, 18 Sep 2018 06:31:42 +0200 Subject: [Python-ideas] Combine f-strings with i18n - How about using PEP 501? In-Reply-To: <23456.32190.131662.996934@turnbull.sk.tsukuba.ac.jp> References: <874ed867-6d68-b775-4331-ec62499cf366@polak.es> <23455.59261.224419.818192@turnbull.sk.tsukuba.ac.jp> <23456.32190.131662.996934@turnbull.sk.tsukuba.ac.jp> Message-ID: <15EBEFE6-D745-4C33-A08B-3080063A0F02@killingar.net> >> See also PEP 501, which could be used for i18n. > > I don't see how this immediately helps the OP, who wants a *literal* > expression that automatically invokes the translation machinery as > well as the interpolation machinery. Another way forward could be a preprocessor. All this can be done with a fairly simple script using parso. If the op is interested I could whip out a prototype. The cool thing about parso is that it?s a round trip AST so it?s easy to perform refactorings or in this case preprocessing without affecting formatting or comments. 
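As a rough sketch of the first step such a preprocessor needs, finding the f-string literals to rewrite, here is a version built on the standard tokenize module instead of parso; parso would do the same scan while also keeping the exact formatting for the round trip. The sample source is made up for the example:

import io
import tokenize

SAMPLE = '''\
user = "World"
greeting = f"Hello, {user}!"        # candidate for translation
label = "plain string, left alone"
'''

def find_fstrings(source):
    # On Python up to 3.11 an f-string arrives as a single STRING token;
    # 3.12 splits it into FSTRING_* tokens, so this check would need updating there.
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type != tokenize.STRING:
            continue
        first_quote = next(i for i, ch in enumerate(tok.string) if ch in "'\"")
        if "f" in tok.string[:first_quote].lower():
            yield tok.start[0], tok.start[1], tok.string

for row, col, text in find_fstrings(SAMPLE):
    print("line %d, column %d: %s" % (row, col, text))

A real preprocessor would then rewrite each hit into a translation call and write the modified source back out, which is where parso's round-trip behaviour earns its keep.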
/ Anders From hpolak at polak.es Tue Sep 18 03:59:34 2018 From: hpolak at polak.es (Hans Polak) Date: Tue, 18 Sep 2018 09:59:34 +0200 Subject: [Python-ideas] Combine f-strings with i18n - How about using PEP 501? In-Reply-To: <15EBEFE6-D745-4C33-A08B-3080063A0F02@killingar.net> References: <874ed867-6d68-b775-4331-ec62499cf366@polak.es> <23455.59261.224419.818192@turnbull.sk.tsukuba.ac.jp> <23456.32190.131662.996934@turnbull.sk.tsukuba.ac.jp> <15EBEFE6-D745-4C33-A08B-3080063A0F02@killingar.net> Message-ID: >> I don't see how this immediately helps the OP, who wants a *literal* >> expression that automatically invokes the translation machinery as >> well as the interpolation machinery. Actually, no, I do not want the expression to be automatically translated at compile time. It should be translated at run-time. There are three situations. 1. No translation, just a regular f-string. 2. App translation. The f-string gets translated to the configured language. 3. On the fly translation. The string gets translated to the language passed as an argument as required. In code, this would be. 1. f'Hi {user}' 2. f'{!g}Hi {user}' 3. f'{lang!g}Hi {user}' Cases 2 and 3 need some additional code, just like with gettext. I'm sorry if that wasn't clear from the start. All I want is the code to be simpler to write and maintain. I do not want to have complicated parsing for the compiler. >> Another way forward could be a preprocessor. All this can be done with a fairly simple script using parso. This is probably the idea. Cheers, Hans From hpolak at polak.es Tue Sep 18 04:12:06 2018 From: hpolak at polak.es (Hans Polak) Date: Tue, 18 Sep 2018 10:12:06 +0200 Subject: [Python-ideas] Combine f-strings with i18n In-Reply-To: <23455.59261.224419.818192@turnbull.sk.tsukuba.ac.jp> References: <874ed867-6d68-b775-4331-ec62499cf366@polak.es> <23455.59261.224419.818192@turnbull.sk.tsukuba.ac.jp> Message-ID: On 17/09/18 19:42, Stephen J. Turnbull wrote: > > That's why I would like to see a parameter that can be passed to > > the f-string. > > This doesn't make sense to me. If I get a request in English, I need to return English. If I get a request in French, I need to return French. # At the start of the app, the languages get loaded in memory. translate = translation('app','.locale') translate.install() es = translation('app','.locale',languages=['es']) es.install() # Get the preferred user language from the http request T(_('Hello {user}...'), user_language,user=user) def T(translatable_string, language=None, *args, **kwargs): if 'es' == language: # Return translated, formatted string return es.gettext(translatable_string).format(**kwargs) # Default, return formatted string return translatable_string.format(**kwargs) > Such configurations are long-lasting. If it is for the whole app, yes. Not if it is just the request. 1. No translation, just a regular f-string. 2. App translation. The f-string gets translated to the configured language. Long lasting configuration. 3. On the fly translation. The string gets translated to the language passed as an argument as required. > What could work is an extension to the formatting language. I suggest > abusing the *conversion flag*. (It's an abuse because I'm going to > apply it to the whole f-string, while the current Language Reference > says it's applied to the value being formatted.[1]) This flag would only > be allowed as the first item in the string. The idea is that > `f"{lang!g}Hello, {user}!"` would be interpreted as Excellent. The syntax is unimportant to me. 
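Until something along those lines exists, the run-time behaviour described for cases 2 and 3 can be approximated today with ordinary {}-templates plus str.format, much like the T() helper shown above; an f-string cannot be used directly because it interpolates before any catalogue lookup can happen. The small dictionary catalogue below stands in for real gettext translation objects, purely for illustration:

# Stand-in for gettext catalogues; a real app would use
# gettext.translation('app', '.locale', languages=['es']) and its gettext() method.
CATALOGUES = {
    'es': {'Hello {user}, you have {n} new messages.':
           'Hola {user}, tienes {n} mensajes nuevos.'},
}

DEFAULT_LANGUAGE = 'en'   # case 2: one language configured for the whole app

def T(template, language=None, **values):
    # Translate the template first, then interpolate the values.
    language = language or DEFAULT_LANGUAGE
    translated = CATALOGUES.get(language, {}).get(template, template)
    return translated.format(**values)

# case 1: no translation, plain interpolation
print(T('Hello {user}, you have {n} new messages.', user='Ana', n=3))
# case 3: language chosen per request at run time
print(T('Hello {user}, you have {n} new messages.', language='es', user='Ana', n=3))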
Cheers, Hans -------------- next part -------------- An HTML attachment was scrubbed... URL: From hpolak at polak.es Tue Sep 18 04:17:15 2018 From: hpolak at polak.es (Hans Polak) Date: Tue, 18 Sep 2018 10:17:15 +0200 Subject: [Python-ideas] Combine f-strings with i18n In-Reply-To: References: <874ed867-6d68-b775-4331-ec62499cf366@polak.es> <23455.59261.224419.818192@turnbull.sk.tsukuba.ac.jp> Message-ID: On 17/09/18 20:10, Eric V. Smith wrote: > See also PEP 501, which could be used for i18n. > My first idea was to propose a t-string (for translatable string). Cheers, Hans -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Tue Sep 18 04:41:19 2018 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 18 Sep 2018 10:41:19 +0200 Subject: [Python-ideas] Retire or reword the namesake of the Language References: <20180917143854.341ff6ce@fsol> Message-ID: <20180918104119.424c8e3a@fsol> On Mon, 17 Sep 2018 22:06:28 -0400 "Franklin? Lee" wrote: > Monty Python had the goal of making people laugh, while python-ideas > has the goal of improving Python. With those priorities, we can have > fun, but not at the expense of potential contributions and > contributors. You are right. Regards Antoine. From turnbull.stephen.fw at u.tsukuba.ac.jp Tue Sep 18 04:44:49 2018 From: turnbull.stephen.fw at u.tsukuba.ac.jp (Stephen J. Turnbull) Date: Tue, 18 Sep 2018 17:44:49 +0900 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: <78a98351-b4e2-0096-2554-ac00238a1b30@mgmiller.net> References: <78a98351-b4e2-0096-2554-ac00238a1b30@mgmiller.net> Message-ID: <23456.47873.934149.250940@turnbull.sk.tsukuba.ac.jp> Mike Miller writes: > A decent mail program can thread discussions and ignore the boring > ones. +100, but realistically, people aren't going to change their MUAs, especially on handhelds. The advantage of something like Discourse is that the server side controls the UX, and that's what people who don't want to change MUAs usually want. IMO the problems of these lists are a scale problem -- too many people, too many posts. As far as I can see, the only way to "fix" it is to become less inclusive, at least in terms of numbers. It's possible that a different technology will allow us to become more inclusive in terms of diversity at the same time that we become fewer. Steve From kohnt at tobiaskohn.ch Tue Sep 18 07:29:35 2018 From: kohnt at tobiaskohn.ch (Tobias Kohn) Date: Tue, 18 Sep 2018 13:29:35 +0200 Subject: [Python-ideas] Pattern Matching Syntax (reprise) Message-ID: <20180918132935.Horde.fDRgp_ZmxPK06oMIWN05y52@webmail.tobiaskohn.ch> Hello Everyone, Please excuse my being late for properly responding to the last thread on "Pattern Matching Syntax" [1].? As Robert Roskam has already pointed out at the beginning of that thread, there has been much previous discussion about adding pattern matching to Python, and several proposals exist.? It is therefore not my intention to propose yet another syntax choice for pattern matching, but more to share my experience with implementing it, in the hope of making a worthwhile contribution to the overall discussion. This summer, I basically ended up requiring pattern matching in Python for a research project I am working on.? Some initial hacks have then grown into a library for pattern matching in Python [2].? On the one hand, my design is certainly heavily influence by Scala, with which I also work on a regular basis.? 
On the other hand, I ran into various difficulties, challanges, and it has been particularly important to me to find a design that blends well with Python, and harnesses what Python already offers. I have written down my experience in the form of a discussion on several options concerning syntax [3], and implementation [4], respectively.? As the articles have turned out longer than I originally intended, it might take up too much time for those who have little interest in this matter in the first place.? However, considering that the subject of pattern matching has been coming up rather regularly, my experience might help to contribute something to the discussion.? Let me provide a brief summary here: 1. PATTERN MATCHING IS NOT SWITCH/CASE -------------------------------------- When I am talking about pattern matching, my goal is to do a deep structural comparison, and extract information from an object.? Consider, for instance, the problem of optimising the AST of a Python program, and eliminate patterns of the form `x + 0`, and `x - 0`.? What pattern matching should be offering here is a kind of comparison along the lines of: `if node == BinOp(?, (Add() or Sub()), Num(0)): ...` Ideally, it should also return the value of what is marked by the question mark `?` here, and assign it to a variable `left`, say.? The above comparison is often written as something like, e. g.: `case BinOp(left, Add()|Sub(), Num(0)): ...` This use of `case`, however, is not the same as a switch-statement. 2. ORTHOGONALITY ---------------- Getting the syntax and semantics of nested blocks right is hard.? Every block/suite in Python allows any kind of statements to occur, which allows for things like nested functions definitions, or having more than just methods in a class.? If we use a two-levelled block-structure, we run into the problem of finding good semantics for what the following means (note the variable `x` here): ``` match node: ??? x = 0 ??? case BinOp(left, Add(), Num(0)): ??????? ... ??? x += 1 ??? case BinOp(left, Mul(), Num(1)): ??????? ... ``` In the case of a "switch block", such additional statements like the `x=0`, and `x+=1` can become quite problematic.? On the other hand, requiring all statements inside the block to be case-statements violates the orthogonality found otherwise in Python. I feel that this dilemma is one of the core issues why the syntax of switch statements, or pattern matching seems so exceptionally hard.? In the end, it might therefore, indeed, make more sense to find a structure that is more in line with if/elif/else-chains.? This would lead to a form of pattern matching with little support for switch-statement, though. 3. IMPLEMENTATION ----------------- For the implementation of pattern matching, my package compiles the patterns into context-manager-classes, adds these classes in the background to the code, and then uses `with`-statements to express the `case`-statement.? If have found a neat way to make the execution of a `with`-statement's body conditional. Two things have been curcially important in the overall design: first, the "recompilation" cannot change the structure of the original code, or add any line.? This way, all error messages, and tooling should work as expected; the invasive procedure should be as minimal as possible.? Second, it is paramount that the semantics of how Python works is being preserved.? Even though the actual matching of patterns is done in external classes, all names must be resolved "locally" where the original `case`-statement lives.? 
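To see concretely what is at stake, here is the section-1 example written out by hand with nothing but the standard ast module (plain isinstance code, not the pyPMatch API). The extracted operand is an ordinary local name, and it only receives a value when the shape actually matches:

```
import ast

def match_add_sub_zero(node):
    # Hand-written equivalent of the pattern BinOp(left, Add() | Sub(), Num(0)):
    # return the left operand if `node` looks like `<left> + 0` or `<left> - 0`.
    if (isinstance(node, ast.BinOp)
            and isinstance(node.op, (ast.Add, ast.Sub))
            and isinstance(node.right, ast.Num)     # ast.Constant on Python 3.8+
            and node.right.n == 0):
        return node.left
    return None

tree = ast.parse("x + 0", mode="eval")
left = match_add_sub_zero(tree.body)
if left is not None:        # `left` is only usable once the match has succeeded
    print(ast.dump(left))   # Name(id='x', ...)
```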
Similarly, the variables defined by the pattern are to local variables, which are assigned if, and only if, the pattern actually matches. Should pattern matching ever be added to Python properly, then there will be no need to use `with`-statements and context managers, of course.? But the implementation must make sure, nonetheless, that the usual semantics with name resolving of Python is being respected. 4. SYNTAX OF PATTERNS --------------------- The syntax of patterns themselves has been guided by two principles (again).? First, if there is a way to already express the same thing in Python, use that.? This applies, in particular, to sequence unpacking.? Second, patterns specify a possible way how the object in question could have been created, or constructed in the first place.? Hence, it is no accident that the pattern `BinOp(left, Add(), Num(0))` above looks almost like a constructor. Other implementation are happy to redefine the meanings of operators like `in`, or `is` (as in, e. g., `case x is str:`).? While this might look very convenient at first, it will most probably lead to many subsequent bugs where people write `if x is str:` to mean the same thing (namely test the type of `x`).? Even though expressing patterns requires reusing some operators in a new way, we have to extremely careful to choose widely, and minimise confusion, and surprises. Pattern matching could certainly make a great addition to Python, and various current implementations act as proof of concepts.? However, choosing an appropriate syntax for pattern matching is hard, and we should work hard to make sure that any such addition feels natural in Python, even at the expense of having to write more, and being not as terse as other languages. I hope my thoughts in this matter can make a worthwhile constribution to the discussion.? And I would like to emphasise once more, that my goal is not to propose a new syntax for pattern matching, but to report on my experience while implementing it. Kind regards, Tobias Kohn [1] https://groups.google.com/d/topic/python-ideas/nqW2_-kKrNg/discussion [2] https://github.com/Tobias-Kohn/pyPMatch [3] https://tobiaskohn.ch/index.php/2018/09/18/pattern-matching-syntax-in-python/ [4] https://tobiaskohn.ch/index.php/2018/09/12/implementing-pattern-matching/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertve92 at gmail.com Tue Sep 18 10:42:08 2018 From: robertve92 at gmail.com (Robert Vanden Eynde) Date: Tue, 18 Sep 2018 16:42:08 +0200 Subject: [Python-ideas] Pattern Matching Syntax (reprise) In-Reply-To: <20180918132935.Horde.fDRgp_ZmxPK06oMIWN05y52@webmail.tobiaskohn.ch> References: <20180918132935.Horde.fDRgp_ZmxPK06oMIWN05y52@webmail.tobiaskohn.ch> Message-ID: Needless to say it's interesting to see what others language have (pros and cons), I'm thinking about Scala for example (but I'm sure perl can show us a long list of pros and cons). Le mar. 18 sept. 2018 ? 13:38, Tobias Kohn a ?crit : > Hello Everyone, > > Please excuse my being late for properly responding to the last thread on > "Pattern Matching Syntax" [1]. As Robert Roskam has already pointed out at > the beginning of that thread, there has been much previous discussion about > adding pattern matching to Python, and several proposals exist. It is > therefore not my intention to propose yet another syntax choice for pattern > matching, but more to share my experience with implementing it, in the hope > of making a worthwhile contribution to the overall discussion. 
> [remainder of Tobias Kohn's message quoted in full; see the original above]
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
> -------------- next part -------------- An HTML attachment was scrubbed...
URL: From robertve92 at gmail.com Tue Sep 18 10:48:18 2018 From: robertve92 at gmail.com (Robert Vanden Eynde) Date: Tue, 18 Sep 2018 16:48:18 +0200 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: <23456.47873.934149.250940@turnbull.sk.tsukuba.ac.jp> References: <78a98351-b4e2-0096-2554-ac00238a1b30@mgmiller.net> <23456.47873.934149.250940@turnbull.sk.tsukuba.ac.jp> Message-ID: As said 100 times in the list, email is powerful, configurable but needs a lot of configuration (especially hard on mobile) and has a lot of rules (don't top post, reply to the list, don't html, wait, html is alright) whereas a web based alternative is easier to grasp (more modern) but adds more abstraction. I can't find the link we had explaining the difference between those two, but mailing list is easily searchable and archivable and readable on a terminal. However, providing guis to mailing list is a nice in between to have the better of two worlds. About moderation, what's the problem on the list ? Le mar. 18 sept. 2018 ? 10:44, Stephen J. Turnbull < turnbull.stephen.fw at u.tsukuba.ac.jp> a ?crit : > Mike Miller writes: > > > A decent mail program can thread discussions and ignore the boring > > ones. > > +100, but realistically, people aren't going to change their MUAs, > especially on handhelds. The advantage of something like Discourse is > that the server side controls the UX, and that's what people who don't > want to change MUAs usually want. > > IMO the problems of these lists are a scale problem -- too many > people, too many posts. As far as I can see, the only way to "fix" it > is to become less inclusive, at least in terms of numbers. > > It's possible that a different technology will allow us to become more > inclusive in terms of diversity at the same time that we become fewer. > > Steve > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jcgoble3 at gmail.com Tue Sep 18 11:02:03 2018 From: jcgoble3 at gmail.com (Jonathan Goble) Date: Tue, 18 Sep 2018 11:02:03 -0400 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: References: <78a98351-b4e2-0096-2554-ac00238a1b30@mgmiller.net> <23456.47873.934149.250940@turnbull.sk.tsukuba.ac.jp> Message-ID: On Tue, Sep 18, 2018 at 10:49 AM Robert Vanden Eynde wrote: > About moderation, what's the problem on the list ? > The biggest moderation issue I see with mailing lists is the inability to lock threads and delete posts (i.e. those that are spam or a Code of Conduct violation). Both of those are basic features that are core to virtually every forum system in existence today. Mailing lists offer no moderation of posts or threads unless every post is held in a moderation queue and manually approved before being sent, which isn't practical for large high-traffic lists like this. Instead, the only recourse is to moderate the user by banning or muting them, which can sometimes result in essentially using a sledgehammer to kill a fly. That is particularly the case if the only problems are on one heated thread where five people are attacking each other, but all are contributing constructively to other threads, in which case the best response is to simply terminate the argument by locking the thread. 
But on a mailing list, one would have to ban or mute all five users instead, impacting all of the other threads those users were contributing to. -------------- next part -------------- An HTML attachment was scrubbed... URL: From phd at phdru.name Tue Sep 18 11:12:35 2018 From: phd at phdru.name (Oleg Broytman) Date: Tue, 18 Sep 2018 17:12:35 +0200 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: References: <78a98351-b4e2-0096-2554-ac00238a1b30@mgmiller.net> <23456.47873.934149.250940@turnbull.sk.tsukuba.ac.jp> Message-ID: <20180918151235.vzmabdobspwiwshh@phdru.name> On Tue, Sep 18, 2018 at 04:48:18PM +0200, Robert Vanden Eynde wrote: > As said 100 times in the list, email is powerful, configurable but needs a > lot of configuration (especially hard on mobile) and has a lot of rules > (don't top post, reply to the list, don't html, wait, html is alright) > whereas a web based alternative is easier to grasp (more modern) but adds > more abstraction. > > I can't find the link we had explaining the difference between those two, > but mailing list is easily searchable and archivable and readable on a > terminal. May I show mine: https://phdru.name/Software/mail-vs-web.html ? Oleg. -- Oleg Broytman https://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. From apalala at gmail.com Tue Sep 18 11:42:36 2018 From: apalala at gmail.com (=?UTF-8?Q?Juancarlo_A=C3=B1ez?=) Date: Tue, 18 Sep 2018 11:42:36 -0400 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: <20180918151235.vzmabdobspwiwshh@phdru.name> References: <78a98351-b4e2-0096-2554-ac00238a1b30@mgmiller.net> <23456.47873.934149.250940@turnbull.sk.tsukuba.ac.jp> <20180918151235.vzmabdobspwiwshh@phdru.name> Message-ID: > I propose Python register a trial of Stack Overflow Teams. Stack Overflow Teams is essentially your own private Stack Overflow. (I will address the private part later.) Proposals would be questions and additions or criticism would be answers. You can express your support or dissent of a proposal using the voting. Flags and reviews can be used to moderate. SO is for Q&A, not for discussions. I recently had good success at the company I work for with Discourse, the sister/brother software to SO, which is designed specifically for discussions. https://www.discourse.org On Tue, Sep 18, 2018 at 11:12 AM, Oleg Broytman wrote: > On Tue, Sep 18, 2018 at 04:48:18PM +0200, Robert Vanden Eynde < > robertve92 at gmail.com> wrote: > > As said 100 times in the list, email is powerful, configurable but needs > a > > lot of configuration (especially hard on mobile) and has a lot of rules > > (don't top post, reply to the list, don't html, wait, html is alright) > > whereas a web based alternative is easier to grasp (more modern) but adds > > more abstraction. > > > > I can't find the link we had explaining the difference between those two, > > but mailing list is easily searchable and archivable and readable on a > > terminal. > > May I show mine: https://phdru.name/Software/mail-vs-web.html ? > > Oleg. > -- > Oleg Broytman https://phdru.name/ phd at phdru.name > Programmers don't die, they just GOSUB without RETURN. 
> _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -- Juancarlo *A?ez* -------------- next part -------------- An HTML attachment was scrubbed... URL: From tir.karthi at gmail.com Tue Sep 18 13:27:48 2018 From: tir.karthi at gmail.com (Karthikeyan) Date: Tue, 18 Sep 2018 10:27:48 -0700 (PDT) Subject: [Python-ideas] An experiment migrating bug tracker to GitLab Message-ID: <5287f452-df0f-4cca-85e9-f26765fa1e99@googlegroups.com> PEP 581 proposes the migration of bug tracking to GitHub issues. I have done a project to collect all issues in https://bugs.python.org. I have parsed the HTML data and migrated the issues to GitLab along with labels for issues and comments which is pretty much similar to GitHub issues. I have just added a comment from my account preceded by the Author name. I have migrated around 140 issues out of the 30000 issues for a demonstration. I can see some immediate benefits as follows which also apply to GitHub : * GitHub and GitLab support markdown thus code snippets can have highlighting. * Labels can be filtered from the UI and are helpful for triaging. * GitLab allows subscription for a label so that developers can subscribe for new issues in a label. * There are categories like milestones and priority that can help in release management. * They provide API and thus we can build integrations around issues. Some notes : I haven't parsed code in comments to enable syntax highlighting since it's hard to parse. Repo and feedback welcome : https://gitlab.com/tirkarthi/python-bugs/issues Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From leewangzhong+python at gmail.com Tue Sep 18 14:00:00 2018 From: leewangzhong+python at gmail.com (Franklin? Lee) Date: Tue, 18 Sep 2018 14:00:00 -0400 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: References: <78a98351-b4e2-0096-2554-ac00238a1b30@mgmiller.net> <23456.47873.934149.250940@turnbull.sk.tsukuba.ac.jp> Message-ID: On Tue, Sep 18, 2018 at 11:02 AM Jonathan Goble wrote: > > On Tue, Sep 18, 2018 at 10:49 AM Robert Vanden Eynde wrote: >> >> About moderation, what's the problem on the list ? > > > The biggest moderation issue I see with mailing lists is the inability to lock threads and delete posts (i.e. those that are spam or a Code of Conduct violation). Both of those are basic features that are core to virtually every forum system in existence today. Is that really an issue here? I personally haven't seen threads where Brett tried to stop an active discussion, but people ignored him and kept fighting. From jcgoble3 at gmail.com Tue Sep 18 14:37:22 2018 From: jcgoble3 at gmail.com (Jonathan Goble) Date: Tue, 18 Sep 2018 14:37:22 -0400 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: References: <78a98351-b4e2-0096-2554-ac00238a1b30@mgmiller.net> <23456.47873.934149.250940@turnbull.sk.tsukuba.ac.jp> Message-ID: On Tue, Sep 18, 2018, 2:00 PM Franklin? Lee wrote: > On Tue, Sep 18, 2018 at 11:02 AM Jonathan Goble > wrote: > > > > On Tue, Sep 18, 2018 at 10:49 AM Robert Vanden Eynde < > robertve92 at gmail.com> wrote: > >> > >> About moderation, what's the problem on the list ? > > > > > > The biggest moderation issue I see with mailing lists is the inability > to lock threads and delete posts (i.e. 
those that are spam or a Code of > Conduct violation). Both of those are basic features that are core to > virtually every forum system in existence today. > > Is that really an issue here? I personally haven't seen threads where > Brett tried to stop an active discussion, but people ignored him and > kept fighting. > Perhaps not, but part of that might be because stopping an active discussion on a mailing list can be hard to do, so one might not even try. Some discussions, I suspect, may have gone on in circles long past the point where they would have been locked on a forum. With forum software, it becomes much easier, and would be a more effective tool to terminate discussions that are going nowhere fast and wasting everyone's time. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From leewangzhong+python at gmail.com Tue Sep 18 15:05:56 2018 From: leewangzhong+python at gmail.com (Franklin? Lee) Date: Tue, 18 Sep 2018 15:05:56 -0400 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: References: <78a98351-b4e2-0096-2554-ac00238a1b30@mgmiller.net> <23456.47873.934149.250940@turnbull.sk.tsukuba.ac.jp> Message-ID: On Tue, Sep 18, 2018 at 2:37 PM Jonathan Goble wrote: > > On Tue, Sep 18, 2018, 2:00 PM Franklin? Lee wrote: >> >> On Tue, Sep 18, 2018 at 11:02 AM Jonathan Goble wrote: >> > >> > The biggest moderation issue I see with mailing lists is the inability to lock threads and delete posts (i.e. those that are spam or a Code of Conduct violation). Both of those are basic features that are core to virtually every forum system in existence today. >> >> Is that really an issue here? I personally haven't seen threads where >> Brett tried to stop an active discussion, but people ignored him and >> kept fighting. > > > Perhaps not, but part of that might be because stopping an active discussion on a mailing list can be hard to do, so one might not even try. Some discussions, I suspect, may have gone on in circles long past the point where they would have been locked on a forum. With forum software, it becomes much easier, and would be a more effective tool to terminate discussions that are going nowhere fast and wasting everyone's time. But there's no evidence that such tools would help. Software enforcement powers are only necessary if verbal enforcement isn't enough. We need the current moderators (or just Brett) to say whether they feel it isn't enough. What people may really be clamoring for is a larger moderation team, or a heavier hand. They want more enforcement, not more effective enforcement. From ethan at stoneleaf.us Tue Sep 18 15:19:56 2018 From: ethan at stoneleaf.us (Ethan Furman) Date: Tue, 18 Sep 2018 12:19:56 -0700 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: References: <78a98351-b4e2-0096-2554-ac00238a1b30@mgmiller.net> <23456.47873.934149.250940@turnbull.sk.tsukuba.ac.jp> Message-ID: <5BA14FDC.2010807@stoneleaf.us> On 09/18/2018 12:05 PM, Franklin? Lee wrote: > On Tue, Sep 18, 2018 at 2:37 PM Jonathan Goble wrote: >> Perhaps not, but part of that might be because stopping an active >> discussion on a mailing list can be hard to do, so one might not even >> try. Some discussions, I suspect, may have gone on in circles long past >> the point where they would have been locked on a forum. 
With forum >> software, it becomes much easier, and would be a more effective tool to >> terminate discussions that are going nowhere fast and wasting everyone's >> time. True. > But there's no evidence that such tools would help. Software > enforcement powers are only necessary if verbal enforcement isn't > enough. We need the current moderators (or just Brett) to say whether > they feel it isn't enough. It isn't enough. > What people may really be clamoring for is a larger moderation team, > or a heavier hand. They want more enforcement, not more effective > enforcement. More ineffective enforcement will be, um, ineffective. Let's have a test. I'm a moderator (from -List). We're* working on avenues to improve the mailing tools and simultaneously testing other options. I'm not seeing anything new in this thread that will impact that one way or another, so I'm asking for all of us to move on to other topics. -- ~Ethan~ * Various moderators from various lists. From boxed at killingar.net Tue Sep 18 15:35:41 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Tue, 18 Sep 2018 21:35:41 +0200 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: References: <78a98351-b4e2-0096-2554-ac00238a1b30@mgmiller.net> <23456.47873.934149.250940@turnbull.sk.tsukuba.ac.jp> Message-ID: <4FD37C1A-2D22-4E28-96C5-7DA5574E06EF@killingar.net> > But there's no evidence that such tools would help. Software > enforcement powers are only necessary if verbal enforcement isn't > enough. We need the current moderators (or just Brett) to say whether > they feel it isn't enough. These systems work radically differently. You don?t get notifications for all messages in all threads by default. > What people may really be clamoring for is a larger moderation team, > or a heavier hand. They want more enforcement, not more effective > enforcement. If you just have good (granular, configurable) notifications for threads you don?t even need moderation. It side steps the entire problem. / Anders From lists at janc.be Tue Sep 18 18:00:11 2018 From: lists at janc.be (Jan Claeys) Date: Wed, 19 Sep 2018 00:00:11 +0200 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: References: <78a98351-b4e2-0096-2554-ac00238a1b30@mgmiller.net> <23456.47873.934149.250940@turnbull.sk.tsukuba.ac.jp> Message-ID: On Tue, 2018-09-18 at 11:02 -0400, Jonathan Goble wrote: > The biggest moderation issue I see with mailing lists is the > inability to lock threads That actually wouldn't be hard to implement in a mailing list software as a semi-automatic moderation feature... -- Jan Claeys From mertz at gnosis.cx Tue Sep 18 18:07:56 2018 From: mertz at gnosis.cx (David Mertz) Date: Tue, 18 Sep 2018 18:07:56 -0400 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: References: <78a98351-b4e2-0096-2554-ac00238a1b30@mgmiller.net> <23456.47873.934149.250940@turnbull.sk.tsukuba.ac.jp> Message-ID: Since 1972, there have been hundreds of reinventions of a means of carying on electronic conversations intended to be "better than email." The one thing they all have in common is that they are vastly worse than email. 
On Tue, Sep 18, 2018, 6:04 PM Jan Claeys wrote: > On Tue, 2018-09-18 at 11:02 -0400, Jonathan Goble wrote: > > The biggest moderation issue I see with mailing lists is the > > inability to lock threads > > That actually wouldn't be hard to implement in a mailing list software > as a semi-automatic moderation feature... > > > -- > Jan Claeys > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jamtlu at gmail.com Tue Sep 18 18:37:09 2018 From: jamtlu at gmail.com (James Lu) Date: Tue, 18 Sep 2018 18:37:09 -0400 Subject: [Python-ideas] Moving to another forum system where Message-ID: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> > Is that really an issue here? I personally haven't seen threads where > Brett tried to stop an active discussion, but people ignored him and > kept fighting. Not personally with Brett, but I have seen multiple people try to stop the ?reword or remove beautiful is better than ugly in Zen of Python.? The discussion was going in circles and evolved into attacking each other?s use of logical fallacies. Other than that, my biggest issues with the current mailing system are: * There?s no way to keep a updated proposal of your own- if you decide to change your proposal, you have to communicate the change. Then, if you want to find the authoritative current copy, since you might?ve forgotten or you want to join he current discussion, then you have to dig through the emails and recursively apply the proposed change. It?s just easier if people can have one proposal they can edit themselves. * I?ve seen experienced people get confused about what was the current proposal because they were replying to older emails or they didn?t see the email with the clear examples. * The mailing list is frankly obscure. Python community leaders and package maintainers often are not aware or do not participate in Python-ideas. Not many people know how to use or navigate a mailing list. * No one really promotes the mailing list, you have to go out of your way to find where new features are proposed. * Higher discoverability means more people can participate, providing their own use cases or voting (I mean using like or dislike measures, consensus should still be how things are approved) go out of their way to find so they can propose something. Instead, I envision a forum where people can read and give their 2 cents about what features they might like to see or might not want to see. * More people means instead of having to make decisions from sometimes subjective personal experience, we can make decisions with confidence in what other Python devs want. Since potential proposers will find it easier to navigate a GUI forum, they can read previous discussions to understand the reasoning, precedent behind rejected and successful features. People proposing things that have already been rejected before can be directed to open a subtopic on the older discussion. > On Sep 18, 2018, at 3:19 PM, python-ideas-request at python.org wrote: > > Is that really an issue here? I personally haven't seen threads where > Brett tried to stop an active discussion, but people ignored him and > kept fighting. 
From lists at janc.be Tue Sep 18 20:42:48 2018 From: lists at janc.be (Jan Claeys) Date: Wed, 19 Sep 2018 02:42:48 +0200 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: References: <78a98351-b4e2-0096-2554-ac00238a1b30@mgmiller.net> <23456.47873.934149.250940@turnbull.sk.tsukuba.ac.jp> Message-ID: <531cdd0a48bee4d039dd029d878b1aa41a6cbcbe.camel@janc.be> On Tue, 2018-09-18 at 18:07 -0400, David Mertz wrote: > Since 1972, there have been hundreds of reinventions of a means of > carying on electronic conversations intended to be "better than > email." The one thing they all have in common is that they are vastly > worse than email. I don't 100% agree with that. E.g., there are better protocols when you need real-time conversations, because (internet) email isn't necessarily good at that (by design). And I'm sure there are other circumstances or purposes where another protocol/standard is more appropriate. But in general, email is pretty good. :) -- Jan Claeys From rosuav at gmail.com Tue Sep 18 20:56:58 2018 From: rosuav at gmail.com (Chris Angelico) Date: Wed, 19 Sep 2018 10:56:58 +1000 Subject: [Python-ideas] Moving to another forum system where In-Reply-To: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> References: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> Message-ID: On Wed, Sep 19, 2018 at 10:21 AM James Lu wrote: > > Not personally with Brett, but I have seen multiple people try to stop the ?reword or remove beautiful is better than ugly in Zen of Python.? The discussion was going in circles and evolved into attacking each other?s use of logical fallacies. > > Other than that, my biggest issues with the current mailing system are: > > * There?s no way to keep a updated proposal of your own- if you decide to change your proposal, you have to communicate the change. Then, if you want to find the authoritative current copy, since you might?ve forgotten or you want to join he current discussion, then you have to dig through the emails and recursively apply the proposed change. It?s just easier if people can have one proposal they can edit themselves. > That's what the PEP system exists for. But with the "remove the word ugly from the zen" proposal, it's not serious enough for anyone to actually want to write up a PEP about it. Normally, what happens is that the "authoritative current copy" can always be found at https://www.python.org/dev/peps/pep-????/ for some well-known PEP number. That PEP generally has a single authoritative author (sometimes two or three, but always a small number). For any proposal that actually has currency, this system does work (well enough that I've wanted to introduce something like it in other contexts). ChrisA From rosuav at gmail.com Tue Sep 18 20:58:53 2018 From: rosuav at gmail.com (Chris Angelico) Date: Wed, 19 Sep 2018 10:58:53 +1000 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: <531cdd0a48bee4d039dd029d878b1aa41a6cbcbe.camel@janc.be> References: <78a98351-b4e2-0096-2554-ac00238a1b30@mgmiller.net> <23456.47873.934149.250940@turnbull.sk.tsukuba.ac.jp> <531cdd0a48bee4d039dd029d878b1aa41a6cbcbe.camel@janc.be> Message-ID: On Wed, Sep 19, 2018 at 10:43 AM Jan Claeys wrote: > > On Tue, 2018-09-18 at 18:07 -0400, David Mertz wrote: > > Since 1972, there have been hundreds of reinventions of a means of > > carying on electronic conversations intended to be "better than > > email." 
The one thing they all have in common is that they are vastly > > worse than email. > > I don't 100% agree with that. > > E.g., there are better protocols when you need real-time conversations, > because (internet) email isn't necessarily good at that (by design). Which part of email or internet is "by design" not good for real-time conversation? With any non-stupid MUA, emails are sent virtually instantly, unless the destination server is down. Of course, if you're used to accessing Gmail via your mobile phone app, you probably aren't accustomed to real-time conversations in email; but that is not the *design* of email. ChrisA From jamtlu at gmail.com Tue Sep 18 21:04:30 2018 From: jamtlu at gmail.com (James Lu) Date: Tue, 18 Sep 2018 21:04:30 -0400 Subject: [Python-ideas] Moving to another forum system where Message-ID: <2E297FAA-EC99-4E15-AB24-F3AF4141820B@gmail.com> It would be nice if there was a guide on using Python-ideas and writing PEPs. It would make it less obscure. From mike at selik.org Tue Sep 18 21:17:09 2018 From: mike at selik.org (Michael Selik) Date: Tue, 18 Sep 2018 18:17:09 -0700 Subject: [Python-ideas] Moving to another forum system where In-Reply-To: References: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> Message-ID: On Tue, Sep 18, 2018 at 5:57 PM Chris Angelico wrote: > For any proposal that actually has currency, this system does work The trouble is the ambiguity of knowing what "actually has currency" is and how to get it. PEP 1 states, "Following a discussion on python-ideas, the proposal should be submitted as a draft PEP via a GitHub pull request." However, PEP 1 does not give instruction on how to evaluate whether that discussion has been completed satisfactorily. https://www.python.org/dev/peps/pep-0001/#submitting-a-pep From rosuav at gmail.com Tue Sep 18 21:30:53 2018 From: rosuav at gmail.com (Chris Angelico) Date: Wed, 19 Sep 2018 11:30:53 +1000 Subject: [Python-ideas] Moving to another forum system where In-Reply-To: <2E297FAA-EC99-4E15-AB24-F3AF4141820B@gmail.com> References: <2E297FAA-EC99-4E15-AB24-F3AF4141820B@gmail.com> Message-ID: On Wed, Sep 19, 2018 at 11:05 AM James Lu wrote: > > It would be nice if there was a guide on using Python-ideas and writing PEPs. It would make it less obscure. https://www.python.org/dev/peps/pep-0001/ On Wed, Sep 19, 2018 at 11:17 AM Michael Selik wrote: > > On Tue, Sep 18, 2018 at 5:57 PM Chris Angelico wrote: > > For any proposal that actually has currency, this system does work > > The trouble is the ambiguity of knowing what "actually has currency" > is and how to get it. PEP 1 states, "Following a discussion on > python-ideas, the proposal should be submitted as a draft PEP via a > GitHub pull request." However, PEP 1 does not give instruction on how > to evaluate whether that discussion has been completed satisfactorily. Fair point. However, if there's enough in an idea that it's worth pushing forward, and too much for it to just go straight to the issue tracker or a GitHub PR, someone will usually recommend it at some point. In borderline cases, the decision of whether it's PEP-worthy or not generally comes down to "is someone willing to write and shepherd the PEP" - it's a fair bit of work, and a lot of incoming emails to deal with. 
ChrisA From mertz at gnosis.cx Tue Sep 18 22:01:18 2018 From: mertz at gnosis.cx (David Mertz) Date: Tue, 18 Sep 2018 22:01:18 -0400 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: <531cdd0a48bee4d039dd029d878b1aa41a6cbcbe.camel@janc.be> References: <78a98351-b4e2-0096-2554-ac00238a1b30@mgmiller.net> <23456.47873.934149.250940@turnbull.sk.tsukuba.ac.jp> <531cdd0a48bee4d039dd029d878b1aa41a6cbcbe.camel@janc.be> Message-ID: On Tue, Sep 18, 2018, 8:43 PM Jan Claeys wrote: > On Tue, 2018-09-18 at 18:07 -0400, David Mertz wrote: > > Since 1972, there have been hundreds of reinventions of a means of > > carying on electronic conversations intended to be "better than > > email." The one thing they all have in common is that they are vastly > > worse than email. > > I don't 100% agree with that. > > E.g., there are better protocols when you need real-time conversations, > because (internet) email isn't necessarily good at that (by design). > Good point, 1988 IRC also serves a good purpose that is also poorly copied in hundreds of new systems. :-) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Richard at Damon-Family.org Wed Sep 19 00:08:54 2018 From: Richard at Damon-Family.org (Richard Damon) Date: Wed, 19 Sep 2018 00:08:54 -0400 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: References: <78a98351-b4e2-0096-2554-ac00238a1b30@mgmiller.net> <23456.47873.934149.250940@turnbull.sk.tsukuba.ac.jp> Message-ID: <056a7b10-d77d-87e1-c3c7-54d3894b8648@Damon-Family.org> On 9/18/18 11:02 AM, Jonathan Goble wrote: > On Tue, Sep 18, 2018 at 10:49 AM Robert Vanden Eynde > > wrote: > > About moderation, what's the problem on the list ? > > > The biggest moderation issue I see with mailing lists is the inability > to lock threads and delete posts (i.e. those that are spam or a Code > of Conduct violation). Both of those are basic features that are core > to virtually every forum system in existence today. > > Mailing lists offer no moderation of posts or threads unless every > post is held in a moderation queue and manually approved before being > sent, which isn't practical for large high-traffic lists like this. > Instead, the only recourse is to moderate the user by banning or > muting them, which can sometimes result in essentially using a > sledgehammer to kill a fly. That is particularly the case if the only > problems are on one heated thread where five people are attacking each > other, but all are contributing constructively to other threads, in > which case the best response is to simply terminate the argument by > locking the thread. But on a mailing list, one would have to ban or > mute all five users instead, impacting all of the other threads those > users were contributing to. This is incorrect. I run a moderate volume (~100 posts per day) mailing list for my local community using the same software as python.org. When a thread gets out of bounds I can enter a message filter to hold for review any message matching the subject of that thread (or specified parts of it). While people can get around the filter by changing the subject line, they can do the same on a forum by starting a new topic. Someone who intentionally does this does get themselves on personal moderation. -- Richard Damon From leewangzhong+python at gmail.com Wed Sep 19 00:48:13 2018 From: leewangzhong+python at gmail.com (Franklin? 
Lee) Date: Wed, 19 Sep 2018 00:48:13 -0400 Subject: [Python-ideas] Moving to another forum system where In-Reply-To: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> References: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> Message-ID: On Tue, Sep 18, 2018 at 8:21 PM James Lu wrote: > > > Is that really an issue here? I personally haven't seen threads where > > Brett tried to stop an active discussion, but people ignored him and > > kept fighting. > Not personally with Brett, but I have seen multiple people try to stop the ?reword or remove beautiful is better than ugly in Zen of Python.? The discussion was going in circles and evolved into attacking each other?s use of logical fallacies. I disagree with your description, of course, but that's not important right now. Multiple people *without any authority in that forum* tried to stop a discussion, and failed. Why would it be any different if it happened in a forum? Those same people still wouldn't have the power to lock the discussion. They could only try to convince others to stop. If the ones with authority wanted to completely shut down the discussion, they can do so now. The only thing that a forum adds is, when they say stop, no one can decide to ignore them. If no one is ignoring them now, then locking powers don't add anything. > Other than that, my biggest issues with the current mailing system are: > > * There?s no way to keep a updated proposal of your own- if you decide to change your proposal, you have to communicate the change. Then, if you want to find the authoritative current copy, since you might?ve forgotten or you want to join he current discussion, then you have to dig through the emails and recursively apply the proposed change. It?s just easier if people can have one proposal they can edit themselves. > * I?ve seen experienced people get confused about what was the current proposal because they were replying to older emails or they didn?t see the email with the clear examples. I agree that editing is a very useful feature. In a large discussion, newcomers can comment after reading only the first few posts, and if the first post has an easily-misunderstood line, you'll get people talking about it. For proposals, I'm concerned that many forums don't have version history in their editing tools (Reddit being one such discussion site). Version history can be useful in understanding old comments. Instead, you'd have to put it up on a repo and link to it. Editing will help when you realize you should move your proposal to a public repo. > * The mailing list is frankly obscure. Python community leaders and package maintainers often are not aware or do not participate in Python-ideas. Not many people know how to use or navigate a mailing list. > * No one really promotes the mailing list, you have to go out of your way to find where new features are proposed. > * Higher discoverability means more people can participate, providing their own use cases or voting (I mean using like or dislike measures, consensus should still be how things are approved) go out of their way to find so they can propose something. Instead, I envision a forum where people can read and give their 2 cents about what features they might like to see or might not want to see. Some of these problems are not about mailing lists. Whether a forum is more accessible can go either way. A mailing list is more accessible because everyone has access to email, and it doesn't require making another account. 
It is less accessible because people might get intimidated by such old interfaces or culture (like proper quoting etiquette, or when to switch to private replies). Setting up an email interface to a forum can be a compromise. > * More people means instead of having to make decisions from sometimes subjective personal experience, we can make decisions with confidence in what other Python devs want. I don't agree. You don't get more objective by getting a larger self-selected sample, not without carefully designing who will self-select. But getting more people means getting MORE subjective personal experiences, which is good. Some proposals need more voices, like any proposal that is meant to help new programmers. You want to hear from people who still vividly remember their experiences learning Python. On the other hand, getting more people necessarily means more noise (no matter what system you use), and less time for new people to acclimate. > Since potential proposers will find it easier to navigate a GUI forum, they can read previous discussions to understand the reasoning, precedent behind rejected and successful features. People proposing things that have already been rejected before can be directed to open a subtopic on the older discussion. A kind of GUI version already exists, precisely because this is a public mailing list. Google Groups provides a mirror of the archives. https://groups.google.com/forum/#!forum/python-ideas It's searchable, and possibly replyable. You can even star conversations (but not hide them). If it isn't listed on some python.org page, maybe it should be. Personally, when I want to find past discussions, I use Google with the keyword `site:https://mail.python.org/pipermail/python-ideas/`. I know a lot of people don't know about that, though. Maybe it can be listed on one of the python.org pages. As for subtopics, I haven't seen such things. I've seen reply subtrees, but either they don't bump the topic (giving them little visibility), or they do bump the topic (annoying anyone as much as a new topic). I don't know if there is a good compromise there. From gadgetsteve at live.co.uk Wed Sep 19 01:54:42 2018 From: gadgetsteve at live.co.uk (Steve Barnes) Date: Wed, 19 Sep 2018 05:54:42 +0000 Subject: [Python-ideas] Combine f-strings with i18n - How about using PEP 501? In-Reply-To: References: <874ed867-6d68-b775-4331-ec62499cf366@polak.es> <23455.59261.224419.818192@turnbull.sk.tsukuba.ac.jp> <23456.32190.131662.996934@turnbull.sk.tsukuba.ac.jp> <15EBEFE6-D745-4C33-A08B-3080063A0F02@killingar.net> Message-ID: On 18/09/2018 08:59, Hans Polak wrote: > >>> I don't see how this immediately helps the OP, who wants a *literal* >>> expression that automatically invokes the translation machinery as >>> well as the interpolation machinery. > Actually, no, I do not want the expression to be automatically > translated at compile time. It should be translated at run-time. There > are three situations. > > 1. No translation, just a regular f-string. > 2. App translation. The f-string gets translated to the configured > language. > 3. On the fly translation. The string gets translated to the language > passed as an argument as required. > > In code, this would be. > 1. f'Hi {user}' > 2. f'{!g}Hi {user}' > 3. f'{lang!g}Hi {user}' > > Cases 2 and 3 need some additional code, just like with gettext. > > I'm sorry if that wasn't clear from the start. All I want is the code to > be simpler to write and maintain. I do not want to have complicated > parsing for the compiler. 
> >>> Another way forward could be a preprocessor. All this can be done >>> with a fairly simple script using parso. > This is probably the idea. > > Cheers, > Hans > > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ Surely the simpler solution is to specify in I18n any items within un-escaped {} pairs is excluded from the translation, lookups, etc., and that translation needs to take place, also leaving the {} content alone, before f string processing. Other than that there is no change. So: _(f'Hi {user}') would be in the .po/.mo as just 'Hi ' and if our locale is set to FR this gets translated to f'Bonjor {user}' which then gets the user variable substituted in. If you wanted to insert into an f string a value that is itself subject to I18n you need to mark the content assigned to that value for translation. For example: parts_of_day = [_("Morning"), _("Afternoon"), _("Evening"), _("Night"), ] tod = lookup_time_as_pod() greeting = _(f"Good {tod}") If our locale happens to be a German one and our current time of day is morning then tod will be assigned as "morgan" and our greeting will be "Gutten Morgan", etc. This should work without any major problems whether our locale is fixed at start-up or changes dynamically. As far as I can see the only possibly required change to the core python language is that the evaluation order may need to be able to be override-able so that the translate function, (with the leave {.*} alone rule), is called _before_ the f string formatting, (I think that with current precedence it would not). Is there, or could there be, an "@eager" or "@push_precedence" decorator, or some such, that could be added to translate so as to do this? The remaining changes would be in the translate/I18n package(s) and the documents of course. -- Steve (Gadget) Barnes Any opinions in this message are my personal opinions and do not reflect those of my employer. --- This email has been checked for viruses by AVG. https://www.avg.com From pvergain at gmail.com Wed Sep 19 02:01:49 2018 From: pvergain at gmail.com (Patrick Vergain) Date: Wed, 19 Sep 2018 08:01:49 +0200 Subject: [Python-ideas] Pattern Matching Syntax (reprise) In-Reply-To: <20180918132935.Horde.fDRgp_ZmxPK06oMIWN05y52@webmail.tobiaskohn.ch> References: <20180918132935.Horde.fDRgp_ZmxPK06oMIWN05y52@webmail.tobiaskohn.ch> Message-ID: Le mar. 18 sept. 2018 ? 13:39, Tobias Kohn a ?crit : > Hello Everyone, > > Please excuse my being late for properly responding to the last thread on > "Pattern Matching Syntax" [1]. As Robert Roskam has already pointed out at > the beginning of that thread, there has been much previous discussion about > adding pattern matching to Python, and several proposals exist. It is > therefore not my intention to propose yet another syntax choice for pattern > matching, but more to share my experience with implementing it, in the hope > of making a worthwhile contribution to the overall discussion. > > This summer, I basically ended up requiring pattern matching in Python for > a research project I am working on. Some initial hacks have then grown > into a library for pattern matching in Python [2]. On the one hand, my > design is certainly heavily influence by Scala, with which I also work on a > regular basis. 
On the other hand, I ran into various difficulties, > challanges, and it has been particularly important to me to find a design > that blends well with Python, and harnesses what Python already offers. > > I have written down my experience in the form of a discussion on several > options concerning syntax [3], and implementation [4], respectively. As > the articles have turned out longer than I originally intended, it might > take up too much time for those who have little interest in this matter in > the first place. However, considering that the subject of pattern matching > has been coming up rather regularly, my experience might help to contribute > something to the discussion. Let me provide a brief summary here: > > *1. Pattern Matching is not Switch/Case* > -------------------------------------- > When I am talking about pattern matching, my goal is to do a deep > structural comparison, and extract information from an object. Consider, > for instance, the problem of optimising the AST of a Python program, and > eliminate patterns of the form `x + 0`, and `x - 0`. What pattern matching > should be offering here is a kind of comparison along the lines of: > `if node == BinOp(?, (Add() or Sub()), Num(0)): ...` > Ideally, it should also return the value of what is marked by the question > mark `?` here, and assign it to a variable `left`, say. The above > comparison is often written as something like, e. g.: > `case BinOp(left, Add()|Sub(), Num(0)): ...` > This use of `case`, however, is not the same as a switch-statement. > > *2. Orthogonality* > ---------------- > Getting the syntax and semantics of nested blocks right is hard. Every > block/suite in Python allows any kind of statements to occur, which allows > for things like nested functions definitions, or having more than just > methods in a class. If we use a two-levelled block-structure, we run into > the problem of finding good semantics for what the following means (note > the variable `x` here): > ``` > match node: > x = 0 > case BinOp(left, Add(), Num(0)): > ... > x += 1 > case BinOp(left, Mul(), Num(1)): > ... > ``` > In the case of a "switch block", such additional statements like the > `x=0`, and `x+=1` can become quite problematic. On the other hand, > requiring all statements inside the block to be case-statements violates > the orthogonality found otherwise in Python. > > I feel that this dilemma is one of the core issues why the syntax of > switch statements, or pattern matching seems so exceptionally hard. In the > end, it might therefore, indeed, make more sense to find a structure that > is more in line with if/elif/else-chains. This would lead to a form of > pattern matching with little support for switch-statement, though. > > *3. Implementation* > ----------------- > For the implementation of pattern matching, my package compiles the > patterns into context-manager-classes, adds these classes in the background > to the code, and then uses `with`-statements to express the > `case`-statement. If have found a neat way to make the execution of a > `with`-statement's body conditional. > > Two things have been curcially important in the overall design: first, the > "recompilation" cannot change the structure of the original code, or add > any line. This way, all error messages, and tooling should work as > expected; the invasive procedure should be as minimal as possible. Second, > it is paramount that the semantics of how Python works is being preserved. 
> Even though the actual matching of patterns is done in external classes, > all names must be resolved "locally" where the original `case`-statement > lives. Similarly, the variables defined by the pattern are to local > variables, which are assigned if, and only if, the pattern actually matches. > > Should pattern matching ever be added to Python properly, then there will > be no need to use `with`-statements and context managers, of course. But > the implementation must make sure, nonetheless, that the usual semantics > with name resolving of Python is being respected. > > *4. Syntax of Patterns* > --------------------- > The syntax of patterns themselves has been guided by two principles > (again). First, if there is a way to already express the same thing in > Python, use that. This applies, in particular, to sequence unpacking. > Second, patterns specify a possible way how the object in question could > have been created, or constructed in the first place. Hence, it is no > accident that the pattern `BinOp(left, Add(), Num(0))` above looks almost > like a constructor. > > Other implementation are happy to redefine the meanings of operators like > `in`, or `is` (as in, e. g., `case x is str:`). While this might look very > convenient at first, it will most probably lead to many subsequent bugs > where people write `if x is str:` to mean the same thing (namely test the > type of `x`). Even though expressing patterns requires reusing some > operators in a new way, we have to extremely careful to choose widely, and > minimise confusion, and surprises. > > > Pattern matching could certainly make a great addition to Python, and > various current implementations act as proof of concepts. However, > choosing an appropriate syntax for pattern matching is hard, and we should > work hard to make sure that any such addition feels natural in Python, even > at the expense of having to write more, and being not as terse as other > languages. > > I hope my thoughts in this matter can make a worthwhile constribution to > the discussion. And I would like to emphasise once more, that my goal is > not to propose a new syntax for pattern matching, but to report on my > experience while implementing it. > > Kind regards, > Tobias Kohn > > > [1] https://groups.google.com/d/topic/python-ideas/nqW2_-kKrNg/discussion > [2] https://github.com/Tobias-Kohn/pyPMatch > [3] > https://tobiaskohn.ch/index.php/2018/09/18/pattern-matching-syntax-in-python/ > [4] > https://tobiaskohn.ch/index.php/2018/09/12/implementing-pattern-matching/ > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosuav at gmail.com Wed Sep 19 02:48:51 2018 From: rosuav at gmail.com (Chris Angelico) Date: Wed, 19 Sep 2018 16:48:51 +1000 Subject: [Python-ideas] Combine f-strings with i18n - How about using PEP 501? 
In-Reply-To: References: <874ed867-6d68-b775-4331-ec62499cf366@polak.es> <23455.59261.224419.818192@turnbull.sk.tsukuba.ac.jp> <23456.32190.131662.996934@turnbull.sk.tsukuba.ac.jp> <15EBEFE6-D745-4C33-A08B-3080063A0F02@killingar.net> Message-ID: On Wed, Sep 19, 2018 at 3:55 PM Steve Barnes wrote: > Surely the simpler solution is to specify in I18n any items within > un-escaped {} pairs is excluded from the translation, lookups, etc., and > that translation needs to take place, also leaving the {} content alone, > before f string processing. Other than that there is no change. So: > > _(f'Hi {user}') would be in the .po/.mo as just 'Hi ' and if our locale > is set to FR this gets translated to f'Bonjor {user}' which then gets > the user variable substituted in. How about this: Have a script that runs over your code, looking for "translatable f-strings": _(f'Hi {user}') and replaces them with actually-translatable strings: _('Hi %s') % (user,) _('Hi {user}').format(user=user) Take your pick of which way you want to spell it. Either of these is easily able to be picked up by a standard translation package, is 100% legal Python code in today's interpreters, and doesn't require any bizarre markers and such saying that things need to be processed out of order (the parentheses specify the order for you). Not everything has to be an f-string. ChrisA From boxed at killingar.net Wed Sep 19 02:54:37 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Wed, 19 Sep 2018 08:54:37 +0200 Subject: [Python-ideas] Combine f-strings with i18n - How about using PEP 501? In-Reply-To: References: <874ed867-6d68-b775-4331-ec62499cf366@polak.es> <23455.59261.224419.818192@turnbull.sk.tsukuba.ac.jp> <23456.32190.131662.996934@turnbull.sk.tsukuba.ac.jp> <15EBEFE6-D745-4C33-A08B-3080063A0F02@killingar.net> Message-ID: > How about this: Have a script that runs over your code, looking for > "translatable f-strings": > > _(f'Hi {user}') > > and replaces them with actually-translatable strings: > > _('Hi %s') % (user,) > _('Hi {user}').format(user=user) > > Take your pick of which way you want to spell it. Either of these is > easily able to be picked up by a standard translation package, is 100% > legal Python code in today's interpreters, and doesn't require any > bizarre markers and such saying that things need to be processed out > of order (the parentheses specify the order for you). I guess it wasn't clear before.. that's exactly what I was proposing :) I'd suggest using parso to do it. It's a really great library to write such transformations. / Anders From rosuav at gmail.com Wed Sep 19 02:55:38 2018 From: rosuav at gmail.com (Chris Angelico) Date: Wed, 19 Sep 2018 16:55:38 +1000 Subject: [Python-ideas] Combine f-strings with i18n - How about using PEP 501? In-Reply-To: References: <874ed867-6d68-b775-4331-ec62499cf366@polak.es> <23455.59261.224419.818192@turnbull.sk.tsukuba.ac.jp> <23456.32190.131662.996934@turnbull.sk.tsukuba.ac.jp> <15EBEFE6-D745-4C33-A08B-3080063A0F02@killingar.net> Message-ID: On Wed, Sep 19, 2018 at 4:52 PM Anders Hovm?ller wrote: > > > > How about this: Have a script that runs over your code, looking for > > "translatable f-strings": > > > > _(f'Hi {user}') > > > > and replaces them with actually-translatable strings: > > > > _('Hi %s') % (user,) > > _('Hi {user}').format(user=user) > > > > Take your pick of which way you want to spell it. 
Either of these is > > easily able to be picked up by a standard translation package, is 100% > > legal Python code in today's interpreters, and doesn't require any > > bizarre markers and such saying that things need to be processed out > > of order (the parentheses specify the order for you). > > > I guess it wasn't clear before.. that's exactly what I was proposing :) > > I'd suggest using parso to do it. It's a really great library to write such transformations. Ah. It wasn't clear what your destination was, so I thought you were talking about doing the translation itself using parso. But yeah, grab one of these sorts of parsing libraries, do the transformation, save back, then use a standard translation library. Seems a lot easier than changing the language. ChrisA From boxed at killingar.net Wed Sep 19 03:02:41 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Wed, 19 Sep 2018 09:02:41 +0200 Subject: [Python-ideas] Combine f-strings with i18n - How about using PEP 501? In-Reply-To: References: <874ed867-6d68-b775-4331-ec62499cf366@polak.es> <23455.59261.224419.818192@turnbull.sk.tsukuba.ac.jp> <23456.32190.131662.996934@turnbull.sk.tsukuba.ac.jp> <15EBEFE6-D745-4C33-A08B-3080063A0F02@killingar.net> Message-ID: >> I'd suggest using parso to do it. It's a really great library to write such transformations. > > Ah. It wasn't clear what your destination was, so I thought you were > talking about doing the translation itself using parso. But yeah, grab > one of these sorts of parsing libraries, do the transformation, save > back, then use a standard translation library. Seems a lot easier than > changing the language. Ah, my bad. I agree that this is the way forward for people who are trying to localize an existing app, but I still think we should _also_ change the language. F-strings are great and .format is powerful but there is a too big gap in usability and readability between them. This gap is one of the most compelling motivations for my suggestion of a short form for keyword arguments, while also helping us poor guys who deal with huge legacy code bases :) / Anders -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Wed Sep 19 04:51:00 2018 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 19 Sep 2018 10:51:00 +0200 Subject: [Python-ideas] Moving to another forum system where References: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> Message-ID: <20180919105100.340dc46a@fsol> On Tue, 18 Sep 2018 18:37:09 -0400 James Lu wrote: > * The mailing list is frankly obscure. Python community leaders and package maintainers often are not aware or do not participate in Python-ideas. Not many people know how to use or navigate a mailing list. > * No one really promotes the mailing list, you have to go out of your way to find where new features are proposed. > * Higher discoverability means more people can participate, providing their own use cases or voting (I mean using like or dislike measures, consensus should still be how things are approved) go out of their way to find so they can propose something. Instead, I envision a forum where people can read and give their 2 cents about what features they might like to see or might not want to see. I'm not sure that's a popular opinion, but I don't think I want more people around on python-ideas. There's enough quantity here. The problem is quality. Regards Antoine. 
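To make the f-string i18n idea above concrete (Steve Barnes' marking proposal, and the source transformation Chris Angelico and Anders Hovmöller describe), here is a minimal sketch of what the code would look like after such a preprocessing pass. The catalog name "myapp" and the "locale" directory are hypothetical; only the standard gettext module is assumed.

    import gettext

    # fallback=True degrades to the untranslated string when no catalog is installed.
    translation = gettext.translation('myapp', localedir='locale',
                                      languages=['fr'], fallback=True)
    _ = translation.gettext

    user = 'Martin'

    # Before the preprocessing pass the source would read: _(f'Hi {user}')
    # After it, the call below is plain str.format, which xgettext-style
    # extractors already know how to pick up.
    greeting = _('Hi {user}').format(user=user)
    print(greeting)

This only illustrates the two-step idea discussed in the thread, not a finished tool; the actual rewriting of `_(f'...')` calls would be done by a separate script (for example one built on parso, as suggested above).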
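Similarly, for Tobias Kohn's pattern-matching post above: the pattern `BinOp(left, Add()|Sub(), Num(0))` can be unrolled by hand into plain isinstance tests over the ast module. The sketch below only illustrates the semantics of that one example, using the pre-3.8 `ast.Num` node the post refers to; it is not how his pyPMatch library is implemented.

    import ast

    def simplify(node):
        # BinOp(left, Add()|Sub(), Num(0)) written out by hand;
        # node.left is what the pattern would bind to the name `left`.
        if (isinstance(node, ast.BinOp)
                and isinstance(node.op, (ast.Add, ast.Sub))
                and isinstance(node.right, ast.Num)
                and node.right.n == 0):
            return node.left
        return node

    tree = ast.parse('y + 0', mode='eval')
    print(ast.dump(simplify(tree.body)))  # the bare Name node for `y`

The comparison makes Tobias's point: the hand-written version repeats the structure of the object, while the pattern states that structure once and binds `left` as a side effect of matching.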
From desmoulinmichel at gmail.com Wed Sep 19 09:23:04 2018 From: desmoulinmichel at gmail.com (Michel Desmoulin) Date: Wed, 19 Sep 2018 15:23:04 +0200 Subject: [Python-ideas] Moving to another forum system where In-Reply-To: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> References: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> Message-ID: <80c9abc2-6936-4647-b5b0-d637b34f72b4@gmail.com> Le 19/09/2018 à 00:37, James Lu a écrit : >> Is that really an issue here? I personally haven't seen threads where >> Brett tried to stop an active discussion, but people ignored him and >> kept fighting. > Not personally with Brett, but I have seen multiple people try to stop the "reword or remove beautiful is better than ugly in Zen of Python." The discussion was going in circles and evolved into attacking each other's use of logical fallacies. > > Other than that, my biggest issues with the current mailing system are: > > * There's no way to keep a updated proposal of your own- if you decide to change your proposal, you have to communicate the change. Then, if you want to find the authoritative current copy, since you might've forgotten or you want to join he current discussion, then you have to dig through the emails and recursively apply the proposed change. It's just easier if people can have one proposal they can edit themselves. > * I've seen experienced people get confused about what was the current proposal because they were replying to older emails or they didn't see the email with the clear examples. > * The mailing list is frankly obscure. Python community leaders and package maintainers often are not aware or do not participate in Python-ideas. Not many people know how to use or navigate a mailing list. > * No one really promotes the mailing list, you have to go out of your way to find where new features are proposed. > * Higher discoverability means more people can participate, providing their own use cases or voting (I mean using like or dislike measures, consensus should still be how things are approved) go out of their way to find so they can propose something. Instead, I envision a forum where people can read and give their 2 cents about what features they might like to see or might not want to see. > * More people means instead of having to make decisions from sometimes subjective personal experience, we can make decisions with confidence in what other Python devs want. > > Since potential proposers will find it easier to navigate a GUI forum, they can read previous discussions to understand the reasoning, precedent behind rejected and successful features. People proposing things that have already been rejected before can be directed to open a subtopic on the older discussion.
+1, except for visibility. I have been on this list for years and those issues have been a big problem ever since. But I agree with Antoine, quantity is not the problem. Quality is. However, having no way to moderate efficiently means nobody does it, which means quality goes down. Since you have no way to identify who is who anyway, you can't know if the person telling you that you are out of line is an experienced member of the community or a newcomer with a lot of energy.
Another thing is that we keep having the same debates over and over. If you had the same duplication in code, it would never pass code reviews. The problem is that looking up something, or making a reference to something, is really hard on the list.
A few scenarios that seem important to me and that are badly handled by this tool: - Person A is making a long constructive argument, and person B arrives, doesn't read anything, and makes arguments against things that have been answered. It should be easy for somebody to link to the answers to this. - Somebody is making a proposal that has been already discussed and rejected several times. It should be easy to link to the discussions and conclusions about this. Even if the goal is to start the debate over again, at least we start ahead. - A is telling B this is a bad idea. It should be easy to tell if the person is experienced or not. You probably don't want to interact the same way with Victor and Yury, that have done numerous contributions to the Python core, and me, that is just a regular Python dev and don't know how the implementation work. - somebody wants to make a proposal. It should be easy to search if similar proposals already have been made, and read __ a summary __ of what happened. The bar to write a PEP is too high to serve that purpose: most proposals don't ever leave the list.
From rosuav at gmail.com Wed Sep 19 09:28:09 2018 From: rosuav at gmail.com (Chris Angelico) Date: Wed, 19 Sep 2018 23:28:09 +1000 Subject: [Python-ideas] Moving to another forum system where In-Reply-To: <80c9abc2-6936-4647-b5b0-d637b34f72b4@gmail.com> References: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> <80c9abc2-6936-4647-b5b0-d637b34f72b4@gmail.com> Message-ID: On Wed, Sep 19, 2018 at 11:23 PM Michel Desmoulin wrote: > - A is telling B this is a bad idea. It should be easy to tell if the > person is experienced or not. You probably don't want to interact the > same way with Victor and Yury, that have done numerous contributions to > the Python core, and me, that is just a regular Python dev and don't > know how the implementation work. Hmm, I'm not sure about this. Shouldn't a person's arguments be assessed on their own merit, rather than "oh, so-and-so said it so it must be right"? But if you want to research the people who are posting, you're welcome to do that. The list of core dev experts is on the devguide: https://devguide.python.org/experts/ Translating those usernames back into real names would be done via BPO, I think. ChrisA
From desmoulinmichel at gmail.com Wed Sep 19 09:40:42 2018 From: desmoulinmichel at gmail.com (Michel Desmoulin) Date: Wed, 19 Sep 2018 15:40:42 +0200 Subject: [Python-ideas] Moving to another forum system where In-Reply-To: References: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> <80c9abc2-6936-4647-b5b0-d637b34f72b4@gmail.com> Message-ID: <2ad9529f-eccc-52df-e163-ae72b9232d54@gmail.com> Le 19/09/2018 à 15:28, Chris Angelico a écrit : > On Wed, Sep 19, 2018 at 11:23 PM Michel Desmoulin > wrote: >> - A is telling B this is a bad idea. It should be easy to tell if the >> person is experienced or not. You probably don't want to interact the >> same way with Victor and Yury, that have done numerous contributions to >> the Python core, and me, that is just a regular Python dev and don't >> know how the implementation work. > > Hmm, I'm not sure about this. Shouldn't a person's arguments be > assessed on their own merit, rather than "oh, so-and-so said it so it > must be right"? "Merit" is something hard to evaluate; having context helps. If somebody comes and says "this is hard to implement so I doubt it will pass", Tim Peters does know better than the average Joe.
If somebody says, "I advise you to do things the other way around, it works better on this mailing list", you will consider the advice more strongly if the author has been on the list 10 years rather than 10 days. Above all, if 2 people have opposite views and they both make sense, having the context of who they are helps. It's the same as if somebody gives you health advice. You do want to listen to everybody, but it's nice to know who is a doctor, and who is somebody who repeats Facebook posts. It helps to decide. > > But if you want to research the people who are posting, you're welcome > to do that. The list of core dev experts is on the devguide: > > https://devguide.python.org/experts/ > > Translating those usernames back into real names would be done via BPO, I think. This is a good summary of the problem with the list: you can do anything you want, but it costs you time and effort. And since you have many things to do, cumulatively, it's a lot of time and effort. I read all the posts and answered 2 mails on the list today. It took me 40 minutes. And I have been on the list for a long time, so I know how the whole thing works and I'm pretty fast at doing this. Who can spend a lot of time every day, and yet feel just barely part of the discussion? Who will take the time to do things right? And among those few people, couldn't they do more good things if we'd save them time and energy? Let's make the tool work for the community, and not against it. I agree that the mailing list is a great format for things like Python-dev. However, it's not a good fit for Python-ideas: we reached its limits a long time ago. Most of the real decisions are actually taken outside of it, with more direct channels in the small groups of contributors. It slows down the decision process and it wastes a lot of good will. > > ChrisA > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ >
From jamtlu at gmail.com Wed Sep 19 11:54:20 2018 From: jamtlu at gmail.com (James Lu) Date: Wed, 19 Sep 2018 11:54:20 -0400 Subject: [Python-ideas] Moving to another forum system where In-Reply-To: References: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> Message-ID: Oh wow, Google Groups is actually a much better interface. Any better forum software needs a system where people can voluntarily leave comments or feedback that is lower-priority. I'm not sure if Discourse has this, actually. Reddit comments are extremely compact, as are Stack Overflow comments. I was going to propose that the PSF twitter account post a link to https://groups.google.com/forum/#!topic/python-ideas/, but I was worried that getting more subjective personal experiences might undesirably decrease the signal-to-noise ratio. On Wed, Sep 19, 2018 at 12:48 AM Franklin? Lee < leewangzhong+python at gmail.com> wrote: > On Tue, Sep 18, 2018 at 8:21 PM James Lu wrote: > > > > > Is that really an issue here? I personally haven't seen threads where > > > Brett tried to stop an active discussion, but people ignored him and > > > kept fighting. > > Not personally with Brett, but I have seen multiple people try to stop > the "reword or remove beautiful is better than ugly in Zen of Python." The > discussion was going in circles and evolved into attacking each other's use > of logical fallacies. > > I disagree with your description, of course, but that's not important > right now.
> > Multiple people *without any authority in that forum* tried to stop a > discussion, and failed. Why would it be any different if it happened > in a forum? Those same people still wouldn't have the power to lock > the discussion. They could only try to convince others to stop. > > If the ones with authority wanted to completely shut down the > discussion, they can do so now. The only thing that a forum adds is, > when they say stop, no one can decide to ignore them. If no one is > ignoring them now, then locking powers don't add anything. > > > Other than that, my biggest issues with the current mailing system are: > > > > * There?s no way to keep a updated proposal of your own- if you decide > to change your proposal, you have to communicate the change. Then, if you > want to find the authoritative current copy, since you might?ve forgotten > or you want to join he current discussion, then you have to dig through > the emails and recursively apply the proposed change. It?s just easier if > people can have one proposal they can edit themselves. > > * I?ve seen experienced people get confused about what was the current > proposal because they were replying to older emails or they didn?t see the > email with the clear examples. > > I agree that editing is a very useful feature. In a large discussion, > newcomers can comment after reading only the first few posts, and if > the first post has an easily-misunderstood line, you'll get people > talking about it. > > For proposals, I'm concerned that many forums don't have version > history in their editing tools (Reddit being one such discussion > site). Version history can be useful in understanding old comments. > Instead, you'd have to put it up on a repo and link to it. Editing > will help when you realize you should move your proposal to a public > repo. > > > * The mailing list is frankly obscure. Python community leaders and > package maintainers often are not aware or do not participate in > Python-ideas. Not many people know how to use or navigate a mailing list. > > * No one really promotes the mailing list, you have to go out of your > way to find where new features are proposed. > > * Higher discoverability means more people can participate, providing > their own use cases or voting (I mean using like or dislike measures, > consensus should still be how things are approved) go out of their way to > find so they can propose something. Instead, I envision a forum where > people can read and give their 2 cents about what features they might like > to see or might not want to see. > > Some of these problems are not about mailing lists. > > Whether a forum is more accessible can go either way. A mailing list > is more accessible because everyone has access to email, and it > doesn't require making another account. It is less accessible because > people might get intimidated by such old interfaces or culture (like > proper quoting etiquette, or when to switch to private replies). > Setting up an email interface to a forum can be a compromise. > > > * More people means instead of having to make decisions from > sometimes subjective personal experience, we can make decisions with > confidence in what other Python devs want. > > I don't agree. You don't get more objective by getting a larger > self-selected sample, not without carefully designing who will > self-select. > > But getting more people means getting MORE subjective personal > experiences, which is good. 
Some proposals need more voices, like any > proposal that is meant to help new programmers. You want to hear from > people who still vividly remember their experiences learning Python. > > On the other hand, getting more people necessarily means more noise > (no matter what system you use), and less time for new people to > acclimate. > > > Since potential proposers will find it easier to navigate a GUI forum, > they can read previous discussions to understand the reasoning, precedent > behind rejected and successful features. People proposing things that have > already been rejected before can be directed to open a subtopic on the > older discussion. > > A kind of GUI version already exists, precisely because this is a > public mailing list. Google Groups provides a mirror of the archives. > https://groups.google.com/forum/#!forum/python-ideas > It's searchable, and possibly replyable. You can even star > conversations (but not hide them). If it isn't listed on some > python.org page, maybe it should be. > > Personally, when I want to find past discussions, I use Google with > the keyword `site:https://mail.python.org/pipermail/python-ideas/` > . I > know a lot of people don't know about that, though. Maybe it can be > listed on one of the python.org pages. > > As for subtopics, I haven't seen such things. I've seen reply > subtrees, but either they don't bump the topic (giving them little > visibility), or they do bump the topic (annoying anyone as much as a > new topic). I don't know if there is a good compromise there. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jamtlu at gmail.com Wed Sep 19 11:59:42 2018 From: jamtlu at gmail.com (James Lu) Date: Wed, 19 Sep 2018 08:59:42 -0700 (PDT) Subject: [Python-ideas] Moving to another forum system where In-Reply-To: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> References: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> Message-ID: > > Most of the real decisions are actually taken > outside of it, with more direct channels in the small groups of > contributors. > It would be very nice if there was more transparency in this process. The language is better if more subjective personal experience heard- but to make that happen, the forum experience must be better for both On Tuesday, September 18, 2018 at 8:21:46 PM UTC-4, James Lu wrote: > > > Is that really an issue here? I personally haven't seen threads where > > Brett tried to stop an active discussion, but people ignored him and > > kept fighting. > Not personally with Brett, but I have seen multiple people try to stop the > ?reword or remove beautiful is better than ugly in Zen of Python.? The > discussion was going in circles and evolved into attacking each other?s use > of logical fallacies. > > Other than that, my biggest issues with the current mailing system are: > > * There?s no way to keep a updated proposal of your own- if you decide to > change your proposal, you have to communicate the change. Then, if you want > to find the authoritative current copy, since you might?ve forgotten or you > want to join he current discussion, then you have to dig through the > emails and recursively apply the proposed change. It?s just easier if > people can have one proposal they can edit themselves. > * I?ve seen experienced people get confused about what was the current > proposal because they were replying to older emails or they didn?t see the > email with the clear examples. > * The mailing list is frankly obscure. 
Python community leaders and > package maintainers often are not aware or do not participate in > Python-ideas. Not many people know how to use or navigate a mailing list. > * No one really promotes the mailing list, you have to go out of your > way to find where new features are proposed. > * Higher discoverability means more people can participate, providing > their own use cases or voting (I mean using like or dislike measures, > consensus should still be how things are approved) go out of their way to > find so they can propose something. Instead, I envision a forum where > people can read and give their 2 cents about what features they might like > to see or might not want to see. > * More people means instead of having to make decisions from sometimes > subjective personal experience, we can make decisions with confidence in > what other Python devs want. > > Since potential proposers will find it easier to navigate a GUI forum, > they can read previous discussions to understand the reasoning, precedent > behind rejected and successful features. People proposing things that have > already been rejected before can be directed to open a subtopic on the > older discussion. > > > On Sep 18, 2018, at 3:19 PM, python-ideas-request at python.org wrote: > > > > Is that really an issue here? I personally haven't seen threads where > > Brett tried to stop an active discussion, but people ignored him and > > kept fighting. > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hpolak at polak.es Wed Sep 19 12:36:12 2018 From: hpolak at polak.es (Hans Polak) Date: Wed, 19 Sep 2018 18:36:12 +0200 Subject: [Python-ideas] Moving to another forum system where In-Reply-To: References: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> Message-ID: <17490e47-9562-1308-4c51-a007ef4162d7@polak.es> Just an observation. I've been a member of this mailing list since (literally) five days ago and I am receiving a busload of emails. I'm a member of Stackoverflow and I visit the Q&A site daily... and I hardly ever receive emails. I suspect Discourse would be a good match for these discussions (although I have no experience whatsoever with it). TL;DR; I would appreciate receiving less mail. Cheers, Hans -------------- next part -------------- An HTML attachment was scrubbed... URL: From mikhailwas at gmail.com Wed Sep 19 12:52:05 2018 From: mikhailwas at gmail.com (Mikhail V) Date: Wed, 19 Sep 2018 19:52:05 +0300 Subject: [Python-ideas] Moving to another forum system where In-Reply-To: References: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> Message-ID: On Wed, Sep 19, 2018 at 7:49 AM Franklin? Lee wrote: > > On Tue, Sep 18, 2018 at 8:21 PM James Lu wrote: > > > > > Is that really an issue here? I personally haven't seen threads where > > > Brett tried to stop an active discussion, but people ignored him and > > > kept fighting. > > Not personally with Brett, but I have seen multiple people try to stop the ?reword or remove beautiful is better than ugly in Zen of Python.? The discussion was going in circles and evolved into attacking each other?s use of logical fallacies. > > > Multiple people *without any authority in that forum* tried to stop a > discussion, and failed. Why would it be any different if it happened > in a forum? 
Those same people still wouldn't have the power to lock > the discussion. They could only try to convince others to stop. It would be different because some people use private mail addresses, and might not be very happy to start the day by seeing political/personal/meta/uninteresting/etc. discussions in their mailbox. This aspect alone would make _any_ forum-like approach far better than a mailing list. Mikhail
From tritium-list at sdamon.com Wed Sep 19 13:08:20 2018 From: tritium-list at sdamon.com (Alex Walters) Date: Wed, 19 Sep 2018 13:08:20 -0400 Subject: [Python-ideas] Moving to another forum system where In-Reply-To: <17490e47-9562-1308-4c51-a007ef4162d7@polak.es> References: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> <17490e47-9562-1308-4c51-a007ef4162d7@polak.es> Message-ID: <0c2401d4503b$5e509960$1af1cc20$@sdamon.com> > -----Original Message----- > From: Python-ideas list=sdamon.com at python.org> On Behalf Of Hans Polak > Sent: Wednesday, September 19, 2018 12:36 PM > To: python-ideas at python.org > Subject: Re: [Python-ideas] Moving to another forum system where > > Just an observation. I've been a member of this mailing list since (literally) > five days ago and I am receiving a busload of emails. > > I'm a member of Stackoverflow and I visit the Q&A site daily... and I hardly > ever receive emails. > > > I suspect Discourse would be a good match for these discussions (although I > have no experience whatsoever with it). > > TL;DR; I would appreciate receiving less mail. > I don't think it's unreasonable to point out that it's a *mailing list*. A firehose of email is generally a sign of good health of a mailing list. Even so, there are mitigations to the firehose effect, including, but not limited to digests and setting up your client to move mailing list posts directly to a folder (including the trash for threads you don't want to follow). I don't understand how one can sign up for a mass email discussion forum, and be surprised that it increased the amount of email they receive. It's kind of the point of the medium. > > Cheers, > Hans > >
From boxed at killingar.net Wed Sep 19 13:16:29 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Wed, 19 Sep 2018 19:16:29 +0200 Subject: [Python-ideas] Moving to another forum system where In-Reply-To: <0c2401d4503b$5e509960$1af1cc20$@sdamon.com> References: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> <17490e47-9562-1308-4c51-a007ef4162d7@polak.es> <0c2401d4503b$5e509960$1af1cc20$@sdamon.com> Message-ID: > Even so, there are mitigations to the firehose effect, including, but not limited to digests I accidentally signed up with digest mode turned on for this list first. I got five digests in as many hours and I couldn't figure out how to respond to individual threads. It's a terrible choice and I personally would recommend removing the option because it seems broken or at least unusable. / Anders
From solipsis at pitrou.net Wed Sep 19 13:27:05 2018 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 19 Sep 2018 19:27:05 +0200 Subject: [Python-ideas] Moving to another forum system where References: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> Message-ID: <20180919192705.219c9ced@fsol> On Wed, 19 Sep 2018 11:54:20 -0400 James Lu wrote: > Oh wow, Google Groups is actually a much better interface. Depends who you talk to. For me, having to use the Google Groups UI would be a strong impediment to my continued contribution. Regards Antoine.
> > Any better forum software needs a system where people can > voluntarily leave comments or feedback that is lower-priority. > I'm not sure if Discourse has this, actually. Reddit comments > are extremely compact as are Stack Overflow comments. > > I was going to propose that the PSF twitter account post a > link to https://groups.google.com/forum/#!topic/python-ideas/, > but I was worried that getting more subjective personal > experiences might undesirably decrease the signal-to-noise > ratio. > > On Wed, Sep 19, 2018 at 12:48 AM Franklin? Lee < > leewangzhong+python at gmail.com> wrote: > > > On Tue, Sep 18, 2018 at 8:21 PM James Lu wrote: > > > > > > > Is that really an issue here? I personally haven't seen threads where > > > > Brett tried to stop an active discussion, but people ignored him and > > > > kept fighting. > > > Not personally with Brett, but I have seen multiple people try to stop > > the ?reword or remove beautiful is better than ugly in Zen of Python.? The > > discussion was going in circles and evolved into attacking each other?s use > > of logical fallacies. > > > > I disagree with your description, of course, but that's not important > > right now. > > > > Multiple people *without any authority in that forum* tried to stop a > > discussion, and failed. Why would it be any different if it happened > > in a forum? Those same people still wouldn't have the power to lock > > the discussion. They could only try to convince others to stop. > > > > If the ones with authority wanted to completely shut down the > > discussion, they can do so now. The only thing that a forum adds is, > > when they say stop, no one can decide to ignore them. If no one is > > ignoring them now, then locking powers don't add anything. > > > > > Other than that, my biggest issues with the current mailing system are: > > > > > > * There?s no way to keep a updated proposal of your own- if you decide > > to change your proposal, you have to communicate the change. Then, if you > > want to find the authoritative current copy, since you might?ve forgotten > > or you want to join he current discussion, then you have to dig through > > the emails and recursively apply the proposed change. It?s just easier if > > people can have one proposal they can edit themselves. > > > * I?ve seen experienced people get confused about what was the current > > proposal because they were replying to older emails or they didn?t see the > > email with the clear examples. > > > > I agree that editing is a very useful feature. In a large discussion, > > newcomers can comment after reading only the first few posts, and if > > the first post has an easily-misunderstood line, you'll get people > > talking about it. > > > > For proposals, I'm concerned that many forums don't have version > > history in their editing tools (Reddit being one such discussion > > site). Version history can be useful in understanding old comments. > > Instead, you'd have to put it up on a repo and link to it. Editing > > will help when you realize you should move your proposal to a public > > repo. > > > > > * The mailing list is frankly obscure. Python community leaders and > > package maintainers often are not aware or do not participate in > > Python-ideas. Not many people know how to use or navigate a mailing list. > > > * No one really promotes the mailing list, you have to go out of your > > way to find where new features are proposed. 
> > > * Higher discoverability means more people can participate, providing > > their own use cases or voting (I mean using like or dislike measures, > > consensus should still be how things are approved) go out of their way to > > find so they can propose something. Instead, I envision a forum where > > people can read and give their 2 cents about what features they might like > > to see or might not want to see. > > > > Some of these problems are not about mailing lists. > > > > Whether a forum is more accessible can go either way. A mailing list > > is more accessible because everyone has access to email, and it > > doesn't require making another account. It is less accessible because > > people might get intimidated by such old interfaces or culture (like > > proper quoting etiquette, or when to switch to private replies). > > Setting up an email interface to a forum can be a compromise. > > > > > * More people means instead of having to make decisions from > > sometimes subjective personal experience, we can make decisions with > > confidence in what other Python devs want. > > > > I don't agree. You don't get more objective by getting a larger > > self-selected sample, not without carefully designing who will > > self-select. > > > > But getting more people means getting MORE subjective personal > > experiences, which is good. Some proposals need more voices, like any > > proposal that is meant to help new programmers. You want to hear from > > people who still vividly remember their experiences learning Python. > > > > On the other hand, getting more people necessarily means more noise > > (no matter what system you use), and less time for new people to > > acclimate. > > > > > Since potential proposers will find it easier to navigate a GUI forum, > > they can read previous discussions to understand the reasoning, precedent > > behind rejected and successful features. People proposing things that have > > already been rejected before can be directed to open a subtopic on the > > older discussion. > > > > A kind of GUI version already exists, precisely because this is a > > public mailing list. Google Groups provides a mirror of the archives. > > https://groups.google.com/forum/#!forum/python-ideas > > It's searchable, and possibly replyable. You can even star > > conversations (but not hide them). If it isn't listed on some > > python.org page, maybe it should be. > > > > Personally, when I want to find past discussions, I use Google with > > the keyword `site:https://mail.python.org/pipermail/python-ideas/` > > . I > > know a lot of people don't know about that, though. Maybe it can be > > listed on one of the python.org pages. > > > > As for subtopics, I haven't seen such things. I've seen reply > > subtrees, but either they don't bump the topic (giving them little > > visibility), or they do bump the topic (annoying anyone as much as a > > new topic). I don't know if there is a good compromise there. > > > From hpolak at polak.es Thu Sep 20 03:38:10 2018 From: hpolak at polak.es (Hans Polak) Date: Thu, 20 Sep 2018 09:38:10 +0200 Subject: [Python-ideas] Moving to another forum system where In-Reply-To: <0c2401d4503b$5e509960$1af1cc20$@sdamon.com> References: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> <17490e47-9562-1308-4c51-a007ef4162d7@polak.es> <0c2401d4503b$5e509960$1af1cc20$@sdamon.com> Message-ID: <7419dc94-2e99-2d2b-facc-73dd4343932f@polak.es> > I don?t think its unreasonable to point out that it?s a *mailing list*. 
A firehose of email is generally a sign of good health of a mailing list. Even so, there are mitigations to the firehose effect, including, but not limited to digests and setting up your client to move mailing list posts directly to a folder (including the trash for threads you don't want to follow). I don't understand how one can sign up for a mass email discussion forum, and be surprised that it increased the amount of email they receive. It's kind of the point of the medium. > Right you are, Alex. I don't think it's unreasonable to point out that the title of this thread is "Moving to another forum". If you want to contribute to Python-ideas you *have to* subscribe to the mailing list. Let me just say that I second the idea of moving to another forum. I already move most mail automatically to the trash folder and read it there before eliminating it. My inbox contains exactly four emails at the moment, FYI. Cheers, Hans -------------- next part -------------- An HTML attachment was scrubbed... URL:
From chris.barker at noaa.gov Thu Sep 20 04:16:10 2018 From: chris.barker at noaa.gov (Chris Barker) Date: Thu, 20 Sep 2018 10:16:10 +0200 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: References: <78a98351-b4e2-0096-2554-ac00238a1b30@mgmiller.net> <23456.47873.934149.250940@turnbull.sk.tsukuba.ac.jp> Message-ID: On Tue, Sep 18, 2018 at 9:05 PM, Franklin? Lee < leewangzhong+python at gmail.com> wrote: > > What people may really be clamoring for is a larger moderation team, > or a heavier hand. They want more enforcement, not more effective > enforcement. > Or maybe clamoring for nothing -- it's just not that hard to ignore a thread .... Frankly, I think the bigger issue is all too human -- we get sucked in and participate when we really know we shouldn't (or maybe that's just me). And I'm having a hard time figuring out how moderation would actually result in the "good" discussion we really want in an example like the "beautiful is better than ugly" issue, without some trusted individual approving every single post -- I don't imagine anyone wants to do that. Let's just keep it on email -- I, at least, find I never participate in any other type of discussion forum regularly. -CHB > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL:
From cs at cskk.id.au Thu Sep 20 04:20:55 2018 From: cs at cskk.id.au (Cameron Simpson) Date: Thu, 20 Sep 2018 18:20:55 +1000 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: References: Message-ID: <20180920082055.GA98218@cskk.homeip.net> On 20Sep2018 10:16, Chris Barker - NOAA Federal wrote: >Let's just keep it on email -- I, at least, find I never participate in any >other type of discussion forum regularly. As do I. Email comes to me. Forums, leaving aside their ergonomic horrors (subjective), require a visit.
Cheers, Cameron Simpson
From desmoulinmichel at gmail.com Thu Sep 20 04:25:17 2018 From: desmoulinmichel at gmail.com (Michel Desmoulin) Date: Thu, 20 Sep 2018 10:25:17 +0200 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: <20180920082055.GA98218@cskk.homeip.net> References: <20180920082055.GA98218@cskk.homeip.net> Message-ID: Le 20/09/2018 à 10:20, Cameron Simpson a écrit : > On 20Sep2018 10:16, Chris Barker - NOAA Federal > wrote: >> Let's just keep it on email -- I, at least, find I never participate >> in any >> other type of discussion forum regularly. > > As do I. Email comes to me. Forums, leaving aside their ergonomic > horrors (subjective), require a visit. Good forums have RSS for that purpose. Besides, it's unlikely that one has to be kept up to date on a daily basis on what's going on on Python-ideas with such accuracy that one needs instant notifications that a new entry is available. > > Cheers, > Cameron Simpson > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/
From turnbull.stephen.fw at u.tsukuba.ac.jp Thu Sep 20 05:13:43 2018 From: turnbull.stephen.fw at u.tsukuba.ac.jp (Stephen J. Turnbull) Date: Thu, 20 Sep 2018 18:13:43 +0900 Subject: [Python-ideas] Moving to another forum system where In-Reply-To: References: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> Message-ID: <23459.25799.174825.591496@turnbull.sk.tsukuba.ac.jp> Michael Selik writes: > However, PEP 1 does not give instruction on how to evaluate whether > that discussion has been completed satisfactorily. That's because completion of discussion has never been a requirement for writing a PEP. Writing a PEP is a lot more effort than writing an email. The purposes of initiating discussions are 1. Avoid duplication. Nobody has encyclopedic knowledge of the hundreds of PEPs anymore, but the lists do. 2. Gauge feasibility of the proposal. Some are non-starters for reasons of "Pythonicity", others are extremely difficult to implement given Python internals or constraints like LL(1) syntax in the parser. 3. Gauge interest in the content of the proposal. If the protagonists think it's worth it after that, they write a PEP. Typically the discussion continues on list during the drafting.
From arj.python at gmail.com Thu Sep 20 05:46:10 2018 From: arj.python at gmail.com (Abdur-Rahmaan Janhangeer) Date: Thu, 20 Sep 2018 13:46:10 +0400 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: References: <78a98351-b4e2-0096-2554-ac00238a1b30@mgmiller.net> <23456.47873.934149.250940@turnbull.sk.tsukuba.ac.jp> Message-ID: i miss a +1 button On Thu, Sep 20, 2018 at 12:17 PM Chris Barker via Python-ideas < python-ideas at python.org> wrote: > > Let's just keep it on email -- I, at least, find I never participate in > any other type of discussion forum regularly. > > ... > -- > > Christopher Barker, Ph.D. > Oceanographer > -- Abdur-Rahmaan Janhangeer https://github.com/abdur-rahmaanj Mauritius -------------- next part -------------- An HTML attachment was scrubbed...
URL: From jamtlu at gmail.com Thu Sep 20 07:37:01 2018 From: jamtlu at gmail.com (James Lu) Date: Thu, 20 Sep 2018 07:37:01 -0400 Subject: [Python-ideas] Python-ideas Digest, Vol 142, Issue 110 In-Reply-To: References: Message-ID: <4EB9A5C9-4EF2-43ED-8501-334E6DD546E9@gmail.com> > Frankly, I think the bigger issue is all too human -- we get sucked in and > participate when we really know we shouldn't (or maybe that's just me). > That may be why some people misbehave, but we have no way of discouraging that misbehavior. > And I'm having a hard time figuring out how moderation would actually > result in the "good" discussion we really want in an example like the > "beautiful is better than ugly" issue, without someone trusted individual > approving every single post -- I don't imagine anyone wants to do that. In a forum, the beautiful is better than ugly issue would be locked. No more posts can be added. If someone wants to discuss another proposal branching off of the original discussion, they can start a new thread. If they just want to lampoon, we can courteously ask them to 1) take it elsewhere or 2) move the post to a ?malarkey? section of the forum where people don?t get notified. From jamtlu at gmail.com Thu Sep 20 07:39:13 2018 From: jamtlu at gmail.com (James Lu) Date: Thu, 20 Sep 2018 07:39:13 -0400 Subject: [Python-ideas] Moving to another forum system where moderation is possible Message-ID: <017BFC81-310A-4AAE-A1E8-D8AE4405034C@gmail.com> > Frankly, I think the bigger issue is all too human -- we get sucked in and > participate when we really know we shouldn't (or maybe that's just me). > That may be why some people misbehave, but we have no way of discouraging that misbehavior. > And I'm having a hard time figuring out how moderation would actually > result in the "good" discussion we really want in an example like the > "beautiful is better than ugly" issue, without someone trusted individual > approving every single post -- I don't imagine anyone wants to do that. In a forum, the beautiful is better than ugly issue would be locked. No more posts can be added. If someone wants to discuss another proposal branching off of the original discussion, they can start a new thread. If they just want to lampoon, we can courteously ask them to 1) take it elsewhere or 2) move the post to a ?malarkey? section of the forum where people don?t get notified. -------------- next part -------------- An HTML attachment was scrubbed... URL: From phd at phdru.name Thu Sep 20 08:09:18 2018 From: phd at phdru.name (Oleg Broytman) Date: Thu, 20 Sep 2018 14:09:18 +0200 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: References: <78a98351-b4e2-0096-2554-ac00238a1b30@mgmiller.net> <23456.47873.934149.250940@turnbull.sk.tsukuba.ac.jp> Message-ID: <20180920120918.6ij7dzbp3pefcrb3@phdru.name> On Thu, Sep 20, 2018 at 01:46:10PM +0400, Abdur-Rahmaan Janhangeer wrote: > i miss a +1 button It's absence is a big advantage. We're not a social network with "likes". We don't need a bunch of argumentless "voting". > -- > Abdur-Rahmaan Janhangeer > https://github.com/abdur-rahmaanj > Mauritius Oleg. -- Oleg Broytman https://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. From mehaase at gmail.com Thu Sep 20 09:05:33 2018 From: mehaase at gmail.com (Mark E. 
Haase) Date: Thu, 20 Sep 2018 09:05:33 -0400 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: <20180920120918.6ij7dzbp3pefcrb3@phdru.name> References: <78a98351-b4e2-0096-2554-ac00238a1b30@mgmiller.net> <23456.47873.934149.250940@turnbull.sk.tsukuba.ac.jp> <20180920120918.6ij7dzbp3pefcrb3@phdru.name> Message-ID: On Thu, Sep 20, 2018 at 8:09 AM Oleg Broytman wrote: > On Thu, Sep 20, 2018 at 01:46:10PM +0400, Abdur-Rahmaan Janhangeer < > arj.python at gmail.com> wrote: > > i miss a +1 button > > It's absence is a big advantage. We're not a social network with > "likes". We don't need a bunch of argumentless "voting" I would also appreciate a +1 button. Many e-mails to this list do nothing more than say +1 or -1 without much added discussion. It's difficult to keep track of all these disparate, unstructured votes in threads that contain a hundred e-mails and spin off into subthreads. There are also a lot of lurkers who don't want to gum up inboxes with +1's and -1's, so responses are naturally biased towards the more opinionated and active users of the list. GitHub added +1 and -1 buttons for exactly this reason: to reduce needless comment on Issues and Pull Requests. (If I could have +1'ed Abdur-Rahmaan's e-mail, I wouldn't have written this response.) -------------- next part -------------- An HTML attachment was scrubbed... URL: From arj.python at gmail.com Thu Sep 20 09:07:59 2018 From: arj.python at gmail.com (Abdur-Rahmaan Janhangeer) Date: Thu, 20 Sep 2018 17:07:59 +0400 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: <20180920120918.6ij7dzbp3pefcrb3@phdru.name> References: <78a98351-b4e2-0096-2554-ac00238a1b30@mgmiller.net> <23456.47873.934149.250940@turnbull.sk.tsukuba.ac.jp> <20180920120918.6ij7dzbp3pefcrb3@phdru.name> Message-ID: it's another phrasing of +1 or i like his reply not meaning i'd like +1 buttons in mail Abdur-Rahmaan Janhangeer Mauritius On Thu, 20 Sep 2018, 16:09 Oleg wrote: > > It's absence is a big advantage. We're not a social network with > "likes". We don't need a bunch of argumentless "voting". > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rhodri at kynesim.co.uk Thu Sep 20 09:08:55 2018 From: rhodri at kynesim.co.uk (Rhodri James) Date: Thu, 20 Sep 2018 14:08:55 +0100 Subject: [Python-ideas] Moving to another forum system where In-Reply-To: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> References: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> Message-ID: <9ad11e8b-3824-5580-d286-7f242f1ef442@kynesim.co.uk> On 18/09/18 23:37, James Lu wrote: > Other than that, my biggest issues with the current mailing system are: > > * There?s no way to keep a updated proposal of your own- if you decide to change your proposal, you have to communicate the change. Then, if you want to find the authoritative current copy, since you might?ve forgotten or you want to join he current discussion, then you have to dig through the emails and recursively apply the proposed change. It?s just easier if people can have one proposal they can edit themselves. Believe it or not, I like the fact that you can't just edit posts. I've lost count of the number of forum threads I've been on where comments to the initial post make *no sense* because that initial post is nothing like it was to start with. 
(Also it makes it easier to quote people back at themselves :-) > * I?ve seen experienced people get confused about what was the current proposal because they were replying to older emails or they didn?t see the email with the clear examples. As you said yourself, "you have to communicate the change." Even in a forum or similar. Just editing your post and expecting people to notice is not going to cut it. And yes, there is a danger that even experienced people will get confused about what is being proposed right now, but I've seen that happen on forums too. The lack of threading tends to help with that, but on the other hand it stifles breadth of debate. > * The mailing list is frankly obscure. Python community leaders and package maintainers often are not aware or do not participate in Python-ideas. Not many people know how to use or navigate a mailing list. > * No one really promotes the mailing list, you have to go out of your way to find where new features are proposed. > * Higher discoverability means more people can participate, providing their own use cases or voting (I mean using like or dislike measures, consensus should still be how things are approved) go out of their way to find so they can propose something. Instead, I envision a forum where people can read and give their 2 cents about what features they might like to see or might not want to see. -1. (I'm British, I'm allowed to be ironic.) Approximately none of this has anything to do with the medium. If the mailing list is obscure (and personally I don't think it is), it just needs better advertising. A poorly advertised forum is equally undiscoverable. > * More people means instead of having to make decisions from sometimes subjective personal experience, we can make decisions with confidence in what other Python devs want. Um. Have you read this list? Mostly we can make decisions with confidence that people disagree vigorously about what they as Python devs want. Besides, I've never met a mailing list, forum or any group of more than about twelve people that could make decisions in a timely manner (or at all in some cases), and I've been a member of a few that were supposed to. Eventually some*one* has to decide to do or allow something, traditionally the BDFL. > Since potential proposers will find it easier to navigate a GUI forum, they can read previous discussions to understand the reasoning, precedent behind rejected and successful features. People proposing things that have already been rejected before can be directed to open a subtopic on the older discussion. Your faith in graphical interfaces is touching, but I've seen some stinkers. It is no easier to go through the average forum's thousands of prior discussions looking for the topic you are interested in than it is to go through the mailing list archive (and frankly googling is your best bet for both). People don't always do it here, but they don't always do it on any of the forums I'm on either, and resurrecting a moribund thread is no different to resurrecting a moribund topic. 
-- Rhodri James *-* Kynesim Ltd From chris.barker at noaa.gov Thu Sep 20 09:47:13 2018 From: chris.barker at noaa.gov (Chris Barker) Date: Thu, 20 Sep 2018 15:47:13 +0200 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: <017BFC81-310A-4AAE-A1E8-D8AE4405034C@gmail.com> References: <017BFC81-310A-4AAE-A1E8-D8AE4405034C@gmail.com> Message-ID: On Thu, Sep 20, 2018 at 1:39 PM, James Lu wrote: > In a forum, the beautiful is better than ugly issue would be locked. No > more posts can be added. > Exactly -- but that means we are stopping the discussion -- but we don't want to stop the discussion altogether, we want to have the productive parts of the discussion, without the non-productive parts -- not sure there is any technical solution to that problem. - CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From phd at phdru.name Thu Sep 20 10:23:33 2018 From: phd at phdru.name (Oleg Broytman) Date: Thu, 20 Sep 2018 16:23:33 +0200 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: References: <23456.47873.934149.250940@turnbull.sk.tsukuba.ac.jp> <20180920120918.6ij7dzbp3pefcrb3@phdru.name> Message-ID: <20180920142333.zppltvgru53gzvzk@phdru.name> On Thu, Sep 20, 2018 at 09:05:33AM -0400, "Mark E. Haase" wrote: > On Thu, Sep 20, 2018 at 8:09 AM Oleg Broytman wrote: > > > On Thu, Sep 20, 2018 at 01:46:10PM +0400, Abdur-Rahmaan Janhangeer < > > arj.python at gmail.com> wrote: > > > i miss a +1 button > > > > It's absence is a big advantage. We're not a social network with > > "likes". We don't need a bunch of argumentless "voting" > > It's difficult to > keep track of all these disparate, unstructured votes There is no need to track them. > GitHub added +1 and -1 buttons GitHub is a social network so it's natural for them to add "likes". > (If I could have +1'ed Abdur-Rahmaan's e-mail, I wouldn't have written this > response.) That message was rather bad in my not so humble opinion -- it was just "I want my +1 button" without any argument. Your message is much better as it have arguments. See, the absence of the button work! We're proposing and *discussing* things here not "likes" each other. Write your arguments or be silent. Oleg. -- Oleg Broytman https://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. From ethan at stoneleaf.us Thu Sep 20 10:33:16 2018 From: ethan at stoneleaf.us (Ethan Furman) Date: Thu, 20 Sep 2018 07:33:16 -0700 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: <20180920142333.zppltvgru53gzvzk@phdru.name> References: <23456.47873.934149.250940@turnbull.sk.tsukuba.ac.jp> <20180920120918.6ij7dzbp3pefcrb3@phdru.name> <20180920142333.zppltvgru53gzvzk@phdru.name> Message-ID: <5BA3AFAC.4080406@stoneleaf.us> On 09/20/2018 07:23 AM, Oleg Broytman wrote: > On Thu, Sep 20, 2018 at 09:05:33AM -0400, Mark E. Haase wrote: >> On Thu, Sep 20, 2018 at 8:09 AM Oleg Broytman wrote: >>> On Thu, Sep 20, 2018 at 01:46:10PM +0400, Abdur-Rahmaan Janhangeer wrote: >>>> i miss a +1 button >>> >>> It's absence is a big advantage. We're not a social network with >>> "likes". 
We don't need a bunch of argumentless "voting" >> >> It's difficult to keep track of all these disparate, unstructured votes > > There is no need to track them. > >> GitHub added +1 and -1 buttons > > GitHub is a social network so it's natural for them to add "likes". > >> (If I could have +1'ed Abdur-Rahmaan's e-mail, I wouldn't have written this >> response.) > > That message was rather bad in my not so humble opinion -- it was > just "I want my +1 button" without any argument. Your message is much > better as it have arguments. See, the absence of the button work! > > We're proposing and *discussing* things here not "likes" each other. > Write your arguments or be silent. The number of people who have the same argument is also a factor. I would rather have the argument once with 15 +1s than 16 posts all saying the same thing. A "like" on an argument means "I agree" -- which is valuable information to have. -- ~Ethan~ From jamtlu at gmail.com Thu Sep 20 10:48:18 2018 From: jamtlu at gmail.com (James Lu) Date: Thu, 20 Sep 2018 10:48:18 -0400 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: References: <017BFC81-310A-4AAE-A1E8-D8AE4405034C@gmail.com> Message-ID: Were there any productive parts to that conversation? Sent from my iPhone > On Sep 20, 2018, at 9:47 AM, Chris Barker wrote: > >> On Thu, Sep 20, 2018 at 1:39 PM, James Lu wrote: >> In a forum, the beautiful is better than ugly issue would be locked. No more posts can be added. > > Exactly -- but that means we are stopping the discussion -- but we don't want to stop the discussion altogether, we want to have the productive parts of the discussion, without the non-productive parts -- not sure there is any technical solution to that problem. > > - CHB > > -- > > Christopher Barker, Ph.D. > Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From jamtlu at gmail.com Thu Sep 20 11:08:53 2018 From: jamtlu at gmail.com (James Lu) Date: Thu, 20 Sep 2018 11:08:53 -0400 Subject: [Python-ideas] Moving to another forum system where moderation is possible Message-ID: <2D8C7980-FD08-4650-BDEE-96A00F210A56@gmail.com> > It's absence is a big advantage. We're not a social network with > "likes". We don't need a bunch of argumentless "voting". Up/ down voting indicates how much consensus we have among the entire community- an expert might agree with another expert?s arguments but not have anything else to add, and an outsider might agree with the scenario an expert presents without having much more to add. Granular up/down votes are useful. > Believe it or not, I like the fact that you can't just edit posts. I've > lost count of the number of forum threads I've been on where comments to > the initial post make *no sense* because that initial post is nothing > like it was to start with. There is version history. Not all of us have the time to read through every single post beforehand to get the current state of discussion. Hmm, what if we used GitHub as a discussion forum? You?d make a pull request with an informal proposal to a repository. Then people can comment on lines in the diff and reply to each other there. The OP can update their branch to change their proposal- expired/stale comments on old diffs are automatically hidden. 
You can also create a competing proposal by forking from the OP's branch and sending a new PR.

> Just editing your post and expecting people to notice
> is not going to cut it.

You would ping someone after editing the post.

> Approximately none of this has anything to do with the medium. If the
> mailing list is obscure (and personally I don't think it is), it just
> needs better advertising. A poorly advertised forum is equally
> undiscoverable.

It does have to do with the medium. First, people aren't used to mailing lists -- but that's not what's important here. If the PSF advertised for people to sign up over, say, Twitter, then we'd get even more email. More +1 and more -1. Most of us don't want more mailing list volume.

Because you can't easily find an overview, people will post arguments that have already been made if they don't have the extreme patience to read all that has been said before.

For the rest of your comments, I advise you to read the earlier discussion that other people had in response to my email.

> That message was rather bad in my not so humble opinion -- it was
> just "I want my +1 button" without any argument. Your message is much
> better as it has arguments. See, the absence of the button works!
>
> We're proposing and *discussing* things here, not "liking" each other.
> Write your arguments or be silent.

Please respond to the actual arguments in both of the two emails that have arguments in support of +1/-1.

+1/-1 reflects which usage scenarios people find valuable, since Python features sometimes do benefit one group to the detriment of another, or use syntax/behavior for one thing that could be used for another thing, and some programming styles or use cases would prefer one kind of that syntax/behavior.

From mehaase at gmail.com  Thu Sep 20 11:17:18 2018
From: mehaase at gmail.com (Mark E. Haase)
Date: Thu, 20 Sep 2018 11:17:18 -0400
Subject: [Python-ideas] Moving to another forum system where moderation is possible
In-Reply-To: <5BA3AFAC.4080406@stoneleaf.us>
References: <23456.47873.934149.250940@turnbull.sk.tsukuba.ac.jp> <20180920120918.6ij7dzbp3pefcrb3@phdru.name> <20180920142333.zppltvgru53gzvzk@phdru.name> <5BA3AFAC.4080406@stoneleaf.us>
Message-ID: 

On Thu, Sep 20, 2018 at 10:33 AM Ethan Furman wrote:
> On 09/20/2018 07:23 AM, Oleg Broytman wrote:
> > We're proposing and *discussing* things here, not "liking" each other.
> > Write your arguments or be silent.
>
> The number of people who have the same argument is also a factor. I would
> rather have the argument once with 15 +1s
> than 16 posts all saying the same thing. A "like" on an argument means "I
> agree" -- which is valuable information to have.

+1
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ethan at stoneleaf.us  Thu Sep 20 11:34:14 2018
From: ethan at stoneleaf.us (Ethan Furman)
Date: Thu, 20 Sep 2018 08:34:14 -0700
Subject: [Python-ideas] Moving to another forum system where moderation is possible
In-Reply-To: 
References: <017BFC81-310A-4AAE-A1E8-D8AE4405034C@gmail.com>
Message-ID: <5BA3BDF6.9050405@stoneleaf.us>

On 09/20/2018 07:48 AM, James Lu wrote:
> Were there any productive parts to that conversation?

Out of 85 messages, there was 1 for sure, possibly three more. In the 95-message "Retire or reword the namesake of the Language" thread there were 2. Obviously my opinion, but I hope everyone would agree that the signal-to-noise ratio of those two threads was low.
-- ~Ethan~ From mike at selik.org Thu Sep 20 11:52:21 2018 From: mike at selik.org (Michael Selik) Date: Thu, 20 Sep 2018 08:52:21 -0700 Subject: [Python-ideas] Moving to another forum system where In-Reply-To: <23459.25799.174825.591496@turnbull.sk.tsukuba.ac.jp> References: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> <23459.25799.174825.591496@turnbull.sk.tsukuba.ac.jp> Message-ID: On Thu, Sep 20, 2018 at 2:13 AM Stephen J. Turnbull wrote: > Michael Selik writes: > > > However, PEP 1 does not give instruction on how to evaluate whether > > that discussion has been completed satisfactorily. > > That's because completion of discussion has never been a requirement > for writing a PEP. Not for drafting, but for submitting. For my own PEP submission, I received the specific feedback that it needed a "proper title" before being assigned a PEP number. My goal for submitting the draft was to receive a PEP number to avoid the awkwardness of discussing a PEP without an obvious title. Perhaps PEP 1 should be revised to clarify the expectations for PEP submission. From boxed at killingar.net Thu Sep 20 12:09:17 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Thu, 20 Sep 2018 18:09:17 +0200 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: <2D8C7980-FD08-4650-BDEE-96A00F210A56@gmail.com> References: <2D8C7980-FD08-4650-BDEE-96A00F210A56@gmail.com> Message-ID: <409A5887-D11A-4F61-ACB7-0C3E0B64BD00@killingar.net> +1 to everything James said. This otherwise pointless mail is further evidence he?s right. On 20 Sep 2018, at 17:08, James Lu wrote: >> It's absence is a big advantage. We're not a social network with >> "likes". We don't need a bunch of argumentless "voting". > > Up/ down voting indicates how much consensus we have among the entire community- an expert might agree with another expert?s arguments but not have anything else to add, and an outsider might agree with the scenario an expert presents without having much more to add. Granular up/down votes are useful. > >> Believe it or not, I like the fact that you can't just edit posts. I've >> lost count of the number of forum threads I've been on where comments to >> the initial post make *no sense* because that initial post is nothing >> like it was to start with. > > There is version history. Not all of us have the time to read through every single post beforehand to get the current state of discussion. > > Hmm, what if we used GitHub as a discussion forum? You?d make a pull request with an informal proposal to a repository. Then people can comment on lines in the diff and reply to each other there. The OP can update their branch to change their proposal- expired/stale comments on old diffs are automatically hidden. > > You can also create a competing proposal by forming from the OP?s branch and sending a new PR. > >> Just editing your post and expecting people to notice >> is not going to cut it. > > You would ping someone after editing the post. > >> Approximately none of this has anything to do with the medium. If the >> mailing list is obscure (and personally I don't think it is), it just >> needs better advertising. A poorly advertised forum is equally >> undiscoverable. > > It does have to do with the medium. First, people aren?t used to mailing lists- but that?s not what?s important here. If the PSF advertised for people to sign up over say twitter, then we?d get even more email. More +1 and more -1. Most of us don?t want more mailing list volume. 
> > The fact that you can?t easily find an overview people will post arguments that have already been made if they don?t have the extreme patience to read all that has been said before. > > For the rest of your comments, I advise you to read the earlier discussion that other people had in response to my email. > >> That message was rather bad in my not so humble opinion -- it was >> just "I want my +1 button" without any argument. Your message is much >> better as it have arguments. See, the absence of the button work! >> >> We're proposing and *discussing* things here not "likes" each other. >> Write your arguments or be silent. > > Please respond to the actual arguments in both of the two emails that have arguments in support of +1/-1. > > +1/-1 reflects which usage scenarios people find valuable, since Python features sometimes do benefit one group at the detriment to another. Or use syntax/behavior for one thing that could be used for another thing, and some programming styles of python use cases would prefer one kind of that syntax/behavior. > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ From tritium-list at sdamon.com Thu Sep 20 12:20:59 2018 From: tritium-list at sdamon.com (Alex Walters) Date: Thu, 20 Sep 2018 12:20:59 -0400 Subject: [Python-ideas] Moving to another forum system where In-Reply-To: <7419dc94-2e99-2d2b-facc-73dd4343932f@polak.es> References: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> <17490e47-9562-1308-4c51-a007ef4162d7@polak.es> <0c2401d4503b$5e509960$1af1cc20$@sdamon.com> <7419dc94-2e99-2d2b-facc-73dd4343932f@polak.es> Message-ID: <0e0e01d450fd$eae19fb0$c0a4df10$@sdamon.com> > -----Original Message----- > From: Hans Polak > Sent: Thursday, September 20, 2018 3:38 AM > To: Alex Walters ; python-ideas at python.org > Subject: Re: [Python-ideas] Moving to another forum system where > > > I don?t think its unreasonable to point out that it?s a *mailing list*. A > firehose of email is generally a sign of good health of a mailing list. Even so, > there are mitigations to the firehose effect, including, but not limited to > digests and setting up your client to move mailing list posts directly to a folder > (including the trash for threads you don?t want to follow). I don't understand > how one can sign up for a mass email discussion forum, and be surprised that > it increased the amount of email they receive. It's kind of the point of the > medium. > > > Right you are, Alex. > > I don?t think its unreasonable to point out that the title of this thread is > "Moving to another forum". If you want to contribute Python Ideas you have > to subscribe to the mailing list. > I have zero sympathy for this position. First, you only need to join the list to propose major changes - everything other type of contribution can be done off list. Fixing bugs never touches a list at all unless you need to discuss backwards incompatible changes, at which point it goes to the lower volume python-dev list. Documentation changes are done on the tracker, and trivial changes are done in pull requests. The firehose of python-ideas is a barrier to entry to suggesting major changes to the language. This is a GOOD thing. 
Major changes need dedicated advocates - if they are unwilling to endure the flood of mail, they are not dedicated enough to the change, and that is an indication of how much they will contribute to actually bring that to fruition. They need a thick skin, since the idea will be folded, spindled and mutilated, usually reducing the change proposed to a single actionable item. This is what makes Python a good language - the road to changing it is incredibly tough. We should not want to make it easier.

The firehose is a virtue for this list.

> Let me just say that I second the idea of moving to another forum.
>
> I already move most mail automatically to the trash folder and read it there
> before eliminating it. My inbox contains exactly four emails at the moment,
> FYI.
>
> Cheers,
> Hans

From turnbull.stephen.fw at u.tsukuba.ac.jp  Thu Sep 20 12:24:46 2018
From: turnbull.stephen.fw at u.tsukuba.ac.jp (Stephen J. Turnbull)
Date: Fri, 21 Sep 2018 01:24:46 +0900
Subject: [Python-ideas] Moving to another forum system where moderation is possible
In-Reply-To: 
References: <017BFC81-310A-4AAE-A1E8-D8AE4405034C@gmail.com>
Message-ID: <23459.51662.646816.743851@turnbull.sk.tsukuba.ac.jp>

Chris Barker via Python-ideas writes:
> On Thu, Sep 20, 2018 at 1:39 PM, James Lu wrote:
> > In a forum, the beautiful is better than ugly issue would be
> > locked. No more posts can be added.
> Exactly -- but that means we are stopping the discussion -- but we don't
> want to stop the discussion altogether,

And that's exactly what a mute on replies does. Most people will just give up, which is appropriate. People who have (what they think is) a good reason to continue can start a new thread with a link to the old one. Hardly a prohibitive barrier, if you're willing to risk banning.

I think this "feature" will do what people generally think it does: provide a strong signal to stop the discussion, and back that up with a fail-safe (if you *do* hit reply, it won't work).

Footnotes:
[1] Something rather unlikely to happen for many many months, even if the core decides to support it.

From boxed at killingar.net  Thu Sep 20 12:27:08 2018
From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=)
Date: Thu, 20 Sep 2018 18:27:08 +0200
Subject: [Python-ideas] Moving to another forum system where
In-Reply-To: 
References: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> <23459.25799.174825.591496@turnbull.sk.tsukuba.ac.jp>
Message-ID: <93AF02F1-CE05-4D91-B25C-C0D6DC1FC73D@killingar.net>

>> That's because completion of discussion has never been a requirement
>> for writing a PEP.
>
> Not for drafting, but for submitting.

Can you quote PEP 1? I think you're wrong. In general PEP 1 is frustratingly vague. Terms like "community consensus" without defining community or what numbers would constitute a consensus are not fun to read as someone who doesn't personally know any of the core devs. Further references to Guido are even more frustrating now that he's bowed out.
/ Anders From boxed at killingar.net Thu Sep 20 12:33:06 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Thu, 20 Sep 2018 18:33:06 +0200 Subject: [Python-ideas] Moving to another forum system where In-Reply-To: <0e0e01d450fd$eae19fb0$c0a4df10$@sdamon.com> References: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> <17490e47-9562-1308-4c51-a007ef4162d7@polak.es> <0c2401d4503b$5e509960$1af1cc20$@sdamon.com> <7419dc94-2e99-2d2b-facc-73dd4343932f@polak.es> <0e0e01d450fd$eae19fb0$c0a4df10$@sdamon.com> Message-ID: > The firehose of python-ideas is a barrier to entry to suggesting major changes to the language. This is a GOOD thing. Major changes need dedicated advocates - if they are unwilling to endure the flood of mail, they are not dedicated enough to the change, and that is an indication of how much they will contribute to actually bring that fruition. They need a thick skin, since the idea will be folded, spindled and mutilated, usually reducing the change proposed to a single actionable item. This is what makes python a good language - the road to changing it is incredibly tough. We should not want to make it easier. > > The firehose is a virtue for this list. You?re conflating two things here: 1. Volume of mails (which is irrelevant for people who have a more advanced mail workflow anyway as has been pointed out as an argument against the very idea that is a firehouse at all) 2. People being... let?s be nice and say.. brutally honest and change averse. Point 2 has no connection to the technology afaik. And point 1 is weak at best. / Anders From arj.python at gmail.com Thu Sep 20 12:53:39 2018 From: arj.python at gmail.com (Abdur-Rahmaan Janhangeer) Date: Thu, 20 Sep 2018 20:53:39 +0400 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: <20180920142333.zppltvgru53gzvzk@phdru.name> References: <23456.47873.934149.250940@turnbull.sk.tsukuba.ac.jp> <20180920120918.6ij7dzbp3pefcrb3@phdru.name> <20180920142333.zppltvgru53gzvzk@phdru.name> Message-ID: it was just i like chris message v/s i like a like button Abdur-Rahmaan Janhangeer https://github.com/Abdur-rahmaanJ Mauritius On Thu, 20 Sep 2018, 18:24 Oleg Broytman, wrote: > That message was rather bad in my not so humble opinion -- it was > just "I want my +1 button" without any argument. Your message is much > better as it have arguments. See, the absence of the button work! > > We're proposing and *discussing* things here not "likes" each other. > Write your arguments or be silent. > > Oleg. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike at selik.org Thu Sep 20 12:56:29 2018 From: mike at selik.org (Michael Selik) Date: Thu, 20 Sep 2018 09:56:29 -0700 Subject: [Python-ideas] Moving to another forum system where In-Reply-To: <93AF02F1-CE05-4D91-B25C-C0D6DC1FC73D@killingar.net> References: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> <23459.25799.174825.591496@turnbull.sk.tsukuba.ac.jp> <93AF02F1-CE05-4D91-B25C-C0D6DC1FC73D@killingar.net> Message-ID: On Thu, Sep 20, 2018 at 9:27 AM Anders Hovm?ller wrote: > >> That's because completion of discussion has never been a requirement > >> for writing a PEP. > > > > Not for drafting, but for submitting. > > Can you quote pep1? I think you?re wrong. I can't remember if I pulled this quote previously (that's one of the troubles with emails): "Following a discussion on python-ideas, the proposal should be submitted as a draft PEP ..." 
Could you clarify what you think is inaccurate in the previous statements? From brett at python.org Thu Sep 20 12:58:03 2018 From: brett at python.org (Brett Cannon) Date: Thu, 20 Sep 2018 09:58:03 -0700 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: <5BA14FDC.2010807@stoneleaf.us> References: <78a98351-b4e2-0096-2554-ac00238a1b30@mgmiller.net> <23456.47873.934149.250940@turnbull.sk.tsukuba.ac.jp> <5BA14FDC.2010807@stoneleaf.us> Message-ID: On Tue, 18 Sep 2018 at 12:20 Ethan Furman wrote: > On 09/18/2018 12:05 PM, Franklin? Lee wrote: > > On Tue, Sep 18, 2018 at 2:37 PM Jonathan Goble wrote: > > >> Perhaps not, but part of that might be because stopping an active > >> discussion on a mailing list can be hard to do, so one might not even > >> try. Some discussions, I suspect, may have gone on in circles long past > >> the point where they would have been locked on a forum. With forum > >> software, it becomes much easier, and would be a more effective tool to > >> terminate discussions that are going nowhere fast and wasting > everyone's > >> time. > > True. > > > But there's no evidence that such tools would help. Software > > enforcement powers are only necessary if verbal enforcement isn't > > enough. We need the current moderators (or just Brett) to say whether > > they feel it isn't enough. > > It isn't enough. > Ethan's correct, it isn't enough. The past two weeks have been pretty horrible for me as an admin and Titus and I need to find a solution to keep this place sustainable long-term, otherwise I'm liable to burn out from running this list (and before anyone says it, more admins will not help as we have already tried that in the past). > > > What people may really be clamoring for is a larger moderation team, > > or a heavier hand. They want more enforcement, not more effective > > enforcement. > > More ineffective enforcement will be, um, ineffective. > > Let's have a test. I'm a moderator (from -List). We're* working on > avenues to improve the mailing tools and > simultaneously testing other options. I'm not seeing anything new in this > thread that will impact that one way or > another, so I'm asking for all of us to move on to other topics. > What Ethan said. :) I'm now muting this thread as it has already become a subjective debate of the value of email versus not which doesn't help me as no one has come forward with anything I didn't already know, and this is all before we have even had a chance to start an evaluation of alternatives (IOW there's nothing for anyone on either side of this debate to actually debate about when it comes to this mailing list yet :) . -------------- next part -------------- An HTML attachment was scrubbed... URL: From arj.python at gmail.com Thu Sep 20 13:06:44 2018 From: arj.python at gmail.com (Abdur-Rahmaan Janhangeer) Date: Thu, 20 Sep 2018 21:06:44 +0400 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: References: <78a98351-b4e2-0096-2554-ac00238a1b30@mgmiller.net> <23456.47873.934149.250940@turnbull.sk.tsukuba.ac.jp> <5BA14FDC.2010807@stoneleaf.us> Message-ID: also Mr Brett, i have no way of knowing moderators, though i don't trample here and there, mods words are sacred and apart from mods saying i'm a mod, i can't really tell. maybe a footer saying mod or something like that Abdur-Rahmaan Janhangeer Mauritius -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From solipsis at pitrou.net Thu Sep 20 13:07:53 2018 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 20 Sep 2018 19:07:53 +0200 Subject: [Python-ideas] Moving to another forum system where moderation is possible References: <78a98351-b4e2-0096-2554-ac00238a1b30@mgmiller.net> <23456.47873.934149.250940@turnbull.sk.tsukuba.ac.jp> <5BA14FDC.2010807@stoneleaf.us> Message-ID: <20180920190753.00f120a7@fsol> On Thu, 20 Sep 2018 09:58:03 -0700 Brett Cannon wrote: > > Ethan's correct, it isn't enough. The past two weeks have been pretty > horrible for me as an admin and Titus and I need to find a solution to keep > this place sustainable long-term, otherwise I'm liable to burn out from > running this list (and before anyone says it, more admins will not help as > we have already tried that in the past). I think more admins would definitely help *if* they had the help of a modern discussion system with built-in moderation options. Regards Antoine. From boxed at killingar.net Thu Sep 20 13:25:00 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Thu, 20 Sep 2018 19:25:00 +0200 Subject: [Python-ideas] Moving to another forum system where In-Reply-To: References: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> <23459.25799.174825.591496@turnbull.sk.tsukuba.ac.jp> <93AF02F1-CE05-4D91-B25C-C0D6DC1FC73D@killingar.net> Message-ID: <66882BC6-4442-46F3-8547-23EF85237E13@killingar.net> >>> Not for drafting, but for submitting. >> >> Can you quote pep1? I think you?re wrong. > > I can't remember if I pulled this quote previously (that's one of the > troubles with emails): "Following a discussion on python-ideas, the > proposal should be submitted as a draft PEP ..." > > Could you clarify what you think is inaccurate in the previous statements? It later states this is just to avoid submitting bad ideas. It?s not actually a requirement but a (supposed) kindness. / Anders From mike at selik.org Thu Sep 20 13:46:07 2018 From: mike at selik.org (Michael Selik) Date: Thu, 20 Sep 2018 10:46:07 -0700 Subject: [Python-ideas] Moving to another forum system where In-Reply-To: <66882BC6-4442-46F3-8547-23EF85237E13@killingar.net> References: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> <23459.25799.174825.591496@turnbull.sk.tsukuba.ac.jp> <93AF02F1-CE05-4D91-B25C-C0D6DC1FC73D@killingar.net> <66882BC6-4442-46F3-8547-23EF85237E13@killingar.net> Message-ID: On Thu, Sep 20, 2018 at 10:25 AM Anders Hovm?ller wrote: > >>> Not for drafting, but for submitting. > >> > >> Can you quote pep1? I think you?re wrong. > > > > I can't remember if I pulled this quote previously (that's one of the > > troubles with emails): "Following a discussion on python-ideas, the > > proposal should be submitted as a draft PEP ..." > > > > Could you clarify what you think is inaccurate in the previous statements? > > It later states this is just to avoid submitting bad ideas. It?s not actually a requirement but a (supposed) kindness. Some regulations are de jure, others are de facto. 
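(Several messages in this thread recommend filing list traffic into a folder rather than letting the firehose pile up in the inbox. That advice is easy to automate as a one-off script as well as a mail rule; below is a minimal sketch, assuming an IMAP mailbox. The host, account, credentials, and folder name are placeholders, not anything taken from the thread.)

    # Minimal sketch: file python-ideas traffic into its own IMAP folder.
    # Host, login, and folder name are hypothetical placeholders.
    import imaplib

    HOST = "imap.example.org"
    USER = "me@example.org"            # placeholder account
    FOLDER = "Lists/python-ideas"      # placeholder target folder

    with imaplib.IMAP4_SSL(HOST) as imap:
        imap.login(USER, "app-password")   # placeholder credential
        imap.create(FOLDER)                # server answers 'NO' if it exists; ignored here
        imap.select("INBOX")
        # Find messages carrying the list's List-Id header.
        typ, data = imap.search(None, '(HEADER List-Id "python-ideas")')
        nums = data[0].split()
        if nums:
            msg_set = b",".join(nums).decode()
            imap.copy(msg_set, FOLDER)                  # copy into the list folder
            imap.store(msg_set, "+FLAGS", "\\Deleted")  # then remove from INBOX
            imap.expunge()
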
From mikhailwas at gmail.com Thu Sep 20 13:55:16 2018 From: mikhailwas at gmail.com (Mikhail V) Date: Thu, 20 Sep 2018 20:55:16 +0300 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: <20180920082055.GA98218@cskk.homeip.net> References: <20180920082055.GA98218@cskk.homeip.net> Message-ID: On Thu, Sep 20, 2018 at 11:21 AM Cameron Simpson wrote: > > On 20Sep2018 10:16, Chris Barker - NOAA Federal wrote: > >Let's just keep it on email -- I, at least, find i never participate in any > >other type of discussion forum regularly. > > As do I. Email comes to me. Forums, leaving aside their ergonomic horrors > (subjective), require a visit. So you are ok with 100 emails / day, like it happened when inline assignment discussion erupted? I think there are forum systems which allow you to post by email so it is possible to get the same effect as with mailing list, if you really want. I think most people want the ability to choose what topic they want to receive notification and its not possible. As for ergonomics - it depends on forum software and design. If I use some site frequently and it has bad layout/colors/fonts, then I use Stylish plugin to customize the CSS. Therefore I'd prefer forum with minimalistic CSS to easily customize the look. OTOH if the mailing software has bad ergonomics, I can't do much with that. Or if people post a word and leave 5 pages quote below or messed up formatting - I can't do anything with that. On a good forum systems such things are less probable = less annoyance in general. I see 2 major problems: 1. The mentioned mass mail delivery 2. PEPs and discussion browsing is far from effective - I'd like a better way to browse PEPs - for example filtering by topics, eg. "syntax", "module X", by their status, etc, and of course discoverable relevant discussion. Systems used in Stackoverflow, Github already offer these features. I personally would like Stackoverflow-like format for presenting PEPs + discussion below, so everybody can easily browse PEPs and related info in one place. Mikhail From chris.barker at noaa.gov Thu Sep 20 14:04:05 2018 From: chris.barker at noaa.gov (Chris Barker) Date: Thu, 20 Sep 2018 20:04:05 +0200 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: <23459.51662.646816.743851@turnbull.sk.tsukuba.ac.jp> References: <017BFC81-310A-4AAE-A1E8-D8AE4405034C@gmail.com> <23459.51662.646816.743851@turnbull.sk.tsukuba.ac.jp> Message-ID: On Thu, Sep 20, 2018 at 6:24 PM, Stephen J. Turnbull < turnbull.stephen.fw at u.tsukuba.ac.jp> wrote: > > And that's exactly what a mute on replies does. Most people will just > give up, which is appropriate. People who have (what they think is) a > good reason to continue can start a new thread with a link to the old > one. Hardly a prohibitive barrier, if you're willing to risk banning. > > I think this "feature" will do what people generally think it does: > provide a strong signal to stop the discussion, and back that up with > a fail-safe (if you *do* hit reply, it won't work). > Hmm -- I don't suppose Mailman has a way to filter out threads, does it? If not, maybe we could add that -- might work well in cases like this. -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chris.barker at noaa.gov Thu Sep 20 14:10:27 2018 From: chris.barker at noaa.gov (Chris Barker) Date: Thu, 20 Sep 2018 20:10:27 +0200 Subject: [Python-ideas] Moving to another forum system where In-Reply-To: References: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> <23459.25799.174825.591496@turnbull.sk.tsukuba.ac.jp> <93AF02F1-CE05-4D91-B25C-C0D6DC1FC73D@killingar.net> <66882BC6-4442-46F3-8547-23EF85237E13@killingar.net> Message-ID: A point here: any proposal that is an actual proposal, rather than a idea that needs fleshing out, can benefit from being written down in a single document. It does NOT have to be an official PEP in order for that to happen. If you are advocating something, then write it down, post it gitHbu or some such, and point people to it -- that's it. Using this list alone (or any forum-like system) is not a good way to work out the details of the proposal -- if you have a written out version, then folks can debate what's actually in the proposal, and not the various already discussed and rejected idea that float around in multiple discussion threads... -CHB On Thu, Sep 20, 2018 at 7:46 PM, Michael Selik wrote: > On Thu, Sep 20, 2018 at 10:25 AM Anders Hovm?ller > wrote: > > >>> Not for drafting, but for submitting. > > >> > > >> Can you quote pep1? I think you?re wrong. > > > > > > I can't remember if I pulled this quote previously (that's one of the > > > troubles with emails): "Following a discussion on python-ideas, the > > > proposal should be submitted as a draft PEP ..." > > > > > > Could you clarify what you think is inaccurate in the previous > statements? > > > > It later states this is just to avoid submitting bad ideas. It?s not > actually a requirement but a (supposed) kindness. > > Some regulations are de jure, others are de facto. > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Thu Sep 20 14:56:05 2018 From: brett at python.org (Brett Cannon) Date: Thu, 20 Sep 2018 11:56:05 -0700 Subject: [Python-ideas] CoC violation (was: Retire or reword the "Beautiful is better than ugly" Zen clause) In-Reply-To: References: <20180915193848.749c1e03@fsol> Message-ID: The below email was reported to the PSF board for code of conduct violations and then passed on to the conduct working group to decide on an appropriate response. Based on the WG's recommendation and after discussing it with Titus, the decision has been made to ban Jacco from python-ideas. Trivializing assault, using the n-word, and making inappropriate comments about someone's mental stability are all uncalled for and entirely unnecessary to carry on a reasonable discourse of conversation that remains welcoming to others. On Mon, 17 Sep 2018 at 00:18 Jacco van Dorp wrote: > Op zo 16 sep. 2018 om 05:40 schreef Franklin? Lee < > leewangzhong+python at gmail.com>: > >> I am very disappointed with the responses to this thread. We have >> mockery, dismissiveness, and even insinuations about OP's >> psychological health. 
Whether or not OP is a troll, and whether or not >> OP's idea has merit, that kind of response is unnecessary and >> unhelpful. > > > Sure, I'll take your bait. > > >> Jacco: >> - This is completely disrespectful and way over the line. Don't try to >> make a psychological evaluation from two emails, especially when it's >> just someone having an idea you don't like. >> """However, if merely the word ugly being on a page can be >> "harmful", what you really need is professional help, not a change to >> Python. Because there's obviously been some things in your past you >> need to work through.""" >> > > Is it, though ? Even more because in order for it to apply to any one > person's aesthetics, you need to pull it out of context first. You need to > be looking for it. Being triggered by a word this simple is not exactly a > sign of mental stability. I know a girl who's been raped more than she can > count - but the word doesn't trigger her like this(only makes her want to > beat up rapists). If people can do that, then surely a playground insult > wont reduce you to tears, right ? > > >> - Mockery. >> """If we have to ban "Ugly" for american sensitivities, then >> perhaps we need to ban a number of others for china's sensitivities. >> Where will it end ?""" >> > > Well, on the internet, the word "nigger" is already basically banned for > american sensibilities, while the version in dutch, my language, is > "neger", which doesn't really have any racist connotation, probably because > the amount of slaves that have ever been in what's currently the > netherlands, has been negligible. However, it's use is effectively banned > because some other culture considers it offensive to use. Why should your > culture be my censorship ? And it's no coincidence I used china there - > it's notorious for it's censorship. If merely labeling a word as > "offensive" is sufficient to ban it, I daresay they'd mark a whole lot more > words as offensive. And why would their opinion be any less valid than > yours ? > > Don't think you're special - you're not. If you want to give yourself the > power to ban words for offensive, you're giving that same power to > everyone. And since offensive is subjective, it means anybody could ban any > word, since you couldn't tell the difference between real or fake offense. > > Therefore, it is a disastrous idea and I'll predict the end of Python if > we go down that route. > > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjreedy at udel.edu Thu Sep 20 14:59:17 2018 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 20 Sep 2018 14:59:17 -0400 Subject: [Python-ideas] Moving to another forum system where In-Reply-To: <7419dc94-2e99-2d2b-facc-73dd4343932f@polak.es> References: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> <17490e47-9562-1308-4c51-a007ef4162d7@polak.es> <0c2401d4503b$5e509960$1af1cc20$@sdamon.com> <7419dc94-2e99-2d2b-facc-73dd4343932f@polak.es> Message-ID: On 9/20/2018 3:38 AM, Hans Polak wrote: > I don?t think its unreasonable to point out that the title of this > thread is "Moving to another forum". If you want to contribute Python > Ideas you *have to* subscribe to the mailing list. 
Or you can point your mail/news reader to news.gmane.org and 'subscribe' to gmane.comp.python.ideas, which keeps everything in this list in a separate folder and only downloads messages one wants to read. -- Terry Jan Reedy From marko.ristin at gmail.com Thu Sep 20 16:52:26 2018 From: marko.ristin at gmail.com (Marko Ristin-Kaufmann) Date: Thu, 20 Sep 2018 22:52:26 +0200 Subject: [Python-ideas] Pre-conditions and post-conditions In-Reply-To: References: <140891b8-3aef-0991-9421-7479e6a63eb6@gmail.com> Message-ID: Hi, Again a brief update. * icontract supports now static and class methods (thanks to my colleague Adam Radomski) which came very handy when defining a group of functions as an interface *via* an abstract (stateless) class. The implementors then need to all satisfy the contracts without needing to re-write them. You could implement the same behavior with *_impl or _* ("protected") methods where public methods would add the contracts as asserts, but we find the contracts-as-decorators more elegant (N functions instead of 2*N; see the snippet below). * We implemented a linter to statically check that the contract arguments are defined correctly. It is available as a separate Pypi package pyicontract-lint (https://github.com/Parquery/pyicontract-lint/). Next step will be to use asteroid to infer that the return type of the condition function is boolean. Does it make sense to include PEX in the release on github? * We plan to implement a sphinx plugin so that contracts can be readily visible in the documentation. Is there any guideline or standard/preferred approach how you would expect this plugin to be implemented? My colleagues and I don't have any experience with sphinx plugins, so any guidance is very welcome. class Component(abc.ABC, icontract.DBC): """Initialize a single component.""" @staticmethod @abc.abstractmethod def user() -> str: """ Get the user name. :return: user which executes this component. """ pass @staticmethod @abc.abstractmethod @icontract.post(lambda result: result in groups()) def primary_group() -> str: """ Get the primary group. :return: primary group of this component """ pass @staticmethod @abc.abstractmethod @icontract.post(lambda result: result.issubset(groups())) def secondary_groups() -> Set[str]: """ Get the secondary groups. :return: list of secondary groups """ pass @staticmethod @abc.abstractmethod @icontract.post(lambda result: all(not pth.is_absolute() for pth in result)) def bin_paths(config: mapried.config.Config) -> List[pathlib.Path]: """ Get list of binary paths used by this component. :param config: of the instance :return: list of paths to binaries used by this component """ pass @staticmethod @abc.abstractmethod @icontract.post(lambda result: all(not pth.is_absolute() for pth in result)) def py_paths(config: mapried.config.Config) -> List[pathlib.Path]: """ Get list of py paths used by this component. :param config: of the instance :return: list of paths to python executables used by this component """ pass @staticmethod @abc.abstractmethod @icontract.post(lambda result: all(not pth.is_absolute() for pth in result)) def dirs(config: mapried.config.Config) -> List[pathlib.Path]: """ Get directories used by this component. :param config: of the instance :return: list of paths to directories used by this component """ pass On Sat, 15 Sep 2018 at 22:14, Marko Ristin-Kaufmann wrote: > Hi David Maertz and Michael Lee, > > Thank you for raising the points. Please let me respond to your comments > in separation. 
Please let me know if I missed or misunderstood anything. > > *Assertions versus contracts.* David wrote: > >> I'm afraid that in reading the examples provided it is difficulties for >> me not simply to think that EVERY SINGLE ONE of them would be FAR easier to >> read if it were an `assert` instead. >> > > I think there are two misunderstandings on the role of the contracts. > First, they are part of the function signature, and not of the > implementation. In contrast, the assertions are part of the implementation > and are completely obscured in the signature. To see the contracts of a > function or a class written as assertions, you need to visually inspect the > implementation. The contracts are instead engraved in the signature and > immediately visible. For example, you can test the distinction by pressing > Ctrl + q in Pycharm. > > Second, assertions are only suitable for preconditions. Postconditions are > practically unmaintainable as assertions as soon as you have multiple early > returns in a function. The invariants implemented as assertions are always > unmaintainable in practice (except for very, very small classes) -- you > need to inspect each function of the class and all their return statements > and manually add assertions for each invariant. Removing or changing > invariants manually is totally impractical in my view. > > *Efficiency and Evidency. *David wrote: > >> The API of the library is a bit noisy, but I think the obstacle it's more >> in the higher level design for me. Adding many layers of expensive runtime >> checks and many lines of code in order to assure simple predicates that a >> glance at the code or unit tests would do better seems wasteful. > > > I'm not very sure what you mean by expensive runtime checks -- every > single contract can be disabled at any point. Once a contract is disabled, > there is literally no runtime computational cost incurred. The complexity > of a contract during testing is also exactly the same as if you wrote it in > the unit test. There is a constant overhead due to the extra function call > to check the condition, but there's no more time complexity to it. The > overhead of an additional function call is negligible in most practical > test cases. > > When you say "a glance at the code", this implies to me that you referring > to your own code and not to legacy code. In my experience, even simple > predicates are often not obvious to see in other people's code as one might > think (*e.g. *I had to struggle with even most simple ones like whether > the result ends in a newline or not -- often having to actually run the > code to check experimentally what happens with different inputs). > Postconditions prove very useful in such situations: they let us know that > whenever a function returns, the result must satisfy its postconditions. > They are formal and obvious to read in the function signature, and hence > spare us the need to parse the function's implementation or run it. > > Contracts in the unit tests. > >> The API of the library is a bit noisy, but I think the obstacle it's more >> in the higher level design for me. Adding many layers of expensive runtime >> checks and many lines of code in order to assure simple predicates that a >> glance at the code or *unit tests would do better* seems wasteful. >> > (emphasis mine) > > Defining contracts in a unit test is, as I already mentioned in my > previous message, problematic due to two reasons. 
First, the contract > resides in a place far away from the function definition which might make > it hard to find and maintain. Second, defining the contract in the unit > test makes it impossible to put the contract in the production or test it > in a call from a different function. In contrast, introducing the contract > as a decorator works perfectly fine in all the three above-mentioned cases > (smoke unit test, production, deeper testing). > > *Library. *Michael wrote: > >> I just want to point out that you don't need permission from anybody to >> start a library. I think developing and popularizing a contracts library is >> a reasonable goal -- but that's something you can start doing at any time >> without waiting for consensus. > > > As a matter of fact, I already implemented the library which covers most > of the design-by-contract including the inheritance of the contracts. (The > only missing parts are retrieval of "old" values in postconditions and loop > invariants.) It's published on pypi as "icontract" package (the website is > https://github.com/Parquery/icontract/). I'd like to gauge the interest > before I/we even try to make a proposal to make it into the standard > library. > > The discussions in this thread are an immense help for me to crystallize > the points that would need to be addressed explicitly in such a proposal. > If the proposal never comes about, it would at least flow into the > documentation of the library and help me identify and explain better the > important points. > > *Observation of contracts. *Michael wrote: > >> Your contracts are only checked when the function is evaluated, so you'd >> still need to write that unit test that confirms the function actually >> observes the contract. I don't think you necessarily get to reduce the >> number of tests you'd need to write. > > > Assuming that a contracts library is working correctly, there is no need > to test whether a contract is observed or not -- you assume it is. The same > applies to any testing library -- otherwise, you would have to test the > tester, and so on *ad infinitum.* > > You still need to evaluate the function during testing, of course. But you > don't need to document the contracts in your tests nor check that the > postconditions are enforced -- you assume that they hold. For example, if > you introduce a postcondition that the result of a function ends in a > newline, there is no point of making a unit test, passing it some value and > then checking that the result value ends in a newline in the test. > Normally, it is sufficient to smoke-test the function. For example, you > write a smoke unit test that gives a range of inputs to the function by > using hypothesis library and let the postconditions be automatically > checked. You can view each postcondition as an additional test case in this > scenario -- but one that is also embedded in the function signature and > also applicable in production. > > Not all tests can be written like this, of course. Dealing with a complex > function involves writing testing logic which is too complex to fit in > postconditions. Contracts are not a panacea, but they absolute us from > implementing trivial testing logic while keeping the important bits of the > documentation close to the function and allowing for deeper tests. > > *Accurate contracts. *Michael wrote: > >> There's also no guarantee that your contracts will necessarily be >> *accurate*. 
It's entirely possible that your preconditions/postconditions >> might hold for every test case you can think of, but end up failing when >> running in production due to some edge case that you missed. >> > > Unfortunately, there is no practical exit from this dilemma -- and it > applies all the same for the tests. Who guarantees that the testing logic > of the unit tests are correct? Unless you can formally prove that the code > does what it should, there is no way around it. Whether you write contracts > in the tests or in the decorators, it makes no difference to accuracy. > > If you missed to test an edge case, well, you missed it :). The > design-by-contract does not make the code bug-free, but makes the bugs *much > less likely* and *easier *to detect *early*. In practice, if there is a > complex contract, I encapsulate its complex parts in separate functions > (often with their own contracts), test these functions in separation and > then, once the tests pass and I'm confident about their correctness, put > them into contracts. > > (And if you decide to disable those pre/post conditions to avoid the >> efficiency hit, you're back to square zero.) >> > > In practice, we at Parquery AG let the critical contracts to run in > production to ensure that the program blows up before it exercises > undefined behavior in a critical situation. The informative violation > errors of the icontract library help us to trace the bugs more easily since > the relevant values are part of the error log. > > However, if some of the contracts are too inefficient to check in > production, alas you have to turn them off and they can't be checked since > they are inefficient. This seems like a tautology to me -- could you please > clarify a bit what you meant? If a check is critical and inefficient at the > same time then your problem is unsolvable (or at least ill-defined); > contracts as well as any other approach can not solve it. > > *Ergonimical assertions. *Michael wrote: > >> Or I guess to put it another way -- it seems what all of these contract >> libraries are doing is basically adding syntax to try and make adding >> asserts in various places more ergonomic, and not much else. I agree those >> kinds of libraries can be useful, but I don't think they're necessarily >> useful enough to be part of the standard library or to be a technique >> Python programmers should automatically use by default. > > > From the point of view of the *behavior, *that is exactly the case. The > contracts (*e.g. *as function decorators) make postconditions and > invariants possible in practice. As I already noted above, postconditions > are very hard and invariants almost impossible to maintain manually without > the contracts. This is even more so when contracts are inherited in a class > hierarchy. > > Please do not underestimate another aspect of the contracts, namely the > value of contracts as verifiable documentation. Please note that the only > alternative that I observe in practice without design-by-contract is to > write contracts in docstrings in *natural language*. Most often, they are > just assumed, so the next programmer burns her fingers expecting the > contracts to hold when they actually differ from the class or function > description, but nobody bothered to update the docstrings (which is a > common pitfall in any code base over a longer period of time). 
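A short illustration of the point above about encapsulating the complex parts of a contract: the sketch below follows the icontract API as it is used in this thread (icontract.pre / icontract.post with lambda conditions); the helper predicate and the normalize function are invented for the example and are not part of icontract or pypackagery. The complex part of the condition lives in a plain function that can be unit-tested on its own, and the contract itself stays short:

    import icontract

    def ends_in_single_newline(text: str) -> bool:
        # The "complex" part of the contract, testable in isolation.
        return text.endswith("\n") and not text.endswith("\n\n")

    @icontract.pre(lambda text: text is not None)
    @icontract.post(lambda result: ends_in_single_newline(result))
    def normalize(text: str) -> str:
        """Strip trailing whitespace but keep exactly one final newline."""
        return text.rstrip() + "\n"

Once ends_in_single_newline has its own tests and is trusted, using it inside the postcondition adds no new testing burden, and the signature still documents the guarantee.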
> > *Automatic generation of tests.* Michael wrote: > >> What might be interesting is somebody wrote a library that does something >> more then just adding asserts. For example, one idea might be to try >> hooking up a contracts library to hypothesis (or any other library that >> does quickcheck-style testing). That might be a good way of partially >> addressing the problems up above -- you write out your invariants, and a >> testing library extracts that information and uses it to automatically >> synthesize interesting test cases. > > > This is the final goal and my main motivation to push for > design-by-contract in Python :). There is a whole research community that > tries to come up with automatic test generations, and contracts are of > great utility there. Mind that generating the tests based on contracts is > not trivial: hypothesis just picks elements for each input independently > which is a much easier problem. However, preconditions can define how the > arguments are *related*. Assume a function takes two numbers as > arguments, x and y. If the precondition is y < x < (y + x) * 10, it is not > trivial even for this simple example to come up with concrete samples of x > and y unless you simply brute-force the problem by densely sampling all the > numbers and checking the precondition. > > I see a chicken-and-egg problem here. If design-by-contract is not widely > adopted, there will also be fewer or no libraries for automatic test > generation. Honestly, I have absolutely no idea how you could approach > automatic generation of test cases without contracts (in one form or the > other). For example, how could you automatically mock a class without > knowing its invariants? > > Since generating test cases for functions with non-trivial contracts is > hard (and involves collaboration of many people), I don't expect anybody to > start even thinking about it if the tool can only be applied to almost > anywhere due to lack of contracts. Formal proofs and static analysis are > even harder beasts to tame -- and I'd say the argument holds true for them > even more. > > David and Michael, thank you again for your comments! I welcome very much > your opinion and any follow-ups as well as from other participants on this > mail list. > > Cheers, > Marko > > On Sat, 15 Sep 2018 at 10:42, Michael Lee > wrote: > >> I just want to point out that you don't need permission from anybody to >> start a library. I think developing and popularizing a contracts library is >> a reasonable goal -- but that's something you can start doing at any time >> without waiting for consensus. >> >> And if it gets popular enough, maybe it'll be added to the standard >> library in some form. That's what happened with attrs, iirc -- it got >> fairly popular and demonstrated there was an unfilled niche, and so Python >> acquired dataclasses.. >> >> >> The contracts make merely tests obsolete that test that the function or >>> class actually observes the contracts. >>> >> >> Is this actually the case? Your contracts are only checked when the >> function is evaluated, so you'd still need to write that unit test that >> confirms the function actually observes the contract. I don't think you >> necessarily get to reduce the number of tests you'd need to write. >> >> >> Please let me know what points *do not *convince you that Python needs >>> contracts >>> >> >> While I agree that contracts are a useful tool, I don't think they're >> going to be necessarily useful for *all* Python programmers. 
For example, >> contracts aren't particularly useful if you're writing fairly >> straightforward code with relatively simple invariants. >> >> I'm also not convinced that libraries where contracts are checked >> specifically *at runtime* actually give you that much added power and >> impact. For example, you still need to write a decent number of unit tests >> to make sure your contracts are being upheld (unless you plan on checking >> this by just deploying your code and letting it run, which seems >> suboptimal). There's also no guarantee that your contracts will necessarily >> be *accurate*. It's entirely possible that your >> preconditions/postconditions might hold for every test case you can think >> of, but end up failing when running in production due to some edge case >> that you missed. (And if you decide to disable those pre/post conditions to >> avoid the efficiency hit, you're back to square zero.) >> >> Or I guess to put it another way -- it seems what all of these contract >> libraries are doing is basically adding syntax to try and make adding >> asserts in various places more ergonomic, and not much else. I agree those >> kinds of libraries can be useful, but I don't think they're necessarily >> useful enough to be part of the standard library or to be a technique >> Python programmers should automatically use by default. >> >> What might be interesting is somebody wrote a library that does something >> more then just adding asserts. For example, one idea might be to try >> hooking up a contracts library to hypothesis (or any other library that >> does quickcheck-style testing). That might be a good way of partially >> addressing the problems up above -- you write out your invariants, and a >> testing library extracts that information and uses it to automatically >> synthesize interesting test cases. >> >> (And of course, what would be very cool is if the contracts could be >> verified statically like you can do in languages like dafny -- that way, >> you genuinely would be able to avoid writing many kinds of tests and could >> have confidence your contracts are upheld. But I understanding implementing >> such verifiers are extremely challenging and would probably have too-steep >> of a learning curve to be usable by most people anyways.) >> >> -- Michael >> >> >> >> On Fri, Sep 14, 2018 at 11:51 PM, Marko Ristin-Kaufmann < >> marko.ristin at gmail.com> wrote: >> >>> Hi, >>> Let me make a couple of practical examples from the work-in-progress ( >>> https://github.com/Parquery/pypackagery, branch >>> mristin/initial-version) to illustrate again the usefulness of the >>> contracts and why they are, in my opinion, superior to assertions and unit >>> tests. >>> >>> What follows is a list of function signatures decorated with contracts >>> from pypackagery library preceded by a human-readable description of the >>> contracts. >>> >>> The invariants tell us what format to expect from the related string >>> properties. >>> >>> @icontract.inv(lambda self: self.name.strip() == self.name) >>> @icontract.inv(lambda self: self.line.endswith("\n")) >>> class Requirement: >>> """Represent a requirement in requirements.txt.""" >>> >>> def __init__(self, name: str, line: str) -> None: >>> """ >>> Initialize. >>> >>> :param name: package name >>> :param line: line in the requirements.txt file >>> """ >>> ... >>> >>> The postcondition tells us that the resulting map keys the values on >>> their name property. 
>>> >>> @icontract.post(lambda result: all(val.name == key for key, val in result.items())) >>> def parse_requirements(text: str, filename: str = '') -> Mapping[str, Requirement]: >>> """ >>> Parse requirements file and return package name -> package requirement as in requirements.txt >>> >>> :param text: content of the ``requirements.txt`` >>> :param filename: where we got the ``requirements.txt`` from (URL or path) >>> :return: name of the requirement (*i.e.* pip package) -> parsed requirement >>> """ >>> ... >>> >>> >>> The postcondition ensures that the resulting list contains only unique >>> elements. Mind that if you returned a set, the order would have been lost. >>> >>> @icontract.post(lambda result: len(result) == len(set(result)), enabled=icontract.SLOW) >>> def missing_requirements(module_to_requirement: Mapping[str, str], >>> requirements: Mapping[str, Requirement]) -> List[str]: >>> """ >>> List requirements from module_to_requirement missing in the ``requirements``. >>> >>> :param module_to_requirement: parsed ``module_to_requiremnt.tsv`` >>> :param requirements: parsed ``requirements.txt`` >>> :return: list of requirement names >>> """ >>> ... >>> >>> Here is a bit more complex example. >>> - The precondition A requires that all the supplied relative paths >>> (rel_paths) are indeed relative (as opposed to absolute). >>> - The postcondition B ensures that the initial set of paths (given in >>> rel_paths) is included in the results. >>> - The postcondition C ensures that the requirements in the results are >>> the subset of the given requirements. >>> - The precondition D requires that there are no missing requirements (*i.e. >>> *that each requirement in the given module_to_requirement is also >>> defined in the given requirements). >>> >>> @icontract.pre(lambda rel_paths: all(rel_pth.root == "" for rel_pth in rel_paths)) # A >>> @icontract.post( >>> lambda rel_paths, result: all(pth in result.rel_paths for pth in rel_paths), >>> enabled=icontract.SLOW, >>> description="Initial relative paths included") # B >>> @icontract.post( >>> lambda requirements, result: all(req.name in requirements for req in result.requirements), >>> enabled=icontract.SLOW) # C >>> @icontract.pre( >>> lambda requirements, module_to_requirement: missing_requirements(module_to_requirement, requirements) == [], >>> enabled=icontract.SLOW) # D >>> def collect_dependency_graph(root_dir: pathlib.Path, rel_paths: List[pathlib.Path], >>> requirements: Mapping[str, Requirement], >>> module_to_requirement: Mapping[str, str]) -> Package: >>> >>> """ >>> Collect the dependency graph of the initial set of python files from the code base. >>> >>> :param root_dir: root directory of the codebase such as "/home/marko/workspace/pqry/production/src/py" >>> :param rel_paths: initial set of python files that we want to package. These paths are relative to root_dir. >>> :param requirements: requirements of the whole code base, mapped by package name >>> :param module_to_requirement: module to requirement correspondence of the whole code base >>> :return: resolved depedendency graph including the given initial relative paths, >>> """ >>> >>> I hope these examples convince you (at least a little bit :-)) that >>> contracts are easier and clearer to write than asserts. As noted before in >>> this thread, you can have the same *behavior* with asserts as long as >>> you don't need to inherit the contracts. 
But the contract decorators make >>> it very explicit what conditions should hold *without* having to look >>> into the implementation. Moreover, it is very hard to ensure the >>> postconditions with asserts as soon as you have a complex control flow since >>> you would need to duplicate the assert at every return statement. (You >>> could implement a context manager that ensures the postconditions, but a >>> context manager is not more readable than decorators and you have to >>> duplicate them as documentation in the docstring). >>> >>> In my view, contracts are also superior to many kinds of tests. As the >>> contracts are *always* enforced, they also enforce the correctness >>> throughout the program execution whereas the unit tests and doctests only >>> cover a list of selected cases. Furthermore, writing the contracts in these >>> examples as doctests or unit tests would escape the attention of most less >>> experienced programmers which are not used to read unit tests as >>> documentation. Finally, these unit tests would be much harder to read than >>> the decorators (*e.g.*, the unit test would supply invalid arguments >>> and then check for ValueError which is already a much more convoluted piece >>> of code than the preconditions and postconditions as decorators. Such >>> testing code also lives in a file separate from the original implementation >>> making it much harder to locate and maintain). >>> >>> Mind that the contracts *do not* *replace* the unit tests or the >>> doctests. The contracts make merely tests obsolete that test that the >>> function or class actually observes the contracts. Design-by-contract helps >>> you skip those tests and focus on the more complex ones that test the >>> behavior. Another positive effect of the contracts is that they make your >>> tests deeper: if you specified the contracts throughout the code base, a >>> test of a function that calls other functions in its implementation will >>> also make sure that all the contracts of that other functions hold. This >>> can be difficult to implement with standard unit test frameworks. >>> >>> Another aspect of the design-by-contract, which is IMO ignored quite >>> often, is the educational one. Contracts force the programmer to actually >>> sit down and think *formally* about the inputs and the outputs >>> (hopefully?) *before* she starts to implement a function. Since many >>> schools use Python to teach programming (especially at high school level), >>> I imagine writing contracts of a function to be a very good exercise in >>> formal thinking for the students. >>> >>> Please let me know what points *do not *convince you that Python needs >>> contracts (in whatever form -- be it as a standard library, be it as a >>> language construct, be it as a widely adopted and collectively maintained >>> third-party library). I would be very glad to address these points in my >>> next message(s). >>> >>> Cheers, >>> Marko >>> >>> _______________________________________________ >>> Python-ideas mailing list >>> Python-ideas at python.org >>> https://mail.python.org/mailman/listinfo/python-ideas >>> Code of Conduct: http://python.org/psf/codeofconduct/ >>> >>> >> -------------- next part -------------- An HTML attachment was scrubbed... 
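One more hedged sketch to illustrate the "duplicate the assert at every return statement" problem discussed above; describe() and its newline condition are invented for the example and do not come from icontract or pypackagery:

    import icontract

    # Assert-based: the postcondition must be restated before every return.
    def describe_with_asserts(count: int) -> str:
        if count == 0:
            result = "empty\n"
            assert result.endswith("\n")
            return result
        result = "%d items\n" % count
        assert result.endswith("\n")
        return result

    # Contract-based: the postcondition is stated once, next to the signature.
    @icontract.post(lambda result: result.endswith("\n"))
    def describe(count: int) -> str:
        if count == 0:
            return "empty\n"
        return "%d items\n" % count

    # A smoke test in the spirit described earlier: hypothesis supplies a
    # range of inputs, the postcondition decorator does the checking.
    from hypothesis import given, strategies as st

    @given(st.integers(min_value=0, max_value=10**6))
    def test_describe_smoke(count):
        describe(count)

Adding a second early return to describe_with_asserts means copying the assert again; adding one to describe() costs nothing.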
URL: From greg.ewing at canterbury.ac.nz Thu Sep 20 18:20:13 2018 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 21 Sep 2018 10:20:13 +1200 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: References: <78a98351-b4e2-0096-2554-ac00238a1b30@mgmiller.net> <23456.47873.934149.250940@turnbull.sk.tsukuba.ac.jp> <20180920120918.6ij7dzbp3pefcrb3@phdru.name> Message-ID: <5BA41D1D.7020007@canterbury.ac.nz> Mark E. Haase wrote: > I would also appreciate a +1 button. Many e-mails to this list do > nothing more than say +1 or -1 without much added discussion. A tiny bit of discussion is still better than none at all. And even if there's no discussion, there's a name attached to the message, which makes it more personal and meaningful than a "+1" counter getting incremented somewhere. Counting only makes sense if the counts are going to be treated as votes, and we don't do that. -- Greg From greg.ewing at canterbury.ac.nz Thu Sep 20 18:41:44 2018 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 21 Sep 2018 10:41:44 +1200 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: References: <78a98351-b4e2-0096-2554-ac00238a1b30@mgmiller.net> <23456.47873.934149.250940@turnbull.sk.tsukuba.ac.jp> <20180920120918.6ij7dzbp3pefcrb3@phdru.name> Message-ID: <5BA42228.6030808@canterbury.ac.nz> Mark E. Haase wrote: > Many e-mails to this list do > nothing more than say +1 or -1 without much added discussion. Are there really all that many? They seem relatively rare to me. Certainly not enough to annoy me. -- Greg From klahnakoski at mozilla.com Thu Sep 20 18:52:00 2018 From: klahnakoski at mozilla.com (Kyle Lahnakoski) Date: Thu, 20 Sep 2018 18:52:00 -0400 Subject: [Python-ideas] Asynchronous exception handling around with/try statement borders In-Reply-To: References: Message-ID: <2cbe6bbe-eae0-e69d-590a-c28e05de523b@mozilla.com> On 2017-06-28 07:40, Erik Bray wrote: > Hi folks, Since the java.lang.Thread.stop() "debacle", it has been obvious that stopping code to run other code has been dangerous. KeyboardInterrupt (any interrupt really) is dangerous. Now, we can probably code a solution, but how about we remove the danger: I suggest we remove interrupts from Python, and make them act more like java.lang.Thread.interrupt(); setting a thread local bit to indicate an interrupt has occurred. Then we can write explicit code to check for that bit, and raise an exception in a safe place if we wish. This can be done with Python code, or convenient places in Python's C source itself. I imagine it would be easier to whitelist where interrupts can raise exceptions, rather than blacklisting where they should not. In the meantime, my solution is to spawn new threads to do the work, while the main thread has the sole purpose to sleep, and set the "please stop" flag upon interrupt. From rosuav at gmail.com Thu Sep 20 18:57:53 2018 From: rosuav at gmail.com (Chris Angelico) Date: Fri, 21 Sep 2018 08:57:53 +1000 Subject: [Python-ideas] Asynchronous exception handling around with/try statement borders In-Reply-To: <2cbe6bbe-eae0-e69d-590a-c28e05de523b@mozilla.com> References: <2cbe6bbe-eae0-e69d-590a-c28e05de523b@mozilla.com> Message-ID: On Fri, Sep 21, 2018 at 8:52 AM Kyle Lahnakoski wrote: > Since the java.lang.Thread.stop() "debacle", it has been obvious that > stopping code to run other code has been dangerous. KeyboardInterrupt > (any interrupt really) is dangerous.
Now, we can probably code a > solution, but how about we remove the danger: > > I suggest we remove interrupts from Python, and make them act more like > java.lang.Thread.interrupt(); setting a thread local bit to indicate an > interrupt has occurred. Then we can write explicit code to check for > that bit, and raise an exception in a safe place if we wish. This can > be done with Python code, or convenient places in Python's C source > itself. I imagine it would be easier to whitelist where interrupts can > raise exceptions, rather than blacklisting where they should not. The time machine strikes again! https://docs.python.org/3/c-api/exceptions.html#signal-handling ChrisA From mike at selik.org Thu Sep 20 19:02:18 2018 From: mike at selik.org (Michael Selik) Date: Thu, 20 Sep 2018 16:02:18 -0700 Subject: [Python-ideas] Asynchronous exception handling around with/try statement borders In-Reply-To: <2cbe6bbe-eae0-e69d-590a-c28e05de523b@mozilla.com> References: <2cbe6bbe-eae0-e69d-590a-c28e05de523b@mozilla.com> Message-ID: On Thu, Sep 20, 2018, 3:52 PM Kyle Lahnakoski wrote: > KeyboardInterrupt (any interrupt really) is dangerous. Now, we can > probably code a solution, but how about we remove the danger > The other day I accidentally fork-bombed myself with Python os.fork in an infinite loop. Whoops. It seems to me that Python's design philosophy is to make the safe things beautiful and efficient, but not to remove the dangerous things. I'd be supportive of a proposal that makes threading safer without removing capabilities for those that want them. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg.ewing at canterbury.ac.nz Thu Sep 20 19:27:07 2018 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 21 Sep 2018 11:27:07 +1200 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: References: <20180920082055.GA98218@cskk.homeip.net> Message-ID: <5BA42CCB.4000001@canterbury.ac.nz> Mikhail V wrote: > I think there are forum systems which allow you to post by email so > it is possible to get the same effect as with mailing list, if you really want. I hope that, if any such change is made, a forum system is chosen that allows full participation via either email or news. Otherwise it will probably mean the end of my participation, because I don't have time to chase down and wrestle with multiple web forums every day. -- Greg From Richard at Damon-Family.org Thu Sep 20 22:39:40 2018 From: Richard at Damon-Family.org (Richard Damon) Date: Thu, 20 Sep 2018 22:39:40 -0400 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: References: <017BFC81-310A-4AAE-A1E8-D8AE4405034C@gmail.com> <23459.51662.646816.743851@turnbull.sk.tsukuba.ac.jp> Message-ID: <825ab5e8-0f9c-0dfa-c158-ff250604aa8c@Damon-Family.org> On 9/20/18 2:04 PM, Chris Barker via Python-ideas wrote: > > Hmm -- I don't suppose Mailman has a way to filter out threads, does > it? If not, maybe we could add that -- might work well in cases like this. > > -CHB Mailman can filter based on regular expression on anything in the headers of the email. Filtering on Subject does a pretty good job of 'Thread' filtering. You could also filter on In-Reply-To and References to get actual filtering on threads, but would need to list a lot of message-ids (especially for In-Reply-To) to block all replies to a long thread.
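To make the header-based filtering idea concrete from a subscriber's side, here is a small sketch that uses only the standard library; the mbox filename and the muted subject are made-up examples, and the Message-ID is simply reused from the thread above for illustration. This models filtering a local archive, not Mailman's own topic filters:

    import mailbox

    MUTED_SUBJECT = "Moving to another forum system"  # approximate, subject-based
    MUTED_ROOTS = {"<5557B592-78D7-4EAC-846C-99461B34123A@gmail.com>"}  # thread roots

    def wanted(msg):
        subject = msg.get("Subject", "")
        ancestry = msg.get("References", "") + " " + msg.get("In-Reply-To", "")
        if MUTED_SUBJECT.lower() in subject.lower():
            return False
        # True thread filtering: drop anything whose References/In-Reply-To
        # chain mentions a muted root message-id.
        return not any(root in ancestry for root in MUTED_ROOTS)

    for msg in mailbox.mbox("python-ideas.mbox"):
        if wanted(msg):
            print(msg.get("From", "?"), "-", msg.get("Subject", ""))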
-- Richard Damon From cs at cskk.id.au Thu Sep 20 22:43:44 2018 From: cs at cskk.id.au (Cameron Simpson) Date: Fri, 21 Sep 2018 12:43:44 +1000 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: References: Message-ID: <20180921024344.GA90585@cskk.homeip.net> On 20Sep2018 20:55, Mikhail V wrote: >On Thu, Sep 20, 2018 at 11:21 AM Cameron Simpson wrote: >> On 20Sep2018 10:16, Chris Barker - NOAA Federal >> wrote: >> >Let's just keep it on email -- I, at least, find i never participate in any >> >other type of discussion forum regularly. >> >> As do I. Email comes to me. Forums, leaving aside their ergonomic horrors >> (subjective), require a visit. > >So you are ok with 100 emails / day, like it happened when >inline assignment discussion erupted? A drop in a bucket to me. Since I autofile my email, such messages all land in my python folder. Since my mail reader threads, 100 messages on a single topic are easy to follow, or easy to delete/archive/defer if that particular discussion is not of interest to me. And I was interested in the inline assignment discussion. So yes, totally ok. >I think there are forum systems which allow you to post by email so >it is possible to get the same effect as with mailing list, if you really want. The point by point response, such as this one, is hard in a forum, generally. Qualification: in my deliberately limited experience. And few forums provide email mirroring/posting. Were I choosing the forum that would be an essential feature to me. >I think most people want the ability to choose what topic they want to >receive notification and its not possible. It is perfectly possible. I get hundreds of message by email every day, and arrange notifications only for a tiny subset of those. >As for ergonomics - it depends on forum software and design. If I use some >site frequently and it has bad layout/colors/fonts, then I use Stylish plugin to >customize the CSS. Therefore I'd prefer forum with minimalistic CSS >to easily customize the look. > >OTOH if the mailing software has bad ergonomics, I can't do much with that. You can switch clients. With email, there are many clients. Most forums provide only one: a single web interface. >Or if people post a word and leave 5 pages quote below or messed up formatting - >I can't do anything with that. >On a good forum systems such things are less probable = less annoyance >in general. > >I see 2 major problems: >1. The mentioned mass mail delivery >2. PEPs and discussion browsing is far from effective - I'd like a better >way to browse PEPs - for example filtering by topics, eg. "syntax", "module X", >by their status, etc, and of course discoverable relevant discussion. > >Systems used in Stackoverflow, Github already offer these features. >I personally would like Stackoverflow-like format for presenting PEPs >+ discussion >below, so everybody can easily browse PEPs and related info in one place. Clearly these work well for you. How well does that work offline? My laptop collects email continuously, and I visit the relevant folders on my own schedule. If that schedule is on a train with no internet, I'm fine. I can read. I can reply (the message will go out when I'm next online). A forum providing a _good_ email mirroring/posting service might make us both happy. 
Cheers, Cameron Simpson From arj.python at gmail.com Fri Sep 21 00:12:03 2018 From: arj.python at gmail.com (Abdur-Rahmaan Janhangeer) Date: Fri, 21 Sep 2018 08:12:03 +0400 Subject: [Python-ideas] Moving to another forum system where moderation is possible In-Reply-To: References: Message-ID: my closing comment on this thread : i back discourse, atwood is a nice guy, he believes in his product. just mobile, mobile usage is a must. Abdur-Rahmaan Janhangeer https://github.com/Abdur-rahmaanJ Mauritius -------------- next part -------------- An HTML attachment was scrubbed... URL: From jamtlu at gmail.com Thu Sep 20 21:45:10 2018 From: jamtlu at gmail.com (James Lu) Date: Thu, 20 Sep 2018 21:45:10 -0400 Subject: [Python-ideas] Moving to another forum system where Message-ID: <5557B592-78D7-4EAC-846C-99461B34123A@gmail.com> One of the reasons Guido left was the insane volume of emails he had to read on Python-ideas. > A tiny bit of discussion is still better than none at all. > And even if there's no discussion, there's a name attached > to the message, which makes it more personal and meaningful > than a "+1" counter getting incremented somewhere. > > Counting only makes sense if the counts are going to be > treated as votes, and we don't do that. I agree. I think this is good evidence in favor of using GitHub pull requests or GitHub issues- you can see exactly who +1?d a topic. GitHub also has moderation tools and the ability to delete comments that are irrelevant, and edit comments that are disrespectful. > I hope that, if any such change is made, a forum system is > chosen that allows full participation via either email or news. > Otherwise it will probably mean the end of my participation, > because I don't have time to chase down and wrestle with > multiple web forums every day. +1, everyone should be accommodated. I believe GitHub has direct email capability. If you watch the repository and have email notifications on, you can reply directly to an email and it will be sent as a reply. ? To solve the problem of tons of email for controversial decisions like :=, I don?t think GitHub issues would actually be the solution. The best solution would to have admins receive all the email, and broadcast a subset of the email sent, only broadcasting new arguments and new opinions. Admins can do this ?summary duty? every 12 hours on a rotating basis, where each admin takes turns doing summary duty. This solution would mean a slower iteration time for the conversation, but it would significantly lessen the deluge of email, and I think that would make it more bearable for people participating in the conversation. After all, once a proposal has been fleshed out, what kind of conversation needs more than say 30 rounds of constructive discussion- in that case, if people reply every 25 hours, the discussion would be done in a month. For transparency purposes, all of the email can be made received for approval can be published online. From jamtlu at gmail.com Thu Sep 20 21:56:30 2018 From: jamtlu at gmail.com (James Lu) Date: Thu, 20 Sep 2018 21:56:30 -0400 Subject: [Python-ideas] =?utf-8?q?JS=E2=80=99_governance_model_is_worth_i?= =?utf-8?q?nspecting?= Message-ID: JS? decisions are made by a body known as TC39, a fairly/very small group of JS implementers. First, JS has an easy and widely supported way to modify the language for yourself: Babel. Babel transpires your JS to older JS, which is then run. You can publish your language modification on the JS package manager, npm. 
When a feature is being considered for inclusion in mainline JS, the proposal must first gain a champion (represented by ?) that is a member of TC-39. The guidelines say that the proposal's features should already have found use in the community. Then it moves through three stages, and the champion must think the proposal is ready for the next stage before it can move on. I'm hazy on what the criterion for each of the three stages is. The fourth stage is approved. I believe the global TC39 committee meets regularly in person, and at those meetings, proposals can advance stages -- these meetings are frequent enough for the process to be fast, and slow enough that people can have the time to try out a feature before it becomes mainline JS. Meeting notes are made public. The language and its future features are discussed on ESDiscuss.org, which is surprisingly filled with quality and respectful discussion, largely from experts in the JavaScript language. I'm fairly hazy on the details, this is just the summary off the top of my head. -- I'm not saying this should be Python's governance model, just to keep JS' in mind. From rhodri at kynesim.co.uk Fri Sep 21 09:13:18 2018 From: rhodri at kynesim.co.uk (Rhodri James) Date: Fri, 21 Sep 2018 14:13:18 +0100 Subject: [Python-ideas] CoC violation In-Reply-To: References: <20180915193848.749c1e03@fsol> Message-ID: <44c35adf-24e1-72d3-c043-a496fa6fd7f4@kynesim.co.uk> On 20/09/18 19:56, Brett Cannon wrote: > Based on the WG's recommendation and after discussing it with Titus, the > decision has been made to ban Jacco from python-ideas. Trivializing > assault, using the n-word, and making inappropriate comments about > someone's mental stability are all uncalled for and entirely unnecessary to > carry on a reasonable discourse of conversation that remains welcoming to > others. Not a challenge to the ban in any way, but I feel the need to repeat what I said about banning words. The moment you create that taboo, you give the word power. That's the exact opposite of what you want to do. It's the intent with which the word is used that matters. I've heard all sorts of words used as insults -- "special", anyone? -- and many of the same words used innocently or affectionately. Banning bad or insulting behaviour is fine. Banning words is a bad path to go down. -- Rhodri James *-* Kynesim Ltd From flying-sheep at web.de Fri Sep 21 09:55:10 2018 From: flying-sheep at web.de (Philipp A.) Date: Fri, 21 Sep 2018 15:55:10 +0200 Subject: [Python-ideas] CoC violation In-Reply-To: <44c35adf-24e1-72d3-c043-a496fa6fd7f4@kynesim.co.uk> References: <20180915193848.749c1e03@fsol> <44c35adf-24e1-72d3-c043-a496fa6fd7f4@kynesim.co.uk> Message-ID: The main clause differentiating bad, weaponizable CoCs from good ones is "Assume good faith" Everything will be OK if good faith can reasonably be assumed (E.g. when someone uses a word which is only offensive based on context) On the other hand, e.g. obvious racial slurs never have a place on a discussion board about a programming language. How can one possibly say them in good faith? Rhodri James schrieb am Fr., 21. Sep. 2018 um 15:46 Uhr: > On 20/09/18 19:56, Brett Cannon wrote: > > Based on the WG's recommendation and after discussing it with Titus, the > > decision has been made to ban Jacco from python-ideas.
Trivializing > > assault, using the n-word, and making inappropriate comments about > > someone's mental stability are all uncalled for and entirely unnecessary > to > > carry on a reasonable discourse of conversation that remains welcoming to > > others. > > Not a challenge to the ban in any way, but I feel the need to repeat > what I said about banning words. The moment you create that taboo, you > give the word power. That's the exact opposite of what you want to do. > It's the intent with which the word is used that matters. I've heard > all sorts of words used as insults -- "special", anyone? -- and many of > the same words used innocently or affectionately. > > Banning bad or insulting behaviour is fine. Banning words is a bad path > to go down. > > -- > Rhodri James *-* Kynesim Ltd > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Fri Sep 21 10:44:11 2018 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 21 Sep 2018 16:44:11 +0200 Subject: [Python-ideas] "slur" vs "insult"? References: <20180915193848.749c1e03@fsol> <44c35adf-24e1-72d3-c043-a496fa6fd7f4@kynesim.co.uk> Message-ID: <20180921164411.2c26f346@fsol> Hi, For the record I was surprised to see the word "slur" pop up quite often recently, while I'd only heard "insult" before. I looked it up and it doesn't help that the French translation seems to be the same in both cases (it's "insulte"). Then I came upon this thread where someone pretty much asks the same question: https://www.reddit.com/r/EnglishLearning/comments/6bjgwq/slur_vs_insult/ and the comments there are interesting as to how complicated and difficult to grasp the cultural landscape of linguistic taboos really is. Regards Antoine. On Fri, 21 Sep 2018 15:55:10 +0200 "Philipp A." wrote: > The main clause differentiating bad, weaponizable CoCs from good ones is > > "Assume good faith" > > Everything will be OK if good faith can reasonably be assumed (E.g. when > someone uses a word which is only offensive based on context) > On the other hand, e.g. obvious racial slurs never have a place on a > discussion board about a programming language. How can one possibly say > them in good faith? > > Rhodri James schrieb am Fr., 21. Sep. 2018 um > 15:46 Uhr: > > > On 20/09/18 19:56, Brett Cannon wrote: > > > Based on the WG's recommendation and after discussing it with Titus, the > > > decision has been made to ban Jacco from python-ideas. Trivializing > > > assault, using the n-word, and making inappropriate comments about > > > someone's mental stability are all uncalled for and entirely unnecessary > > to > > > carry on a reasonable discourse of conversation that remains welcoming to > > > others. > > > > Not a challenge to the ban in any way, but I feel the need to repeat > > what I said about banning words. The moment you create that taboo, you > > give the word power. That's the exact opposite of what you want to do. > > It's the intent with which the word is used that matters. I've heard > > all sorts of words used as insults -- "special", anyone? -- and many of > > the same words used innocently or affectionately. > > > > Banning bad or insulting behaviour is fine. Banning words is a bad path > > to go down. 
> > > > -- > > Rhodri James *-* Kynesim Ltd > > _______________________________________________ > > Python-ideas mailing list > > Python-ideas at python.org > > https://mail.python.org/mailman/listinfo/python-ideas > > Code of Conduct: http://python.org/psf/codeofconduct/ > > > From elazarg at gmail.com Fri Sep 21 10:52:16 2018 From: elazarg at gmail.com (Elazar) Date: Fri, 21 Sep 2018 17:52:16 +0300 Subject: [Python-ideas] CoC violation In-Reply-To: References: <20180915193848.749c1e03@fsol> <44c35adf-24e1-72d3-c043-a496fa6fd7f4@kynesim.co.uk> Message-ID: On Fri, Sep 21, 2018, 16:56 Philipp A. wrote: > The main clause differentiating bad, weaponizable CoCs from good ones is > > "Assume good faith" > > Everything will be OK if good faith can reasonably be assumed (E.g. when > someone uses a word which is only offensive based on context) > On the other hand, e.g. obvious racial slurs never have a place on a > discussion board about a programming language. How can one possibly say > them in good faith? > Here's how: as a demonstration that words that are considered slurs in certain contexts (such as the word "Negro" in America) might be considered perfectly legitimate day-to-day words in another context. Even if the example was incorrect, it is still legitimate. Your question should be directed against the OP in the discussion, bringing up an issue completely unrelated to programming languages (probably trolling, as several people before me have pointed out). Elazar > Rhodri James schrieb am Fr., 21. Sep. 2018 um > 15:46 Uhr: > >> On 20/09/18 19:56, Brett Cannon wrote: >> > Based on the WG's recommendation and after discussing it with Titus, the >> > decision has been made to ban Jacco from python-ideas. Trivializing >> > assault, using the n-word, and making inappropriate comments about >> > someone's mental stability are all uncalled for and entirely >> unnecessary to >> > carry on a reasonable discourse of conversation that remains welcoming >> to >> > others. >> >> Not a challenge to the ban in any way, but I feel the need to repeat >> what I said about banning words. The moment you create that taboo, you >> give the word power. That's the exact opposite of what you want to do. >> It's the intent with which the word is used that matters. I've heard >> all sorts of words used as insults -- "special", anyone? -- and many of >> the same words used innocently or affectionately. >> >> Banning bad or insulting behaviour is fine. Banning words is a bad path >> to go down. >> >> -- >> Rhodri James *-* Kynesim Ltd >> _______________________________________________ >> Python-ideas mailing list >> Python-ideas at python.org >> https://mail.python.org/mailman/listinfo/python-ideas >> Code of Conduct: http://python.org/psf/codeofconduct/ >> > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chris.barker at noaa.gov Fri Sep 21 11:11:20 2018 From: chris.barker at noaa.gov (Chris Barker) Date: Fri, 21 Sep 2018 17:11:20 +0200 Subject: [Python-ideas] Moving to another forum system where In-Reply-To: <5557B592-78D7-4EAC-846C-99461B34123A@gmail.com> References: <5557B592-78D7-4EAC-846C-99461B34123A@gmail.com> Message-ID: On Fri, Sep 21, 2018 at 1:24 PM James Lu wrote: > One of the reasons Guido left was the insane volume of emails he had to > read on Python-ideas. > You'd have to ask Guido directly, but I don't think so. It wasn't the volume, but the nature and timing of the discussion that was so difficult. It went on for a LONG time, with many, many circular arguments, and people commenting on issues that had already been brought up and maybe resolved. Then the kicker -- after a decision was made, there were very strong objections -- the whole process was rather ugly. One can certainly make a good case that a different system for having such discussion might have made it much better -- but I'm not so sure. But maybe it's a good case-study to guide a decision. Frankly, I'm more concerned about how an important technical discussion like that goes than I am about issues like the recent "beautiful - ugly" thread. Maybe we need something in-between python-ideas and python-dev -- a place to discuss "serious" proposals, where "serious" means somewhat fleshed out, and with the support of at least a couple key people. One of the problems with the assignment expression discussion is that it got pretty far on python-ideas, then moved to python-dev, where it was further discussed (and there were parallel threads on the two lists) -- but the two lists have overlapping, but different, members, so some folks were surprised at the outcome. -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From rymg19 at gmail.com Fri Sep 21 11:23:25 2018 From: rymg19 at gmail.com (Ryan Gonzalez) Date: Fri, 21 Sep 2018 10:23:25 -0500 Subject: [Python-ideas] "slur" vs "insult"? In-Reply-To: <20180921164411.2c26f346@fsol> References: <20180915193848.749c1e03@fsol> <44c35adf-24e1-72d3-c043-a496fa6fd7f4@kynesim.co.uk> <20180921164411.2c26f346@fsol> Message-ID: Kinda OT, but I believe the connotation is that slur is use of the word, whereas an insult is use directed at someone. For instance, if someone is having a conversation where they use the n-word, it's a racial slur. If they directly call someone that, it's an insult. On Fri, Sep 21, 2018, 9:45 AM Antoine Pitrou wrote: > > Hi, > > For the record I was surprised to see the word "slur" pop up > quite often recently, while I'd only heard "insult" before. I > looked it up and it doesn't help that the French translation seems to > be the same in both cases (it's "insulte"). > > Then I came upon this thread where someone pretty much asks the same > question: > https://www.reddit.com/r/EnglishLearning/comments/6bjgwq/slur_vs_insult/ > > and the comments there are interesting as to how complicated and > difficult to grasp the cultural landscape of linguistic taboos really > is. > > Regards > > Antoine. > > > On Fri, 21 Sep 2018 15:55:10 +0200 > "Philipp A."
wrote: > > The main clause differentiating bad, weaponizable CoCs from good ones is > > > > "Assume good faith" > > > > Everything will be OK if good faith can reasonably be assumed (E.g. when > > someone uses a word which is only offensive based on context) > > On the other hand, e.g. obvious racial slurs never have a place on a > > discussion board about a programming language. How can one possibly say > > them in good faith? > > > > Rhodri James schrieb am Fr., 21. Sep. 2018 um > > 15:46 Uhr: > > > > > On 20/09/18 19:56, Brett Cannon wrote: > > > > Based on the WG's recommendation and after discussing it with Titus, > the > > > > decision has been made to ban Jacco from python-ideas. Trivializing > > > > assault, using the n-word, and making inappropriate comments about > > > > someone's mental stability are all uncalled for and entirely > unnecessary > > > to > > > > carry on a reasonable discourse of conversation that remains > welcoming to > > > > others. > > > > > > Not a challenge to the ban in any way, but I feel the need to repeat > > > what I said about banning words. The moment you create that taboo, you > > > give the word power. That's the exact opposite of what you want to do. > > > It's the intent with which the word is used that matters. I've heard > > > all sorts of words used as insults -- "special", anyone? -- and many of > > > the same words used innocently or affectionately. > > > > > > Banning bad or insulting behaviour is fine. Banning words is a bad > path > > > to go down. > > > > > > -- > > > Rhodri James *-* Kynesim Ltd > > > _______________________________________________ > > > Python-ideas mailing list > > > Python-ideas at python.org > > > https://mail.python.org/mailman/listinfo/python-ideas > > > Code of Conduct: http://python.org/psf/codeofconduct/ > > > > > > > > > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -- Ryan (????) Yoko Shimomura, ryo (supercell/EGOIST), Hiroyuki Sawano >> everyone else https://refi64.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From rhodri at kynesim.co.uk Fri Sep 21 11:36:31 2018 From: rhodri at kynesim.co.uk (Rhodri James) Date: Fri, 21 Sep 2018 16:36:31 +0100 Subject: [Python-ideas] "slur" vs "insult"? In-Reply-To: References: <20180915193848.749c1e03@fsol> <44c35adf-24e1-72d3-c043-a496fa6fd7f4@kynesim.co.uk> <20180921164411.2c26f346@fsol> Message-ID: On 21/09/18 16:23, Ryan Gonzalez wrote: > Kinda OT, but I believe the connotation is that slur is use of the word, > whereas an insult is use directed at someone. According to Chambers online, a slur is "a disparaging remark intended to damage a reputation" while an insult is "a rude or offensive remark or action" (at least in the meanings we are talking about). They have overlapping meanings but aren't identical. -- Rhodri James *-* Kynesim Ltd From hasan.diwan at gmail.com Fri Sep 21 11:49:16 2018 From: hasan.diwan at gmail.com (Hasan Diwan) Date: Fri, 21 Sep 2018 08:49:16 -0700 Subject: [Python-ideas] "slur" vs "insult"? In-Reply-To: References: <20180915193848.749c1e03@fsol> <44c35adf-24e1-72d3-c043-a496fa6fd7f4@kynesim.co.uk> <20180921164411.2c26f346@fsol> Message-ID: English is not straightforward and is constantly evolving. 
-- H On Fri, 21 Sep 2018 at 08:45, Rhodri James wrote: > > On 21/09/18 16:23, Ryan Gonzalez wrote: > > Kinda OT, but I believe the connotation is that slur is use of the word, > > whereas an insult is use directed at someone. > > According to Chambers online, a slur is "a disparaging remark intended > to damage a reputation" while an insult is "a rude or offensive remark > or action" (at least in the meanings we are talking about). They have > overlapping meanings but aren't identical. > > -- > Rhodri James *-* Kynesim Ltd > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ -- OpenPGP: https://sks-keyservers.net/pks/lookup?op=get&search=0xFEBAD7FFD041BBA1 If you wish to request my time, please do so using bit.ly/hd1AppointmentRequest. Si vous voudrais faire connnaisance, allez a bit.ly/hd1AppointmentRequest. Sent from my mobile device Envoye de mon portable From rymg19 at gmail.com Fri Sep 21 11:54:40 2018 From: rymg19 at gmail.com (Ryan Gonzalez) Date: Fri, 21 Sep 2018 10:54:40 -0500 Subject: [Python-ideas] =?utf-8?q?JS=E2=80=99_governance_model_is_worth_i?= =?utf-8?q?nspecting?= In-Reply-To: References: Message-ID: This feels a bit like apples and oranges. Babel's primary purpose is transpiling to run on older browsers, which isn't that much of an issue with Python. It's also complicated a bit by the large number of implementations that *must* be developed in sync, again due to running in user's browsers. On Fri, Sep 21, 2018, 6:25 AM James Lu wrote: > JS? decisions are made by a body known as TC39, a fairly/very small group > of JS implementers. > > First, JS has an easy and widely supported way to modify the language for > yourself: Babel. Babel transpires your JS to older JS, which is then run. > > You can publish your language modification on the JS package manager, npm. > > When a feature is being considered for inclusion in mainline JS, the > proposal must first gain a champion (represented by ?)that is a member of > TC-39. The guidelines say that the proposal?s features should already have > found use in the community. Then it moves through three stages, and the > champion must think the proposal is ready for the next stage before it can > move on. I?m hazy on what the criterion for each of the three stages is. > The fourth stage is approved. > > I believe the global TC39 committee meets regularly in person, and at > those meetings, proposals can advance stages- these meetings are frequent > enough for the process to be fast and slow enough that people can have the > time to try out a feature before it becomes main line JS. Meeting notes are > made public. > > The language and its future features are discussed on ESDiscuss.org, which > is surprisingly filled with quality and respectful discussion, largely from > experts in the JavaScript language. > > I?m fairly hazy on the details, this is just the summary off the top of my > head. > > ? > I?m not saying this should be Python?s governance model, just to keep JS? > in mind. > > > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -- Ryan (????) 
Yoko Shimomura, ryo (supercell/EGOIST), Hiroyuki Sawano >> everyone else https://refi64.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From leewangzhong+python at gmail.com Fri Sep 21 13:55:03 2018 From: leewangzhong+python at gmail.com (Franklin? Lee) Date: Fri, 21 Sep 2018 13:55:03 -0400 Subject: [Python-ideas] CoC violation In-Reply-To: References: <20180915193848.749c1e03@fsol> <44c35adf-24e1-72d3-c043-a496fa6fd7f4@kynesim.co.uk> Message-ID: On Fri, Sep 21, 2018 at 10:52 AM Elazar wrote: > > > > On Fri, Sep 21, 2018, 16:56 Philipp A. wrote: >> >> The main clause differentiating bad, weaponizable CoCs from good ones is >> >> "Assume good faith" >> >> Everything will be OK if good faith can reasonably be assumed (E.g. when someone uses a word which is only offensive based on context) >> On the other hand, e.g. obvious racial slurs never have a place on a discussion board about a programming language. How can one possibly say them in good faith? > > > Here's how: as a demonstration that words that are considered slurs in certain contexts (such as the word "Negro" in America) might be considered perfectly legitimate day-to-day words in another context. Even if the example was incorrect, it is still legitimate. I didn't report him, and I don't agree with the ban, but I assume I'm missing something if they felt the need to act so strongly, days after the discussion died down. Some words are KNOWN to be considered taboo by some. Using the word (instead of a euphemism), especially while discussing the taboo, is an intentional political act against those people. Compare with "Voldemort" in the well-known series "Harry Potter". The protagonists use the name in the presence of other, superstitious, characters, when they intend to change the status quo. If they wanted to have a polite conversation about it, they would use the common euphemism for that name, because you don't want to ADD emotions to such a conversation. (I'm intentionally using a positive example, to keep people from feeling slighted by a negative one.) > Your question should be directed against the OP in the discussion, bringing up an issue completely unrelated to programming languages (probably trolling, as several people before me have pointed out). I'm one of those who believe the OP was a troll. But you're saying that the post was also off-topic. Where would you rather have it go? The proposal was about changing the Zen of Python. If ANY proposed change in the Zen goes through, I'd expect to see a discussion on python-ideas. On the other hand, discussing taboo words in general society is less on-topic. Tie it back to Python and how it hurts Python to ban these words. From ctbrown at ucdavis.edu Fri Sep 21 13:59:06 2018 From: ctbrown at ucdavis.edu (C. Titus Brown) Date: Fri, 21 Sep 2018 10:59:06 -0700 Subject: [Python-ideas] CoC violation In-Reply-To: References: <20180915193848.749c1e03@fsol> <44c35adf-24e1-72d3-c043-a496fa6fd7f4@kynesim.co.uk> Message-ID: <20180921175906.GA18197@idyll.org> On Fri, Sep 21, 2018 at 01:55:03PM -0400, Franklin? Lee wrote: > On Fri, Sep 21, 2018 at 10:52 AM Elazar wrote: > > > > > > > > On Fri, Sep 21, 2018, 16:56 Philipp A. wrote: > >> > >> The main clause differentiating bad, weaponizable CoCs from good ones is > >> > >> "Assume good faith" > >> > >> Everything will be OK if good faith can reasonably be assumed (E.g. when someone uses a word which is only offensive based on context) > >> On the other hand, e.g. 
obvious racial slurs never have a place on a discussion board about a programming language. How can one possibly say them in good faith? > > > > > > Here's how: as a demonstration that words that are considered slurs in certain contexts (such as the word "Negro" in America) might be considered perfectly legitimate day-to-day words in another context. Even if the example was incorrect, it is still legitimate. > > I didn't report him, and I don't agree with the ban, but I assume I'm > missing something if they felt the need to act so strongly, days after > the discussion died down. Hi folks, we have a committee-based process for making these decisions, so it necessarily takes some time. Brett and I can make urgent decisions but everything goes through the process. best, --titus From jamtlu at gmail.com Fri Sep 21 16:29:57 2018 From: jamtlu at gmail.com (James Lu) Date: Fri, 21 Sep 2018 16:29:57 -0400 Subject: [Python-ideas] =?utf-8?q?JS=E2=80=99_governance_model_is_worth_i?= =?utf-8?q?nspecting?= In-Reply-To: References: Message-ID: <9103CDEF-8933-4081-A986-EA6949885E1A@gmail.com> > Babel's primary purpose is transpiling to run on older browsers, which isn't that much of an issue with Python. It's also complicated a bit by the large number of implementations that *must* be developed in sync, again due to running in user's browsers. It?s true that one of Babel?s purposes is to transpile to older browsers. However, if you look at the React Native project template (a fairly typical JavaScript project template), the Babel transpiler preset looks like this: - Remove Flow and transpile JSX: these are language extensions for type hinting and inline XML, not intended to be merged with mainline JavaScript. - Transpile Standardized JavaScript to older JavaScript - Stage-3 Proposal: adds dedicated syntax for setting static and instance variables outside of the constructor but within the class. - Stage-1 Proposal: syntax to support trailing comma in functions function foo( a, b, c, ) { } Inspired from Python?s syntax: def foo(a, b, c, ): ... As you can see, two non-standard features under consideration for inclusion in the standard are included in the preset. This inclusion of non-standard features is typical for JS starter projects. One of the requirements for advancing stages is seeing practical use in the industry. Since almost everyone uses Babel anyways, this four stage process acts as a way to gain consensus on the base set of JS features. Almost all of the newest standard JS took this route of unofficial use before official inclusion. From greg.ewing at canterbury.ac.nz Fri Sep 21 19:18:47 2018 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sat, 22 Sep 2018 11:18:47 +1200 Subject: [Python-ideas] Moving to another forum system where In-Reply-To: <5557B592-78D7-4EAC-846C-99461B34123A@gmail.com> References: <5557B592-78D7-4EAC-846C-99461B34123A@gmail.com> Message-ID: <5BA57C57.90904@canterbury.ac.nz> James Lu wrote: > I believe GitHub has direct email > capability. If you watch the repository and have email notifications on, you > can reply directly to an email and it will be sent as a reply. Can you start a new topic of conversation by email, though? > The best solution > would to have admins receive all the email, and broadcast a subset of the > email sent, only broadcasting new arguments and new opinions. > > Admins can do this ?summary duty? every 12 hours on a rotating basis, where > each admin takes turns doing summary duty. 
Even spreading the load out, it sounds like a huge amount of work. And I question the feasibility of admins deciding whether an argument is "new" or not -- that would require an encyclopaedic knowledge of all past discussions. Hard enough for one person, even harder if it's a rotating duty. -- Greg From Richard at Damon-Family.org Fri Sep 21 19:31:35 2018 From: Richard at Damon-Family.org (Richard Damon) Date: Fri, 21 Sep 2018 19:31:35 -0400 Subject: [Python-ideas] Moving to another forum system where In-Reply-To: <5557B592-78D7-4EAC-846C-99461B34123A@gmail.com> References: <5557B592-78D7-4EAC-846C-99461B34123A@gmail.com> Message-ID: <00f46b60-95c8-c6ff-e796-daf9bec8b388@Damon-Family.org> On 9/20/18 9:45 PM, James Lu wrote: > One of the reasons Guido left was the insane volume of emails he had to read on Python-ideas. > >> A tiny bit of discussion is still better than none at all. >> And even if there's no discussion, there's a name attached >> to the message, which makes it more personal and meaningful >> than a "+1" counter getting incremented somewhere. >> >> Counting only makes sense if the counts are going to be >> treated as votes, and we don't do that. > I agree. I think this is good evidence in favor of using GitHub pull requests or GitHub issues- you can see exactly who +1?d a topic. > > GitHub also has moderation tools and the ability to delete comments that are irrelevant, and edit comments that are disrespectful. > >> I hope that, if any such change is made, a forum system is >> chosen that allows full participation via either email or news. >> Otherwise it will probably mean the end of my participation, >> because I don't have time to chase down and wrestle with >> multiple web forums every day. > +1, everyone should be accommodated. I believe GitHub has direct email capability. If you watch the repository and have email notifications on, you can reply directly to an email and it will be sent as a reply. > > ? > To solve the problem of tons of email for controversial decisions like :=, I don?t think GitHub issues would actually be the solution. The best solution would to have admins receive all the email, and broadcast a subset of the email sent, only broadcasting new arguments and new opinions. > > Admins can do this ?summary duty? every 12 hours on a rotating basis, where each admin takes turns doing summary duty. > > This solution would mean a slower iteration time for the conversation, but it would significantly lessen the deluge of email, and I think that would make it more bearable for people participating in the conversation. After all, once a proposal has been fleshed out, what kind of conversation needs more than say 30 rounds of constructive discussion- in that case, if people reply every 25 hours, the discussion would be done in a month. > > For transparency purposes, all of the email can be made received for approval can be published online. Actually, since this is a Mailman list, all that needs to happen is to turn on moderation. Every message is held in the moderation queue till handled. If any of the people in charge think it is a useful message, they release it to the list. If any of the people in charge think it is a bad message, they can reject it (first to act wins). Probably need someone to periodically review the messages that have sit for a bit and make a decision on them. Some trusted people can have their moderation status removed, and what they post goes to the list immediately, and if they abuse that right, it can be taken back. 
-- Richard Damon From greg.ewing at canterbury.ac.nz Fri Sep 21 20:33:25 2018 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sat, 22 Sep 2018 12:33:25 +1200 Subject: [Python-ideas] Moving to another forum system where In-Reply-To: References: <5557B592-78D7-4EAC-846C-99461B34123A@gmail.com> Message-ID: <5BA58DD5.609@canterbury.ac.nz> Chris Barker via Python-ideas wrote: > One of the > problems with the assignment expression discussion is that it got pretty > far on python-ideas, then moved to python-dev, where is was further > discussed (and there were parallel thread on the two lists) As long as there are two lists with similar purposes, this sort of thing will be prone to happen -- and adding a third list can only make it worse. -- Greg From brenbarn at brenbarn.net Fri Sep 21 12:20:01 2018 From: brenbarn at brenbarn.net (Brendan Barnwell) Date: Fri, 21 Sep 2018 09:20:01 -0700 Subject: [Python-ideas] =?utf-8?q?JS=E2=80=99_governance_model_is_worth_i?= =?utf-8?q?nspecting?= In-Reply-To: References: Message-ID: <5BA51A31.1030302@brenbarn.net> On 2018-09-20 18:56, James Lu wrote: > JS? decisions are made by a body known as TC39, a fairly/very small > group of JS implementers. > ? I?m not saying this should be Python?s governance model, just to > keep JS? in mind. To my mind, there is one very big reason we should be cautious about adopting JS language-design policies, namely, that they have led to a very, very poorly designed language. No doubt a good deal of that is baggage from early stages in which JS had a poor to nonexistent language design governance model. Nonetheless, the failure of JS to fix its numerous fundamental flaws, and especially the rapid feature churn in recent years, suggests to me that their model should be viewed with skepticism. -- Brendan Barnwell "Do not follow where the path may lead. Go, instead, where there is no path, and leave a trail." --author unknown From turnbull.stephen.fw at u.tsukuba.ac.jp Sat Sep 22 04:00:26 2018 From: turnbull.stephen.fw at u.tsukuba.ac.jp (Stephen J. Turnbull) Date: Sat, 22 Sep 2018 17:00:26 +0900 Subject: [Python-ideas] Moving to another forum system where In-Reply-To: References: <5557B592-78D7-4EAC-846C-99461B34123A@gmail.com> Message-ID: <23461.63130.176175.582090@turnbull.sk.tsukuba.ac.jp> Chris Barker via Python-ideas writes: > On Fri, Sep 21, 2018 at 1:24 PM James Lu wrote: > > > One of the reasons Guido left was the insane volume of emails he > > had to read on Python-ideas. > > You'd have to ask Guido directly, but I don't think so. It wasn't > the volume, but the nature and timing of the discussion that was so > difficult. +1. I have talked to Guido about this issue, though long before his BDFL resignation, and at that time he pointed out nature and timing as his primary concern. (Antoine Pitrou has also lamented the fact that people take a post asking for help on a technical issue in an approved PR as a chance to reopen debate on the wisdom of the change.) For Guido, the "thread mute" feature of his MUA does a lot of work to mitigate volume. > Maybe we need something in-between python-idea and python-dev -- a > place to discuss "serious" proposals, where "serious" means > somewhat fleshed out, and with the support of at least a couple key > people. I'm with Greg Ewing on this: an additional list simply adds more potential for confusion and misinformation. 
> One of the problems with the assignment expression discussion is > that it got pretty far on python-ideas, then moved to python-dev, > where is was further discussed (and there were parallel thread on > the two lists)[.] That's a good point, one I had not noticed, and very useful to the Mailman devs. This is an excellent reason for invoking cloture on a thread. It's the only one needed on Python lists IMO -- if things get bad enough that enforced moderation, rather than a "nothing to see here, people, please move along" post, is needed, usually there's a bad actor who needs a time out. Steve From turnbull.stephen.fw at u.tsukuba.ac.jp Sat Sep 22 04:11:22 2018 From: turnbull.stephen.fw at u.tsukuba.ac.jp (Stephen J. Turnbull) Date: Sat, 22 Sep 2018 17:11:22 +0900 Subject: [Python-ideas] PEPs: Theory of operation [was: Moving to another forum system ...] In-Reply-To: <93AF02F1-CE05-4D91-B25C-C0D6DC1FC73D@killingar.net> References: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> <23459.25799.174825.591496@turnbull.sk.tsukuba.ac.jp> <93AF02F1-CE05-4D91-B25C-C0D6DC1FC73D@killingar.net> Message-ID: <23461.63786.966966.519896@turnbull.sk.tsukuba.ac.jp> Executive summary: Writing a PEP is an inherently uncertain process. Achieving "community consensus" is the goal of the process, not a precondition. Anders Hovm?ller writes: > In general pep1 is frustratingly vague. Terms like ?community > consensus? without defining community or what numbers would > constitute a consensus are not fun to read as someone who doesn?t > personally know anyone of the core devs. Further references to > Guido are even more frustrating now that he?s bowed out. These terms have little to do with what a new PEP's proponent needs to think about, though. A PEP-able proposal by definition involves uncertainty. Nobody, not even Guido, can tell you in advance whether a PEP will be accepted (for implementation). The PEP process is rigorous enough that by the time you get close to needing consensus to proceed, you'll know what it means. "Community consensus" is not a condition for *anything* in the PEP process, except final acceptance. It is the *goal* of the process. PEPs are approved (for publication) by default; the only requirement is editorial completeness. PEPs are needed for two reasons: (1) to get the input of the community, both highly competent engineers for implementation and a variety of users for requirements, to refine a complex proposal or one with far-reaching implications for the language, and/or (2) to build a consensus for implementation. Either way, by definition the outcome is unclear at the beginning. If your concern about "consensus" is that you want to know whether you're likely to get to consensus, and an accepted PEP, ask somebody who seems sympathetic and experienced enough to know about what it looks like on the list when a PEP is going to succeed. Anything PEP-able is sufficiently unclear that rules can't be given in PEP 1. It is possible only to say that Python is now very mature, and there's a strong conservative bias against change. That doesn't mean there aren't changes: Python attracts a lot of feature proposals, so the rate of change isn't slowing although the acceptance rate is declining gradually. "Consensus" is never defined by numbers in the English language, and it does not mean "unanimity". In PEP 1, it means that some people agree, most people don't disagree, and even if a senior person disagrees, they're willing to go along with the "sense of the community". 
As that adjective "senior" implies, some people count more to the consensus than others. Usually when I write "senior" I'm referring to core developers (committers), but here there people who are "senior" enough despite not having commit bits.[1] "The community" is not well defined, and it can't be, short of a doctoral dissertation in anthropology. The relevant channels are open-participation, some people speak for themselves, some people are "official" representatives of important constituencies such as the leaders of large independent projects or alternative implementations, and some people have acquired sufficient reputation to be considered representative of a group of people (especially when other members of the group rarely participate in the dev lists but for some reason are considered important to the community -- I'm thinking in particular of sysadmins and devops, and the problems we can cause them by messing with packaging and installation). References to the BDFL are, of course, in limbo. AFAIK we don't have one at the moment. Until we do, any PEPs will presumably be accepted either by a self-nominated BDFL-Delegate acceptable to the core devs, or by an ad hoc committee of interested core devs, and that part of PEP 1 can't be usefully updated yet. This is not a weakness of the Python project, IMO. Rather, the fact that, despite a sort of constitutional crisis, the whole process is continuing pretty much as usual shows its strength. This is possible because the BDFL is not, and has not been for many years, a "hands-on" manager. It's true that where a proposal affects his own "development *in* Python", he's likely to work closely with a proponent, off- and on-list, or even *be* the proponent. Of course such proposals are more likely to be approved, and a few community members have pushed back on that because it appears undemocratic. But the general reaction is "maybe 'Although that way may not be obvious at first unless you're Dutch' applies to me in such cases!" For most proposals, he's "just" a very senior developer whose comments are important because he's a great developer, but he is easily swayed by the sense of the community. Bottom line: except in the rare case where your proposal directly affects the BDFL's own coding, the BDFL's now-traditional role is to declare that consensus has been achieved, postpone the PEP because it's clear that consensus is not forming, or in rare cases, make a choice despite the lack of consensus. But none of this is really of importance to a PEP proponent ("champion" in the terminology of PEP 1). PEP 1 is quite specific about the required components of the document, and many points of formatting and style. Accept the uncertainty, and do what you need to do to meet those requirements, that's all there is to it. If the community wants more, or wants changes, it will tell you, either as a demand about style or missing content from an editor or as a technical comment on the list. Whether you accept those technical comments is up to you, but your star will rise far more rapidly if you are very sensitive to claims that "this change to the PEP will a big improvement for some significant consituency in the community". If you want advice on whether the chance of acceptance is high enough to be worth putting in more work, ask the BDFL-Delegate (or the BDFL if she/he has "claimed" the PEP) where the proposal has an official adjudicator, and if not, a senior core developer. 
If one doesn't know who the senior developers are yet, she should think twice about whether she's ready to PEP anything. That's not a litmus test; some PEPs have eventually succeeded though the proponent was new to the project development process.[2] But it's a lot less painful if you can tell who's likely to be able to sway the whole project one way or the other. And as a matter of improving your proposal, who surely does know more about what your proposal implies for the implementation than you do, so you should strongly consider whether *you* are the one who's missing something when you disagree with them. Footnotes: [1] They are familiar to some of the core developers as drivers of important projects developing *in* Python. [2] The ones I can think of involve the same kind of person as footnote 1, and a co-proponent who was a core developer. From turnbull.stephen.fw at u.tsukuba.ac.jp Sat Sep 22 04:12:45 2018 From: turnbull.stephen.fw at u.tsukuba.ac.jp (Stephen J. Turnbull) Date: Sat, 22 Sep 2018 17:12:45 +0900 Subject: [Python-ideas] Moving to another forum system where In-Reply-To: References: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> <23459.25799.174825.591496@turnbull.sk.tsukuba.ac.jp> Message-ID: <23461.63869.732837.294576@turnbull.sk.tsukuba.ac.jp> Michael Selik writes: > On Thu, Sep 20, 2018 at 2:13 AM Stephen J. Turnbull > wrote: > > That's because completion of discussion has never been a requirement > > for writing a PEP. > > Not for drafting, but for submitting. For my own PEP submission, I > received the specific feedback that it needed a "proper title" before > being assigned a PEP number. What does that have to do with "completion of discussion"? I don't know what the editor told you, but in the PEP "proper title" is well- defined and not very stringent: "accurately describes the content". > My goal for submitting the draft was to receive a PEP number to > avoid the awkwardness of discussing a PEP without an obvious > title. Perhaps PEP 1 should be revised to clarify the expectations > for PEP submission. Good point. That's definitely grounds for refusing to approve the PEP, but the approval criteria are in the section "PEP Editor Responsibilities & Workflow". I'm submitting a pull request (python/peps #789 on GitHub) to also put it in the section "Submitting a PEP", under the bullet "The PEP editors review your PR for structure, formatting, and other errors." Steve From marko.ristin at gmail.com Sat Sep 22 04:30:23 2018 From: marko.ristin at gmail.com (Marko Ristin-Kaufmann) Date: Sat, 22 Sep 2018 10:30:23 +0200 Subject: [Python-ideas] Pre-conditions and post-conditions In-Reply-To: References: <140891b8-3aef-0991-9421-7479e6a63eb6@gmail.com> Message-ID: Hi, I implemented a sphinx extension to include contracts in the documentation: https://github.com/Parquery/sphinx-icontract The extension supports inheritance. It lists all the postconditions and invariants including the inherited one. The preconditions are grouped by classes with ":requires:" and ":requires else:". I was unable to get the syntax highlighting for in-line code to work -- does anybody know how to do that in Sphinx? The results can be seen, *e.g.* in this documentation: https://pypackagery.readthedocs.io/en/latest/packagery.html On a more general note: is there any blocker left why you would *not *use the contracts in your code? Anything I could improve or fix in icontract that would make it more convincing to use (apart from implementing static contract checking and automatic test generation :))? 
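For anyone who wants a quick taste before answering that question, here is a minimal, self-contained sketch of the kind of function the extension documents. The function itself is made up purely for illustration; only the icontract.pre/icontract.post decorators are the actual library API, used the same way as in the snippets further down this thread:

import icontract

@icontract.pre(lambda text: len(text) > 0)
@icontract.post(lambda result: result.endswith("\n"))
def normalize(text: str) -> str:
    """Strip trailing whitespace and guarantee a single trailing newline."""
    return text.rstrip() + "\n"

# Both conditions are checked on every call:
normalize("hello")   # returns "hello\n"
normalize("")        # raises a precondition violation error

sphinx-icontract would then list the precondition under ":requires:" next to the docstring, so the documented behavior and the enforced behavior cannot drift apart.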
Cheers, Marko On Thu, 20 Sep 2018 at 22:52, Marko Ristin-Kaufmann wrote: > Hi, > Again a brief update. > > * icontract supports now static and class methods (thanks to my colleague > Adam Radomski) which came very handy when defining a group of functions as > an interface *via* an abstract (stateless) class. The implementors then > need to all satisfy the contracts without needing to re-write them. You > could implement the same behavior with *_impl or _* ("protected") methods > where public methods would add the contracts as asserts, but we find the > contracts-as-decorators more elegant (N functions instead of 2*N; see the > snippet below). > > * We implemented a linter to statically check that the contract arguments > are defined correctly. It is available as a separate Pypi package > pyicontract-lint (https://github.com/Parquery/pyicontract-lint/). Next > step will be to use asteroid to infer that the return type of the condition > function is boolean. Does it make sense to include PEX in the release on > github? > > * We plan to implement a sphinx plugin so that contracts can be readily > visible in the documentation. Is there any guideline or standard/preferred > approach how you would expect this plugin to be implemented? My colleagues > and I don't have any experience with sphinx plugins, so any guidance is > very welcome. > > class Component(abc.ABC, icontract.DBC): > """Initialize a single component.""" > > @staticmethod > @abc.abstractmethod > def user() -> str: > """ > Get the user name. > > :return: user which executes this component. > """ > pass > > @staticmethod > @abc.abstractmethod > @icontract.post(lambda result: result in groups()) > def primary_group() -> str: > """ > Get the primary group. > > :return: primary group of this component > """ > pass > > @staticmethod > @abc.abstractmethod > @icontract.post(lambda result: result.issubset(groups())) > def secondary_groups() -> Set[str]: > """ > Get the secondary groups. > > :return: list of secondary groups > """ > pass > > @staticmethod > @abc.abstractmethod > @icontract.post(lambda result: all(not pth.is_absolute() for pth in result)) > def bin_paths(config: mapried.config.Config) -> List[pathlib.Path]: > """ > Get list of binary paths used by this component. > > :param config: of the instance > :return: list of paths to binaries used by this component > """ > pass > > @staticmethod > @abc.abstractmethod > @icontract.post(lambda result: all(not pth.is_absolute() for pth in result)) > def py_paths(config: mapried.config.Config) -> List[pathlib.Path]: > """ > Get list of py paths used by this component. > > :param config: of the instance > :return: list of paths to python executables used by this component > """ > pass > > @staticmethod > @abc.abstractmethod > @icontract.post(lambda result: all(not pth.is_absolute() for pth in result)) > def dirs(config: mapried.config.Config) -> List[pathlib.Path]: > """ > Get directories used by this component. > > :param config: of the instance > :return: list of paths to directories used by this component > """ > pass > > > On Sat, 15 Sep 2018 at 22:14, Marko Ristin-Kaufmann < > marko.ristin at gmail.com> wrote: > >> Hi David Maertz and Michael Lee, >> >> Thank you for raising the points. Please let me respond to your comments >> in separation. Please let me know if I missed or misunderstood anything. 
>> >> *Assertions versus contracts.* David wrote: >> >>> I'm afraid that in reading the examples provided it is difficult for >>> me not simply to think that EVERY SINGLE ONE of them would be FAR easier to >>> read if it were an `assert` instead. >>> >> >> I think there are two misunderstandings about the role of the contracts. >> First, they are part of the function signature, and not of the >> implementation. In contrast, the assertions are part of the implementation >> and are completely obscured in the signature. To see the contracts of a >> function or a class written as assertions, you need to visually inspect the >> implementation. The contracts are instead engraved in the signature and >> immediately visible. For example, you can test the distinction by pressing >> Ctrl+Q in PyCharm. >> >> Second, assertions are only suitable for preconditions. Postconditions >> are practically unmaintainable as assertions as soon as you have multiple >> early returns in a function. The invariants implemented as assertions are >> always unmaintainable in practice (except for very, very small classes) -- >> you need to inspect each function of the class and all their return >> statements and manually add assertions for each invariant. Removing or >> changing invariants manually is totally impractical in my view. >> >> *Efficiency and Evidence. *David wrote: >>> The API of the library is a bit noisy, but I think the obstacle is >>> more in the higher-level design for me. Adding many layers of expensive >>> runtime checks and many lines of code in order to assure simple predicates >>> that a glance at the code or unit tests would do better seems wasteful. >> >> >> I'm not very sure what you mean by expensive runtime checks -- every >> single contract can be disabled at any point. Once a contract is disabled, >> there is literally no runtime computational cost incurred. The complexity >> of a contract during testing is also exactly the same as if you wrote it in >> the unit test. There is a constant overhead due to the extra function call >> to check the condition, but there's no more time complexity to it. The >> overhead of an additional function call is negligible in most practical >> test cases. >> >> When you say "a glance at the code", this implies to me that you are >> referring to your own code and not to legacy code. In my experience, even >> simple predicates are often not obvious to see in other people's code as >> one might think (*e.g. *I had to struggle with even the most simple ones >> like whether the result ends in a newline or not -- often having to >> actually run the code to check experimentally what happens with different >> inputs). Postconditions prove very useful in such situations: they let us >> know that whenever a function returns, the result must satisfy its >> postconditions. They are formal and obvious to read in the function >> signature, and hence spare us the need to parse the function's >> implementation or run it. >> >> *Contracts in the unit tests.* >> >>> The API of the library is a bit noisy, but I think the obstacle is >>> more in the higher-level design for me. Adding many layers of expensive >>> runtime checks and many lines of code in order to assure simple predicates >>> that a glance at the code or *unit tests would do better* seems >>> wasteful. >>> >> (emphasis mine) >> >> Defining contracts in a unit test is, as I already mentioned in my >> previous message, problematic for two reasons.
First, the contract >> resides in a place far away from the function definition which might make >> it hard to find and maintain. Second, defining the contract in the unit >> test makes it impossible to put the contract in production or test it >> in a call from a different function. In contrast, introducing the contract >> as a decorator works perfectly fine in all three of the above-mentioned cases >> (smoke unit test, production, deeper testing). >> >> *Library. *Michael wrote: >>> I just want to point out that you don't need permission from anybody to >>> start a library. I think developing and popularizing a contracts library is >>> a reasonable goal -- but that's something you can start doing at any time >>> without waiting for consensus. >> >> >> As a matter of fact, I already implemented the library which covers most >> of the design-by-contract including the inheritance of the contracts. (The >> only missing parts are retrieval of "old" values in postconditions and loop >> invariants.) It's published on PyPI as the "icontract" package (the website is >> https://github.com/Parquery/icontract/). I'd like to gauge the interest >> before I/we even try to make a proposal to make it into the standard >> library. >> >> The discussions in this thread are an immense help for me to crystallize >> the points that would need to be addressed explicitly in such a proposal. >> If the proposal never comes about, it would at least flow into the >> documentation of the library and help me identify and explain better the >> important points. >> >> *Observation of contracts. *Michael wrote: >>> Your contracts are only checked when the function is evaluated, so you'd >>> still need to write that unit test that confirms the function actually >>> observes the contract. I don't think you necessarily get to reduce the >>> number of tests you'd need to write. >> >> >> Assuming that a contracts library is working correctly, there is no need >> to test whether a contract is observed or not -- you assume it is. The same >> applies to any testing library -- otherwise, you would have to test the >> tester, and so on *ad infinitum.* >> >> You still need to evaluate the function during testing, of course. But >> you don't need to document the contracts in your tests nor check that the >> postconditions are enforced -- you assume that they hold. For example, if >> you introduce a postcondition that the result of a function ends in a >> newline, there is no point in making a unit test, passing it some value and >> then checking that the result value ends in a newline in the test. >> Normally, it is sufficient to smoke-test the function. For example, you >> write a smoke unit test that gives a range of inputs to the function by >> using the hypothesis library and let the postconditions be automatically >> checked. You can view each postcondition as an additional test case in this >> scenario -- but one that is also embedded in the function signature and >> also applicable in production. >> >> Not all tests can be written like this, of course. Dealing with a complex >> function involves writing testing logic which is too complex to fit in >> postconditions. Contracts are not a panacea, but they absolve us from >> implementing trivial testing logic while keeping the important bits of the >> documentation close to the function and allowing for deeper tests. >> >> *Accurate contracts. *Michael wrote: >>> There's also no guarantee that your contracts will necessarily be >>> *accurate*.
It's entirely possible that your preconditions/postconditions >>> might hold for every test case you can think of, but end up failing when >>> running in production due to some edge case that you missed. >>> >> >> Unfortunately, there is no practical exit from this dilemma -- and it >> applies all the same to the tests. Who guarantees that the testing logic >> of the unit tests is correct? Unless you can formally prove that the code >> does what it should, there is no way around it. Whether you write contracts >> in the tests or in the decorators, it makes no difference to accuracy. >> >> If you missed testing an edge case, well, you missed it :). >> Design-by-contract does not make the code bug-free, but makes the bugs *much >> less likely* and *easier *to detect *early*. In practice, if there is a >> complex contract, I encapsulate its complex parts in separate functions >> (often with their own contracts), test these functions in isolation and >> then, once the tests pass and I'm confident about their correctness, put >> them into contracts. >> >> (And if you decide to disable those pre/post conditions to avoid the >>> efficiency hit, you're back to square zero.) >>> >> >> In practice, we at Parquery AG let the critical contracts run in >> production to ensure that the program blows up before it exercises >> undefined behavior in a critical situation. The informative violation >> errors of the icontract library help us to trace the bugs more easily since >> the relevant values are part of the error log. >> >> However, if some of the contracts are too inefficient to check in >> production, alas you have to turn them off and they can't be checked since >> they are inefficient. This seems like a tautology to me -- could you please >> clarify a bit what you meant? If a check is critical and inefficient at the >> same time then your problem is unsolvable (or at least ill-defined); >> contracts as well as any other approach cannot solve it. >> >> *Ergonomic assertions. *Michael wrote: >>> Or I guess to put it another way -- it seems what all of these contract >>> libraries are doing is basically adding syntax to try and make adding >>> asserts in various places more ergonomic, and not much else. I agree those >>> kinds of libraries can be useful, but I don't think they're necessarily >>> useful enough to be part of the standard library or to be a technique >>> Python programmers should automatically use by default. >> >> >> From the point of view of the *behavior, *that is exactly the case. The >> contracts (*e.g. *as function decorators) make postconditions and >> invariants possible in practice. As I already noted above, postconditions >> are very hard and invariants almost impossible to maintain manually without >> the contracts. This is even more so when contracts are inherited in a class >> hierarchy. >> >> Please do not underestimate another aspect of the contracts, namely the >> value of contracts as verifiable documentation. Please note that the only >> alternative that I observe in practice without design-by-contract is to >> write contracts in docstrings in *natural language*. Most often, they >> are just assumed, so the next programmer burns her fingers expecting the >> contracts to hold when they actually differ from the class or function >> description, but nobody bothered to update the docstrings (which is a >> common pitfall in any code base over a longer period of time).
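>> To make the "verifiable documentation" point concrete, here is a toy
>> example (invented for this message, not taken from any real code base):
>> the docstring merely promises, while the decorators enforce the promise on
>> every call:
>>
>> import icontract
>>
>> @icontract.pre(lambda left, right: len(left) > 0 and len(right) > 0)
>> @icontract.post(lambda result: len(result) > 0)
>> @icontract.post(lambda result: all(a <= b for a, b in zip(result, result[1:])))
>> def merge(left: list, right: list) -> list:
>>     """Merge two non-empty sorted lists into a single sorted list."""
>>     return sorted(left + right)
>>
>> If the implementation ever stops honoring what the docstring claims, the
>> postconditions blow up immediately instead of silently misleading the next
>> reader of the documentation.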
>> >> *Automatic generation of tests.* Michael wrote: >> >>> What might be interesting is somebody wrote a library that does >>> something more then just adding asserts. For example, one idea might be to >>> try hooking up a contracts library to hypothesis (or any other library that >>> does quickcheck-style testing). That might be a good way of partially >>> addressing the problems up above -- you write out your invariants, and a >>> testing library extracts that information and uses it to automatically >>> synthesize interesting test cases. >> >> >> This is the final goal and my main motivation to push for >> design-by-contract in Python :). There is a whole research community that >> tries to come up with automatic test generations, and contracts are of >> great utility there. Mind that generating the tests based on contracts is >> not trivial: hypothesis just picks elements for each input independently >> which is a much easier problem. However, preconditions can define how the >> arguments are *related*. Assume a function takes two numbers as >> arguments, x and y. If the precondition is y < x < (y + x) * 10, it is not >> trivial even for this simple example to come up with concrete samples of x >> and y unless you simply brute-force the problem by densely sampling all the >> numbers and checking the precondition. >> >> I see a chicken-and-egg problem here. If design-by-contract is not widely >> adopted, there will also be fewer or no libraries for automatic test >> generation. Honestly, I have absolutely no idea how you could approach >> automatic generation of test cases without contracts (in one form or the >> other). For example, how could you automatically mock a class without >> knowing its invariants? >> >> Since generating test cases for functions with non-trivial contracts is >> hard (and involves collaboration of many people), I don't expect anybody to >> start even thinking about it if the tool can only be applied to almost >> anywhere due to lack of contracts. Formal proofs and static analysis are >> even harder beasts to tame -- and I'd say the argument holds true for them >> even more. >> >> David and Michael, thank you again for your comments! I welcome very much >> your opinion and any follow-ups as well as from other participants on this >> mail list. >> >> Cheers, >> Marko >> >> On Sat, 15 Sep 2018 at 10:42, Michael Lee >> wrote: >> >>> I just want to point out that you don't need permission from anybody to >>> start a library. I think developing and popularizing a contracts library is >>> a reasonable goal -- but that's something you can start doing at any time >>> without waiting for consensus. >>> >>> And if it gets popular enough, maybe it'll be added to the standard >>> library in some form. That's what happened with attrs, iirc -- it got >>> fairly popular and demonstrated there was an unfilled niche, and so Python >>> acquired dataclasses.. >>> >>> >>> The contracts make merely tests obsolete that test that the function or >>>> class actually observes the contracts. >>>> >>> >>> Is this actually the case? Your contracts are only checked when the >>> function is evaluated, so you'd still need to write that unit test that >>> confirms the function actually observes the contract. I don't think you >>> necessarily get to reduce the number of tests you'd need to write. 
>>> >>> >>> Please let me know what points *do not *convince you that Python needs >>>> contracts >>>> >>> >>> While I agree that contracts are a useful tool, I don't think they're >>> going to be necessarily useful for *all* Python programmers. For example, >>> contracts aren't particularly useful if you're writing fairly >>> straightforward code with relatively simple invariants. >>> >>> I'm also not convinced that libraries where contracts are checked >>> specifically *at runtime* actually give you that much added power and >>> impact. For example, you still need to write a decent number of unit tests >>> to make sure your contracts are being upheld (unless you plan on checking >>> this by just deploying your code and letting it run, which seems >>> suboptimal). There's also no guarantee that your contracts will necessarily >>> be *accurate*. It's entirely possible that your >>> preconditions/postconditions might hold for every test case you can think >>> of, but end up failing when running in production due to some edge case >>> that you missed. (And if you decide to disable those pre/post conditions to >>> avoid the efficiency hit, you're back to square zero.) >>> >>> Or I guess to put it another way -- it seems what all of these contract >>> libraries are doing is basically adding syntax to try and make adding >>> asserts in various places more ergonomic, and not much else. I agree those >>> kinds of libraries can be useful, but I don't think they're necessarily >>> useful enough to be part of the standard library or to be a technique >>> Python programmers should automatically use by default. >>> >>> What might be interesting is somebody wrote a library that does >>> something more then just adding asserts. For example, one idea might be to >>> try hooking up a contracts library to hypothesis (or any other library that >>> does quickcheck-style testing). That might be a good way of partially >>> addressing the problems up above -- you write out your invariants, and a >>> testing library extracts that information and uses it to automatically >>> synthesize interesting test cases. >>> >>> (And of course, what would be very cool is if the contracts could be >>> verified statically like you can do in languages like dafny -- that way, >>> you genuinely would be able to avoid writing many kinds of tests and could >>> have confidence your contracts are upheld. But I understanding implementing >>> such verifiers are extremely challenging and would probably have too-steep >>> of a learning curve to be usable by most people anyways.) >>> >>> -- Michael >>> >>> >>> >>> On Fri, Sep 14, 2018 at 11:51 PM, Marko Ristin-Kaufmann < >>> marko.ristin at gmail.com> wrote: >>> >>>> Hi, >>>> Let me make a couple of practical examples from the work-in-progress ( >>>> https://github.com/Parquery/pypackagery, branch >>>> mristin/initial-version) to illustrate again the usefulness of the >>>> contracts and why they are, in my opinion, superior to assertions and unit >>>> tests. >>>> >>>> What follows is a list of function signatures decorated with contracts >>>> from pypackagery library preceded by a human-readable description of the >>>> contracts. >>>> >>>> The invariants tell us what format to expect from the related string >>>> properties. 
>>>> >>>> @icontract.inv(lambda self: self.name.strip() == self.name) >>>> @icontract.inv(lambda self: self.line.endswith("\n")) >>>> class Requirement: >>>> """Represent a requirement in requirements.txt.""" >>>> >>>> def __init__(self, name: str, line: str) -> None: >>>> """ >>>> Initialize. >>>> >>>> :param name: package name >>>> :param line: line in the requirements.txt file >>>> """ >>>> ... >>>> >>>> The postcondition tells us that the resulting map keys the values on >>>> their name property. >>>> >>>> @icontract.post(lambda result: all(val.name == key for key, val in result.items())) >>>> def parse_requirements(text: str, filename: str = '') -> Mapping[str, Requirement]: >>>> """ >>>> Parse requirements file and return package name -> package requirement as in requirements.txt >>>> >>>> :param text: content of the ``requirements.txt`` >>>> :param filename: where we got the ``requirements.txt`` from (URL or path) >>>> :return: name of the requirement (*i.e.* pip package) -> parsed requirement >>>> """ >>>> ... >>>> >>>> >>>> The postcondition ensures that the resulting list contains only unique >>>> elements. Mind that if you returned a set, the order would have been lost. >>>> >>>> @icontract.post(lambda result: len(result) == len(set(result)), enabled=icontract.SLOW) >>>> def missing_requirements(module_to_requirement: Mapping[str, str], >>>> requirements: Mapping[str, Requirement]) -> List[str]: >>>> """ >>>> List requirements from module_to_requirement missing in the ``requirements``. >>>> >>>> :param module_to_requirement: parsed ``module_to_requiremnt.tsv`` >>>> :param requirements: parsed ``requirements.txt`` >>>> :return: list of requirement names >>>> """ >>>> ... >>>> >>>> Here is a bit more complex example. >>>> - The precondition A requires that all the supplied relative paths >>>> (rel_paths) are indeed relative (as opposed to absolute). >>>> - The postcondition B ensures that the initial set of paths (given in >>>> rel_paths) is included in the results. >>>> - The postcondition C ensures that the requirements in the results are >>>> the subset of the given requirements. >>>> - The precondition D requires that there are no missing requirements (*i.e. >>>> *that each requirement in the given module_to_requirement is also >>>> defined in the given requirements). >>>> >>>> @icontract.pre(lambda rel_paths: all(rel_pth.root == "" for rel_pth in rel_paths)) # A >>>> @icontract.post( >>>> lambda rel_paths, result: all(pth in result.rel_paths for pth in rel_paths), >>>> enabled=icontract.SLOW, >>>> description="Initial relative paths included") # B >>>> @icontract.post( >>>> lambda requirements, result: all(req.name in requirements for req in result.requirements), >>>> enabled=icontract.SLOW) # C >>>> @icontract.pre( >>>> lambda requirements, module_to_requirement: missing_requirements(module_to_requirement, requirements) == [], >>>> enabled=icontract.SLOW) # D >>>> def collect_dependency_graph(root_dir: pathlib.Path, rel_paths: List[pathlib.Path], >>>> requirements: Mapping[str, Requirement], >>>> module_to_requirement: Mapping[str, str]) -> Package: >>>> >>>> """ >>>> Collect the dependency graph of the initial set of python files from the code base. >>>> >>>> :param root_dir: root directory of the codebase such as "/home/marko/workspace/pqry/production/src/py" >>>> :param rel_paths: initial set of python files that we want to package. These paths are relative to root_dir. 
>>>> :param requirements: requirements of the whole code base, mapped by package name >>>> :param module_to_requirement: module to requirement correspondence of the whole code base >>>> :return: resolved depedendency graph including the given initial relative paths, >>>> """ >>>> >>>> I hope these examples convince you (at least a little bit :-)) that >>>> contracts are easier and clearer to write than asserts. As noted before in >>>> this thread, you can have the same *behavior* with asserts as long as >>>> you don't need to inherit the contracts. But the contract decorators make >>>> it very explicit what conditions should hold *without* having to look >>>> into the implementation. Moreover, it is very hard to ensure the >>>> postconditions with asserts as soon as you have a complex control flow since >>>> you would need to duplicate the assert at every return statement. (You >>>> could implement a context manager that ensures the postconditions, but a >>>> context manager is not more readable than decorators and you have to >>>> duplicate them as documentation in the docstring). >>>> >>>> In my view, contracts are also superior to many kinds of tests. As the >>>> contracts are *always* enforced, they also enforce the correctness >>>> throughout the program execution whereas the unit tests and doctests only >>>> cover a list of selected cases. Furthermore, writing the contracts in these >>>> examples as doctests or unit tests would escape the attention of most less >>>> experienced programmers which are not used to read unit tests as >>>> documentation. Finally, these unit tests would be much harder to read than >>>> the decorators (*e.g.*, the unit test would supply invalid arguments >>>> and then check for ValueError which is already a much more convoluted piece >>>> of code than the preconditions and postconditions as decorators. Such >>>> testing code also lives in a file separate from the original implementation >>>> making it much harder to locate and maintain). >>>> >>>> Mind that the contracts *do not* *replace* the unit tests or the >>>> doctests. The contracts make merely tests obsolete that test that the >>>> function or class actually observes the contracts. Design-by-contract helps >>>> you skip those tests and focus on the more complex ones that test the >>>> behavior. Another positive effect of the contracts is that they make your >>>> tests deeper: if you specified the contracts throughout the code base, a >>>> test of a function that calls other functions in its implementation will >>>> also make sure that all the contracts of that other functions hold. This >>>> can be difficult to implement with standard unit test frameworks. >>>> >>>> Another aspect of the design-by-contract, which is IMO ignored quite >>>> often, is the educational one. Contracts force the programmer to actually >>>> sit down and think *formally* about the inputs and the outputs >>>> (hopefully?) *before* she starts to implement a function. Since many >>>> schools use Python to teach programming (especially at high school level), >>>> I imagine writing contracts of a function to be a very good exercise in >>>> formal thinking for the students. >>>> >>>> Please let me know what points *do not *convince you that Python needs >>>> contracts (in whatever form -- be it as a standard library, be it as a >>>> language construct, be it as a widely adopted and collectively maintained >>>> third-party library). I would be very glad to address these points in my >>>> next message(s). 
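>>>> To illustrate how this plays out in a test suite (the test below is
>>>> hypothetical -- the requirement lines and the expected keys are only
>>>> assumptions for the sake of the example), a plain smoke test suffices,
>>>> because the postcondition of parse_requirements above is checked
>>>> automatically during the call:
>>>>
>>>> def test_parse_requirements_smoke() -> None:
>>>>     text = "numpy==1.15.1\nrequests==2.19.1\n"
>>>>     reqs = parse_requirements(text=text, filename="requirements.txt")
>>>>
>>>>     # There is no need to assert that every key equals the requirement's
>>>>     # name -- the @icontract.post decorator already checked that when the
>>>>     # function returned. The test only exercises the behavior we care about.
>>>>     assert "numpy" in reqs and "requests" in reqs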
>>>> >>>> Cheers, >>>> Marko >>>> >>>> _______________________________________________ >>>> Python-ideas mailing list >>>> Python-ideas at python.org >>>> https://mail.python.org/mailman/listinfo/python-ideas >>>> Code of Conduct: http://python.org/psf/codeofconduct/ >>>> >>>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From wes.turner at gmail.com Sat Sep 22 05:46:47 2018 From: wes.turner at gmail.com (Wes Turner) Date: Sat, 22 Sep 2018 05:46:47 -0400 Subject: [Python-ideas] PEPs: Theory of operation [was: Moving to another forum system ...] In-Reply-To: <23461.63786.966966.519896@turnbull.sk.tsukuba.ac.jp> References: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> <23459.25799.174825.591496@turnbull.sk.tsukuba.ac.jp> <93AF02F1-CE05-4D91-B25C-C0D6DC1FC73D@killingar.net> <23461.63786.966966.519896@turnbull.sk.tsukuba.ac.jp> Message-ID: Process suggestions that could minimize non-BDFL's BDFL legwork: * https://github.com/python/peps * https://github.com/pypa/interoperability-peps * Use GitHub reactions for voting on BDFL delegates, PEP final approval, and PEP sub issues? * Specify a voting deadline? * How to make a quorum call? * Add '@core/team' as reviewers for every PEP? * Link to the mailing list thread(s) at the top of the PR * [ ] Add unique message URLs to footers with mailman3 * What type of communications are better suited for mailing lists over PEP pull-requests and PEP code reviews? It seems like everything's fine, but I would have no idea, BTW [] https://en.wikipedia.org/wiki/Quorum_call On Saturday, September 22, 2018, Stephen J. Turnbull < turnbull.stephen.fw at u.tsukuba.ac.jp> wrote: > Executive summary: Writing a PEP is an inherently uncertain process. > Achieving "community consensus" is the goal of the process, not a > precondition. > > Anders Hovm?ller writes: > > > In general pep1 is frustratingly vague. Terms like ?community > > consensus? without defining community or what numbers would > > constitute a consensus are not fun to read as someone who doesn?t > > personally know anyone of the core devs. Further references to > > Guido are even more frustrating now that he?s bowed out. > > These terms have little to do with what a new PEP's proponent needs to > think about, though. A PEP-able proposal by definition involves > uncertainty. Nobody, not even Guido, can tell you in advance whether > a PEP will be accepted (for implementation). The PEP process is > rigorous enough that by the time you get close to needing consensus to > proceed, you'll know what it means. > > "Community consensus" is not a condition for *anything* in the PEP > process, except final acceptance. It is the *goal* of the process. > PEPs are approved (for publication) by default; the only requirement > is editorial completeness. PEPs are needed for two reasons: (1) to > get the input of the community, both highly competent engineers for > implementation and a variety of users for requirements, to refine a > complex proposal or one with far-reaching implications for the > language, and/or (2) to build a consensus for implementation. Either > way, by definition the outcome is unclear at the beginning. > > If your concern about "consensus" is that you want to know whether > you're likely to get to consensus, and an accepted PEP, ask somebody > who seems sympathetic and experienced enough to know about what it > looks like on the list when a PEP is going to succeed. Anything > PEP-able is sufficiently unclear that rules can't be given in PEP 1. 
> It is possible only to say that Python is now very mature, and there's > a strong conservative bias against change. That doesn't mean there > aren't changes: Python attracts a lot of feature proposals, so the > rate of change isn't slowing although the acceptance rate is declining > gradually. > > "Consensus" is never defined by numbers in the English language, and > it does not mean "unanimity". In PEP 1, it means that some people > agree, most people don't disagree, and even if a senior person > disagrees, they're willing to go along with the "sense of the > community". As that adjective "senior" implies, some people count > more to the consensus than others. Usually when I write "senior" I'm > referring to core developers (committers), but here there > people who are "senior" enough despite not having commit bits.[1] > > "The community" is not well defined, and it can't be, short of a > doctoral dissertation in anthropology. The relevant channels are > open-participation, some people speak for themselves, some people are > "official" representatives of important constituencies such as the > leaders of large independent projects or alternative implementations, > and some people have acquired sufficient reputation to be considered > representative of a group of people (especially when other members of > the group rarely participate in the dev lists but for some reason are > considered important to the community -- I'm thinking in particular of > sysadmins and devops, and the problems we can cause them by messing > with packaging and installation). > > References to the BDFL are, of course, in limbo. AFAIK we don't have > one at the moment. Until we do, any PEPs will presumably be accepted > either by a self-nominated BDFL-Delegate acceptable to the core devs, > or by an ad hoc committee of interested core devs, and that part of > PEP 1 can't be usefully updated yet. This is not a weakness of the > Python project, IMO. Rather, the fact that, despite a sort of > constitutional crisis, the whole process is continuing pretty much as > usual shows its strength. > > This is possible because the BDFL is not, and has not been for many > years, a "hands-on" manager. It's true that where a proposal affects > his own "development *in* Python", he's likely to work closely with a > proponent, off- and on-list, or even *be* the proponent. Of course > such proposals are more likely to be approved, and a few community > members have pushed back on that because it appears undemocratic. But > the general reaction is "maybe 'Although that way may not be obvious > at first unless you're Dutch' applies to me in such cases!" For most > proposals, he's "just" a very senior developer whose comments are > important because he's a great developer, but he is easily swayed by > the sense of the community. Bottom line: except in the rare case > where your proposal directly affects the BDFL's own coding, the BDFL's > now-traditional role is to declare that consensus has been achieved, > postpone the PEP because it's clear that consensus is not forming, or > in rare cases, make a choice despite the lack of consensus. > > But none of this is really of importance to a PEP proponent > ("champion" in the terminology of PEP 1). PEP 1 is quite specific > about the required components of the document, and many points of > formatting and style. Accept the uncertainty, and do what you need to > do to meet those requirements, that's all there is to it. 
If the > community wants more, or wants changes, it will tell you, either as a > demand about style or missing content from an editor or as a technical > comment on the list. Whether you accept those technical comments is > up to you, but your star will rise far more rapidly if you are very > sensitive to claims that "this change to the PEP will a big > improvement for some significant consituency in the community". If > you want advice on whether the chance of acceptance is high enough to > be worth putting in more work, ask the BDFL-Delegate (or the BDFL if > she/he has "claimed" the PEP) where the proposal has an official > adjudicator, and if not, a senior core developer. > > If one doesn't know who the senior developers are yet, she should think > twice about whether she's ready to PEP anything. That's not a litmus > test; some PEPs have eventually succeeded though the proponent was new > to the project development process.[2] But it's a lot less painful if > you can tell who's likely to be able to sway the whole project one way > or the other. And as a matter of improving your proposal, who surely > does know more about what your proposal implies for the implementation > than you do, so you should strongly consider whether *you* are the one > who's missing something when you disagree with them. > > > Footnotes: > [1] They are familiar to some of the core developers as drivers of > important projects developing *in* Python. > > [2] The ones I can think of involve the same kind of person as > footnote 1, and a co-proponent who was a core developer. > > > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From boxed at killingar.net Sat Sep 22 05:55:12 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Sat, 22 Sep 2018 11:55:12 +0200 Subject: [Python-ideas] PEPs: Theory of operation [was: Moving to another forum system ...] In-Reply-To: <23461.63786.966966.519896@turnbull.sk.tsukuba.ac.jp> References: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> <23459.25799.174825.591496@turnbull.sk.tsukuba.ac.jp> <93AF02F1-CE05-4D91-B25C-C0D6DC1FC73D@killingar.net> <23461.63786.966966.519896@turnbull.sk.tsukuba.ac.jp> Message-ID: <16B03E30-B59B-4933-BC79-19E7E7E0EFB6@killingar.net> > If one doesn't know who the senior developers are yet, she should think > twice about whether she's ready to PEP anything. That's not a litmus > test; some PEPs have eventually succeeded though the proponent was new > to the project development process.[2] But it's a lot less painful if > you can tell who's likely to be able to sway the whole project one way > or the other. I think that entire paragraph made it sound even worse than what I wrote originally. It reads to an outsider as ?if you don?t know what?s wrong I?m not going to tell you?. > And as a matter of improving your proposal, who surely > does know more about what your proposal implies for the implementation > than you do, so you should strongly consider whether *you* are the one > who's missing something when you disagree with them. Is this me specifically or ?you? in the abstract? English isn?t great here. I personally supplied a complete implementation so don?t see how this applies to me? 
/ Anders From wes.turner at gmail.com Sat Sep 22 05:59:29 2018 From: wes.turner at gmail.com (Wes Turner) Date: Sat, 22 Sep 2018 05:59:29 -0400 Subject: [Python-ideas] PEPs: Theory of operation [was: Moving to another forum system ...] In-Reply-To: References: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> <23459.25799.174825.591496@turnbull.sk.tsukuba.ac.jp> <93AF02F1-CE05-4D91-B25C-C0D6DC1FC73D@killingar.net> <23461.63786.966966.519896@turnbull.sk.tsukuba.ac.jp> Message-ID: On Saturday, September 22, 2018, Wes Turner wrote: > > It seems like everything's fine, but I would have no idea, BTW > Would project boards be helpful for coordinating proposal status information, or extra process for something that's already working just fine? https://github.com/python/peps/projects https://github.com/pypa/interoperability-peps/projects TBH, I like Waffle.io boards, but core team may be more comfortable with GH projects with swimlanes? > [] https://en.wikipedia.org/wiki/Quorum_call > > On Saturday, September 22, 2018, Stephen J. Turnbull < > turnbull.stephen.fw at u.tsukuba.ac.jp> wrote: > >> Executive summary: Writing a PEP is an inherently uncertain process. >> Achieving "community consensus" is the goal of the process, not a >> precondition. >> >> Anders Hovm?ller writes: >> >> > In general pep1 is frustratingly vague. Terms like ?community >> > consensus? without defining community or what numbers would >> > constitute a consensus are not fun to read as someone who doesn?t >> > personally know anyone of the core devs. Further references to >> > Guido are even more frustrating now that he?s bowed out. >> >> These terms have little to do with what a new PEP's proponent needs to >> think about, though. A PEP-able proposal by definition involves >> uncertainty. Nobody, not even Guido, can tell you in advance whether >> a PEP will be accepted (for implementation). The PEP process is >> rigorous enough that by the time you get close to needing consensus to >> proceed, you'll know what it means. >> >> "Community consensus" is not a condition for *anything* in the PEP >> process, except final acceptance. It is the *goal* of the process. >> PEPs are approved (for publication) by default; the only requirement >> is editorial completeness. PEPs are needed for two reasons: (1) to >> get the input of the community, both highly competent engineers for >> implementation and a variety of users for requirements, to refine a >> complex proposal or one with far-reaching implications for the >> language, and/or (2) to build a consensus for implementation. Either >> way, by definition the outcome is unclear at the beginning. >> >> If your concern about "consensus" is that you want to know whether >> you're likely to get to consensus, and an accepted PEP, ask somebody >> who seems sympathetic and experienced enough to know about what it >> looks like on the list when a PEP is going to succeed. Anything >> PEP-able is sufficiently unclear that rules can't be given in PEP 1. >> It is possible only to say that Python is now very mature, and there's >> a strong conservative bias against change. That doesn't mean there >> aren't changes: Python attracts a lot of feature proposals, so the >> rate of change isn't slowing although the acceptance rate is declining >> gradually. >> >> "Consensus" is never defined by numbers in the English language, and >> it does not mean "unanimity". 
In PEP 1, it means that some people >> agree, most people don't disagree, and even if a senior person >> disagrees, they're willing to go along with the "sense of the >> community". As that adjective "senior" implies, some people count >> more to the consensus than others. Usually when I write "senior" I'm >> referring to core developers (committers), but here there >> people who are "senior" enough despite not having commit bits.[1] >> >> "The community" is not well defined, and it can't be, short of a >> doctoral dissertation in anthropology. The relevant channels are >> open-participation, some people speak for themselves, some people are >> "official" representatives of important constituencies such as the >> leaders of large independent projects or alternative implementations, >> and some people have acquired sufficient reputation to be considered >> representative of a group of people (especially when other members of >> the group rarely participate in the dev lists but for some reason are >> considered important to the community -- I'm thinking in particular of >> sysadmins and devops, and the problems we can cause them by messing >> with packaging and installation). >> >> References to the BDFL are, of course, in limbo. AFAIK we don't have >> one at the moment. Until we do, any PEPs will presumably be accepted >> either by a self-nominated BDFL-Delegate acceptable to the core devs, >> or by an ad hoc committee of interested core devs, and that part of >> PEP 1 can't be usefully updated yet. This is not a weakness of the >> Python project, IMO. Rather, the fact that, despite a sort of >> constitutional crisis, the whole process is continuing pretty much as >> usual shows its strength. >> >> This is possible because the BDFL is not, and has not been for many >> years, a "hands-on" manager. It's true that where a proposal affects >> his own "development *in* Python", he's likely to work closely with a >> proponent, off- and on-list, or even *be* the proponent. Of course >> such proposals are more likely to be approved, and a few community >> members have pushed back on that because it appears undemocratic. But >> the general reaction is "maybe 'Although that way may not be obvious >> at first unless you're Dutch' applies to me in such cases!" For most >> proposals, he's "just" a very senior developer whose comments are >> important because he's a great developer, but he is easily swayed by >> the sense of the community. Bottom line: except in the rare case >> where your proposal directly affects the BDFL's own coding, the BDFL's >> now-traditional role is to declare that consensus has been achieved, >> postpone the PEP because it's clear that consensus is not forming, or >> in rare cases, make a choice despite the lack of consensus. >> >> But none of this is really of importance to a PEP proponent >> ("champion" in the terminology of PEP 1). PEP 1 is quite specific >> about the required components of the document, and many points of >> formatting and style. Accept the uncertainty, and do what you need to >> do to meet those requirements, that's all there is to it. If the >> community wants more, or wants changes, it will tell you, either as a >> demand about style or missing content from an editor or as a technical >> comment on the list. Whether you accept those technical comments is >> up to you, but your star will rise far more rapidly if you are very >> sensitive to claims that "this change to the PEP will a big >> improvement for some significant consituency in the community". 
If >> you want advice on whether the chance of acceptance is high enough to >> be worth putting in more work, ask the BDFL-Delegate (or the BDFL if >> she/he has "claimed" the PEP) where the proposal has an official >> adjudicator, and if not, a senior core developer. >> >> If one doesn't know who the senior developers are yet, she should think >> twice about whether she's ready to PEP anything. That's not a litmus >> test; some PEPs have eventually succeeded though the proponent was new >> to the project development process.[2] But it's a lot less painful if >> you can tell who's likely to be able to sway the whole project one way >> or the other. And as a matter of improving your proposal, who surely >> does know more about what your proposal implies for the implementation >> than you do, so you should strongly consider whether *you* are the one >> who's missing something when you disagree with them. >> >> >> Footnotes: >> [1] They are familiar to some of the core developers as drivers of >> important projects developing *in* Python. >> >> [2] The ones I can think of involve the same kind of person as >> footnote 1, and a co-proponent who was a core developer. >> >> >> _______________________________________________ >> Python-ideas mailing list >> Python-ideas at python.org >> https://mail.python.org/mailman/listinfo/python-ideas >> Code of Conduct: http://python.org/psf/codeofconduct/ >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wes.turner at gmail.com Sat Sep 22 06:08:06 2018 From: wes.turner at gmail.com (Wes Turner) Date: Sat, 22 Sep 2018 06:08:06 -0400 Subject: [Python-ideas] PEPs: Theory of operation [was: Moving to another forum system ...] In-Reply-To: References: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> <23459.25799.174825.591496@turnbull.sk.tsukuba.ac.jp> <93AF02F1-CE05-4D91-B25C-C0D6DC1FC73D@killingar.net> <23461.63786.966966.519896@turnbull.sk.tsukuba.ac.jp> Message-ID: Here are links to the Apache governance docs: https://www.apache.org/foundation/governance/#technical https://www.apache.org/foundation/governance/pmcs.html Which are the PSF docs for these exact same processes for open source governance? (In re: to transitioning from BDFL is not dead, but) https://devguide.python.org/#contributing https://devguide.python.org/experts/ - is there a different BDFL-delegate org chart, or would this be the page to add to and refer to? On Saturday, September 22, 2018, Wes Turner wrote: > > > On Saturday, September 22, 2018, Wes Turner wrote: > >> >> It seems like everything's fine, but I would have no idea, BTW >> > > Would project boards be helpful for coordinating proposal status > information, or extra process for something that's already working just > fine? > > https://github.com/python/peps/projects > > https://github.com/pypa/interoperability-peps/projects > > TBH, I like Waffle.io boards, but core team may be more comfortable with > GH projects with swimlanes? > > >> [] https://en.wikipedia.org/wiki/Quorum_call >> >> On Saturday, September 22, 2018, Stephen J. Turnbull < >> turnbull.stephen.fw at u.tsukuba.ac.jp> wrote: >> >>> Executive summary: Writing a PEP is an inherently uncertain process. >>> Achieving "community consensus" is the goal of the process, not a >>> precondition. >>> >>> Anders Hovm?ller writes: >>> >>> > In general pep1 is frustratingly vague. Terms like ?community >>> > consensus? 
without defining community or what numbers would >>> > constitute a consensus are not fun to read as someone who doesn?t >>> > personally know anyone of the core devs. Further references to >>> > Guido are even more frustrating now that he?s bowed out. >>> >>> These terms have little to do with what a new PEP's proponent needs to >>> think about, though. A PEP-able proposal by definition involves >>> uncertainty. Nobody, not even Guido, can tell you in advance whether >>> a PEP will be accepted (for implementation). The PEP process is >>> rigorous enough that by the time you get close to needing consensus to >>> proceed, you'll know what it means. >>> >>> "Community consensus" is not a condition for *anything* in the PEP >>> process, except final acceptance. It is the *goal* of the process. >>> PEPs are approved (for publication) by default; the only requirement >>> is editorial completeness. PEPs are needed for two reasons: (1) to >>> get the input of the community, both highly competent engineers for >>> implementation and a variety of users for requirements, to refine a >>> complex proposal or one with far-reaching implications for the >>> language, and/or (2) to build a consensus for implementation. Either >>> way, by definition the outcome is unclear at the beginning. >>> >>> If your concern about "consensus" is that you want to know whether >>> you're likely to get to consensus, and an accepted PEP, ask somebody >>> who seems sympathetic and experienced enough to know about what it >>> looks like on the list when a PEP is going to succeed. Anything >>> PEP-able is sufficiently unclear that rules can't be given in PEP 1. >>> It is possible only to say that Python is now very mature, and there's >>> a strong conservative bias against change. That doesn't mean there >>> aren't changes: Python attracts a lot of feature proposals, so the >>> rate of change isn't slowing although the acceptance rate is declining >>> gradually. >>> >>> "Consensus" is never defined by numbers in the English language, and >>> it does not mean "unanimity". In PEP 1, it means that some people >>> agree, most people don't disagree, and even if a senior person >>> disagrees, they're willing to go along with the "sense of the >>> community". As that adjective "senior" implies, some people count >>> more to the consensus than others. Usually when I write "senior" I'm >>> referring to core developers (committers), but here there >>> people who are "senior" enough despite not having commit bits.[1] >>> >>> "The community" is not well defined, and it can't be, short of a >>> doctoral dissertation in anthropology. The relevant channels are >>> open-participation, some people speak for themselves, some people are >>> "official" representatives of important constituencies such as the >>> leaders of large independent projects or alternative implementations, >>> and some people have acquired sufficient reputation to be considered >>> representative of a group of people (especially when other members of >>> the group rarely participate in the dev lists but for some reason are >>> considered important to the community -- I'm thinking in particular of >>> sysadmins and devops, and the problems we can cause them by messing >>> with packaging and installation). >>> >>> References to the BDFL are, of course, in limbo. AFAIK we don't have >>> one at the moment. 
Until we do, any PEPs will presumably be accepted >>> either by a self-nominated BDFL-Delegate acceptable to the core devs, >>> or by an ad hoc committee of interested core devs, and that part of >>> PEP 1 can't be usefully updated yet. This is not a weakness of the >>> Python project, IMO. Rather, the fact that, despite a sort of >>> constitutional crisis, the whole process is continuing pretty much as >>> usual shows its strength. >>> >>> This is possible because the BDFL is not, and has not been for many >>> years, a "hands-on" manager. It's true that where a proposal affects >>> his own "development *in* Python", he's likely to work closely with a >>> proponent, off- and on-list, or even *be* the proponent. Of course >>> such proposals are more likely to be approved, and a few community >>> members have pushed back on that because it appears undemocratic. But >>> the general reaction is "maybe 'Although that way may not be obvious >>> at first unless you're Dutch' applies to me in such cases!" For most >>> proposals, he's "just" a very senior developer whose comments are >>> important because he's a great developer, but he is easily swayed by >>> the sense of the community. Bottom line: except in the rare case >>> where your proposal directly affects the BDFL's own coding, the BDFL's >>> now-traditional role is to declare that consensus has been achieved, >>> postpone the PEP because it's clear that consensus is not forming, or >>> in rare cases, make a choice despite the lack of consensus. >>> >>> But none of this is really of importance to a PEP proponent >>> ("champion" in the terminology of PEP 1). PEP 1 is quite specific >>> about the required components of the document, and many points of >>> formatting and style. Accept the uncertainty, and do what you need to >>> do to meet those requirements, that's all there is to it. If the >>> community wants more, or wants changes, it will tell you, either as a >>> demand about style or missing content from an editor or as a technical >>> comment on the list. Whether you accept those technical comments is >>> up to you, but your star will rise far more rapidly if you are very >>> sensitive to claims that "this change to the PEP will a big >>> improvement for some significant consituency in the community". If >>> you want advice on whether the chance of acceptance is high enough to >>> be worth putting in more work, ask the BDFL-Delegate (or the BDFL if >>> she/he has "claimed" the PEP) where the proposal has an official >>> adjudicator, and if not, a senior core developer. >>> >>> If one doesn't know who the senior developers are yet, she should think >>> twice about whether she's ready to PEP anything. That's not a litmus >>> test; some PEPs have eventually succeeded though the proponent was new >>> to the project development process.[2] But it's a lot less painful if >>> you can tell who's likely to be able to sway the whole project one way >>> or the other. And as a matter of improving your proposal, who surely >>> does know more about what your proposal implies for the implementation >>> than you do, so you should strongly consider whether *you* are the one >>> who's missing something when you disagree with them. >>> >>> >>> Footnotes: >>> [1] They are familiar to some of the core developers as drivers of >>> important projects developing *in* Python. >>> >>> [2] The ones I can think of involve the same kind of person as >>> footnote 1, and a co-proponent who was a core developer. 
>>> >>> >>> _______________________________________________ >>> Python-ideas mailing list >>> Python-ideas at python.org >>> https://mail.python.org/mailman/listinfo/python-ideas >>> Code of Conduct: http://python.org/psf/codeofconduct/ >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From leebraid at gmail.com Sat Sep 22 07:53:09 2018 From: leebraid at gmail.com (Lee Braiden) Date: Sat, 22 Sep 2018 12:53:09 +0100 Subject: [Python-ideas] Proposal for an inplace else (?=) operator Message-ID: Could I get some feedback on this? I'd like to know if anyone thinks it might make it through the pep process before spending too much (more) time on it. That said, it seems valuable to me, and I'm willing to put in the time, of course, IF it has a chance. --------------------------- Problem: Due to (biggest python WTF) (Problem 1), which prevents actual default argument values being set in a function signature, many functions look like: > def teleport(from, to, passenger, hitchhiker=None, food_accessory=None, comfort_accessory=None): > if hitchhiker is None: > hitchhiker = Fly() > > if food_accessory is None: > food_accessory = Cheeseburger() > > if comfort_accessory is None: > comfort_accessory = Towel() > > ... This None checking and setting is unwieldy (Problem 2) boilerplate, which is responsible for many extra lines of code in python (Problem 3), and tends to distract from the real code (Problem 4) in a function. To reduce boilerplate, a ternary or binary expression can be used: > def teleport(from, to, passenger, hitchhiker=None, accessories=[]): > hitchhiker = hitchhiker if hitchhiker or Fly() # Existing Solution A > def teleport(from, to, passenger, hitchhiker=None, accessories=[]): > hitchhiker = hitchhiker or Fly() # Existing Solution B These help, but are still quite repetitive: * Existing Solution A is often avoided simply because many Pythonistas dislike tenery expressions (perhaps due to readability or hidden code branch concerns), and can quickly become unwieldy when the new value (Fly()) is a more complex expression, such as a list comprehension: > def teleport(from, to, passenger, hitchhiker=None, accessories=[]): > hitchhiker = hitchhiker if hitchhiker or filter(lambda h: not h.already_hitching(), available_hitchhikers)[0] * Existing Solution B is less unwieldy (solving Problem 2), yet still suffers from repetition (Problems 2, 3, and 4). In a similar scenario, when we want to populate an empty list (say, accesories), we could write: > accessories |= [Cheeseburger(), Towel()] # Almost-Solution C However, this is not actually a solution, because: * The inplace-or (|=) operator is not provided for None in python: > food_accessor = None > food_accessory |= Cheeseburger() Traceback (most recent call last): File "", line 1, in TypeError: unsupported operand type(s) for |=: 'NoneType' and 'Cheeseburger' This could be added, but would not solve the issue, because: * If an non-default (non-None) argument value WERE provided, we be would modifying the argument unintentionally: > class Larry: > def __ior__(self, other): > print("{} modified".format(self)) > > teleport(..., hitchhiker=Larry(), ...) <__main__.Larry object at 0x7f1ad9828be0> modified And so Problems 1,2,3, and 4 are compounded. 
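(A note for completeness: when None itself must remain a legal argument value, the usual workaround today is a module-private sentinel rather than None. Below is a rough sketch of that idiom, not part of the proposal itself; the sentinel name `_MISSING`, the `src`/`dst` parameter names, and the stub `Fly` class are only illustrative stand-ins for the example above:

    _MISSING = object()  # unique, module-private sentinel

    class Fly:  # stand-in for the accessory class used in the examples
        pass

    def teleport(src, dst, passenger, hitchhiker=_MISSING):
        # The sentinel is created once, but the real default is built on
        # every call, so mutable defaults stay fresh and an explicit
        # hitchhiker=None is passed through untouched.
        if hitchhiker is _MISSING:
            hitchhiker = Fly()
        return src, dst, passenger, hitchhiker

This sidesteps the unintended-mutation problem above, but it is still one `if` block per argument, which is exactly the boilerplate the proposal below aims to remove.)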
Proposal: The addition of a ?= operator could provide an elegant solution: > def teleport(from, to, hitchiker=None, food_accessory=None, comfort_accessory=None): > hitchhiker ?= Fly() > food_accessory ?= Cheeseburger() > comfort_accessory ?= Towel() In these examples, > a = None > a ?= b > c = [1, 2] > c ?= d Would be equivalent to (assuming ?= was called __ielse__): > class ExtendedNone(NoneType) > def __ielse__(self, other): > return other > > class ielse_list(list): > def __ielse__(self, other): > return self > > None = ExtendedNone() > > a = None > a = a.__ielse__(b) > > c = iff_list([1, 2]) > c = a.__ielse__(d) Although explicitly provided for list above, this ielse operator could be defined (again, as `return other` for NoneType only, but defaulted to `return self` for all other values of `a`, requiring very little implementation effort or intrusion into other types. Possible interaction with typing It may be also possible to define a ? suffix on function arguments, so that: > def func(a?): > a ?= [1,2] greatly shorting the current verbose form: > from typing import Optional > > def func(a = None : Optional[Any])): > a = [1,2] if a is None else a and equivalent to: > from typing import Optional > > def func(a: Optional[Any]): > a = a.__ielse__([1,2]) Possible alternatives: * The __ielse__ name is highly debatable. Alternatives such as __iif__ (inplace if), __imaybe__, __isomeor__ (borrowing from Rusts's Some/None terminology, and reflecting the intended use as in relation to None types, DB nulls, and similar void-like values). * The ? function argument suffix is not necessary to implement the core ?= proposal. * If the ? function argument suffix is implemented, implementation via typing.Optional is not necessary; it could also be simply implemented so that: > def f(a?): > ... is equivalent to: > def f(a=None): > ... --------------------------- Feedback would be much appreciated. -- Lee -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Sat Sep 22 08:24:31 2018 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 22 Sep 2018 13:24:31 +0100 Subject: [Python-ideas] PEPs: Theory of operation [was: Moving to another forum system ...] In-Reply-To: <16B03E30-B59B-4933-BC79-19E7E7E0EFB6@killingar.net> References: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> <23459.25799.174825.591496@turnbull.sk.tsukuba.ac.jp> <93AF02F1-CE05-4D91-B25C-C0D6DC1FC73D@killingar.net> <23461.63786.966966.519896@turnbull.sk.tsukuba.ac.jp> <16B03E30-B59B-4933-BC79-19E7E7E0EFB6@killingar.net> Message-ID: On Sat, 22 Sep 2018 at 10:56, Anders Hovm?ller wrote: > > > > > If one doesn't know who the senior developers are yet, she should think > > twice about whether she's ready to PEP anything. That's not a litmus > > test; some PEPs have eventually succeeded though the proponent was new > > to the project development process.[2] But it's a lot less painful if > > you can tell who's likely to be able to sway the whole project one way > > or the other. > > I think that entire paragraph made it sound even worse than what I wrote originally. It reads to an outsider as ?if you don?t know what?s wrong I?m not going to tell you?. > > > And as a matter of improving your proposal, who surely > > does know more about what your proposal implies for the implementation > > than you do, so you should strongly consider whether *you* are the one > > who's missing something when you disagree with them. > > Is this me specifically or ?you? in the abstract? 
English isn?t great here. > > I personally supplied a complete implementation so don?t see how this applies to me? > > / Anders > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ From p.f.moore at gmail.com Sat Sep 22 08:27:11 2018 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 22 Sep 2018 13:27:11 +0100 Subject: [Python-ideas] PEPs: Theory of operation [was: Moving to another forum system ...] References: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> <23459.25799.174825.591496@turnbull.sk.tsukuba.ac.jp> <93AF02F1-CE05-4D91-B25C-C0D6DC1FC73D@killingar.net> <23461.63786.966966.519896@turnbull.sk.tsukuba.ac.jp> <16B03E30-B59B-4933-BC79-19E7E7E0EFB6@killingar.net> Message-ID: Sorry, hit send too soon... On Sat, 22 Sep 2018 at 13:24, Paul Moore wrote: > > > On Sat, 22 Sep 2018 at 10:56, Anders Hovm?ller wrote: > > > > > > > > > If one doesn't know who the senior developers are yet, she should think > > > twice about whether she's ready to PEP anything. That's not a litmus > > > test; some PEPs have eventually succeeded though the proponent was new > > > to the project development process.[2] But it's a lot less painful if > > > you can tell who's likely to be able to sway the whole project one way > > > or the other. > > > > I think that entire paragraph made it sound even worse than what I wrote originally. It reads to an outsider as ?if you don?t know what?s wrong I?m not going to tell you?. More like, if you're not sufficiently familiar with the community or the language, you have some research to do to get to a point where your proposals are likely to be sufficiently well informed. People could simply tell you the facts, but that wouldn't help with the process of becoming familiar with the community/language. Yes, that's a barrier to people making proposals. But not all barriers to entry are bad - this one says, do your research in advance, which IMO is a bare minimum people should expect to need to do... Paul From bruce at leban.us Sat Sep 22 08:30:36 2018 From: bruce at leban.us (Bruce Leban) Date: Sat, 22 Sep 2018 05:30:36 -0700 Subject: [Python-ideas] Proposal for an inplace else (?=) operator In-Reply-To: References: Message-ID: On Saturday, September 22, 2018, Lee Braiden wrote: > > Proposal: > > The addition of a ?= operator could provide an elegant solution: > > > def teleport(from, to, hitchiker=None, food_accessory=None, > comfort_accessory=None): > > hitchhiker ?= Fly() > > food_accessory ?= Cheeseburger() > > comfort_accessory ?= Towel() > > > Would be equivalent to (assuming ?= was called __ielse__): > > > class ExtendedNone(NoneType) > > def __ielse__(self, other): > > return other > > > > class ielse_list(list): > > def __ielse__(self, other): > > return self > > > > None = ExtendedNone() > > This is attacking the workaround not the actual problem. The real problem is that default parameters are evaluated at function definition time not call time. Setting the default to None and then replacing it is a workaround to that problem (and one that doesn't work if None is an allowable value). If Python were going to address this problem, I think it better to do something like: def teleport(from, to, hitchiker => Fly(), accessory => Towel(hitchhiker)): etc. 
Where => specifies an expression thst is evaluated inside the function and Is approximately equivalent to your code except it works even if the caller passes None. (I don't know what the right syntax for this would be. I just picked => as something that is suggestive, not legal today and isn't ?= to avoid confusion with your proposal.) Note that I shortened your example and modified it slightly to show that I would have the order of the parameters be significant. The call to Towel can use the previous hitchiker parameter and it will do what you expect. Clearly that would work with your code; you just weren't esplicit about it. My alternative doesn't allow arbitrary None replacement, but I'm not sure that's a prevalent pattern other than in this case notwithstanding your examples. --- Bruce -- --- Bruce -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Sat Sep 22 08:30:37 2018 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 22 Sep 2018 13:30:37 +0100 Subject: [Python-ideas] Proposal for an inplace else (?=) operator In-Reply-To: References: Message-ID: On Sat, 22 Sep 2018 at 12:54, Lee Braiden wrote: > > Could I get some feedback on this? I'd like to know if anyone thinks it might make it through the pep process before spending too much (more) time on it. That said, it seems valuable to me, and I'm willing to put in the time, of course, IF it has a chance. There have been other proposals along these lines already. For a start, you should take a look at the "None-aware operators" PEP, and the various discussions in the list archives around that. If nothing else, your proposal would been to include a review of that one and an explanation of why yours is better. Paul From robertve92 at gmail.com Sat Sep 22 08:35:25 2018 From: robertve92 at gmail.com (Robert Vanden Eynde) Date: Sat, 22 Sep 2018 14:35:25 +0200 Subject: [Python-ideas] Proposal for an inplace else (?=) operator In-Reply-To: References: Message-ID: That's an idea that could be added to my thread "dialects of python" in order to compile some fancy or specific syntax to regular python. Le sam. 22 sept. 2018 ? 13:53, Lee Braiden a ?crit : > Could I get some feedback on this? I'd like to know if anyone thinks it > might make it through the pep process before spending too much (more) time > on it. That said, it seems valuable to me, and I'm willing to put in the > time, of course, IF it has a chance. > > --------------------------- > > Problem: > > Due to (biggest python WTF) (Problem 1), which prevents actual default > argument values being set in a function signature, many > functions look like: > > > def teleport(from, to, passenger, hitchhiker=None, > food_accessory=None, comfort_accessory=None): > > if hitchhiker is None: > > hitchhiker = Fly() > > > > if food_accessory is None: > > food_accessory = Cheeseburger() > > > > if comfort_accessory is None: > > comfort_accessory = Towel() > > > > ... > > This None checking and setting is unwieldy (Problem 2) boilerplate, > which is responsible for many extra lines of code in python > (Problem 3), and tends to distract from the real code (Problem 4) in a > function. 
> > To reduce boilerplate, a ternary or binary expression can be used: > > > def teleport(from, to, passenger, hitchhiker=None, > accessories=[]): > > hitchhiker = hitchhiker if hitchhiker or Fly() > # Existing Solution A > > > def teleport(from, to, passenger, hitchhiker=None, > accessories=[]): > > hitchhiker = hitchhiker or Fly() > # Existing Solution B > > These help, but are still quite repetitive: > > * Existing Solution A is often avoided simply because many Pythonistas > dislike tenery expressions (perhaps due to > readability or hidden code branch concerns), and can quickly become > unwieldy when the new value (Fly()) is a more > complex expression, such as a list comprehension: > > > def teleport(from, to, passenger, hitchhiker=None, > accessories=[]): > > hitchhiker = hitchhiker if hitchhiker or filter(lambda h: not > h.already_hitching(), available_hitchhikers)[0] > > * Existing Solution B is less unwieldy (solving Problem 2), yet still > suffers from repetition (Problems 2, 3, and 4). > > In a similar scenario, when we want to populate an empty list (say, > accesories), we could write: > > > accessories |= [Cheeseburger(), Towel()] > # Almost-Solution C > > However, this is not actually a solution, because: > > * The inplace-or (|=) operator is not provided for None in python: > > > food_accessor = None > > food_accessory |= Cheeseburger() > > Traceback (most recent call last): > File "", line 1, in > TypeError: unsupported operand type(s) for |=: 'NoneType' and > 'Cheeseburger' > > This could be added, but would not solve the issue, because: > > * If an non-default (non-None) argument value WERE provided, we be > would modifying > the argument unintentionally: > > > class Larry: > > def __ior__(self, other): > > print("{} modified".format(self)) > > > > teleport(..., hitchhiker=Larry(), ...) > <__main__.Larry object at 0x7f1ad9828be0> modified > > And so Problems 1,2,3, and 4 are compounded. > > Proposal: > > The addition of a ?= operator could provide an elegant solution: > > > def teleport(from, to, hitchiker=None, food_accessory=None, > comfort_accessory=None): > > hitchhiker ?= Fly() > > food_accessory ?= Cheeseburger() > > comfort_accessory ?= Towel() > > In these examples, > > > a = None > > a ?= b > > > c = [1, 2] > > c ?= d > > Would be equivalent to (assuming ?= was called __ielse__): > > > class ExtendedNone(NoneType) > > def __ielse__(self, other): > > return other > > > > class ielse_list(list): > > def __ielse__(self, other): > > return self > > > > None = ExtendedNone() > > > > a = None > > a = a.__ielse__(b) > > > > c = iff_list([1, 2]) > > c = a.__ielse__(d) > > Although explicitly provided for list above, this ielse operator could > be defined > (again, as `return other` for NoneType only, but defaulted to `return > self` for > all other values of `a`, requiring very little implementation effort > or intrusion > into other types. > > Possible interaction with typing > > It may be also possible to define a ? suffix on function arguments, so > that: > > > def func(a?): > > a ?= [1,2] > > greatly shorting the current verbose form: > > > from typing import Optional > > > > def func(a = None : Optional[Any])): > > a = [1,2] if a is None else a > > and equivalent to: > > > from typing import Optional > > > > def func(a: Optional[Any]): > > a = a.__ielse__([1,2]) > > Possible alternatives: > > * The __ielse__ name is highly debatable. 
Alternatives such as > __iif__ (inplace if), __imaybe__, __isomeor__ (borrowing from Rusts's > Some/None terminology, and reflecting the intended use as in relation to > None types, DB nulls, and similar void-like values). > > * The ? function argument suffix is not necessary to implement the > core ?= proposal. > > * If the ? function argument suffix is implemented, implementation > via typing.Optional is not necessary; it could also be simply implemented > so that: > > > def f(a?): > > ... > > is equivalent to: > > > def f(a=None): > > ... > > > --------------------------- > > Feedback would be much appreciated. > > -- > Lee > > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From boxed at killingar.net Sat Sep 22 08:52:15 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Sat, 22 Sep 2018 14:52:15 +0200 Subject: [Python-ideas] PEPs: Theory of operation [was: Moving to another forum system ...] In-Reply-To: References: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> <23459.25799.174825.591496@turnbull.sk.tsukuba.ac.jp> <93AF02F1-CE05-4D91-B25C-C0D6DC1FC73D@killingar.net> <23461.63786.966966.519896@turnbull.sk.tsukuba.ac.jp> <16B03E30-B59B-4933-BC79-19E7E7E0EFB6@killingar.net> Message-ID: <819A91CF-149A-4AB7-AE7A-61E608EBF88D@killingar.net> >>> >>> I think that entire paragraph made it sound even worse than what I wrote originally. It reads to an outsider as ?if you don?t know what?s wrong I?m not going to tell you?. > > More like, if you're not sufficiently familiar with the community or > the language, And now you made it sound even worse by insinuating that I don?t know the language and maybe I?m not a part of the community. / Anders From boxed at killingar.net Sat Sep 22 08:57:15 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Sat, 22 Sep 2018 14:57:15 +0200 Subject: [Python-ideas] Proposal for an inplace else (?=) operator In-Reply-To: References: Message-ID: <7F985877-644A-4D0A-81FB-D368A8A8F1F6@killingar.net> > If Python were going to address this problem, I think it better to do something like: > > def teleport(from, to, hitchiker => Fly(), accessory => Towel(hitchhiker)): > etc. > > Where => specifies an expression thst is evaluated inside the function and Is approximately equivalent to your code except it works even if the caller passes None. (I don't know what the right syntax for this would be. I just picked => as something that is suggestive, not legal today and isn't ?= to avoid confusion with your proposal.) I was gonna reply pretty much exactly this! I was gonna suggest this syntax: def foo(a=new {}): Which is a bit more explicit I think, but smells a bit C++. / Anders From guido at python.org Sat Sep 22 11:59:50 2018 From: guido at python.org (Guido van Rossum) Date: Sat, 22 Sep 2018 08:59:50 -0700 Subject: [Python-ideas] PEPs: Theory of operation [was: Moving to another forum system ...] 
In-Reply-To: <819A91CF-149A-4AB7-AE7A-61E608EBF88D@killingar.net> References: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> <23459.25799.174825.591496@turnbull.sk.tsukuba.ac.jp> <93AF02F1-CE05-4D91-B25C-C0D6DC1FC73D@killingar.net> <23461.63786.966966.519896@turnbull.sk.tsukuba.ac.jp> <16B03E30-B59B-4933-BC79-19E7E7E0EFB6@killingar.net> <819A91CF-149A-4AB7-AE7A-61E608EBF88D@killingar.net> Message-ID: On Sat, Sep 22, 2018 at 5:53 AM Anders Hovm?ller wrote: > > > >>> > >>> I think that entire paragraph made it sound even worse than what I > wrote originally. It reads to an outsider as ?if you don?t know what?s > wrong I?m not going to tell you?. > > > > More like, if you're not sufficiently familiar with the community or > > the language, > > And now you made it sound even worse by insinuating that I don?t know the > language and maybe I?m not a part of the community. > Anders, I'm sorry you feel that everyone is piling onto you. I don't think they intend to pick specifically on you at all. What Stephen and Paul are describing applies to everyone who wants to write a PEP. I think collectively we haven't spent enough time writing up guidelines for new PEP authors to make it possible for someone to start writing a PEP without asking questions about how to write a PEP. I think part of the problem is that every author has a different background -- some folks come with great technical and writing skills but without much experience with how debate works in the Python (core dev) community; others have experience using the language and interacting with the community and have good ideas but lack writing skills or understanding of the technicalities of parsers and interpreters. So instead of writing a complete guide to writing a PEP (and getting it approved), we have to answer questions and help prospective authors based on the text they show us. I have to admit that I've not followed the full context, but I recommend that you try to see that other posters in this thread are trying to help with kindness, not judging you or your skills. Good luck with your PEP, whatever it is about! -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicholas.chammas at gmail.com Sat Sep 22 12:03:54 2018 From: nicholas.chammas at gmail.com (Nicholas Chammas) Date: Sat, 22 Sep 2018 12:03:54 -0400 Subject: [Python-ideas] PEPs: Theory of operation [was: Moving to another forum system ...] In-Reply-To: <819A91CF-149A-4AB7-AE7A-61E608EBF88D@killingar.net> References: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> <23459.25799.174825.591496@turnbull.sk.tsukuba.ac.jp> <93AF02F1-CE05-4D91-B25C-C0D6DC1FC73D@killingar.net> <23461.63786.966966.519896@turnbull.sk.tsukuba.ac.jp> <16B03E30-B59B-4933-BC79-19E7E7E0EFB6@killingar.net> <819A91CF-149A-4AB7-AE7A-61E608EBF88D@killingar.net> Message-ID: On Sat, Sep 22, 2018 at 8:52 AM Anders Hovm?ller wrote: > >>> I think that entire paragraph made it sound even worse than what I > wrote originally. It reads to an outsider as ?if you don?t know what?s > wrong I?m not going to tell you?. > > > > More like, if you're not sufficiently familiar with the community or > > the language, > > And now you made it sound even worse by insinuating that I don?t know the > language and maybe I?m not a part of the community. Anders, I think you're reading too much into what Paul and Stephen are writing. To me it seems they are just explaining the landscape of how the PEP process typically plays out. 
I don't think their comments are insinuating anything. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ethan at stoneleaf.us Sat Sep 22 12:15:00 2018 From: ethan at stoneleaf.us (Ethan Furman) Date: Sat, 22 Sep 2018 09:15:00 -0700 Subject: [Python-ideas] PEPs: Theory of operation [was: Moving to another forum system ...] In-Reply-To: <819A91CF-149A-4AB7-AE7A-61E608EBF88D@killingar.net> References: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> <23459.25799.174825.591496@turnbull.sk.tsukuba.ac.jp> <93AF02F1-CE05-4D91-B25C-C0D6DC1FC73D@killingar.net> <23461.63786.966966.519896@turnbull.sk.tsukuba.ac.jp> <16B03E30-B59B-4933-BC79-19E7E7E0EFB6@killingar.net> <819A91CF-149A-4AB7-AE7A-61E608EBF88D@killingar.net> Message-ID: <27f3910d-807d-6ff4-862b-345e5ace6321@stoneleaf.us> On 09/22/2018 05:52 AM, Anders Hovm?ller wrote: > And now you made it sound even worse [...] Their use of the word "you" is "everybody who wants to write a PEP", not you "Anders Hovm?ller" specifically. (Isn't English a wonderful language? *sigh* ) -- ~Ethan~ From boxed at killingar.net Sat Sep 22 12:44:26 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Sat, 22 Sep 2018 18:44:26 +0200 Subject: [Python-ideas] PEPs: Theory of operation [was: Moving to another forum system ...] In-Reply-To: References: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> <23459.25799.174825.591496@turnbull.sk.tsukuba.ac.jp> <93AF02F1-CE05-4D91-B25C-C0D6DC1FC73D@killingar.net> <23461.63786.966966.519896@turnbull.sk.tsukuba.ac.jp> <16B03E30-B59B-4933-BC79-19E7E7E0EFB6@killingar.net> <819A91CF-149A-4AB7-AE7A-61E608EBF88D@killingar.net> Message-ID: <7229C3E4-E65B-4EE1-842F-251C10DE1D93@killingar.net> >> And now you made it sound even worse by insinuating that I don?t know the language and maybe I?m not a part of the community. > > Anders, I'm sorry you feel that everyone is piling onto you. Well a bit, but mostly I was just pointing out that the text I replied to wasn?t thought out and made it sound worse than it is (I think!). > I don't think they intend to pick specifically on you at all. Agreed. It?s also maybe a consequence of using English where ?thou?, ?you? and ?one? has been merged into one word, creating ambiguity. > What Stephen and Paul are describing applies to everyone who wants to write a PEP. Sure, but that there isn?t a list of people who form the deciding committee makes it very strange. That it applies to everyone doesn?t make it better, it makes it worse. > we have to answer questions and help prospective authors based on the text they show us. Also agreed. The fastest responses on this list tend to be the most hostile and least constructive also, so if you read it after the fact and don?t pay close attention to the chronology it looks nicer than how it feels posting here. > I have to admit that I've not followed the full context, but I recommend that you try to see that other posters in this thread are trying to help with kindness, not judging you or your skills. Good luck with your PEP, whatever it is about! Sure. But I?m also pointing out that they are being (accidentally?) brusque. I know I am all the time! I keep apologizing and trying to backpedal when I realize I wasn?t understood in the way I was attempting. / Anders -------------- next part -------------- An HTML attachment was scrubbed... URL: From turnbull.stephen.fw at u.tsukuba.ac.jp Sat Sep 22 17:00:35 2018 From: turnbull.stephen.fw at u.tsukuba.ac.jp (Stephen J. 
Turnbull) Date: Sun, 23 Sep 2018 06:00:35 +0900 Subject: [Python-ideas] PEPs: Theory of operation In-Reply-To: <16B03E30-B59B-4933-BC79-19E7E7E0EFB6@killingar.net> References: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> <23459.25799.174825.591496@turnbull.sk.tsukuba.ac.jp> <93AF02F1-CE05-4D91-B25C-C0D6DC1FC73D@killingar.net> <23461.63786.966966.519896@turnbull.sk.tsukuba.ac.jp> <16B03E30-B59B-4933-BC79-19E7E7E0EFB6@killingar.net> Message-ID: <23462.44403.641797.27777@turnbull.sk.tsukuba.ac.jp> Anders Hovm?ller writes: > > If one doesn't know who the senior developers are yet, she should > > think twice about whether she's ready to PEP anything. That's > > not a litmus test; some PEPs have eventually succeeded though the > > proponent was new to the project development process.[2] But it's > > a lot less painful if you can tell who's likely to be able to > > sway the whole project one way or the other. > > I think that entire paragraph made it sound even worse than what I > wrote originally. It reads to an outsider as ?if you don?t know > what?s wrong I?m not going to tell you?. "What's wrong" *with what*? Nothing in that paragraph implies that anything is wrong with anything. I wrote that post for your benefit *among others* but it's not about you. It's about how Python development makes decisions about whether to implement a proposal (specifically, PEPs) or not. Understanding how things work currently will help new contributors get their proposals implemented, or at least understand why those proposals weren't accepted. Unfortunately, there seems to be a lot of misunderstanding about these very basic processes, among a half-dozen or more newcomers who are posting about governance. I want to clear that up. Personally, I think Python governance is fine, but for those who don't, they should at least understand what it *is* before they start proposing modifications. > > And as a matter of improving your proposal, who surely does know > > more about what your proposal implies for the implementation than > > you do, so you should strongly consider whether *you* are the one > > who's missing something when you disagree with them. > > Is this me specifically or ?you? in the abstract? English isn?t > great here. Nothing in that post is about you, it's just that your post triggered mine, and a quote from your post was a convenient lead-in to a discussion of several aspects of the PEP process (and more generally the decision to implement a feature or not) that are pretty opaque to most newcomers. Regards, From jamtlu at gmail.com Sat Sep 22 14:27:38 2018 From: jamtlu at gmail.com (James Lu) Date: Sat, 22 Sep 2018 14:27:38 -0400 Subject: [Python-ideas] =?utf-8?q?JS=E2=80=99_governance_model_is_worth_i?= =?utf-8?q?nspecting?= Message-ID: <3AD40D9B-D9F4-4F25-95E7-8CFB4697D622@gmail.com> > To my mind, there is one very big reason we should be cautious about > adopting JS language-design policies, namely, that they have led to a > very, very poorly designed language. No doubt a good deal of that is > baggage from early stages in which JS had a poor to nonexistent language > design governance model. Nonetheless, the failure of JS to fix its > numerous fundamental flaws, and especially the rapid feature churn in > recent years, suggests to me that their model should be viewed with > skepticism. I disagree. The language is often very flexible and effective in its domains. 
I don?t know what you mean by ?rapid feature churn?, churn usually means existing features are superseded by newer ones- this isn?t the case. JS is much more nuanced than it appears on the surface. It?s understandable that those with only a glossing of JS look down on it, because JS really was a primitive language a few years ago. You can learn about JS in depth with the poorly-named ?You don?t know JS? free online book. From boxed at killingar.net Sat Sep 22 19:03:13 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Sun, 23 Sep 2018 01:03:13 +0200 Subject: [Python-ideas] PEPs: Theory of operation In-Reply-To: <23462.44403.641797.27777@turnbull.sk.tsukuba.ac.jp> References: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> <23459.25799.174825.591496@turnbull.sk.tsukuba.ac.jp> <93AF02F1-CE05-4D91-B25C-C0D6DC1FC73D@killingar.net> <23461.63786.966966.519896@turnbull.sk.tsukuba.ac.jp> <16B03E30-B59B-4933-BC79-19E7E7E0EFB6@killingar.net> <23462.44403.641797.27777@turnbull.sk.tsukuba.ac.jp> Message-ID: >>> If one doesn't know who the senior developers are yet, she should >>> think twice about whether she's ready to PEP anything. That's >>> not a litmus test; some PEPs have eventually succeeded though the >>> proponent was new to the project development process.[2] But it's >>> a lot less painful if you can tell who's likely to be able to >>> sway the whole project one way or the other. >> >> I think that entire paragraph made it sound even worse than what I >> wrote originally. It reads to an outsider as ?if you don?t know >> what?s wrong I?m not going to tell you?. > > "What's wrong" *with what*? Nothing in that paragraph implies that > anything is wrong with anything. Sorry. I was vague. Let me try to explain what I meant. It?s a common trope that people who are bad at relationships expect their partners to be mind readers. This is exemplified by the expression I quoted: ?if you don?t know what?s wrong, I?m not going to tell you?. This is funny/sad because it is precisely when someone doesn?t know something that it is important to tell them instead of clamming up and refuse further information. Me and others have pointed out that we can?t figure out and the docs don?t say how the change process happens. The response to this was >>> If one doesn't know who the senior developers are yet, she should >>> think twice about whether she's ready to PEP anything I can?t see how this is different from the trope. Is there a committee? Then why not just name it? How does one figure this out? Should I just do some statistics on the git repo and surmise that the top committers are the committee? Do I have to read the commit log and all mails in this mailing list and Python-dev the last 10 years? Do I need a time machine so I can attend sprints and pycons and core developer meetings that have already happened? Is there a secret handshake? I am being a bit silly with these suggestions but it?s to point out that I see no way to exclude any of those possibilities from PEP1 or your mails. In fact they seem to me less like silly examples now than before your mails. >> Is this me specifically or ?you? in the abstract? English isn?t >> great here. > > Nothing in that post is about you, it's just that your post triggered > mine, and a quote from your post was a convenient lead-in to a > discussion of several aspects of the PEP process (and more generally > the decision to implement a feature or not) that are pretty opaque to > most newcomers. Good. Thanks for the clarification. 
/ Anders -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike at selik.org Sat Sep 22 21:31:21 2018 From: mike at selik.org (Michael Selik) Date: Sat, 22 Sep 2018 18:31:21 -0700 Subject: [Python-ideas] PEPs: Theory of operation In-Reply-To: References: <859A4CFF-23ED-4329-9FAA-DA59AA65FC80@gmail.com> <23459.25799.174825.591496@turnbull.sk.tsukuba.ac.jp> <93AF02F1-CE05-4D91-B25C-C0D6DC1FC73D@killingar.net> <23461.63786.966966.519896@turnbull.sk.tsukuba.ac.jp> <16B03E30-B59B-4933-BC79-19E7E7E0EFB6@killingar.net> <23462.44403.641797.27777@turnbull.sk.tsukuba.ac.jp> Message-ID: On Sat, Sep 22, 2018 at 2:00 PM Stephen J. Turnbull wrote: > If one doesn't know who the senior developers are yet, she should > think twice about whether she's ready to PEP anything. On Sat, Sep 22, 2018 at 4:03 PM Anders Hovm?ller wrote: > Is there a committee? Then why not just name it? > How does one figure this out? I sympathise, because I think the documentation could be more clear. There is a list of people with "push" privileges and a little bit of why they gained that privilege, or at least from whom. There's also a list of their interests, which corresponds somewhat to which modules they're "in charge of". https://devguide.python.org/developers/ https://devguide.python.org/experts/ Note that there isn't a listing for syntax or builtins. Perhaps that should be remedied, but it'd require a volunteer. Further, now that Guido has abdicated, the process for PEP approval is uncertain. From mike at selik.org Sat Sep 22 21:56:25 2018 From: mike at selik.org (Michael Selik) Date: Sat, 22 Sep 2018 18:56:25 -0700 Subject: [Python-ideas] Proposal for an inplace else (?=) operator In-Reply-To: References: Message-ID: On Sat, Sep 22, 2018 at 4:53 AM Lee Braiden wrote: > Problem: [Python] prevents actual default argument values being set in a function signature > Feedback would be much appreciated. You'd be more convincing if you stated the problem more precisely. Python supports default values for function arguments. Regardless, I'll echo the discussion of the None-aware operators PEP: it's quite simple to write ``if arg is None: arg = ...`` in the function body. I don't find the use case compelling. From marko.ristin at gmail.com Sun Sep 23 01:09:37 2018 From: marko.ristin at gmail.com (Marko Ristin-Kaufmann) Date: Sun, 23 Sep 2018 07:09:37 +0200 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? Message-ID: Hi, (I'd like to fork from a previous thread, "Pre-conditions and post-conditions", since it got long and we started discussing a couple of different things. Let's put the general discussion related to design-by-contract in this thread and I'll spawn another thread for the discussion about the concrete implementation of a design-by-contract library in Python.) After the discussion we had on the list and after browsing the internet a bit, I'm still puzzled why design-by-contract was not more widely adopted and why so few languages support it. 
Please have a look at these articles and answers:

- https://www.leadingagile.com/2018/05/design-by-contract-part-one/
- https://ask.slashdot.org/story/07/03/10/009237/why-is-design-by-contract-not-more-popular -- this one is from 2007, but represents well IMO the way people discuss it
- https://stackoverflow.com/questions/481312/why-is-design-by-contract-not-so-popular-compared-to-test-driven-development and this answer in particular https://stackoverflow.com/a/28680756/1600678
- https://softwareengineering.stackexchange.com/questions/128717/why-is-there-such-limited-support-for-design-by-contract-in-most-modern-programm

I did see that there are a lot of misconceptions about it ("simple asserts", "developer overhead", "needs upfront design", "same as unit testing"). This is probably the case with any novel concept that people are not familiar with. However, what does puzzle me is that once the misconceptions are rectified ("it's not simple asserts", "the development is actually faster", "no need for upfront design", "not orthogonal, but dbc + unit testing is better than just unit testing"), the concept is still discarded. After properly reading about design-by-contract and getting deeper into the topic, there is no rational argument against it and the benefits are obvious. And still, people just wave their hands and continue without formalizing the contracts in the code, and keep on writing them in prose descriptions instead.

Why is that so? I'm completely at a loss about that -- especially about the historical reasons (some mentioned that design-by-contract did not take off since Bertrand Meyer holds the trademark on the term and because of his character. Is that the reason?).

One explanation that seems plausible to me is that many programmers actually have a hard time with formalization and logic rules (e.g., implication, quantifiers), maybe due to missing education (e.g., many programmers came to programming from other, less formal fields). It is hence easier for them to write in human language, and it takes substantial cognitive load to formalize those thoughts in code. Does that explain it?

What do you think? What is the missing part of the puzzle?

Cheers,
Marko

From marko.ristin at gmail.com Sun Sep 23 02:04:49 2018
From: marko.ristin at gmail.com (Marko Ristin-Kaufmann)
Date: Sun, 23 Sep 2018 08:04:49 +0200
Subject: [Python-ideas] "old" values in postconditions
Message-ID:

Hi,
(I'd like to fork from a previous thread, "Pre-conditions and post-conditions", since it got long and we started discussing a couple of different things. Let's discuss in this thread the implementation of a library for design-by-contract and how to push it forward to hopefully add it to the standard library one day.)

For those unfamiliar with contracts and the current state of the discussion in the previous thread, here's a short summary. The discussion started by me inquiring about the possibility of adding design-by-contract concepts into the core language. The idea was rejected by the participants mainly because they thought that the benefits of the feature do not justify its costs. This is quite debatable and seems to reflect many a discussion about design-by-contract in general. Please see the other thread, "Why is design-by-contract not widely adopted?" if you are interested in that debate.
We (a colleague of mine and I) decided to implement a library to bring design-by-contract to Python since we don't believe that the concept will make it into the core language anytime soon and we needed badly a tool to facilitate our work with a growing code base. The library is available at http://github.com/Parquery/icontract. The hope is to polish it so that the wider community could use it and once the quality is high enough, make a proposal to add it to the standard Python libraries. We do need a standard library for contracts, otherwise projects with *conflicting* contract libraries can not integrate (*e.g., *the contracts can not be inherited between two different contract libraries). So far, the most important bits have been implemented in icontract: - Preconditions, postconditions, class invariants - Inheritance of the contracts (including strengthening and weakening of the inherited contracts) - Informative violation messages (including information about the values involved in the contract condition) - Sphinx extension to include contracts in the automatically generated documentation (sphinx-icontract) - Linter to statically check that the arguments of the conditions are correct (pyicontract-lint) We are successfully using it in our code base and have been quite happy about the implementation so far. There is one bit still missing: accessing "old" values in the postcondition (*i.e., *shallow copies of the values prior to the execution of the function). This feature is necessary in order to allow us to verify state transitions. For example, consider a new dictionary class that has "get" and "put" methods: from typing import Optional from icontract import post class NovelDict: def length(self)->int: ... def get(self, key: str) -> Optional[str]: ... @post(lambda self, key, value: self.get(key) == value) @post(lambda self, key: old(self.get(key)) is None and old(self.length()) + 1 == self.length(), "length increased with a new key") @post(lambda self, key: old(self.get(key)) is not None and old(self.length()) == self.length(), "length stable with an existing key") def put(self, key: str, value: str) -> None: ... How could we possible implement this "old" function? Here is my suggestion. I'd introduce a decorator "before" that would allow you to store whatever values in a dictionary object "old" (*i.e. *an object whose properties correspond to the key/value pairs). The "old" is then passed to the condition. Here is it in code: # omitted contracts for brevity class NovelDict: def length(self)->int: ... # omitted contracts for brevity def get(self, key: str) -> Optional[str]: ... @before(lambda self, key: {"length": self.length(), "get": self.get(key)}) @post(lambda self, key, value: self.get(key) == value) @post(lambda self, key, old: old.get is None and old.length + 1 == self.length(), "length increased with a new key") @post(lambda self, key, old: old.get is not None and old.length == self.length(), "length stable with an existing key") def put(self, key: str, value: str) -> None: ... The linter would statically check that all attributes accessed in "old" have to be defined in the decorator "before" so that attribute errors would be caught early. The current implementation of the linter is fast enough to be run at save time so such errors should usually not happen with a properly set IDE. "before" decorator would also have "enabled" property, so that you can turn it off (*e.g., *if you only want to run a postcondition in testing). 
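Just to illustrate the mechanics (a deliberately naive sketch, *not* the icontract implementation: it ignores inheritance, the "enabled" switch and proper error reporting, and it lets "before" run the checks instead of each "post" checking itself), the two decorators could roughly look like this:

import functools
import inspect
from types import SimpleNamespace


def post(condition):
    # Only record the condition; in this toy version the checking is done by "before".
    def decorator(func):
        func.__postconditions__ = getattr(func, "__postconditions__", []) + [condition]
        return func
    return decorator


def before(capture):
    # Snapshot values prior to the call and expose them to the postconditions as "old".
    # Note: in this sketch, @before has to be the outermost decorator.
    def decorator(func):
        postconditions = getattr(func, "__postconditions__", [])
        func_sig = inspect.signature(func)

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            bound = func_sig.bind(*args, **kwargs)
            bound.apply_defaults()

            # Evaluate the capture lambda on the arguments it names and pack the
            # result into a plain namespace, e.g. old.length and old.get.
            capture_kwargs = {name: bound.arguments[name]
                              for name in inspect.signature(capture).parameters}
            old = SimpleNamespace(**capture(**capture_kwargs))

            result = func(*args, **kwargs)

            # Feed each postcondition the arguments it names, plus "old" and "result".
            for condition in postconditions:
                condition_kwargs = {}
                for name in inspect.signature(condition).parameters:
                    if name == "old":
                        condition_kwargs[name] = old
                    elif name == "result":
                        condition_kwargs[name] = result
                    else:
                        condition_kwargs[name] = bound.arguments[name]
                assert condition(**condition_kwargs), "Postcondition violated"

            return result

        return wrapper
    return decorator

With such a sketch, the "put" example above would fail with an AssertionError as soon as one of the postconditions over the old values does not hold.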
The "before" decorators can be stacked so that you can also have a more fine-grained control when each one of them is running (some during test, some during test and in production). The linter would enforce that before's "enabled" is a disjunction of all the "enabled"'s of the corresponding postconditions where the old value appears. Is this a sane approach to "old" values? Any alternative approach you would prefer? What about better naming? Is "before" a confusing name? Thanks a lot for your thoughts! Cheers, Marko -------------- next part -------------- An HTML attachment was scrubbed... URL: From mertz at gnosis.cx Sun Sep 23 03:15:24 2018 From: mertz at gnosis.cx (David Mertz) Date: Sun, 23 Sep 2018 03:15:24 -0400 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: On Sun, Sep 23, 2018, 1:10 AM Marko Ristin-Kaufmann wrote: > One explanation that seems plausible to me is that many programmers are > actually having a hard time with formalization and logic rules (*e.g., *implication, > quantifiers), maybe due to missing education (*e.g. *many programmers are > people who came to programming from other less-formal fields). It's hence > easier for them to write in human text and takes substantial cognitive load > to formalize these thoughts in code. Does that explains it? > I've tried to explain my own reasons for not being that interested in DbC in other threads. I've been familiar with DbC libraries in Python for close to 20 years, and it never struck me as worth the effort of using. I'm not alone in this. A large majority of folks formally educted in computer science and related fields have been aware of DbC for decades but deliberately decided not to use them in their own code. Maybe you and Bertram Meyer are simple better than that 99% of programmers... Or maybe the benefit is not so self-evidently and compelling as it feels to you. To me, as I've said, DbC imposes a very large cost for both writers and readers of code. While it's possible to split hairs about the edge cases where assertions and unit tests cannot cover identical ground, the reality is that the benefits are extremely close between the different techniques. However, it's vastly easier to take a more incremental and as-needed approach using assertions and unit tests than it is with DbC. Moreover, unit tests have the giant advantage of living *elsewhere* than in the main code itself... This probably doesn't matter so much to writers, but it's a huge win for readers. Even with doctests?which I'm somewhat unusual in actually liking?even though the tests live in the same file and function/class as the operational code, it still feels relatively easy to separate the concerns visual when reading such code. I just cannot get that with DbC. I know you can inside I'm wrong about all this, and my code would be better and faster if I would accept this niche orthodoxy. But I just do not see DbC becoming non-niche in any plausible future, neither in Python not in any other mainstream language. Opinions are free to differ, and I could be wrong. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From goosey15 at gmail.com Sun Sep 23 06:13:59 2018 From: goosey15 at gmail.com (Angus Hollands) Date: Sun, 23 Sep 2018 11:13:59 +0100 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? 
In-Reply-To: References: Message-ID: Hi Marko, I think there are several ways to approach this problem, though am not weighing in on whether DbC is a good thing in Python. I wrote a simple implementation of DbC which is currently a run-time checker. You could, with the appropriate tooling, validate statically too (as with all approaches). In my approach, I use a ?proxy? object to allow the contract code to be defined at function definition time. It does mean that some things are not as pretty as one would like - anything that cannot be hooked into with magic methods i.e isinstance, but I think this is acceptable as it makes features like old easier. Also, one hopes that it encourages simpler contract checks as a side-effect. Feel free to take a look - https://github.com/agoose77/pyffel It is by no means well written, but a fun PoC nonetheless. Regards, Angus ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From hugo.fisher at gmail.com Sun Sep 23 06:33:35 2018 From: hugo.fisher at gmail.com (Hugh Fisher) Date: Sun, 23 Sep 2018 20:33:35 +1000 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: > Date: Sun, 23 Sep 2018 07:09:37 +0200 > From: Marko Ristin-Kaufmann > To: Python-Ideas > Subject: [Python-ideas] Why is design-by-contracts not widely adopted? [ munch ] > *. *After properly reading about design-by-contract and getting deeper into > the topic, there is no rational argument against it and the benefits are > obvious. And still, people just wave their hand and continue without > formalizing the contracts in the code and keep on writing them in the > descriptions. Firstly, I see a difference between rational arguments against Design By Contract (DbC) and against DbC in Python. Rejecting DbC for Python is not the same as rejecting DbC entirely. Programming languages are different, obviously. Python is not the same as C is not the same as Lisp... To me this also means that different languages are used for different problem domains, and in different styles of development. I wouldn't use DbC in programming C or assembler because it's not really helpful for the kind of low level close to the machine stuff I use C or assembler for. And I wouldn't use DbC for Python because I wouldn't find it helpful for the kind of dynamic, exploratory development I do in Python. I don't write strict contracts for Python code because in a dynamically typed, and duck typed, programming language they just don't make sense to me. Which is not to say I think Design by Contract is bad, just that it isn't good for Python. Secondly, these "obvious" benefits. If they're obvious, I want to know why aren't you using Eiffel? It's a programming language designed around DbC concepts. It's been around for three decades, at least as long as Python or longer. There's an existing base of compilers and support tools and libraries and textbooks and experienced programmers to work with. Could it be that Python has better libraries, is faster to develop for, attracts more programmers? If so, I suggest it's worth considering that this might be *because* Python doesn't have DbC. Or is this an updated version of the old saying "real programmers write FORTRAN in any language" ? If you are accustomed to Design by Contract, think of your time in the Python world as a trip to another country. Relax and try to program like the locals do. You might enjoy it. 
-- cheers, Hugh Fisher From desmoulinmichel at gmail.com Mon Sep 24 02:49:34 2018 From: desmoulinmichel at gmail.com (Michel Desmoulin) Date: Mon, 24 Sep 2018 08:49:34 +0200 Subject: [Python-ideas] =?utf-8?q?JS=E2=80=99_governance_model_is_worth_i?= =?utf-8?q?nspecting?= In-Reply-To: <3AD40D9B-D9F4-4F25-95E7-8CFB4697D622@gmail.com> References: <3AD40D9B-D9F4-4F25-95E7-8CFB4697D622@gmail.com> Message-ID: Le 22/09/2018 ? 20:27, James Lu a ?crit : > > To my mind, there is one very big reason we should be cautious about > > adopting JS language-design policies, namely, that they have led to a > > very, very poorly designed language. No doubt a good deal of that is > > baggage from early stages in which JS had a poor to nonexistent language > > design governance model. Nonetheless, the failure of JS to fix its > > numerous fundamental flaws, and especially the rapid feature churn in > > recent years, suggests to me that their model should be viewed with > > skepticism. > I disagree. The language is often very flexible and effective in its domains. I don?t know what you mean by ?rapid feature churn?, churn usually means existing features are superseded by newer ones- this isn?t the case. > > JS is much more nuanced than it appears on the surface. It?s understandable that those with only a glossing of JS look down on it, because JS really was a primitive language a few years ago. > > You can learn about JS in depth with the poorly-named ?You don?t know JS? free online book. > I worked with JS for the last 10 years, and I agree that "we should be cautious about adopting JS language-design policies", particularly about the fact they completly ignored readability in their concerns. But still, using the old JS baggages to justify we reject what they are doing currently is not a good argument: - they can't break the whole Web so deprecation is very hard. Python 2 => 3 should make us understand that. Yes it sucks you can still declare a variable global by default. It also sucked we had to rewrite most good python modules during the last decade. - the new JS features have been so far a good fit for the language and overall made it better. - fast pace evolution is only for the JS ecosystem (and I agree it's terrible). But the spec and implementations have been very reasonable in their progress. Now it's hard to know if it's because of the design policy or in spite of it. But while I still dislike JS, it IS a vastly better language that it used to be and we should not disregard the design policies because of these particular issues. From marko.ristin at gmail.com Mon Sep 24 03:46:16 2018 From: marko.ristin at gmail.com (Marko Ristin-Kaufmann) Date: Mon, 24 Sep 2018 09:46:16 +0200 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: Hi, Thank you for your replies, Hugh and David! Please let me address the points in serial. *Obvious benefits* You both seem to misconceive the contracts. The goal of the design-by-contract is not reduced to testing the correctness of the code, as I reiterated already a couple of times in the previous thread. The contracts document *formally* what the caller and the callee expect and need to satisfy when using a method, a function or a class. This is meant for a module that is used by multiple people which are not necessarily familiar with the code. They are *not *a niche. There are 150K projects on pypi.org. Each one of them would benefit if annotated with the contracts. 
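To make the terminology concrete, here is a tiny made-up example (the function and its contracts are invented purely for illustration, using the icontract decorators shown further below in this message): the precondition states what the caller must ensure, the postcondition what the callee guarantees in return, and both can be rendered into the generated documentation.

from typing import List

import icontract


@icontract.pre(lambda amounts: all(amount > 0 for amount in amounts))
@icontract.post(lambda amounts, result: result >= max(amounts) if amounts else result == 0)
def total(amounts: List[int]) -> int:
    """Sum up the individual amounts."""
    return sum(amounts)

Nothing more than that is meant by "annotating with contracts": the caller reads the two conditions instead of guessing, and the conditions are verified whenever the function runs during testing.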
Please do re-read my previous messages on the topic a bit more attentively. These two pages I also found informative and they are quite fast to read (<15 min): https://www.win.tue.nl/~wstomv/edu/2ip30/references/design-by-contract/index.html https://gleichmann.wordpress.com/2007/12/09/test-driven-development-and-design-by-contract-friend-or-foe/ Here is a quick summary of the argument. When you are documenting a method you have the following options: 1) Write preconditions and postconditions formally and include them automatically in the documentation (*e.g., *by using icontract library). 2) Write precondtions and postconditions in docstring of the method as human text. 3) Write doctests in the docstring of the method. 4) Expect the user to read the actual implementation. 5) Expect the user to read the testing code. Here is what seems obvious to me. *Please do point me to what is not obvious to you* because that is the piece of puzzle that I am missing (*i.e. *why this is not obvious and what are the intricacies). I enumerated the statements for easier reference: a) Using 1) is the only option when you have to deal with inheritance. Other approaches can no do that *without much repetition *in practice *.* b) If you write contracts in text, they will become stale over time (*i.e. *disconnected from the implementation and *plain wrong and misleading*). It is a common problem that the comments rot over time and I hope it does not need further argument (please let me know if you really think that the comments *do not rot*). c) Using 3), doctests, means that you need mocking as soon as your method depends on non-trivial data structures. Moreover, if the output of the function is not trivial and/or long, you actually need to write the contract (as postcondition) just *after *the call in the doctest. Documenting preconditions includes writing down the error that will be thrown. Additionally, you need to write that what you are documenting actually also holds for all other examples, not just for this particular test case (*e.g.*, in human text as a separate sentence before/after the doctest in the docstring). d) Reading other people's code in 4) and 5) is not trivial in most cases and requires a lot of attention as soon as the method includes calls to submethods and functions. This is impractical in most situation*s *since *most code is non-trivial to read* and is subject to frequent changes. e) Most of the time, 5) is not even a viable option as the testing code is not even shipped with the package and includes going to github (if the package is open-sourced) and searching through the directory tree to find the test. This forces every user of a library to get familiar with the *testing code *of the library. f) 4) and 5) are *obviously* a waste of time for the user -- please do explain why this might not be the case. Whenever I use the library, I do not expect to be forced to look into its test code and its implementation. I expect to read the documentation and just use the library if I'm confident about its quality. I have rarely read the implementation of the standard libraries (notable exceptions in my case are ast and subprocess module) or any well-established third-party library I frequently use (numpy, opencv, sklearn, nltk, zmq, lmdb, sqlalchemy). If the documentation is not clear about the contracts, I use trial-and-error to figure out the contracts myself. 
This again is *obviously* a waste of time of the *user *and it's far easier to read the contracts directly than use trial-and-error *.* *Contracts are difficult to read.* David wrote: > To me, as I've said, DbC imposes a very large cost for both writers and > readers of code. > This is again something that eludes me and I would be really thankful if you could clarify. Please consider for an example, pypackagery ( https://pypackagery.readthedocs.io/en/latest/packagery.html) and the documentation of its function resolve_initial_paths: packagery.resolve_initial_paths(*initial_paths*) Resolve the initial paths of the dependency graph by recursively adding *.py files beneath given directories. Parameters: *initial_paths* (List[Path]) ? initial paths as absolute paths Return type: List[Path] Returns: list of initial files (*i.e.* no directories) Requires: - all(pth.is_absolute() for pth in initial_paths) Ensures: - len(result) >= len(initial_paths) if initial_paths else result == [] - all(pth.is_absolute() for pth in result) - all(pth.is_file() for pth in result) How is this difficult to read, unless the reader is not familiar with formalism and has a hard time parsing the quantifiers and logic rules? Mind that all these bits are deemed *important* by the writer -- and need to be included in the function description *somehow* -- you can choose between 1)-5). 1) seems obviously best to me. 1) will be tested at least at test time*. *If I have a bug in the implementation (*e.g., *I include a directory in the result), the testing framework will notify me again. Here is what the reader would have needed to read without the formalism in the docstring as text (*i.e., *2): * All input paths must be absolute. * If the initial paths are empty, the result is an empty list. * All the paths in the result are also absolute. * The resulting paths only include files. and here is an example with doctest (3): >>> result = packagery.resolve_initial_paths([]) [] >>> with temppathlib.NamedTemporaryFile() as tmp1, \ ... temppathlib.NamedTemporaryFile() as tmp2: ... tmp1.path.write_text("some text") ... tmp2.path.write_text("another text") ... result = packagery.resolve_initial_paths([tmp1, tmp2]) ... assert all(pth.is_absolute() for pth in result) ... assert all(pth.is_file() for pth in result) >>> with temppathlib.TemporaryDirectory() as tmp: ... packagery.resolve_initial_paths([tmp.path]) Traceback (most recent call last): ... ValueError("Unexpected directory in the paths") >>> with temppathlib.TemporaryDirectory() as tmp: ... pth = tmp.path / "some-file.py" ... pth.write_text("some text") ... packagery.initial_paths([pth.relative_to(tmp.path)]) Traceback (most recent call last): ... ValueError("Unexpected relative path in the initial paths") Now, how can reading the text (2, code rot) or reading the doctests (3, longer, includes contracts) be easier and more maintainable compared to reading the contracts? I would be really thankful for the explanation -- I feel really stupid as for me this is totally obvious and, evidently, for other people it is not. I hope we all agree that the arguments about this example (resolve_initial_paths) selected here are not particular to pypackagery, but that they generalize to most of the functions and methods out there. *Writing contracts is difficult.* David wrote: > To me, as I've said, DbC imposes a very large cost for both writers and > readers of code. 
> The effort of writing contracts include as of now: * include icontract (or any other design-by-contract library) to setup.py (or requirements.txt), one line one-off * include sphinx-icontract to docs/source/conf.py and docs/source/requirements.txt, two lines, one-off * write your contracts (usually one line per contract). The contracts (1) in the above-mentioned function look like this (omitting the contracts run only at test time): @icontract.pre(lambda initial_paths: all(pth.is_absolute() for pth in initial_paths)) @icontract.post(lambda result: all(pth.is_file() for pth in result)) @icontract.post(lambda result: all(pth.is_absolute() for pth in result)) @icontract.post(lambda initial_paths, result: len(result) >= len(initial_paths) if initial_paths else result == []) def resolve_initial_paths(initial_paths: List[pathlib.Path]) -> List[pathlib.Path]: ... Putting aside how this code could be made more succinct (use "args" or "a" argument in the condition to represent the arguments, using from ... import ..., renaming "result" argument to "r", introducing a shortcut methods slowpre and slowpost to encapsulate the slow contracts not to be executed in the production), how is this difficult to write? It's 4 lines of code. Writing text (2) is 4 lines. Writing doctests (3) is 23 lines and includes the contracts. Again, given that the writer is trained in writing formal expressions, the mental effort is the same for writing the text and writing the formal contract (in cases of non-native English speakers, I'd even argue that formal expressions are sometimes *easier* to write). *99% vs 1%* > I'm not alone in this. A large majority of folks formally educated in > computer science and related fields have been aware of DbC for decades but > deliberately decided not to use them in their own code. Maybe you and > Bertram Meyer are simple better than that 99% of programmers... Or maybe > the benefit is not so self-evidently and compelling as it feels to you. I think that ignorance plays a major role here. Many people have misconceptions about the design-by-contract. They just use 2) for more complex methods, or 3) for rather trivial methods. They are not aware that it's easy to use the contracts (1) and fear using them for non-rational reasons (*e.g., *habits). This is also what Todd Plesel writes in https://www.win.tue.nl/~wstomv/edu/2ip30/references/design-by-contract/index.html#IfDesignByContractIsSoGreat : The vast majority of those developing software - even that intended to be reused - are simply ignorant of the concept. As a result they produce application programmer interfaces (APIs) that are under-specified thus passing the burden to the application programmer to discover by trial and error, the 'acceptable boundaries' of the software interface (undocumented contract's terms). But such ad-hoc operational definitions of software interface discovered through reverse-engineering are subject to change upon the next release and so offers no stable way to ensure software correctness. The fact that many people involved in writing software lack pertinent education (e.g., CS/CE degrees) and training (professional courses, read software engineering journals, attend conferences etc.) is *not* a reason they don't know about DBC since the concept is not covered adequately in such mediums anyway. That is, *ignorance of DBC extends not just throughout practitioners but also throughout educators and many industry-experts.* He lists some more factors and misconceptions that hinder the adoption. 
I would definitely recommend you to read at least that section if not the whole piece. The conclusion paragraph "Culture Shift: Too Soon or Too Late" was also telling: > *The simplicity and obvious benefits of Design By Contract lead one to > wonder why it has not become 'standard practice' in the software > development industry.* When the concept has been explained to various > technical people (all non-programmers), they invariably agree that it is a > sensible approach and some even express dismay that software components are > not developed this way. > > It is just another indicator of the immaturity of the software development > industry. The failure to produce high-quality products is also blatantly > obvious from the non-warranty license agreement of commercial software. Yet > consumers continue to buy software they suspect and even expect to be of > poor quality. Both quality and lack-of-quality have a price tag, but the > difference is in who pays and when. As long as companies can continue to > experience rising profits while selling poor-quality products, what > incentive is there to change? Perhaps the fall-out of the "Year 2000" > problem will focus enough external pressure on the industry to jolt it > towards improved software development methods. There is talk of certifying > programmers like other professionals. If and when that occurs, the benefits > of Design By Contract just might begin to be appreciated. > > But it is doubtful. Considering the typical 20 year rule for adopting > superior technology, DBC as exemplified by Eiffel, has another decade to > go. But if Java succeeds in becoming a widely-used language and JavaBeans > become a widespread form of reuse then it would already be too late for DBC > to have an impact. iContract will be a hardly-noticed event much like ANNA > for Ada and A++ for C++. This is because *the philosophy/mindset/culture > is established by the initial publication of the language and its standard > library.* > (Emphasis mine; iContract refers to a Java design-by-contract library) Hence the salient argument is the lack of *tools* for DbC. So far, none of the existing DbC libraries in Python really have the capabilities to be included in the code base. The programmer had to duplicate the contract, the messages did not include the values involved in the condition, one could not inherit the contracts and the contracts were not included in the documentation. Some libraries supported some of these features, but none up to icontract library supported them all. icontract finally supports all these features. I have *never* seen a rational argument how writing contracts (1) is *inferior *to approaches 2-5), except that it's hard for programmers untrained in writing formal expressions and for the lack of tools. I would be really thankful if you could address these points and show me where I am wrong *given that formalism and tools are not a problem*. We can train the untrained, and we can develop tools (and put them into standard library). This will push adoption to far above 1%. Finally, it is *obvious* to me that the documentation is important. I see lacking documentation as one of the major hits in the productivity of a programmer. If there is a tool that could easily improve the documentation (*i.e. *formal contracts with one line of code per contract) and automatically keep it in sync with the code (by checking the contracts during the testing), I don't see any *rational *reason why you would dispense of such a tool. 
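"Kept in sync" means concretely something like this (a deliberately silly, made-up example; the exact exception type and message format depend on the contract library):

import icontract


@icontract.post(lambda result: all(name.endswith(".py") for name in result))
def list_python_files(names):
    """List only the Python files among the given names."""
    # Imagine a later refactoring that forgets the filtering:
    return list(names)

# Any ordinary test that merely calls the function, e.g.
# list_python_files(["a.py", "b.txt"]), now fails with a contract violation
# pointing at the broken condition and the offending values, whereas the same
# sentence in a docstring would have rotted silently.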
Again, please do correct me and contradict -- I don't want to sound condescending or arrogant -- I literally can't wrap my head around why *anybody* would dispense of a such an easy-to-use tool that gives you better documentation (*i.e. *superior to approaches 2-5) except for lack of formal skills and lack of supporting library. If you think that the documentation is *not *important, then please, do explain that since it goes counter to all my previous experience and intuition (which, of course, can be wrong). *Why not Eiffel?* Hugh wrote: > Secondly, these "obvious" benefits. If they're obvious, I want to know why > aren't you using Eiffel? It's a programming language designed around DbC > concepts. It's been around for three decades, at least as long as Python or > longer. There's an existing base of compilers and support tools and > libraries > and textbooks and experienced programmers to work with. > > Could it be that Python has better libraries, is faster to develop for, > attracts > more programmers? If so, I suggest it's worth considering that this might > be *because* Python doesn't have DbC. Python is easier to write and read, and there are no libraries which are close in quality in Eiffel space (notably, Numpy, OpenCV, nltk and sklearn). I really don't see how the quality of these libraries have anything to do with lack (or presence) of the contracts. OpenCV and Numpy have contracts all over their code (written as assertions and not documented), albeit with very non-informative violation messages. And they are great libraries. Their users would hugely benefit from a more mature and standardized contracts library with informative violation messages. *Duck Typing* Hugh wrote: > And I wouldn't use DbC for Python because > I wouldn't find it helpful for the kind of dynamic, exploratory development > I do in Python. I don't write strict contracts for Python code because in a > dynamically typed, and duck typed, programming language they just don't > make sense to me. Which is not to say I think Design by Contract is bad, > just that it isn't good for Python. > I really don't see how DbC has to do with duck typing (unless you reduce it to mere isinstance conditions, which would simply be a straw-man argument) -- could you please clarify? As soon as you need to document your code, and this is what most modules have to do in teams of more than one person (especially so if you are developing a library for a wider audience), you need to write down the contracts. Please see above where I tried to explained that 2-5) are inferior approaches to documenting contracts compared to 1). As I wrote above, I would be very, very thankful if you point me to other approaches (apart from 1-5) that are superior to contracts or state an argument why approaches 2-5) are superior to the contracts since that is what I miss to see. Cheers, Marko On Sun, 23 Sep 2018 at 12:34, Hugh Fisher wrote: > > Date: Sun, 23 Sep 2018 07:09:37 +0200 > > From: Marko Ristin-Kaufmann > > To: Python-Ideas > > Subject: [Python-ideas] Why is design-by-contracts not widely adopted? > > [ munch ] > > > *. *After properly reading about design-by-contract and getting deeper > into > > the topic, there is no rational argument against it and the benefits are > > obvious. And still, people just wave their hand and continue without > > formalizing the contracts in the code and keep on writing them in the > > descriptions. 
> > Firstly, I see a difference between rational arguments against Design By > Contract (DbC) and against DbC in Python. Rejecting DbC for Python is > not the same as rejecting DbC entirely. > > Programming languages are different, obviously. Python is not the same > as C is not the same as Lisp... To me this also means that different > languages are used for different problem domains, and in different styles > of development. I wouldn't use DbC in programming C or assembler > because it's not really helpful for the kind of low level close to the > machine > stuff I use C or assembler for. And I wouldn't use DbC for Python because > I wouldn't find it helpful for the kind of dynamic, exploratory development > I do in Python. I don't write strict contracts for Python code because in a > dynamically typed, and duck typed, programming language they just don't > make sense to me. Which is not to say I think Design by Contract is bad, > just that it isn't good for Python. > > Secondly, these "obvious" benefits. If they're obvious, I want to know why > aren't you using Eiffel? It's a programming language designed around DbC > concepts. It's been around for three decades, at least as long as Python or > longer. There's an existing base of compilers and support tools and > libraries > and textbooks and experienced programmers to work with. > > Could it be that Python has better libraries, is faster to develop for, > attracts > more programmers? If so, I suggest it's worth considering that this might > be *because* Python doesn't have DbC. > > Or is this an updated version of the old saying "real programmers write > FORTRAN in any language" ? If you are accustomed to Design by Contract, > think of your time in the Python world as a trip to another country. Relax > and try to program like the locals do. You might enjoy it. > > -- > > cheers, > Hugh Fisher > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From turnbull.stephen.fw at u.tsukuba.ac.jp Mon Sep 24 08:15:57 2018 From: turnbull.stephen.fw at u.tsukuba.ac.jp (Stephen J. Turnbull) Date: Mon, 24 Sep 2018 21:15:57 +0900 Subject: [Python-ideas] =?utf-8?q?JS=E2=80=99_governance_model_is_worth_i?= =?utf-8?q?nspecting?= In-Reply-To: References: <3AD40D9B-D9F4-4F25-95E7-8CFB4697D622@gmail.com> Message-ID: <23464.54653.678992.185038@turnbull.sk.tsukuba.ac.jp> Michel Desmoulin writes: > [W]e should not disregard the design policies because of these > particular issues. Please stop. As long as core developers don't get involved, it's just noise. If you must continue this thread, PEP it. No major change in the procedures described in the DevGuide, PEP 1, and so on will take place without a PEP. If you're serious, you'll have to put in that much effort to get a hearing. If you're not, you're wasting lines in my mail client's summary screen. From wes.turner at gmail.com Mon Sep 24 10:34:03 2018 From: wes.turner at gmail.com (Wes Turner) Date: Mon, 24 Sep 2018 10:34:03 -0400 Subject: [Python-ideas] =?utf-8?q?JS=E2=80=99_governance_model_is_worth_i?= =?utf-8?q?nspecting?= In-Reply-To: References: Message-ID: On Thursday, September 20, 2018, James Lu wrote: > JS? decisions are made by a body known as TC39, a fairly/very small group > of JS implementers. 
https://github.com/tc39/ Python has devs with committer privileges: https://devguide.python.org/experts/ There are maintainers for many modules: https://devguide.python.org/experts/ > > First, JS has an easy and widely supported way to modify the language for > yourself: Babel. Babel transpiles your JS to older JS, which is then run. > > You can publish your language modification on the JS package manager, npm. Babel plugins are packaged for and installable with npm: https://babeljs.io/docs/en/plugins New ES features can run on older JS interpreters with transpilation by Babel. > > When a feature is being considered for inclusion in mainline JS, the > proposal must first gain a champion (represented by ?) that is a member of > TC-39. The guidelines say that the proposal's features should already have > found use in the community. Then it moves through three stages, and the > champion must think the proposal is ready for the next stage before it can > move on. I'm hazy on what the criterion for each of the three stages is. > The fourth stage is approved. Is there a link to a document describing the PEP process (with and without BDFL)? That would be a helpful link to add to the table here: https://devguide.python.org/#contributing e.g. "How to write a PEP" as an ISSUE_TEMPLATE/ might be helpful: - [ ] Read the meta-PEPs - [ ] and find the appropriate BDFL-delegate - [ ] copy the PEP 12 RST template - [ ] add the headings specified in PEP 1 - [ ] Read PEP 1 "Meta-PEPs (PEPs about PEPs or Processes)" https://www.python.org/dev/peps/#meta-peps-peps-about-peps-or-processes PEP 12 -- Sample reStructuredText PEP Template https://www.python.org/dev/peps/pep-0012/ PEP 1 -- PEP Purpose and Guidelines https://www.python.org/dev/peps/pep-0001/ PEPs https://github.com/python/peps > I believe the global TC39 committee meets regularly in person, and at > those meetings, proposals can advance stages- these meetings are frequent > enough for the process to be fast and slow enough that people can have the > time to try out a feature before it becomes main line JS. Meeting notes are > made public. PEP 1 describes the PEP mailing list and editors. > > The language and its future features are discussed on ESDiscuss.org, which > is surprisingly filled with quality and respectful discussion, largely from > experts in the JavaScript language. python-dev@, python-ideas@, > > I'm fairly hazy on the details, this is just the summary off the top of my > head. > > ? > I'm not saying this should be Python's governance model, just to keep JS' > in mind. Which features of the TC39 committee's ECMAScript (ES) language governance model would be helpful to incorporate into the Python language governance model?
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From erik.m.bray at gmail.com Mon Sep 24 11:10:27 2018 From: erik.m.bray at gmail.com (Erik Bray) Date: Mon, 24 Sep 2018 17:10:27 +0200 Subject: [Python-ideas] Asynchronous exception handling around with/try statement borders In-Reply-To: References: <2cbe6bbe-eae0-e69d-590a-c28e05de523b@mozilla.com> Message-ID: On Fri, Sep 21, 2018 at 12:58 AM Chris Angelico wrote: > > On Fri, Sep 21, 2018 at 8:52 AM Kyle Lahnakoski wrote: > > Since the java.lang.Thread.stop() "debacle", it has been obvious that > > stopping code to run other code has been dangerous. KeyboardInterrupt > > (any interrupt really) is dangerous.
Now, we can probably code a > > solution, but how about we remove the danger: > > > > I suggest we remove interrupts from Python, and make them act more like > > java.lang.Thread.interrupt(); setting a thread local bit to indicate an > > interrupt has occurred. Then we can write explicit code to check for > > that bit, and raise an exception in a safe place if we wish. This can > > be done with Python code, or convenient places in Python's C source > > itself. I imagine it would be easier to whitelist where interrupts can > > raise exceptions, rather than blacklisting where they should not. > > The time machine strikes again! > > https://docs.python.org/3/c-api/exceptions.html#signal-handling Although my original post did not explicitly mention PyErr_CheckSignals() and friends, it had already taken that into account and it is not a silver bullet, at least w.r.t. the exact issue I raised, which had to do with the behavior of context managers versus the setup() try: do_thing() finally: cleanup() pattern, and the question of how signals are handled between Python interpreter opcodes. There is a still-open bug on the issue tracker discussing the exact issue in greater details: https://bugs.python.org/issue29988 From turnbull.stephen.fw at u.tsukuba.ac.jp Mon Sep 24 13:22:18 2018 From: turnbull.stephen.fw at u.tsukuba.ac.jp (Stephen J. Turnbull) Date: Tue, 25 Sep 2018 02:22:18 +0900 Subject: [Python-ideas] =?utf-8?q?JS=E2=80=99_governance_model_is_worth_i?= =?utf-8?q?nspecting?= In-Reply-To: References: Message-ID: <23465.7498.340344.880175@turnbull.sk.tsukuba.ac.jp> Wes Turner writes: > Is there a link to a document describing the PEP process (with and > without BDFL)? PEP 1, and https://devguide.python.org/langchanges/# But most changes don't need a PEP. We're only discussing this now because Anders's proposal would need a PEP. In general, though, PEPs are rare. There are many hundreds of pull requests accepted every year, but there only about 500 PEPs (including rejected, withdrawn, and deferred PEPs) over the entire history of Python. Of those, quite a few are Process and Informational PEPs. The Language Changes section is #20, and doesn't get a quick link from the table of links in the "Contributing" section of the home page of the DevGuide. That's as it should be, IMO, so I'm not going to write up a PR to add such a link. YMMV, go right ahead. From rosuav at gmail.com Mon Sep 24 13:53:23 2018 From: rosuav at gmail.com (Chris Angelico) Date: Tue, 25 Sep 2018 03:53:23 +1000 Subject: [Python-ideas] Asynchronous exception handling around with/try statement borders In-Reply-To: References: <2cbe6bbe-eae0-e69d-590a-c28e05de523b@mozilla.com> Message-ID: On Tue, Sep 25, 2018 at 1:10 AM Erik Bray wrote: > > On Fri, Sep 21, 2018 at 12:58 AM Chris Angelico wrote: > > > > On Fri, Sep 21, 2018 at 8:52 AM Kyle Lahnakoski wrote: > > > Since the java.lang.Thread.stop() "debacle", it has been obvious that > > > stopping code to run other code has been dangerous. KeyboardInterrupt > > > (any interrupt really) is dangerous. Now, we can probably code a > > > solution, but how about we remove the danger: > > > > > > I suggest we remove interrupts from Python, and make them act more like > > > java.lang.Thread.interrupt(); setting a thread local bit to indicate an > > > interrupt has occurred. Then we can write explicit code to check for > > > that bit, and raise an exception in a safe place if we wish. This can > > > be done with Python code, or convenient places in Python's C source > > > itself. 
I imagine it would be easier to whitelist where interrupts can > > > raise exceptions, rather than blacklisting where they should not. > > > > The time machine strikes again! > > > > https://docs.python.org/3/c-api/exceptions.html#signal-handling > > Although my original post did not explicitly mention > PyErr_CheckSignals() and friends, it had already taken that into > account and it is not a silver bullet, at least w.r.t. the exact issue > I raised, which had to do with the behavior of context managers versus > the > > setup() > try: > do_thing() > finally: > cleanup() > > pattern, and the question of how signals are handled between Python > interpreter opcodes. There is a still-open bug on the issue tracker > discussing the exact issue in greater details: > https://bugs.python.org/issue29988 To be fair, your post not only didn't mention CheckSignals, but it almost perfectly described its behaviour. So I stand by my response. :) I don't think the system needs to be replaced; it ought to be possible to resolve the context manager issue without tossing out the existing code. ChrisA From barry at barrys-emacs.org Mon Sep 24 13:39:21 2018 From: barry at barrys-emacs.org (Barry Scott) Date: Mon, 24 Sep 2018 18:39:21 +0100 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: <8030445F-D92C-4E00-A1A0-EB4735C53ED7@barrys-emacs.org> > On 23 Sep 2018, at 11:13, Angus Hollands wrote: > > Hi Marko, > > I think there are several ways to approach this problem, though am not weighing in on whether DbC is a good thing in Python. I wrote a simple implementation of DbC which is currently a run-time checker. You could, with the appropriate tooling, validate statically too (as with all approaches). In my approach, I use a ?proxy? object to allow the contract code to be defined at function definition time. It does mean that some things are not as pretty as one would like - anything that cannot be hooked into with magic methods i.e isinstance, but I think this is acceptable as it makes features like old easier. Also, one hopes that it encourages simpler contract checks as a side-effect. Feel free to take a look - https://github.com/agoose77/pyffel > It is by no means well written, but a fun PoC nonetheless. > This is an interesting PoC, nice work! I like that its easy to read the tests. Given a library like this the need to build DbC into python seems unnecessary. What do other people think? Barry > Regards, > Angus > > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at barrys-emacs.org Mon Sep 24 13:47:37 2018 From: barry at barrys-emacs.org (Barry Scott) Date: Mon, 24 Sep 2018 18:47:37 +0100 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: <0CA76D06-ECA4-46BF-A18E-942870BE8607@barrys-emacs.org> > On 23 Sep 2018, at 11:33, Hugh Fisher wrote: > > Could it be that Python has better libraries, is faster to develop for, attracts > more programmers? If so, I suggest it's worth considering that this might > be *because* Python doesn't have DbC. I'm not sure how you get from the lack of DbC being a feature to python's success. I use DbC in my python code via the asserts and its been very useful in my experience. 
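Roughly this style (a made-up toy example, not code lifted from my own projects):

class Account:
    def __init__(self, balance: int = 0) -> None:
        self.balance = balance

    def withdraw(self, amount: int) -> int:
        # pre-conditions: what the caller has to guarantee
        assert amount > 0, "amount must be positive"
        assert amount <= self.balance, "cannot overdraw"
        old_balance = self.balance

        self.balance -= amount

        # post-condition: what the method guarantees in return
        assert self.balance == old_balance - amount
        return self.balance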
If there was a nice way to get better then the assert method I'd use it. Like Angus's PoC. I assume that developers that are not interesting in DbC would simply not use any library/syntax that supported it. Barry -------------- next part -------------- An HTML attachment was scrubbed... URL: From marko.ristin at gmail.com Mon Sep 24 15:09:34 2018 From: marko.ristin at gmail.com (Marko Ristin-Kaufmann) Date: Mon, 24 Sep 2018 21:09:34 +0200 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: <8030445F-D92C-4E00-A1A0-EB4735C53ED7@barrys-emacs.org> References: <8030445F-D92C-4E00-A1A0-EB4735C53ED7@barrys-emacs.org> Message-ID: Hi Barry, I think the main issue with pyffel is that it can not support function calls in general. If I understood it right, and Angus please correct me, you would need to wrap every function that you would call from within the contract. But the syntax is much nicer than icontract or dpcontracts (see these packages on pypi). What if we renamed "args" argument and "old" argument in those libraries to just "a" and "o", respectively? Maybe that gives readable code without too much noise: @requires(lambda self, a, o: self.sum == o.sum - a.amount) def withdraw(amount: int) -> None: ... There is this lambda keyword in front, but it's not too bad? I'll try to contact dpcontracts maintainers. Maybe it's possible to at least merge a couple of libraries into one and make it a de facto standard. @Agnus, would you also like to join the effort? Cheers, Marko Le lun. 24 sept. 2018 ? 19:57, Barry Scott a ?crit : > > > On 23 Sep 2018, at 11:13, Angus Hollands wrote: > > Hi Marko, > > I think there are several ways to approach this problem, though am not > weighing in on whether DbC is a good thing in Python. I wrote a simple > implementation of DbC which is currently a run-time checker. You could, > with the appropriate tooling, validate statically too (as with all > approaches). In my approach, I use a ?proxy? object to allow the contract > code to be defined at function definition time. It does mean that some > things are not as pretty as one would like - anything that cannot be hooked > into with magic methods i.e isinstance, but I think this is acceptable as > it makes features like old easier. Also, one hopes that it encourages > simpler contract checks as a side-effect. Feel free to take a look - > https://github.com/agoose77/pyffel > It is by no means well written, but a fun PoC nonetheless. > > This is an interesting PoC, nice work! I like that its easy to read the > tests. > > Given a library like this the need to build DbC into python seems > unnecessary. > > What do other people think? > > Barry > > > > Regards, > Angus > ? > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > > > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at barrys-emacs.org Mon Sep 24 15:34:47 2018 From: barry at barrys-emacs.org (Barry Scott) Date: Mon, 24 Sep 2018 20:34:47 +0100 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? 
In-Reply-To: References: <8030445F-D92C-4E00-A1A0-EB4735C53ED7@barrys-emacs.org> Message-ID: <37D7E886-201E-4F31-987D-5B21CA0691B8@barrys-emacs.org> > On 24 Sep 2018, at 20:09, Marko Ristin-Kaufmann wrote: > > Hi Barry, > I think the main issue with pyffel is that it can not support function calls in general. If I understood it right, and Angus please correct me, you would need to wrap every function that you would call from within the contract. > > But the syntax is much nicer than icontract or dpcontracts (see these packages on pypi). What if we renamed "args" argument and "old" argument in those libraries to just "a" and "o", respectively? Maybe that gives readable code without too much noise: The args and old and not noise its easier to read the a and o. a and o as aliases for more descriptive names maybe, but not as the only name. > > @requires(lambda self, a, o: self.sum == o.sum - a.amount) > def withdraw(amount: int) -> None: > ... > > There is this lambda keyword in front, but it's not too bad? The lambda smells of internals that I should not have to care about being exposed. So -1 on lambda being required. Also being able to supply a list of conditions was a +1. > > I'll try to contact dpcontracts maintainers. Maybe it's possible to at least merge a couple of libraries into one and make it a de facto standard. @Agnus, would you also like to join the effort? > > Cheers, > Marko > > > > > > Le lun. 24 sept. 2018 ? 19:57, Barry Scott > a ?crit : > > >> On 23 Sep 2018, at 11:13, Angus Hollands > wrote: >> >> Hi Marko, >> >> I think there are several ways to approach this problem, though am not weighing in on whether DbC is a good thing in Python. I wrote a simple implementation of DbC which is currently a run-time checker. You could, with the appropriate tooling, validate statically too (as with all approaches). In my approach, I use a ?proxy? object to allow the contract code to be defined at function definition time. It does mean that some things are not as pretty as one would like - anything that cannot be hooked into with magic methods i.e isinstance, but I think this is acceptable as it makes features like old easier. Also, one hopes that it encourages simpler contract checks as a side-effect. Feel free to take a look - https://github.com/agoose77/pyffel >> It is by no means well written, but a fun PoC nonetheless. >> > This is an interesting PoC, nice work! I like that its easy to read the tests. > > Given a library like this the need to build DbC into python seems unnecessary. > > What do other people think? > > Barry > > > >> Regards, >> Angus >> >> _______________________________________________ >> Python-ideas mailing list >> Python-ideas at python.org >> https://mail.python.org/mailman/listinfo/python-ideas >> Code of Conduct: http://python.org/psf/codeofconduct/ > > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jamtlu at gmail.com Mon Sep 24 18:34:16 2018 From: jamtlu at gmail.com (James Lu) Date: Mon, 24 Sep 2018 18:34:16 -0400 Subject: [Python-ideas] "old" values in postconditions In-Reply-To: References: Message-ID: You could disassemble (import dis) the lambda to biew the names of the lambdas. 
@before(lambda self, key, _, length, get: self.length(), self.get(key)) Perhaps you could disassemble the function code and look at all operations or accesses that are done to "old." and evaluate those expressions before the function runs. Then you could "replace" the expression. @post(lambda self, key, old: old.get is None and old.length + 1 == self.length()) Either the system would grab old.get and old.length or be greedy and grab old.get is None and old.length + 1. It would then replace the old.get and old.length with injects that only respond to is None and +1. Or, a syntax like this @post(lambda self, key, old: [old.get(old.key)] is None and [old.self.length() + 1] == self.length()) Where the stuff inside the brackets is evaluated before the decorated function runs. It would be useful for networking functions or functions that do something ephemeral, where data related to the value being accessed and needed for the expression no longer exists after the function. This does conflict with list syntax forever, so maybe either force people to do list((expr,)) or use an alternate syntax like one item set syntax { } or double set syntax {{ }} or double list syntax [[ ]]. Ditto with having to avoid the literals for the normal meaning. You could modify Python to accept any expression for the lambda function and propose that as a PEP. (Right now it's hardcoded as a dotted name and optionally a single argument list surrounded by parentheses.) I suggest that instead of "@before" it's "@snapshot" and instead of "old" it's "snapshot". Python does have unary plus/minus syntax as well as stream operators (<<, >>) and list slicing syntax and the @ operator and operators & and | if you want to play with syntax. There's also the line continuation character for crazy lambdas. Personally I prefer @post(lambda self, key, old: {{old.self.get(old.key)}} and {{old.self.length() + 1}} == self.length()) because it's explicit about what it does (evaluate the expressions within {{ }} before the function runs). I also find it elegant. Alternatively, inside the {{ }} could be a special scope where locals() is all the arguments @pre could've received as a dictionary. For either option you can remove the old parameter from the lambda. Example: @post(lambda self, key: {{self.get(key)}} and {{self.length() + 1}} == self.length()) Perhaps the convention should be to write {{ expr }} (with the spaces in between). You'd probably have to use the ast module to inspect it instead of the dis module. Then find some way to reconstruct the expressions inside the double brackets - perhaps by reconstructing the AST and compiling it to a code object, or perhaps by finding the part of the string where the expression is located. dis can give you the code as a string and you can run a carefully crafted regex on it.
From jamtlu at gmail.com Mon Sep 24 18:34:23 2018 From: jamtlu at gmail.com (James Lu) Date: Mon, 24 Sep 2018 18:34:23 -0400 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? Message-ID: <4C344111-1E74-4954-B8F8-6B10433C0257@gmail.com> Perhaps it's because fewer Python functions involve transitioning between states. Web development and statistics don't involve many state transitions. State transitions are where I think I would find it useful to write contracts out explicitly.
From jamtlu at gmail.com Mon Sep 24 22:26:08 2018 From: jamtlu at gmail.com (James Lu) Date: Mon, 24 Sep 2018 22:26:08 -0400 Subject: [Python-ideas] JS'
governance model is worth inspecting Message-ID: <0E7ACD0F-8007-42C8-8711-0068C3856260@gmail.com> > Which features of the TC39 committee's ECMAscript (ES) language governance > model would be helpful to incorporate into the Python language governance > model? Having ?beta? or ?alpha? editions of features, special versions of the interpreter people can test out to see if they prefer the version with the new feature. To prevent splintering, the main releases would only support the main feature set. In a worst case scenario, people can compile incompatible code to .pyc before running it. From turnbull.stephen.fw at u.tsukuba.ac.jp Tue Sep 25 00:56:37 2018 From: turnbull.stephen.fw at u.tsukuba.ac.jp (Stephen J. Turnbull) Date: Tue, 25 Sep 2018 13:56:37 +0900 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: <37D7E886-201E-4F31-987D-5B21CA0691B8@barrys-emacs.org> References: <8030445F-D92C-4E00-A1A0-EB4735C53ED7@barrys-emacs.org> <37D7E886-201E-4F31-987D-5B21CA0691B8@barrys-emacs.org> Message-ID: <23465.49157.902727.935888@turnbull.sk.tsukuba.ac.jp> Barry Scott writes: > > @requires(lambda self, a, o: self.sum == o.sum - a.amount) > > def withdraw(amount: int) -> None: > > ... > > > > There is this lambda keyword in front, but it's not too bad? > > The lambda smells of internals that I should not have to care about > being exposed. > So -1 on lambda being required. If you want to get rid of the lambda you can use strings and then 'eval' them in the condition. Adds overhead. If you want to avoid the extra runtime overhead of parsing expressions, it might be nice to prototype with MacroPy. This should also allow eliminating the lambda by folding it into the macro (I haven't used MacroPy but it got really good reviews by fans of that kind of thing). It would be possible to avoid decorator syntax if you want to with this implementation. I'm not sure that DbC is enough of a fit for Python that it's worth changing syntax to enable nice syntax natively, but detailed reports on a whole library (as long as it's not tiny) using DbC with a nice syntax (MacroPy would be cleaner, but I think it would be easy to "see through" the quoted conditions in an eval-based implementation) would go a long way to making me sit up and take notice. (I'm not influential enough to care about, but I suspect some committers would be impressed too. YMMV) Steve From marko.ristin at gmail.com Tue Sep 25 02:18:49 2018 From: marko.ristin at gmail.com (Marko Ristin-Kaufmann) Date: Tue, 25 Sep 2018 08:18:49 +0200 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: <23465.49157.902727.935888@turnbull.sk.tsukuba.ac.jp> References: <8030445F-D92C-4E00-A1A0-EB4735C53ED7@barrys-emacs.org> <37D7E886-201E-4F31-987D-5B21CA0691B8@barrys-emacs.org> <23465.49157.902727.935888@turnbull.sk.tsukuba.ac.jp> Message-ID: Hi Steve, Thanks a lot for pointing us to macropy -- I was not aware of the library, it looks very interesting! Do you have any experience how macropy fit with current IDEs and static linters (pylint, mypy)? I fired up pylint and mypy on the sample code from their web site, played a bit with it and it seems that they go along well. I'm also a bit worried how macropy would work out in the libraries published to pypi -- imagine if many people start using contracts. Suddenly, all these libraries would not only depend on a contract library but on a macro library as well. Is that something we should care about? Potential dependency hell? 
(I already have a bad feeling about making icontract depend on asttokens and considerin-lining asttokens into icontract particularly for that reason). I'm also worried about this one (from https://macropy3.readthedocs.io/en/latest/overview.html): > Note that this means *you cannot use macros in a file that is run > directly*, as it will not be passed through the import hooks. That would make contracts unusable in any stand-alone script, right? Cheers, Marko On Tue, 25 Sep 2018 at 06:56, Stephen J. Turnbull < turnbull.stephen.fw at u.tsukuba.ac.jp> wrote: > Barry Scott writes: > > > > @requires(lambda self, a, o: self.sum == o.sum - a.amount) > > > def withdraw(amount: int) -> None: > > > ... > > > > > > There is this lambda keyword in front, but it's not too bad? > > > > The lambda smells of internals that I should not have to care about > > being exposed. > > So -1 on lambda being required. > > If you want to get rid of the lambda you can use strings and then > 'eval' them in the condition. Adds overhead. > > If you want to avoid the extra runtime overhead of parsing > expressions, it might be nice to prototype with MacroPy. This should > also allow eliminating the lambda by folding it into the macro (I > haven't used MacroPy but it got really good reviews by fans of that > kind of thing). It would be possible to avoid decorator syntax if you > want to with this implementation. > > I'm not sure that DbC is enough of a fit for Python that it's worth > changing syntax to enable nice syntax natively, but detailed reports > on a whole library (as long as it's not tiny) using DbC with a nice > syntax (MacroPy would be cleaner, but I think it would be easy to "see > through" the quoted conditions in an eval-based implementation) would > go a long way to making me sit up and take notice. (I'm not > influential enough to care about, but I suspect some committers would > be impressed too. YMMV) > > Steve > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marko.ristin at gmail.com Tue Sep 25 03:28:08 2018 From: marko.ristin at gmail.com (Marko Ristin-Kaufmann) Date: Tue, 25 Sep 2018 09:28:08 +0200 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: <8030445F-D92C-4E00-A1A0-EB4735C53ED7@barrys-emacs.org> <37D7E886-201E-4F31-987D-5B21CA0691B8@barrys-emacs.org> <23465.49157.902727.935888@turnbull.sk.tsukuba.ac.jp> Message-ID: Hi Steve and others, After some thinking, I'm coming to a conclusion that it might be wrong to focus too much about how the contracts are written -- as long as they are formal, easily transformable to another representation and fairly maintainable. Whether it's with a lambda, without, with "args" or "a", with "old" or "o" -- it does not matter that much as long as it is pragmatic and not something crazy complex. This would also mean that we should not add complexity (*e.g., *by adding macros) and limit the magic as much as possible. It is actually much more important in which form they are presented to the end-user. I already made an example with sphinx-icontract in a message before -- an improved version might use mathematical symbols (*e.g., *replace all() with ?, replace len() with |.|, nicely use subscripts for ranges, use case distinction with curly bracket "{" instead of if.. 
else ..., etc.). This would make them even shorter and easier to parse. Let me iterate the example I already pasted in the thread before to highlight what I have in mind: packagery.resolve_initial_paths(*initial_paths*) Resolve the initial paths of the dependency graph by recursively adding *.py files beneath given directories. Parameters: *initial_paths* (List[Path]) ? initial paths as absolute paths Return type: List[Path] Returns: list of initial files (*i.e.* no directories) Requires: - all(pth.is_absolute() for pth in initial_paths) Ensures: - all(pth in result for pth in initial_paths if pth.is_file()) (Initial files also in result) - len(result) >= len(initial_paths) if initial_paths else result == [] - all(pth.is_absolute() for pth in result) - all(pth.is_file() for pth in result) The contracts need to extend __doc__ of the function accordingly (and the contracts in __doc__ also need to reflect the inheritance of the contracts!), so that we can use help(). There should be also a plugin for Pycharm, Pydev, vim and emacs to show the contracts in an abbreviated and more readable form in the code and only show them in raw form when we want to edit them (*i.e., *when we move cursor over them). I suppose inheritance of contracts needs to be reflected in quick-inspection windows, but not in the code view. Diffs and github/bitbucket/... code reviews might be a bit cumbersome since they enforce the raw form of the contracts, but as long as syntax is pragmatic, I don't expect this to be a blocker. Is this a sane focus? Cheers, Marko On Tue, 25 Sep 2018 at 08:18, Marko Ristin-Kaufmann wrote: > Hi Steve, > Thanks a lot for pointing us to macropy -- I was not aware of the library, > it looks very interesting! > > Do you have any experience how macropy fit with current IDEs and static > linters (pylint, mypy)? I fired up pylint and mypy on the sample code from > their web site, played a bit with it and it seems that they go along well. > > I'm also a bit worried how macropy would work out in the libraries > published to pypi -- imagine if many people start using contracts. > Suddenly, all these libraries would not only depend on a contract library > but on a macro library as well. Is that something we should care about? > Potential dependency hell? (I already have a bad feeling about making > icontract depend on asttokens and considerin-lining asttokens into > icontract particularly for that reason). > > I'm also worried about this one (from > https://macropy3.readthedocs.io/en/latest/overview.html): > >> Note that this means *you cannot use macros in a file that is run >> directly*, as it will not be passed through the import hooks. > > > That would make contracts unusable in any stand-alone script, right? > > Cheers, > Marko > > On Tue, 25 Sep 2018 at 06:56, Stephen J. Turnbull < > turnbull.stephen.fw at u.tsukuba.ac.jp> wrote: > >> Barry Scott writes: >> >> > > @requires(lambda self, a, o: self.sum == o.sum - a.amount) >> > > def withdraw(amount: int) -> None: >> > > ... >> > > >> > > There is this lambda keyword in front, but it's not too bad? >> > >> > The lambda smells of internals that I should not have to care about >> > being exposed. >> > So -1 on lambda being required. >> >> If you want to get rid of the lambda you can use strings and then >> 'eval' them in the condition. Adds overhead. >> >> If you want to avoid the extra runtime overhead of parsing >> expressions, it might be nice to prototype with MacroPy. 
This should >> also allow eliminating the lambda by folding it into the macro (I >> haven't used MacroPy but it got really good reviews by fans of that >> kind of thing). It would be possible to avoid decorator syntax if you >> want to with this implementation. >> >> I'm not sure that DbC is enough of a fit for Python that it's worth >> changing syntax to enable nice syntax natively, but detailed reports >> on a whole library (as long as it's not tiny) using DbC with a nice >> syntax (MacroPy would be cleaner, but I think it would be easy to "see >> through" the quoted conditions in an eval-based implementation) would >> go a long way to making me sit up and take notice. (I'm not >> influential enough to care about, but I suspect some committers would >> be impressed too. YMMV) >> >> Steve >> _______________________________________________ >> Python-ideas mailing list >> Python-ideas at python.org >> https://mail.python.org/mailman/listinfo/python-ideas >> Code of Conduct: http://python.org/psf/codeofconduct/ >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From goosey15 at gmail.com Tue Sep 25 03:37:40 2018 From: goosey15 at gmail.com (Angus Hollands) Date: Tue, 25 Sep 2018 08:37:40 +0100 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: <8030445F-D92C-4E00-A1A0-EB4735C53ED7@barrys-emacs.org> Message-ID: Hi Mario, yes I'd pass in some kind of 'old' object as a proxy to the old object state. My demo can handle function calls, unless they themselves ultimately call something which can't be proxies e.g is instance (which delegates to the test class, not the instance), or boolean evaluation of some expression (e.g an if block). I don't think that this is awful - contracts should probably be fairly concise while expressive - but definitely non-ideal. I haven't really time to work on this at the moment; I admit, it was a specific problem of interest, rather than a domain I have much experience with. In fact, it was probably an excuse to overload all of the operators on an object! Kind regards, Angus On Mon, 24 Sep 2018, 20:09 Marko Ristin-Kaufmann, wrote: > Hi Barry, > I think the main issue with pyffel is that it can not support function > calls in general. If I understood it right, and Angus please correct me, > you would need to wrap every function that you would call from within the > contract. > > But the syntax is much nicer than icontract or dpcontracts (see these > packages on pypi). What if we renamed "args" argument and "old" argument in > those libraries to just "a" and "o", respectively? Maybe that gives > readable code without too much noise: > > @requires(lambda self, a, o: self.sum == o.sum - a.amount) > def withdraw(amount: int) -> None: > ... > > There is this lambda keyword in front, but it's not too bad? > > I'll try to contact dpcontracts maintainers. Maybe it's possible to at > least merge a couple of libraries into one and make it a de facto standard. > @Agnus, would you also like to join the effort? > > Cheers, > Marko > > > > > > Le lun. 24 sept. 2018 ? 19:57, Barry Scott a > ?crit : > >> >> >> On 23 Sep 2018, at 11:13, Angus Hollands wrote: >> >> Hi Marko, >> >> I think there are several ways to approach this problem, though am not >> weighing in on whether DbC is a good thing in Python. I wrote a simple >> implementation of DbC which is currently a run-time checker. You could, >> with the appropriate tooling, validate statically too (as with all >> approaches). 
In my approach, I use a ?proxy? object to allow the contract >> code to be defined at function definition time. It does mean that some >> things are not as pretty as one would like - anything that cannot be hooked >> into with magic methods i.e isinstance, but I think this is acceptable >> as it makes features like old easier. Also, one hopes that it encourages >> simpler contract checks as a side-effect. Feel free to take a look - >> https://github.com/agoose77/pyffel >> It is by no means well written, but a fun PoC nonetheless. >> >> This is an interesting PoC, nice work! I like that its easy to read the >> tests. >> >> Given a library like this the need to build DbC into python seems >> unnecessary. >> >> What do other people think? >> >> Barry >> >> >> >> Regards, >> Angus >> ? >> _______________________________________________ >> Python-ideas mailing list >> Python-ideas at python.org >> https://mail.python.org/mailman/listinfo/python-ideas >> Code of Conduct: http://python.org/psf/codeofconduct/ >> >> >> _______________________________________________ >> Python-ideas mailing list >> Python-ideas at python.org >> https://mail.python.org/mailman/listinfo/python-ideas >> Code of Conduct: http://python.org/psf/codeofconduct/ >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Tue Sep 25 04:01:28 2018 From: robertc at robertcollins.net (Robert Collins) Date: Tue, 25 Sep 2018 20:01:28 +1200 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: On Mon, 24 Sep 2018 at 19:47, Marko Ristin-Kaufmann wrote: > > Hi, > > Thank you for your replies, Hugh and David! Please let me address the points in serial. > > Obvious benefits > You both seem to misconceive the contracts. The goal of the design-by-contract is not reduced to testing the correctness of the code, as I reiterated already a couple of times in the previous thread. The contracts document formally what the caller and the callee expect and need to satisfy when using a method, a function or a class. This is meant for a module that is used by multiple people which are not necessarily familiar with the code. They are not a niche. There are 150K projects on pypi.org. Each one of them would benefit if annotated with the contracts. You'll lose folks attention very quickly when you try to tell folk what they do and don't understand. Claiming that DbC annotations will improve the documentation of every single library on PyPI is an extraordinary claim, and such claims require extraordinary proof. I can think of many libraries where necessary pre and post conditions (such as 'self is still locked') are going to be noisy, and at risk of reducing comprehension if the DbC checks are used to enhance/extended documentation. Some of the examples you've been giving would be better expressed with a more capable type system in my view (e.g. Rust's), but I have no good idea about adding that into Python :/. Anyhow, the thing I value most about python is its pithyness: its extremely compact, allowing great developer efficiency, but the cost of testing is indeed excessive if the tests are not structured well. That said, its possible to run test suites with 10's of thousands of tests in only a few seconds, so there's plenty of headroom for most projects. -Rob From turnbull.stephen.fw at u.tsukuba.ac.jp Tue Sep 25 06:08:04 2018 From: turnbull.stephen.fw at u.tsukuba.ac.jp (Stephen J. 
Turnbull) Date: Tue, 25 Sep 2018 19:08:04 +0900 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: <8030445F-D92C-4E00-A1A0-EB4735C53ED7@barrys-emacs.org> <37D7E886-201E-4F31-987D-5B21CA0691B8@barrys-emacs.org> <23465.49157.902727.935888@turnbull.sk.tsukuba.ac.jp> Message-ID: <23466.2308.577009.907279@turnbull.sk.tsukuba.ac.jp> Marko Ristin-Kaufmann writes: > Thanks a lot for pointing us to macropy -- I was not aware of the library, > it looks very interesting! > > Do you have any experience how macropy fit Sorry, no. I was speaking as someone who is familiar with macros from Lisp but doesn't miss them in Python, and who also has been watching python-dev and python-ideas for about two decades now, so I've heard of things like MacroPy and know how the core developers think to a great extent. > I'm also a bit worried how macropy would work out in the libraries > published to pypi -- imagine if many people start using contracts. > Suddenly, all these libraries would not only depend on a contract library > but on a macro library as well. That's right. > Is that something we should care about? Yes. Most Pythonistas (at least at present) don't much like macros. They fear turning every program into its own domain-specific language. I can't claim much experience with dependency hell, but I think that's much less important from your point of view (see below). My point is mainly that, as you probably are becoming painfully aware, getting syntax changes into Python is a fairly drawnout process. For an example of the kind of presentation that motivates people to change their mind from the default state of "if it isn't in Python yet, YAGNI" to "yes, let's do *this* one", see https://www.python.org/dev/peps/pep-0572/#appendix-a-tim-peters-s-findings Warning: Tim Peters is legendary, though still active occasionally. All he has to do is post to get people to take notice. But this Appendix is an example of why he gets that kind of R-E-S-P-E-C-T.[1] So the whole thing is a secret plot ;-) to present the most beautiful syntax possible in your PEP (which will *not* be about DbC, but rather about a small set of enabling syntax changes, hopefully a singleton), along with an extended example, or a representative sample, of usage. Because you have a working implementation using MacroPy (or the less pretty[2] but fewer dependencies version based on condition strings and eval) people can actually try it on their own code and (you hope, they don't :-) they find a nestful of bugs by using it. > Potential dependency hell? (I already have a bad feeling about > making icontract depend on asttokens and considerin-lining > asttokens into icontract particularly for that reason). I don't think so. First, inlining an existing library is almost always a bad idea. As for the main point, if the user sticks to one major revision, and only upgrades to compatible bugfixes in the Python+stdlib distribution, I don't see why two or three libraries would be a major issue for a feature that the developer/project uses extremely frequently. I've rarely experienced dependency hell, and in both cases it was web frameworks (Django and Zope, to be specific, and the dependencies involved were more or less internal to those frameworks). If you or people you trust have other experience, forget what I just said. 
:-) Of course it depends on the library, but as long as the library is pretty strict about backward compatibility, you can upgrade it and get new functionality for other callers in your code base (which are likely to appear, you know -- human beings cannot stand to leave a tool unused once they install it!) > > Note that this means *you cannot use macros in a file that is run > > directly*, as it will not be passed through the import hooks. > > That would make contracts unusable in any stand-alone script, > right? Yes, but really, no: # The run.py described by the MacroPy docs assumes a script that # runs by just importing it. I don't have time to work out # whether that makes more sense. This idiom of importing just a # couple of libraries, and then invoking a function with a # conventional name such as "run" or "process" is quite common. # If you have docutils install, check out rstpep2html.py. import macropy.activate from my_contractful_library import main main() and away you go. 5 years from now that script will be a badge of honor among Pythonic DbCers, and you won't be willing to give it up! Just kidding, of course -- the ideal outcome is that the use case is sufficiently persuasive to justify a syntax change so you don't need MacroPy, or, perhaps some genius will come along and provide some obscure construct that is already legal syntax! HTH Footnotes: [1] R.I.P. Aretha! [2] Urk, I just realized there's another weakness to strings: you get no help on checking their syntax from the compiler. For a proof-of- concept that's OK, but if you end up using the DbC library in your codebase for a couple years while the needed syntax change gathers support, that would be really bad. From turnbull.stephen.fw at u.tsukuba.ac.jp Tue Sep 25 06:08:30 2018 From: turnbull.stephen.fw at u.tsukuba.ac.jp (Stephen J. Turnbull) Date: Tue, 25 Sep 2018 19:08:30 +0900 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: <8030445F-D92C-4E00-A1A0-EB4735C53ED7@barrys-emacs.org> Message-ID: <23466.2334.553202.98954@turnbull.sk.tsukuba.ac.jp> Angus Hollands writes: > yes I'd pass in some kind of 'old' object as a proxy to the old object > state. Mostly you shouldn't need to do this, you can copy the state: def method(self, args): import copy old = copy.deepcopy(self) This is easy but verbose to do with a decorator, and I imagine a bunch of issues about the 'old' object with multiple decorators, so I omit it here. You might want a variety of such decorators. Ie, using copy.copy vs copy.deepcopy vs a special-case copy for a particular class because there are large objects that are actually constant that you don't want to copy (an "is" test would be enough, so the copy would actually implement part of the contract). Or the copy function could be an argument to the decorator or a method on the object. From boxed at killingar.net Tue Sep 25 06:30:45 2018 From: boxed at killingar.net (=?UTF-8?Q?Anders_Hovm=C3=B6ller?=) Date: Tue, 25 Sep 2018 03:30:45 -0700 (PDT) Subject: [Python-ideas] Keyword only argument on function call In-Reply-To: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> Message-ID: <951c4837-ca57-4e7b-9502-985a58f5c05c@googlegroups.com> Hi, I'd like to reopen this discussion if anyone is interested. Some things have changed since I wrote my original proposal so I'll first summarize: 1. People seem to prefer the syntax `foo(=a)` over the syntax I suggested. 
I believe this is even more trivial to implement in CPython than my original proposal anyway... 2. I have updated my analysis tool: https://gist.github.com/boxed/610b2ba73066c96e9781aed7c0c0b25c It will now also give you statistics on the number of arguments function calls have. I would love to see some statistics for other closed source programs you might be working on and how big those code bases are. 3. I have made a sort-of implementation with MacroPy: https://github.com/boxed/macro-kwargs/blob/master/test.py I think this is a dead end, but it was easy to implement and fun to try! 4. I have also recently had the idea that a foo=foo type pattern could be handled in for example PyCharm as a code folding feature (and maybe as a completion feature). I still think that changing Pythons syntax is the right way to go in the long run but with point 4 above one could experience what this feature would feel like without running a custom version of Python and without changing your code. I admit to a lot of trepidation about wading into PyCharms code though, I have tried to do this once before and I gave up. Any thoughts? / Anders -------------- next part -------------- An HTML attachment was scrubbed... URL: From jamtlu at gmail.com Tue Sep 25 06:31:06 2018 From: jamtlu at gmail.com (James Lu) Date: Tue, 25 Sep 2018 06:31:06 -0400 Subject: [Python-ideas] "old" values in postconditions In-Reply-To: References: Message-ID: <7A802860-439B-4399-A286-E687C6F7624A@gmail.com> Have you looked at the built-in AST module, ast? https://docs.python.org/3/library/ast.html I don?t see anything preventing you from walking the AST Python itself can give you- you?d look for two Set AST nodes if we were to do {{ }}. There?s also the parser built-in module. You can use it if you first use dis.code_info to get the source then re-parse it. It helps with parse trees. Parse trees are generated before the AST I think. You?d use the parser module?s ST objects with the token module?s constants, for example token.LBRACE or token.RBRACE. Have you looked at the built-in dis module? You can use dis.code_info(obj) to get the string of the function. Then you could look for your specified syntax with regex and recompile that with the ast module. Sent from my iPhone > On Sep 25, 2018, at 1:49 AM, Marko Ristin-Kaufmann wrote: > > Hi James, > Thanks for the feedback! > > I also thought about decompiling the condition to find its AST and figure out what old values are needed. However, this is not so easily done at the moment as all the decompilation libraries I looked at (meta, ucompyle6) are simply struggling to keep up with the development of the python bytecode. In other words, the decompiler needs to be updated with every new version of python which is kind of a loosing race (unless the python devs themselves don't provide this functionality which is not the case as far as I know). > > There is macropy (https://github.com/lihaoyi/macropy) which was suggested on the other thread (https://groups.google.com/forum/#!topic/python-ideas/dmXz_7LH4GI) that I'm currently looking at. > > Cheers, > Marko > > >> On Tue, 25 Sep 2018 at 00:35, James Lu wrote: >> You could disassemble (import dis) the lambda to biew the names of the lambdas. >> >> @before(lambda self, key, _, length, get: self.length(), self.get(key)) >> >> Perhaps you could disassemble the function code and look at all operations or accesses that are done to ?old.? and evaluate those expressions before the function runs. Then you could ?replace? the expression. 
>> @post(lambda self, key, old: old.get is None and old.length + 1 == >> self.length()) >> >> Either the system would grab old.get and old.length or be greedy and grab old.get is None and old.length + 1. It would then replace the old.get and old.length with injects that only respond to is None and +1. >> >> Or, a syntax like this >> @post(lambda self, key, old: [old.get(old.key)] is None and [old.self.length() + 1] == >> self.length()) >> >> Where the stuff inside the brackets is evaluated before the decorated function runs. It would be useful for networking functions or functions that do something ephemeral, where data related to the value being accessed needed for the expression no longer exists after the function. >> >> This does conflict with list syntax forever, so maybe either force people to do list((expr,)) or use an alternate syntax like one item set syntax { } or double set syntax {{ }} or double list syntax [[ ]]. Ditto with having to avoid the literals for the normal meaning. >> >> You could modify Python to accept any expression for the lambda function and propose that as a PEP. (Right now it?s hardcoded as a dotted name and optionally a single argument list surrounded by parentheses.) >> >> I suggest that instead of ?@before? it?s ?@snapshot? and instead of ?old? it?s ?snapshot?. >> >> Python does have unary plus/minus syntax as well as stream operators (<<, >>) and list slicing syntax and the @ operator and operators & and | if you want to play with syntax. There?s also the line continuation character for crazy lambdas. >> >> Personally I prefer >> @post(lambda self, key, old: {{old.self.get(old.key)}} and {{old.self.length() + 1}} == >> self.length()) >> >> because it?s explicit about what it does (evaluate the expressions within {{ }} before the function runs. I also find it elegant. >> >> Alternatively, inside the {{ }} could be a special scope where locals() is all the arguments @pre could?ve received as a dictionary. For either option you can remove the old parameter from the lambda. Example: >> @post(lambda self, key: {{self.get(key)}} and {{self.length() + 1}} == >> self.length()) >> >> Perhaps the convention should be to write {{ expr }} (with the spaces in between). >> >> You?d probably have to use the ast module to inspect it instead of the dis modul. Then find some way to reconstruct the expressions inside the double brackets- perhaps by reconstructing the AST and compiling it to a code object, or perhaps by finding the part of the string the expression is located. dis can give you the code as a string and you can run a carefully crafted regex on it. >> _______________________________________________ >> Python-ideas mailing list >> Python-ideas at python.org >> https://mail.python.org/mailman/listinfo/python-ideas >> Code of Conduct: http://python.org/psf/codeofconduct/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From hugo.fisher at gmail.com Tue Sep 25 07:12:24 2018 From: hugo.fisher at gmail.com (Hugh Fisher) Date: Tue, 25 Sep 2018 21:12:24 +1000 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: > Date: Mon, 24 Sep 2018 09:46:16 +0200 > From: Marko Ristin-Kaufmann > To: Python-Ideas > Subject: Re: [Python-ideas] Why is design-by-contracts not widely > adopted? > Message-ID: > > Content-Type: text/plain; charset="utf-8" [munch] > Their users would hugely benefit from a more mature > and standardized contracts library with informative violation messages. 
Will respond in another message, because it's a big topic. > I really don't see how DbC has to do with duck typing (unless you reduce it > to mere isinstance conditions, which would simply be a straw-man argument) > -- could you please clarify? I argue that Design by Contract doesn't make sense for Python and other dynamically typed, duck typed languages because it's contrary to how the language, and the programmer, expects to work. In Python we can write something like: def foo(x): x.bar(y) What's the type of x? What's the type of y? What is the contract of bar? Don't know, don't care. x, or y, can be an instance, a class, a module, a proxy for a remote web service. The only "contract" is that object x will respond to message bar that takes one argument. Object x, do whatever you want with it. And that's a feature, not a bug, not bad design. It follows Postel's Law for Internet protocols of being liberal in what you accept. It follows the Agile principle of valuing working software over comprehensive doco. It allows software components to be glued together quickly and easily. It's a style of programming that has been successful for many years, not just in Python but also in Lisp and Smalltalk and Perl and JavaScript. It works. Not for everything. If I were writing the avionics control routines for a helicopter gas turbine, I'd use formal notation and static type checking and preconditions and whatnot. But I wouldn't be using Python either. > As soon as you need to document your code, and > this is what most modules have to do in teams of more than one person > (especially so if you are developing a library for a wider audience), you > need to write down the contracts. Please see above where I tried to > explained that 2-5) are inferior approaches to documenting contracts > compared to 1). You left off option 6), plain text. Comments. Docstrings. README files. Web pages. Books. In my experience, this is what most people consider documentation. A good book, a good blog post, can explain more about how a library works and what the implementation requirements and restrictions are than formal contract notation. In particular, contracts in Eiffel don't explain *why* they're there. As for 4) reading the code, why not? "Use the source, Luke" is now a programming cliche because it works. It's particularly appropriate for Python packages which are usually distributed in source form and, as you yourself noted, easy to read. -- cheers, Hugh Fisher From mertz at gnosis.cx Tue Sep 25 07:47:27 2018 From: mertz at gnosis.cx (David Mertz) Date: Tue, 25 Sep 2018 07:47:27 -0400 Subject: [Python-ideas] Fwd: Keyword only argument on function call In-Reply-To: References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <951c4837-ca57-4e7b-9502-985a58f5c05c@googlegroups.com> Message-ID: I'm still not sure why all this focus on new syntax or convoluted IDE enhancements. I presented a very simple utility function that accomplishes exactly the started goal of DRY in keyword arguments. Yes, I wrote a first version that was incomplete. And perhaps these 8-9 lines miss some corner case. But the basic goal is really, really easy to accomplish with existing Python. >>> import inspect >>> def reach(name): ... for f in inspect.stack(): ... if name in f[0].f_locals: ... return f[0].f_locals[name] ... return None ... >>> def use(names): ... kws = {} ... for name in names.split(): ... kws[name] = reach(name) ... return kws ... >>> def function(a=11, b=22, c=33, d=44): ... print(a, b, c, d) ... 
>>> function(a=77, **use('b d')) 77 None 33 None >>> def foo(): ... a, b, c = 1, 2, 3 ... function(a=77, **use('b d')) ... >>> foo() 77 2 33 None On Tue, Sep 25, 2018, 6:31 AM Anders Hovm?ller wrote: > Hi, > > I'd like to reopen this discussion if anyone is interested. Some things > have changed since I wrote my original proposal so I'll first summarize: > > 1. People seem to prefer the syntax `foo(=a)` over the syntax I suggested. > I believe this is even more trivial to implement in CPython than my > original proposal anyway... > 2. I have updated my analysis tool: > https://gist.github.com/boxed/610b2ba73066c96e9781aed7c0c0b25c It will > now also give you statistics on the number of arguments function calls > have. I would love to see some statistics for other closed source programs > you might be working on and how big those code bases are. > 3. I have made a sort-of implementation with MacroPy: > https://github.com/boxed/macro-kwargs/blob/master/test.py I think this is > a dead end, but it was easy to implement and fun to try! > 4. I have also recently had the idea that a foo=foo type pattern could be > handled in for example PyCharm as a code folding feature (and maybe as a > completion feature). > > I still think that changing Pythons syntax is the right way to go in the > long run but with point 4 above one could experience what this feature > would feel like without running a custom version of Python and without > changing your code. I admit to a lot of trepidation about wading into > PyCharms code though, I have tried to do this once before and I gave up. > > Any thoughts? > > / Anders > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hugo.fisher at gmail.com Tue Sep 25 07:59:53 2018 From: hugo.fisher at gmail.com (Hugh Fisher) Date: Tue, 25 Sep 2018 21:59:53 +1000 Subject: [Python-ideas] Why is design-by-contracts not widely In-Reply-To: References: Message-ID: > Date: Mon, 24 Sep 2018 09:46:16 +0200 > From: Marko Ristin-Kaufmann > To: Python-Ideas > Subject: Re: [Python-ideas] Why is design-by-contracts not widely > adopted? [munch] > Python is easier to write and read, and there are no libraries which are > close in quality in Eiffel space (notably, Numpy, OpenCV, nltk and > sklearn). I really don't see how the quality of these libraries have > anything to do with lack (or presence) of the contracts. OpenCV and Numpy > have contracts all over their code (written as assertions and not > documented), albeit with very non-informative violation messages. And they > are great libraries. Their users would hugely benefit from a more mature > and standardized contracts library with informative violation messages. I would say the most likely outcome of adding Design by Contract would be no change in the quality or usefulness of these libraries, with a small but not insignificant chance of a decline in quality. Fred Brooks in his "No Silver Bullet" paper distinguished between essential complexity, which is the problem we try to solve with software, and accidental complexity, solving the problems caused by your tools and/or process that get in the way of solving the actual problem. 
"Yak shaving" is a similar, less formal term for accidental complexity, when you have to do something before you can do something before you can actually do some useful work. Adding new syntax or semantics to a programming language very often adds accidental complexity. C and Python (currently) are known as simple languages. When starting a programming project in C or Python, there's maybe a brief discussion about C99 or C11, or Python 3.5 or 3.6, but that's it. There's one way to do it. On the other hand C++ is notorious for having been designed with a shovel rather than a chisel. The people adding all the "features" were well intentioned, but it's still a mess. C++ programming projects often start by specifying exactly which bits of the language the programming team will be allowed to use. I've seen these reach hundreds of pages in length, consuming God knows how many hours to create, without actually creating a single line of useful software. I think a major reason that Design by Contract hasn't been widely adopted in the three decades since its introduction is because, mostly, it creates more accidental complexity than it reduces essential complexity, so the costs outweigh any benefits. Software projects, in any language, never have enough time to do everything. By your own example, the Python developers of numpy, OpenCV, nlk, and sklearn; who most certainly weren't writing contracts; produced better quality software than the Eiffel equivalent developers who (I assume) did use DbC. Shouldn't the Eiffel developers be changing their development method, not the Python developers? Maybe in a world with infinite resources contracts could be added to those Python packages, or everything in PyPi, and it would be an improvement. But we don't. So I'd like to see the developers of numpy etc keep doing whatever it is that they're doing now. -- cheers, Hugh Fisher From boxed at killingar.net Tue Sep 25 08:32:30 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Tue, 25 Sep 2018 14:32:30 +0200 Subject: [Python-ideas] Fwd: Keyword only argument on function call In-Reply-To: References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <951c4837-ca57-4e7b-9502-985a58f5c05c@googlegroups.com> Message-ID: <0CC4B614-3A09-4299-BDA4-2CBB8CA5C3D1@killingar.net> > I'm still not sure why all this focus on new syntax or convoluted IDE enhancements. I presented a very simple utility function that accomplishes exactly the started goal of DRY in keyword arguments. And I?ve already stated my reasons for rejecting this specific solution, but I?ll repeat them for onlookers: 1. Huge performance penalty 2. Rather verbose, so somewhat fails on the stated goal of improving readability 3. Tooling* falls down very hard on this My macropy implementation that I linked to solves 1, improves 2 somewhat (but not much), and handled half of 3 by resulting in code that tooling can validate that the passed variables exists but fails in that tooling won?t correctly validate that the arguments actually correspond to existing parameters. * by tooling I mean editors like PyCharm and static analysis tools like mypy / Anders From rhodri at kynesim.co.uk Tue Sep 25 09:08:23 2018 From: rhodri at kynesim.co.uk (Rhodri James) Date: Tue, 25 Sep 2018 14:08:23 +0100 Subject: [Python-ideas] Why is design-by-contracts not widely In-Reply-To: References: Message-ID: <02de4e01-fd83-016b-bcba-f8b508e7884f@kynesim.co.uk> On 25/09/18 12:59, Hugh Fisher wrote: Thank you for a very well thought out post, Hugh. I completely agree. 
I just wanted to pull out one comment: > Adding new syntax or semantics to a programming language very often adds > accidental complexity. This is, in my view, the main reason why the bar for adding new syntax to Python is and should be so high. People advocating new syntax often remark that programmers can choose not to use it; they don't have to write their Python using the new syntax. That is true as far as it goes. However, programmers do have to *read* Python using the new syntax, so it does impact on them. The additional accidental complexity isn't something you can just dismiss because not everyone will have to use it. -- Rhodri James *-* Kynesim Ltd From mertz at gnosis.cx Tue Sep 25 11:27:44 2018 From: mertz at gnosis.cx (David Mertz) Date: Tue, 25 Sep 2018 11:27:44 -0400 Subject: [Python-ideas] Fwd: Keyword only argument on function call In-Reply-To: <0CC4B614-3A09-4299-BDA4-2CBB8CA5C3D1@killingar.net> References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <951c4837-ca57-4e7b-9502-985a58f5c05c@googlegroups.com> <0CC4B614-3A09-4299-BDA4-2CBB8CA5C3D1@killingar.net> Message-ID: On Tue, Sep 25, 2018 at 8:32 AM Anders Hovm?ller wrote: > > I'm still not sure why all this focus on new syntax or convoluted IDE > enhancements. I presented a very simple utility function that accomplishes > exactly the started goal of DRY in keyword arguments. > > And I?ve already stated my reasons for rejecting this specific solution, > but I?ll repeat them for onlookers: > > 1. Huge performance penalty > Huh? Have you actually benchmarked this is some way?! A couple lookups into the namespace are really not pricey operations. The cost is definitely more than zero, but for any function that does anything even slightly costly, the lookups would be barely in the noise. > 2. Rather verbose, so somewhat fails on the stated goal of improving > readability > The "verbose" idea I propose is 3-4 characters more, per function call, than your `fun(a, b, *, this, that)` proposal. It will actually be shorter than your newer `fun(a, b, =this, =that)` proposal once you use 4 or more keyword arguments. > 3. Tooling* falls down very hard on this > It's true that tooling doesn't currently support my hypothetical function. It also does not support your hypothetical syntax. It would be *somewhat easier* to add special support for a function with a special name like `use()` than for new syntax. But obviously that varies by which tool and what purpose it is accomplishing. Of course, PyCharm and MyPy and PyLint aren't going to bother special casing a `use()` function unless or until it is widely used and/or part of the builtins or standard library. I don't actually advocate for such inclusion, but I wouldn't be stridently against that since it's just another function name, nothing really special. -- Keeping medicines from the bloodstreams of the sick; food from the bellies of the hungry; books from the hands of the uneducated; technology from the underdeveloped; and putting advocates of freedom in prisons. Intellectual property is to the 21st century what the slave trade was to the 16th. -------------- next part -------------- An HTML attachment was scrubbed... URL: From leewangzhong+python at gmail.com Tue Sep 25 12:12:16 2018 From: leewangzhong+python at gmail.com (Franklin? 
Lee) Date: Tue, 25 Sep 2018 12:12:16 -0400 Subject: [Python-ideas] "old" values in postconditions In-Reply-To: References: Message-ID: On Sun, Sep 23, 2018 at 2:05 AM Marko Ristin-Kaufmann wrote: > > Hi, > > (I'd like to fork from a previous thread, "Pre-conditions and post-conditions", since it got long and we started discussing a couple of different things. Let's discuss in this thread the implementation of a library for design-by-contract and how to push it forward to hopefully add it to the standard library one day.) > > For those unfamiliar with contracts and current state of the discussion in the previous thread, here's a short summary. The discussion started by me inquiring about the possibility to add design-by-contract concepts into the core language. The idea was rejected by the participants mainly because they thought that the merit of the feature does not merit its costs. This is quite debatable and seems to reflect many a discussion about design-by-contract in general. Please see the other thread, "Why is design-by-contract not widely adopted?" if you are interested in that debate. > > We (a colleague of mine and I) decided to implement a library to bring design-by-contract to Python since we don't believe that the concept will make it into the core language anytime soon and we needed badly a tool to facilitate our work with a growing code base. > > The library is available at http://github.com/Parquery/icontract. The hope is to polish it so that the wider community could use it and once the quality is high enough, make a proposal to add it to the standard Python libraries. We do need a standard library for contracts, otherwise projects with conflicting contract libraries can not integrate (e.g., the contracts can not be inherited between two different contract libraries). > > So far, the most important bits have been implemented in icontract: > > Preconditions, postconditions, class invariants > Inheritance of the contracts (including strengthening and weakening of the inherited contracts) > Informative violation messages (including information about the values involved in the contract condition) > Sphinx extension to include contracts in the automatically generated documentation (sphinx-icontract) > Linter to statically check that the arguments of the conditions are correct (pyicontract-lint) > > We are successfully using it in our code base and have been quite happy about the implementation so far. > > There is one bit still missing: accessing "old" values in the postcondition (i.e., shallow copies of the values prior to the execution of the function). This feature is necessary in order to allow us to verify state transitions. > > For example, consider a new dictionary class that has "get" and "put" methods: > > from typing import Optional > > from icontract import post > > class NovelDict: > def length(self)->int: > ... > > def get(self, key: str) -> Optional[str]: > ... > > @post(lambda self, key, value: self.get(key) == value) > @post(lambda self, key: old(self.get(key)) is None and old(self.length()) + 1 == self.length(), > "length increased with a new key") > @post(lambda self, key: old(self.get(key)) is not None and old(self.length()) == self.length(), > "length stable with an existing key") > def put(self, key: str, value: str) -> None: > ... > > How could we possible implement this "old" function? > > Here is my suggestion. I'd introduce a decorator "before" that would allow you to store whatever values in a dictionary object "old" (i.e. 
an object whose properties correspond to the key/value pairs). The "old" is then passed to the condition. Here is it in code: > > # omitted contracts for brevity > class NovelDict: > def length(self)->int: > ... > > # omitted contracts for brevity > def get(self, key: str) -> Optional[str]: > ... > > @before(lambda self, key: {"length": self.length(), "get": self.get(key)}) > @post(lambda self, key, value: self.get(key) == value) > @post(lambda self, key, old: old.get is None and old.length + 1 == self.length(), > "length increased with a new key") > @post(lambda self, key, old: old.get is not None and old.length == self.length(), > "length stable with an existing key") > def put(self, key: str, value: str) -> None: > ... > > The linter would statically check that all attributes accessed in "old" have to be defined in the decorator "before" so that attribute errors would be caught early. The current implementation of the linter is fast enough to be run at save time so such errors should usually not happen with a properly set IDE. > > "before" decorator would also have "enabled" property, so that you can turn it off (e.g., if you only want to run a postcondition in testing). The "before" decorators can be stacked so that you can also have a more fine-grained control when each one of them is running (some during test, some during test and in production). The linter would enforce that before's "enabled" is a disjunction of all the "enabled"'s of the corresponding postconditions where the old value appears. > > Is this a sane approach to "old" values? Any alternative approach you would prefer? What about better naming? Is "before" a confusing name? The dict can be splatted into the postconditions, so that no special name is required. This would require either that the lambdas handle **kws, or that their caller inspect them to see what names they take. Perhaps add a function to functools which only passes kwargs that fit. Then the precondition mechanism can pass `self`, `key`, and `value` as kwargs instead of args. For functions that have *args and **kwargs, it may be necessary to pass them to the conditions as args and kwargs instead. The name "before" is a confusing name. It's not just something that happens before. It's really a pre-`let`, adding names to the scope of things after it, but with values taken before the function call. Based on that description, other possible names are `prelet`, `letbefore`, `predef`, `defpre`, `beforescope`. Better a name that is clearly confusing than one that is obvious but misleading. By the way, should the first postcondition be `self.get(key) is value`, checking for identity rather than equality? From jamtlu at gmail.com Tue Sep 25 12:52:11 2018 From: jamtlu at gmail.com (James Lu) Date: Tue, 25 Sep 2018 12:52:11 -0400 Subject: [Python-ideas] "old" values in postconditions In-Reply-To: References: Message-ID: <692CDAC5-9985-40A9-A421-B4D69D1294AA@gmail.com> Hmm, I was wrong: there is no reliable way to get the code of a lambda function. If it was possible to execute all code paths of the function, we could monkey patch the builtins so { } used our own custom set class. Alternatively, the decorator could also accept a string. Or maybe we could send a PEP to add the .func_code attribute to lambdas as well as normal functions. There?s also a technique online where they find the lambda?s source by locating the file the function was defined in and then removing the irrelevant parts, but that just doesn?t sound practical to me. There?s also MacroPy. 
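To make the problem concrete, here is a minimal standalone sketch (a hypothetical script, not icontract code) of why recovering an individual lambda's source is unreliable: inspect.getsource() returns the whole source line(s) the lambda was defined on, not the lambda expression itself, so two conditions written on one line cannot be told apart.

# sketch.py -- illustrative only
import inspect

# two contract conditions defined on the same source line
conds = [lambda x: x > 0, lambda x: x < 100]

# both calls print the identical full line:
#     conds = [lambda x: x > 0, lambda x: x < 100]
# so the text of each individual lambda cannot be recovered from it
print(inspect.getsource(conds[0]))
print(inspect.getsource(conds[1]))

Any string-based approach therefore still has to re-parse that line and guess which lambda was meant.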
I think the best solution would be to mock the old object and record the operations done to the object, like the other replier gave a PoC of. Proposed syntax from icontract import post, old @post(lambda: ..., key=old.self.key(), ) Sent from my iPhone > On Sep 25, 2018, at 1:49 AM, Marko Ristin-Kaufmann wrote: > > Hi James, > Thanks for the feedback! > > I also thought about decompiling the condition to find its AST and figure out what old values are needed. However, this is not so easily done at the moment as all the decompilation libraries I looked at (meta, ucompyle6) are simply struggling to keep up with the development of the python bytecode. In other words, the decompiler needs to be updated with every new version of python which is kind of a loosing race (unless the python devs themselves don't provide this functionality which is not the case as far as I know). > > There is macropy (https://github.com/lihaoyi/macropy) which was suggested on the other thread (https://groups.google.com/forum/#!topic/python-ideas/dmXz_7LH4GI) that I'm currently looking at. > > Cheers, > Marko > > >> On Tue, 25 Sep 2018 at 00:35, James Lu wrote: >> You could disassemble (import dis) the lambda to biew the names of the lambdas. >> >> @before(lambda self, key, _, length, get: self.length(), self.get(key)) >> >> Perhaps you could disassemble the function code and look at all operations or accesses that are done to ?old.? and evaluate those expressions before the function runs. Then you could ?replace? the expression. >> @post(lambda self, key, old: old.get is None and old.length + 1 == >> self.length()) >> >> Either the system would grab old.get and old.length or be greedy and grab old.get is None and old.length + 1. It would then replace the old.get and old.length with injects that only respond to is None and +1. >> >> Or, a syntax like this >> @post(lambda self, key, old: [old.get(old.key)] is None and [old.self.length() + 1] == >> self.length()) >> >> Where the stuff inside the brackets is evaluated before the decorated function runs. It would be useful for networking functions or functions that do something ephemeral, where data related to the value being accessed needed for the expression no longer exists after the function. >> >> This does conflict with list syntax forever, so maybe either force people to do list((expr,)) or use an alternate syntax like one item set syntax { } or double set syntax {{ }} or double list syntax [[ ]]. Ditto with having to avoid the literals for the normal meaning. >> >> You could modify Python to accept any expression for the lambda function and propose that as a PEP. (Right now it?s hardcoded as a dotted name and optionally a single argument list surrounded by parentheses.) >> >> I suggest that instead of ?@before? it?s ?@snapshot? and instead of ?old? it?s ?snapshot?. >> >> Python does have unary plus/minus syntax as well as stream operators (<<, >>) and list slicing syntax and the @ operator and operators & and | if you want to play with syntax. There?s also the line continuation character for crazy lambdas. >> >> Personally I prefer >> @post(lambda self, key, old: {{old.self.get(old.key)}} and {{old.self.length() + 1}} == >> self.length()) >> >> because it?s explicit about what it does (evaluate the expressions within {{ }} before the function runs. I also find it elegant. >> >> Alternatively, inside the {{ }} could be a special scope where locals() is all the arguments @pre could?ve received as a dictionary. 
For either option you can remove the old parameter from the lambda. Example: >> @post(lambda self, key: {{self.get(key)}} and {{self.length() + 1}} == >> self.length()) >> >> Perhaps the convention should be to write {{ expr }} (with the spaces in between). >> >> You?d probably have to use the ast module to inspect it instead of the dis modul. Then find some way to reconstruct the expressions inside the double brackets- perhaps by reconstructing the AST and compiling it to a code object, or perhaps by finding the part of the string the expression is located. dis can give you the code as a string and you can run a carefully crafted regex on it. >> _______________________________________________ >> Python-ideas mailing list >> Python-ideas at python.org >> https://mail.python.org/mailman/listinfo/python-ideas >> Code of Conduct: http://python.org/psf/codeofconduct/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From leewangzhong+python at gmail.com Tue Sep 25 12:52:35 2018 From: leewangzhong+python at gmail.com (Franklin? Lee) Date: Tue, 25 Sep 2018 12:52:35 -0400 Subject: [Python-ideas] Why is design-by-contracts not widely In-Reply-To: References: Message-ID: Those arguments are rules of thumb, which may or may not apply to DbC, and speculation, based on why DbC isn't more popular, to explain why DbC isn't more popular. They are general arguments for features in general, whereas Marko has been giving arguments for why DbC in particular is good or why it isn't more popular. The general arguments don't address the specific arguments. I don't use DbC, but I do use Numpy. Numpy is a very mathematical library, with many pure functions. It has lots of similarities between its functions and methods. I can easily see how design-by-contract can help Numpy users read the documentation and compare functions. Text is often less structured, so it is less likely to come out consistent. After all, isn't that why we keep adding structure to it, such as with Javadocs and Sphinx? Those examples add more syntax, while Marko's proposal doesn't necessarily require more syntax. On Tue, Sep 25, 2018 at 8:00 AM Hugh Fisher wrote: > > > Date: Mon, 24 Sep 2018 09:46:16 +0200 > > From: Marko Ristin-Kaufmann > > To: Python-Ideas > > Subject: Re: [Python-ideas] Why is design-by-contracts not widely > > adopted? > > [munch] > > > Python is easier to write and read, and there are no libraries which are > > close in quality in Eiffel space (notably, Numpy, OpenCV, nltk and > > sklearn). I really don't see how the quality of these libraries have > > anything to do with lack (or presence) of the contracts. OpenCV and Numpy > > have contracts all over their code (written as assertions and not > > documented), albeit with very non-informative violation messages. And they > > are great libraries. Their users would hugely benefit from a more mature > > and standardized contracts library with informative violation messages. > > I would say the most likely outcome of adding Design by Contract would > be no change in the quality or usefulness of these libraries, with a small > but not insignificant chance of a decline in quality. > > Fred Brooks in his "No Silver Bullet" paper distinguished between essential > complexity, which is the problem we try to solve with software, and accidental > complexity, solving the problems caused by your tools and/or process that > get in the way of solving the actual problem. 
"Yak shaving" is a similar, less > formal term for accidental complexity, when you have to do something before > you can do something before you can actually do some useful work. > > Adding new syntax or semantics to a programming language very often adds > accidental complexity. > > C and Python (currently) are known as simple languages. When starting a > programming project in C or Python, there's maybe a brief discussion about > C99 or C11, or Python 3.5 or 3.6, but that's it. There's one way to do it. > > On the other hand C++ is notorious for having been designed with a shovel > rather than a chisel. The people adding all the "features" were well > intentioned, > but it's still a mess. C++ programming projects often start by > specifying exactly > which bits of the language the programming team will be allowed to use. I've > seen these reach hundreds of pages in length, consuming God knows how > many hours to create, without actually creating a single line of useful > software. > > I think a major reason that Design by Contract hasn't been widely adopted > in the three decades since its introduction is because, mostly, it creates > more accidental complexity than it reduces essential complexity, so the > costs outweigh any benefits. > > Software projects, in any language, never have enough time to do everything. > By your own example, the Python developers of numpy, OpenCV, nlk, and > sklearn; who most certainly weren't writing contracts; produced better quality > software than the Eiffel equivalent developers who (I assume) did use DbC. > Shouldn't the Eiffel developers be changing their development method, not > the Python developers? > > Maybe in a world with infinite resources contracts could be added to those > Python packages, or everything in PyPi, and it would be an improvement. > But we don't. So I'd like to see the developers of numpy etc keep doing > whatever it is that they're doing now. > > -- > > cheers, > Hugh Fisher > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ From marko.ristin at gmail.com Tue Sep 25 13:18:49 2018 From: marko.ristin at gmail.com (Marko Ristin-Kaufmann) Date: Tue, 25 Sep 2018 19:18:49 +0200 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: Hi Robert, You'll lose folks attention very quickly when you try to tell folk > what they do and don't understand. I apologize if I sounded offending, that was definitely not my intention. I appreciate that you addressed that. I suppose it's cultural/language issue and the wording was probably inappropriate. Please let me clarify what I meant: there was a misconception as DbC was reduced to a tool for testing, and, in a separate message, reduced to type-checks at runtime. These are clearly misconceptions, as DbC (as origianally proposed by Hoare and later popularized by Meyer) include other relevant aspects which are essential and hence can not be overseen or simply ignored. If we are arguing about DbC without these aspects then we are simply falling pray to a straw-man fallacy. Claiming that DbC annotations will improve the documentation of every > single library on PyPI is an extraordinary claim, and such claims > require extraordinary proof. I don't know what you mean by "extraordinary" claim and "extraordinary" proof, respectively. 
I tried to show that DbC is a great tool and far superior to any other tools currently used to document contracts in a library, please see my message https://groups.google.com/d/msg/python-ideas/dmXz_7LH4GI/5A9jbpQ8CAAJ. Let me re-use the enumeration I used in the message and give you a short summary. The implicit or explicit contracts are there willy-nilly. When you use a module, either you need to figure them out using trial-and-error or looking at the implementation (4), looking at the test cases and hoping that they generalize (5), write them as doctests (3) or write them in docstrings as human text (2); or you write them formally as explicit contracts (1). I could not identify any other methods that can help you with expectations when you call a function or use a class (apart from formal methods and proofs, which I omitted as they seem too esoteric for the current discussion). *Given that: * * There is no other method for representing contracts, * people are trained and can read formal statements and * there is tooling available to write, maintain and represent contracts in a nice way I see formal contracts (1) as a superior tool. The deficiencies of other approaches are: 2) Comments and docstrings inevitably rot and get disconnected from the implementation in my and many other people's experience and studies. 3) Doctests are much longer and hence more tedious to read and maintain, they need extra text to signal the intent (is it a simple test or an example how boundary conditions are handled or ...). In any non-trivial case, they need to include even the contract itself. 4) Looking at other people's code to figure out the contracts is tedious and usually difficult for any non-trivial function. 5) Test cases can be difficult to read since they include much broader testing logic (mocking, set up). Most libraries do not ship with the test code. Identifying test cases which demonstrate the contracts can be difficult. *Any* function that is used by multiple developers which operates on the restricted range of input values and gives out structured output values benefits from contracts (1) since the user of the function needs to figure them out to properly call the function and handle its results correctly. I assume that every package on pypi is published to be used by wider audience, and not the developer herself. Hence every package on pypi would benefit from formal contracts. Some predicates are hard to formulate, and we will never be able to formally write down *all* the contracts. But that doesn't imply for me to *not use contracts at all* (analogously, some functionality is untestable, but that doesn't mean that we don't test what we can). I would be very grateful if you could point me where this exposition is wrong (maybe referring to my original message, https://groups.google.com/d/msg/python-ideas/dmXz_7LH4GI/5A9jbpQ8CAAJ, which I spent more thought on formulating). So far, I was not confronted against nor read on the internet a plausible argument against formal contracts (the only two exceptions being lack of tools and less-skilled programmers have a hard time reading formal statements as soon as they include boolean logic and quantifiers). I'm actively working on the former, and hope that the latter would improve with time as education in computer sciences improves. Another argument, which I did read often on internet, but don't really count is that quality software is not a priority and most projects hence dispense of documentation or testing. 
This should, hopefully, not apply to public pypi packages and is highly impractical for any medium-size project with multiple developers (and very costly in the long run). I can think of many libraries where necessary pre and post conditions > (such as 'self is still locked') are going to be noisy, and at risk of > reducing comprehension if the DbC checks are used to enhance/extended > documentation. It is up to the developer to decide which contracts are enforced during testing, production or displayed in the documentation (you can pick the subset of the three, it's not an exclusion). This feature ("enabled" argument to a contract) has been already implemented in the icontract library. Some of the examples you've been giving would be better expressed with a more capable type system in my view (e.g. Rust's), but I have no good idea about adding that into Python :/. I don't see how type system would help regardless how strict it would be? Unless *each *input and *each *output represent a special type, which would be super confusing as soon as you would put them in the containers and have to struggle with invariance, contravariance and covariance. Please see https://github.com/rust-lang/rfcs/issues/1077 for a discussion about introducing DbC to Rust. Unfortunately, the discussion about contracts in Rust is also based on misconceptions (*e.g., *see https://github.com/rust-lang/rfcs/issues/1077#issuecomment-94582917) -- there seems to be something wrong in the way anybody proposing DbC exposes contracts to the wider audience and miss to address these issues in a good way. So most people just react instinctively with "80% already covered with type systems" / "mere runtime type checks, use assert" and "that's only an extension to testing, so why bother" :(. I would now like to answer Hugh and withdraw from the discussion pro/contra formal contracts unless there is a rational, logical argument disputing the DbC in its entirety (not in one of its specific aspects or as a misconception/straw-man). A lot has been already said, many articles have been written (I linked some of the pages which I thought were short & good reads and I would gladly supply more reading material). I doubt I can find a better way to contribute to the discussion. Cheers, Marko On Tue, 25 Sep 2018 at 10:01, Robert Collins wrote: > On Mon, 24 Sep 2018 at 19:47, Marko Ristin-Kaufmann > wrote: > > > > Hi, > > > > Thank you for your replies, Hugh and David! Please let me address the > points in serial. > > > > Obvious benefits > > You both seem to misconceive the contracts. The goal of the > design-by-contract is not reduced to testing the correctness of the code, > as I reiterated already a couple of times in the previous thread. The > contracts document formally what the caller and the callee expect and need > to satisfy when using a method, a function or a class. This is meant for a > module that is used by multiple people which are not necessarily familiar > with the code. They are not a niche. There are 150K projects on pypi.org. > Each one of them would benefit if annotated with the contracts. > > You'll lose folks attention very quickly when you try to tell folk > what they do and don't understand. > > Claiming that DbC annotations will improve the documentation of every > single library on PyPI is an extraordinary claim, and such claims > require extraordinary proof. 
> > I can think of many libraries where necessary pre and post conditions > (such as 'self is still locked') are going to be noisy, and at risk of > reducing comprehension if the DbC checks are used to enhance/extended > documentation. > > Some of the examples you've been giving would be better expressed with > a more capable type system in my view (e.g. Rust's), but I have no > good idea about adding that into Python :/. > > Anyhow, the thing I value most about python is its pithyness: its > extremely compact, allowing great developer efficiency, but the cost > of testing is indeed excessive if the tests are not structured well. > That said, its possible to run test suites with 10's of thousands of > tests in only a few seconds, so there's plenty of headroom for most > projects. > > -Rob > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marko.ristin at gmail.com Tue Sep 25 13:20:10 2018 From: marko.ristin at gmail.com (Marko Ristin-Kaufmann) Date: Tue, 25 Sep 2018 19:20:10 +0200 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: <23466.2308.577009.907279@turnbull.sk.tsukuba.ac.jp> References: <8030445F-D92C-4E00-A1A0-EB4735C53ED7@barrys-emacs.org> <37D7E886-201E-4F31-987D-5B21CA0691B8@barrys-emacs.org> <23465.49157.902727.935888@turnbull.sk.tsukuba.ac.jp> <23466.2308.577009.907279@turnbull.sk.tsukuba.ac.jp> Message-ID: Hi Steve, I'll give it a shot and implement a proof-of-concept icontrac-macro library based on macropy and see if that works. I'll keep you posted. Cheers, Marko On Tue, 25 Sep 2018 at 12:08, Stephen J. Turnbull < turnbull.stephen.fw at u.tsukuba.ac.jp> wrote: > Marko Ristin-Kaufmann writes: > > > Thanks a lot for pointing us to macropy -- I was not aware of the > library, > > it looks very interesting! > > > > Do you have any experience how macropy fit > > Sorry, no. I was speaking as someone who is familiar with macros from > Lisp but doesn't miss them in Python, and who also has been watching > python-dev and python-ideas for about two decades now, so I've heard > of things like MacroPy and know how the core developers think to a > great extent. > > > I'm also a bit worried how macropy would work out in the libraries > > published to pypi -- imagine if many people start using contracts. > > Suddenly, all these libraries would not only depend on a contract > library > > but on a macro library as well. > > That's right. > > > Is that something we should care about? > > Yes. Most Pythonistas (at least at present) don't much like macros. > They fear turning every program into its own domain-specific language. > I can't claim much experience with dependency hell, but I think that's > much less important from your point of view (see below). > > My point is mainly that, as you probably are becoming painfully aware, > getting syntax changes into Python is a fairly drawnout process. For > an example of the kind of presentation that motivates people to change > their mind from the default state of "if it isn't in Python yet, > YAGNI" to "yes, let's do *this* one", see > https://www.python.org/dev/peps/pep-0572/#appendix-a-tim-peters-s-findings > > Warning: Tim Peters is legendary, though still active occasionally. > All he has to do is post to get people to take notice. 
But this > Appendix is an example of why he gets that kind of R-E-S-P-E-C-T.[1] > > So the whole thing is a secret plot ;-) to present the most beautiful > syntax possible in your PEP (which will *not* be about DbC, but rather > about a small set of enabling syntax changes, hopefully a singleton), > along with an extended example, or a representative sample, of usage. > Because you have a working implementation using MacroPy (or the less > pretty[2] but fewer dependencies version based on condition strings > and eval) people can actually try it on their own code and (you hope, > they don't :-) they find a nestful of bugs by using it. > > > Potential dependency hell? (I already have a bad feeling about > > making icontract depend on asttokens and considerin-lining > > asttokens into icontract particularly for that reason). > > I don't think so. First, inlining an existing library is almost > always a bad idea. As for the main point, if the user sticks to one > major revision, and only upgrades to compatible bugfixes in the > Python+stdlib distribution, I don't see why two or three libraries > would be a major issue for a feature that the developer/project uses > extremely frequently. I've rarely experienced dependency hell, and in > both cases it was web frameworks (Django and Zope, to be specific, and > the dependencies involved were more or less internal to those > frameworks). If you or people you trust have other experience, forget > what I just said. :-) > > Of course it depends on the library, but as long as the library is > pretty strict about backward compatibility, you can upgrade it and get > new functionality for other callers in your code base (which are > likely to appear, you know -- human beings cannot stand to leave a > tool unused once they install it!) > > > > Note that this means *you cannot use macros in a file that is run > > > directly*, as it will not be passed through the import hooks. > > > > That would make contracts unusable in any stand-alone script, > > right? > > Yes, but really, no: > > # The run.py described by the MacroPy docs assumes a script that > # runs by just importing it. I don't have time to work out > # whether that makes more sense. This idiom of importing just a > # couple of libraries, and then invoking a function with a > # conventional name such as "run" or "process" is quite common. > # If you have docutils install, check out rstpep2html.py. > > import macropy.activate > from my_contractful_library import main > main() > > and away you go. 5 years from now that script will be a badge of > honor among Pythonic DbCers, and you won't be willing to give it up! > Just kidding, of course -- the ideal outcome is that the use case is > sufficiently persuasive to justify a syntax change so you don't need > MacroPy, or, perhaps some genius will come along and provide some > obscure construct that is already legal syntax! > > HTH > > Footnotes: > [1] R.I.P. Aretha! > > [2] Urk, I just realized there's another weakness to strings: you get > no help on checking their syntax from the compiler. For a proof-of- > concept that's OK, but if you end up using the DbC library in your > codebase for a couple years while the needed syntax change gathers > support, that would be really bad. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From leewangzhong+python at gmail.com Tue Sep 25 13:27:10 2018 From: leewangzhong+python at gmail.com (Franklin? 
Lee) Date: Tue, 25 Sep 2018 13:27:10 -0400 Subject: [Python-ideas] "old" values in postconditions In-Reply-To: <692CDAC5-9985-40A9-A421-B4D69D1294AA@gmail.com> References: <692CDAC5-9985-40A9-A421-B4D69D1294AA@gmail.com> Message-ID: Ew, magic. `{{foo}}` is already valid syntax (though it will always fail). I don't like this path. If the proposal requires new syntax or magic, it will be less likely to get accepted or even pip'd. Remember also that PyPy, IronPython, and Jython are still alive, and the latter two are still aiming for Python 3 (PyPy 3 is already available). On Tue, Sep 25, 2018 at 12:52 PM James Lu wrote: > > Hmm, I was wrong: there is no reliable way to get the code of a lambda function. You may be looking for inspect.signature(func). > Or maybe we could send a PEP to add the .func_code attribute to lambdas as well as normal functions. The new name is `.__code__`. There is some documentation of code objects in the `inspect` documentation (names beginning with `co_`). More documentation: https://docs.python.org/3.7/reference/datamodel.html#index-55 > There?s also a technique online where they find the lambda?s source by locating the file the function was defined in and then removing the irrelevant parts, but that just doesn?t sound practical to me. I'm surprised you haven't found inspect.getsource(func) Doesn't always work if the source is not associated to the definition of func, such as with functions defined in C. That shouldn't be a problem with lambdas. You might need to worry about file permission. I don't know if other interpreters will find it difficult to support `inspect`. You may be looking for `ast.parse(inspect.getsource(func))`. From jamtlu at gmail.com Tue Sep 25 13:47:50 2018 From: jamtlu at gmail.com (James Lu) Date: Tue, 25 Sep 2018 13:47:50 -0400 Subject: [Python-ideas] "old" values in postconditions In-Reply-To: References: <692CDAC5-9985-40A9-A421-B4D69D1294AA@gmail.com> Message-ID: <1D43FA84-DB7C-4020-BC8B-314D417BA9C9@gmail.com> > I'm surprised you haven't found > inspect.getsource(func) I did. That?s exactly what I was describing in the paragraph. It wouldn?t work in interactive mode and it includes everything on the same line of the lambda definition. From marko.ristin at gmail.com Tue Sep 25 13:49:00 2018 From: marko.ristin at gmail.com (Marko Ristin-Kaufmann) Date: Tue, 25 Sep 2018 19:49:00 +0200 Subject: [Python-ideas] Why is design-by-contracts not widely In-Reply-To: References: Message-ID: Hi Hugh, Software projects, in any language, never have enough time to do everything. > By your own example, the Python developers of numpy, OpenCV, nlk, and > sklearn; *who most certainly weren't writing contracts;* produced better > quality > software than the Eiffel equivalent developers who (I assume) did use DbC. > Shouldn't the Eiffel developers be changing their development method, not > the Python developers? (emphasis mine) This is *absolutely* *not true* as you can notice if you multiply any two matrices of wrong dimensions in numpy or opencv (or use them as weights in sklearn). For example, have a look at OpenCV functions. *Most of them include preconditions and postconditions *(*e.g., * https://docs.opencv.org/3.4.3/dc/d8c/namespacecvflann.html#a57191110b01f200e478c658f3b7a362d). I would even go as far to claim that OpenCV would be unusable without the contracts. Imagine if you had to figure out the dimensions of the matrix after each operation if it were lacking in the documentation. That would make the development sluggish as a snail. 
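To make this concrete, here is a minimal sketch of what such a dimension contract could look like with the icontract decorators discussed in this thread (the matmul function and its shape conditions are invented purely for illustration and are not taken from OpenCV or Numpy):

from icontract import pre, post
import numpy as np

@pre(lambda a, b: a.ndim == 2 and b.ndim == 2, "only 2-D matrices")
@pre(lambda a, b: a.shape[1] == b.shape[0], "inner dimensions must match")
@post(lambda result, a, b: result.shape == (a.shape[0], b.shape[1]))
def matmul(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    # delegate to numpy; the decorators only check the shapes
    return a @ b

If a caller passes matrices with mismatched shapes, the violation message points directly at the offending shapes instead of forcing the caller to reverse-engineer the dimensions from the implementation.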
Numpy provides contracts in text, *e.g. *see https://www.numpy.org/devdocs/reference/generated/numpy.ndarray.transpose.html#numpy.ndarray.transpose : ndarray.transpose(**axes*) Returns a view of the array with axes transposed. For a 1-D array, this has no effect. (To change between column and row vectors, first cast the 1-D array into a matrix object.) For a 2-D array, this is the usual matrix transpose. For an n-D array, if axes are given, their order indicates how the axes are permuted (see Examples). If axes are not provided and a.shape = (i[0], i[1], ... i[n-2], i[n-1]), then a.transpose().shape = (i[n-1], i[n-2], ... i[1], i[0]). As you can see, there are three contracts: 1) no effect on 1D array, 2) if a 2D array, it equals the matrix transpose, 3) if n-D array, the order of axes indicates the permutation. The contract 3) is written out formally. It might not be very clear or precise what is meant in 2) where formalizing it (at least to a certain extent) would remove many doubts. It is obvious to me that supplementing or replacing these contracts *in text* with *formal *contracts (1 and 2, since 3 is already formal) is extremely beneficial since: a) many developers use numpy and an improvement in documentation (such as higher precision and clarity) has a large impact on the users and b) enforcing the contracts automatically (be it only during testing or in production) prevents bugs related to contract violation in numpy such that the users can effectively rely on the contracts. The argument b) is important since now I just rely that these statements are true whenever I use numpy. If there is an error in numpy it takes a long time to figure out since I doubt the last that there is an error in numpy and especially it takes even longer that I suspect numpy of not satisfying its written contracts. Please mind that contracts can be toggled on/off whenever the performance is important so that slow execution is not an argument against the formal contracts. Cheers, Marko On Tue, 25 Sep 2018 at 14:01, Hugh Fisher wrote: > > Date: Mon, 24 Sep 2018 09:46:16 +0200 > > From: Marko Ristin-Kaufmann > > To: Python-Ideas > > Subject: Re: [Python-ideas] Why is design-by-contracts not widely > > adopted? > > [munch] > > > Python is easier to write and read, and there are no libraries which are > > close in quality in Eiffel space (notably, Numpy, OpenCV, nltk and > > sklearn). I really don't see how the quality of these libraries have > > anything to do with lack (or presence) of the contracts. OpenCV and Numpy > > have contracts all over their code (written as assertions and not > > documented), albeit with very non-informative violation messages. And > they > > are great libraries. Their users would hugely benefit from a more mature > > and standardized contracts library with informative violation messages. > > I would say the most likely outcome of adding Design by Contract would > be no change in the quality or usefulness of these libraries, with a small > but not insignificant chance of a decline in quality. > > Fred Brooks in his "No Silver Bullet" paper distinguished between essential > complexity, which is the problem we try to solve with software, and > accidental > complexity, solving the problems caused by your tools and/or process that > get in the way of solving the actual problem. "Yak shaving" is a similar, > less > formal term for accidental complexity, when you have to do something before > you can do something before you can actually do some useful work. 
> > Adding new syntax or semantics to a programming language very often adds > accidental complexity. > > C and Python (currently) are known as simple languages. When starting a > programming project in C or Python, there's maybe a brief discussion about > C99 or C11, or Python 3.5 or 3.6, but that's it. There's one way to do it. > > On the other hand C++ is notorious for having been designed with a shovel > rather than a chisel. The people adding all the "features" were well > intentioned, > but it's still a mess. C++ programming projects often start by > specifying exactly > which bits of the language the programming team will be allowed to use. > I've > seen these reach hundreds of pages in length, consuming God knows how > many hours to create, without actually creating a single line of useful > software. > > I think a major reason that Design by Contract hasn't been widely adopted > in the three decades since its introduction is because, mostly, it creates > more accidental complexity than it reduces essential complexity, so the > costs outweigh any benefits. > > Software projects, in any language, never have enough time to do > everything. > By your own example, the Python developers of numpy, OpenCV, nlk, and > sklearn; who most certainly weren't writing contracts; produced better > quality > software than the Eiffel equivalent developers who (I assume) did use DbC. > Shouldn't the Eiffel developers be changing their development method, not > the Python developers? > > Maybe in a world with infinite resources contracts could be added to those > Python packages, or everything in PyPi, and it would be an improvement. > But we don't. So I'd like to see the developers of numpy etc keep doing > whatever it is that they're doing now. > > -- > > cheers, > Hugh Fisher > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marko.ristin at gmail.com Tue Sep 25 13:54:30 2018 From: marko.ristin at gmail.com (Marko Ristin-Kaufmann) Date: Tue, 25 Sep 2018 19:54:30 +0200 Subject: [Python-ideas] "old" values in postconditions In-Reply-To: <1D43FA84-DB7C-4020-BC8B-314D417BA9C9@gmail.com> References: <692CDAC5-9985-40A9-A421-B4D69D1294AA@gmail.com> <1D43FA84-DB7C-4020-BC8B-314D417BA9C9@gmail.com> Message-ID: Hi James and Franklin, getsource() definitely does not work. I tried for a long, long time to make it work and finally gave up. I parse in icontract the whole file where the lambda function resides and use asttokens to locate the node of the lambda (along some tree traversing upwards and making assumptions where the condition lambda lives). Have a look at: https://github.com/Parquery/icontract/blob/391d43005287831892b19dfdcbcfd3d48662c638/icontract/represent.py#L309 and https://github.com/Parquery/icontract/blob/391d43005287831892b19dfdcbcfd3d48662c638/icontract/represent.py#L157 On Tue, 25 Sep 2018 at 19:48, James Lu wrote: > > I'm surprised you haven't found > > inspect.getsource(func) > > I did. That?s exactly what I was describing in the paragraph. It wouldn?t > work in interactive mode and it includes everything on the same line of the > lambda definition. 
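(To illustrate the problem for anyone following the thread -- this is only a toy stand-in, not the actual icontract code: when the condition lambda is passed to a decorator, inspect.getsource() returns the whole statement, not just the lambda.

# demo.py -- must be run as a file, since getsource() fails in the REPL
import inspect

def post(condition, description=None):
    # toy stand-in for a contract decorator; it just returns the condition
    return condition

cond = post(lambda self, key: self.get(key) is None, "key must be new")
print(inspect.getsource(cond))
# prints the entire assignment line, including the call to post() and the
# description string, so the lambda still has to be carved out of it

That is exactly why icontract re-parses the file and locates the lambda node with asttokens instead.)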
> _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marko.ristin at gmail.com Tue Sep 25 14:11:27 2018 From: marko.ristin at gmail.com (Marko Ristin-Kaufmann) Date: Tue, 25 Sep 2018 20:11:27 +0200 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: Hi Hugh, > As soon as you need to document your code, and > > this is what most modules have to do in teams of more than one person > > (especially so if you are developing a library for a wider audience), you > > need to write down the contracts. Please see above where I tried to > > explained that 2-5) are inferior approaches to documenting contracts > > compared to 1). > > You left off option 6), plain text. Comments. Docstrings. > That was actually the option 2): > 2) Write precondtions and postconditions in docstring of the method as > human text. The problem with text is that it is not verifiable and hence starts to "rot". Noticing that text is wrong involves much more developer time & attention than automatically verifying the formal contracts. In Python we can write something like: def foo(x): x.bar(y) What's the type of x? What's the type of y? What is the contract of bar? Don't know, don't care. x, or y, can be an instance, a class, a module, a proxy for a remote web service. The only "contract" is that object x will respond to message bar that takes one argument. Object x, do whatever you want with it. I still don't see how this is connected to contracts or how contracts play a role there? If foo can accept any x and return any result then there is *no *contract. But hardly any function is like that. Most exercise a certain behavior on a subset of possible input values. The outputs also satisfy certain contracts, *i.e.* they also live in a certain subset of possible outputs. (Please mind that I don't mean strictly numerical ranges here -- it can be any subset of structured data.) As I already mentioned, the contracts have nothing to do with typing. You can use them for runtime type checks -- but that's a reduction of the concept to a very particular use case. Usually contracts read like this (from the numpy example linked in another message, https://www.numpy.org/devdocs/reference/generated/numpy.ndarray.transpose.html#numpy.ndarray.transpose ): ndarray.transpose(**axes*) Returns a view of the array with axes transposed. *For a 1-D array, this has no effect. (To change between column and row vectors, first cast the 1-D array into a matrix object.) For a 2-D array, this is the usual matrix transpose. For an n-D array, if axes are given, their order indicates how the axes are permuted (see Examples). If axes are not provided and a.shape = (i[0], i[1], ... i[n-2], i[n-1]), then a.transpose().shape = (i[n-1], i[n-2], ... i[1], i[0]).* (emphasis mine) Mind the three postconditions (case 1D array, case 2D array, case N-D array). As for 4) reading the code, why not? "Use the source, Luke" is now a > programming cliche because it works. It's particularly appropriate for > Python packages which are usually distributed in source form and, as > you yourself noted, easy to read. Because it is hard and costs a lot of time. 
The point of encapsulating a function is that I as a user don't have to know its details of implementation and its wider dependencies in the implementation. Looking at the code is the tool of last resort to figure out the contracts. Imagine if you had to look at the implementation of numpy.transpose() to figure out what happens when transposing a N-D array. Cheers, Marko On Tue, 25 Sep 2018 at 13:13, Hugh Fisher wrote: > > Date: Mon, 24 Sep 2018 09:46:16 +0200 > > From: Marko Ristin-Kaufmann > > To: Python-Ideas > > Subject: Re: [Python-ideas] Why is design-by-contracts not widely > > adopted? > > Message-ID: > > i7eQ2Gr8KDDOFONeaB-Qvt8LZo2A at mail.gmail.com> > > Content-Type: text/plain; charset="utf-8" > > [munch] > > > Their users would hugely benefit from a more mature > > and standardized contracts library with informative violation messages. > > Will respond in another message, because it's a big topic. > > > I really don't see how DbC has to do with duck typing (unless you reduce > it > > to mere isinstance conditions, which would simply be a straw-man > argument) > > -- could you please clarify? > > I argue that Design by Contract doesn't make sense for Python and other > dynamically typed, duck typed languages because it's contrary to how the > language, and the programmer, expects to work. > > In Python we can write something like: > > def foo(x): > x.bar(y) > > What's the type of x? What's the type of y? What is the contract of bar? > Don't know, don't care. x, or y, can be an instance, a class, a module, a > proxy for a remote web service. The only "contract" is that object x will > respond to message bar that takes one argument. Object x, do whatever > you want with it. > > And that's a feature, not a bug, not bad design. It follows Postel's Law > for Internet protocols of being liberal in what you accept. It follows the > Agile principle of valuing working software over comprehensive doco. > It allows software components to be glued together quickly and easily. > > It's a style of programming that has been successful for many years, > not just in Python but also in Lisp and Smalltalk and Perl and JavaScript. > It works. > > Not for everything. If I were writing the avionics control routines for a > helicopter gas turbine, I'd use formal notation and static type checking > and preconditions and whatnot. But I wouldn't be using Python either. > > > As soon as you need to document your code, and > > this is what most modules have to do in teams of more than one person > > (especially so if you are developing a library for a wider audience), you > > need to write down the contracts. Please see above where I tried to > > explained that 2-5) are inferior approaches to documenting contracts > > compared to 1). > > You left off option 6), plain text. Comments. Docstrings. README files. > Web pages. Books. In my experience, this is what most people consider > documentation. A good book, a good blog post, can explain more about > how a library works and what the implementation requirements and > restrictions are than formal contract notation. In particular, contracts in > Eiffel don't explain *why* they're there. > > As for 4) reading the code, why not? "Use the source, Luke" is now a > programming cliche because it works. It's particularly appropriate for > Python packages which are usually distributed in source form and, as > you yourself noted, easy to read. 
> > -- > > cheers, > Hugh Fisher > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From leewangzhong+python at gmail.com Tue Sep 25 14:20:06 2018 From: leewangzhong+python at gmail.com (Franklin? Lee) Date: Tue, 25 Sep 2018 14:20:06 -0400 Subject: [Python-ideas] "old" values in postconditions In-Reply-To: References: <692CDAC5-9985-40A9-A421-B4D69D1294AA@gmail.com> <1D43FA84-DB7C-4020-BC8B-314D417BA9C9@gmail.com> Message-ID: Ah. It wasn't clear to me from the thread that James was using `inspect`. As it happens, not only does getsource give more than it should, it also gives less than it should. The following bug still exists in 3.6.1. It was closed as a wontfix bug back in Python 2 because, I presume, fixing it would require changing code objects. https://bugs.python.org/issue17631 Regardless, I think it'd be better not to rely on such magic for the proposal. While it's a fun puzzle, whether it's possible or not, it still modifies the syntax and/or scoping and/or evaluation order rules of Python. On Tue, Sep 25, 2018 at 1:54 PM Marko Ristin-Kaufmann wrote: > > Hi James and Franklin, > > getsource() definitely does not work. I tried for a long, long time to make it work and finally gave up. I parse in icontract the whole file where the lambda function resides and use asttokens to locate the node of the lambda (along some tree traversing upwards and making assumptions where the condition lambda lives). > > Have a look at: > > https://github.com/Parquery/icontract/blob/391d43005287831892b19dfdcbcfd3d48662c638/icontract/represent.py#L309 > > and > https://github.com/Parquery/icontract/blob/391d43005287831892b19dfdcbcfd3d48662c638/icontract/represent.py#L157 > > > On Tue, 25 Sep 2018 at 19:48, James Lu wrote: >> >> > I'm surprised you haven't found >> > inspect.getsource(func) >> >> I did. That?s exactly what I was describing in the paragraph. It wouldn?t work in interactive mode and it includes everything on the same line of the lambda definition. >> _______________________________________________ >> Python-ideas mailing list >> Python-ideas at python.org >> https://mail.python.org/mailman/listinfo/python-ideas >> Code of Conduct: http://python.org/psf/codeofconduct/ From rosuav at gmail.com Tue Sep 25 15:40:45 2018 From: rosuav at gmail.com (Chris Angelico) Date: Wed, 26 Sep 2018 05:40:45 +1000 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: On Wed, Sep 26, 2018 at 3:19 AM Marko Ristin-Kaufmann wrote: >> Claiming that DbC annotations will improve the documentation of every >> single library on PyPI is an extraordinary claim, and such claims >> require extraordinary proof. > > > I don't know what you mean by "extraordinary" claim and "extraordinary" proof, respectively. I tried to show that DbC is a great tool and far superior to any other tools currently used to document contracts in a library, please see my message https://groups.google.com/d/msg/python-ideas/dmXz_7LH4GI/5A9jbpQ8CAAJ. Let me re-use the enumeration I used in the message and give you a short summary. > An ordinary claim is like "DbC can be used to improve code and/or documentation", and requires about as much evidence as you can stuff into a single email. Simple claim, low burden of proof. 
An extraordinary claim is like "DbC can improve *every single project* on PyPI". That requires a TON of proof. Obviously we won't quibble if you can only demonstrate that 99.95% of them can be improved, but you have to at least show that the bulk of them can. > There are 150K projects on pypi.org. Each one of them would benefit if annotated with the contracts. This is the extraordinary claim. To justify it, you have to show that virtually ANY project would benefit from contracts. So far, I haven't seen any such proof. ChrisA From leebraid at gmail.com Tue Sep 25 16:09:45 2018 From: leebraid at gmail.com (Lee Braiden) Date: Tue, 25 Sep 2018 21:09:45 +0100 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: Eh. It's too easy to cry "show me the facts" in any argument. To do that too often is to reduce all discussion to pendantry. That verifying data against the contract a function makes code more reliable should be self evident to anyone with even the most rudimentary understanding of a function call, let alone a library or large application. It's the reason why type checking exists, and why bounds checking exists, and why unit checking exists too. On Tue, 25 Sep 2018, 20:43 Chris Angelico, wrote: > On Wed, Sep 26, 2018 at 3:19 AM Marko Ristin-Kaufmann > wrote: > >> Claiming that DbC annotations will improve the documentation of every > >> single library on PyPI is an extraordinary claim, and such claims > >> require extraordinary proof. > > > > > > I don't know what you mean by "extraordinary" claim and "extraordinary" > proof, respectively. I tried to show that DbC is a great tool and far > superior to any other tools currently used to document contracts in a > library, please see my message > https://groups.google.com/d/msg/python-ideas/dmXz_7LH4GI/5A9jbpQ8CAAJ. > Let me re-use the enumeration I used in the message and give you a short > summary. > > > > An ordinary claim is like "DbC can be used to improve code and/or > documentation", and requires about as much evidence as you can stuff > into a single email. Simple claim, low burden of proof. > > An extraordinary claim is like "DbC can improve *every single project* > on PyPI". That requires a TON of proof. Obviously we won't quibble if > you can only demonstrate that 99.95% of them can be improved, but you > have to at least show that the bulk of them can. > > > There are 150K projects on pypi.org. Each one of them would benefit if > annotated with the contracts. > > This is the extraordinary claim. To justify it, you have to show that > virtually ANY project would benefit from contracts. So far, I haven't > seen any such proof. > > ChrisA > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosuav at gmail.com Tue Sep 25 16:39:05 2018 From: rosuav at gmail.com (Chris Angelico) Date: Wed, 26 Sep 2018 06:39:05 +1000 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: On Wed, Sep 26, 2018 at 6:09 AM Lee Braiden wrote: > > Eh. It's too easy to cry "show me the facts" in any argument. To do that too often is to reduce all discussion to pendantry. 
> > That verifying data against the contract a function makes code more reliable should be self evident to anyone with even the most rudimentary understanding of a function call, let alone a library or large application. It's the reason why type checking exists, and why bounds checking exists, and why unit checking exists too. > It's easy, but it's also often correct. >From my reading of this thread, there HAS been evidence given that DbC can be beneficial in some cases. I do not believe there has been evidence enough to cite the number of projects on PyPI as "this is how many projects would benefit". Part of the trouble is finding a concise syntax for the contracts that is still sufficiently expressive. ChrisA From klahnakoski at mozilla.com Tue Sep 25 17:58:36 2018 From: klahnakoski at mozilla.com (Kyle Lahnakoski) Date: Tue, 25 Sep 2018 17:58:36 -0400 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: <29373c17-a0f0-e7fb-f907-b3d84e716c3b@mozilla.com> I use DbC occasionally to clarify my thoughts during a refactoring, and then only in the places that continue to make mistakes. In general, I am not in a domain that benefits from DbC. Contracts are code: More code means more bugs. Declarative contracts are succinct, but difficult to debug when wrong; I believe this because the debugger support for contracts is poor; There is no way to step through the logic and see the intermediate reasoning in complex contracts.? A contract is an incomplete duplication of what the code already does: at some level of complexity I prefer to use a duplicate independent implementation and compare inputs/outputs. Writing contracts cost time and money; and that cost should be weighed against the number and flexibility of the customers that use the code.? A one-time script, a webapp for you team, an Android app for your startup, fraud software, and Facebook make different accounting decisions.? I contend most code projects can not justify DbC. On 2018-09-24 03:46, Marko Ristin-Kaufmann wrote: > When you are documenting a method you have the following options: > 1) Write preconditions and postconditions formally and include them > automatically in the documentation (/e.g., /by using icontract library). > 2) Write precondtions and postconditions in docstring of the method as > human text. > 3) Write doctests in the docstring of the method. > 4) Expect the user to read the actual implementation. > 5) Expect the user to read the testing code. > There are other ways to communicate how a method works. 6) The name of the method 7) How the method is called throughout the codebase 8) observing input and output values during debugging 9) observing input and output values in production 10) relying on convention inside, and outside, the application 11) Don't communicate - Sometimes / is too high; code is not repaired, only replaced. > This is again something that eludes me and I would be really thankful > if you could clarify. Please consider for an example, pypackagery > (https://pypackagery.readthedocs.io/en/latest/packagery.html) and the > documentation of its function resolve_initial_paths: > > |packagery.||resolve_initial_paths|(/initial_paths/) > > Resolve the initial paths of the dependency graph by recursively > adding |*.py| files beneath given directories. > > Parameters: > > *initial_paths* (|List|[|Path|]) ? 
initial paths as absolute paths > > Return type: > > |List|[|Path|] > > Returns: > > list of initial files (/i.e./ no directories) > > Requires: > > * |all(pth.is_absolute() for pth in initial_paths)| > > Ensures: > > * |len(result) >= len(initial_paths) if initial_paths else > result == []| > * |all(pth.is_absolute() for pth in result)| > * |all(pth.is_file() for pth in result)| > > > How is this difficult to read,[...]? This contract does not help me:? Does it work on Windows? What is_absolute()?? is "file:///" absolute? How does this code fail?? What does a permission access problem look like?? Can initial_paths can be None? Can initial_paths be files? directories?? What are the side effects? resolve_initial_path() is a piece code is better understood by looking at the callers (#7), or not exposing it publicly (#11).? You can also use a different set of abstractions, to make the code easier to read:? ?? UNION(file for p in initial_paths for file in p.leaves() if file.extension=="py") At a high level, I can see the allure of DbC:? Programming can be a craft, and a person can derive deep personal satisfaction from perfecting the code they work on. DbC provides you with more decoration, more elaboration, more ornamentation, more control.? This is not bad, but I see all your arguments as personal ascetic sense.? DbC is only appealing under certain accounting rules.? Please consider the possibility that "the best code" is: low $$$, buggy, full of tangles, and mostly gets the job done.?? :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosuav at gmail.com Tue Sep 25 19:19:57 2018 From: rosuav at gmail.com (Chris Angelico) Date: Wed, 26 Sep 2018 09:19:57 +1000 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: <29373c17-a0f0-e7fb-f907-b3d84e716c3b@mozilla.com> References: <29373c17-a0f0-e7fb-f907-b3d84e716c3b@mozilla.com> Message-ID: On Wed, Sep 26, 2018 at 7:59 AM Kyle Lahnakoski wrote: > I use DbC occasionally to clarify my thoughts during a refactoring, and then only in the places that continue to make mistakes. In general, I am not in a domain that benefits from DbC. > > Contracts are code: More code means more bugs. Contracts are executable documentation. If you can lift them directly into user-readable documentation (and by "user" here I mean the user of a library), they can save you the work of keeping your documentation accurate. > This contract does not help me: > > What is_absolute()? is "file:///" absolute? I'd have to assume that is_absolute() is defined elsewhere. Which means that the value of this contract depends entirely on having other functions, probably ALSO contractually-defined, to explain it. > How does this code fail? > What does a permission access problem look like? Probably an exception. This is Python code, and I would generally assume that problems are reported as exceptions. > Can initial_paths can be None? This can be answered from the type declaration. It doesn't say Optional, so no, it can't be None. > Can initial_paths be files? directories? Presumably not a question you'd get if you were actually using it; the point of the function is to "[r]esolve the initial paths of the dependency graph by recursively adding *.py files beneath given directories", so you'd call it because you have directories and want files back. > What are the side effects? Hopefully none, other than the normal implications of hitting the file system. 
It's easy to show beautiful examples that may actually depend on other things. Whether that's representative of all contracts is another question. ChrisA From mikhailwas at gmail.com Tue Sep 25 20:46:31 2018 From: mikhailwas at gmail.com (Mikhail V) Date: Wed, 26 Sep 2018 03:46:31 +0300 Subject: [Python-ideas] "while:" for the loop Message-ID: I suggest allowing "while:" syntax for the infinite loop. I.e. instead of "while 1:" and "while True:" notations. IIRC, in the past this was mentioned in python-list discussions as alternative for the "while True:"/"while 1:" syntax. I even had impression that there was nothing rational against this (apart from the traditional "don't change anything" principle) My opinion: 1. I think it'd definitely improve clarity. Although it's not extremely frequent statement, it still appears in algorithms, where additional noise interfers the reader's concentration. 2. This should become the answer to the "how should I denote an infinte loop?" question. 3. In schools/unis they teach algorithms with Python syntax so it will be easier to remember and to write. Adoption of this spelling is natural and straightforward. 4. It does seem to be a rare case of a very easy to implement syntax change (an expert note needed) Also I have personal sympathy for this because I like to use explicit "break" in the loop body, even though I could use "while expression:" syntax, but I prefer this: while 1: ... if i == N : break instead of this: while i < N: ... It helps me to concentrate by reading, especially in nested loops and those situations with multiple break points. So by me this syntax would definitely achieve an extra +. There were alternative suggestions, e.g. introducing a new keyword "loop", but obviously a new keyword is much harder to do now. I don't know what to add to this actually, I think the idea is understood. I see, it is not the most important problem, but if there is nothing serious against this, I think it's worth it and would be quite a positive (small) improvement and a nice gift to those involved in algorithms. As for statistics - IIRC someone gave statistics once, but the only thing I can remember - "while 1/True" is used quite a lot in the std lib, so the numbers exceeded my expectation (because I expected that it's used mostly in algorithms). Mikhail From mike at selik.org Tue Sep 25 22:34:07 2018 From: mike at selik.org (Michael Selik) Date: Tue, 25 Sep 2018 22:34:07 -0400 Subject: [Python-ideas] "while:" for the loop In-Reply-To: References: Message-ID: On Tue, Sep 25, 2018 at 8:46 PM Mikhail V wrote: > I suggest allowing "while:" syntax for the infinite loop. > I.e. instead of "while 1:" and "while True:" notations. > > My opinion: > 1. I think it'd definitely improve clarity. I prefer the explicit phrase, ``while True:``. Saying "while" without a condition is strange, like a sentence fragment. The ``while 1:`` pattern is a carryover from Python 2, when ``True`` was not yet a keyword. From rosuav at gmail.com Tue Sep 25 22:37:13 2018 From: rosuav at gmail.com (Chris Angelico) Date: Wed, 26 Sep 2018 12:37:13 +1000 Subject: [Python-ideas] "while:" for the loop In-Reply-To: References: Message-ID: On Wed, Sep 26, 2018 at 12:35 PM Michael Selik wrote: > > On Tue, Sep 25, 2018 at 8:46 PM Mikhail V wrote: > > I suggest allowing "while:" syntax for the infinite loop. > > I.e. instead of "while 1:" and "while True:" notations. > > > > My opinion: > > 1. I think it'd definitely improve clarity. > > I prefer the explicit phrase, ``while True:``. 
Saying "while" without > a condition is strange, like a sentence fragment. The ``while 1:`` > pattern is a carryover from Python 2, when ``True`` was not yet a > keyword. I like saying while "something": where the string describes the loop's real condition. For instance, while "moar data": if reading from a socket, or while "not KeyboardInterrupt": if the loop is meant to be halted by SIGINT. ChrisA From mikhailwas at gmail.com Tue Sep 25 23:28:18 2018 From: mikhailwas at gmail.com (Mikhail V) Date: Wed, 26 Sep 2018 06:28:18 +0300 Subject: [Python-ideas] "while:" for the loop In-Reply-To: References: Message-ID: On Wed, Sep 26, 2018 at 5:38 AM Chris Angelico wrote: > > > I like saying while "something": where the string describes the loop's > real condition. For instance, while "moar data": if reading from a > socket, or while "not KeyboardInterrupt": if the loop is meant to be > halted by SIGINT. > > ChrisA if doing so, would not it be more practical to write is as an in-line comment then? with new syntax it could be like this: """ while: # not KeyboardInterrupt asd asd asd asd asd asd asd asd asd """ Similar effect, but I would find it better at least because it would be highlighted as a comment and not as a string, + no quotes noise. From rosuav at gmail.com Tue Sep 25 23:36:04 2018 From: rosuav at gmail.com (Chris Angelico) Date: Wed, 26 Sep 2018 13:36:04 +1000 Subject: [Python-ideas] "while:" for the loop In-Reply-To: References: Message-ID: On Wed, Sep 26, 2018 at 1:29 PM Mikhail V wrote: > > On Wed, Sep 26, 2018 at 5:38 AM Chris Angelico wrote: > > > > > > I like saying while "something": where the string describes the loop's > > real condition. For instance, while "moar data": if reading from a > > socket, or while "not KeyboardInterrupt": if the loop is meant to be > > halted by SIGINT. > > > > ChrisA > > if doing so, would not it be more practical > to write is as an in-line comment then? > with new syntax it could be like this: > """ > while: # not KeyboardInterrupt > asd asd asd > asd asd asd > asd asd asd > """ > Similar effect, but I would find it better at least because it would > be highlighted as a comment and not as a string, + no quotes noise. A comment is not better than an inline condition, no. I *want* it to be highlighted as part of the code, not as a comment. Because it isn't a comment - it's a loop condition. ChrisA From marko.ristin at gmail.com Wed Sep 26 00:23:24 2018 From: marko.ristin at gmail.com (Marko Ristin-Kaufmann) Date: Wed, 26 Sep 2018 06:23:24 +0200 Subject: [Python-ideas] "old" values in postconditions In-Reply-To: References: Message-ID: Hi, Franklin wrote: > The name "before" is a confusing name. It's not just something that > happens before. It's really a pre-`let`, adding names to the scope of > things after it, but with values taken before the function call. Based > on that description, other possible names are `prelet`, `letbefore`, > `predef`, `defpre`, `beforescope`. Better a name that is clearly > confusing than one that is obvious but misleading. James wrote: > I suggest that instead of ?@before? it?s ?@snapshot? and instead of ?old? > it?s ?snapshot?. I like "snapshot", it's a bit clearer than prefixing/postfixing verbs with "pre" which might be misread (*e.g., *"prelet" has a meaning in Slavic languages and could be subconsciously misread, "predef" implies to me a pre- *definition* rather than prior-to-definition , "beforescope" is very clear for me, but it might be confusing for others as to what it actually refers to ). 
What about "@capture" (7 letters for captures *versus *8 for snapshot)? I suppose "@let" would be playing with fire if Python with conflicting new keywords since I assume "let" to be one of the candidates. Actually, I think there is probably no way around a decorator that captures/snapshots the data before the function call with a lambda (or even a separate function). "Old" construct, if we are to parse it somehow from the condition function, would limit us only to shallow copies (and be complex to implement as soon as we are capturing out-of-argument values such as globals *etc.)*. Moreove, what if we don't need shallow copies? I could imagine a dozen of cases where shallow copy is not what the programmer wants: for example, s/he might need to make deep copies, hash or otherwise transform the input data to hold only part of it instead of copying (*e.g., *so as to allow equality check without a double copy of the data, or capture only the value of certain property transformed in some way). I'd still go with the dictionary to allow for this extra freedom. We could have a convention: "a" denotes to the current arguments, and "b" denotes the captured values. It might make an interesting hint that we put "b" before "a" in the condition. You could also interpret "b" as "before" and "a" as "after", but also "a" as "arguments". @capture(lambda a: {"some_identifier": some_func(a.some_argument.some_attr)}) @post(lambda b, a, result: b.some_identifier > result + a.another_argument.another_attr) def some_func(some_argument: SomeClass, another_argument: AnotherClass) -> SomeResult: ... "b" can be omitted if it is not used. Under the hub, all the arguments to the condition would be passed by keywords. In case of inheritance, captures would be inherited as well. Hence the library would check at run-time that the returned dictionary with captured values has no identifier that has been already captured, and the linter checks that statically, before running the code. Reading values captured in the parent at the code of the child class might be a bit hard -- but that is case with any inherited methods/properties. In documentation, I'd list all the captures of both ancestor and the current class. I'm looking forward to reading your opinion on this and alternative suggestions :) Marko On Tue, 25 Sep 2018 at 18:12, Franklin? Lee wrote: > On Sun, Sep 23, 2018 at 2:05 AM Marko Ristin-Kaufmann > wrote: > > > > Hi, > > > > (I'd like to fork from a previous thread, "Pre-conditions and > post-conditions", since it got long and we started discussing a couple of > different things. Let's discuss in this thread the implementation of a > library for design-by-contract and how to push it forward to hopefully add > it to the standard library one day.) > > > > For those unfamiliar with contracts and current state of the discussion > in the previous thread, here's a short summary. The discussion started by > me inquiring about the possibility to add design-by-contract concepts into > the core language. The idea was rejected by the participants mainly because > they thought that the merit of the feature does not merit its costs. This > is quite debatable and seems to reflect many a discussion about > design-by-contract in general. Please see the other thread, "Why is > design-by-contract not widely adopted?" if you are interested in that > debate. 
> > > > We (a colleague of mine and I) decided to implement a library to bring > design-by-contract to Python since we don't believe that the concept will > make it into the core language anytime soon and we needed badly a tool to > facilitate our work with a growing code base. > > > > The library is available at http://github.com/Parquery/icontract. The > hope is to polish it so that the wider community could use it and once the > quality is high enough, make a proposal to add it to the standard Python > libraries. We do need a standard library for contracts, otherwise projects > with conflicting contract libraries can not integrate (e.g., the contracts > can not be inherited between two different contract libraries). > > > > So far, the most important bits have been implemented in icontract: > > > > Preconditions, postconditions, class invariants > > Inheritance of the contracts (including strengthening and weakening of > the inherited contracts) > > Informative violation messages (including information about the values > involved in the contract condition) > > Sphinx extension to include contracts in the automatically generated > documentation (sphinx-icontract) > > Linter to statically check that the arguments of the conditions are > correct (pyicontract-lint) > > > > We are successfully using it in our code base and have been quite happy > about the implementation so far. > > > > There is one bit still missing: accessing "old" values in the > postcondition (i.e., shallow copies of the values prior to the execution of > the function). This feature is necessary in order to allow us to verify > state transitions. > > > > For example, consider a new dictionary class that has "get" and "put" > methods: > > > > from typing import Optional > > > > from icontract import post > > > > class NovelDict: > > def length(self)->int: > > ... > > > > def get(self, key: str) -> Optional[str]: > > ... > > > > @post(lambda self, key, value: self.get(key) == value) > > @post(lambda self, key: old(self.get(key)) is None and > old(self.length()) + 1 == self.length(), > > "length increased with a new key") > > @post(lambda self, key: old(self.get(key)) is not None and > old(self.length()) == self.length(), > > "length stable with an existing key") > > def put(self, key: str, value: str) -> None: > > ... > > > > How could we possible implement this "old" function? > > > > Here is my suggestion. I'd introduce a decorator "before" that would > allow you to store whatever values in a dictionary object "old" (i.e. an > object whose properties correspond to the key/value pairs). The "old" is > then passed to the condition. Here is it in code: > > > > # omitted contracts for brevity > > class NovelDict: > > def length(self)->int: > > ... > > > > # omitted contracts for brevity > > def get(self, key: str) -> Optional[str]: > > ... > > > > @before(lambda self, key: {"length": self.length(), "get": > self.get(key)}) > > @post(lambda self, key, value: self.get(key) == value) > > @post(lambda self, key, old: old.get is None and old.length + 1 == > self.length(), > > "length increased with a new key") > > @post(lambda self, key, old: old.get is not None and old.length == > self.length(), > > "length stable with an existing key") > > def put(self, key: str, value: str) -> None: > > ... > > > > The linter would statically check that all attributes accessed in "old" > have to be defined in the decorator "before" so that attribute errors would > be caught early. 
The current implementation of the linter is fast enough to > be run at save time so such errors should usually not happen with a > properly set IDE. > > > > "before" decorator would also have "enabled" property, so that you can > turn it off (e.g., if you only want to run a postcondition in testing). The > "before" decorators can be stacked so that you can also have a more > fine-grained control when each one of them is running (some during test, > some during test and in production). The linter would enforce that before's > "enabled" is a disjunction of all the "enabled"'s of the corresponding > postconditions where the old value appears. > > > > Is this a sane approach to "old" values? Any alternative approach you > would prefer? What about better naming? Is "before" a confusing name? > > The dict can be splatted into the postconditions, so that no special > name is required. This would require either that the lambdas handle > **kws, or that their caller inspect them to see what names they take. > Perhaps add a function to functools which only passes kwargs that fit. > Then the precondition mechanism can pass `self`, `key`, and `value` as > kwargs instead of args. > > For functions that have *args and **kwargs, it may be necessary to > pass them to the conditions as args and kwargs instead. > > The name "before" is a confusing name. It's not just something that > happens before. It's really a pre-`let`, adding names to the scope of > things after it, but with values taken before the function call. Based > on that description, other possible names are `prelet`, `letbefore`, > `predef`, `defpre`, `beforescope`. Better a name that is clearly > confusing than one that is obvious but misleading. > > By the way, should the first postcondition be `self.get(key) is > value`, checking for identity rather than equality? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marko.ristin at gmail.com Wed Sep 26 00:47:29 2018 From: marko.ristin at gmail.com (Marko Ristin-Kaufmann) Date: Wed, 26 Sep 2018 06:47:29 +0200 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: Hi Chris, An extraordinary claim is like "DbC can improve *every single project* > on PyPI". That requires a TON of proof. Obviously we won't quibble if > you can only demonstrate that 99.95% of them can be improved, but you > have to at least show that the bulk of them can. I tried to give the "proof" (not a formal one, though) in my previous message. The assumptions are that: * There are always contracts, they can be either implicit or explicit. You need always to figure them out before you call a function or use its result. * Figuring out contracts by trial-and-error and reading the code (the implementation or the test code) is time consuming and hard. * The are tools for formal contracts. * The contracts written in documentation as human text inevitably rot and they are much harder to maintain than automatically verified formal contracts. * The reader is familiar with formal statements, and hence reading formal statements is faster than reading the code or trial-and-error. I then went on to show why I think, under these assumptions, that formal contracts are superior as a documentation tool and hence beneficial. Do you think that any of these assumptions are wrong? Is there a hole in my logical reasoning presented in my previous message? I would be very grateful for any pointers! 
If these assumptions hold and there is no mistake in my reasoning, wouldn't that qualify as a proof? Cheers, Marko On Tue, 25 Sep 2018 at 21:43, Chris Angelico wrote: > On Wed, Sep 26, 2018 at 3:19 AM Marko Ristin-Kaufmann > wrote: > >> Claiming that DbC annotations will improve the documentation of every > >> single library on PyPI is an extraordinary claim, and such claims > >> require extraordinary proof. > > > > > > I don't know what you mean by "extraordinary" claim and "extraordinary" > proof, respectively. I tried to show that DbC is a great tool and far > superior to any other tools currently used to document contracts in a > library, please see my message > https://groups.google.com/d/msg/python-ideas/dmXz_7LH4GI/5A9jbpQ8CAAJ. > Let me re-use the enumeration I used in the message and give you a short > summary. > > > > An ordinary claim is like "DbC can be used to improve code and/or > documentation", and requires about as much evidence as you can stuff > into a single email. Simple claim, low burden of proof. > > An extraordinary claim is like "DbC can improve *every single project* > on PyPI". That requires a TON of proof. Obviously we won't quibble if > you can only demonstrate that 99.95% of them can be improved, but you > have to at least show that the bulk of them can. > > > There are 150K projects on pypi.org. Each one of them would benefit if > annotated with the contracts. > > This is the extraordinary claim. To justify it, you have to show that > virtually ANY project would benefit from contracts. So far, I haven't > seen any such proof. > > ChrisA > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marko.ristin at gmail.com Wed Sep 26 01:11:43 2018 From: marko.ristin at gmail.com (Marko Ristin-Kaufmann) Date: Wed, 26 Sep 2018 07:11:43 +0200 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: <29373c17-a0f0-e7fb-f907-b3d84e716c3b@mozilla.com> References: <29373c17-a0f0-e7fb-f907-b3d84e716c3b@mozilla.com> Message-ID: Hi Kyle, 6) The name of the method > 7) How the method is called throughout the codebase > 10) relying on convention inside, and outside, the application > Sorry, by formulating 2) as "docstring" I excluded names of the methods as well as variables. Please assume that 2) actually entails those as well. They are human text and hence not automatically verifiable, hence qualify as 2). 8) observing input and output values during debugging > 9) observing input and output values in production > Sorry, again I implicitly subsumed 8-9 under 4), reading the implementation code (including the trial-and-error). My assumption was that it is incomparably more costly to apply trial-and-error than read the contracts given that contracts can be formulated. Of course, not all contracts can be formulated all the time. 11) Don't communicate - Sometimes / is too high; > code is not repaired, only replaced. > I don't see this as an option for any publicly available, high-quality module on pypi or in any organization. As I already noted in my message to Hugh, the argument in favor of* undocumented and/or untested code* are not the arguments. I assume we want a *maintainable* and *usable* modules. I've never talked about undocumented throw-away exploratory code. 
Most of the Python features become futile in that case (type annotations and static type checking with mypy, to name only the few). Does it work on Windows? > This is probably impossible to write as a contract, but needs to be tested (though maybe there is a way to check it and encapsulate the check in a separate function and put it into the contract). What is_absolute()? is "file:///" absolute? > Since the type is pathlib.Path (as written in the type annotation), it's pathlib.Path.is_absolute() method. Please see https://docs.python.org/3/library/pathlib.html#pathlib.PurePath.is_absolute At a high level, I can see the allure of DbC: Programming can be a craft, > and a person can derive deep personal satisfaction from perfecting the code > they work on. DbC provides you with more decoration, more elaboration, more > ornamentation, more control. This is not bad, but I see all your arguments > as personal ascetic sense. DbC is only appealing under certain accounting > rules. Please consider the possibility that "the best code" is: low $$$, > buggy, full of tangles, and mostly gets the job done. :) Actually, this goes totally contrary to most of my experience. Bad code is unmaintainable and ends up being much more costly down the line. It's also what we were taught in software engineering lectures in the university (some 10-15 years ago) and I always assumed that the studies presented there were correct. Saying that writing down contracts is costly is a straw-man. It is costly if you need to examine the function and write them down. If you *are writing *the function and just keep adding the contracts as-you-go, it's basically very little overhead cost. You make an assumption of the input, and instead of just coding on, you scroll up, write it down formally, and go back where you stopped and continue the implementation. Or you think for a minute what contracts your function needs to expect/satisfy before you start writing it (or during the design). I don't see how this can be less efficient than trial-and-error and making possibly wrong assumptions based on the output that you see without any documentation by running the code of the module. Cheers, Marko On Tue, 25 Sep 2018 at 23:59, Kyle Lahnakoski wrote: > > I use DbC occasionally to clarify my thoughts during a refactoring, and > then only in the places that continue to make mistakes. In general, I am > not in a domain that benefits from DbC. > > Contracts are code: More code means more bugs. Declarative contracts are > succinct, but difficult to debug when wrong; I believe this because the > debugger support for contracts is poor; There is no way to step through the > logic and see the intermediate reasoning in complex contracts. A contract > is an incomplete duplication of what the code already does: at some level > of complexity I prefer to use a duplicate independent implementation and > compare inputs/outputs. > Writing contracts cost time and money; and that cost should be weighed > against the number and flexibility of the customers that use the code. A > one-time script, a webapp for you team, an Android app for your startup, > fraud software, and Facebook make different accounting decisions. I > contend most code projects can not justify DbC. > > > On 2018-09-24 03:46, Marko Ristin-Kaufmann wrote: > > When you are documenting a method you have the following options: > 1) Write preconditions and postconditions formally and include them > automatically in the documentation (*e.g., *by using icontract library). 
> 2) Write precondtions and postconditions in docstring of the method as > human text. > 3) Write doctests in the docstring of the method. > 4) Expect the user to read the actual implementation. > 5) Expect the user to read the testing code. > > > There are other ways to communicate how a method works. > > 6) The name of the method > 7) How the method is called throughout the codebase > 8) observing input and output values during debugging > 9) observing input and output values in production > 10) relying on convention inside, and outside, the application > 11) Don't communicate - Sometimes / is too > high; code is not repaired, only replaced. > > > This is again something that eludes me and I would be really thankful if > you could clarify. Please consider for an example, pypackagery ( > https://pypackagery.readthedocs.io/en/latest/packagery.html) and the > documentation of its function resolve_initial_paths: > packagery.resolve_initial_paths(*initial_paths*) > > Resolve the initial paths of the dependency graph by recursively adding > *.py files beneath given directories. > Parameters: > > *initial_paths* (List[Path]) ? initial paths as absolute paths > Return type: > > List[Path] > Returns: > > list of initial files (*i.e.* no directories) > Requires: > > - all(pth.is_absolute() for pth in initial_paths) > > Ensures: > > - len(result) >= len(initial_paths) if initial_paths else result == [] > - all(pth.is_absolute() for pth in result) > - all(pth.is_file() for pth in result) > > > How is this difficult to read,[...]? > > > This contract does not help me: > > Does it work on Windows? > What is_absolute()? is "file:///" absolute? > How does this code fail? > What does a permission access problem look like? > Can initial_paths can be None? > Can initial_paths be files? directories? > What are the side effects? > > resolve_initial_path() is a piece code is better understood by looking at > the callers (#7), or not exposing it publicly (#11). You can also use a > different set of abstractions, to make the code easier to read: > > UNION(file for p in initial_paths for file in p.leaves() if > file.extension=="py") > > At a high level, I can see the allure of DbC: Programming can be a craft, > and a person can derive deep personal satisfaction from perfecting the code > they work on. DbC provides you with more decoration, more elaboration, more > ornamentation, more control. This is not bad, but I see all your arguments > as personal ascetic sense. DbC is only appealing under certain accounting > rules. Please consider the possibility that "the best code" is: low $$$, > buggy, full of tangles, and mostly gets the job done. :) > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marko.ristin at gmail.com Wed Sep 26 01:24:46 2018 From: marko.ristin at gmail.com (Marko Ristin-Kaufmann) Date: Wed, 26 Sep 2018 07:24:46 +0200 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: <29373c17-a0f0-e7fb-f907-b3d84e716c3b@mozilla.com> Message-ID: Hi Chris, It's easy to show beautiful examples that may actually depend on other > things. Whether that's representative of all contracts is another > question. I agree. There are also many contracts which are simply too hard to write down formally. 
But many are also easily captured in formal manner in my experience. The question is, of course, how many and you make a fair point there. @Chris and others requesting data: my time is way too limited to provide a large-scale code analysis of many pypi packages (family obligations with a toddler, 9-6 job). I'm not doing research, and such a study would require substantial time resources. Is there an alternative request that you think that I (and other volunteers?) could accomplish in a reasonable (free) time? Maybe you could compile a list of 100-200 (or even less) functions from representative modules and I try to annotate them with contracts and we see if that's convincing? It's up to you to pick representative functions and up to me to annotate them with contracts. That would diffuse the argument that I intentionally picked the functions whose contracts are easily and nice to annotate. Cheers, Marko On Wed, 26 Sep 2018 at 01:20, Chris Angelico wrote: > On Wed, Sep 26, 2018 at 7:59 AM Kyle Lahnakoski > wrote: > > I use DbC occasionally to clarify my thoughts during a refactoring, and > then only in the places that continue to make mistakes. In general, I am > not in a domain that benefits from DbC. > > > > Contracts are code: More code means more bugs. > > Contracts are executable documentation. If you can lift them directly > into user-readable documentation (and by "user" here I mean the user > of a library), they can save you the work of keeping your > documentation accurate. > > > This contract does not help me: > > > > What is_absolute()? is "file:///" absolute? > > I'd have to assume that is_absolute() is defined elsewhere. Which > means that the value of this contract depends entirely on having other > functions, probably ALSO contractually-defined, to explain it. > > > How does this code fail? > > What does a permission access problem look like? > > Probably an exception. This is Python code, and I would generally > assume that problems are reported as exceptions. > > > Can initial_paths can be None? > > This can be answered from the type declaration. It doesn't say > Optional, so no, it can't be None. > > > Can initial_paths be files? directories? > > Presumably not a question you'd get if you were actually using it; the > point of the function is to "[r]esolve the initial paths of the > dependency graph by recursively adding *.py files beneath given > directories", so you'd call it because you have directories and want > files back. > > > What are the side effects? > > Hopefully none, other than the normal implications of hitting the file > system. > > It's easy to show beautiful examples that may actually depend on other > things. Whether that's representative of all contracts is another > question. > > ChrisA > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosuav at gmail.com Wed Sep 26 01:39:48 2018 From: rosuav at gmail.com (Chris Angelico) Date: Wed, 26 Sep 2018 15:39:48 +1000 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: On Wed, Sep 26, 2018 at 2:47 PM Marko Ristin-Kaufmann wrote: > > Hi Chris, > >> An extraordinary claim is like "DbC can improve *every single project* >> on PyPI". That requires a TON of proof. 
Obviously we won't quibble if >> you can only demonstrate that 99.95% of them can be improved, but you >> have to at least show that the bulk of them can. > > > I tried to give the "proof" (not a formal one, though) in my previous message. (Formal proof isn't necessary here; we say "extraordinary proof", but it'd be more accurate to say "extraordinary evidence".) > The assumptions are that: > * There are always contracts, they can be either implicit or explicit. You need always to figure them out before you call a function or use its result. Not all code has such contracts. You could argue that code which does not is inferior to code which does, but not everything follows a strictly-definable pattern. > * Figuring out contracts by trial-and-error and reading the code (the implementation or the test code) is time consuming and hard. Agreed. > * The are tools for formal contracts. That's the exact point you're trying to make, so it isn't evidence for itself. Tools for formal contracts exist as third party in Python, and if that were good enough for you, we wouldn't be discussing this. There are no such tools in the standard library or language that make formal contracts easy. > * The contracts written in documentation as human text inevitably rot and they are much harder to maintain than automatically verified formal contracts. Agreed. > * The reader is familiar with formal statements, and hence reading formal statements is faster than reading the code or trial-and-error. Disagreed. I would most certainly NOT assume that every reader knows any particular syntax for such contracts. However, this is a weaker point. So I'll give you two and two halves for that. Good enough to make do. > I then went on to show why I think, under these assumptions, that formal contracts are superior as a documentation tool and hence beneficial. Do you think that any of these assumptions are wrong? Is there a hole in my logical reasoning presented in my previous message? I would be very grateful for any pointers! > > If these assumptions hold and there is no mistake in my reasoning, wouldn't that qualify as a proof? > It certainly qualifies as proof that SOME code MAY benefit from contracts. It does not reach the much higher bar to support the claim that "there are X projects on PyPI and every single one of them would benefit". For instance, would youtube-dl benefit from DbC? To most people, it's an application, not a library. Even if you're invoking it from within a Python script, it's usually easiest to use the main entrypoint rather than delve into its internals. Case in point: https://github.com/Rosuav/MegaClip/blob/master/megaclip.py#L71 It's actually easier to shell out to a subprocess than to call on youtube-dl as a library. Would DbC benefit youtube-dl's internal functions? Maybe, but that's for the youtube-dl people to decide; it wouldn't in the slightest benefit my app. You might argue that a large proportion of PyPI projects will be "library-style" packages, where the main purpose is to export a bunch of functions. But even then, I'm not certain that they'd all benefit from DbC. Some would, and you've definitely made the case for that; but I'm still -0.5 on adding anything of the sort to the stdlib, as I don't yet see that *enough* projects would actually benefit. People have said the same thing about type checking, too. Would *every* project on PyPI benefit from MyPy's type checks? No. 
Syntax for them was added, not because EVERYONE should use them, but because SOME will use them, and it's worth having some language support. You would probably do better to argue along those lines than to try to claim that every single project ought to be using contracts. ChrisA From robertc at robertcollins.net Wed Sep 26 02:17:22 2018 From: robertc at robertcollins.net (Robert Collins) Date: Wed, 26 Sep 2018 18:17:22 +1200 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: On Wed, 26 Sep 2018 at 05:19, Marko Ristin-Kaufmann wrote: > > Hi Robert, ... >> Claiming that DbC annotations will improve the documentation of every >> single library on PyPI is an extraordinary claim, and such claims >> require extraordinary proof. > > > I don't know what you mean by "extraordinary" claim and "extraordinary" proof, respectively. Chris already addressed this. > I tried to show that DbC is a great tool and far superior to any other tools currently used to document contracts in a library, please see my message https://groups.google.com/d/msg/python-ideas/dmXz_7LH4GI/5A9jbpQ8CAAJ. Let me re-use the enumeration I used in the message and give you a short summary. > When you are documenting a method you have the following options: > 1) Write preconditions and postconditions formally and include them automatically in the documentation (e.g., by using icontract library). > 2) Write precondtions and postconditions in docstring of the method as human text. > 3) Write doctests in the docstring of the method. > 4) Expect the user to read the actual implementation. > 5) Expect the user to read the testing code. So you can also: 0) Write clear documentation e.g. just document the method without doctests. You can write doctests in example documentation, or even in unit tests if desired: having use cases be tested is often valuable. > The implicit or explicit contracts are there willy-nilly. When you use a module, either you need to figure them out using trial-and-error or looking at the implementation (4), looking at the test cases and hoping that they generalize (5), write them as doctests (3) or write them in docstrings as human text (2); or you write them formally as explicit contracts (1). > > I could not identify any other methods that can help you with expectations when you call a function or use a class (apart from formal methods and proofs, which I omitted as they seem too esoteric for the current discussion). > > Given that: > * There is no other method for representing contracts, > * people are trained and can read formal statements and > * there is tooling available to write, maintain and represent contracts in a nice way > > I see formal contracts (1) as a superior tool. The deficiencies of other approaches are: > 2) Comments and docstrings inevitably rot and get disconnected from the implementation in my and many other people's experience and studies. > 3) Doctests are much longer and hence more tedious to read and maintain, they need extra text to signal the intent (is it a simple test or an example how boundary conditions are handled or ...). In any non-trivial case, they need to include even the contract itself. > 4) Looking at other people's code to figure out the contracts is tedious and usually difficult for any non-trivial function. > 5) Test cases can be difficult to read since they include much broader testing logic (mocking, set up). Most libraries do not ship with the test code. 
Identifying test cases which demonstrate the contracts can be difficult. I would say that contracts *are* a formal method. They are machine interpretable rules about when the function may be called and about what it may do and how it must leave things. https://en.wikipedia.org/wiki/Formal_methods The critique I offer of DbC in a Python context is the same as for other formal methods: is the benefit worth the overhead. If you're writing a rocket controller, Python might not be the best language for it. > Any function that is used by multiple developers which operates on the restricted range of input values and gives out structured output values benefits from contracts (1) since the user of the function needs to figure them out to properly call the function and handle its results correctly. I assume that every package on pypi is published to be used by wider audience, and not the developer herself. Hence every package on pypi would benefit from formal contracts. In theory there is no difference between theory and practice, but in practice there may be differences between practice and theory :). Less opaquely, you're using a model to try and extrapolate human behaviour. This is not an easy thing to do, and you are very likely to be missing factors. For instance, perhaps training affects perceived benefits. Perhaps a lack of experimental data affects uptake in more data driven groups. Perhaps increased friction in changing systems is felt to be a negative. And crucially, perhaps some of these things are true. As previously mentioned, Python has wonderful libraries; does it have more per developer than languages with DbC built in? If so then that might speak to developer productivity without these formal contracts: it may be that where the risks of failure are below the threshold that formal methods are needed, that we're better off with the tradeoff Python makes today. > Some predicates are hard to formulate, and we will never be able to formally write down all the contracts. But that doesn't imply for me to not use contracts at all (analogously, some functionality is untestable, but that doesn't mean that we don't test what we can). > > I would be very grateful if you could point me where this exposition is wrong (maybe referring to my original message, https://groups.google.com/d/msg/python-ideas/dmXz_7LH4GI/5A9jbpQ8CAAJ, which I spent more thought on formulating). I think the underlying problem is that you're treating this as a logic problem (what does logic say applies here), rather than an engineering problem (what can we measure and what does it tell us about whats going on). At least, thats how it appears to me. > So far, I was not confronted against nor read on the internet a plausible argument against formal contracts (the only two exceptions being lack of tools and less-skilled programmers have a hard time reading formal statements as soon as they include boolean logic and quantifiers). I'm actively working on the former, and hope that the latter would improve with time as education in computer sciences improves. > > Another argument, which I did read often on internet, but don't really count is that quality software is not a priority and most projects hence dispense of documentation or testing. This should, hopefully, not apply to public pypi packages and is highly impractical for any medium-size project with multiple developers (and very costly in the long run). 
I have looked for but could not find any studies into the developer productivity (and correctness) tradeoffs that DbC creates, other than stuff from 20 years ago which by its age clearly cannot contrast with modern Python/Ruby/Rust etc. Consider this: the goal of software development is to deliver features, at some level of correctness. One very useful measure of productivity then is a measure of how long it takes a given team to produce those features at that level of correctness. If DbC reduces the time it takes to get those features, it is increasing productivity. If it increases the time it takes to get those features, it is decreasing productivity, *even if it increases correctness*. Being more correct than needed is not beneficial much of the time. Does DbC deliver higher productivity @ a given correctness level? I don't know - thats why I went looking for research, but I couldn't find any (I may have missed it of course, I'd be happy to read some citations). I'm specifically looking for empirical data here, not extrapolation or rationalisations. >> I can think of many libraries where necessary pre and post conditions >> (such as 'self is still locked') are going to be noisy, and at risk of >> reducing comprehension if the DbC checks are used to enhance/extended >> documentation. > > > It is up to the developer to decide which contracts are enforced during testing, production or displayed in the documentation (you can pick the subset of the three, it's not an exclusion). This feature ("enabled" argument to a contract) has been already implemented in the icontract library. > > Some of the examples you've been giving would be better expressed with > a more capable type system in my view (e.g. Rust's), but I have no > good idea about adding that into Python :/. > I don't see how type system would help regardless how strict it would be? Unless each input and each output represent a special type, which would be super confusing as soon as you would put them in the containers and have to struggle with invariance, contravariance and covariance. Please see https://github.com/rust-lang/rfcs/issues/1077 for a discussion about introducing DbC to Rust. Unfortunately, the discussion about contracts in Rust is also based on misconceptions (e.g., see https://github.com/rust-lang/rfcs/issues/1077#issuecomment-94582917) -- there seems to be something wrong in the way anybody proposing DbC exposes contracts to the wider audience and miss to address these issues in a good way. So most people just react instinctively with "80% already covered with type systems" / "mere runtime type checks, use assert" and "that's only an extension to testing, so why bother" :(. > > I would now like to answer Hugh and withdraw from the discussion pro/contra formal contracts unless there is a rational, logical argument disputing the DbC in its entirety (not in one of its specific aspects or as a misconception/straw-man). A lot has been already said, many articles have been written (I linked some of the pages which I thought were short & good reads and I would gladly supply more reading material). I doubt I can find a better way to contribute to the discussion. Sure; like I said, I think the fundamental question about DbC is actually whether it helps: a) all programs b) all nontrivial programs c) high assurance programs My suspicion, for which I have only anecdata, is that its really in c) today. 
Kindof where TDD was in the early 2000's (and as I understand the research, its been shown to be a wash: you do get more tests than test-last or test-during, and more tests is correlated with quality and ease of evolution, but if you add that test coverage in test-during or test-last, you end up with the same benefits). -Rob From marko.ristin at gmail.com Wed Sep 26 03:10:21 2018 From: marko.ristin at gmail.com (Marko Ristin-Kaufmann) Date: Wed, 26 Sep 2018 09:10:21 +0200 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: Hi Chris, > * There are always contracts, they can be either implicit or explicit. > You need always to figure them out before you call a function or use its > result. > > Not all code has such contracts. You could argue that code which does > not is inferior to code which does, but not everything follows a > strictly-definable pattern. > Well, you need to know what to supply to a function and what to expect, at least subconsciously. The question is just whether you can formulate these contracts or not; and whether you do or not. > * The are tools for formal contracts. > > That's the exact point you're trying to make, so it isn't evidence for > itself. Tools for formal contracts exist as third party in Python, and > if that were good enough for you, we wouldn't be discussing this. > There are no such tools in the standard library or language that make > formal contracts easy. > The original objection was that DbC in general is not beneficial; not that there are lacking tools for it (this probably got lost in the many messages on this thread). If you assume that there are no suitable tools for DbC, then yes, DbC is certainly *not *beneficial to any project since using it will be clumsy and difficult. It's a chicken-and-egg problem, so we need to assume that there are good tools for DbC in order for it to be beneficial. Disagreed. I would most certainly NOT assume that every reader knows > any particular syntax for such contracts. However, this is a weaker > point. The contracts are written as boolean expressions. While I agree that many programmers have a hard time with boolean expressions and quantifiers, I don't see this as a blocker to DbC. There is no other special syntax for DbC unless we decide to add it to the core language (which I don't see as probable). What I would like to have is a standard library so that inter-library interactions based on contracts are possible and an ecosystem could emerge around it. Quite some people object that DbC should be rejected as their "average programmer Joe/Jane" can't deal with the formal expressions. Some readers have problems parsing "all(i > 0 for i in result) or len(result) < 3" and can only parse "All numbers in the result are positive and/or the result has fewer than 3 items". There are also programmers who have a hard time reading "all(key == value.some_property for key, value in result.items())" and could only read "The resulting values in the dictionary are keyed by their some_property attribute.". I hope that education in computer science is improving and that soon programmers will be able to read these formal expressions. I'm also not sure if that is the case for non-English speakers. Hence I assumed that the readers of the contracts are familiar with formal boolean expressions. If you assume that readers have hard time parsing quantifiers and boolean logic then DbC is again certainly not beneficial. 
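To make the two examples above concrete, here is a rough, untested sketch of how they could look as icontract-style postconditions (the function names, annotations and bodies are made up purely for illustration):

from typing import Dict, List
from icontract import post

@post(lambda result: all(i > 0 for i in result) or len(result) < 3)
def pick_scores() -> List[int]:
    # placeholder body; the contract documents the result no matter how
    # the scores are actually computed
    ...

@post(lambda result: all(key == value.some_property for key, value in result.items()))
def index_by_some_property(values: List["SomeValue"]) -> Dict[str, "SomeValue"]:
    # placeholder body; the contract states that the dictionary is keyed by
    # the some_property attribute of its values
    ...

The English sentences could then be attached as the optional description of each condition, as in the examples earlier in this thread.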
It would make me sad if a feature is rejected on grounds that we have to accept that many programmers don't master the basics of computer science (which I consider the boolean expressions to be). You might argue that a large proportion of PyPI projects will be > "library-style" packages, where the main purpose is to export a bunch > of functions. But even then, I'm not certain that they'd all benefit > from DbC. > Thanks for this clarification (and the download-yt example)! I actually only had packages-as-libraries in mind, not the executable scripts; my mistake. So, yes, "any Pypi package" should be reformulated to "any library on Pypi" (meant to be used by a wider audience than the developers themselves). People have said the same thing about type checking, too. Would > *every* project on PyPI benefit from MyPy's type checks? No. Syntax > for them was added, not because EVERYONE should use them, but because > SOME will use them, and it's worth having some language support. You > would probably do better to argue along those lines than to try to > claim that every single project ought to be using contracts. I totally agree. The discussion related to DbC in my mind always revolved around these use cases where type annotations are beneficial as well. Thanks for pointing that out and I'd like to apologize for the confusion! For the future discussion, let's focus on these use cases and please do ignore the rest. I'd still say that there is a plethora of libraries published on Pypi (Is there a way to find out the stats?). but I'm still -0.5 on adding anything of the sort to the stdlib, as I > don't yet see that *enough* projects would actually benefit. > Please see my previous message -- could you maybe say what would convince you that enough projects would actually benefit from formal contracts? Cheers, Marko On Wed, 26 Sep 2018 at 07:40, Chris Angelico wrote: > On Wed, Sep 26, 2018 at 2:47 PM Marko Ristin-Kaufmann > wrote: > > > > Hi Chris, > > > >> An extraordinary claim is like "DbC can improve *every single project* > >> on PyPI". That requires a TON of proof. Obviously we won't quibble if > >> you can only demonstrate that 99.95% of them can be improved, but you > >> have to at least show that the bulk of them can. > > > > > > I tried to give the "proof" (not a formal one, though) in my previous > message. > > (Formal proof isn't necessary here; we say "extraordinary proof", but > it'd be more accurate to say "extraordinary evidence".) > > > The assumptions are that: > > * There are always contracts, they can be either implicit or explicit. > You need always to figure them out before you call a function or use its > result. > > Not all code has such contracts. You could argue that code which does > not is inferior to code which does, but not everything follows a > strictly-definable pattern. > > > * Figuring out contracts by trial-and-error and reading the code (the > implementation or the test code) is time consuming and hard. > > Agreed. > > > * The are tools for formal contracts. > > That's the exact point you're trying to make, so it isn't evidence for > itself. Tools for formal contracts exist as third party in Python, and > if that were good enough for you, we wouldn't be discussing this. > There are no such tools in the standard library or language that make > formal contracts easy. > > > * The contracts written in documentation as human text inevitably rot > and they are much harder to maintain than automatically verified formal > contracts. > > Agreed. 
> > > * The reader is familiar with formal statements, and hence reading > formal statements is faster than reading the code or trial-and-error. > > Disagreed. I would most certainly NOT assume that every reader knows > any particular syntax for such contracts. However, this is a weaker > point. > > So I'll give you two and two halves for that. Good enough to make do. > > > I then went on to show why I think, under these assumptions, that formal > contracts are superior as a documentation tool and hence beneficial. Do you > think that any of these assumptions are wrong? Is there a hole in my > logical reasoning presented in my previous message? I would be very > grateful for any pointers! > > > > If these assumptions hold and there is no mistake in my reasoning, > wouldn't that qualify as a proof? > > > > It certainly qualifies as proof that SOME code MAY benefit from > contracts. It does not reach the much higher bar to support the claim > that "there are X projects on PyPI and every single one of them would > benefit". For instance, would youtube-dl benefit from DbC? To most > people, it's an application, not a library. Even if you're invoking it > from within a Python script, it's usually easiest to use the main > entrypoint rather than delve into its internals. Case in point: > > https://github.com/Rosuav/MegaClip/blob/master/megaclip.py#L71 > > It's actually easier to shell out to a subprocess than to call on > youtube-dl as a library. Would DbC benefit youtube-dl's internal > functions? Maybe, but that's for the youtube-dl people to decide; it > wouldn't in the slightest benefit my app. > > You might argue that a large proportion of PyPI projects will be > "library-style" packages, where the main purpose is to export a bunch > of functions. But even then, I'm not certain that they'd all benefit > from DbC. Some would, and you've definitely made the case for that; > but I'm still -0.5 on adding anything of the sort to the stdlib, as I > don't yet see that *enough* projects would actually benefit. > > People have said the same thing about type checking, too. Would > *every* project on PyPI benefit from MyPy's type checks? No. Syntax > for them was added, not because EVERYONE should use them, but because > SOME will use them, and it's worth having some language support. You > would probably do better to argue along those lines than to try to > claim that every single project ought to be using contracts. > > ChrisA > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosuav at gmail.com Wed Sep 26 03:18:00 2018 From: rosuav at gmail.com (Chris Angelico) Date: Wed, 26 Sep 2018 17:18:00 +1000 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: On Wed, Sep 26, 2018 at 5:10 PM Marko Ristin-Kaufmann wrote: > The original objection was that DbC in general is not beneficial; not that there are lacking tools for it (this probably got lost in the many messages on this thread). If you assume that there are no suitable tools for DbC, then yes, DbC is certainly not beneficial to any project since using it will be clumsy and difficult. It's a chicken-and-egg problem, so we need to assume that there are good tools for DbC in order for it to be beneficial. > >> Disagreed. 
I would most certainly NOT assume that every reader knows >> any particular syntax for such contracts. However, this is a weaker >> point. > > > The contracts are written as boolean expressions. While I agree that many programmers have a hard time with boolean expressions and quantifiers, I don't see this as a blocker to DbC. There is no other special syntax for DbC unless we decide to add it to the core language (which I don't see as probable). What I would like to have is a standard library so that inter-library interactions based on contracts are possible and an ecosystem could emerge around it. > It's easy to say that they're boolean expressions. But that's like saying that unit tests are just a bunch of boolean expressions too. Why do we have lots of different forms of test, rather than just a big fat "assert this and this and this and this and this and this"? Because the key to unit testing is not "boolean expressions", it's a language that can usefully describe what it is we're testing. Contracts aren't just boolean expressions - they're a language (or a mini-language) that lets you define WHAT the contract entails. >> You might argue that a large proportion of PyPI projects will be >> "library-style" packages, where the main purpose is to export a bunch >> of functions. But even then, I'm not certain that they'd all benefit >> from DbC. > > > Thanks for this clarification (and the download-yt example)! I actually only had packages-as-libraries in mind, not the executable scripts; my mistake. So, yes, "any Pypi package" should be reformulated to "any library on Pypi" (meant to be used by a wider audience than the developers themselves). > Okay. Even with that qualification, though, I still think that not every library will benefit from this. For example, matplotlib's plt.show() method guarantees that... a plot will be shown, and the user will have dismissed it, before it returns. Unless you're inside Jupyter/iPython, in which case it's different. Or if you're in certain other environments, in which case it's different again. How do you define the contract for something that is fundamentally interactive? You can define very weak contracts. For instance, input() guarantees that its return value is a string. Great! DbC doing the job of type annotations. Can you guarantee anything else about that string? Is there anything else useful that can be spelled easily? > I totally agree. The discussion related to DbC in my mind always revolved around these use cases where type annotations are beneficial as well. Thanks for pointing that out and I'd like to apologize for the confusion! For the future discussion, let's focus on these use cases and please do ignore the rest. I'd still say that there is a plethora of libraries published on Pypi (Is there a way to find out the stats?). > Ugh.... I would love to say "yes", but I can't. I guess maybe you could look at a bunch of requirements.txt files and see which things get dragged in that way? All you'll really get is heuristics at best, and even that, I don't know how to provide. Sorry. 
ChrisA

From boxed at killingar.net Wed Sep 26 03:18:54 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Wed, 26 Sep 2018 09:18:54 +0200 Subject: [Python-ideas] Fwd: Keyword only argument on function call In-Reply-To: References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <951c4837-ca57-4e7b-9502-985a58f5c05c@googlegroups.com> <0CC4B614-3A09-4299-BDA4-2CBB8CA5C3D1@killingar.net> Message-ID: <15167F13-5D53-4721-A64C-58D003738367@killingar.net>

David, I saw now that I missed the biggest problem with your proposal: yet again you deliberately throw away errors. I'm talking about making Python code _less_ error prone, while you seem to want to make it _more_. Anyway, I'll modify your reach() to not have the if in it that has this error hiding property, it also simplifies it a lot. It should look like this:

def reach(name):
    return inspect.stack()[-2][0].f_locals[name]

> 1. Huge performance penalty > > Huh? Have you actually benchmarked this in some way?! A couple lookups into the namespace are really not pricey operations. The cost is definitely more than zero, but for any function that does anything even slightly costly, the lookups would be barely in the noise.

I'm talking about using this for all or most function calls that aren't positional only. So no, you can absolutely not assume I only use it to call expensive functions. And yea, I did benchmark it, and since you didn't define what you would think is acceptable for a benchmark you've left the door open for me to define it. This is the result of a benchmark for 10k calls (full source at the very end of this email):

CPython 3.6
time with use: 0:00:02.587355
time with standard kwargs: 0:00:00.003079
time with positional args: 0:00:00.003023

pypy 6.0
time with use: 0:00:01.177555
time with standard kwargs: 0:00:00.002565
time with positional args: 0:00:00.001953

So for CPython 3.6 it's 2.587355/0.003079 = 840x slower and pypy: 1.177555/0.002565 = 460x slower. I'm quite frankly a bit amazed pypy is so good. I was under the impression it would be much worse there. They've clearly improved the speed of the stack inspection since I last checked.

> > 2. Rather verbose, so somewhat fails on the stated goal of improving readability > > The "verbose" idea I propose is 3-4 characters more, per function call, than your `fun(a, b, *, this, that)` proposal. It will actually be shorter than your newer `fun(a, b, =this, =that)` proposal once you use 4 or more keyword arguments.

True enough.

> 3. Tooling* falls down very hard on this > > It's true that tooling doesn't currently support my hypothetical function. It also does not support your hypothetical syntax.

If it was included in Python it would of course be added super fast, while the use() function would not. This argument is just bogus.

> It would be *somewhat easier* to add special support for a function with a special name like `use()` than for new syntax. But obviously that varies by which tool and what purpose it is accomplishing.

Easier how? Technically? Maybe. Politically? Absolutely not. If it's in Python then all tools _must_ follow. This solves the political problem of getting tool support and that is the only hard one. The technical problem is a rounding error in this situation.

> Of course, PyCharm and MyPy and PyLint aren't going to bother special casing a `use()` function unless or until it is widely used and/or part of the builtins or standard library. 
I don't actually advocate for such inclusion, but I wouldn't be stridently against that since it's just another function name, nothing really special. Ah, yea, I see here you're granting my point above. Good to see we can agree on this at least.

/ Anders

Benchmark code:
-----------------------
import inspect
from datetime import datetime

def reach(name):
    return inspect.stack()[-2][0].f_locals[name]

def use(names):
    kws = {}
    for name in names.split():
        kws[name] = reach(name)
    return kws

def function(a=11, b=22, c=33, d=44):
    pass

def foo():
    a, b, c = 1, 2, 3
    function(a=77, **use('b'))

c = 10000

start = datetime.now()
for _ in range(c):
    foo()
print('time with use: %s' % (datetime.now() - start))

def bar():
    a, b, c = 1, 2, 3
    function(a=77, b=b)

start = datetime.now()
for _ in range(c):
    bar()
print('time with standard kwargs: %s' % (datetime.now() - start))

def baz():
    a, b, c = 1, 2, 3
    function(77, b)

start = datetime.now()
for _ in range(c):
    baz()
print('time with positional args: %s' % (datetime.now() - start))

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From contact at brice.xyz Wed Sep 26 03:43:46 2018 From: contact at brice.xyz (Brice Parent) Date: Wed, 26 Sep 2018 09:43:46 +0200 Subject: [Python-ideas] "while:" for the loop In-Reply-To: References: Message-ID: <339c12fe-6d5c-4e59-52c9-da64f97c308b@brice.xyz>

On 26/09/2018 at 05:36, Chris Angelico wrote: > On Wed, Sep 26, 2018 at 1:29 PM Mikhail V wrote: >> On Wed, Sep 26, 2018 at 5:38 AM Chris Angelico wrote: >>> >>> I like saying while "something": where the string describes the loop's >>> real condition. For instance, while "moar data": if reading from a >>> socket, or while "not KeyboardInterrupt": if the loop is meant to be >>> halted by SIGINT. >>> >>> ChrisA >> if doing so, would not it be more practical >> to write it as an in-line comment then? >> with new syntax it could be like this: >> """ >> while: # not KeyboardInterrupt >> asd asd asd >> asd asd asd >> asd asd asd >> """ >> Similar effect, but I would find it better at least because it would >> be highlighted as a comment and not as a string, + no quotes noise. > A comment is not better than an inline condition, no. I *want* it to > be highlighted as part of the code, not as a comment. Because it isn't > a comment - it's a loop condition.

For what it's worth, I'm not a fan of either solution. In both cases (string or comment), KeyboardInterrupt seems to be the only way to get out of the loop, which, even if it were the case, would break the DRY idea, because if you add in the future a reason to break out of the loop, you'd have to write the condition+break and update the comment/string to still be consistent.

I don't think `while True:` is unclear. If you think about it, True will always evaluate positively (right??), so it can't be the hardest part of learning Python. But in some cases, mostly when I work on tiny micropython projects that I share with non-python experts, where I avoid complicated code fragments like comprehensions, I usually use `while "forever":` or `while FOREVER:` (where FOREVER was set to True before). In these cases, I don't need them to understand exactly why the loop is indeed infinite, I just want them to know it is. But all these solutions are available right now, without any syntax change.

About the original proposal, even though I'm not a native English speaker, writing `while:` seems like a wobbling sentence, we are waiting for it to be completed. My mind says "while what?" 
and tries to find out if it's infinite or if it has to find the condition elsewhere in the code (like some kind of do...until or do...while loop). -Brice

From marko.ristin at gmail.com Wed Sep 26 03:51:20 2018 From: marko.ristin at gmail.com (Marko Ristin-Kaufmann) Date: Wed, 26 Sep 2018 09:51:20 +0200 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID:

Hi Chris,

It's easy to say that they're boolean expressions. But that's like > saying that unit tests are just a bunch of boolean expressions too. > Why do we have lots of different forms of test, rather than just a big > fat "assert this and this and this and this and this and this"? > Because the key to unit testing is not "boolean expressions", it's a > language that can usefully describe what it is we're testing. > Contracts aren't just boolean expressions - they're a language (or a > mini-language) that lets you define WHAT the contract entails. >

Sorry, I misunderstood you. You are probably referring to knowing the terms like "preconditions, postconditions, invariants, strengthening/weakening", right? In that case, yes, I agree, I presuppose that readers are familiar with the concepts of DbC. Otherwise, of course, it makes no sense to use DbC if you assume nobody could actually figure out what it is :). But that's not the objection that is made most often -- what I really read often is that DbC is not beneficial, not because people are unfamiliar with DbC or cannot learn it, but because reading boolean expressions is hard. That is what I was referring to in my original message.

> Thanks for this clarification (and the download-yt example)! I actually > only had packages-as-libraries in mind, not the executable scripts; my > mistake. So, yes, "any Pypi package" should be reformulated to "any library > on Pypi" (meant to be used by a wider audience than the developers > themselves). > > > > Okay. Even with that qualification, though, I still think that not > every library will benefit from this. For example, matplotlib's > plt.show() method guarantees that... a plot will be shown, and the > user will have dismissed it, before it returns. Unless you're inside > Jupyter/iPython, in which case it's different. Or if you're in certain > other environments, in which case it's different again. How do you > define the contract for something that is fundamentally interactive? > > You can define very weak contracts. For instance, input() guarantees > that its return value is a string. Great! DbC doing the job of type > annotations. Can you guarantee anything else about that string? Is > there anything else useful that can be spelled easily? >

In this case, no, I would not add any formal contracts to the function. Not all contracts can be formulated, and not all contracts are even meaningful. I suppose interactive functions are indeed a case where it is not possible. If any string can be returned, then the contract is empty.

However, contracts can be useful when testing the GUI -- often it is difficult to script the user behavior. What many people do is record a session and re-play it. If there is a bug, fix it. Then re-record. While writing unit tests for a GUI is hard, since the GUI changes rapidly during development and formally scripting the user behavior is tedious, DbC might be an alternative where you specify as much as you can, and then just re-run through the session. This implies, of course, a human tester.
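To sketch what "specify as much as you can" might look like, here is an untested toy example (the dialog function and its guarantees are invented, not taken from any real GUI toolkit):

from typing import Optional
from icontract import post

# The interesting part -- "the user picked a sensible file in the dialog" --
# cannot be formalized, but a few weak postconditions can still be checked
# automatically on every replayed session.
@post(lambda result: result is None or result.endswith(".csv"),
      "either the dialog was cancelled or a CSV path was chosen")
@post(lambda result: result is None or len(result) > 0)
def ask_user_for_csv_path() -> Optional[str]:
    # opens a file dialog and returns the chosen path, or None on cancel
    ...

Such weak contracts obviously don't replace the recorded session; they only narrow down what needs to be checked by hand.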
Let me take a look at matplotlib show:

matplotlib.pyplot.show(*args, **kw)

Display a figure. When running in ipython with its pylab mode, display all figures and return to the ipython prompt. In non-interactive mode, display all figures and block until the figures have been closed; in interactive mode it has no effect unless figures were created prior to a change from non-interactive to interactive mode (not recommended). In that case it displays the figures but does not block. A single experimental keyword argument, *block*, may be set to True or False to override the blocking behavior described above.

Here are the contracts as a toy example (I'm not familiar at all with the internals of matplotlib. I haven't spent much time really parsing and analyzing the docstring -- as I'll write below, it also confuses me since the docstring is not precise enough.):

* If in ipython with its pylab mode, all figures should be displayed.
* If in non-interactive mode, display all figures and block.

I'm actually confused about what they mean by:

> In non-interactive mode, display all figures and block until the figures > have been closed; in interactive mode *it has no effect* *unless figures > were created prior to a change from non-interactive to interactive mode* > (not recommended). In that case it displays the figures but does not block. > > A single experimental keyword argument, *block*, may be set to True or > False to override the blocking behavior described above. >

If only they spelled that out as a contract :)

"it has no effect": What does not have an effect? The call to the function? Or the setting of some parameter? Or the order of the calls?

"unless ...to interactive mode": this would actually be much more readable as a formal contract. It would imply some refactoring to add a property when a figure was created (before or after the change to interactive mode from non-interactive mode). Right now, it's not very clear how you can test that even yourself as a caller. And what does "(not recommended)" mean? Prohibited?

The blocking behavior and the corresponding argument are hard to parse from the description for me. Writing it as a contract would, IMO, be much more readable. Here is my try at the contracts. Assuming that there is a list of figures, that they have a property "displayed" and that the "State.blocked" global variable refers to whether the interface is blocked or not::

@post(lambda: all(figure.displayed for figure in figures))
@post(lambda: not ipython.in_pylab_mode() or not State.blocked)
@post(lambda: not interactive() or State.blocked)
matplotlib.pyplot.show()

The function that actually displays the figure could ensure that the "displayed" property of the figure is set and that the window associated with the figure is visible.

Cheers, Marko

On Wed, 26 Sep 2018 at 09:19, Chris Angelico wrote: > On Wed, Sep 26, 2018 at 5:10 PM Marko Ristin-Kaufmann > wrote: > > The original objection was that DbC in general is not beneficial; not > that there are lacking tools for it (this probably got lost in the many > messages on this thread). If you assume that there are no suitable tools > for DbC, then yes, DbC is certainly *not *beneficial to any project since > using it will be clumsy and difficult. It's a chicken-and-egg problem, so > we need to assume that there are good tools for DbC in order for it to be > beneficial. > > > >> Disagreed. I would most certainly NOT assume that every reader knows > >> any particular syntax for such contracts. However, this is a weaker > >> point. 
> > > > > > The contracts are written as boolean expressions. While I agree that > many programmers have a hard time with boolean expressions and quantifiers, > I don't see this as a blocker to DbC. There is no other special syntax for > DbC unless we decide to add it to the core language (which I don't see as > probable). What I would like to have is a standard library so that > inter-library interactions based on contracts are possible and an ecosystem > could emerge around it. > > > > It's easy to say that they're boolean expressions. But that's like > saying that unit tests are just a bunch of boolean expressions too. > Why do we have lots of different forms of test, rather than just a big > fat "assert this and this and this and this and this and this"? > Because the key to unit testing is not "boolean expressions", it's a > language that can usefully describe what it is we're testing. > Contracts aren't just boolean expressions - they're a language (or a > mini-language) that lets you define WHAT the contract entails. > > >> You might argue that a large proportion of PyPI projects will be > >> "library-style" packages, where the main purpose is to export a bunch > >> of functions. But even then, I'm not certain that they'd all benefit > >> from DbC. > > > > > > Thanks for this clarification (and the download-yt example)! I actually > only had packages-as-libraries in mind, not the executable scripts; my > mistake. So, yes, "any Pypi package" should be reformulated to "any library > on Pypi" (meant to be used by a wider audience than the developers > themselves). > > > > Okay. Even with that qualification, though, I still think that not > every library will benefit from this. For example, matplotlib's > plt.show() method guarantees that... a plot will be shown, and the > user will have dismissed it, before it returns. Unless you're inside > Jupyter/iPython, in which case it's different. Or if you're in certain > other environments, in which case it's different again. How do you > define the contract for something that is fundamentally interactive? > > You can define very weak contracts. For instance, input() guarantees > that its return value is a string. Great! DbC doing the job of type > annotations. Can you guarantee anything else about that string? Is > there anything else useful that can be spelled easily? > > > I totally agree. The discussion related to DbC in my mind always > revolved around these use cases where type annotations are beneficial as > well. Thanks for pointing that out and I'd like to apologize for the > confusion! For the future discussion, let's focus on these use cases and > please do ignore the rest. I'd still say that there is a plethora of > libraries published on Pypi (Is there a way to find out the stats?). > > > > Ugh.... I would love to say "yes", but I can't. I guess maybe you > could look at a bunch of requirements.txt files and see which things > get dragged in that way? All you'll really get is heuristics at best, > and even that, I don't know how to provide. Sorry. > > ChrisA > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mertz at gnosis.cx Wed Sep 26 03:53:05 2018 From: mertz at gnosis.cx (David Mertz) Date: Wed, 26 Sep 2018 03:53:05 -0400 Subject: [Python-ideas] Fwd: Keyword only argument on function call In-Reply-To: <15167F13-5D53-4721-A64C-58D003738367@killingar.net> References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <951c4837-ca57-4e7b-9502-985a58f5c05c@googlegroups.com> <0CC4B614-3A09-4299-BDA4-2CBB8CA5C3D1@killingar.net> <15167F13-5D53-4721-A64C-58D003738367@killingar.net> Message-ID: On Wed, Sep 26, 2018, 3:19 AM Anders Hovm?ller wrote: > I saw now that I missed the biggest problem with your proposal: yet again > you deliberately throw away errors. I'm talking about making Python code > _less_ error prone, while you seem to want to make it _more_. > Beyond the belligerent tone, is there an actual POINT here? It's the middle of the night and I'm on my tablet. I'm not sure what sort of error, or in what circumstance, my toy code "throws away" errors. Actually saying so rather than playing a coy guessing game would be helpful. > So for CPython 3.6 it's 2.587355/0.003079 = 840x times slower > and pypy: 1.177555/0.002565 = 460x slower > Yes, for functions whose entire body consists of `pass`, adding pretty much any cost to unpacking the arguments will show down the operation a lot. I'm actually sightly surprised the win is not bigger in Pypy than in CPython. I'd kinda expect them to optimize away the entire call when a function call is a NOOP. Anyway, this is 100% consistent with what I said. For functions with actual bodies, the lookup is negligible. It could be made a lot faster, I'm sure, if you wrote `use()` in C. Probably even just by optimizing the Python version (`reach()` doesn't need to be a separate call, for example, it's just better to illustrate that way). Changing the basic syntax of Python to optimize NOOPs really is a non-starter. In general, changing syntax at all to avoid something easily accomplished with existing forms is?and should be?a very high barrier to cross. I haven't used macropy. I should play with it. I'm guessing it could be used to create a zero-cost `use()` that had exactly the same API as my toy `use()` function. If so, you could starting using and publishing a toy version today and provide the optimized version as an alternative also. -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Wed Sep 26 04:36:23 2018 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 26 Sep 2018 09:36:23 +0100 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: On Wed, 26 Sep 2018 at 06:41, Chris Angelico wrote: > > On Wed, Sep 26, 2018 at 2:47 PM Marko Ristin-Kaufmann > wrote: > > > > Hi Chris, > > > >> An extraordinary claim is like "DbC can improve *every single project* > >> on PyPI". That requires a TON of proof. Obviously we won't quibble if > >> you can only demonstrate that 99.95% of them can be improved, but you > >> have to at least show that the bulk of them can. > > > > > > I tried to give the "proof" (not a formal one, though) in my previous message. > > (Formal proof isn't necessary here; we say "extraordinary proof", but > it'd be more accurate to say "extraordinary evidence".) > > > The assumptions are that: > > * There are always contracts, they can be either implicit or explicit. You need always to figure them out before you call a function or use its result. > > Not all code has such contracts. 
You could argue that code which does > not is inferior to code which does, but not everything follows a > strictly-definable pattern. Also, the implicit contracts code currently has are typically pretty loose. What you need to "figure out" is very general. Explicit contracts are typically demonstrated as being relatively strict, and figuring out and writing such contracts is more work than writing code with loose implicit contracts. Whether the trade-off of defining tight explicit contracts once vs inferring a loose implicit contract every time you call the function is worth it, depends on how often the function is called. For most single use (or infrequently used) functions, I'd argue that the trade-off *isn't* worth it. Here's a quick example from the pip codebase: # Retry every half second for up to 3 seconds @retry(stop_max_delay=3000, wait_fixed=500) def rmtree(dir, ignore_errors=False): shutil.rmtree(dir, ignore_errors=ignore_errors, onerror=rmtree_errorhandler) What contract would you put on this code? The things I can think of: 1. dir is a directory: obvious from the name, not worth the runtime cost of checking as shutil.rmtree will do that and we don't want to duplicate work. 2. dir is a string: covered by type declarations, if we used them. No need for contracts 3. ignore_errors is a boolean: covered by type declarations. 4. dir should exist: Checked by shutil.rmtree, don't want to duplicate work. 5. After completion, dir won't exist. Obvious unless we have doubts about what shutil.rmtree does (but that would have a contract too). Also, we don't want the runtime overhead (again). In addition, adding those contracts to the code would expand it significantly, making readability suffer (as it is, rmtree is clearly a thin wrapper around shutil.rmtree). > > * Figuring out contracts by trial-and-error and reading the code (the implementation or the test code) is time consuming and hard. > > Agreed. With provisos. Figuring out contracts in sufficient detail to use the code is *in many cases* simple. For harder cases, agreed. But that's why this is simply a proof that contracts *can* be useful, not that 100% of code would benefit from them. > > * The are tools for formal contracts. > > That's the exact point you're trying to make, so it isn't evidence for > itself. Tools for formal contracts exist as third party in Python, and > if that were good enough for you, we wouldn't be discussing this. > There are no such tools in the standard library or language that make > formal contracts easy. > > > * The contracts written in documentation as human text inevitably rot and they are much harder to maintain than automatically verified formal contracts. > > Agreed. Agreed, if contracts are automatically verified. But when runtime cost comes up, people suggest that contracts can be disabled in production code - which invalidates the "automatically verified" premise. > > * The reader is familiar with formal statements, and hence reading formal statements is faster than reading the code or trial-and-error. > > Disagreed. I would most certainly NOT assume that every reader knows > any particular syntax for such contracts. However, this is a weaker > point. Depends on what "formal statement" means. If it means "short snippet of Python code", then yes, the reader will be familiar. 
But there's only so much you can do in a short snippet of Python, without calling out to other functions (which may or may not be "obvious" in their behavour) so whether it's easier to read a contract is somewhat in conflict with wanting strong contracts. > So I'll give you two and two halves for that. Good enough to make do. > > > I then went on to show why I think, under these assumptions, that formal contracts are superior as a documentation tool and hence beneficial. Do you think that any of these assumptions are wrong? Is there a hole in my logical reasoning presented in my previous message? I would be very grateful for any pointers! > > > > If these assumptions hold and there is no mistake in my reasoning, wouldn't that qualify as a proof? > > > [...] > You might argue that a large proportion of PyPI projects will be > "library-style" packages, where the main purpose is to export a bunch > of functions. But even then, I'm not certain that they'd all benefit > from DbC. Some would, and you've definitely made the case for that; > but I'm still -0.5 on adding anything of the sort to the stdlib, as I > don't yet see that *enough* projects would actually benefit. The argument above, if it's a valid demonstration that all code would benefit from contracts, would *also* imply that every function in the stdlib should have contracts added. Are you proposing that, too, and is your proposal not just for syntax for contracts, but *also* for wholesale addition of contracts to the stdlib? If so, you should be far more explicit that this is what you're proposing, because you'd likely get even more pushback over that sort of churn in the stdlib than over a syntax change to support contracts. Even Guido didn't push that far with type annotations... > People have said the same thing about type checking, too. Would > *every* project on PyPI benefit from MyPy's type checks? No. Syntax > for them was added, not because EVERYONE should use them, but because > SOME will use them, and it's worth having some language support. You > would probably do better to argue along those lines than to try to > claim that every single project ought to be using contracts. Precisely. To answer the original question, "Why is design by contracts not widely adopted?" part of the answer is that I suspect extreme claims like this have put many people off, seeing design by contract as more of an evangelical stance than a practical tool. If it were promoted more as a potentially useful addition to the programmer's toolbox, and less of the solution to every problem, it may have gained more traction. (Similar issues are why people are skeptical of functional programming, and many other tools - even the "strong typing vs weak typing" debate can have a flavour of this "my proposal solves all the world's ills" attitude). Personally, I'm open to the benefits of design by contract. But if I need to buy into a whole philosophy to use it (or engage with its user community) I'll pass. Paul From rosuav at gmail.com Wed Sep 26 04:37:51 2018 From: rosuav at gmail.com (Chris Angelico) Date: Wed, 26 Sep 2018 18:37:51 +1000 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: On Wed, Sep 26, 2018 at 5:51 PM Marko Ristin-Kaufmann wrote: > > Hi Chris, > >> It's easy to say that they're boolean expressions. But that's like >> saying that unit tests are just a bunch of boolean expressions too. 
>> Why do we have lots of different forms of test, rather than just a big >> fat "assert this and this and this and this and this and this"? >> Because the key to unit testing is not "boolean expressions", it's a >> language that can usefully describe what it is we're testing. >> Contracts aren't just boolean expressions - they're a language (or a >> mini-language) that lets you define WHAT the contract entails. > > > Sorry, I misunderstood you. You are probably referring to knowing the terms like "preconditions, postconditions, invariants, strengthening/weakening", right? In that case, yes, I agree, I presuppose that readers are familiar with the concepts of DbC. Otherwise, of course, it makes no sense to use DbC if you assume nobody could actually figure out what it is :). > Let's say you want to define a precondition and postcondition for this function: def fibber(n): return n < 2 ? n : fibber(n-1) + fibber(n-2) What would you specify? Can you say, as a postcondition, that the return value must be a Fibonacci number? Can you say that, for any 'n' greater than about 30, the CPU temperature will have risen? How do you describe those as boolean expressions? The art of the contract depends on being able to adequately define the conditions. > > However, contracts can be useful when testing the GUI -- often it is difficult to script the user behavior. What many people do is record a session and re-play it. If there is a bug, fix it. Then re-record. While writing unit tests for GUI is hard since GUI changes rapidly during development and scripting formally the user behavior is tedious, DbC might be an alternative where you specify as much as you can, and then just re-run through the session. This implies, of course, a human tester. > That doesn't sound like the function's contract. That sounds like a test case - of which you would have multiple, using different "scripted session" inputs and different outputs (some of success, some of expected failure). > Here is my try at the contracts. Assuming that there is a list of figures and that they have a property "displayed" and that "State.blocked" global variable refers to whether the interface is blocked or not:: > @post(lambda: all(figure.displayed for figure in figures) > @post(lambda: not ipython.in_pylab_mode() or not State.blocked) > @post(lambda: not interactive() or State.blocked) > matplotlib.pyplot.show() > There is no such thing as "State.blocked". It blocks. The function *does not return* until the figure has been displayed, and dismissed. There's no way to recognize that inside the function's state. Contracts are great when every function is 100% deterministic and can maintain invariants and/or transform from one set of invariants to another. Contracts are far less clear when the definitions are muddier. ChrisA From rosuav at gmail.com Wed Sep 26 04:44:24 2018 From: rosuav at gmail.com (Chris Angelico) Date: Wed, 26 Sep 2018 18:44:24 +1000 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: On Wed, Sep 26, 2018 at 6:36 PM Paul Moore wrote: > > On Wed, 26 Sep 2018 at 06:41, Chris Angelico wrote: > > > > On Wed, Sep 26, 2018 at 2:47 PM Marko Ristin-Kaufmann > > wrote: > > > * The contracts written in documentation as human text inevitably rot and they are much harder to maintain than automatically verified formal contracts. > > > > Agreed. > > Agreed, if contracts are automatically verified. 
But when runtime cost > comes up, people suggest that contracts can be disabled in production > code - which invalidates the "automatically verified" premise. Even if they're only verified as a dedicated testing pass ("okay, let's run the unit tests, let's run the contract verifier, let's run the type checker, cool, we're good"), they're still better off than unchecked comments/documentation in terms of code rot. That said, though: the contract for a function and the documentation for the function are inextricably linked *already*, and if you let your API docs rot when you make changes that callers need to be aware of, you have failed your callers. Wholesale use of contracts would not remove the need for good documentation; what it might allow is easier version compatibility testing. It gives you a somewhat automated (or at least automatable) tool for checking if two similar libraries (eg version X.Y and version X.Y-1) are compatible with your code. That would be of some value, if it could be trusted; you could quickly run your code through a checker and say "hey, tell me what the oldest version of Python is that will run this", and get back a response without actually running a gigantic test suite - since it could check based on the *diffs* in the contracts rather than the contracts themselves. But that would require a lot of support, all up and down the stack. ChrisA From rosuav at gmail.com Wed Sep 26 04:52:49 2018 From: rosuav at gmail.com (Chris Angelico) Date: Wed, 26 Sep 2018 18:52:49 +1000 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: On Wed, Sep 26, 2018 at 6:37 PM Chris Angelico wrote: > > On Wed, Sep 26, 2018 at 5:51 PM Marko Ristin-Kaufmann > wrote: > > > > Hi Chris, > > > >> It's easy to say that they're boolean expressions. But that's like > >> saying that unit tests are just a bunch of boolean expressions too. > >> Why do we have lots of different forms of test, rather than just a big > >> fat "assert this and this and this and this and this and this"? > >> Because the key to unit testing is not "boolean expressions", it's a > >> language that can usefully describe what it is we're testing. > >> Contracts aren't just boolean expressions - they're a language (or a > >> mini-language) that lets you define WHAT the contract entails. > > > > > > Sorry, I misunderstood you. You are probably referring to knowing the terms like "preconditions, postconditions, invariants, strengthening/weakening", right? In that case, yes, I agree, I presuppose that readers are familiar with the concepts of DbC. Otherwise, of course, it makes no sense to use DbC if you assume nobody could actually figure out what it is :). > > > > Let's say you want to define a precondition and postcondition for this function: > > def fibber(n): > return n < 2 ? n : fibber(n-1) + fibber(n-2) Uhhhhhhhh.... def fibber(n): return n if n < 2 else fibber(n-1) + fibber(n-2) Let's, uhh, pretend that I didn't just mix languages there. For the original, we can say: @post(raises=ProgrammerFailedSanCheckError) ChrisA From p.f.moore at gmail.com Wed Sep 26 04:59:04 2018 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 26 Sep 2018 09:59:04 +0100 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? 
In-Reply-To: References: Message-ID: On Wed, 26 Sep 2018 at 09:45, Chris Angelico wrote: > > On Wed, Sep 26, 2018 at 6:36 PM Paul Moore wrote: > > > > On Wed, 26 Sep 2018 at 06:41, Chris Angelico wrote: > > > > > > On Wed, Sep 26, 2018 at 2:47 PM Marko Ristin-Kaufmann > > > wrote: > > > > * The contracts written in documentation as human text inevitably rot and they are much harder to maintain than automatically verified formal contracts. > > > > > > Agreed. > > > > Agreed, if contracts are automatically verified. But when runtime cost > > comes up, people suggest that contracts can be disabled in production > > code - which invalidates the "automatically verified" premise. > > Even if they're only verified as a dedicated testing pass ("okay, > let's run the unit tests, let's run the contract verifier, let's run > the type checker, cool, we're good"), they're still better off than > unchecked comments/documentation in terms of code rot. Absolutely. But if the contracts are checked at runtime, they are precisely as good as tests - they will flag violations *in any circumstances you check*. That's great, but nothing new. I understood that one of the benefits of contracts was that it would handle cases that you *forgot* to test - like assertions do, in essence - and would need to be active in production (again, like assertions, if we assume we've not explicitly chosen to run with assertions disabled) to get that benefit. There's certainly benefits for the "contracts as additional tests" viewpoint. But whenever that's proposed as what people understand by DbC, the response is "no, it's not like that at all". So going back to the "why isn't DbC more popular" question - because no-one can get a clear handle on whether they are "like tests" or "like assertions" or "something else" :-) Paul From boxed at killingar.net Wed Sep 26 05:12:35 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Wed, 26 Sep 2018 11:12:35 +0200 Subject: [Python-ideas] Fwd: Keyword only argument on function call In-Reply-To: References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <951c4837-ca57-4e7b-9502-985a58f5c05c@googlegroups.com> <0CC4B614-3A09-4299-BDA4-2CBB8CA5C3D1@killingar.net> <15167F13-5D53-4721-A64C-58D003738367@killingar.net> Message-ID: <22B127AB-AA47-46DE-AC99-80CBBEA2C6E5@killingar.net> > I saw now that I missed the biggest problem with your proposal: yet again you deliberately throw away errors. I'm talking about making Python code _less_ error prone, while you seem to want to make it _more_. > > Beyond the belligerent tone, is there an actual POINT here? Yes, there is a point: you keep insisting that I shut up about my ideas and you motivate it by giving first totally broken code, then error prone and slow code and then you are upset that I point out these facts. I think it's a bit much when you complain about the tone after all that. Especially after you wrote "If someone steps out of line of being polite and professional, just ignore it" the 9th of September in this very thread. > It's the middle of the night and I'm on my tablet. Maybe you could just reply later? > I'm not sure what sort of error, or in what circumstance, my toy code "throws away" errors. Actually saying so rather than playing a coy guessing game would be helpful. You explicitly wrote your code so that it tries to pass a local variable "d" that does not exist and the function does not take, and it doesn't crash on that. I guess you forgot? You've done it several times now and I've already pointed this out. 
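To spell out what I mean (this is my own minimal reconstruction of the pattern under discussion, not your exact code), a helper that looks up keyword arguments in the caller's locals and silently falls back to None turns a typo into a wrong value instead of an error:

import inspect

def use(*names):
    # Minimal reconstruction: grab values with the given names from the
    # caller's local variables, defaulting to None when a name is missing.
    caller_locals = inspect.currentframe().f_back.f_locals
    return {name: caller_locals.get(name) for name in names}

def render(*, title=None, body=None):
    return f"{title}: {body}"

def page():
    title = "Hello"
    bodyy = "world"  # note the typo: "bodyy" instead of "body"
    return render(**use("title", "body"))

print(page())  # prints "Hello: None" -- the typo goes unnoticed

With explicit keyword arguments the same typo would be caught immediately, either by the editor or as a NameError at runtime.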
> So for CPython 3.6 it's 2.587355/0.003079 = 840x times slower > and pypy: 1.177555/0.002565 = 460x slower > > Yes, for functions whose entire body consists of `pass`, adding pretty much any cost to unpacking the arguments will show down the operation a lot. If you add sleep(0.001) it's still a factor 1.3! This is NOT a trivial overhead. > Changing the basic syntax of Python to optimize NOOPs really is a non-starter. This is not a belligerent tone you think? > In general, changing syntax at all to avoid something easily accomplished with existing forms is?and should be?a very high barrier to cross. Sure. I'm not arguing that it should be a low barrier, I'm arguing that it's worth it. And I'm trying to discuss alternatives. > I haven't used macropy. I should play with it. I'm guessing it could be used to create a zero-cost `use()` that had exactly the same API as my toy `use()` function. If so, you could starting using and publishing a toy version today and provide the optimized version as an alternative also. Let me quote my mail from yesterday: "3. I have made a sort-of implementation with MacroPy: https://github.com/boxed/macro-kwargs/blob/master/test.py I think this is a dead end, but it was easy to implement and fun to try!" Let me also clarify another point: I wanted to open up the discussion to people who are interested in the general problem and just discuss some ideas. I am not at this point trying to get a PEP through. If that was my agenda, I would have already submitted the PEP. I have not. / Anders -------------- next part -------------- An HTML attachment was scrubbed... URL: From mertz at gnosis.cx Wed Sep 26 05:31:06 2018 From: mertz at gnosis.cx (David Mertz) Date: Wed, 26 Sep 2018 05:31:06 -0400 Subject: [Python-ideas] Fwd: Keyword only argument on function call In-Reply-To: <22B127AB-AA47-46DE-AC99-80CBBEA2C6E5@killingar.net> References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <951c4837-ca57-4e7b-9502-985a58f5c05c@googlegroups.com> <0CC4B614-3A09-4299-BDA4-2CBB8CA5C3D1@killingar.net> <15167F13-5D53-4721-A64C-58D003738367@killingar.net> <22B127AB-AA47-46DE-AC99-80CBBEA2C6E5@killingar.net> Message-ID: On Wed, Sep 26, 2018, 5:12 AM Anders Hovm?ller wrote: > I saw now that I missed the biggest problem with your proposal: yet again >> you deliberately throw away errors. I'm talking about making Python code >> _less_ error prone, while you seem to want to make it _more_. >> > > Beyond the belligerent tone, is there an actual POINT here? > > Yes, there is a point: you keep insisting that I shut up about my ideas > and you motivate it by giving first totally broken code, then error prone > and slow code and then you are upset that I point out these facts. I think > it's a bit much when you complain about the tone after all that. Especially > after you wrote "If someone steps out of line of being polite and > professional, just ignore it" the 9th of September in this very thread. > That's fine. I'm not really as bothered by your belligerent tone as I'm trying to find the point underneath it. I guess... and I'm just guessing from your hints... that you don't like the "default to None" behavior of my *TOY* code. That's fine. It's a throwaway demonstration, not an API I'm attached to. You're new here. You may not understand that, in Python, we have a STRONG, preference for doing things with libraries before changing syntax. The argument that one can do something using existing, available techniques is prima facie weight against new syntax. 
Obviously there ARE times when syntax is added, so the fact isn't an absolute conclusion. But so far, your arguments have only seemed to amount to "I (Anders) like this syntax." The supposed performance win, the brevity, and the hypothetical future tooling, are just hand waving so far. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mertz at gnosis.cx Wed Sep 26 05:40:36 2018 From: mertz at gnosis.cx (David Mertz) Date: Wed, 26 Sep 2018 05:40:36 -0400 Subject: [Python-ideas] Fwd: Keyword only argument on function call In-Reply-To: References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <951c4837-ca57-4e7b-9502-985a58f5c05c@googlegroups.com> <0CC4B614-3A09-4299-BDA4-2CBB8CA5C3D1@killingar.net> <15167F13-5D53-4721-A64C-58D003738367@killingar.net> <22B127AB-AA47-46DE-AC99-80CBBEA2C6E5@killingar.net> Message-ID: Oh, I see that you indeed implemented a macropy version at https://github.com/boxed/macro-kwargs/blob/master/test.py. Other than use() vs grab() as the function name, it's the same thing. Is it true that the macro version has no performance cost? So it's now perfectly straightforward to provide both a function and a macro for grab(), and users can play with that API, right? Without changing Python, programmers can use this "shortcut keyword arguments corresponding to local names." On Wed, Sep 26, 2018, 5:31 AM David Mertz wrote: > On Wed, Sep 26, 2018, 5:12 AM Anders Hovm?ller > wrote: > >> I saw now that I missed the biggest problem with your proposal: yet again >>> you deliberately throw away errors. I'm talking about making Python code >>> _less_ error prone, while you seem to want to make it _more_. >>> >> >> Beyond the belligerent tone, is there an actual POINT here? >> >> Yes, there is a point: you keep insisting that I shut up about my ideas >> and you motivate it by giving first totally broken code, then error prone >> and slow code and then you are upset that I point out these facts. I think >> it's a bit much when you complain about the tone after all that. Especially >> after you wrote "If someone steps out of line of being polite and >> professional, just ignore it" the 9th of September in this very thread. >> > > That's fine. I'm not really as bothered by your belligerent tone as I'm > trying to find the point underneath it. > > I guess... and I'm just guessing from your hints... that you don't like > the "default to None" behavior of my *TOY* code. That's fine. It's a > throwaway demonstration, not an API I'm attached to. > > You're new here. You may not understand that, in Python, we have a STRONG, > preference for doing things with libraries before changing syntax. The > argument that one can do something using existing, available techniques is > prima facie weight against new syntax. Obviously there ARE times when > syntax is added, so the fact isn't an absolute conclusion. > > But so far, your arguments have only seemed to amount to "I (Anders) like > this syntax." The supposed performance win, the brevity, and the > hypothetical future tooling, are just hand waving so far. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From boxed at killingar.net Wed Sep 26 06:44:38 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Wed, 26 Sep 2018 12:44:38 +0200 Subject: [Python-ideas] Fwd: Keyword only argument on function call In-Reply-To: References: <2F0EA733-60FA-41C9-8D00-F032654A25BE@me.com> <951c4837-ca57-4e7b-9502-985a58f5c05c@googlegroups.com> <0CC4B614-3A09-4299-BDA4-2CBB8CA5C3D1@killingar.net> <15167F13-5D53-4721-A64C-58D003738367@killingar.net> <22B127AB-AA47-46DE-AC99-80CBBEA2C6E5@killingar.net> Message-ID: <7C977C15-EF5D-4FA3-B79E-9D295A2FC4AA@killingar.net> > Oh, I see that you indeed implemented a macropy version at https://github.com/boxed/macro-kwargs/blob/master/test.py . > Other than use() vs grab() as the function name, it's the same thing. Well, except that it's import time, and that you do get the tooling on the existence of the local variables. You still don't get any check that the function has the parameters you're trying to match with your keyword arguments so it's a bit of a half measure. Steve had a fun idea of using the syntax foo=? where you could transform ? to the real name at import time. It's similar to the MacroPy solution but can be implemented with a super tiny import hook with an AST transformation, and you get the tooling part the MacroPy version is missing, but of course you lose the parts you get with MacroPy. So again it's a half measure.. just the other half. > Is it true that the macro version has no performance cost? In python 2 pretty much since it's compile time, in python 3 no because MacroPy3 has a bug in how pyc files are cached (they aren't). But even in the python 3 case the performance impact is at import time so not really significant. > So it's now perfectly straightforward to provide both a function and a macro for grab(), and users can play with that API, right? Without changing Python, programmers can use this "shortcut keyword arguments corresponding to local names." Sort of. But for my purposes I don't really think it's a valid approach. I'm working on a 240kloc code base (real lines, not comments and blank lines). I don't think it's a good idea, and I wouldn't be able to sell it ot the team either, to introduce macropy to a significant enough chunk of the code base to make a difference. Plus the tooling problem mentioned above would make this worse than normal kwarg anyway from a robustness point of view. I'm thinking that point 4 of my original list of ideas (PyCharm code folding) is the way to go. This would mean I could change huge chunks of code to the standard python keyword argument syntax and then still get the readability improvement in my editor without affecting anyone else. It has the downside that you don't to see this new syntax in other tools of course, but I think that's fine for trying out the syntax. The biggest problem I see is that I feel rather scared about trying to implement this in PyCharm. I've tried to find the code for soft line breaks to implement a much nicer version of that, but I ended up giving up because I just couldn't find where this happened in the code! My experience with the PyCharm code base is basically "I'm amazed it works at all!". If you know anyone who feels comfortable with the PyCharm code that could point me in the right direction I would of course be very greatful! / Anders -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rebane2001 at gmail.com Wed Sep 26 07:12:47 2018 From: rebane2001 at gmail.com (Jasper Rebane) Date: Wed, 26 Sep 2018 14:12:47 +0300 Subject: [Python-ideas] Add .= as a method return value assignment operator Message-ID: Hi, When using Python, I find myself often using assignment operators, like 'a += 1' instead of 'a = a + 1', which saves me a lot of time and hassle Unfortunately, this doesn't apply to methods, thus we have to write code like this: text = "foo" text = text.replace("foo","bar") # "bar" I propose that we should add '.=' as a method return value assignment operator so we could write the code like this instead: text = "foo" text .= replace("foo","bar") # "bar" This looks cleaner, saves time and makes debugging easier Here are a few more examples: text = " foo " text .= strip() # "foo" text = "foo bar" text .= split(" ") # ['foo', 'bar'] text = b'foo' text .= decode("UTF-8") # "foo" foo = {1,2,3} bar = {2,3,4} foo .= difference(bar) # {1} Rebane -------------- next part -------------- An HTML attachment was scrubbed... URL: From jamtlu at gmail.com Wed Sep 26 08:30:50 2018 From: jamtlu at gmail.com (James Lu) Date: Wed, 26 Sep 2018 08:30:50 -0400 Subject: [Python-ideas] "old" values in postconditions In-Reply-To: References: Message-ID: <13186373-6FB6-4C8B-A8D8-2C3E028CDC3D@gmail.com> I still prefer snapshot, though capture is a good name too. We could use generator syntax and inspect the argument names. Instead of ?a?, perhaps use ?_?. Or maybe use ?A.?, for arguments. Some people might prefer ?P? for parameters, since parameters sometimes means the value received while the argument means the value passed. (#A1) from icontract import snapshot, __ @snapshot(some_func(_.some_argument.some_attr) for some_identifier, _ in __) Or (#A2) @snapshot(some_func(some_argument.some_attr) for some_identifier, _, some_argument in __) ? Or (#A3) @snapshot(lambda some_argument,_,some_identifier: some_func(some_argument.some_attr)) Or (#A4) @snapshot(lambda _,some_identifier: some_func(_.some_argument.some_attr)) @snapshot(lambda _,some_identifier, other_identifier: some_func(_.some_argument.some_attr), other_func(_.self)) I like #A4 the most because it?s fairly DRY and avoids the extra punctuation of @capture(lambda a: {"some_identifier": some_func(a.some_argument.some_attr)}) > On Sep 26, 2018, at 12:23 AM, Marko Ristin-Kaufmann wrote: > > Hi, > > Franklin wrote: >> The name "before" is a confusing name. It's not just something that >> happens before. It's really a pre-`let`, adding names to the scope of >> things after it, but with values taken before the function call. Based >> on that description, other possible names are `prelet`, `letbefore`, >> `predef`, `defpre`, `beforescope`. Better a name that is clearly >> confusing than one that is obvious but misleading. > > James wrote: >> I suggest that instead of ?@before? it?s ?@snapshot? and instead of ?old? it?s ?snapshot?. > > > I like "snapshot", it's a bit clearer than prefixing/postfixing verbs with "pre" which might be misread (e.g., "prelet" has a meaning in Slavic languages and could be subconsciously misread, "predef" implies to me a pre-definition rather than prior-to-definition , "beforescope" is very clear for me, but it might be confusing for others as to what it actually refers to ). What about "@capture" (7 letters for captures versus 8 for snapshot)? I suppose "@let" would be playing with fire if Python with conflicting new keywords since I assume "let" to be one of the candidates. 
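Spelled out on the NovelDict.put example from earlier in the thread, #A4 might read roughly like this (only a sketch of the proposed spelling -- neither this snapshot signature nor the way the captured names reach the postconditions exists in icontract today):

# "_" would give read-only access to the wrapped function's arguments, and
# each extra lambda parameter would name one captured value.
@snapshot(lambda _, old_length: _.self.length())
@snapshot(lambda _, old_get: _.self.get(_.key))
@post(lambda self, key, value: self.get(key) == value)
@post(lambda self, key, old_get, old_length:
      old_get is not None or self.length() == old_length + 1,
      "length increased with a new key")
def put(self, key: str, value: str) -> None:
    ...

The snapshot decorator would only have to inspect the lambda's parameter names to know what to call the captured values.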
> > Actually, I think there is probably no way around a decorator that captures/snapshots the data before the function call with a lambda (or even a separate function). "Old" construct, if we are to parse it somehow from the condition function, would limit us only to shallow copies (and be complex to implement as soon as we are capturing out-of-argument values such as globals etc.). Moreove, what if we don't need shallow copies? I could imagine a dozen of cases where shallow copy is not what the programmer wants: for example, s/he might need to make deep copies, hash or otherwise transform the input data to hold only part of it instead of copying (e.g., so as to allow equality check without a double copy of the data, or capture only the value of certain property transformed in some way). > > I'd still go with the dictionary to allow for this extra freedom. We could have a convention: "a" denotes to the current arguments, and "b" denotes the captured values. It might make an interesting hint that we put "b" before "a" in the condition. You could also interpret "b" as "before" and "a" as "after", but also "a" as "arguments". > > @capture(lambda a: {"some_identifier": some_func(a.some_argument.some_attr)}) > @post(lambda b, a, result: b.some_identifier > result + a.another_argument.another_attr) > def some_func(some_argument: SomeClass, another_argument: AnotherClass) -> SomeResult: > ... > "b" can be omitted if it is not used. Under the hub, all the arguments to the condition would be passed by keywords. > > In case of inheritance, captures would be inherited as well. Hence the library would check at run-time that the returned dictionary with captured values has no identifier that has been already captured, and the linter checks that statically, before running the code. Reading values captured in the parent at the code of the child class might be a bit hard -- but that is case with any inherited methods/properties. In documentation, I'd list all the captures of both ancestor and the current class. > > I'm looking forward to reading your opinion on this and alternative suggestions :) > Marko > >> On Tue, 25 Sep 2018 at 18:12, Franklin? Lee wrote: >> On Sun, Sep 23, 2018 at 2:05 AM Marko Ristin-Kaufmann >> wrote: >> > >> > Hi, >> > >> > (I'd like to fork from a previous thread, "Pre-conditions and post-conditions", since it got long and we started discussing a couple of different things. Let's discuss in this thread the implementation of a library for design-by-contract and how to push it forward to hopefully add it to the standard library one day.) >> > >> > For those unfamiliar with contracts and current state of the discussion in the previous thread, here's a short summary. The discussion started by me inquiring about the possibility to add design-by-contract concepts into the core language. The idea was rejected by the participants mainly because they thought that the merit of the feature does not merit its costs. This is quite debatable and seems to reflect many a discussion about design-by-contract in general. Please see the other thread, "Why is design-by-contract not widely adopted?" if you are interested in that debate. >> > >> > We (a colleague of mine and I) decided to implement a library to bring design-by-contract to Python since we don't believe that the concept will make it into the core language anytime soon and we needed badly a tool to facilitate our work with a growing code base. >> > >> > The library is available at http://github.com/Parquery/icontract. 
The hope is to polish it so that the wider community could use it and once the quality is high enough, make a proposal to add it to the standard Python libraries. We do need a standard library for contracts, otherwise projects with conflicting contract libraries can not integrate (e.g., the contracts can not be inherited between two different contract libraries). >> > >> > So far, the most important bits have been implemented in icontract: >> > >> > Preconditions, postconditions, class invariants >> > Inheritance of the contracts (including strengthening and weakening of the inherited contracts) >> > Informative violation messages (including information about the values involved in the contract condition) >> > Sphinx extension to include contracts in the automatically generated documentation (sphinx-icontract) >> > Linter to statically check that the arguments of the conditions are correct (pyicontract-lint) >> > >> > We are successfully using it in our code base and have been quite happy about the implementation so far. >> > >> > There is one bit still missing: accessing "old" values in the postcondition (i.e., shallow copies of the values prior to the execution of the function). This feature is necessary in order to allow us to verify state transitions. >> > >> > For example, consider a new dictionary class that has "get" and "put" methods: >> > >> > from typing import Optional >> > >> > from icontract import post >> > >> > class NovelDict: >> > def length(self)->int: >> > ... >> > >> > def get(self, key: str) -> Optional[str]: >> > ... >> > >> > @post(lambda self, key, value: self.get(key) == value) >> > @post(lambda self, key: old(self.get(key)) is None and old(self.length()) + 1 == self.length(), >> > "length increased with a new key") >> > @post(lambda self, key: old(self.get(key)) is not None and old(self.length()) == self.length(), >> > "length stable with an existing key") >> > def put(self, key: str, value: str) -> None: >> > ... >> > >> > How could we possible implement this "old" function? >> > >> > Here is my suggestion. I'd introduce a decorator "before" that would allow you to store whatever values in a dictionary object "old" (i.e. an object whose properties correspond to the key/value pairs). The "old" is then passed to the condition. Here is it in code: >> > >> > # omitted contracts for brevity >> > class NovelDict: >> > def length(self)->int: >> > ... >> > >> > # omitted contracts for brevity >> > def get(self, key: str) -> Optional[str]: >> > ... >> > >> > @before(lambda self, key: {"length": self.length(), "get": self.get(key)}) >> > @post(lambda self, key, value: self.get(key) == value) >> > @post(lambda self, key, old: old.get is None and old.length + 1 == self.length(), >> > "length increased with a new key") >> > @post(lambda self, key, old: old.get is not None and old.length == self.length(), >> > "length stable with an existing key") >> > def put(self, key: str, value: str) -> None: >> > ... >> > >> > The linter would statically check that all attributes accessed in "old" have to be defined in the decorator "before" so that attribute errors would be caught early. The current implementation of the linter is fast enough to be run at save time so such errors should usually not happen with a properly set IDE. >> > >> > "before" decorator would also have "enabled" property, so that you can turn it off (e.g., if you only want to run a postcondition in testing). 
The "before" decorators can be stacked so that you can also have a more fine-grained control when each one of them is running (some during test, some during test and in production). The linter would enforce that before's "enabled" is a disjunction of all the "enabled"'s of the corresponding postconditions where the old value appears. >> > >> > Is this a sane approach to "old" values? Any alternative approach you would prefer? What about better naming? Is "before" a confusing name? >> >> The dict can be splatted into the postconditions, so that no special >> name is required. This would require either that the lambdas handle >> **kws, or that their caller inspect them to see what names they take. >> Perhaps add a function to functools which only passes kwargs that fit. >> Then the precondition mechanism can pass `self`, `key`, and `value` as >> kwargs instead of args. >> >> For functions that have *args and **kwargs, it may be necessary to >> pass them to the conditions as args and kwargs instead. >> >> The name "before" is a confusing name. It's not just something that >> happens before. It's really a pre-`let`, adding names to the scope of >> things after it, but with values taken before the function call. Based >> on that description, other possible names are `prelet`, `letbefore`, >> `predef`, `defpre`, `beforescope`. Better a name that is clearly >> confusing than one that is obvious but misleading. >> >> By the way, should the first postcondition be `self.get(key) is >> value`, checking for identity rather than equality? > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From marko.ristin at gmail.com Wed Sep 26 08:40:25 2018 From: marko.ristin at gmail.com (Marko Ristin-Kaufmann) Date: Wed, 26 Sep 2018 14:40:25 +0200 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: Hi Chris and Paul, Let me please answer your messages in one go as they are related. Paul wrote: > For most single use (or infrequently used) functions, I'd argue that the > trade-off *isn't* worth it. > > Here's a quick example from the pip codebase: > > # Retry every half second for up to 3 seconds > @retry(stop_max_delay=3000, wait_fixed=500) > def rmtree(dir, ignore_errors=False): > shutil.rmtree(dir, ignore_errors=ignore_errors, > onerror=rmtree_errorhandler) Absolutely, I agree. If it's a single-use or infrequently used function that hardly anybody uses, it's not worth it. Moreover, if some contracts are harder to figure out than the implementation, then they are probably in most cases not worth the effort, too. Please mind that I said: any *library* would benefit from it, as in the users of any library on pipy would benefit from better, formal and more precise documentation. That doesn't imply that all the contracts need to be specified or that you have to specify the contract for *every *function, or that you omit the documentation altogether. Some contracts are simply too hard to get explicitly. Some are not meaningful even if you could write them down. Some you'd like to add, but run only at test time since too slow in production. 
Some run in production, but are not included in the documentation (*e.g., *to prevent the system to enter a bad state though it is not meaningful for the user to actually *read* that contract). Since contracts are formally written, they can be verified. Human text *can not*. Specifying all the contracts is in most cases *not *meaningful. In my day-to-day programming, I specify contracts on the fly and they help me express formally to the next girl/guy who will use my code what to expect or what not. That does not mean that s/he can just skip the documentation or that contracts describe fully what the function does. They merely help -- help him/her what arguments and results are expected. That *does not mean *that I fully specified all the predicates on the arguments and the result. It's merely a help ? la * "Ah, this argument needs to be bigger than that argument!" * "Ah, the resulting dictionary is shorter than the input list!" * "Ah, the result does not include the input list" * "Ah, this function accepts only files (no directories) and relative paths!" * "Ah, I put the bounding box outside of the image -- something went wrong!" * "Ah, this method allows me to put the bounding box outside of the image and will fill all the outside pixels with black!" *etc.* For example, if I have an object detector operating on a region-of-interest and returning bounding boxes of the objects, the postconditions will not be: "all the bounding boxes are cars", that would impossible. But the postcondition might check that all the bounding boxes are within the region-of-interest or slightly outside, but not completely outside *etc.* Let's be careful not to make a straw-man here, *i.e. *to push DbC *ad absurdum *and then discard it that way. Also, the implicit contracts code currently has are typically pretty loose. > What you need to "figure out" is very general. Explicit contracts are > typically demonstrated as being relatively strict, and figuring out and > writing such contracts is more work than writing code with loose implicit > contracts. Whether the trade-off of defining tight explicit contracts once > vs inferring a loose implicit contract every time you call the function is > worth it, depends on how often the function is called. For most single use > (or infrequently used) functions, I'd argue that the trade-off *isn't* > worth it. I don't believe such an approach would ever be pragmatical, *i.e. *automatic versioning based on the contracts. It might hint it (as a warning: you changed a contract, but did not bump the version), but relying solely on this mechanism to get the versioning would imply that you specified *all *the contracts. Though Bertrand might have always envisioned it as the best state of the world, even he himself repeatedly said that it's better to specify rather 10% than 0% contracts and 20% rather than 10% contracts. Chris wrote: > def fibber(n): > return n if n < 2 else fibber(n-1) + fibber(n-2) It depends who you are writing this function for -- for example, instead of the contract, why not just include the code implementation as the documentation? The only *meaningful* contract I could imagine: @pre n >= 0 Everything else would be just repetition of the function. If you implemented some optimized and complex fibber_optimized() function, then your contracts would probably look like: @pre n >= 0 @post (n < 2 ) or result == fibber_optimized(n-1) + fibber_optimized(n-2) @post not (n < 2) or result == n > Here is my try at the contracts. 
Assuming that there is a list of figures > and that they have a property "displayed" and that "State.blocked" global > variable refers to whether the interface is blocked or not:: > > @post(lambda: all(figure.displayed for figure in figures) > > @post(lambda: not ipython.in_pylab_mode() or not State.blocked) > > @post(lambda: not interactive() or State.blocked) > > matplotlib.pyplot.show() > > > > There is no such thing as "State.blocked". It blocks. The function > *does not return* until the figure has been displayed, and dismissed. > There's no way to recognize that inside the function's state. > > Contracts are great when every function is 100% deterministic and can > maintain invariants and/or transform from one set of invariants to > another. Contracts are far less clear when the definitions are > muddier. > Sorry, it has been ages that I used matplotlib. I thought it meant "blocking" as in "the drawing thread blocks". Since blocking basically means halting the execution-- then the contracts can't help. They are checked *before *and *after *the function executes. They can not solve the halting problem. For that you need formal methods (and I doubt that formal methods would ever work for matplotlib). The contracts *do not check *what happens *during *the execution of the function. They are not meant to do that. Even the invariants of the class are checked *before *and *after the call to public methods *(and the private methods are allowed to break them!). Please mind that this is actually not the argument pro/against the contracts -- you are discussing in this particular case a tool (different to DbC) which should test the behavior *during the execution *of a function. Chirs wrote: > > However, contracts can be useful when testing the GUI -- often it is > difficult to script the user behavior. What many people do is record a > session and re-play it. If there is a bug, fix it. Then re-record. While > writing unit tests for GUI is hard since GUI changes rapidly during > development and scripting formally the user behavior is tedious, DbC might > be an alternative where you specify as much as you can, and then just > re-run through the session. This implies, of course, a human tester. > > > > That doesn't sound like the function's contract. That sounds like a > test case - of which you would have multiple, using different > "scripted session" inputs and different outputs (some of success, some > of expected failure). > Well, yes, you can view it as a testing technique; it assumes that scripting a session is often difficult for GUIs and sometimes harder (since combinatorial) than the implementation itself. What I saw people do is write the contracts, put the program in debug mode and let the human tester test it. Think of it as a random test where only checks are your pre/postconditions. The human annotator mimics a meaningful random walk and uses the application as s/he sees fit. I'm not championing this approach, just noting it here as a potential use case. There's certainly benefits for the "contracts as additional tests" > viewpoint. But whenever that's proposed as what people understand by > DbC, the response is "no, it's not like that at all". 
So going back to > the "why isn't DbC more popular" question - because no-one can get a > clear handle on whether they are "like tests" or "like assertions" or > "something else" :-) I think that it's a tool that you can use for many things: * verifiable documentation * deeper testing (every test case tests also other parts of the system, akin to asserts) * automatic test generation * hand-break akin to asserts I find the first one, verifiable documentation, to be the most useful one in working with my team at the moment. Cheers, Marko On Wed, 26 Sep 2018 at 10:59, Paul Moore wrote: > On Wed, 26 Sep 2018 at 09:45, Chris Angelico wrote: > > > > On Wed, Sep 26, 2018 at 6:36 PM Paul Moore wrote: > > > > > > On Wed, 26 Sep 2018 at 06:41, Chris Angelico wrote: > > > > > > > > On Wed, Sep 26, 2018 at 2:47 PM Marko Ristin-Kaufmann > > > > wrote: > > > > > * The contracts written in documentation as human text inevitably > rot and they are much harder to maintain than automatically verified formal > contracts. > > > > > > > > Agreed. > > > > > > Agreed, if contracts are automatically verified. But when runtime cost > > > comes up, people suggest that contracts can be disabled in production > > > code - which invalidates the "automatically verified" premise. > > > > Even if they're only verified as a dedicated testing pass ("okay, > > let's run the unit tests, let's run the contract verifier, let's run > > the type checker, cool, we're good"), they're still better off than > > unchecked comments/documentation in terms of code rot. > > Absolutely. But if the contracts are checked at runtime, they are > precisely as good as tests - they will flag violations *in any > circumstances you check*. That's great, but nothing new. I understood > that one of the benefits of contracts was that it would handle cases > that you *forgot* to test - like assertions do, in essence - and would > need to be active in production (again, like assertions, if we assume > we've not explicitly chosen to run with assertions disabled) to get > that benefit. > > There's certainly benefits for the "contracts as additional tests" > viewpoint. But whenever that's proposed as what people understand by > DbC, the response is "no, it's not like that at all". So going back to > the "why isn't DbC more popular" question - because no-one can get a > clear handle on whether they are "like tests" or "like assertions" or > "something else" :-) > > Paul > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marko.ristin at gmail.com Wed Sep 26 08:43:41 2018 From: marko.ristin at gmail.com (Marko Ristin-Kaufmann) Date: Wed, 26 Sep 2018 14:43:41 +0200 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? References: Message-ID: P.S. My offer still stands: I would be very glad to annotate with contracts a set of functions you deem representative (*e.g., *from a standard library or from some widely used library). Then we can discuss how these contracts. It would be an inaccurate estimate of the benefits of DbC in Python, but it's at least better than no estimate. We can have as little as 10 functions for the start. Hopefully a couple of other people would join, so then we can even see what the variance of contracts would look like. 
On Wed, 26 Sep 2018 at 14:40, Marko Ristin-Kaufmann wrote: > Hi Chris and Paul, > > Let me please answer your messages in one go as they are related. > > Paul wrote: > >> For most single use (or infrequently used) functions, I'd argue that the >> trade-off *isn't* worth it. >> >> Here's a quick example from the pip codebase: >> >> # Retry every half second for up to 3 seconds >> @retry(stop_max_delay=3000, wait_fixed=500) >> def rmtree(dir, ignore_errors=False): >> shutil.rmtree(dir, ignore_errors=ignore_errors, >> onerror=rmtree_errorhandler) > > > Absolutely, I agree. If it's a single-use or infrequently used function > that hardly anybody uses, it's not worth it. Moreover, if some contracts > are harder to figure out than the implementation, then they are probably in > most cases not worth the effort, too. > > Please mind that I said: any *library* would benefit from it, as in the > users of any library on pipy would benefit from better, formal and more > precise documentation. That doesn't imply that all the contracts need to be > specified or that you have to specify the contract for *every *function, > or that you omit the documentation altogether. Some contracts are simply > too hard to get explicitly. Some are not meaningful even if you could write > them down. Some you'd like to add, but run only at test time since too slow > in production. Some run in production, but are not included in the > documentation (*e.g., *to prevent the system to enter a bad state though > it is not meaningful for the user to actually *read* that contract). > > Since contracts are formally written, they can be verified. Human text *can > not*. Specifying all the contracts is in most cases *not *meaningful. In > my day-to-day programming, I specify contracts on the fly and they help me > express formally to the next girl/guy who will use my code what to expect > or what not. That does not mean that s/he can just skip the documentation > or that contracts describe fully what the function does. They merely help > -- help him/her what arguments and results are expected. That *does not > mean *that I fully specified all the predicates on the arguments and the > result. It's merely a help ? la > * "Ah, this argument needs to be bigger than that argument!" > * "Ah, the resulting dictionary is shorter than the input list!" > * "Ah, the result does not include the input list" > * "Ah, this function accepts only files (no directories) and relative > paths!" > * "Ah, I put the bounding box outside of the image -- something went > wrong!" > * "Ah, this method allows me to put the bounding box outside of the image > and will fill all the outside pixels with black!" > > *etc.* > For example, if I have an object detector operating on a > region-of-interest and returning bounding boxes of the objects, the > postconditions will not be: "all the bounding boxes are cars", that would > impossible. But the postcondition might check that all the bounding boxes > are within the region-of-interest or slightly outside, but not completely > outside *etc.* > > Let's be careful not to make a straw-man here, *i.e. *to push DbC *ad > absurdum *and then discard it that way. > > Also, the implicit contracts code currently has are typically pretty >> loose. What you need to "figure out" is very general. Explicit contracts >> are typically demonstrated as being relatively strict, and figuring out and >> writing such contracts is more work than writing code with loose implicit >> contracts. 
Whether the trade-off of defining tight explicit contracts once >> vs inferring a loose implicit contract every time you call the function is >> worth it, depends on how often the function is called. For most single use >> (or infrequently used) functions, I'd argue that the trade-off *isn't* >> worth it. > > > I don't believe such an approach would ever be pragmatical, *i.e. *automatic > versioning based on the contracts. It might hint it (as a warning: you > changed a contract, but did not bump the version), but relying solely on > this mechanism to get the versioning would imply that you specified *all *the > contracts. Though Bertrand might have always envisioned it as the best > state of the world, even he himself repeatedly said that it's better to > specify rather 10% than 0% contracts and 20% rather than 10% contracts. > > Chris wrote: > >> def fibber(n): >> return n if n < 2 else fibber(n-1) + fibber(n-2) > > > It depends who you are writing this function for -- for example, instead > of the contract, why not just include the code implementation as the > documentation? The only *meaningful* contract I could imagine: > @pre n >= 0 > > Everything else would be just repetition of the function. If you > implemented some optimized and complex fibber_optimized() function, then > your contracts would probably look like: > @pre n >= 0 > @post (n < 2 ) or result == fibber_optimized(n-1) + fibber_optimized(n-2) > @post not (n < 2) or result == n > > > Here is my try at the contracts. Assuming that there is a list of >> figures and that they have a property "displayed" and that "State.blocked" >> global variable refers to whether the interface is blocked or not:: >> > @post(lambda: all(figure.displayed for figure in figures) >> > @post(lambda: not ipython.in_pylab_mode() or not State.blocked) >> > @post(lambda: not interactive() or State.blocked) >> > matplotlib.pyplot.show() >> > >> >> There is no such thing as "State.blocked". It blocks. The function >> *does not return* until the figure has been displayed, and dismissed. >> There's no way to recognize that inside the function's state. >> >> Contracts are great when every function is 100% deterministic and can >> maintain invariants and/or transform from one set of invariants to >> another. Contracts are far less clear when the definitions are >> muddier. >> > > Sorry, it has been ages that I used matplotlib. I thought it meant > "blocking" as in "the drawing thread blocks". Since blocking basically > means halting the execution-- then the contracts can't help. They are > checked *before *and *after *the function executes. They can not solve > the halting problem. For that you need formal methods (and I doubt that > formal methods would ever work for matplotlib). The contracts *do not > check *what happens *during *the execution of the function. They are not > meant to do that. Even the invariants of the class are checked *before *and > *after the call to public methods *(and the private methods are allowed > to break them!). > > Please mind that this is actually not the argument pro/against the > contracts -- you are discussing in this particular case a tool (different > to DbC) which should test the behavior *during the execution *of a > function. > > Chirs wrote: > >> > However, contracts can be useful when testing the GUI -- often it is >> difficult to script the user behavior. What many people do is record a >> session and re-play it. If there is a bug, fix it. Then re-record. 
While >> writing unit tests for GUI is hard since GUI changes rapidly during >> development and scripting formally the user behavior is tedious, DbC might >> be an alternative where you specify as much as you can, and then just >> re-run through the session. This implies, of course, a human tester. >> > >> >> That doesn't sound like the function's contract. That sounds like a >> test case - of which you would have multiple, using different >> "scripted session" inputs and different outputs (some of success, some >> of expected failure). >> > > Well, yes, you can view it as a testing technique; it assumes that > scripting a session is often difficult for GUIs and sometimes harder (since > combinatorial) than the implementation itself. What I saw people do is > write the contracts, put the program in debug mode and let the human tester > test it. Think of it as a random test where only checks are your > pre/postconditions. The human annotator mimics a meaningful random walk and > uses the application as s/he sees fit. I'm not championing this approach, > just noting it here as a potential use case. > > There's certainly benefits for the "contracts as additional tests" >> viewpoint. But whenever that's proposed as what people understand by >> DbC, the response is "no, it's not like that at all". So going back to >> the "why isn't DbC more popular" question - because no-one can get a >> clear handle on whether they are "like tests" or "like assertions" or >> "something else" :-) > > > I think that it's a tool that you can use for many things: > * verifiable documentation > * deeper testing (every test case tests also other parts of the system, > akin to asserts) > * automatic test generation > * hand-break akin to asserts > > I find the first one, verifiable documentation, to be the most useful one > in working with my team at the moment. > > Cheers, > Marko > > On Wed, 26 Sep 2018 at 10:59, Paul Moore wrote: > >> On Wed, 26 Sep 2018 at 09:45, Chris Angelico wrote: >> > >> > On Wed, Sep 26, 2018 at 6:36 PM Paul Moore wrote: >> > > >> > > On Wed, 26 Sep 2018 at 06:41, Chris Angelico >> wrote: >> > > > >> > > > On Wed, Sep 26, 2018 at 2:47 PM Marko Ristin-Kaufmann >> > > > wrote: >> > > > > * The contracts written in documentation as human text inevitably >> rot and they are much harder to maintain than automatically verified formal >> contracts. >> > > > >> > > > Agreed. >> > > >> > > Agreed, if contracts are automatically verified. But when runtime cost >> > > comes up, people suggest that contracts can be disabled in production >> > > code - which invalidates the "automatically verified" premise. >> > >> > Even if they're only verified as a dedicated testing pass ("okay, >> > let's run the unit tests, let's run the contract verifier, let's run >> > the type checker, cool, we're good"), they're still better off than >> > unchecked comments/documentation in terms of code rot. >> >> Absolutely. But if the contracts are checked at runtime, they are >> precisely as good as tests - they will flag violations *in any >> circumstances you check*. That's great, but nothing new. I understood >> that one of the benefits of contracts was that it would handle cases >> that you *forgot* to test - like assertions do, in essence - and would >> need to be active in production (again, like assertions, if we assume >> we've not explicitly chosen to run with assertions disabled) to get >> that benefit. >> >> There's certainly benefits for the "contracts as additional tests" >> viewpoint. 
But whenever that's proposed as what people understand by >> DbC, the response is "no, it's not like that at all". So going back to >> the "why isn't DbC more popular" question - because no-one can get a >> clear handle on whether they are "like tests" or "like assertions" or >> "something else" :-) >> >> Paul >> _______________________________________________ >> Python-ideas mailing list >> Python-ideas at python.org >> https://mail.python.org/mailman/listinfo/python-ideas >> Code of Conduct: http://python.org/psf/codeofconduct/ >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Wed Sep 26 08:58:00 2018 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 26 Sep 2018 13:58:00 +0100 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: On Wed, 26 Sep 2018 at 13:40, Marko Ristin-Kaufmann wrote: > Please mind that I said: any library would benefit from it, as in the users of any library on pipy would benefit from better, formal and more precise documentation. That doesn't imply that all the contracts need to be specified or that you have to specify the contract for every function, or that you omit the documentation altogether. Some contracts are simply too hard to get explicitly. Some are not meaningful even if you could write them down. Some you'd like to add, but run only at test time since too slow in production. Some run in production, but are not included in the documentation (e.g., to prevent the system to enter a bad state though it is not meaningful for the user to actually read that contract). > > Since contracts are formally written, they can be verified. Human text can not. Specifying all the contracts is in most cases not meaningful. In my day-to-day programming, I specify contracts on the fly and they help me express formally to the next girl/guy who will use my code what to expect or what not. That does not mean that s/he can just skip the documentation or that contracts describe fully what the function does. They merely help -- help him/her what arguments and results are expected. That does not mean that I fully specified all the predicates on the arguments and the result. It's merely a help ? la > * "Ah, this argument needs to be bigger than that argument!" > * "Ah, the resulting dictionary is shorter than the input list!" > * "Ah, the result does not include the input list" > * "Ah, this function accepts only files (no directories) and relative paths!" > * "Ah, I put the bounding box outside of the image -- something went wrong!" > * "Ah, this method allows me to put the bounding box outside of the image and will fill all the outside pixels with black!" etc. Whoops, I think the rules changed under me again :-( Are we talking here about coding explicit executable contracts in the source code of the library, or using (formally described in terms of (pseudo-)code) contract-style descriptions in the documentation, or simply using the ideas of contract-based thinking in designing and writing code? > For example, if I have an object detector operating on a region-of-interest and returning bounding boxes of the objects, the postconditions will not be: "all the bounding boxes are cars", that would impossible. But the postcondition might check that all the bounding boxes are within the region-of-interest or slightly outside, but not completely outside etc. I understand that you have to pick an appropriate level of strictness when writing contracts. 
That's not ever been in question (at least in my mind). > Let's be careful not to make a straw-man here, i.e. to push DbC ad absurdum and then discard it that way. I'm not trying to push DbC to that point. What I *am* trying to do is make it clear that your arguments (and in particular the fact that you keep insisting that "everything" can benefit) are absurd. If you'd tone back on the extreme claims (as Chris has also asked) then you'd be more likely to get people interested. This is why (as you originally asked) DbC is not more popular - its proponents don't seem to be able to accept that it might not be the solution to every problem. Python users are typically fairly practical, and think in terms of "if it helps in this situation, I'll use it". Expecting them to embrace an argument that demands they accept it applies to *everything* is likely to meet with resistance. You didn't address my question "does this apply to the stdlib"? If it doesn't, your argument has a huge hole - how did you decide that the solution you're describing as "beneficial to all libraries" doesn't improve the stdlib? If it does, then why not demonstrate your case? Give concrete examples - look at some module in the stdlib (for example, pathlib) and show exactly what contracts you'd add to the code, what the result would look like to the library user (who normally doesn't read the source code) and to the core dev (who does). Remember that pathlib (like all of the stdlib) doesn't use type annotations, and that is a *deliberate* choice, mandated by Guido when he first introduced type annotations. So you're not allowed to add contracts like "arg1 is a string", nor are you allowed to say that the lack of type annotations makes the exercise useless. I think I've probably said all I can usefully say here. If you do write up a DbC-enhanced pathlib, I'll be interested in seeing it and may well have more to say as a result. If not, I think I'm just going to file your arguments as "not proven". Paul From marko.ristin at gmail.com Wed Sep 26 08:58:40 2018 From: marko.ristin at gmail.com (Marko Ristin-Kaufmann) Date: Wed, 26 Sep 2018 14:58:40 +0200 Subject: [Python-ideas] "old" values in postconditions In-Reply-To: <13186373-6FB6-4C8B-A8D8-2C3E028CDC3D@gmail.com> References: <13186373-6FB6-4C8B-A8D8-2C3E028CDC3D@gmail.com> Message-ID: Hi James, Actually, following on #A4, you could also write those as multiple decorators: @snpashot(lambda _, some_identifier: some_func(_, some_argument.some_attr) @snpashot(lambda _, other_identifier: other_func(_.self)) Am I correct? "_" looks a bit hard to read for me (implying ignored arguments). Why uppercase "P" and not lowercase (uppercase implies a constant for me)? Then "O" for "old" and "P" for parameters in a condition: @post(lambda O, P: ...) ? It also has the nice property that it follows both the temporal and the alphabet order :) On Wed, 26 Sep 2018 at 14:30, James Lu wrote: > I still prefer snapshot, though capture is a good name too. We could use > generator syntax and inspect the argument names. > > Instead of ?a?, perhaps use ?_?. Or maybe use ?A.?, for arguments. Some > people might prefer ?P? for parameters, since parameters sometimes means > the value received while the argument means the value passed. > > (#A1) > > from icontract import snapshot, __ > @snapshot(some_func(_.some_argument.some_attr) for some_identifier, _ in > __) > > Or (#A2) > > @snapshot(some_func(some_argument.some_attr) for some_identifier, _, > some_argument in __) > > ? 
> Or (#A3) > > @snapshot(lambda some_argument,_,some_identifier: > some_func(some_argument.some_attr)) > > Or (#A4) > > @snapshot(lambda _,some_identifier: some_func(_.some_argument.some_attr)) > @snapshot(lambda _,some_identifier, other_identifier: > some_func(_.some_argument.some_attr), other_func(_.self)) > > I like #A4 the most because it?s fairly DRY and avoids the extra > punctuation of > > @capture(lambda a: {"some_identifier": some_func(a.some_argument.some_attr)}) > > > On Sep 26, 2018, at 12:23 AM, Marko Ristin-Kaufmann < > marko.ristin at gmail.com> wrote: > > Hi, > > Franklin wrote: > >> The name "before" is a confusing name. It's not just something that >> happens before. It's really a pre-`let`, adding names to the scope of >> things after it, but with values taken before the function call. Based >> on that description, other possible names are `prelet`, `letbefore`, >> `predef`, `defpre`, `beforescope`. Better a name that is clearly >> confusing than one that is obvious but misleading. > > > James wrote: > >> I suggest that instead of ?@before? it?s ?@snapshot? and instead of ?old? >> it?s ?snapshot?. > > > I like "snapshot", it's a bit clearer than prefixing/postfixing verbs with > "pre" which might be misread (*e.g., *"prelet" has a meaning in Slavic > languages and could be subconsciously misread, "predef" implies to me a pre- > *definition* rather than prior-to-definition , "beforescope" is very > clear for me, but it might be confusing for others as to what it actually > refers to ). What about "@capture" (7 letters for captures *versus *8 for > snapshot)? I suppose "@let" would be playing with fire if Python with > conflicting new keywords since I assume "let" to be one of the candidates. > > Actually, I think there is probably no way around a decorator that > captures/snapshots the data before the function call with a lambda (or even > a separate function). "Old" construct, if we are to parse it somehow from > the condition function, would limit us only to shallow copies (and be > complex to implement as soon as we are capturing out-of-argument values > such as globals *etc.)*. Moreove, what if we don't need shallow copies? I > could imagine a dozen of cases where shallow copy is not what the > programmer wants: for example, s/he might need to make deep copies, hash or > otherwise transform the input data to hold only part of it instead of > copying (*e.g., *so as to allow equality check without a double copy of > the data, or capture only the value of certain property transformed in some > way). > > I'd still go with the dictionary to allow for this extra freedom. We could > have a convention: "a" denotes to the current arguments, and "b" denotes > the captured values. It might make an interesting hint that we put "b" > before "a" in the condition. You could also interpret "b" as "before" and > "a" as "after", but also "a" as "arguments". > > @capture(lambda a: {"some_identifier": some_func(a.some_argument.some_attr)}) > @post(lambda b, a, result: b.some_identifier > result + a.another_argument.another_attr) > def some_func(some_argument: SomeClass, another_argument: AnotherClass) -> SomeResult: > ... > > "b" can be omitted if it is not used. Under the hub, all the arguments to > the condition would be passed by keywords. > > In case of inheritance, captures would be inherited as well. 
Hence the > library would check at run-time that the returned dictionary with captured > values has no identifier that has been already captured, and the linter > checks that statically, before running the code. Reading values captured in > the parent at the code of the child class might be a bit hard -- but that > is case with any inherited methods/properties. In documentation, I'd list > all the captures of both ancestor and the current class. > > I'm looking forward to reading your opinion on this and alternative > suggestions :) > Marko > > On Tue, 25 Sep 2018 at 18:12, Franklin? Lee > wrote: > >> On Sun, Sep 23, 2018 at 2:05 AM Marko Ristin-Kaufmann >> wrote: >> > >> > Hi, >> > >> > (I'd like to fork from a previous thread, "Pre-conditions and >> post-conditions", since it got long and we started discussing a couple of >> different things. Let's discuss in this thread the implementation of a >> library for design-by-contract and how to push it forward to hopefully add >> it to the standard library one day.) >> > >> > For those unfamiliar with contracts and current state of the discussion >> in the previous thread, here's a short summary. The discussion started by >> me inquiring about the possibility to add design-by-contract concepts into >> the core language. The idea was rejected by the participants mainly because >> they thought that the merit of the feature does not merit its costs. This >> is quite debatable and seems to reflect many a discussion about >> design-by-contract in general. Please see the other thread, "Why is >> design-by-contract not widely adopted?" if you are interested in that >> debate. >> > >> > We (a colleague of mine and I) decided to implement a library to bring >> design-by-contract to Python since we don't believe that the concept will >> make it into the core language anytime soon and we needed badly a tool to >> facilitate our work with a growing code base. >> > >> > The library is available at http://github.com/Parquery/icontract. The >> hope is to polish it so that the wider community could use it and once the >> quality is high enough, make a proposal to add it to the standard Python >> libraries. We do need a standard library for contracts, otherwise projects >> with conflicting contract libraries can not integrate (e.g., the contracts >> can not be inherited between two different contract libraries). >> > >> > So far, the most important bits have been implemented in icontract: >> > >> > Preconditions, postconditions, class invariants >> > Inheritance of the contracts (including strengthening and weakening of >> the inherited contracts) >> > Informative violation messages (including information about the values >> involved in the contract condition) >> > Sphinx extension to include contracts in the automatically generated >> documentation (sphinx-icontract) >> > Linter to statically check that the arguments of the conditions are >> correct (pyicontract-lint) >> > >> > We are successfully using it in our code base and have been quite happy >> about the implementation so far. >> > >> > There is one bit still missing: accessing "old" values in the >> postcondition (i.e., shallow copies of the values prior to the execution of >> the function). This feature is necessary in order to allow us to verify >> state transitions. >> > >> > For example, consider a new dictionary class that has "get" and "put" >> methods: >> > >> > from typing import Optional >> > >> > from icontract import post >> > >> > class NovelDict: >> > def length(self)->int: >> > ... 
>> > >> > def get(self, key: str) -> Optional[str]: >> > ... >> > >> > @post(lambda self, key, value: self.get(key) == value) >> > @post(lambda self, key: old(self.get(key)) is None and >> old(self.length()) + 1 == self.length(), >> > "length increased with a new key") >> > @post(lambda self, key: old(self.get(key)) is not None and >> old(self.length()) == self.length(), >> > "length stable with an existing key") >> > def put(self, key: str, value: str) -> None: >> > ... >> > >> > How could we possible implement this "old" function? >> > >> > Here is my suggestion. I'd introduce a decorator "before" that would >> allow you to store whatever values in a dictionary object "old" (i.e. an >> object whose properties correspond to the key/value pairs). The "old" is >> then passed to the condition. Here is it in code: >> > >> > # omitted contracts for brevity >> > class NovelDict: >> > def length(self)->int: >> > ... >> > >> > # omitted contracts for brevity >> > def get(self, key: str) -> Optional[str]: >> > ... >> > >> > @before(lambda self, key: {"length": self.length(), "get": >> self.get(key)}) >> > @post(lambda self, key, value: self.get(key) == value) >> > @post(lambda self, key, old: old.get is None and old.length + 1 == >> self.length(), >> > "length increased with a new key") >> > @post(lambda self, key, old: old.get is not None and old.length == >> self.length(), >> > "length stable with an existing key") >> > def put(self, key: str, value: str) -> None: >> > ... >> > >> > The linter would statically check that all attributes accessed in "old" >> have to be defined in the decorator "before" so that attribute errors would >> be caught early. The current implementation of the linter is fast enough to >> be run at save time so such errors should usually not happen with a >> properly set IDE. >> > >> > "before" decorator would also have "enabled" property, so that you can >> turn it off (e.g., if you only want to run a postcondition in testing). The >> "before" decorators can be stacked so that you can also have a more >> fine-grained control when each one of them is running (some during test, >> some during test and in production). The linter would enforce that before's >> "enabled" is a disjunction of all the "enabled"'s of the corresponding >> postconditions where the old value appears. >> > >> > Is this a sane approach to "old" values? Any alternative approach you >> would prefer? What about better naming? Is "before" a confusing name? >> >> The dict can be splatted into the postconditions, so that no special >> name is required. This would require either that the lambdas handle >> **kws, or that their caller inspect them to see what names they take. >> Perhaps add a function to functools which only passes kwargs that fit. >> Then the precondition mechanism can pass `self`, `key`, and `value` as >> kwargs instead of args. >> >> For functions that have *args and **kwargs, it may be necessary to >> pass them to the conditions as args and kwargs instead. >> >> The name "before" is a confusing name. It's not just something that >> happens before. It's really a pre-`let`, adding names to the scope of >> things after it, but with values taken before the function call. Based >> on that description, other possible names are `prelet`, `letbefore`, >> `predef`, `defpre`, `beforescope`. Better a name that is clearly >> confusing than one that is obvious but misleading. >> >> By the way, should the first postcondition be `self.get(key) is >> value`, checking for identity rather than equality? 
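Coming back to the question above of how "old" could possibly be implemented: here is a rough, self-contained sketch of the "before"/"old" mechanism I proposed, reduced to the bare minimum. This is not the actual icontract code, and the Counter class is just an invented example:

```python
# Rough sketch only: capture values before the call and hand them to the
# postconditions as "old".  Not the icontract implementation.
import functools
from types import SimpleNamespace

def post(condition):
    """Attach a postcondition; it is checked by the @before wrapper below."""
    def decorator(func):
        func._postconditions = getattr(func, "_postconditions", []) + [condition]
        return func
    return decorator

def before(capture):
    """Evaluate ``capture`` on the arguments prior to the call and expose the
    returned dictionary as the ``old`` namespace to all postconditions."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            old = SimpleNamespace(**capture(*args, **kwargs))
            result = func(*args, **kwargs)
            for condition in getattr(func, "_postconditions", []):
                assert condition(old, result, *args, **kwargs), "Postcondition violated"
            return result
        return wrapper
    return decorator

class Counter:
    """Invented example: the postcondition relates the new state to the old one."""
    def __init__(self):
        self.count = 0

    @before(lambda self, step: {"count": self.count})
    @post(lambda old, result, self, step: self.count == old.count + step)
    def bump(self, step):
        self.count += step
```

In the real library the captured values would be passed by name only to the conditions that ask for them, and inheritance, the "enabled" flags and informative violation messages would have to be handled as discussed above; here every postcondition simply receives old, result and the arguments, but the control flow is essentially this.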
>> > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Wed Sep 26 08:58:47 2018 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 26 Sep 2018 13:58:47 +0100 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: On Wed, 26 Sep 2018 at 13:43, Marko Ristin-Kaufmann wrote: > > P.S. My offer still stands: I would be very glad to annotate with contracts a set of functions you deem representative (e.g., from a standard library or from some widely used library). Then we can discuss how these contracts. It would be an inaccurate estimate of the benefits of DbC in Python, but it's at least better than no estimate. We can have as little as 10 functions for the start. Hopefully a couple of other people would join, so then we can even see what the variance of contracts would look like. Our mails crossed in the ether, sorry. Pathlib. Paul From jamtlu at gmail.com Wed Sep 26 08:59:51 2018 From: jamtlu at gmail.com (James Lu) Date: Wed, 26 Sep 2018 08:59:51 -0400 Subject: [Python-ideas] "while:" for the loop In-Reply-To: <163566c0-705a-e18f-ceed-5f53e32629f0@brice.xyz> References: <339c12fe-6d5c-4e59-52c9-da64f97c308b@brice.xyz> <60CB0CA1-8F5D-4BFA-850A-EEFA896A7169@gmail.com> <163566c0-705a-e18f-ceed-5f53e32629f0@brice.xyz> Message-ID: <2727F0D1-F7D4-476C-AFFB-EE5D6CCFEA8F@gmail.com> repeat could be only considered a keyword when it?s used as a loop Sent from my iPhone > On Sep 26, 2018, at 8:46 AM, Brice Parent wrote: > >> Le 26/09/2018 ? 14:33, James Lu a ?crit : >> what about ?repeat:?? >> >> Sent from my iPhone > I'm not sure it was on purpose, but you replied to me only, and not the entire list. > I believe the adding of a new keyword to do something that is already straightforward (`while True:`), and that doesn't add any new functionality, won't probably ever be accepted. From marko.ristin at gmail.com Wed Sep 26 09:04:43 2018 From: marko.ristin at gmail.com (Marko Ristin-Kaufmann) Date: Wed, 26 Sep 2018 15:04:43 +0200 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: Hi Paul, Are we talking here about coding explicit executable contracts in the > source code of the library, or using (formally described in terms of > (pseudo-)code) contract-style descriptions in the documentation, or > simply using the ideas of contract-based thinking in designing and > writing code? > The current implementation of icontract uses decorators to decorate the functions and classes (and metaclasses to support inheritance of contracts). You have an "enabled" flag which you can set on/off if you want to disable the contract in some situations. We are talking about the explicit executable contracts :). You didn't address my question "does this apply to the stdlib"? If it > doesn't, your argument has a huge hole - how did you decide that the > solution you're describing as "beneficial to all libraries" doesn't > improve the stdlib? If it does, then why not demonstrate your case? 
> Give concrete examples - look at some module in the stdlib (for > example, pathlib) and show exactly what contracts you'd add to the > code, what the result would look like to the library user (who > normally doesn't read the source code) and to the core dev (who does). > Remember that pathlib (like all of the stdlib) doesn't use type > annotations, and that is a *deliberate* choice, mandated by Guido when > he first introduced type annotations. So you're not allowed to add > contracts like "arg1 is a string", nor are you allowed to say that the > lack of type annotations makes the exercise useless. Sorry, I missed that point; the messages are getting long :) Yes, the contracts would make sense in stdlib as well, I'd say. @Chris Angelico would annotating pathlib convince you that contracts are useful? Is it general enough to start with? On Wed, 26 Sep 2018 at 14:58, Paul Moore wrote: > On Wed, 26 Sep 2018 at 13:40, Marko Ristin-Kaufmann > wrote: > > > Please mind that I said: any library would benefit from it, as in the > users of any library on pipy would benefit from better, formal and more > precise documentation. That doesn't imply that all the contracts need to be > specified or that you have to specify the contract for every function, or > that you omit the documentation altogether. Some contracts are simply too > hard to get explicitly. Some are not meaningful even if you could write > them down. Some you'd like to add, but run only at test time since too slow > in production. Some run in production, but are not included in the > documentation (e.g., to prevent the system to enter a bad state though it > is not meaningful for the user to actually read that contract). > > > > Since contracts are formally written, they can be verified. Human text > can not. Specifying all the contracts is in most cases not meaningful. In > my day-to-day programming, I specify contracts on the fly and they help me > express formally to the next girl/guy who will use my code what to expect > or what not. That does not mean that s/he can just skip the documentation > or that contracts describe fully what the function does. They merely help > -- help him/her what arguments and results are expected. That does not mean > that I fully specified all the predicates on the arguments and the result. > It's merely a help ? la > > * "Ah, this argument needs to be bigger than that argument!" > > * "Ah, the resulting dictionary is shorter than the input list!" > > * "Ah, the result does not include the input list" > > * "Ah, this function accepts only files (no directories) and relative > paths!" > > * "Ah, I put the bounding box outside of the image -- something went > wrong!" > > * "Ah, this method allows me to put the bounding box outside of the > image and will fill all the outside pixels with black!" etc. > > Whoops, I think the rules changed under me again :-( > > Are we talking here about coding explicit executable contracts in the > source code of the library, or using (formally described in terms of > (pseudo-)code) contract-style descriptions in the documentation, or > simply using the ideas of contract-based thinking in designing and > writing code? > > > For example, if I have an object detector operating on a > region-of-interest and returning bounding boxes of the objects, the > postconditions will not be: "all the bounding boxes are cars", that would > impossible. 
But the postcondition might check that all the bounding boxes > are within the region-of-interest or slightly outside, but not completely > outside etc. > > I understand that you have to pick an appropriate level of strictness > when writing contracts. That's not ever been in question (at least in > my mind). > > > Let's be careful not to make a straw-man here, i.e. to push DbC ad > absurdum and then discard it that way. > > I'm not trying to push DbC to that point. What I *am* trying to do is > make it clear that your arguments (and in particular the fact that you > keep insisting that "everything" can benefit) are absurd. If you'd > tone back on the extreme claims (as Chris has also asked) then you'd > be more likely to get people interested. This is why (as you > originally asked) DbC is not more popular - its proponents don't seem > to be able to accept that it might not be the solution to every > problem. Python users are typically fairly practical, and think in > terms of "if it helps in this situation, I'll use it". Expecting them > to embrace an argument that demands they accept it applies to > *everything* is likely to meet with resistance. > > You didn't address my question "does this apply to the stdlib"? If it > doesn't, your argument has a huge hole - how did you decide that the > solution you're describing as "beneficial to all libraries" doesn't > improve the stdlib? If it does, then why not demonstrate your case? > Give concrete examples - look at some module in the stdlib (for > example, pathlib) and show exactly what contracts you'd add to the > code, what the result would look like to the library user (who > normally doesn't read the source code) and to the core dev (who does). > Remember that pathlib (like all of the stdlib) doesn't use type > annotations, and that is a *deliberate* choice, mandated by Guido when > he first introduced type annotations. So you're not allowed to add > contracts like "arg1 is a string", nor are you allowed to say that the > lack of type annotations makes the exercise useless. > > I think I've probably said all I can usefully say here. If you do > write up a DbC-enhanced pathlib, I'll be interested in seeing it and > may well have more to say as a result. If not, I think I'm just going > to file your arguments as "not proven". > > Paul > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosuav at gmail.com Wed Sep 26 09:08:35 2018 From: rosuav at gmail.com (Chris Angelico) Date: Wed, 26 Sep 2018 23:08:35 +1000 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: On Wed, Sep 26, 2018 at 11:04 PM Marko Ristin-Kaufmann wrote: > @Chris Angelico would annotating pathlib convince you that contracts are useful? Is it general enough to start with? > I won't promise it'll convince me, but it'll certainly be a starting-point for real discussion. Also, it's a fairly coherent "library-style" module - a nice warm-up. So, when you're ready for a REAL challenge, annotate tkinter :) Actually, annotating the builtins might be worth doing too. Either instead of or after pathlib. ChrisA From p.f.moore at gmail.com Wed Sep 26 09:15:13 2018 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 26 Sep 2018 14:15:13 +0100 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: On Wed, 26 Sep 2018 at 14:04, Marko Ristin-Kaufmann wrote: > > @Chris Angelico would annotating pathlib convince you that contracts are useful? 
Is it general enough to start with? Whoa - be careful here. No-one is saying that "contracts aren't useful" (at least not that I'd heard). We're saying that contracts *aren't a solution for every library in existence* (which was essentially your claim). Annotating pathlib can't convince anyone of that claim. At best (and this is all that I'm imagining) it might give some indication of why you have such an apparently-unreasonable level of confidence in contracts. I do not expect *ever* to believe you when you say that all projects would benefit from contracts. What I might get from such an exercise is a better understanding of what you're imagining when you talk about a real project "using contracts". Best case scenario - I might be persuaded to use them occasionally. Worst case scenario - I find them distracting and unhelpful, and we agree to differ on their value. Paul From marko.ristin at gmail.com Wed Sep 26 09:39:32 2018 From: marko.ristin at gmail.com (Marko Ristin-Kaufmann) Date: Wed, 26 Sep 2018 15:39:32 +0200 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: Hi Paul, Quite a few people replied on this thread and the previous one before the fork that dbc is either useless in Python or at best useful in avionics/niche applications. I'm really only saying that contracts are a superior (complementary) tool to informal documentation, doctests, reading the implementation, reading the tests and trial-and-error. For every library that have the contracts which can be written down formally in a pragmatic way, and when the users of the library are multiple -- then these libraries would benefit from dbc. That's all that I'm saying and it might have come over as arrogant due to limits of the medium. It was not my intention to sound so. I'll have a look at pathlib then. Cheers, Marko Le mer. 26 sept. 2018 ? 15:15, Paul Moore a ?crit : > On Wed, 26 Sep 2018 at 14:04, Marko Ristin-Kaufmann > wrote: > > > > @Chris Angelico would annotating pathlib convince you that contracts > are useful? Is it general enough to start with? > > Whoa - be careful here. No-one is saying that "contracts aren't > useful" (at least not that I'd heard). We're saying that contracts > *aren't a solution for every library in existence* (which was > essentially your claim). Annotating pathlib can't convince anyone of > that claim. At best (and this is all that I'm imagining) it might give > some indication of why you have such an apparently-unreasonable level > of confidence in contracts. > > I do not expect *ever* to believe you when you say that all projects > would benefit from contracts. What I might get from such an exercise > is a better understanding of what you're imagining when you talk about > a real project "using contracts". Best case scenario - I might be > persuaded to use them occasionally. Worst case scenario - I find them > distracting and unhelpful, and we agree to differ on their value. > > Paul > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kohnt at tobiaskohn.ch Wed Sep 26 09:42:16 2018 From: kohnt at tobiaskohn.ch (Tobias Kohn) Date: Wed, 26 Sep 2018 15:42:16 +0200 Subject: [Python-ideas] "while:" for the loop In-Reply-To: <2727F0D1-F7D4-476C-AFFB-EE5D6CCFEA8F@gmail.com> References: <339c12fe-6d5c-4e59-52c9-da64f97c308b@brice.xyz> <60CB0CA1-8F5D-4BFA-850A-EEFA896A7169@gmail.com> <163566c0-705a-e18f-ceed-5f53e32629f0@brice.xyz> <2727F0D1-F7D4-476C-AFFB-EE5D6CCFEA8F@gmail.com> Message-ID: <20180926154216.Horde.QtSSvZtHavu8sEtgr9eTIl3@webmail.tobiaskohn.ch> Hello, Although I doubt it will really make it into Python's grammar, I am all +1 for the idea of having "repeat" as a loop keyword in Python.? Actually, I have been using "repeat" as a keyword in Python for quite some time now, and found it not only convenient, but also a great help in education. My version of "repeat" has two versions.? The primary usage is with an expression that evaluates to a number and specifies how many times the loop is to be repeated: ``` repeat : ??? ``` The second version is without the number, and, indeed, stands for an infinite loop. ``` repeat: ??? ``` I must admit, though, that I had hardly ever reason to use the second form. The implementation is currently built on top of Jython, and uses something like a preprocessor to implement the "repeat"-keyword.? In order to keep some backward compatibility, the preprocessor does check a few cases to determine how it is used, and does not always treat it as a keyword. Even though it is possible to write a parser that detects if "repeat" is used as the keyword for a loop or anything else, it will make the syntax very brittle.? Take, for instance: ``` repeat (3*4) ``` Now, is this supposed to be a function call, or is it a `repeat`-statement with a missing colon at the end?? I therefore would strongly advise against having a word act as a keyword sometimes, and sometimes not.? Probably a better solution would be to have something like `from __features__ import repeat` at the beginning. My reason for introducing "repeat" into Python in the first place was because of didactical considerations.? Even though it might not seem so, variables are indeed a very hard concept in programming, particularly for younger students.? And variables get particularly hard when combined with loops, where the value of a variable changes all the time (this requires a lot of abstract thinking to be properly understood).? Loops alone, on the other hand, are a relatively easy concept that could be used early one.? So, there is this dilemma: how do you teach loops at an early stage without using variables?? That's when I added the "repeat"-keyword for loops, and it has worked marvellously so far :). Cheers, Tobias Quoting James Lu : > repeat could be only considered a keyword when it?s used as a loop > > Sent from my iPhone > >> On Sep 26, 2018, at 8:46 AM, Brice Parent wrote: >> >>> Le 26/09/2018 ? 14:33, James Lu a ?crit : >>> what about ?repeat:?? >>> >>> Sent from my iPhone >> >> I'm not sure it was on purpose, but you replied to me only, and not >> the entire list. >> I believe the adding of a new keyword to do something that is >> already straightforward (`while True:`), and that doesn't add any >> new functionality, won't probably ever be accepted. 
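P.S. For readers curious what such a preprocessing pass looks like: the core of it can be sketched in a few lines of plain Python. This is a deliberately naive, line-based version, far cruder than the Jython-based implementation described above (for instance, it would also rewrite matching lines inside multi-line strings):

```python
import re

# Naive sketch of a "repeat" preprocessor: rewrite "repeat n:" into
# "for _ in range(n):" and a bare "repeat:" into "while True:".
_REPEAT = re.compile(r"^(\s*)repeat\s*(.*?)\s*:\s*$")

def rewrite_repeat(source):
    lines = []
    for line in source.splitlines():
        match = _REPEAT.match(line)
        if match:
            indent, count = match.groups()
            if count:
                line = "{}for _ in range({}):".format(indent, count)
            else:
                line = "{}while True:".format(indent)
        lines.append(line)
    return "\n".join(lines)

print(rewrite_repeat("repeat 3:\n    print('=====')"))
```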
> > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideasCode of > Conduct: http://python.org/psf/codeofconduct/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mehaase at gmail.com Wed Sep 26 09:46:13 2018 From: mehaase at gmail.com (Mark E. Haase) Date: Wed, 26 Sep 2018 09:46:13 -0400 Subject: [Python-ideas] "while:" for the loop In-Reply-To: References: Message-ID: On Tue, Sep 25, 2018 at 8:47 PM Mikhail V wrote: > As for statistics - IIRC someone gave statistics once, but the only > thing I can remember - > "while 1/True" is used quite a lot in the std lib, so the numbers > exceeded my expectation > (because I expected that it's used mostly in algorithms). > This proposal would be a lot stronger if you included those statistics in this thread and showed examples of stdlib code before/after the proposed syntax change. The meaning of "while" in natural language suggests a period of time or condition. It does not mean "forever". Therefore, it's not a good semantic fit. Python has shown a willingness to introduce new keywords: "async" and "await" were added in Python 3.5, which was released about 3 years ago. I imagine that a new loop keyword could be introduced in a backwards-compatible way. If you dislike the look of `while True:`, you can almost always hide it inside a generator and use a `for ? in ?` block with clearer meaning. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rhodri at kynesim.co.uk Wed Sep 26 11:03:16 2018 From: rhodri at kynesim.co.uk (Rhodri James) Date: Wed, 26 Sep 2018 16:03:16 +0100 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: <37e7961b-528d-309f-114f-7194b9051892@kynesim.co.uk> On 25/09/18 21:09, Lee Braiden wrote: > Eh. It's too easy to cry "show me the facts" in any argument. To do that > too often is to reduce all discussion to pendantry. I will resist pointing out the spelling mistake... oh damn :-) The trouble with not crying "show me the facts" is that it is very easy to make beautiful sounding assertions into a vacuum that fall apart the moment you subject them to reality. I'm sure we can all think of politicians of a variety of parties and nationalities with an unfortunate habit of doing exactly that. Marko is making some very big assertions about how much of a benefit Design by Contract is. I flat-out don't believe him. It's up to him to provide some evidence, since he's the one pressing for change. > That verifying data against the contract a function makes code more > reliable should be self evident to anyone with even the most rudimentary > understanding of a function call, let alone a library or large > application. Let's assume that the contracts are meaningful and useful (which I'm pretty sure won't be 100% true; some people are bound to assume that writing contracts means they don't have to think). Assuming that you aren't doing some kind of wide-ranging static analysis (which doesn't seem to be what we're talking about), all that the contracts have bought you is the assurance that *this* invocation of the function with *these* parameters giving *this* result is what you expected. It does not say anything about the reliability of the function in general. It seems to me that a lot of the DbC philosophy seems to assume that functions are complex black-boxes whose behaviours are difficult to grasp. 
In my experience this is very rarely true. Most functions I write are fairly short and easily grokked, even if they do complicated things. That's part of the skill of breaking a problem down, IMHO; if the function is long and horrible-looking, I've already got it wrong and no amount of protective scaffolding like DbC is going to help. > It's the reason why type checking exists, Except Python doesn't type check so much as try operations and see if they work. > and why bounds checking exists, Not in C, not most of the time :-) > and why unit checking exists too. Unit tests are good when you can do them. A fair bit of the embedded code I write isn't very susceptible to automated testing, though, not without spending twice as long writing (and testing!) the test environment as the code. -- Rhodri James *-* Kynesim Ltd From p.f.moore at gmail.com Wed Sep 26 11:25:54 2018 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 26 Sep 2018 16:25:54 +0100 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: <37e7961b-528d-309f-114f-7194b9051892@kynesim.co.uk> References: <37e7961b-528d-309f-114f-7194b9051892@kynesim.co.uk> Message-ID: On Wed, 26 Sep 2018 at 16:04, Rhodri James wrote: > Marko is making some very big assertions about how much of a benefit > Design by Contract is. I flat-out don't believe him. It's up to him to > provide some evidence, since he's the one pressing for change. And to be fair, he's now offered to put some time into producing such a demonstration, so asking for some facts has had a positive outcome. (As well as demonstrating that Marko is happy to back up his position and not just make unsubstantiated assertions - so that in itself is good to know). Paul From mike at selik.org Wed Sep 26 12:23:48 2018 From: mike at selik.org (Michael Selik) Date: Wed, 26 Sep 2018 12:23:48 -0400 Subject: [Python-ideas] "while:" for the loop In-Reply-To: <20180926154216.Horde.QtSSvZtHavu8sEtgr9eTIl3@webmail.tobiaskohn.ch> References: <339c12fe-6d5c-4e59-52c9-da64f97c308b@brice.xyz> <60CB0CA1-8F5D-4BFA-850A-EEFA896A7169@gmail.com> <163566c0-705a-e18f-ceed-5f53e32629f0@brice.xyz> <2727F0D1-F7D4-476C-AFFB-EE5D6CCFEA8F@gmail.com> <20180926154216.Horde.QtSSvZtHavu8sEtgr9eTIl3@webmail.tobiaskohn.ch> Message-ID: On Wed, Sep 26, 2018, 9:42 AM Tobias Kohn wrote: > Although I doubt it will really make it into Python's grammar, I am all +1 > for the idea of having "repeat" as a loop keyword in Python. Actually, I > have been using "repeat" as a keyword in Python for quite some time now, > and found it not only convenient, but also a great help in education. > Guido has repeatedly (haha) rejected this proposal [0]. He has written that he considered it, but decided that in practical code one almost always loops over data, and does not want an arbitrary number of iterations. The range object solves this problem. You might notice that a repeat keyword appears in many graphics-focused child education languages, but not many "serious" languages. [0] I remember at least two times, but can't find them with search at the moment. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mertz at gnosis.cx Wed Sep 26 12:29:48 2018 From: mertz at gnosis.cx (David Mertz) Date: Wed, 26 Sep 2018 12:29:48 -0400 Subject: [Python-ideas] "while:" for the loop In-Reply-To: References: <339c12fe-6d5c-4e59-52c9-da64f97c308b@brice.xyz> <60CB0CA1-8F5D-4BFA-850A-EEFA896A7169@gmail.com> <163566c0-705a-e18f-ceed-5f53e32629f0@brice.xyz> <2727F0D1-F7D4-476C-AFFB-EE5D6CCFEA8F@gmail.com> <20180926154216.Horde.QtSSvZtHavu8sEtgr9eTIl3@webmail.tobiaskohn.ch> Message-ID: We also have: from itertools import count for i in count(): ... If you want to keep track of how close to infinity you are. :-) On Wed, Sep 26, 2018, 12:24 PM Michael Selik wrote: > On Wed, Sep 26, 2018, 9:42 AM Tobias Kohn wrote: > >> Although I doubt it will really make it into Python's grammar, I am all >> +1 for the idea of having "repeat" as a loop keyword in Python. >> Actually, I have been using "repeat" as a keyword in Python for quite some >> time now, and found it not only convenient, but also a great help in >> education. >> > Guido has repeatedly (haha) rejected this proposal [0]. He has written > that he considered it, but decided that in practical code one almost always > loops over data, and does not want an arbitrary number of iterations. The > range object solves this problem. > > You might notice that a repeat keyword appears in many graphics-focused > child education languages, but not many "serious" languages. > > [0] I remember at least two times, but can't find them with search at the > moment. > >> _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jamtlu at gmail.com Wed Sep 26 12:22:46 2018 From: jamtlu at gmail.com (James Lu) Date: Wed, 26 Sep 2018 12:22:46 -0400 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: > It's easy to say that they're boolean expressions. But that's like > saying that unit tests are just a bunch of boolean expressions too. > Why do we have lots of different forms of test, rather than just a big > fat "assert this and this and this and this and this and this"? > Because the key to unit testing is not "boolean expressions", it's a > language that can usefully describe what it is we're testing. > Contracts aren't just boolean expressions - they're a language (or a > mini-language) that lets you define WHAT the contract entails. Please read the earlier discussion from Marko. Contracts are like unit tests but acts as documentation that is right next to the function definition. It?s also much shorter in number of lines to define. You can write a simple unit smoke test to turn a contract into a unit test. Contracts serve unit testing and documentation at the same time. Sent from my iPhone > On Sep 26, 2018, at 3:18 AM, Chris Angelico wrote: > > It's easy to say that they're boolean expressions. But that's like > saying that unit tests are just a bunch of boolean expressions too. > Why do we have lots of different forms of test, rather than just a big > fat "assert this and this and this and this and this and this"? > Because the key to unit testing is not "boolean expressions", it's a > language that can usefully describe what it is we're testing. 
> Contracts aren't just boolean expressions - they're a language (or a > mini-language) that lets you define WHAT the contract entails. From kirillbalunov at gmail.com Wed Sep 26 13:05:51 2018 From: kirillbalunov at gmail.com (Kirill Balunov) Date: Wed, 26 Sep 2018 20:05:51 +0300 Subject: [Python-ideas] "while:" for the loop In-Reply-To: References: <339c12fe-6d5c-4e59-52c9-da64f97c308b@brice.xyz> <60CB0CA1-8F5D-4BFA-850A-EEFA896A7169@gmail.com> <163566c0-705a-e18f-ceed-5f53e32629f0@brice.xyz> <2727F0D1-F7D4-476C-AFFB-EE5D6CCFEA8F@gmail.com> <20180926154216.Horde.QtSSvZtHavu8sEtgr9eTIl3@webmail.tobiaskohn.ch> Message-ID: On Wed, Sep 26, 2018, 19:30 David Mertz wrote: > We also have: > > from itertools import count > for i in count(): > ... > > If you want to keep track of how close to infinity you are. :-) > We also have: from itertools import repeat for i in repeat(...): ... with kind regards, -gdg -------------- next part -------------- An HTML attachment was scrubbed... URL: From dstanek at dstanek.com Wed Sep 26 14:11:40 2018 From: dstanek at dstanek.com (David Stanek) Date: Wed, 26 Sep 2018 14:11:40 -0400 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: On Wed, Sep 26, 2018 at 12:49 AM Marko Ristin-Kaufmann wrote: > >> An extraordinary claim is like "DbC can improve *every single project* >> on PyPI". That requires a TON of proof. Obviously we won't quibble if >> you can only demonstrate that 99.95% of them can be improved, but you >> have to at least show that the bulk of them can. > > > I tried to give the "proof" (not a formal one, though) in my previous message. > I have to admit that I haven't kept up with the discussion today, but I was also hoping to see some proof. I'm genuinely interested in seeing if this is something that can help me and the teams I work with. I was very interested in DbC a long time ago, but never found a way to make it valuable to me. I'd like to see a project from PyPI converted to use DbC. This would make it easy to see the real world difference between an implementation developed using DbC compared to one that is well documented, tested and maybe even includes type hints. Toy or cherry-picked examples don't help me get a feel for it. -- david stanek web: https://dstanek.com twitter: https://twitter.com/dstanek linkedin: https://www.linkedin.com/in/dstanek/ From mikhailwas at gmail.com Wed Sep 26 14:12:09 2018 From: mikhailwas at gmail.com (Mikhail V) Date: Wed, 26 Sep 2018 21:12:09 +0300 Subject: [Python-ideas] "while:" for the loop In-Reply-To: References: Message-ID: On Wed, Sep 26, 2018 at 4:46 PM Mark E. Haase wrote: > > On Tue, Sep 25, 2018 at 8:47 PM Mikhail V wrote: >> >> As for statistics - IIRC someone gave statistics once, but the only >> thing I can remember [...] "while 1/True" is used quite a lot in the > > This proposal would be a lot stronger if you included those statistics > in this thread and showed examples of stdlib code before/after the > proposed syntax change. I can't find the discussion with the statistics, but here two related discussions: https://mail.python.org/pipermail/python-ideas/2014-June/028202.html https://mail.python.org/pipermail/python-ideas/2017-March/045344.html The statistics was in one of 2016 or 2017 discussions, but I have failed to find it. By any chance- maybe someone can do it here? I could do but it would need to exclude comments/ strings from the search - I don't have a such tool at hand. Numbers for "while 1" and "while True" should suffice. 
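Actually, here is a rough sketch of such a count using the tokenize module, so comments and strings are excluded automatically. It only matches the plain "while True:" / "while 1:" spellings (not, say, "while (True):"), error handling is left out, and I have not run it over the stdlib, so take it just as a demonstration that the tool is not hard to write:

```python
# Rough sketch: count "while True:" / "while 1:" statements per file.
# tokenize is used so that matches inside comments and strings are ignored.
import io
import sys
import tokenize

def count_infinite_while(source):
    tokens = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        # Ignore comments and formal newline/indent tokens so that the
        # three-token pattern below is easy to match.
        if tok.type in (tokenize.COMMENT, tokenize.NL, tokenize.NEWLINE,
                        tokenize.INDENT, tokenize.DEDENT):
            continue
        tokens.append(tok)
    count = 0
    for first, second, third in zip(tokens, tokens[1:], tokens[2:]):
        if (first.type == tokenize.NAME and first.string == "while"
                and second.string in ("True", "1")
                and third.string == ":"):
            count += 1
    return count

if __name__ == "__main__":
    total = 0
    for path in sys.argv[1:]:
        with open(path, encoding="utf-8") as handle:
            total += count_infinite_while(handle.read())
    print(total)
```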
> I imagine that a new loop keyword could be introduced in > a backwards-compatible way. Now I don't know whom to believe. In the first linked discussion, e.g. Mr. D'Aprano and Mr. Coghlan (from my impression both from dev team) unambiguosly claimed that adding e.g. "loop" as a loop token lead to necessity of excluding ALL variables and functions/methods named "loop" from all sources. https://mail.python.org/pipermail/python-ideas/2014-June/028206.html So what is the real situation? What I can tell - simple syntax highlighting systems and syntax checkers will not be able to tell the difference between the statement initial token and identifier. Speaking of a new keyword: I can see only one benefit of new keyword in this case: it can be extended with additional usage for "loop N" loop N: print ("=====") # iterates N times And I like the word "loop" slightly more than "repeat". But still - even if it's technically possible without breaking something, I am a bit skeptical, as it's a step towards *bloat*. Keeping less keywords, especially for loops is much more important than 'natural language' semantics. BTW: Here is statistics for the C language: https://stackoverflow.com/questions/20186809/endless-loop-in-c-c It turns out that "for(;;)" is more frequent than "while 1" and "while true". Since Python has "for" than maybe it is possible to use "for" for the infinite loop: for: ... And maybe even it could be made into "for N:"? Now: for N: is SyntaxError, so it might be possible to use "for" for both cases. What do you think? > The meaning of "while" in natural language suggests a period of time or > condition. It does not mean "forever". Therefore, it's not a good semantic fit. I am not much into the "what it means in natural languages" games, as long as it's understandable and already established token. "while True:" is the infinite loop and the "True" or "1" does not add anything practically useful for working with sources, period. Mostly causes unwanted attention, as I expect expression or variable there. So basically it's noise. "while:" on the contrary is visually clean, gives 100% difference with "while Variable:" - it just tells : here is the start of the loop. So the semantics would be: "while:" is an infinite loop, that's it. Mikhail From marko.ristin at gmail.com Wed Sep 26 14:20:37 2018 From: marko.ristin at gmail.com (Marko Ristin-Kaufmann) Date: Wed, 26 Sep 2018 20:20:37 +0200 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: Hi David, I'm writing contracts for pathlib as a proof-of-concept. See pypackagery for an example of a project on pypi that uses contracts: https://pypi.org/project/pypackagery/ The docs with contracts are available at: https://pypackagery.readthedocs.io/en/latest/packagery.html Mind that the interface to icontract might change soon so I'd wait a month or two before it stabilizes. Cheers, Marko Le mer. 26 sept. 2018 ? 20:12, David Stanek a ?crit : > On Wed, Sep 26, 2018 at 12:49 AM Marko Ristin-Kaufmann > wrote: > > > >> An extraordinary claim is like "DbC can improve *every single project* > >> on PyPI". That requires a TON of proof. Obviously we won't quibble if > >> you can only demonstrate that 99.95% of them can be improved, but you > >> have to at least show that the bulk of them can. > > > > > > I tried to give the "proof" (not a formal one, though) in my previous > message. > > > > I have to admit that I haven't kept up with the discussion today, but > I was also hoping to see some proof. 
I'm genuinely interested in > seeing if this is something that can help me and the teams I work > with. I was very interested in DbC a long time ago, but never found a > way to make it valuable to me. > > I'd like to see a project from PyPI converted to use DbC. This would > make it easy to see the real world difference between an > implementation developed using DbC compared to one that is well > documented, tested and maybe even includes type hints. Toy or > cherry-picked examples don't help me get a feel for it. > > -- > david stanek > web: https://dstanek.com > twitter: https://twitter.com/dstanek > linkedin: https://www.linkedin.com/in/dstanek/ > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Wed Sep 26 15:07:43 2018 From: chris.barker at noaa.gov (Chris Barker) Date: Wed, 26 Sep 2018 21:07:43 +0200 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: On Tue, Sep 25, 2018 at 10:10 PM Lee Braiden wrote: > It's the reason why type checking exists, and why bounds checking exists, > and why unit checking exists too. > And yet Python has none of those. They all provide safety, but also place a burden on the developer. So why use Python? I?m not arguing that all those features don?t have their advantages, but I am wondering why add them all to Python, rather than using a language designed for safety? But Python has such a great ecosystem of packages! Yes, it does ? but you might ask yourself why that is. All that being said ? do go and make a nice DbC package for Python ? maybe all us naysayers will grow to love it! But could we please stop cluttering this list with discussion of how great or not great it is? These meta-conversations are getting really tiresome. -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosuav at gmail.com Wed Sep 26 15:36:10 2018 From: rosuav at gmail.com (Chris Angelico) Date: Thu, 27 Sep 2018 05:36:10 +1000 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: On Thu, Sep 27, 2018 at 5:07 AM Chris Barker wrote: > > On Tue, Sep 25, 2018 at 10:10 PM Lee Braiden wrote: >> >> It's the reason why type checking exists, and why bounds checking exists, and why unit checking exists too. > > And yet Python has none of those. They all provide safety, but also place a burden on the developer. Type checking? Depends on your definition. Since objects are typed, Python is safe against the old "I passed an integer when it expected an array" problem (which probably would result in a junk memory read, in C), whether the function's arguments are type-checked or not. And if you want actual type checking, that's available too, just not as part of the core language (and it has language support for its syntax). Bounds checking? Most definitely exists. If you try to subscript a string or list with a value greater than its length, you get an exception. Unit checking? 
If that's "unit testing", that's part of the standard library, so yes, that definitely exists in Python. ChrisA From jamtlu at gmail.com Wed Sep 26 15:56:17 2018 From: jamtlu at gmail.com (James Lu) Date: Wed, 26 Sep 2018 15:56:17 -0400 Subject: [Python-ideas] "old" values in postconditions In-Reply-To: References: <13186373-6FB6-4C8B-A8D8-2C3E028CDC3D@gmail.com> Message-ID: <3C33B6FF-FC19-47D6-AD2A-FC0B17C50A8D@gmail.com> Hi Marko, > Actually, following on #A4, you could also write those as multiple decorators: > @snpashot(lambda _, some_identifier: some_func(_, some_argument.some_attr) > @snpashot(lambda _, other_identifier: other_func(_.self)) Yes, though if we?re talking syntax using kwargs would probably be better. Using ?P? instead of ?_?: (I agree that _ smells of ignored arguments) @snapshot(some_identifier=lambda P: ..., some_identifier2=lambda P: ...) Kwargs has the advantage that you can extend multiple lines without repeating @snapshot, though many lines of @capture would probably be more intuitive since each decorator captures one variable. > Why uppercase "P" and not lowercase (uppercase implies a constant for me)? To me, the capital letters are more prominent and explicit- easier to see when reading code. It also implies its a constant for you- you shouldn?t be modifying it, because then you?d be interfering with the function itself. Side node: maybe it would be good to have an @icontract.nomutate (probably use a different name, maybe @icontract.readonly) that makes sure a method doesn?t mutate its own __dict__ (and maybe the __dict__ of the members of its __dict__). It wouldn?t be necessary to put the decorator on every read only function, just the ones your worried might mutate. Maybe a @icontract.nomutate(param=?paramname?) that ensures the __dict__ of all members of the param name have the same equality or identity before and after. The semantics would need to be worked out. > On Sep 26, 2018, at 8:58 AM, Marko Ristin-Kaufmann wrote: > > Hi James, > > Actually, following on #A4, you could also write those as multiple decorators: > @snpashot(lambda _, some_identifier: some_func(_, some_argument.some_attr) > @snpashot(lambda _, other_identifier: other_func(_.self)) > > Am I correct? > > "_" looks a bit hard to read for me (implying ignored arguments). > > Why uppercase "P" and not lowercase (uppercase implies a constant for me)? Then "O" for "old" and "P" for parameters in a condition: > @post(lambda O, P: ...) > ? > > It also has the nice property that it follows both the temporal and the alphabet order :) > >> On Wed, 26 Sep 2018 at 14:30, James Lu wrote: >> I still prefer snapshot, though capture is a good name too. We could use generator syntax and inspect the argument names. >> >> Instead of ?a?, perhaps use ?_?. Or maybe use ?A.?, for arguments. Some people might prefer ?P? for parameters, since parameters sometimes means the value received while the argument means the value passed. >> >> (#A1) >> >> from icontract import snapshot, __ >> @snapshot(some_func(_.some_argument.some_attr) for some_identifier, _ in __) >> >> Or (#A2) >> >> @snapshot(some_func(some_argument.some_attr) for some_identifier, _, some_argument in __) >> >> ? 
>> Or (#A3) >> >> @snapshot(lambda some_argument,_,some_identifier: some_func(some_argument.some_attr)) >> >> Or (#A4) >> >> @snapshot(lambda _,some_identifier: some_func(_.some_argument.some_attr)) >> @snapshot(lambda _,some_identifier, other_identifier: some_func(_.some_argument.some_attr), other_func(_.self)) >> >> I like #A4 the most because it?s fairly DRY and avoids the extra punctuation of >> >> @capture(lambda a: {"some_identifier": some_func(a.some_argument.some_attr)}) >> >> On Sep 26, 2018, at 12:23 AM, Marko Ristin-Kaufmann wrote: >> >>> Hi, >>> >>> Franklin wrote: >>>> The name "before" is a confusing name. It's not just something that >>>> happens before. It's really a pre-`let`, adding names to the scope of >>>> things after it, but with values taken before the function call. Based >>>> on that description, other possible names are `prelet`, `letbefore`, >>>> `predef`, `defpre`, `beforescope`. Better a name that is clearly >>>> confusing than one that is obvious but misleading. >>> >>> James wrote: >>>> I suggest that instead of ?@before? it?s ?@snapshot? and instead of ?old? it?s ?snapshot?. >>> >>> >>> I like "snapshot", it's a bit clearer than prefixing/postfixing verbs with "pre" which might be misread (e.g., "prelet" has a meaning in Slavic languages and could be subconsciously misread, "predef" implies to me a pre-definition rather than prior-to-definition , "beforescope" is very clear for me, but it might be confusing for others as to what it actually refers to ). What about "@capture" (7 letters for captures versus 8 for snapshot)? I suppose "@let" would be playing with fire if Python with conflicting new keywords since I assume "let" to be one of the candidates. >>> >>> Actually, I think there is probably no way around a decorator that captures/snapshots the data before the function call with a lambda (or even a separate function). "Old" construct, if we are to parse it somehow from the condition function, would limit us only to shallow copies (and be complex to implement as soon as we are capturing out-of-argument values such as globals etc.). Moreove, what if we don't need shallow copies? I could imagine a dozen of cases where shallow copy is not what the programmer wants: for example, s/he might need to make deep copies, hash or otherwise transform the input data to hold only part of it instead of copying (e.g., so as to allow equality check without a double copy of the data, or capture only the value of certain property transformed in some way). >>> >>> I'd still go with the dictionary to allow for this extra freedom. We could have a convention: "a" denotes to the current arguments, and "b" denotes the captured values. It might make an interesting hint that we put "b" before "a" in the condition. You could also interpret "b" as "before" and "a" as "after", but also "a" as "arguments". >>> >>> @capture(lambda a: {"some_identifier": some_func(a.some_argument.some_attr)}) >>> @post(lambda b, a, result: b.some_identifier > result + a.another_argument.another_attr) >>> def some_func(some_argument: SomeClass, another_argument: AnotherClass) -> SomeResult: >>> ... >>> "b" can be omitted if it is not used. Under the hub, all the arguments to the condition would be passed by keywords. >>> >>> In case of inheritance, captures would be inherited as well. Hence the library would check at run-time that the returned dictionary with captured values has no identifier that has been already captured, and the linter checks that statically, before running the code. 
Reading values captured in the parent at the code of the child class might be a bit hard -- but that is case with any inherited methods/properties. In documentation, I'd list all the captures of both ancestor and the current class. >>> >>> I'm looking forward to reading your opinion on this and alternative suggestions :) >>> Marko >>> >>>> On Tue, 25 Sep 2018 at 18:12, Franklin? Lee wrote: >>>> On Sun, Sep 23, 2018 at 2:05 AM Marko Ristin-Kaufmann >>>> wrote: >>>> > >>>> > Hi, >>>> > >>>> > (I'd like to fork from a previous thread, "Pre-conditions and post-conditions", since it got long and we started discussing a couple of different things. Let's discuss in this thread the implementation of a library for design-by-contract and how to push it forward to hopefully add it to the standard library one day.) >>>> > >>>> > For those unfamiliar with contracts and current state of the discussion in the previous thread, here's a short summary. The discussion started by me inquiring about the possibility to add design-by-contract concepts into the core language. The idea was rejected by the participants mainly because they thought that the merit of the feature does not merit its costs. This is quite debatable and seems to reflect many a discussion about design-by-contract in general. Please see the other thread, "Why is design-by-contract not widely adopted?" if you are interested in that debate. >>>> > >>>> > We (a colleague of mine and I) decided to implement a library to bring design-by-contract to Python since we don't believe that the concept will make it into the core language anytime soon and we needed badly a tool to facilitate our work with a growing code base. >>>> > >>>> > The library is available at http://github.com/Parquery/icontract. The hope is to polish it so that the wider community could use it and once the quality is high enough, make a proposal to add it to the standard Python libraries. We do need a standard library for contracts, otherwise projects with conflicting contract libraries can not integrate (e.g., the contracts can not be inherited between two different contract libraries). >>>> > >>>> > So far, the most important bits have been implemented in icontract: >>>> > >>>> > Preconditions, postconditions, class invariants >>>> > Inheritance of the contracts (including strengthening and weakening of the inherited contracts) >>>> > Informative violation messages (including information about the values involved in the contract condition) >>>> > Sphinx extension to include contracts in the automatically generated documentation (sphinx-icontract) >>>> > Linter to statically check that the arguments of the conditions are correct (pyicontract-lint) >>>> > >>>> > We are successfully using it in our code base and have been quite happy about the implementation so far. >>>> > >>>> > There is one bit still missing: accessing "old" values in the postcondition (i.e., shallow copies of the values prior to the execution of the function). This feature is necessary in order to allow us to verify state transitions. >>>> > >>>> > For example, consider a new dictionary class that has "get" and "put" methods: >>>> > >>>> > from typing import Optional >>>> > >>>> > from icontract import post >>>> > >>>> > class NovelDict: >>>> > def length(self)->int: >>>> > ... >>>> > >>>> > def get(self, key: str) -> Optional[str]: >>>> > ... 
>>>> > >>>> > @post(lambda self, key, value: self.get(key) == value) >>>> > @post(lambda self, key: old(self.get(key)) is None and old(self.length()) + 1 == self.length(), >>>> > "length increased with a new key") >>>> > @post(lambda self, key: old(self.get(key)) is not None and old(self.length()) == self.length(), >>>> > "length stable with an existing key") >>>> > def put(self, key: str, value: str) -> None: >>>> > ... >>>> > >>>> > How could we possible implement this "old" function? >>>> > >>>> > Here is my suggestion. I'd introduce a decorator "before" that would allow you to store whatever values in a dictionary object "old" (i.e. an object whose properties correspond to the key/value pairs). The "old" is then passed to the condition. Here is it in code: >>>> > >>>> > # omitted contracts for brevity >>>> > class NovelDict: >>>> > def length(self)->int: >>>> > ... >>>> > >>>> > # omitted contracts for brevity >>>> > def get(self, key: str) -> Optional[str]: >>>> > ... >>>> > >>>> > @before(lambda self, key: {"length": self.length(), "get": self.get(key)}) >>>> > @post(lambda self, key, value: self.get(key) == value) >>>> > @post(lambda self, key, old: old.get is None and old.length + 1 == self.length(), >>>> > "length increased with a new key") >>>> > @post(lambda self, key, old: old.get is not None and old.length == self.length(), >>>> > "length stable with an existing key") >>>> > def put(self, key: str, value: str) -> None: >>>> > ... >>>> > >>>> > The linter would statically check that all attributes accessed in "old" have to be defined in the decorator "before" so that attribute errors would be caught early. The current implementation of the linter is fast enough to be run at save time so such errors should usually not happen with a properly set IDE. >>>> > >>>> > "before" decorator would also have "enabled" property, so that you can turn it off (e.g., if you only want to run a postcondition in testing). The "before" decorators can be stacked so that you can also have a more fine-grained control when each one of them is running (some during test, some during test and in production). The linter would enforce that before's "enabled" is a disjunction of all the "enabled"'s of the corresponding postconditions where the old value appears. >>>> > >>>> > Is this a sane approach to "old" values? Any alternative approach you would prefer? What about better naming? Is "before" a confusing name? >>>> >>>> The dict can be splatted into the postconditions, so that no special >>>> name is required. This would require either that the lambdas handle >>>> **kws, or that their caller inspect them to see what names they take. >>>> Perhaps add a function to functools which only passes kwargs that fit. >>>> Then the precondition mechanism can pass `self`, `key`, and `value` as >>>> kwargs instead of args. >>>> >>>> For functions that have *args and **kwargs, it may be necessary to >>>> pass them to the conditions as args and kwargs instead. >>>> >>>> The name "before" is a confusing name. It's not just something that >>>> happens before. It's really a pre-`let`, adding names to the scope of >>>> things after it, but with values taken before the function call. Based >>>> on that description, other possible names are `prelet`, `letbefore`, >>>> `predef`, `defpre`, `beforescope`. Better a name that is clearly >>>> confusing than one that is obvious but misleading. >>>> >>>> By the way, should the first postcondition be `self.get(key) is >>>> value`, checking for identity rather than equality? 
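To make the mechanism under discussion concrete, here is a minimal, self-contained sketch of capturing "old" values before the call and checking postconditions after it. This is not icontract's actual API: the single checked decorator collapses the proposed @before/@snapshot plus @post pair into one decorator only to keep the sketch short, the condition signatures are assumptions, and a plain assert stands in for the library's error reporting.

import functools
import inspect
from types import SimpleNamespace

def checked(capture, *conditions):
    # capture: called with the bound arguments *before* the call, returns a
    #          dict of values to remember ("old")
    # conditions: each is called with (old, result, args) after the call and
    #             must evaluate to something truthy
    def decorator(func):
        sig = inspect.signature(func)

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            bound = sig.bind(*args, **kwargs)
            bound.apply_defaults()
            old = SimpleNamespace(**capture(**bound.arguments))  # snapshot before the call
            result = func(*args, **kwargs)
            for condition in conditions:
                assert condition(old, result, bound.arguments), "postcondition failed"
            return result

        return wrapper
    return decorator

class NovelDict(dict):
    @checked(
        lambda self, key, value: {"length": len(self), "get": self.get(key)},
        # the stored value is now retrievable
        lambda old, result, a: a["self"].get(a["key"]) == a["value"],
        # a new key grows the dictionary by exactly one entry
        lambda old, result, a: old.get is not None or len(a["self"]) == old.length + 1,
        # an existing key leaves the length unchanged
        lambda old, result, a: old.get is None or len(a["self"]) == old.length,
    )
    def put(self, key, value):
        self[key] = value

d = NovelDict()
d.put("a", "1")  # new key: length goes 0 -> 1, all conditions hold
d.put("a", "2")  # existing key: length stays 1, all conditions hold

Note that the capture function stores only the two values the postconditions actually need (the length and the previous value for the key) instead of copying the whole object, which is exactly the flexibility argued for above.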
>>> _______________________________________________ >>> Python-ideas mailing list >>> Python-ideas at python.org >>> https://mail.python.org/mailman/listinfo/python-ideas >>> Code of Conduct: http://python.org/psf/codeofconduct/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Wed Sep 26 18:52:11 2018 From: robertc at robertcollins.net (Robert Collins) Date: Thu, 27 Sep 2018 10:52:11 +1200 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: On Thu, 27 Sep 2018 at 00:44, Marko Ristin-Kaufmann wrote: > > P.S. My offer still stands: I would be very glad to annotate with contracts a set of functions you deem representative (e.g., from a standard library or from some widely used library). Then we can discuss how these contracts. It would be an inaccurate estimate of the benefits of DbC in Python, but it's at least better than no estimate. We can have as little as 10 functions for the start. Hopefully a couple of other people would join, so then we can even see what the variance of contracts would look like. i think requests would be a very interesting library to annotate. Just had a confused developer wondering why calling an API with session.post(...., data={...some object dict here}) didn't work properly. (Solved by s/data/json), but perhaps illustrative of something this might help with? -Rob From rosuav at gmail.com Wed Sep 26 18:59:23 2018 From: rosuav at gmail.com (Chris Angelico) Date: Thu, 27 Sep 2018 08:59:23 +1000 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: On Thu, Sep 27, 2018 at 8:53 AM Robert Collins wrote: > > On Thu, 27 Sep 2018 at 00:44, Marko Ristin-Kaufmann > wrote: > > > > P.S. My offer still stands: I would be very glad to annotate with contracts a set of functions you deem representative (e.g., from a standard library or from some widely used library). Then we can discuss how these contracts. It would be an inaccurate estimate of the benefits of DbC in Python, but it's at least better than no estimate. We can have as little as 10 functions for the start. Hopefully a couple of other people would join, so then we can even see what the variance of contracts would look like. > > i think requests would be a very interesting library to annotate. Just > had a confused developer wondering why calling an API with > session.post(...., data={...some object dict here}) didn't work > properly. (Solved by s/data/json), but perhaps illustrative of > something this might help with? Not sure what you mean by not working; my suspicion is that it DID work, but didn't do what you thought it did (it would form-encode). Contracts wouldn't help there, because it's fully legal and correct. (Unless session.post() differs from requests.post(), but I doubt that that'd be the case.) ChrisA From robertc at robertcollins.net Wed Sep 26 19:22:14 2018 From: robertc at robertcollins.net (Robert Collins) Date: Thu, 27 Sep 2018 11:22:14 +1200 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: On Thu., 27 Sep. 2018, 11:00 Chris Angelico, wrote: > On Thu, Sep 27, 2018 at 8:53 AM Robert Collins > wrote: > > > > On Thu, 27 Sep 2018 at 00:44, Marko Ristin-Kaufmann > > wrote: > > > > > > P.S. My offer still stands: I would be very glad to annotate with > contracts a set of functions you deem representative (e.g., from a standard > library or from some widely used library). 
Then we can discuss how these > contracts. It would be an inaccurate estimate of the benefits of DbC in > Python, but it's at least better than no estimate. We can have as little as > 10 functions for the start. Hopefully a couple of other people would join, > so then we can even see what the variance of contracts would look like. > > > > i think requests would be a very interesting library to annotate. Just > > had a confused developer wondering why calling an API with > > session.post(...., data={...some object dict here}) didn't work > > properly. (Solved by s/data/json), but perhaps illustrative of > > something this might help with? > > Not sure what you mean by not working; my suspicion is that it DID > work, but didn't do what you thought it did (it would form-encode). > Contracts wouldn't help there, because it's fully legal and correct. > Giving post a data= results in form encoding, as you say. Giving it a json= results in json encoding. Works is a bit of a weasel word. The python code did not crash. However it did not perform as desired, either. And since that is pretty much the entire value proposition of DbC... it seems like a good case to dissect. (Unless session.post() differs from requests.post(), but I doubt that > that'd be the case.) > It doesn't. Rob > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosuav at gmail.com Wed Sep 26 19:50:04 2018 From: rosuav at gmail.com (Chris Angelico) Date: Thu, 27 Sep 2018 09:50:04 +1000 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: On Thu, Sep 27, 2018 at 9:22 AM Robert Collins wrote: > > On Thu., 27 Sep. 2018, 11:00 Chris Angelico, wrote: >> >> On Thu, Sep 27, 2018 at 8:53 AM Robert Collins >> wrote: >> > >> > On Thu, 27 Sep 2018 at 00:44, Marko Ristin-Kaufmann >> > wrote: >> > > >> > > P.S. My offer still stands: I would be very glad to annotate with contracts a set of functions you deem representative (e.g., from a standard library or from some widely used library). Then we can discuss how these contracts. It would be an inaccurate estimate of the benefits of DbC in Python, but it's at least better than no estimate. We can have as little as 10 functions for the start. Hopefully a couple of other people would join, so then we can even see what the variance of contracts would look like. >> > >> > i think requests would be a very interesting library to annotate. Just >> > had a confused developer wondering why calling an API with >> > session.post(...., data={...some object dict here}) didn't work >> > properly. (Solved by s/data/json), but perhaps illustrative of >> > something this might help with? >> >> Not sure what you mean by not working; my suspicion is that it DID >> work, but didn't do what you thought it did (it would form-encode). >> Contracts wouldn't help there, because it's fully legal and correct. > > > Giving post a data= results in form encoding, as you say. > Giving it a json= results in json encoding. > > Works is a bit of a weasel word. > > The python code did not crash. However it did not perform as desired, either. > > And since that is pretty much the entire value proposition of DbC... it seems like a good case to dissect. Okay, but what is the contract that's being violated when you use data= ? How would you define the contract that this would track? That's what I'm asking. 
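One way such a contract could be phrased, purely as an illustration, is to push it into a thin caller-side wrapper that states the encoding intent, so that the data=/json= mix-up fails fast instead of silently form-encoding. post_json below is a hypothetical helper, not anything that exists in requests, and plain asserts stand in for whatever contract machinery one would actually use:

import requests

def post_json(session, url, payload):
    # precondition: the caller really means "send this mapping as a JSON body"
    assert isinstance(payload, dict), "post_json expects a mapping to encode as JSON"
    response = session.post(url, json=payload)
    # postcondition: the prepared request went out with a JSON content type
    assert response.request.headers.get("Content-Type", "").startswith("application/json")
    return response

# session = requests.Session()
# post_json(session, "https://example.com/api", {"some": "object"})

Whether the server then does the right thing with that body is a separate question, a property of the server rather than of this call.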
ChrisA From robertc at robertcollins.net Wed Sep 26 20:07:18 2018 From: robertc at robertcollins.net (Robert Collins) Date: Thu, 27 Sep 2018 12:07:18 +1200 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: On 27 September 2018 at 11:50, Chris Angelico wrote: > Okay, but what is the contract that's being violated when you use > data= ? How would you define the contract that this would track? > That's what I'm asking. I don't know :). From a modelling perspective the correctness of the behaviour here depends on state in the server :/. -Rob From rosuav at gmail.com Wed Sep 26 20:08:56 2018 From: rosuav at gmail.com (Chris Angelico) Date: Thu, 27 Sep 2018 10:08:56 +1000 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: On Thu, Sep 27, 2018 at 10:07 AM Robert Collins wrote: > > On 27 September 2018 at 11:50, Chris Angelico wrote: > > > Okay, but what is the contract that's being violated when you use > > data= ? How would you define the contract that this would track? > > That's what I'm asking. > > I don't know :). From a modelling perspective the correctness of the > behaviour here depends on state in the server :/. Exactly. It's the same problem as with trying to write contracts for matplotlib's plt.show() - its job is fundamentally external, so you can't define preconditions and postconditions for it. This is one of the limitations of contracts, or at least that's my understanding. Maybe I can be proven wrong here? ChrisA From greg.ewing at canterbury.ac.nz Wed Sep 26 19:08:11 2018 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 27 Sep 2018 11:08:11 +1200 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: <5BAC115B.70800@canterbury.ac.nz> Chris Angelico wrote: > For example, matplotlib's > plt.show() method guarantees that... a plot will be shown, and the > user will have dismissed it, before it returns. Unless you're inside > Jupyter/iPython, in which case it's different. Or if you're in certain > other environments, in which case it's different again. How do you > define the contract for something that is fundamentally interactive? Indeed, this is what bothers me about DbC fanaticism. It seems to have been conceived by people thinking about very computer-sciency kinds of problems, e.g. you're implenenting a data structure which has certain important invariants that can be expressed mathematically, and code can be written that checks them reasonably efficiently. All very neat and self-contained. But a lot of real-world code isn't like that -- much of the time, the correctness of a piece of code can't be tested without reference to things outside that piece of code, or even outside of the program altogether. How would you write DbC contracts for a GUI, for example? Most of the postconditions consist of "an image looking something like this appears on the screen", where "this" is a very complicated function of the previous state of the system and the user's input. IMO, the reason DbC hasn't taken off is that it assumes an idealised model of what programming consists of that doesn't match reality well enough. 
-- Greg From python-ideas at mgmiller.net Wed Sep 26 22:25:41 2018 From: python-ideas at mgmiller.net (Mike Miller) Date: Wed, 26 Sep 2018 19:25:41 -0700 Subject: [Python-ideas] "while:" for the loop In-Reply-To: References: Message-ID: <514da80f-153d-3c2e-479b-faf02d661a17@mgmiller.net> On 2018-09-25 17:46, Mikhail V wrote: > I suggest allowing "while:" syntax for the infinite loop. > I.e. instead of "while 1:" and "while True:" notations. I like this idea, and would have use for it. -Mike From mertz at gnosis.cx Wed Sep 26 22:51:31 2018 From: mertz at gnosis.cx (David Mertz) Date: Wed, 26 Sep 2018 22:51:31 -0400 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: I'm not sure what to do with the repeated assurance that that various things "are obvious." It really is NOT the case that me, or Paul Moore, or Hugh Fisher, or Greg Ewing are simply too simple minded to understand what DbC is. The off-putting evangelical quality around DbC is something like the similar evangelical insistence that OOP solves all problems that one tended to hear in the late-1980s to mid-1990s especially. The fact that no one can quite pin down just *what* this special quality of DbC is doesn't help... the reality is that they really ARE NOT much different from assertions, in either practice or theory. Actually, the evangelical OOP thing connects with the evangelical DbC. One of the "advantages" of DbC is often stated as support for inheritance. But the truth is, I hardly ever use inheritance in my code, or at most very shallow inheritance in ways where invariants are mostly not preserved. My use of OOP in Python is basically mixins plus magic methods... and no, it's not because I don't understand OOP (I wrote, for example, some of the most widely read papers on metaclass programming in Python, and designed and wrote about making a?toy, admittedly?method resolution order and OOP system for R; my frequent co-author wrote the canonical paper on C3 linearization, in which I'm acknowledged for my edits, but am not an author). I also wrote an article or two about DbC in Python in the early 2000s. None of this is very new. On Mon, Sep 24, 2018 at 3:47 AM Marko Ristin-Kaufmann < marko.ristin at gmail.com> wrote: > *Obvious benefits* > You both seem to misconceive the contracts. The goal of the > design-by-contract is not reduced to testing the correctness of the code, > as I reiterated already a couple of times in the previous thread. The > contracts document *formally* what the caller and the callee expect and > need to satisfy when using a method, a function or a class. This is meant > for a module that is used by multiple people which are not necessarily > familiar with the code. They are *not *a niche. There are 150K projects > on pypi.org. Each one of them would benefit if annotated with the > contracts. > Greg Ewing, Chris Angelica, and Paul Moore make the point very well that MOST code is simply not the sort of thing that is amenable to having contracts. What one needs to state is either Turing complete or trivially reducible to type declarations. Or the behavior is duck typed loosely enough that it can do the right thing without being able to specify the pre- and post-conditions more precisely than "do what this function does." Very often, that looseness is simply NOT a problem, and is part of what makes Python flexible and successful. 
*Contracts are difficult to read.* > David wrote: > >> To me, as I've said, DbC imposes a very large cost for both writers and >> readers of code. >> > > This is again something that eludes me and I would be really thankful if > you could clarify. Please consider for an example, pypackagery ( > https://pypackagery.readthedocs.io/en/latest/packagery.html) and the > documentation of its function resolve_initial_paths: > The documentation you show below is definitely beautiful I guess that's generated by your Sphinx enhancement, right? There are similar systems for pulling things out of docstrings that follow conventions, but there's a small incremental value in making them live tests at the same time. But reading the end-user documentation is not the *reading* I'm talking about. The reading I mean is looking at the actual source code files. Stating all the invariants you want code to follow makes the definitions of functions/methods longer and more cognitive effort to parse. A ten line function body is likely to be accompanied by 15 lines of invariants that one can state about hopes for the function behavior. That's not always bad... but it's definitely not always good. This is why unit tests are often much better, organizationally. The function definitions can remain relatively concise and clean because most of the time when you are reading or modifying them you don't WANT to think about those precise contracts. Sure, maybe some tool support like folding of contracts could make the burden less; but not less than putting them in entirely separate files. Most of the time, I'd like to look at the simplest expression of the code possible. Then on a different day, or with a different set of eyes, I can look at the completely distinct file that has arbitrarily many unit tests to check invariants I'd like functions to maintain. Yes, the issues are a little bit different in classes and nested hierarchies... but like I said, I never write those, and tend not to like code that does. > packagery.resolve_initial_paths(*initial_paths*) > > Resolve the initial paths of the dependency graph by recursively adding > *.py files beneath given directories. > Parameters: > > *initial_paths* (List[Path]) ? initial paths as absolute paths > Return type: > > List[Path] > Returns: > > list of initial files (*i.e.* no directories) > Requires: > > - all(pth.is_absolute() for pth in initial_paths) > > Ensures: > > - len(result) >= len(initial_paths) if initial_paths else result == [] > - all(pth.is_absolute() for pth in result) > - all(pth.is_file() for pth in result) > > Again, this is a straw man. > *Writing contracts is difficult.* > David wrote: > >> To me, as I've said, DbC imposes a very large cost for both writers and >> readers of code. >> > > The effort of writing contracts include as of now: > * include icontract (or any other design-by-contract library) to setup.py > (or requirements.txt), one line one-off > * include sphinx-icontract to docs/source/conf.py and > docs/source/requirements.txt, two lines, one-off > * write your contracts (usually one line per contract). > It's not much work to add `import icontract` of course. But *writing contracts* is a lot of work. Usually they work best only for pure functions, and only for ones that deal with rather complex data structures (flat is better than nested). In Python, you usually simply do not have to think about the issues that contracts guard against. > I think that ignorance plays a major role here. 
Many people have > misconceptions about the design-by-contract. They just use 2) for more > complex methods, or 3) for rather trivial methods. They are not aware that > it's easy to use the contracts (1) and fear using them for non-rational > reasons (*e.g., *habits). > Again, this evangelical spirit to "burn the heretics" really isn't going to win folks over. No one replying here is ignorant. We all do not see any "silver bullet" in DbC that you advocate. It's a moderately useful style that I wouldn't object to using if a codebase style guide demanded it. But it's rarely the tool I would reach for on my own. I just find it easier to understand and use a combination of assertions and unit tests. Neither is *exactly* the same thing as DbC, but it's pretty darn close in practice. And no, my hesitance isn't because I don't understand boolean logic. I've also studied a bit of graduate logic and model theory in a long ago life. Yours, David... -- Keeping medicines from the bloodstreams of the sick; food from the bellies of the hungry; books from the hands of the uneducated; technology from the underdeveloped; and putting advocates of freedom in prisons. Intellectual property is to the 21st century what the slave trade was to the 16th. -------------- next part -------------- An HTML attachment was scrubbed... URL: From turnbull.stephen.fw at u.tsukuba.ac.jp Wed Sep 26 23:48:45 2018 From: turnbull.stephen.fw at u.tsukuba.ac.jp (Stephen J. Turnbull) Date: Thu, 27 Sep 2018 12:48:45 +0900 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: <23468.21277.928647.16559@turnbull.sk.tsukuba.ac.jp> Robert Collins writes: > I think the underlying problem is that you're treating this as a > logic problem (what does logic say applies here), rather than an > engineering problem (what can we measure and what does it tell us > about whats going on). Pure gold. I'm glad your name is "Robert Collins", or I would have skipped the post and just muted the thread. > My suspicion, for which I have only anecdata, is that its really in c) > today. Kind of where TDD was in the early 2000's (and as I understand > the research, its been shown to be a wash: you do get more tests than > test-last or test-during, That's a pretty big deal though, if it means the shop doesn't need The Big Nurse to prod you to write more tests. > and more tests is correlated with quality and ease of evolution, > but if you add that test coverage in test-during or test-last, you > end up with the same benefits). "No Silver Bullet." QED I think Watts Humphrey should have titled his classic, "Any Discipline for Software Engineering", and subtitled it "'Whatever' Works, as Long as You Actually Do It".[1] All of his books on the practice of software engineering really do say that, by the way. He recommends *starting* with *his* way because it worked for him and many students, so you can just follow the cookbook until "actually doing" becomes natural. Then change to doing what comes more naturally to you. Footnotes: [1] Fred Brooks would have done that, I think. Humphrey was way too stuffy to do that. :-) From greg.ewing at canterbury.ac.nz Thu Sep 27 01:30:45 2018 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 27 Sep 2018 17:30:45 +1200 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? 
In-Reply-To: References: Message-ID: <5BAC6B05.8090404@canterbury.ac.nz> Chris Angelico wrote: > if you let your API > docs rot when you make changes that callers need to be aware of, you > have failed your callers. Yes, I find that documentation auto-generated from code is usually a poor substitute for human-written documentation. Dumping your formally-written contracts into the docs makes the reader reverse-engineer them to figure out what the programmer was really trying to say. Which do you find easier to grok at a glance: all(L[i] <= L[i+1] for i in range(len(L) - 1)) or # The list is now sorted -- Greg From rosuav at gmail.com Thu Sep 27 01:36:09 2018 From: rosuav at gmail.com (Chris Angelico) Date: Thu, 27 Sep 2018 15:36:09 +1000 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: <5BAC6B05.8090404@canterbury.ac.nz> References: <5BAC6B05.8090404@canterbury.ac.nz> Message-ID: On Thu, Sep 27, 2018 at 3:31 PM Greg Ewing wrote: > > Chris Angelico wrote: > > if you let your API > > docs rot when you make changes that callers need to be aware of, you > > have failed your callers. > > Yes, I find that documentation auto-generated from code is > usually a poor substitute for human-written documentation. > Dumping your formally-written contracts into the docs makes > the reader reverse-engineer them to figure out what the > programmer was really trying to say. > > Which do you find easier to grok at a glance: > > all(L[i] <= L[i+1] for i in range(len(L) - 1)) > > or > > # The list is now sorted > Well, with that particular example, you're capitalizing on the known meaning of the English word "sorted". So to be fair, you should do the same in Python: postcondition: L == sorted(L) This would be inappropriate for an actual sorting function, but let's say you're altering the value of an item in a sorted list, and shifting it within that list to get it to the new position, or something like that.python-ideas But yes, in general I do agree: it's frequently cleaner to use an English word than to craft a Python equivalent. ChrisA From turnbull.stephen.fw at u.tsukuba.ac.jp Thu Sep 27 01:48:20 2018 From: turnbull.stephen.fw at u.tsukuba.ac.jp (Stephen J. Turnbull) Date: Thu, 27 Sep 2018 14:48:20 +0900 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: <23468.28452.651240.393516@turnbull.sk.tsukuba.ac.jp> Paul Moore writes: > With provisos. Figuring out contracts in sufficient detail to use > the code is *in many cases* simple. For harder cases, agreed. But > that's why this is simply a proof that contracts *can* be useful, > not that 100% of code would benefit from them. Note that that is not what Marko wrote: he wrote that 100% of projects on PyPI would benefit. Like others, I'd like to see at least a before/after on *one* whole project (that's not tiny, so it has several meaningful APIs and several calls for at least a few of them), to see *how much* of that project actually benefits, and how much of that benefit derives from "more sophisticated than assert" DbC tech. For now, I think the whole thread is technically speaking off-topic, though. Let's stipulate that DbC is a valuable technology that Python programmers should have available. (1) Why is it so valuable that it needs to be in the python.org distribution? We already have asserts for the simplest cases. Won't PyPI do for the full-blown Meyer-style implementation? 
Development discussion for that implementation should take place on GitHub[1] or a similar channel, IMO. (2) We don't have an implementation to include, or if you like there are many PoCs, and there's no good reason to suppose that there will be consensus on the candidate for inclusion in this decade. Again, development discussions for those implementations should take place on GitHub before coming here to discuss (a) which is best for the stdlib, (b) whether it's good enough, and (c) whether it's needed in the stdlib at all. Is there a clear, leading candidate that could be proposed "soon", icontracts maybe? (3) The syntaxes proposed so far require inelegant constructs, like lambdas or strs, so that decorator arguments can be evaluated lazily. Lazy evaluation of arguments is something that newcomers often want after being burned by "def foo(l=[]):". But there are at least two plausible ways to handle this. One is like Lisp macros: "defmacro mfoo(not, one, of, its, args, is, evaluated)". Another would be a marker for args to be returned unevalled: "def foo(eval_me, eval_me_not: unevalled, eval_me_too)".[2] Thus, unevalled arguments *may* be a plausible syntax change that would help support DbC as well as other possibly desirable use cases, but we have no proposal to discuss. Do we? I'm not yet suggesting that this thread *should* be terminated here (and that's not to avoid charges of hypocrisy as I post to other subthreads ;-). But I think we should be continuously aware of the three questions I posed above. Footnotes: [1] Personally, I'd prefer it be GitLab. :-) [2] Yes, I'm teasing the type annotations folks, I doubt this syntax will fly. From turnbull.stephen.fw at u.tsukuba.ac.jp Thu Sep 27 01:56:27 2018 From: turnbull.stephen.fw at u.tsukuba.ac.jp (Stephen J. Turnbull) Date: Thu, 27 Sep 2018 14:56:27 +0900 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: <23468.28939.657597.684412@turnbull.sk.tsukuba.ac.jp> Chris Angelico writes: > Okay, but what is the contract that's being violated when you use > data= ? How would you define the contract that this would track? > That's what I'm asking. The output is input to some other component. In production, that component would not be under your control, perhaps, but in testing it surely is. The contract would be associated with the other component, and it would be "my input is JSON". Alternatively, if "your" component is being used in a protocol that specifies JSON, the contract could be "what I post is JSON". Presumably that contract can't be checked in production, but in testing it could be. If the inputs to your component satisfy their contracts, but JSON isn't coming out, the problem is in your component. Ie, I don't see how DbC can diagnose *how* a component is broken, in general. It *can* localize the breakage, and provide some hints based on the particular contract that is "violated". I think this shows that in a broad class of cases, for existing code, DbC doesn't do much that a developer with a debugger (such as print() ;-) can't already do, and the developer can do it much more flexibly. However, getting meta, > Just had a confused developer wondering why calling an API with > session.post(...., data={...some object dict here}) didn't work > properly. (Solved by s/data/json) session.post seems to be a pretty horrible API. 
You'd think this would either be session.post_json(..., data={...}) or session.post(..., data={...}, format='json') with the former preferred (and session.post deprecated in favor of session.form_encoded; I guess you could also use the latter and require the 'format' argument). How would this help in the DbC context? Your server (if you own it), or your mock server in the test suite, will complain that its contract "my input is JSON" is being violated, because it's explicitly an entry condition, your programmer looks at the component that's supposed to produce JSON, sees "form_encoded" either in the function name or the format argument's value, and the likelihood of confusion is small. The contribution of DbC here is not in the contract itself, but in the discipline of thinking "how would I write this contract so that its violation would point me to the coding error?", which leads to refactoring the 'post' method. Is DbC better than the server doing "assert is_valid_json(input)"? That too works best with the suggested refactoring. Evidently it's not better, but there are things that assert can't do. For example, the server might incrementally parse the JSON, yielding useful subobjects as you go along. Then you can't just assert is_valid_json at the beginning of the response generator; you need to do this at the server level. A DbC toolkit would presumably provide a way to decorate the server with try: server.accept_json_object() except JSONParseError as e: report_contract_violation(e) This is, of course, all imaginary, and I have no idea whether it would work as suggested in practice. It will be interesting to me to see, not only Marko's contracts in a DbC-ized module, but also places where he demands refactoring so that an *informative* contract can be written. Steve From turnbull.stephen.fw at u.tsukuba.ac.jp Thu Sep 27 02:00:15 2018 From: turnbull.stephen.fw at u.tsukuba.ac.jp (Stephen J. Turnbull) Date: Thu, 27 Sep 2018 15:00:15 +0900 Subject: [Python-ideas] "while:" for the loop In-Reply-To: References: Message-ID: <23468.29167.517011.19676@turnbull.sk.tsukuba.ac.jp> Mikhail V writes: > In the first linked discussion, e.g. Mr. D'Aprano and Mr. Coghlan (from my > impression both from dev team) unambiguosly claimed that adding e.g. > "loop" as a loop token lead to necessity of excluding ALL > variables and functions/methods named "loop" from all sources. > https://mail.python.org/pipermail/python-ideas/2014-June/028206.html AIUI, the situation is this: (Executive summary) The restriction was imposed by Guido as a design feature. (Excruciating detail) There are three general possibilities: 1. The syntax recognizes some tokens as significant, but they are not reserved at all. It's arguable that Lisp works this way (some will argue that only parentheses create syntax in Lisp, but I don't think humans perceive it that way). 2. The syntax recognizes keywords (ie, reserved for their syntactic meaning) in a context-dependent fashion. In Python, an example might be treating break and continue this way: inside a loop they are reserved, but you can use them as identifiers outside of a loop. I believe that the async-related keywords were treated this way at first. 3. Syntactically-significant keywords are reserved for their syntactic purpose globally. This is the rule in Python. Guido chose #3 as a matter of designing Python. 
IIRC, the logic was that #1 admits awful unreadable code like "if if then then else else" and nobody needs to write that, and it's hard to write a grammar for it that allows generation of good error messages. The rules for #2, being context-dependent, are unclear to most humans and you don't get a significant benefit for many keywords (consider how useful "break" would be as a variable if you couldn't reference it in a loop!) Besides not having those disadvantages, #3 has the advantage of a simpler grammar and more transparent syntax error reporting, at the minor cost of reserving a very few tokens globally. This does get annoying sometimes (you see a lot of use of 'class_' and 'klass' in some contexts), but I hardly ever notice it. For me at least, shadowing builtins is a much more frequent problem/annoyance. From greg.ewing at canterbury.ac.nz Thu Sep 27 03:02:18 2018 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 27 Sep 2018 19:02:18 +1200 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: <5BAC807A.2070509@canterbury.ac.nz> David Mertz wrote: > the reality is that they really ARE NOT much different > from assertions, in either practice or theory. Seems to me that assertions is exactly what they are. Eiffel just provides some syntactic sugar for dealing with inheritance, etc. You can get the same effect in present-day Python if you're willing to write the appropriate code. -- Greg From hpolak at polak.es Thu Sep 27 03:06:47 2018 From: hpolak at polak.es (Hans Polak) Date: Thu, 27 Sep 2018 09:06:47 +0200 Subject: [Python-ideas] "while:" for the loop In-Reply-To: References: <339c12fe-6d5c-4e59-52c9-da64f97c308b@brice.xyz> <60CB0CA1-8F5D-4BFA-850A-EEFA896A7169@gmail.com> <163566c0-705a-e18f-ceed-5f53e32629f0@brice.xyz> <2727F0D1-F7D4-476C-AFFB-EE5D6CCFEA8F@gmail.com> <20180926154216.Horde.QtSSvZtHavu8sEtgr9eTIl3@webmail.tobiaskohn.ch> Message-ID: On 26/09/18 18:23, Michael Selik wrote: > Guido has repeatedly (haha) rejected this proposal [0]. He has written > that he considered it, but decided that in practical code one almost > always loops over data, and does not want an arbitrary number of > iterations. The range object solves this problem. Years ago, I proposed a do...loop. Guido rejected that. As an aside, here's a pattern you can use for do...loops. def do_loop(): if True: return True return False while do_loop(): pass Cheers, Hans -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Thu Sep 27 04:42:45 2018 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 27 Sep 2018 09:42:45 +0100 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: <5BAC807A.2070509@canterbury.ac.nz> References: <5BAC807A.2070509@canterbury.ac.nz> Message-ID: On Thu, 27 Sep 2018 at 08:03, Greg Ewing wrote: > > David Mertz wrote: > > the reality is that they really ARE NOT much different > > from assertions, in either practice or theory. > > Seems to me that assertions is exactly what they are. Eiffel just > provides some syntactic sugar for dealing with inheritance, etc. > You can get the same effect in present-day Python if you're > willing to write the appropriate code. Assertions, as far as I can see, are the underlying low level *mechanism* that contracts would use. Just like they are the low level mechanism behind unit tests (yeah, it's really exceptions, but close enough). 
But like unit tests, contracts seem to me to be a philosophy and a library / programming technique layered on top of that base. The problem seems to me to be that DbC proponents tend to evangelise the philosophy, and ignore requests to show the implementation (often pointing to Eiffel as an "example" rather than offering something appropriate to the language at hand). IMO, people don't tend to emphasise the "D" in DbC enough - it's a *design* approach, and more useful in that context than as a programming construct. For me, the philosophy seems like a reasonable way of thinking, but pretty old hat (I learned about invariants and pre-/post-conditions and their advantages for design when coding in PL/1 in the 1980's, about the same time as I was learning Jackson Structured Programming). I don't think in terms of contracts as often as I should - but it's unit tests that make me remember to do so. Would a dedicated "contracts" library help? Probably not much, but maybe (if it were lightweight enough) I could get used to the idea. Like David, I find that having contracts inline is the biggest problem with them. I try to keep my function definitions short, and contracts can easily add 100% overhead in terms of lines of code. I'd much prefer contracts to be in a separate file. (Which is basically what unit tests written with DbC as a principle in mind would be). If I have a function definition that's long enough to benefit from contracts, I'd usually think "I should refactor this" rather than "I should add contracts". Paul From marko.ristin at gmail.com Thu Sep 27 05:37:10 2018 From: marko.ristin at gmail.com (Marko Ristin-Kaufmann) Date: Thu, 27 Sep 2018 11:37:10 +0200 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: <5BAC807A.2070509@canterbury.ac.nz> Message-ID: Hi Paul, I only had a contracts library in mind (standardized so that different modules with contracts can interact and that the ecosystem for automic testing could emerge). I was never thinking about the philosophy or design methodology (where you write _all_ the contracts first and then have the implementation fulfill them). I should have clarified that more. I personally also don't think that such a methodology is practical. I really see contracts as verifiable docs that rot less fast than human text and are formally precise / less unambiguous than human text. Other aspects such as deeper tests and hand-braking (e.g., as postconditions which can't be practically implemented in python without exit stack context manager) are also nice to have. I should be done with pathlib contracts by tonight if I manage to find some spare time in the evening. Cheers, Marko Le jeu. 27 sept. 2018 ? 10:43, Paul Moore a ?crit : > On Thu, 27 Sep 2018 at 08:03, Greg Ewing > wrote: > > > > David Mertz wrote: > > > the reality is that they really ARE NOT much different > > > from assertions, in either practice or theory. > > > > Seems to me that assertions is exactly what they are. Eiffel just > > provides some syntactic sugar for dealing with inheritance, etc. > > You can get the same effect in present-day Python if you're > > willing to write the appropriate code. > > Assertions, as far as I can see, are the underlying low level > *mechanism* that contracts would use. Just like they are the low level > mechanism behind unit tests (yeah, it's really exceptions, but close > enough). But like unit tests, contracts seem to me to be a philosophy > and a library / programming technique layered on top of that base. 
The > problem seems to me to be that DbC proponents tend to evangelise the > philosophy, and ignore requests to show the implementation (often > pointing to Eiffel as an "example" rather than offering something > appropriate to the language at hand). IMO, people don't tend to > emphasise the "D" in DbC enough - it's a *design* approach, and more > useful in that context than as a programming construct. > > For me, the philosophy seems like a reasonable way of thinking, but > pretty old hat (I learned about invariants and pre-/post-conditions > and their advantages for design when coding in PL/1 in the 1980's, > about the same time as I was learning Jackson Structured Programming). > I don't think in terms of contracts as often as I should - but it's > unit tests that make me remember to do so. Would a dedicated > "contracts" library help? Probably not much, but maybe (if it were > lightweight enough) I could get used to the idea. > > Like David, I find that having contracts inline is the biggest problem > with them. I try to keep my function definitions short, and contracts > can easily add 100% overhead in terms of lines of code. I'd much > prefer contracts to be in a separate file. (Which is basically what > unit tests written with DbC as a principle in mind would be). If I > have a function definition that's long enough to benefit from > contracts, I'd usually think "I should refactor this" rather than "I > should add contracts". > > Paul > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jamtlu at gmail.com Thu Sep 27 01:57:11 2018 From: jamtlu at gmail.com (James Lu) Date: Thu, 27 Sep 2018 01:57:11 -0400 Subject: [Python-ideas] "old" values in postconditions In-Reply-To: <3C33B6FF-FC19-47D6-AD2A-FC0B17C50A8D@gmail.com> References: <13186373-6FB6-4C8B-A8D8-2C3E028CDC3D@gmail.com> <3C33B6FF-FC19-47D6-AD2A-FC0B17C50A8D@gmail.com> Message-ID: <0061278F-4243-42BD-945D-A93B4A0FC21D@gmail.com> Why couldn?t we record the operations done to a special object and replay them? >>> Actually, I think there is probably no way around a decorator that captures/snapshots the data before the function call with a lambda (or even a separate function). "Old" construct, if we are to parse it somehow from the condition function, would limit us only to shallow copies (and be complex to implement as soon as we are capturing out-of-argument values such as globals etc.). Moreove, what if we don't need shallow copies? I could imagine a dozen of cases where shallow copy is not what the programmer wants: for example, s/he might need to make deep copies, hash or otherwise transform the input data to hold only part of it instead of copying (e.g., so as to allow equality check without a double copy of the data, or capture only the value of certain property transformed in some way). from icontract import snapshot, P, thunk @snapshot(some_identifier=P.self.some_method(P.some_argument.some_attr)) P is an object of our own type, let?s call the type MockP. MockP returns new MockP objects when any operation is done to it. MockP * MockP = MockP. MockP.attr = MockP. MockP objects remember all the operations done to them, and allow the owner of a MockP object to re-apply the same operations ?thunk? 
converts a function or object or class to a MockP object, storing the function or object for when the operation is done. thunk(function)() Of course, you could also thunk objects like so: thunk(3) * P.number. (Though it might be better to keep the 3 after P.number in this case so that P.number's __mul__ would be invoked before 3's __mul__ is invoked.) In most cases, you'd save any operations that can be done on a copy of the data as generated by @snapshot in @postcondition. thunk is for rare scenarios where 1) it's hard to capture the state, for example an object that manages network state (or database connectivity etc.) and whose state can only be read by an external classmethod, or 2) you want to avoid using copy.deepcopy. I'm sure there's some way to override isinstance through a metaclass or the __subclasshook__ dunder. I suppose this mocking method could be a shorthand for when you don't need the full power of a lambda. It's arguably more succinct and readable, though YMMV. I look forward to reading your opinion on this and any ideas you might have. > On Sep 26, 2018, at 3:56 PM, James Lu wrote: > > Hi Marko, > >> Actually, following on #A4, you could also write those as multiple decorators: >> @snapshot(lambda _, some_identifier: some_func(_, some_argument.some_attr)) >> @snapshot(lambda _, other_identifier: other_func(_.self)) > > Yes, though if we're talking syntax, using kwargs would probably be better. > Using "P" instead of "_": (I agree that _ smells of ignored arguments) > > @snapshot(some_identifier=lambda P: ..., some_identifier2=lambda P: ...) > > Kwargs have the advantage that you can extend over multiple lines without repeating @snapshot, though many lines of @capture would probably be more intuitive since each decorator captures one variable. > >> Why uppercase "P" and not lowercase (uppercase implies a constant for me)? > > To me, the capital letters are more prominent and explicit -- easier to see when reading code. It also implies it's a constant for you -- you shouldn't be modifying it, because then you'd be interfering with the function itself. > > Side note: maybe it would be good to have an @icontract.nomutate (probably use a different name, maybe @icontract.readonly) that makes sure a method doesn't mutate its own __dict__ (and maybe the __dict__ of the members of its __dict__). It wouldn't be necessary to put the decorator on every read-only function, just the ones you're worried might mutate. > > Maybe an @icontract.nomutate(param="paramname") that ensures the __dict__ of all members of the param name have the same equality or identity before and after. The semantics would need to be worked out. > >> On Sep 26, 2018, at 8:58 AM, Marko Ristin-Kaufmann wrote: >> >> Hi James, >> >> Actually, following on #A4, you could also write those as multiple decorators: >> @snapshot(lambda _, some_identifier: some_func(_, some_argument.some_attr)) >> @snapshot(lambda _, other_identifier: other_func(_.self)) >> >> Am I correct? >> >> "_" looks a bit hard to read for me (implying ignored arguments). >> >> Why uppercase "P" and not lowercase (uppercase implies a constant for me)? Then "O" for "old" and "P" for parameters in a condition: >> @post(lambda O, P: ...) >> ? >> >> It also has the nice property that it follows both the temporal and the alphabetical order :) >> >>> On Wed, 26 Sep 2018 at 14:30, James Lu wrote: >>> I still prefer snapshot, though capture is a good name too. We could use generator syntax and inspect the argument names. >>> >>> Instead of "a", perhaps use "_". Or maybe use "A.", for arguments.
Some people might prefer ?P? for parameters, since parameters sometimes means the value received while the argument means the value passed. >>> >>> (#A1) >>> >>> from icontract import snapshot, __ >>> @snapshot(some_func(_.some_argument.some_attr) for some_identifier, _ in __) >>> >>> Or (#A2) >>> >>> @snapshot(some_func(some_argument.some_attr) for some_identifier, _, some_argument in __) >>> >>> ? >>> Or (#A3) >>> >>> @snapshot(lambda some_argument,_,some_identifier: some_func(some_argument.some_attr)) >>> >>> Or (#A4) >>> >>> @snapshot(lambda _,some_identifier: some_func(_.some_argument.some_attr)) >>> @snapshot(lambda _,some_identifier, other_identifier: some_func(_.some_argument.some_attr), other_func(_.self)) >>> >>> I like #A4 the most because it?s fairly DRY and avoids the extra punctuation of >>> >>> @capture(lambda a: {"some_identifier": some_func(a.some_argument.some_attr)}) >>> >>> On Sep 26, 2018, at 12:23 AM, Marko Ristin-Kaufmann wrote: >>> >>>> Hi, >>>> >>>> Franklin wrote: >>>>> The name "before" is a confusing name. It's not just something that >>>>> happens before. It's really a pre-`let`, adding names to the scope of >>>>> things after it, but with values taken before the function call. Based >>>>> on that description, other possible names are `prelet`, `letbefore`, >>>>> `predef`, `defpre`, `beforescope`. Better a name that is clearly >>>>> confusing than one that is obvious but misleading. >>>> >>>> James wrote: >>>>> I suggest that instead of ?@before? it?s ?@snapshot? and instead of ?old? it?s ?snapshot?. >>>> >>>> >>>> I like "snapshot", it's a bit clearer than prefixing/postfixing verbs with "pre" which might be misread (e.g., "prelet" has a meaning in Slavic languages and could be subconsciously misread, "predef" implies to me a pre-definition rather than prior-to-definition , "beforescope" is very clear for me, but it might be confusing for others as to what it actually refers to ). What about "@capture" (7 letters for captures versus 8 for snapshot)? I suppose "@let" would be playing with fire if Python with conflicting new keywords since I assume "let" to be one of the candidates. >>>> >>>> Actually, I think there is probably no way around a decorator that captures/snapshots the data before the function call with a lambda (or even a separate function). "Old" construct, if we are to parse it somehow from the condition function, would limit us only to shallow copies (and be complex to implement as soon as we are capturing out-of-argument values such as globals etc.). Moreove, what if we don't need shallow copies? I could imagine a dozen of cases where shallow copy is not what the programmer wants: for example, s/he might need to make deep copies, hash or otherwise transform the input data to hold only part of it instead of copying (e.g., so as to allow equality check without a double copy of the data, or capture only the value of certain property transformed in some way). >>>> >>>> I'd still go with the dictionary to allow for this extra freedom. We could have a convention: "a" denotes to the current arguments, and "b" denotes the captured values. It might make an interesting hint that we put "b" before "a" in the condition. You could also interpret "b" as "before" and "a" as "after", but also "a" as "arguments". 
>>>> >>>> @capture(lambda a: {"some_identifier": some_func(a.some_argument.some_attr)}) >>>> @post(lambda b, a, result: b.some_identifier > result + a.another_argument.another_attr) >>>> def some_func(some_argument: SomeClass, another_argument: AnotherClass) -> SomeResult: >>>> ... >>>> "b" can be omitted if it is not used. Under the hub, all the arguments to the condition would be passed by keywords. >>>> >>>> In case of inheritance, captures would be inherited as well. Hence the library would check at run-time that the returned dictionary with captured values has no identifier that has been already captured, and the linter checks that statically, before running the code. Reading values captured in the parent at the code of the child class might be a bit hard -- but that is case with any inherited methods/properties. In documentation, I'd list all the captures of both ancestor and the current class. >>>> >>>> I'm looking forward to reading your opinion on this and alternative suggestions :) >>>> Marko >>>> >>>>> On Tue, 25 Sep 2018 at 18:12, Franklin? Lee wrote: >>>>> On Sun, Sep 23, 2018 at 2:05 AM Marko Ristin-Kaufmann >>>>> wrote: >>>>> > >>>>> > Hi, >>>>> > >>>>> > (I'd like to fork from a previous thread, "Pre-conditions and post-conditions", since it got long and we started discussing a couple of different things. Let's discuss in this thread the implementation of a library for design-by-contract and how to push it forward to hopefully add it to the standard library one day.) >>>>> > >>>>> > For those unfamiliar with contracts and current state of the discussion in the previous thread, here's a short summary. The discussion started by me inquiring about the possibility to add design-by-contract concepts into the core language. The idea was rejected by the participants mainly because they thought that the merit of the feature does not merit its costs. This is quite debatable and seems to reflect many a discussion about design-by-contract in general. Please see the other thread, "Why is design-by-contract not widely adopted?" if you are interested in that debate. >>>>> > >>>>> > We (a colleague of mine and I) decided to implement a library to bring design-by-contract to Python since we don't believe that the concept will make it into the core language anytime soon and we needed badly a tool to facilitate our work with a growing code base. >>>>> > >>>>> > The library is available at http://github.com/Parquery/icontract. The hope is to polish it so that the wider community could use it and once the quality is high enough, make a proposal to add it to the standard Python libraries. We do need a standard library for contracts, otherwise projects with conflicting contract libraries can not integrate (e.g., the contracts can not be inherited between two different contract libraries). >>>>> > >>>>> > So far, the most important bits have been implemented in icontract: >>>>> > >>>>> > Preconditions, postconditions, class invariants >>>>> > Inheritance of the contracts (including strengthening and weakening of the inherited contracts) >>>>> > Informative violation messages (including information about the values involved in the contract condition) >>>>> > Sphinx extension to include contracts in the automatically generated documentation (sphinx-icontract) >>>>> > Linter to statically check that the arguments of the conditions are correct (pyicontract-lint) >>>>> > >>>>> > We are successfully using it in our code base and have been quite happy about the implementation so far. 
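As a rough illustration of the mechanism described above (only an illustration: the decorator names require and ensure below are made up here and are not the icontract API), pre- and postconditions can be written as decorators that evaluate a condition over a call's arguments:

    import functools
    import inspect

    def require(condition, description=""):
        """Illustrative precondition decorator: evaluate condition over the call's arguments."""
        def decorator(func):
            sig = inspect.signature(func)
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                bound = sig.bind(*args, **kwargs)
                bound.apply_defaults()
                if not condition(**bound.arguments):
                    raise AssertionError("Precondition failed: " + description)
                return func(*args, **kwargs)
            return wrapper
        return decorator

    def ensure(condition, description=""):
        """Illustrative postcondition decorator: evaluate condition over the arguments and the result."""
        def decorator(func):
            sig = inspect.signature(func)
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                bound = sig.bind(*args, **kwargs)
                bound.apply_defaults()
                result = func(*args, **kwargs)
                if not condition(result=result, **bound.arguments):
                    raise AssertionError("Postcondition failed: " + description)
                return result
            return wrapper
        return decorator

    @require(lambda x: x > 0, "x must be positive")
    @ensure(lambda result, x: result == 2 * x, "result must be twice x")
    def double(x):
        return 2 * x
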
>>>>> > >>>>> > There is one bit still missing: accessing "old" values in the postcondition (i.e., shallow copies of the values prior to the execution of the function). This feature is necessary in order to allow us to verify state transitions. >>>>> > >>>>> > For example, consider a new dictionary class that has "get" and "put" methods: >>>>> > >>>>> > from typing import Optional >>>>> > >>>>> > from icontract import post >>>>> > >>>>> > class NovelDict: >>>>> > def length(self)->int: >>>>> > ... >>>>> > >>>>> > def get(self, key: str) -> Optional[str]: >>>>> > ... >>>>> > >>>>> > @post(lambda self, key, value: self.get(key) == value) >>>>> > @post(lambda self, key: old(self.get(key)) is None and old(self.length()) + 1 == self.length(), >>>>> > "length increased with a new key") >>>>> > @post(lambda self, key: old(self.get(key)) is not None and old(self.length()) == self.length(), >>>>> > "length stable with an existing key") >>>>> > def put(self, key: str, value: str) -> None: >>>>> > ... >>>>> > >>>>> > How could we possible implement this "old" function? >>>>> > >>>>> > Here is my suggestion. I'd introduce a decorator "before" that would allow you to store whatever values in a dictionary object "old" (i.e. an object whose properties correspond to the key/value pairs). The "old" is then passed to the condition. Here is it in code: >>>>> > >>>>> > # omitted contracts for brevity >>>>> > class NovelDict: >>>>> > def length(self)->int: >>>>> > ... >>>>> > >>>>> > # omitted contracts for brevity >>>>> > def get(self, key: str) -> Optional[str]: >>>>> > ... >>>>> > >>>>> > @before(lambda self, key: {"length": self.length(), "get": self.get(key)}) >>>>> > @post(lambda self, key, value: self.get(key) == value) >>>>> > @post(lambda self, key, old: old.get is None and old.length + 1 == self.length(), >>>>> > "length increased with a new key") >>>>> > @post(lambda self, key, old: old.get is not None and old.length == self.length(), >>>>> > "length stable with an existing key") >>>>> > def put(self, key: str, value: str) -> None: >>>>> > ... >>>>> > >>>>> > The linter would statically check that all attributes accessed in "old" have to be defined in the decorator "before" so that attribute errors would be caught early. The current implementation of the linter is fast enough to be run at save time so such errors should usually not happen with a properly set IDE. >>>>> > >>>>> > "before" decorator would also have "enabled" property, so that you can turn it off (e.g., if you only want to run a postcondition in testing). The "before" decorators can be stacked so that you can also have a more fine-grained control when each one of them is running (some during test, some during test and in production). The linter would enforce that before's "enabled" is a disjunction of all the "enabled"'s of the corresponding postconditions where the old value appears. >>>>> > >>>>> > Is this a sane approach to "old" values? Any alternative approach you would prefer? What about better naming? Is "before" a confusing name? >>>>> >>>>> The dict can be splatted into the postconditions, so that no special >>>>> name is required. This would require either that the lambdas handle >>>>> **kws, or that their caller inspect them to see what names they take. >>>>> Perhaps add a function to functools which only passes kwargs that fit. >>>>> Then the precondition mechanism can pass `self`, `key`, and `value` as >>>>> kwargs instead of args. 
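Such a helper -- one that passes a callable only the keyword arguments its signature accepts -- is easy to sketch with inspect.signature; the name below is invented for the example and no such function exists in functools today:

    import inspect

    def call_with_matching_kwargs(func, **kwargs):
        """Call func with only those keyword arguments its signature accepts (illustrative sketch)."""
        sig = inspect.signature(func)
        params = sig.parameters.values()
        if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params):
            return func(**kwargs)  # func already takes **kwargs, so pass everything through
        accepted = {name: value for name, value in kwargs.items() if name in sig.parameters}
        return func(**accepted)

    # A condition that only cares about `key` and `value`; the caller offers every name it knows.
    condition = lambda key, value: value is not None
    ok = call_with_matching_kwargs(condition, self=object(), key="a", value="b")  # True
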
>>>>> >>>>> For functions that have *args and **kwargs, it may be necessary to >>>>> pass them to the conditions as args and kwargs instead. >>>>> >>>>> The name "before" is a confusing name. It's not just something that >>>>> happens before. It's really a pre-`let`, adding names to the scope of >>>>> things after it, but with values taken before the function call. Based >>>>> on that description, other possible names are `prelet`, `letbefore`, >>>>> `predef`, `defpre`, `beforescope`. Better a name that is clearly >>>>> confusing than one that is obvious but misleading. >>>>> >>>>> By the way, should the first postcondition be `self.get(key) is >>>>> value`, checking for identity rather than equality? >>>> _______________________________________________ >>>> Python-ideas mailing list >>>> Python-ideas at python.org >>>> https://mail.python.org/mailman/listinfo/python-ideas >>>> Code of Conduct: http://python.org/psf/codeofconduct/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From kenlhilton at gmail.com Thu Sep 27 06:48:19 2018 From: kenlhilton at gmail.com (Ken Hilton) Date: Thu, 27 Sep 2018 18:48:19 +0800 Subject: [Python-ideas] Add .= as a method return value assignment operator Message-ID: Hi Jasper, This seems like a great idea! It looks so much cleaner, too. Would there be a dunder method handling this? Or since it's explicitly just a syntax for "obj = obj.method()" is that not necessary? My only qualm is that this might get PHP users confused; that's really not an issue, though, since Python is not PHP. Anyway, I fully support this idea. Sincerely, Ken Hilton; -------------- next part -------------- An HTML attachment was scrubbed... URL: From cspealma at redhat.com Thu Sep 27 09:13:14 2018 From: cspealma at redhat.com (Calvin Spealman) Date: Thu, 27 Sep 2018 09:13:14 -0400 Subject: [Python-ideas] Add .= as a method return value assignment operator In-Reply-To: References: Message-ID: Absolutely -1 on this. Consider the following example: def encode(s, *args): """Force UTF 8 no matter what!""" return s.encode('utf8') text = "Hello, there!" text .= encode('latin1') Do you see how this creates an ambiguous situation? Implicit attribute lookup like this is really confusing. It reminds me of the old `with` construct in javascript that is basically forbidden now, because it created the same situation. On Thu, Sep 27, 2018 at 6:49 AM Ken Hilton wrote: > Hi Jasper, > This seems like a great idea! It looks so much cleaner, too. > > Would there be a dunder method handling this? Or since it's explicitly > just a syntax for "obj = obj.method()" is that not necessary? > My only qualm is that this might get PHP users confused; that's really not > an issue, though, since Python is not PHP. > > Anyway, I fully support this idea. > > Sincerely, > Ken Hilton; > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stelios.tymvios at icloud.com Thu Sep 27 10:41:27 2018 From: stelios.tymvios at icloud.com (Stelios Tymvios) Date: Thu, 27 Sep 2018 17:41:27 +0300 Subject: [Python-ideas] Add .= as a method return value assignment operator In-Reply-To: References: Message-ID: <0B396BC1-1A1B-4374-9A36-2C94E8A8F593@icloud.com> I believe Calvin is right. .= Assumes that every function returns a value or perhaps implies purity. 
In my opinion, it goes against the notion that explicit is better than implicit. Stelios Tymvios > On 27 Sep 2018, at 4:13 PM, Calvin Spealman wrote: > > Absolutely -1 on this. Consider the following example: > > def encode(s, *args): > """Force UTF 8 no matter what!""" > return s.encode('utf8') > > text = "Hello, there!" > text .= encode('latin1') > > Do you see how this creates an ambiguous situation? Implicit attribute lookup like this is really confusing. It reminds me of the old `with` construct in javascript that is basically forbidden now, because it created the same situation. > >> On Thu, Sep 27, 2018 at 6:49 AM Ken Hilton wrote: >> Hi Jasper, >> This seems like a great idea! It looks so much cleaner, too. >> >> Would there be a dunder method handling this? Or since it's explicitly just a syntax for "obj = obj.method()" is that not necessary? >> My only qualm is that this might get PHP users confused; that's really not an issue, though, since Python is not PHP. >> >> Anyway, I fully support this idea. >> >> Sincerely, >> Ken Hilton; >> _______________________________________________ >> Python-ideas mailing list >> Python-ideas at python.org >> https://mail.python.org/mailman/listinfo/python-ideas >> Code of Conduct: http://python.org/psf/codeofconduct/ > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From python at mrabarnett.plus.com Thu Sep 27 13:04:55 2018 From: python at mrabarnett.plus.com (MRAB) Date: Thu, 27 Sep 2018 18:04:55 +0100 Subject: [Python-ideas] Add .= as a method return value assignment operator In-Reply-To: References: Message-ID: On 2018-09-27 14:13, Calvin Spealman wrote: > Absolutely -1 on this. Consider the following example: > > def encode(s, *args): > ??? """Force UTF 8 no matter what!""" > ??? return s.encode('utf8') > > text = "Hello, there!" > text .= encode('latin1') > > Do you see how this creates an ambiguous situation? Implicit attribute > lookup like this is really confusing. It reminds me of the old `with` > construct in javascript that is basically forbidden now, because it > created the same situation. > I don't believe it's ambiguous. The intention is that: text .= encode('latin1') would be equivalent to: text = text.encode('latin1') However, I'm also -1 on it. [snip] From taavieomae at gmail.com Thu Sep 27 15:44:38 2018 From: taavieomae at gmail.com (=?UTF-8?B?VGFhdmkgRW9tw6Rl?=) Date: Thu, 27 Sep 2018 22:44:38 +0300 Subject: [Python-ideas] Add .= as a method return value assignment operator In-Reply-To: References: Message-ID: > Do you see how this creates an ambiguous situation? Name shadowing could create such ambiguous situations anyways even without the new operator? > In my opinion, it goes against the notion that explicit is better than implicit. By that logic all operation-assign operations violate that rule (e.g. "-=", "+=" etc.). I honestly don't feel how this feels any less explicit that what already exists in the language. > Assumes that every function returns a value or perhaps implies purity. So does `obj = obj.func(args)` or I've misunderstood you? I like the operator, so +1, though I feel it could be even more useful if it allowed accessing class members, `obj = obj.id` being equivalent to `obj .= id`, what do you think? 
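For comparison, the closest spellings available today for rebinding a name to the result of one of its own methods or attributes go through the operator module; this is only a point of reference, not an equivalent of the proposed syntax:

    from operator import attrgetter, methodcaller

    text = "Hello, there!"
    text = methodcaller("encode", "latin1")(text)  # same effect as text = text.encode("latin1")

    value = 3 + 4j
    value = attrgetter("real")(value)              # same effect as value = value.real
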
On Thu, Sep 27, 2018 at 8:05 PM MRAB wrote: > On 2018-09-27 14:13, Calvin Spealman wrote: > > Absolutely -1 on this. Consider the following example: > > > > def encode(s, *args): > > """Force UTF 8 no matter what!""" > > return s.encode('utf8') > > > > text = "Hello, there!" > > text .= encode('latin1') > > > > Do you see how this creates an ambiguous situation? Implicit attribute > > lookup like this is really confusing. It reminds me of the old `with` > > construct in javascript that is basically forbidden now, because it > > created the same situation. > > > I don't believe it's ambiguous. The intention is that: > > text .= encode('latin1') > > would be equivalent to: > > text = text.encode('latin1') > > However, I'm also -1 on it. > > [snip] > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosuav at gmail.com Thu Sep 27 16:13:59 2018 From: rosuav at gmail.com (Chris Angelico) Date: Fri, 28 Sep 2018 06:13:59 +1000 Subject: [Python-ideas] Add .= as a method return value assignment operator In-Reply-To: References: Message-ID: On Wed, Sep 26, 2018 at 9:14 PM Jasper Rebane wrote: > When using Python, I find myself often using assignment operators, like 'a += 1' instead of 'a = a + 1', which saves me a lot of time and hassle > > Unfortunately, this doesn't apply to methods, thus we have to write code like this: > text = "foo" > text = text.replace("foo","bar") > # "bar" > > I propose that we should add '.=' as a method return value assignment operator so we could write the code like this instead: > text = "foo" > text .= replace("foo","bar") > # "bar" > This looks cleaner, saves time and makes debugging easier > All the other augmented operators are of the form: target #= value with some sort of valid assignment target, and any value at all on the right hand side. You can take the value and put it in a variable, you can replace it with a function returning that value, etc, etc, etc, as it is simply a value. There's no difference between "x += 123" and "y = 123; x += y". (Putting it another way: "x *= y + z" is not equivalent to "x = x * y + z", but to "x = x * (y + z)".) With your proposed ".=" operator, it's quite different: the RHS is textually concatenated with the LHS. This creates odd edge cases. For example: items = ["foo", "bar", "quux"] items[randrange(3)] .= upper() Is this equivalent to: items[randrange(3)] = items[randrange(3)].upper() ? That would call randrange twice, potentially grabbing one element and dropping it into another slot. If it isn't equivalent to that, how is it defined? I'm sure something can be figured out that will satisfy the interpreter, but will it work for humans? ChrisA From jamtlu at gmail.com Thu Sep 27 16:48:21 2018 From: jamtlu at gmail.com (James Lu) Date: Thu, 27 Sep 2018 16:48:21 -0400 Subject: [Python-ideas] Add .= as a method return value assignment operator In-Reply-To: <69C7940D-E491-4A88-A628-D0119DF2F169@gmail.com> References: <69C7940D-E491-4A88-A628-D0119DF2F169@gmail.com> Message-ID: > items = ["foo", "bar", "quux"] > items[randrange(3)] .= upper() > > Is this equivalent to: > > items[randrange(3)] = items[randrange(3)].upper() > > ? That would call randrange twice, potentially grabbing one element > and dropping it into another slot. If it isn't equivalent to that, how > is it defined? 
It would not call randrange twice. Consider existing Python behavior: def foo(): print("foo") return 0 l = [7] l[foo()] += 1 # output: "foo", but only once print(l) # l == [8] Sent from my iPhone > On Sep 27, 2018, at 4:13 PM, Chris Angelico wrote: > > That would call randrange twice, potentially grabbing one element > and dropping it into another slot. If it isn't equivalent to that, how > is it defined? -------------- next part -------------- An HTML attachment was scrubbed... URL: From kirillbalunov at gmail.com Thu Sep 27 17:30:59 2018 From: kirillbalunov at gmail.com (Kirill Balunov) Date: Fri, 28 Sep 2018 00:30:59 +0300 Subject: [Python-ideas] Add .= as a method return value assignment operator In-Reply-To: References: Message-ID: As I see it, you are mixing very different things. Augmented operators in Python work on objects, generally trying to mutate them in-place. So usually after these operations you have the same object (with the same type, with the same name and etc.) as before these operations. Of course there are exceptions, for example all immutable types or some promotions between numbers. In your examples some cases imply that you are working on names, others that you are working on objects. And as for me this ambiguity is not solvable. With kind regards, -gdg ??, 26 ????. 2018 ?. ? 14:14, Jasper Rebane : > Hi, > > When using Python, I find myself often using assignment operators, like 'a > += 1' instead of 'a = a + 1', which saves me a lot of time and hassle > > Unfortunately, this doesn't apply to methods, thus we have to write code > like this: > text = "foo" > text = text.replace("foo","bar") > # "bar" > > I propose that we should add '.=' as a method return value assignment > operator so we could write the code like this instead: > text = "foo" > text .= replace("foo","bar") > # "bar" > This looks cleaner, saves time and makes debugging easier > > Here are a few more examples: > text = " foo " > text .= strip() > # "foo" > > text = "foo bar" > text .= split(" ") > # ['foo', 'bar'] > > text = b'foo' > text .= decode("UTF-8") > # "foo" > > foo = {1,2,3} > bar = {2,3,4} > foo .= difference(bar) > # {1} > > > Rebane > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jamtlu at gmail.com Thu Sep 27 17:59:08 2018 From: jamtlu at gmail.com (James Lu) Date: Thu, 27 Sep 2018 17:59:08 -0400 Subject: [Python-ideas] Add .= as a method return value assignment operator In-Reply-To: References: Message-ID: > As I see it, you are mixing very different things. Augmented operators in Python work on objects, generally trying to mutate them in-place. So usually after these operations you have the same object (with the same type, with the same name and etc.) as before these operations. Of course there are exceptions, for example all immutable types or some promotions between numbers. Yep. >>> a = (5, 2) >>> a += (3, ) >>> a (5, 2, 3) > In your examples some cases imply that you are working on names, others that you are working on objects. And as for me this ambiguity is not solvable. example please? Show an ambiguous case. To me, it's always working on names. On Thu, Sep 27, 2018 at 5:32 PM Kirill Balunov wrote: > As I see it, you are mixing very different things. 
Augmented operators in > Python work on objects, generally trying to mutate them in-place. So > usually after these operations you have the same object (with the same > type, with the same name and etc.) as before these operations. Of course > there are exceptions, for example all immutable types or some promotions > between numbers. > > In your examples some cases imply that you are working on names, others > that you are working on objects. And as for me this ambiguity is not > solvable. > > With kind regards, > -gdg > > ??, 26 ????. 2018 ?. ? 14:14, Jasper Rebane : > >> Hi, >> >> When using Python, I find myself often using assignment operators, like >> 'a += 1' instead of 'a = a + 1', which saves me a lot of time and hassle >> >> Unfortunately, this doesn't apply to methods, thus we have to write code >> like this: >> text = "foo" >> text = text.replace("foo","bar") >> # "bar" >> >> I propose that we should add '.=' as a method return value assignment >> operator so we could write the code like this instead: >> text = "foo" >> text .= replace("foo","bar") >> # "bar" >> This looks cleaner, saves time and makes debugging easier >> >> Here are a few more examples: >> text = " foo " >> text .= strip() >> # "foo" >> >> text = "foo bar" >> text .= split(" ") >> # ['foo', 'bar'] >> >> text = b'foo' >> text .= decode("UTF-8") >> # "foo" >> >> foo = {1,2,3} >> bar = {2,3,4} >> foo .= difference(bar) >> # {1} >> >> >> Rebane >> _______________________________________________ >> Python-ideas mailing list >> Python-ideas at python.org >> https://mail.python.org/mailman/listinfo/python-ideas >> Code of Conduct: http://python.org/psf/codeofconduct/ >> > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at pearwood.info Thu Sep 27 20:03:32 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Fri, 28 Sep 2018 10:03:32 +1000 Subject: [Python-ideas] Add .= as a method return value assignment operator In-Reply-To: References: Message-ID: <20180928000332.GH19437@ando.pearwood.info> On Thu, Sep 27, 2018 at 10:44:38PM +0300, Taavi Eom?e wrote: > > Do you see how this creates an ambiguous situation? > > Name shadowing could create such ambiguous situations anyways even without > the new operator? How? Can you give an example? Normally, there is no way for a bare name to shadow a dotted name, or vice versa, since the dotted name always needs to be fully qualified. In the example below, we can talk about "text.encode" which is always the method, or "encode", which is always the function and never the method. I agree that this adds ambiguity where we can't be sure whether text .= encode('utf-8') is referring to the function or the method. We can infer that it *ought* to be the method, and maybe even add a rule to force that, but this works only for the simple cases. It is risky and error prone in the hard cases. Think about a more complex assignment: text .= encode(spam) + str(eggs) This could mean any of: text.encode(spam) + text.str(eggs) text.encode(spam) + str(eggs) encode(spam) + text.str(eggs) encode(spam) + str(eggs) In a statically typed language, the compiler could work out what is needed at compile-time (it knows that text.str doesn't exist) and either resolve any such ambiguities or refuse to compile. 
But in a dynamic language like Python, you can't tell whether text.str exists or not until you try it. So this proposal suffers from the same problems as Pascal-style "with" blocks, which are a FAQ: https://docs.python.org/3/faq/design.html#why-doesn-t-python-have-a-with-statement-for-attribute-assignments [...] > > On 2018-09-27 14:13, Calvin Spealman wrote: > > > Absolutely -1 on this. Consider the following example: > > > > > > def encode(s, *args): > > > """Force UTF 8 no matter what!""" > > > return s.encode('utf8') > > > > > > text = "Hello, there!" > > > text .= encode('latin1') -- Steve From marko.ristin at gmail.com Thu Sep 27 21:24:19 2018 From: marko.ristin at gmail.com (Marko Ristin-Kaufmann) Date: Fri, 28 Sep 2018 03:24:19 +0200 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: <5BAC807A.2070509@canterbury.ac.nz> Message-ID: Hi, I annotated pathlib with contracts: https://github.com/mristin/icontract-pathlib-poc. I zipped the HTML docs into https://github.com/mristin/icontract-pathlib-poc/blob/master/contracts-pathlib-poc.zip, you can just unpack and view the index.html. One thing I did observe was that there were contracts written in text all over the documentation -- I tried to formulate most of them in code. Since I'm not the implementer nor very familiar with the module, please consider that many of these contracts can be definitely made more beautiful. There were some limitations to icontract-sphinx extension and icontracts which I noted at the top of the document. (Note also that I had to rename the file to avoid import conflicts.) Some of the contracts might seem trivial -- but mind that you, as a writer, want to tell the user what s/he is expected to fulfill before calling the function. For example, if you say: rmdir() Remove this directory. The directory must be empty. Requires: - not list(self.iterdir()) (??? There must be a way to check this more optimally) - self.is_dir() self.is_dir() contract might seem trivial -- but it's actually not. You actually want to convey the message: dear user, if you are iterating through a list of paths, use this function to decide if you should call rmdir() or unlink(). Analogously with the first contract: dear user, please check if the directory is empty before calling rmdir() and this is what you need to call to actually check that. I also finally assembled the puzzle. Most of you folks are all older and were exposed to DbC in the 80ies championed by DbC zealots who advertised it as *the *tool for software development. You were repulsed by their fanaticism -- the zealots also pushed for all the contracts to be defined, and none less. Either you have 100% DbC or no sane software development at all. I, on the other side, were introduced to DbC in university much later -- Betrand held most of our software engineering lectures (including introduction to programming which was in Eiffel ;)). I started going to uni in 2004; by that time there was no fanaticism about DbC around -- not even by Bertrand himself. We were taught to use it just as yet another tool in our toolbox along unit testing and formal proofs. Not as a substitute for unit testing! Just as yet another instrument for correctness. There was no 100% DbC -- and we did quite some realistic school projects such as a backend for a flight-booking website in Eiffel (with contracts ;)). I remember that we got minus points if you wrote something in the documentation that could have been easily formalized. 
But nobody pushed for all the contracts; everybody was pragmatic. Nobody didn't even think about proposing to abolish unit testing and staff all the tests in the contracts to be smoke-tested. While you read my proposals in the light of these 80ies style DbC proponents, I think always only about a much more minor thing: a simple tool to make *some part* of the documentation verifiable. One lesson that I learned from all these courses was to what degree our understandings (especially among non-native speakers) differ. Even simple statements such as "x must be positive" can mean x > 0 and x >= 0 to different people. For me it became obvious that "x > 0" is clearer than "x must be positive" -- and this is that "obvious" which I always refer in my posts on this list. If a statement can not be formalized easily and introduces confusion, that's a different pair of shoes. But if it can -- why shouldn't you formalize it? I still can't wrap my head around the idea that it's not obvious that you should take the formal version over the informal version *if both are equally readable*, but one is far less unambiguous. It feels natural to me that if you want to kick out one, kick out the more ambiguous informal one. What's the point of all the maths if the informal languages just do as well? And that's why I said that the libraries on pypi meant to be used by multiple people and which already have type annotations would obviously benefit from contracts -- while you were imagining that all of these libraries need to be DbC'ed 100%, I was imagining something much more humble. Thus the misunderstanding. After annotating pathlib, I find that it actually needs contracts more thain if it had type annotations. For example: stat() Return the result of the stat() system call on this path, like os.stat() does. Ensures: - result is not None ? self.exists() - result is not None ? os.stat(str(self)).__dict__ == result.__dict__ (??? This is probably not what it was meant with ?like os.stat() does??) But what does it mean "like os.stat() does"? I wrote equality over __dict__'s in the contract. That was my idea of what the implementer was trying to tell me. But is that the operator that should be applied? Sure, the contract merits a description. But without it, how am I supposed to know what "like" means? Similarly with rmdir() -- "the directory must be empty" -- but how exactly am I supposed to check that? Anyhow, please have a look at the contracts and let me know what you think. Please consider it an illustration. Try to question whether the contracts I wrote are so obvious to everybody even if they are obvious to you and keep in mind that the user does not look into the implementation. And please try to abstract away the aesthetics: neither icontract library that I wrote nor the sphinx extension are of sufficient quality. We use them for our company code base, but they still need a lot of polishing. So please try to focus only on the content. We are still talking about contracts in general, not about the concrete contract implementation. Cheers, Marko On Thu, 27 Sep 2018 at 11:37, Marko Ristin-Kaufmann wrote: > Hi Paul, > I only had a contracts library in mind (standardized so that different > modules with contracts can interact and that the ecosystem for automic > testing could emerge). I was never thinking about the philosophy or design > methodology (where you write _all_ the contracts first and then have the > implementation fulfill them). I should have clarified that more. 
I > personally also don't think that such a methodology is practical. > > I really see contracts as verifiable docs that rot less fast than human > text and are formally precise / less unambiguous than human text. Other > aspects such as deeper tests and hand-braking (e.g., as postconditions > which can't be practically implemented in python without exit stack context > manager) are also nice to have. > > I should be done with pathlib contracts by tonight if I manage to find > some spare time in the evening. > > Cheers, > Marko > > Le jeu. 27 sept. 2018 ? 10:43, Paul Moore a ?crit : > >> On Thu, 27 Sep 2018 at 08:03, Greg Ewing >> wrote: >> > >> > David Mertz wrote: >> > > the reality is that they really ARE NOT much different >> > > from assertions, in either practice or theory. >> > >> > Seems to me that assertions is exactly what they are. Eiffel just >> > provides some syntactic sugar for dealing with inheritance, etc. >> > You can get the same effect in present-day Python if you're >> > willing to write the appropriate code. >> >> Assertions, as far as I can see, are the underlying low level >> *mechanism* that contracts would use. Just like they are the low level >> mechanism behind unit tests (yeah, it's really exceptions, but close >> enough). But like unit tests, contracts seem to me to be a philosophy >> and a library / programming technique layered on top of that base. The >> problem seems to me to be that DbC proponents tend to evangelise the >> philosophy, and ignore requests to show the implementation (often >> pointing to Eiffel as an "example" rather than offering something >> appropriate to the language at hand). IMO, people don't tend to >> emphasise the "D" in DbC enough - it's a *design* approach, and more >> useful in that context than as a programming construct. >> >> For me, the philosophy seems like a reasonable way of thinking, but >> pretty old hat (I learned about invariants and pre-/post-conditions >> and their advantages for design when coding in PL/1 in the 1980's, >> about the same time as I was learning Jackson Structured Programming). >> I don't think in terms of contracts as often as I should - but it's >> unit tests that make me remember to do so. Would a dedicated >> "contracts" library help? Probably not much, but maybe (if it were >> lightweight enough) I could get used to the idea. >> >> Like David, I find that having contracts inline is the biggest problem >> with them. I try to keep my function definitions short, and contracts >> can easily add 100% overhead in terms of lines of code. I'd much >> prefer contracts to be in a separate file. (Which is basically what >> unit tests written with DbC as a principle in mind would be). If I >> have a function definition that's long enough to benefit from >> contracts, I'd usually think "I should refactor this" rather than "I >> should add contracts". >> >> Paul >> _______________________________________________ >> Python-ideas mailing list >> Python-ideas at python.org >> https://mail.python.org/mailman/listinfo/python-ideas >> Code of Conduct: http://python.org/psf/codeofconduct/ >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jamtlu at gmail.com Thu Sep 27 21:47:42 2018 From: jamtlu at gmail.com (James Lu) Date: Thu, 27 Sep 2018 21:47:42 -0400 Subject: [Python-ideas] Add .= as a method return value assignment operator In-Reply-To: <20180928000332.GH19437@ando.pearwood.info> References: <20180928000332.GH19437@ando.pearwood.info> Message-ID: <521E1FFC-8AA1-4DDD-B62E-AB20199224C6@gmail.com> > I agree that this adds ambiguity where we can't be sure whether text .= > encode('utf-8') is referring to the function or the method. We can infer > that it *ought* to be the method, and maybe even add a rule to force > that, but this works only for the simple cases. It is risky and error > prone in the hard cases. I think it would be a good idea to treat all global name lookups as lookups on the object on the LHS when you?re on the RHS of .=. This behavior prevents the worst mistakes and makes it very clear what is happening. Sent from my iPhone > On Sep 27, 2018, at 8:03 PM, Steven D'Aprano wrote: > > On Thu, Sep 27, 2018 at 10:44:38PM +0300, Taavi Eom?e wrote: >>> Do you see how this creates an ambiguous situation? >> >> Name shadowing could create such ambiguous situations anyways even without >> the new operator? > > How? Can you give an example? > > Normally, there is no way for a bare name to shadow a dotted name, or > vice versa, since the dotted name always needs to be fully qualified. In > the example below, we can talk about "text.encode" which is always the > method, or "encode", which is always the function and never the method. > > I agree that this adds ambiguity where we can't be sure whether text .= > encode('utf-8') is referring to the function or the method. We can infer > that it *ought* to be the method, and maybe even add a rule to force > that, but this works only for the simple cases. It is risky and error > prone in the hard cases. > > Think about a more complex assignment: > > text .= encode(spam) + str(eggs) > > This could mean any of: > > text.encode(spam) + text.str(eggs) > text.encode(spam) + str(eggs) > encode(spam) + text.str(eggs) > encode(spam) + str(eggs) > > In a statically typed language, the compiler could work out what is > needed at compile-time (it knows that text.str doesn't exist) and either > resolve any such ambiguities or refuse to compile. But in a dynamic > language like Python, you can't tell whether text.str exists or not > until you try it. > > So this proposal suffers from the same problems as Pascal-style "with" > blocks, which are a FAQ: > > https://docs.python.org/3/faq/design.html#why-doesn-t-python-have-a-with-statement-for-attribute-assignments > > [...] >>>> On 2018-09-27 14:13, Calvin Spealman wrote: >>>> Absolutely -1 on this. Consider the following example: >>>> >>>> def encode(s, *args): >>>> """Force UTF 8 no matter what!""" >>>> return s.encode('utf8') >>>> >>>> text = "Hello, there!" >>>> text .= encode('latin1') > > > > -- > Steve > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ From mertz at gnosis.cx Thu Sep 27 22:44:45 2018 From: mertz at gnosis.cx (David Mertz) Date: Thu, 27 Sep 2018 22:44:45 -0400 Subject: [Python-ideas] Add .= as a method return value assignment operator In-Reply-To: References: Message-ID: This seems like a fairly terrible idea. -100 from me. Python simply does not tends to chain methods for mutation of objects. A few libraries?e.g. 
Pandas?do this, but in those cases actual chaining is more idiomatic. This is very different from '+=' or '*=' or even '|=' where you pretty much expect to wind up with the same type of thing on the binding or mutation. Of course, you can define operators as you like, so it's not hard and fast, but it's usual. I.e. '+=' is unlikely to leave the current number domain, other than according to the regular numeric hierarchy. text = "foo" > text .= replace("foo","bar") > This one example is okay. But most are awful: text .= count('foo') text .= split(';') text .= find('bar') In all of those you jump around among various data types bound to 'text' -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at pearwood.info Thu Sep 27 22:49:11 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Fri, 28 Sep 2018 12:49:11 +1000 Subject: [Python-ideas] Add .= as a method return value assignment operator In-Reply-To: <521E1FFC-8AA1-4DDD-B62E-AB20199224C6@gmail.com> References: <20180928000332.GH19437@ando.pearwood.info> <521E1FFC-8AA1-4DDD-B62E-AB20199224C6@gmail.com> Message-ID: <20180928024911.GI19437@ando.pearwood.info> On Thu, Sep 27, 2018 at 09:47:42PM -0400, James Lu wrote: > > I agree that this adds ambiguity where we can't be sure whether text .= > > encode('utf-8') is referring to the function or the method. We can infer > > that it *ought* to be the method, and maybe even add a rule to force > > that, but this works only for the simple cases. It is risky and error > > prone in the hard cases. > > I think it would be a good idea to treat all global name lookups as > lookups on the object on the LHS when you?re on the RHS of .=. We would be innundated with questions and bug reports complaining that # earlier... argument = something() # later on myobj = MyClass() myobj .= method(argument) fails with AttributeError: 'MyClass' object has no attribute 'argument' and people would be right to complain. That rule would mean you couldn't use builtins or module-level names on the right hand side of the .= operator. I call that a huge misfeature. > This behavior prevents the worst mistakes and makes it very clear what > is happening. Far from preventing mistakes, it would cause them, whenever somebody intended to use a builtin or module-level object, and accidently got a completely different attribute of the target object. -- Steve From brenbarn at brenbarn.net Thu Sep 27 22:50:06 2018 From: brenbarn at brenbarn.net (Brendan Barnwell) Date: Thu, 27 Sep 2018 19:50:06 -0700 Subject: [Python-ideas] Add .= as a method return value assignment operator In-Reply-To: References: Message-ID: <5BAD96DE.2050709@brenbarn.net> On 2018-09-27 03:48, Ken Hilton wrote: > Would there be a dunder method handling this? Or since it's explicitly > just a syntax for "obj = obj.method()" is that not necessary? > My only qualm is that this might get PHP users confused; that's really > not an issue, though, since Python is not PHP. I'm opposed to this idea, precisely because there's no way to make a dunder for it. Exiting augmented assignment operators work like this: "LHS op= RHS" means "LHS = LHS.__iop__(RHS)" (where iop is iadd, isub, etc.). This can't work for a dot operator. If "LHS .= RHS" is supposed to mean "LHS = LHS.RHS", then what is the argument that is going to be passed to the dunder method? The proposed .= syntax is using the RHS as the *name* of the attribute to be looked up, but for all existing augmented assignments, the RHS is the *value* to be operated on. 
As others pointed out elsewhere in the thread, the problem is compounded if there are multiple terms in the RHS. What does "this .= that + other" mean? What would be the argument passed to the dunder function? Is the name of the relevant attribute supposed to be taken from "that" or "other" or from the result of evaluating "that + other"? I like the consistency of the existing augmented assignment operations; they are just all syntactic sugar for the same pattern: LHS = LHS.__iop__(RHS) . I'm opposed to the creation of things that look like augmented assignment but don't follow the same pattern, which is what this proposal does. -- Brendan Barnwell "Do not follow where the path may lead. Go, instead, where there is no path, and leave a trail." --author unknown From greg.ewing at canterbury.ac.nz Fri Sep 28 01:34:58 2018 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 28 Sep 2018 17:34:58 +1200 Subject: [Python-ideas] Add .= as a method return value assignment operator In-Reply-To: <20180928000332.GH19437@ando.pearwood.info> References: <20180928000332.GH19437@ando.pearwood.info> Message-ID: <5BADBD82.90303@canterbury.ac.nz> Steven D'Aprano wrote: > Think about a more complex assignment: > > text .= encode(spam) + str(eggs) I think the only sane thing would be to disallow that, and require the RHS to have the form of a function call, which is always interpreted as a method of the LHS. -- Greg From contact at brice.xyz Fri Sep 28 03:07:23 2018 From: contact at brice.xyz (Brice Parent) Date: Fri, 28 Sep 2018 09:07:23 +0200 Subject: [Python-ideas] Add .= as a method return value assignment operator In-Reply-To: References: Message-ID: Le 27/09/2018 ? 12:48, Ken Hilton a ?crit?: > Hi Jasper, > This seems like a great idea! It looks so much cleaner, too. > > Would there be a dunder method handling this? Or since it's explicitly > just a syntax for "obj = obj.method()" is that not necessary? > My only qualm is that this might get PHP users confused; that's really > not an issue, though, since Python is not PHP. > > Anyway, I fully support this idea. > What would the following evaluate to? a .= b + c(d) 1: a = a.b + a.c(a.d)? # everything is prepended an "a." it means we dn't have access to any external elements, making the functionality only useful in a few cases 2: a = a.b + a.c(d)? # every first level element (if that means something) is prepended an "a." We still lose some of the ability to access anything outside of `a`, but a bit less than in #1. The effort to understand the line as grown a bit, though. 3: a = a.(b + c(d))? # everything is evaluated, and an "a." is prepended to that result (the same way `a *= 2 + 3` is equivalent to `a *= 5`) I believe in most cases, this wouldn't mean anything to evaluate `b + c(d)` on their own, and expect a return that can be used as an attribute of `a`. 4: a = a.b + c(d)? # "a." is prepended to the first element after the `=` It is probably quite easy to read and understand, but it removes the transitivity of the operators we have on the right, and is a bit limiting. 5: SyntaxError: Can only use the [whatever the name] augmented operator with a single expression Why not, it's a bit limiting, but is clear enough to me. 
Maybe, a simpler thing to do for this problem would be to make something like this: a = .b(5) + c(.d) + 3 being the equivalent of a = a.b(5) + c(a.d) + 3 I don't see any ambiguity anymore, it shortens the code a lot, and I guess it wouldn't be hard for the compiler to recompose the line as a first parsing step, and create the same AST with both syntaxes. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rebane2001 at gmail.com Fri Sep 28 03:36:23 2018 From: rebane2001 at gmail.com (Jasper Rebane) Date: Fri, 28 Sep 2018 10:36:23 +0300 Subject: [Python-ideas] Add .= as a method return value assignment operator In-Reply-To: References: Message-ID: I had the number 4 in mind Though I think your idea is way better, as it's more flexible and less confusing On Fri, Sep 28, 2018, 10:33 Brice Parent wrote: > > Le 27/09/2018 ? 12:48, Ken Hilton a ?crit : > > Hi Jasper, > This seems like a great idea! It looks so much cleaner, too. > > Would there be a dunder method handling this? Or since it's explicitly > just a syntax for "obj = obj.method()" is that not necessary? > My only qualm is that this might get PHP users confused; that's really not > an issue, though, since Python is not PHP. > > Anyway, I fully support this idea. > > What would the following evaluate to? > a .= b + c(d) > > 1: a = a.b + a.c(a.d) # everything is prepended an "a." > it means we dn't have access to any external elements, making the > functionality only useful in a few cases > > 2: a = a.b + a.c(d) # every first level element (if that means something) > is prepended an "a." > We still lose some of the ability to access anything outside of `a`, but a > bit less than in #1. The effort to understand the line as grown a bit, > though. > > 3: a = a.(b + c(d)) # everything is evaluated, and an "a." is prepended > to that result > (the same way `a *= 2 + 3` is equivalent to `a *= 5`) > I believe in most cases, this wouldn't mean anything to evaluate `b + > c(d)` on their own, and expect a return that can be used as an attribute of > `a`. > > 4: a = a.b + c(d) # "a." is prepended to the first element after the `=` > It is probably quite easy to read and understand, but it removes the > transitivity of the operators we have on the right, and is a bit limiting. > > 5: SyntaxError: Can only use the [whatever the name] augmented operator > with a single expression > Why not, it's a bit limiting, but is clear enough to me. > > Maybe, a simpler thing to do for this problem would be to make something > like this: > a = .b(5) + c(.d) + 3 > being the equivalent of > a = a.b(5) + c(a.d) + 3 > > I don't see any ambiguity anymore, it shortens the code a lot, and I guess > it wouldn't be hard for the compiler to recompose the line as a first > parsing step, and create the same AST with both syntaxes. > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From steve at pearwood.info Fri Sep 28 04:10:27 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Fri, 28 Sep 2018 18:10:27 +1000 Subject: [Python-ideas] Add .= as a method return value assignment operator In-Reply-To: <5BADBD82.90303@canterbury.ac.nz> References: <20180928000332.GH19437@ando.pearwood.info> <5BADBD82.90303@canterbury.ac.nz> Message-ID: <20180928081027.GK19437@ando.pearwood.info> On Fri, Sep 28, 2018 at 05:34:58PM +1200, Greg Ewing wrote: > Steven D'Aprano wrote: > >Think about a more complex assignment: > > > > text .= encode(spam) + str(eggs) > > I think the only sane thing would be to disallow that, and > require the RHS to have the form of a function call, which is > always interpreted as a method of the LHS. You obviously have a different idea of what is "sane" than I do :-) But okay, so we cripple the RHS so that it can only be a single method call. So useful things like these are out: target .= method(arg) or default target .= foo(arg) if condition else bar(arg) and even target .= method(args) + 1 making the syntax pure sugar for target = target.method(args) and absolutely nothing else. I think that's the sort of thing which gives syntactic sugar a bad name. The one positive I can see is that if the target is a compound expression, it could be evaluated once only: spam[0](x, y, z).eggs['cheese'].aardvark .= method(args) I suppose if I wrote a lot of code like that, aside from (probably?) violating the Law of Demeter, I might like this syntax because it avoids repeating a long compound target. -- Steve From p.f.moore at gmail.com Fri Sep 28 04:37:05 2018 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 28 Sep 2018 09:37:05 +0100 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: <5BAC807A.2070509@canterbury.ac.nz> Message-ID: On Fri, 28 Sep 2018 at 02:24, Marko Ristin-Kaufmann wrote: > Hi, > > I annotated pathlib with contracts: > https://github.com/mristin/icontract-pathlib-poc. I zipped the HTML docs > into > https://github.com/mristin/icontract-pathlib-poc/blob/master/contracts-pathlib-poc.zip, > you can just unpack and view the index.html. > Thanks, for doing this! It's probably not going to result in the reaction you hoped for (see below) but I appreciate you taking the time to do it. Some of the contracts might seem trivial -- but mind that you, as a writer, > want to tell the user what s/he is expected to fulfill before calling the > function. For example, if you say: > rmdir() > > Remove this directory. The directory must be empty. > Requires: > > - not list(self.iterdir()) (??? There must be a way to check this more > optimally) > - self.is_dir() > > > self.is_dir() contract might seem trivial -- but it's actually not. You > actually want to convey the message: dear user, if you are iterating > through a list of paths, use this function to decide if you should call > rmdir() or unlink(). Analogously with the first contract: dear user, please > check if the directory is empty before calling rmdir() and this is what you > need to call to actually check that. > The problem (IMO) with both of these is precisely that they are written as Python expressions. Your prose descriptions of what they mean are fine, and *those* are what I would hope to see in documentation. This is far more obvious in later examples, where the code needed to check certain conditions is extremely unclear unless you spend time trying to interpret it. > I also finally assembled the puzzle. 
Most of you folks are all older and > were exposed to DbC in the '80s, championed by DbC zealots who advertised > it as *the* tool for software development. You were repulsed by their > fanaticism -- the zealots also pushed for all the contracts to be defined, > and none less. Either you have 100% DbC or no sane software development at > all. > Well, yes, but your claims were *precisely* the same as those I saw back then. "All projects on PyPI would benefit", "the benefits are obvious", ... As I said, DbC is a reasonable methodology, but the reason it's not more widely used is (in the first instance) because it has a *marketing* problem. Solving that with more of the same style of marketing won't help. Once you get beyond the marketing, there are *still* questions (see above and below), but if you can't even get people past the first step, you've lost. > And that's why I said that the libraries on pypi meant to be used by > multiple people and which already have type annotations would obviously > benefit from contracts -- while you were imagining that all of these > libraries need to be DbC'ed 100%, I was imagining something much more > humble. Thus the misunderstanding. > No, I was imagining that some libraries were small, some were used by small, specialised groups, and some were immensely successful without DbC. So claiming that they would "obviously" benefit is a very strong claim. > After annotating pathlib, I find that it actually needs contracts more > than if it had type annotations. For example: > stat() > > Return the result of the stat() system call on this path, like os.stat() > does. > Ensures: > > - result is not None ⇒ self.exists() > - result is not None ⇒ os.stat(str(self)).__dict__ == result.__dict__ > (??? This is probably not what was meant by "like os.stat() does"?) > > > But what does it mean "like os.stat() does"? I wrote equality over > __dict__'s in the contract. That was my idea of what the implementer was > trying to tell me. But is that the operator that should be applied? Sure, > the contract merits a description. But without it, how am I supposed to > know what "like" means? > > Similarly with rmdir() -- "the directory must be empty" -- but how exactly > am I supposed to check that? > Isn't that the whole point? The prose statement "the directory must be empty" is clear. But the exact code check isn't - and may be best handled by a series of unit tests, rather than a precondition. Anyhow, please have a look at the contracts and let me know what you think. > Please consider it an illustration. Try to question whether the contracts I > wrote are so obvious to everybody even if they are obvious to you and keep > in mind that the user does not look into the implementation. And please try > to abstract away the aesthetics: neither the icontract library that I wrote nor > the sphinx extension is of sufficient quality. We use them for our company > code base, but they still need a lot of polishing. So please try to focus > only on the content. We are still talking about contracts in general, not > about the concrete contract implementation > The thing that you didn't discuss in the above was the effect on the source code. Looking at your modified sources, I found it *significantly* harder to read your annotated version than the original. Basically every function and class was cluttered with irrelevant[1] conditions, which obscured the logic and the basic meaning of the code.
[1] Irrelevant in terms of the flow of the code - I appreciate that there's value in checking preconditions in the broader sense. It's like all error handling and similar - there's a balance to be had between "normal behaviour" and handling of exceptional cases. And I feel that the contracts tip that balance too far towards making exceptional cases the focus. So ultimately this example has probably persuaded me that I *don't* want to add contract checking, except in very specific cases where the benefits outweigh the disadvantages. It's very subjective, though, so I'm fine if other people feel differently. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From jfine2358 at gmail.com Fri Sep 28 04:55:07 2018 From: jfine2358 at gmail.com (Jonathan Fine) Date: Fri, 28 Sep 2018 09:55:07 +0100 Subject: [Python-ideas] Add .= as a method return value assignment operator In-Reply-To: <20180928081027.GK19437@ando.pearwood.info> References: <20180928000332.GH19437@ando.pearwood.info> <5BADBD82.90303@canterbury.ac.nz> <20180928081027.GK19437@ando.pearwood.info> Message-ID: Summary: I recast an example in a more abstract form. Steve D'Aprano wrote: > Think about a more complex assignment: > text .= encode(spam) + str(eggs) I find this example instructive. I hope the following is also instructive: $ python3 >>> obj += incr NameError: name 'obj' is not defined >>> obj = object() >>> obj += incr NameError: name 'incr' is not defined >>> incr = 1 >>> obj += incr TypeError: unsupported operand type(s) for +=: 'object' and 'int' >>> incr = object() >>> obj += incr TypeError: unsupported operand type(s) for +=: 'object' and 'object' >>> obj += [] + () TypeError: can only concatenate list (not "tuple") to list To me this shows that LHS += RHS works as follows: 1. Evaluate the LHS (as an assignable object). 2. Evaluate the RHS (as a value). and then some more steps, not covered in my example. As syntax the compound symbols '+=' and '.=' are similar. But in semantics, '+=' does and '.=' does not have an evaluation of the RHS as an expression. This is, in abstract terms, the origin of Steve's example. Someone else has noted that '+=' and its variants are focused on numeric operations, such as addition. This shows, to me, that the simplification provided by use cases such as text = text.replace("foo","bar") has to be compared to the complexity introduced by text .= encode(spam) + str(eggs) In other words, I've restated Steve's example, in a more abstract form. I hope it helps to have another way to look at this example. Finally, I note >>> a = 2 >>> a **= 3 >>> a 8 -- Jonathan From rosuav at gmail.com Fri Sep 28 05:05:12 2018 From: rosuav at gmail.com (Chris Angelico) Date: Fri, 28 Sep 2018 19:05:12 +1000 Subject: [Python-ideas] Add .= as a method return value assignment operator In-Reply-To: References: <20180928000332.GH19437@ando.pearwood.info> <5BADBD82.90303@canterbury.ac.nz> <20180928081027.GK19437@ando.pearwood.info> Message-ID: On Fri, Sep 28, 2018 at 6:56 PM Jonathan Fine wrote: > Finally, I note > > >>> a = 2 > >>> a **= 3 > >>> a > 8 > ? Yes? That's what 2 ** 3 is, so that's what I would expect. All other augmented assignment operators take an assignment target on the left and a (single) value on the right. ChrisA From rosuav at gmail.com Fri Sep 28 05:13:03 2018 From: rosuav at gmail.com (Chris Angelico) Date: Fri, 28 Sep 2018 19:13:03 +1000 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? 
In-Reply-To: References: <5BAC807A.2070509@canterbury.ac.nz> Message-ID: On Fri, Sep 28, 2018 at 11:25 AM Marko Ristin-Kaufmann wrote: > > Hi, > > I annotated pathlib with contracts: https://github.com/mristin/icontract-pathlib-poc. I zipped the HTML docs into https://github.com/mristin/icontract-pathlib-poc/blob/master/contracts-pathlib-poc.zip, you can just unpack and view the index.html. > > One thing I did observe was that there were contracts written in text all over the documentation -- I tried to formulate most of them in code. Since I'm not the implementer nor very familiar with the module, please consider that many of these contracts can be definitely made more beautiful. There were some limitations to the icontract-sphinx extension and icontract which I noted at the top of the document. > You do a lot of contracts that involve is_absolute and other checks. But the postcondition on is_absolute merely says: not result or self.root != "" (Also: I'm not sure, but I think maybe that should be _root?? Leaving that aside.) So I'm not sure how much you can really ascertain about absolute paths. You guarantee that is_absolute will return something plausible, but unless you guarantee that it's returning the correct value, depending on it for your preconditions seems dubious. A buggy is_absolute could break basically everything, and your contracts wouldn't notice. It is still fundamentally difficult to make assertions about the file system as pre/post contracts. Are you becoming aware of this? Contracts, as has been stated multiple times, look great for mathematically pure functions that have no state outside of their own parameters and return values (and 'self', where applicable), but are just poor versions of unit tests when there's anything external to consider. ChrisA
From jfine2358 at gmail.com Fri Sep 28 05:27:51 2018 From: jfine2358 at gmail.com (Jonathan Fine) Date: Fri, 28 Sep 2018 10:27:51 +0100 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: <5BAC807A.2070509@canterbury.ac.nz> Message-ID: I like this discussion. I'd like to add another theme, namely what should happen when there is an error. (This is prompted by race hazards when performing file system operations.) Suppose fn_a() calls fn_b(), and fn_b() raises an exception. What then should fn_a() do? It may be that this exception has left part or all of the system in an inconsistent (invalid) state. At this level of abstraction, it's not possible to sensibly answer this question. Sometimes the whole system should be stopped. Other times, an invalidation of an object is enough. And sometimes, a rollback of the transaction is what's wanted. Here's a well-known example (overflow exception in Ariane 5), which to me shows that these problems can be very hard to get right. https://en.wikipedia.org/wiki/Cluster_(spacecraft) According to Wikipedia (above), this failure resulted in "the first example of large-scale static code analysis by abstract interpretation". I expect that in some situations design-by-contract will help here, by encouraging a focus on providing a more complete specification of behaviour. It would be good to have some real-life Python examples. I'm afraid I don't have any (although I have looked). -- Jonathan
From rosuav at gmail.com Fri Sep 28 05:35:44 2018 From: rosuav at gmail.com (Chris Angelico) Date: Fri, 28 Sep 2018 19:35:44 +1000 Subject: [Python-ideas] Why is design-by-contracts not widely adopted?
In-Reply-To: References: <5BAC807A.2070509@canterbury.ac.nz> Message-ID: On Fri, Sep 28, 2018 at 7:29 PM Jonathan Fine wrote: > > I like this discussion. I'd like to add another theme, namely what > should happen when there is an error. (This is prompted by race > hazards when performing file system operations.) > > Suppose fn_a() calls fn_b(), and fn_b() raises an exception. What then > should fn_a() do? It may be that this exception has left part or all > of the system in an inconsistent (invalid) state. That's why try/finally exists. You shouldn't have to worry about contracts for that. (Similarly, context managers, which are a way of wrapping up try/finally into a convenient package.) ChrisA From p.f.moore at gmail.com Fri Sep 28 05:44:16 2018 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 28 Sep 2018 10:44:16 +0100 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: <5BAC807A.2070509@canterbury.ac.nz> Message-ID: On Fri, 28 Sep 2018 at 10:37, Chris Angelico wrote: > > On Fri, Sep 28, 2018 at 7:29 PM Jonathan Fine wrote: > > > > I like this discussion. I'd like to add another theme, namely what > > should happen when there is an error. (This is prompted by race > > hazards when performing file system operations.) > > > > Suppose fn_a() calls fn_b(), and fn_b() raises an exception. What then > > should fn_a() do? It may be that this exception has left part or all > > of the system in an inconsistent (invalid) state. > > That's why try/finally exists. You shouldn't have to worry about > contracts for that. > > (Similarly, context managers, which are a way of wrapping up > try/finally into a convenient package.) However, a contract would need to be able to express "Returns a fully initialised widget or raises a BrokenWidget exception". Unit tests do this sort of thing all the time. Paul From boxed at killingar.net Fri Sep 28 06:42:04 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Fri, 28 Sep 2018 12:42:04 +0200 Subject: [Python-ideas] Add .= as a method return value assignment operator In-Reply-To: <20180928081027.GK19437@ando.pearwood.info> References: <20180928000332.GH19437@ando.pearwood.info> <5BADBD82.90303@canterbury.ac.nz> <20180928081027.GK19437@ando.pearwood.info> Message-ID: >>> Think about a more complex assignment: >>> >>> text .= encode(spam) + str(eggs) >> >> I think the only sane thing would be to disallow that, and >> require the RHS to have the form of a function call, which is >> always interpreted as a method of the LHS. > > > You obviously have a different idea of what is "sane" than I do :-) > > But okay, so we cripple the RHS so that it can only be a single method > call. So useful things like these are out: > > target .= method(arg) or default > > target .= foo(arg) if condition else bar(arg) > > and even > > target .= method(args) + 1 > > making the syntax pure sugar for > > target = target.method(args) > > and absolutely nothing else. I think that's the sort of thing which > gives syntactic sugar a bad name. > > The one positive I can see is that if the target is a compound > expression, it could be evaluated once only: > > spam[0](x, y, z).eggs['cheese'].aardvark .= method(args) > > I suppose if I wrote a lot of code like that, aside from (probably?) > violating the Law of Demeter, I might like this syntax because it > avoids repeating a long compound target. I don?t really like this proposal for the same reasons you do I think but let?s play devils advocate for a while... 
I would then suggest something more readable like: foo as x = x.bar or x.something_else or even_another_thing This avoids the iffy hanging dot while allowing more complex and readable statements. It also expands trivially to tuple unpacking which the .= syntax does not. / Anders From mertz at gnosis.cx Fri Sep 28 08:08:10 2018 From: mertz at gnosis.cx (David Mertz) Date: Fri, 28 Sep 2018 08:08:10 -0400 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: <5BAC807A.2070509@canterbury.ac.nz> Message-ID: On Fri, Sep 28, 2018, 4:38 AM Paul Moore wrote: > On Fri, 28 Sep 2018 at 02:24, Marko Ristin-Kaufmann < > marko.ristin at gmail.com> wrote: > >> I annotated pathlib with contracts: >> https://github.com/mristin/icontract-pathlib-poc. I zipped the HTML docs >> into >> https://github.com/mristin/icontract-pathlib-poc/blob/master/contracts-pathlib-poc.zip, >> you can just unpack and view the index.html. >> > > The thing that you didn't discuss in the above was the effect on the > source code. Looking at your modified sources, I found it *significantly* > harder to read your annotated version than the original. Basically every > function and class was cluttered with irrelevant[1] conditions, which > obscured the logic and the basic meaning of the code. > My reaction was just the same as Paul's. I read the modified source, and found that the invariant declarations made it *dramatically* harder to read. The ratio was almost exactly as I characterized in a recent note: 15 lines of pre/post-conditions on a 10 like function. Like Paul, I understand the documentation and testing value of these, but they were highly disruptive to the readability of the functions themselves. As a result of reading the example, I'd be somewhat less likely to use a DbC library, and much more strongly opposed to having one in the standards library (and aghast at the idea of dedicated syntax) -------------- next part -------------- An HTML attachment was scrubbed... URL: From mertz at gnosis.cx Fri Sep 28 08:23:44 2018 From: mertz at gnosis.cx (David Mertz) Date: Fri, 28 Sep 2018 08:23:44 -0400 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: <5BAC807A.2070509@canterbury.ac.nz> Message-ID: On Thu, Sep 27, 2018, 9:25 PM Marko Ristin-Kaufmann wrote: > Try to question whether the contracts I wrote are so obvious to everybody > even if they are obvious to you and keep in mind that the user does not > look into the implementation. > I had missed this comment, but this seems to be the biggest disconnect, or talking past each other. I'm a user of many libraries. I USUALLY look at the implementation when in doubt about a function. If contracts are meant only for users who don't look at code, the detrimental effect on code readability is mitigated. The other place I look, if not the actual implementation, is at the docstring. I don't remember if icontracts patches the docstring when it decorates a function. If not, that would be very helpful. I agree that all the Sphinx documentation examples shown are very nice. Prose descriptions would often be nicer still, but the Boolean expressions are helpful for those unusual cases where I don't want to look at the code. -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Fri Sep 28 08:49:01 2018 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 28 Sep 2018 13:49:01 +0100 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? 
In-Reply-To: References: <5BAC807A.2070509@canterbury.ac.nz> Message-ID: On Fri, 28 Sep 2018 at 13:23, David Mertz wrote: > I agree that all the Sphinx documentation examples shown are very nice. Prose descriptions would often be nicer still, but the Boolean expressions are helpful for those unusual cases where I don't want to look at the code. I'm ambivalent about the Sphinx examples. I find the highly detailed code needed to express a condition fairly unreadable (and I'm an experienced Python programmer). For example @pre(lambda args, result: not any(Path(arg).is_absolute() for arg in args) or (result == [pth for arg in args for pth in [Path(arg)] if pth.is_absolute()][-1]), "When several absolute paths are given, the last is taken as an anchor (mimicking os.path.join()?s behaviour)") The only way I'd read that is by looking at the summary text - I'd never get the sense of what was going on from the code alone. There's clearly a number of trade-offs going on here: * Conditions should be short, to avoid clutter * Writing helper functions that are *only* used in conditions is more code to test or get wrong * Sometimes it's just plain hard to express a verbal constraint in code * Marko isn't that familiar with the codebase, so there may be better ways to express certain things But given that *all* the examples I've seen of contracts have this issue (difficult to read expressions) I suspect the problem is inherent. Another thing that I haven't yet seen clearly explained. How do these contracts get *run*? Are they checked on every call to the function, even in production code? Is there a means to turn them off? What's the runtime overhead of a "turned off" contract (even if it's just an if-test per condition, that can still add up)? And what happens if a contract fails - is it an exception/traceback (which is often unacceptable in production code such as services)? The lack of any clear feel for the cost of adding contracts is part of what makes people reluctant to use them (particularly in the face of the unfortunately still common assertion that "Python is slow" :-() Paul From marko.ristin at gmail.com Fri Sep 28 09:41:20 2018 From: marko.ristin at gmail.com (Marko Ristin-Kaufmann) Date: Fri, 28 Sep 2018 15:41:20 +0200 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: <5BAC807A.2070509@canterbury.ac.nz> Message-ID: Hi, (Posting from work, so sorry for the short response.) @Paul Moore icontract.pre/post/inv have the enabled argument; if not enabled, the contract is ignored. Similarly with rmdir() -- "the directory must be empty" -- but how exactly >> am I supposed to check that? >> > > Isn't that the whole point? The prose statement "the directory must be > empty" is clear. But the exact code check isn't - and may be best handled > by a series of unit tests, rather than a precondition. > I meant "check" as a user, not as a developer. As in "What did the implementer think -- how am I supposed to check that the directory is empty?" A la: "Dear user, if you want to rmdir, here is what you need to check that it is indeed a dir, and here is what you need to check that it is empty. If both checks pass, run me." @David patching __doc__ automatically is on the short-term todo list. I suppose we'll just add sphinx directives (:requires:, :ensures: etc.) 
* Marko isn't that familiar with the codebase, so there may be better > ways to express certain things > This is true :) * Sometimes it's just plain hard to express a verbal constraint in code > In these cases you simply don't express it in code. Why would you? If it's complex code, possibility that you have an error is probably equal or higher than that your comment rots. @pre(lambda args, result: not any(Path(arg).is_absolute() for arg in args) > or > (result == [pth for arg in args for pth in [Path(arg)] if > pth.is_absolute()][-1]), > "When several absolute paths are given, the last is taken as an anchor > (mimicking os.path.join()?s behaviour)") > I'm really not familiar with the code base nor with how to write stuff nice and succinct in python. This particular contract was hard to define because there were no last() and no arg_is_absolute() functions. Otherwise, it would have read: @pre(lambda args, result: not any(arg_is_absolute(arg) for arg in args) or result == Path(last(arg for arg in args if arg_is_absolute(arg))) When rendered, this wouldn't look too bad to read. @Chris > It is still fundamentally difficult to make assertions about the file > system as pre/post contracts. Are you becoming aware of this? > Contracts, as has been stated multiple times, look great for > mathematically pure functions that have no state outside of their own > parameters and return values (and 'self', where applicable), but are > just poor versions of unit tests when there's anything external to > consider. > I never thought of these particular contracts as running in the production. I would set them to run only in testing and only on part of the tests where they are safe from race conditions (e.g., setting enabled=test_pathlib.SERIAL; toggling mechanism is something I haven't spent too much thought about either and would also need to be discussed/implemented.). I really thought about them as documentation, not for correctness (or at best deeper tests during the testing in a "safe" local environment, for example when you want to check if all the contracts also hold on situations in *my *testsuite, not only during the test suite of pathlib). In the end, I'm calling it the day. I really got tired in the last few days. Standardizing contracts for python is not worth the pain. We'll continue to develop icontract for our internal needs and keep it open source, so anybody who's interested can have a look. Thank you all for a very lively discussions! Cheers, Marko On Fri, 28 Sep 2018 at 14:49, Paul Moore wrote: > On Fri, 28 Sep 2018 at 13:23, David Mertz wrote: > > I agree that all the Sphinx documentation examples shown are very nice. > Prose descriptions would often be nicer still, but the Boolean expressions > are helpful for those unusual cases where I don't want to look at the code. > > I'm ambivalent about the Sphinx examples. I find the highly detailed > code needed to express a condition fairly unreadable (and I'm an > experienced Python programmer). For example > > @pre(lambda args, result: not any(Path(arg).is_absolute() for arg in args) > or > (result == [pth for arg in args for pth in [Path(arg)] if > pth.is_absolute()][-1]), > "When several absolute paths are given, the last is taken as an anchor > (mimicking os.path.join()?s behaviour)") > > The only way I'd read that is by looking at the summary text - I'd > never get the sense of what was going on from the code alone. 
There's > clearly a number of trade-offs going on here: > > * Conditions should be short, to avoid clutter > * Writing helper functions that are *only* used in conditions is more > code to test or get wrong > * Sometimes it's just plain hard to express a verbal constraint in code > * Marko isn't that familiar with the codebase, so there may be better > ways to express certain things > > But given that *all* the examples I've seen of contracts have this > issue (difficult to read expressions) I suspect the problem is > inherent. > > Another thing that I haven't yet seen clearly explained. How do these > contracts get *run*? Are they checked on every call to the function, > even in production code? Is there a means to turn them off? What's the > runtime overhead of a "turned off" contract (even if it's just an > if-test per condition, that can still add up)? And what happens if a > contract fails - is it an exception/traceback (which is often > unacceptable in production code such as services)? The lack of any > clear feel for the cost of adding contracts is part of what makes > people reluctant to use them (particularly in the face of the > unfortunately still common assertion that "Python is slow" :-() > > Paul > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at pearwood.info Fri Sep 28 12:45:06 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Sat, 29 Sep 2018 02:45:06 +1000 Subject: [Python-ideas] Why is design-by-contracts not widely In-Reply-To: References: Message-ID: <20180928164506.GN19437@ando.pearwood.info> On Tue, Sep 25, 2018 at 09:59:53PM +1000, Hugh Fisher wrote: > C and Python (currently) are known as simple languages. o_O That's a usage of "simple" I haven't come across before. Especially in the case of C, which is a minefield of *intentionally* underspecified behaviour which makes it near to impossible for the developer to tell what a piece of syntactically legal C code will actually do in practice. -- Steve From 2qdxy4rzwzuuilue at potatochowder.com Fri Sep 28 13:18:54 2018 From: 2qdxy4rzwzuuilue at potatochowder.com (2qdxy4rzwzuuilue at potatochowder.com) Date: Fri, 28 Sep 2018 19:18:54 +0200 Subject: [Python-ideas] Why is design-by-contracts not widely Message-ID: <20180928191854.Horde.xmUWrqDGOX18KB2KDkxRZbz@webmail.your-server.de> On 9/28/18 12:45 PM, Steven D'Aprano wrote: > On Tue, Sep 25, 2018 at 09:59:53PM +1000, Hugh Fisher wrote: > >> C and Python (currently) are known as simple languages. > > o_O > > That's a usage of "simple" I haven't come across before. Especially in > the case of C, which is a minefield of *intentionally* underspecified > behaviour which makes it near to impossible for the developer to tell > what a piece of syntactically legal C code will actually do in practice. s/C/Python/ s/underspecified/dynamic/ ;-) From brenbarn at brenbarn.net Fri Sep 28 13:51:36 2018 From: brenbarn at brenbarn.net (Brendan Barnwell) Date: Fri, 28 Sep 2018 10:51:36 -0700 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: <5BAC807A.2070509@canterbury.ac.nz> Message-ID: <5BAE6A28.5010609@brenbarn.net> On 2018-09-28 05:23, David Mertz wrote: > On Thu, Sep 27, 2018, 9:25 PM Marko Ristin-Kaufmann > > wrote: > > Try to question whether the contracts I wrote are so obvious to > everybody even if they are obvious to you and keep in mind that the > user does not look into the implementation. 
> > > I had missed this comment, but this seems to be the biggest disconnect, > or talking past each other. > > I'm a user of many libraries. I USUALLY look at the implementation when > in doubt about a function. If contracts are meant only for users who > don't look at code, the detrimental effect on code readability is mitigated. I agree with you that this seems to be a major disconnect in the discussion here. However, on the issue itself, I quite agree with Marko that it is *much* more important for the documentation to be readable than for the function to be readable. I too read the source of functions sometimes, and whenever I find myself having to do so, I grouse at the authors of whatever libraries I'm using for not making the documentation more clear. Ideally, no user should *ever* have to read the function source code, because the documentation should make it completely clear how to use the function without knowing *anything* about how it is implemented. Of course, this ideal will never be achieved, but I think it's something to aim towards, and the idea that adding a few lines of DbC decorators makes the source code too cluttered seems quite incorrect to me. I glanced through the source code and didn't find it hard to read at all. The contracts are all cleanly separated from the function bodies because they're in decorators up front. I'm frankly quite baffled that other people on this thread find that hard to read. The problem with reading the source code is that you can't tell what parts of the behavior are specified and which are implementation details. The appeal of something like DbC is that it encourages (some might say painfully forces) programmers to be very explicit about what behavior they want to guarantee. Whether writing these guarantees as Python expressions is better than writing them as prose is another matter. Personally I do see some value in the modifications that Marko made to pathlib. In a sense, writing "documentation as Python code" is like annotating the source code to mark which specific parts of the implementation are guarantees and which may be implementation details. I think there is significant value in knowing precisely what an API allows, in an explicit and verifiable form such as that provided by DbC, rather than using human language, which is much less precise, can leave room for misinterpretation, and, perhaps most importantly, is harder to verify automatically. Ultimately, I'm firmly of the opinion that, in publicly distributed code, the function *IS* its documentation, not its source code. When a function's actual behavior conflicts with its documented behavior, that is a bug. When meaningful aspects of a functions behavior are not specified in the documentation, that is also a bug. These may be bugs in the documentation or in the behavior, but either way the point is that reading the source code is not an acceptable substitute for making the documentation a complete and self-sufficient vehicle for total understanding of the function's behavior. It doesn't matter if the function behaves as the author intended; it only matters if it behaves as documented. -- Brendan Barnwell "Do not follow where the path may lead. Go, instead, where there is no path, and leave a trail." --author unknown From greg.ewing at canterbury.ac.nz Fri Sep 28 17:18:26 2018 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sat, 29 Sep 2018 09:18:26 +1200 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? 
In-Reply-To: References: <5BAC807A.2070509@canterbury.ac.nz> Message-ID: <5BAE9AA2.2020406@canterbury.ac.nz> Chris Angelico wrote: > It is still fundamentally difficult to make assertions about the file > system as pre/post contracts. When you consider race conditions, I'd say it's impossible. A postcondition check involving the file system could fail even if the function does its job perfectly. -- Greg From jamtlu at gmail.com Thu Sep 27 22:55:44 2018 From: jamtlu at gmail.com (James Lu) Date: Thu, 27 Sep 2018 22:55:44 -0400 Subject: [Python-ideas] "old" values in postconditions In-Reply-To: References: <13186373-6FB6-4C8B-A8D8-2C3E028CDC3D@gmail.com> <3C33B6FF-FC19-47D6-AD2A-FC0B17C50A8D@gmail.com> <0061278F-4243-42BD-945D-A93B4A0FC21D@gmail.com> Message-ID: I am fine with your proposed syntax. It?s certainly lucid. Perhaps it would be a good idea to get people accustomed to ?non-magic? syntax. > I still have a feeling that most developers would like to store the state in many different custom ways. Please explain. (Expressions like thunk(all)(a == b for a, b in P.arg.meth()) would be valid.) > I'm thinking mostly about all the edge cases which we would not be able to cover (and how complex that would be to cover them). Except for a > b > c being one flat expression with 5 members, it seems fairly easy to recreate an AST, which can then be compiled down to a code object. The code object can be fun with a custom ?locals()? Below is my concept code for such a P object. from ast import * # not done: enforce Singleton property on EmptySymbolType class EmptySymbolType(object): ... EmptySymbol = EmptySymbolType() # empty symbols are placeholders class MockP(object): # "^" is xor @icontract.pre(lambda symbol, astnode: (symbol is None) ^ (astnode is None)) def __init__(self, symbol=None, value=EmptySymbol, astnode=None, initsymtable=(,)): self.symtable = dict(initsymtable) if symbol: self.expr = Expr(value=Name(id=symbol, ctx=Load())) self.symtable = {symbol: value} else: self.expr = astnode self.frozen = False def __add__(self, other): wrapped = MockP.wrap_value(other) return MockP(astnode=Expr(value=BinOp(self.expr, Add(), wrapped.expr), initsymtable={**self.symtable, **wrapped.symtable}) def compile(self): ... def freeze(self): # frozen objects wouldn?t have an overrided getattr, allowing for icontract to manipulate the MockP object using its public interface self.frozen = True @classmethod def wrap_value(cls, obj): # create a MockP object from a value. Generate a random identifier and set that as the key in symtable, the AST node is the name of that identifier, retrieving its value through simple expression evaluation. ... thunk = MockP.wrap_value P = MockP('P') # elsewhere: ensure P is only accessed via valid ?dot attribute access? inside @snapshot so contracts fail early, or don?t and allow Magic like __dict__ to occur on P. > On Sep 27, 2018, at 9:49 PM, Marko Ristin-Kaufmann wrote: > > Hi James, > > I still have a feeling that most developers would like to store the state in many different custom ways. I see also thunk and snapshot with wrapper objects to be much more complicated to implement and maintain; I'm thinking mostly about all the edge cases which we would not be able to cover (and how complex that would be to cover them). Then the linters need also to work around such wrappers... It might also scare users off since it looks like too much magic. 
Another concern I also have is that it's probably very hard to integrate these wrappers with mypy later -- but I don't really have a clue about that, only my gut feeling? > > What about we accepted to repeat "lambda P, " prefix, and have something like this: > > @snapshot( > lambda P, some_name: len(P.some_property), > lambda P, another_name: hash(P.another_property) > ) > > It's not too verbose for me and you can still explain in three-four sentences what happens below the hub in the library's docs. A pycharm/pydev/vim/emacs plugins could hide the verbose parts. > > I performed a small experiment to test how this solution plays with pylint and it seems OK that arguments are not used in lambdas. > > Cheers, > Marko > > >> On Thu, 27 Sep 2018 at 12:27, James Lu wrote: >> Why couldn?t we record the operations done to a special object and replay them? >> >>>>> Actually, I think there is probably no way around a decorator that captures/snapshots the data before the function call with a lambda (or even a separate function). "Old" construct, if we are to parse it somehow from the condition function, would limit us only to shallow copies (and be complex to implement as soon as we are capturing out-of-argument values such as globals etc.). Moreove, what if we don't need shallow copies? I could imagine a dozen of cases where shallow copy is not what the programmer wants: for example, s/he might need to make deep copies, hash or otherwise transform the input data to hold only part of it instead of copying (e.g., so as to allow equality check without a double copy of the data, or capture only the value of certain property transformed in some way). >> >> >> from icontract import snapshot, P, thunk >> @snapshot(some_identifier=P.self.some_method(P.some_argument.some_attr)) >> >> P is an object of our own type, let?s call the type MockP. MockP returns new MockP objects when any operation is done to it. MockP * MockP = MockP. MockP.attr = MockP. MockP objects remember all the operations done to them, and allow the owner of a MockP object to re-apply the same operations >> >> ?thunk? converts a function or object or class to a MockP object, storing the function or object for when the operation is done. >> >> thunk(function)() >> >> Of course, you could also thunk objects like so: thunk(3) * P.number. (Though it might be better to keep the 3 after P.number in this case so P.number?s __mult__ would be invoked before 3?s __mult__ is invokes. >> >> >> In most cases, you?d save any operations that can be done on a copy of the data as generated by @snapshot in @postcondiion. thunk is for rare scenarios where 1) it?s hard to capture the state, for example an object that manages network state (or database connectivity etc) and whose stage can only be read by an external classmethod 2) you want to avoid using copy.deepcopy. >> >> I?m sure there?s some way to override isinstance through a meta class or dunder subclasshook. >> >> I suppose this mocking method could be a shorthand for when you don?t need the full power of a lambda. It?s arguably more succinct and readable, though YMMV. >> >> I look forward to reading your opinion on this and any ideas you might have. 
>> >>> On Sep 26, 2018, at 3:56 PM, James Lu wrote: >>> >>> Hi Marko, >>> >>>> Actually, following on #A4, you could also write those as multiple decorators: >>>> @snpashot(lambda _, some_identifier: some_func(_, some_argument.some_attr) >>>> @snpashot(lambda _, other_identifier: other_func(_.self)) >>> >>> Yes, though if we?re talking syntax using kwargs would probably be better. >>> Using ?P? instead of ?_?: (I agree that _ smells of ignored arguments) >>> >>> @snapshot(some_identifier=lambda P: ..., some_identifier2=lambda P: ...) >>> >>> Kwargs has the advantage that you can extend multiple lines without repeating @snapshot, though many lines of @capture would probably be more intuitive since each decorator captures one variable. >>> >>>> Why uppercase "P" and not lowercase (uppercase implies a constant for me)? >>> >>> To me, the capital letters are more prominent and explicit- easier to see when reading code. It also implies its a constant for you- you shouldn?t be modifying it, because then you?d be interfering with the function itself. >>> >>> Side node: maybe it would be good to have an @icontract.nomutate (probably use a different name, maybe @icontract.readonly) that makes sure a method doesn?t mutate its own __dict__ (and maybe the __dict__ of the members of its __dict__). It wouldn?t be necessary to put the decorator on every read only function, just the ones your worried might mutate. >>> >>> Maybe a @icontract.nomutate(param=?paramname?) that ensures the __dict__ of all members of the param name have the same equality or identity before and after. The semantics would need to be worked out. >>> >>>> On Sep 26, 2018, at 8:58 AM, Marko Ristin-Kaufmann wrote: >>>> >>>> Hi James, >>>> >>>> Actually, following on #A4, you could also write those as multiple decorators: >>>> @snpashot(lambda _, some_identifier: some_func(_, some_argument.some_attr) >>>> @snpashot(lambda _, other_identifier: other_func(_.self)) >>>> >>>> Am I correct? >>>> >>>> "_" looks a bit hard to read for me (implying ignored arguments). >>>> >>>> Why uppercase "P" and not lowercase (uppercase implies a constant for me)? Then "O" for "old" and "P" for parameters in a condition: >>>> @post(lambda O, P: ...) >>>> ? >>>> >>>> It also has the nice property that it follows both the temporal and the alphabet order :) >>>> >>>>> On Wed, 26 Sep 2018 at 14:30, James Lu wrote: >>>>> I still prefer snapshot, though capture is a good name too. We could use generator syntax and inspect the argument names. >>>>> >>>>> Instead of ?a?, perhaps use ?_?. Or maybe use ?A.?, for arguments. Some people might prefer ?P? for parameters, since parameters sometimes means the value received while the argument means the value passed. >>>>> >>>>> (#A1) >>>>> >>>>> from icontract import snapshot, __ >>>>> @snapshot(some_func(_.some_argument.some_attr) for some_identifier, _ in __) >>>>> >>>>> Or (#A2) >>>>> >>>>> @snapshot(some_func(some_argument.some_attr) for some_identifier, _, some_argument in __) >>>>> >>>>> ? 
>>>>> Or (#A3) >>>>> >>>>> @snapshot(lambda some_argument,_,some_identifier: some_func(some_argument.some_attr)) >>>>> >>>>> Or (#A4) >>>>> >>>>> @snapshot(lambda _,some_identifier: some_func(_.some_argument.some_attr)) >>>>> @snapshot(lambda _,some_identifier, other_identifier: some_func(_.some_argument.some_attr), other_func(_.self)) >>>>> >>>>> I like #A4 the most because it?s fairly DRY and avoids the extra punctuation of >>>>> >>>>> @capture(lambda a: {"some_identifier": some_func(a.some_argument.some_attr)}) >>>>> >>>>> On Sep 26, 2018, at 12:23 AM, Marko Ristin-Kaufmann wrote: >>>>> >>>>>> Hi, >>>>>> >>>>>> Franklin wrote: >>>>>>> The name "before" is a confusing name. It's not just something that >>>>>>> happens before. It's really a pre-`let`, adding names to the scope of >>>>>>> things after it, but with values taken before the function call. Based >>>>>>> on that description, other possible names are `prelet`, `letbefore`, >>>>>>> `predef`, `defpre`, `beforescope`. Better a name that is clearly >>>>>>> confusing than one that is obvious but misleading. >>>>>> >>>>>> James wrote: >>>>>>> I suggest that instead of ?@before? it?s ?@snapshot? and instead of ?old? it?s ?snapshot?. >>>>>> >>>>>> >>>>>> I like "snapshot", it's a bit clearer than prefixing/postfixing verbs with "pre" which might be misread (e.g., "prelet" has a meaning in Slavic languages and could be subconsciously misread, "predef" implies to me a pre-definition rather than prior-to-definition , "beforescope" is very clear for me, but it might be confusing for others as to what it actually refers to ). What about "@capture" (7 letters for captures versus 8 for snapshot)? I suppose "@let" would be playing with fire if Python with conflicting new keywords since I assume "let" to be one of the candidates. >>>>>> >>>>>> Actually, I think there is probably no way around a decorator that captures/snapshots the data before the function call with a lambda (or even a separate function). "Old" construct, if we are to parse it somehow from the condition function, would limit us only to shallow copies (and be complex to implement as soon as we are capturing out-of-argument values such as globals etc.). Moreove, what if we don't need shallow copies? I could imagine a dozen of cases where shallow copy is not what the programmer wants: for example, s/he might need to make deep copies, hash or otherwise transform the input data to hold only part of it instead of copying (e.g., so as to allow equality check without a double copy of the data, or capture only the value of certain property transformed in some way). >>>>>> >>>>>> I'd still go with the dictionary to allow for this extra freedom. We could have a convention: "a" denotes to the current arguments, and "b" denotes the captured values. It might make an interesting hint that we put "b" before "a" in the condition. You could also interpret "b" as "before" and "a" as "after", but also "a" as "arguments". >>>>>> >>>>>> @capture(lambda a: {"some_identifier": some_func(a.some_argument.some_attr)}) >>>>>> @post(lambda b, a, result: b.some_identifier > result + a.another_argument.another_attr) >>>>>> def some_func(some_argument: SomeClass, another_argument: AnotherClass) -> SomeResult: >>>>>> ... >>>>>> "b" can be omitted if it is not used. Under the hub, all the arguments to the condition would be passed by keywords. >>>>>> >>>>>> In case of inheritance, captures would be inherited as well. 
Hence the library would check at run-time that the returned dictionary with captured values has no identifier that has been already captured, and the linter checks that statically, before running the code. Reading values captured in the parent at the code of the child class might be a bit hard -- but that is case with any inherited methods/properties. In documentation, I'd list all the captures of both ancestor and the current class. >>>>>> >>>>>> I'm looking forward to reading your opinion on this and alternative suggestions :) >>>>>> Marko >>>>>> >>>>>>> On Tue, 25 Sep 2018 at 18:12, Franklin? Lee wrote: >>>>>>> On Sun, Sep 23, 2018 at 2:05 AM Marko Ristin-Kaufmann >>>>>>> wrote: >>>>>>> > >>>>>>> > Hi, >>>>>>> > >>>>>>> > (I'd like to fork from a previous thread, "Pre-conditions and post-conditions", since it got long and we started discussing a couple of different things. Let's discuss in this thread the implementation of a library for design-by-contract and how to push it forward to hopefully add it to the standard library one day.) >>>>>>> > >>>>>>> > For those unfamiliar with contracts and current state of the discussion in the previous thread, here's a short summary. The discussion started by me inquiring about the possibility to add design-by-contract concepts into the core language. The idea was rejected by the participants mainly because they thought that the merit of the feature does not merit its costs. This is quite debatable and seems to reflect many a discussion about design-by-contract in general. Please see the other thread, "Why is design-by-contract not widely adopted?" if you are interested in that debate. >>>>>>> > >>>>>>> > We (a colleague of mine and I) decided to implement a library to bring design-by-contract to Python since we don't believe that the concept will make it into the core language anytime soon and we needed badly a tool to facilitate our work with a growing code base. >>>>>>> > >>>>>>> > The library is available at http://github.com/Parquery/icontract. The hope is to polish it so that the wider community could use it and once the quality is high enough, make a proposal to add it to the standard Python libraries. We do need a standard library for contracts, otherwise projects with conflicting contract libraries can not integrate (e.g., the contracts can not be inherited between two different contract libraries). >>>>>>> > >>>>>>> > So far, the most important bits have been implemented in icontract: >>>>>>> > >>>>>>> > Preconditions, postconditions, class invariants >>>>>>> > Inheritance of the contracts (including strengthening and weakening of the inherited contracts) >>>>>>> > Informative violation messages (including information about the values involved in the contract condition) >>>>>>> > Sphinx extension to include contracts in the automatically generated documentation (sphinx-icontract) >>>>>>> > Linter to statically check that the arguments of the conditions are correct (pyicontract-lint) >>>>>>> > >>>>>>> > We are successfully using it in our code base and have been quite happy about the implementation so far. >>>>>>> > >>>>>>> > There is one bit still missing: accessing "old" values in the postcondition (i.e., shallow copies of the values prior to the execution of the function). This feature is necessary in order to allow us to verify state transitions. 
>>>>>>> > >>>>>>> > For example, consider a new dictionary class that has "get" and "put" methods: >>>>>>> > >>>>>>> > from typing import Optional >>>>>>> > >>>>>>> > from icontract import post >>>>>>> > >>>>>>> > class NovelDict: >>>>>>> > def length(self)->int: >>>>>>> > ... >>>>>>> > >>>>>>> > def get(self, key: str) -> Optional[str]: >>>>>>> > ... >>>>>>> > >>>>>>> > @post(lambda self, key, value: self.get(key) == value) >>>>>>> > @post(lambda self, key: old(self.get(key)) is None and old(self.length()) + 1 == self.length(), >>>>>>> > "length increased with a new key") >>>>>>> > @post(lambda self, key: old(self.get(key)) is not None and old(self.length()) == self.length(), >>>>>>> > "length stable with an existing key") >>>>>>> > def put(self, key: str, value: str) -> None: >>>>>>> > ... >>>>>>> > >>>>>>> > How could we possible implement this "old" function? >>>>>>> > >>>>>>> > Here is my suggestion. I'd introduce a decorator "before" that would allow you to store whatever values in a dictionary object "old" (i.e. an object whose properties correspond to the key/value pairs). The "old" is then passed to the condition. Here is it in code: >>>>>>> > >>>>>>> > # omitted contracts for brevity >>>>>>> > class NovelDict: >>>>>>> > def length(self)->int: >>>>>>> > ... >>>>>>> > >>>>>>> > # omitted contracts for brevity >>>>>>> > def get(self, key: str) -> Optional[str]: >>>>>>> > ... >>>>>>> > >>>>>>> > @before(lambda self, key: {"length": self.length(), "get": self.get(key)}) >>>>>>> > @post(lambda self, key, value: self.get(key) == value) >>>>>>> > @post(lambda self, key, old: old.get is None and old.length + 1 == self.length(), >>>>>>> > "length increased with a new key") >>>>>>> > @post(lambda self, key, old: old.get is not None and old.length == self.length(), >>>>>>> > "length stable with an existing key") >>>>>>> > def put(self, key: str, value: str) -> None: >>>>>>> > ... >>>>>>> > >>>>>>> > The linter would statically check that all attributes accessed in "old" have to be defined in the decorator "before" so that attribute errors would be caught early. The current implementation of the linter is fast enough to be run at save time so such errors should usually not happen with a properly set IDE. >>>>>>> > >>>>>>> > "before" decorator would also have "enabled" property, so that you can turn it off (e.g., if you only want to run a postcondition in testing). The "before" decorators can be stacked so that you can also have a more fine-grained control when each one of them is running (some during test, some during test and in production). The linter would enforce that before's "enabled" is a disjunction of all the "enabled"'s of the corresponding postconditions where the old value appears. >>>>>>> > >>>>>>> > Is this a sane approach to "old" values? Any alternative approach you would prefer? What about better naming? Is "before" a confusing name? >>>>>>> >>>>>>> The dict can be splatted into the postconditions, so that no special >>>>>>> name is required. This would require either that the lambdas handle >>>>>>> **kws, or that their caller inspect them to see what names they take. >>>>>>> Perhaps add a function to functools which only passes kwargs that fit. >>>>>>> Then the precondition mechanism can pass `self`, `key`, and `value` as >>>>>>> kwargs instead of args. >>>>>>> >>>>>>> For functions that have *args and **kwargs, it may be necessary to >>>>>>> pass them to the conditions as args and kwargs instead. >>>>>>> >>>>>>> The name "before" is a confusing name. 
It's not just something that >>>>>>> happens before. It's really a pre-`let`, adding names to the scope of >>>>>>> things after it, but with values taken before the function call. Based >>>>>>> on that description, other possible names are `prelet`, `letbefore`, >>>>>>> `predef`, `defpre`, `beforescope`. Better a name that is clearly >>>>>>> confusing than one that is obvious but misleading. >>>>>>> >>>>>>> By the way, should the first postcondition be `self.get(key) is >>>>>>> value`, checking for identity rather than equality? >>>>>> _______________________________________________ >>>>>> Python-ideas mailing list >>>>>> Python-ideas at python.org >>>>>> https://mail.python.org/mailman/listinfo/python-ideas >>>>>> Code of Conduct: http://python.org/psf/codeofconduct/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jamtlu at gmail.com Thu Sep 27 22:55:42 2018 From: jamtlu at gmail.com (James Lu) Date: Thu, 27 Sep 2018 22:55:42 -0400 Subject: [Python-ideas] You might find coconut language useful In-Reply-To: References: <71F5501B-F44B-417C-80E2-2FF998F9F37E@gmail.com> Message-ID: <10CACC96-5F72-4FAF-9CDB-D2458A240EB2@gmail.com> Hi Marko, I honestly don?t know how many people are using coconut. Though with a little bit of configuring Python?s import functionality, it should have decent inter compatibility. Even without Coconut, I still find icontract?s plain Python lambda syntax readable and useful. James Lu > On Sep 27, 2018, at 9:32 PM, Marko Ristin-Kaufmann wrote: > > Hi James, > > It would be super useful, particularly because lambdas can be written more succinctly. You would suggest to write programs in coconut rather than python? How many people are actually using it? > >> On Thu, 27 Sep 2018 at 22:55, James Lu wrote: >> The language is a superset of Python that transpiles to Python. Let me know if you think its syntax would be useful for contracts. >> >> Sent from my iPhone -------------- next part -------------- An HTML attachment was scrubbed... URL: From jamtlu at gmail.com Fri Sep 28 19:01:09 2018 From: jamtlu at gmail.com (James Lu) Date: Fri, 28 Sep 2018 19:01:09 -0400 Subject: [Python-ideas] "old" values in postconditions In-Reply-To: References: <13186373-6FB6-4C8B-A8D8-2C3E028CDC3D@gmail.com> <3C33B6FF-FC19-47D6-AD2A-FC0B17C50A8D@gmail.com> <0061278F-4243-42BD-945D-A93B4A0FC21D@gmail.com> Message-ID: <96DC372A-C622-482D-9068-60F2F1B3A198@gmail.com> > The problem with readability might be easier to solve than I thought, and your pointer to coconut gave me the idea. What if we make a utility that takes the python source code, examines the decorators pre/post/inv (or whatever we call them) and transforms them back and forth from/to valid python code? > > Pretty much any IDE has after load / before save handlers. When you load a python file, you'd transform it from less readable python code, write using a concise form and when you save it, it's transformed back to valid python. Hmm yes. Look at my previous email- the proposed syntax there does not require changing or transforming Python code. It?s missing many magic methods, but it would work something like this: @snapshot(some_identifier=P.self.another_property + 10) would internally turn into something like this: lambda P: P.self.another_property + 10 Maybe it?s possible to modify PyCharm parsing by altering its Python grammar file? 
Sent from my iPhone > On Sep 28, 2018, at 10:27 AM, Marko Ristin-Kaufmann wrote: > > Hi James, > > The problem with readability might be easier to solve than I thought, and your pointer to coconut gave me the idea. What if we make a utility that takes the python source code, examines the decorators pre/post/inv (or whatever we call them) and transforms them back and forth from/to valid python code? > > Pretty much any IDE has after load / before save handlers. When you load a python file, you'd transform it from less readable python code, write using a concise form and when you save it, it's transformed back to valid python. > > Then we need to pick the python form that is easiest to implement (and still readable enough for, say, code reviews on github), but writing and reading the contracts in the code would be much more pleasant. > > As long as the "readable" form has also valid python syntax, the tool can be implemented with ast module. > > For example: > @snapshot(some_identifier=self.another_property + 10) > @post(self.some_property => old.some_identifier > 100) > > would transform into > @snapshot(lambda P, some_identifier: P.self.another_property + 10) > @post(lambda O, P: not self.some_property and O.some_identifier > 100) > > Cheers, > Marko > >> On Fri, 28 Sep 2018 at 03:49, Marko Ristin-Kaufmann wrote: >> Hi James, >> >> I still have a feeling that most developers would like to store the state in many different custom ways. I see also thunk and snapshot with wrapper objects to be much more complicated to implement and maintain; I'm thinking mostly about all the edge cases which we would not be able to cover (and how complex that would be to cover them). Then the linters need also to work around such wrappers... It might also scare users off since it looks like too much magic. Another concern I also have is that it's probably very hard to integrate these wrappers with mypy later -- but I don't really have a clue about that, only my gut feeling? >> >> What about we accepted to repeat "lambda P, " prefix, and have something like this: >> >> @snapshot( >> lambda P, some_name: len(P.some_property), >> lambda P, another_name: hash(P.another_property) >> ) >> >> It's not too verbose for me and you can still explain in three-four sentences what happens below the hub in the library's docs. A pycharm/pydev/vim/emacs plugins could hide the verbose parts. >> >> I performed a small experiment to test how this solution plays with pylint and it seems OK that arguments are not used in lambdas. >> >> Cheers, >> Marko >> >> >>> On Thu, 27 Sep 2018 at 12:27, James Lu wrote: >>> Why couldn?t we record the operations done to a special object and replay them? >>> >>>>>> Actually, I think there is probably no way around a decorator that captures/snapshots the data before the function call with a lambda (or even a separate function). "Old" construct, if we are to parse it somehow from the condition function, would limit us only to shallow copies (and be complex to implement as soon as we are capturing out-of-argument values such as globals etc.). Moreove, what if we don't need shallow copies? I could imagine a dozen of cases where shallow copy is not what the programmer wants: for example, s/he might need to make deep copies, hash or otherwise transform the input data to hold only part of it instead of copying (e.g., so as to allow equality check without a double copy of the data, or capture only the value of certain property transformed in some way). 
>>> >>> >>> from icontract import snapshot, P, thunk >>> @snapshot(some_identifier=P.self.some_method(P.some_argument.some_attr)) >>> >>> P is an object of our own type, let?s call the type MockP. MockP returns new MockP objects when any operation is done to it. MockP * MockP = MockP. MockP.attr = MockP. MockP objects remember all the operations done to them, and allow the owner of a MockP object to re-apply the same operations >>> >>> ?thunk? converts a function or object or class to a MockP object, storing the function or object for when the operation is done. >>> >>> thunk(function)() >>> >>> Of course, you could also thunk objects like so: thunk(3) * P.number. (Though it might be better to keep the 3 after P.number in this case so P.number?s __mult__ would be invoked before 3?s __mult__ is invokes. >>> >>> >>> In most cases, you?d save any operations that can be done on a copy of the data as generated by @snapshot in @postcondiion. thunk is for rare scenarios where 1) it?s hard to capture the state, for example an object that manages network state (or database connectivity etc) and whose stage can only be read by an external classmethod 2) you want to avoid using copy.deepcopy. >>> >>> I?m sure there?s some way to override isinstance through a meta class or dunder subclasshook. >>> >>> I suppose this mocking method could be a shorthand for when you don?t need the full power of a lambda. It?s arguably more succinct and readable, though YMMV. >>> >>> I look forward to reading your opinion on this and any ideas you might have. >>> >>>> On Sep 26, 2018, at 3:56 PM, James Lu wrote: >>>> >>>> Hi Marko, >>>> >>>>> Actually, following on #A4, you could also write those as multiple decorators: >>>>> @snpashot(lambda _, some_identifier: some_func(_, some_argument.some_attr) >>>>> @snpashot(lambda _, other_identifier: other_func(_.self)) >>>> >>>> Yes, though if we?re talking syntax using kwargs would probably be better. >>>> Using ?P? instead of ?_?: (I agree that _ smells of ignored arguments) >>>> >>>> @snapshot(some_identifier=lambda P: ..., some_identifier2=lambda P: ...) >>>> >>>> Kwargs has the advantage that you can extend multiple lines without repeating @snapshot, though many lines of @capture would probably be more intuitive since each decorator captures one variable. >>>> >>>>> Why uppercase "P" and not lowercase (uppercase implies a constant for me)? >>>> >>>> To me, the capital letters are more prominent and explicit- easier to see when reading code. It also implies its a constant for you- you shouldn?t be modifying it, because then you?d be interfering with the function itself. >>>> >>>> Side node: maybe it would be good to have an @icontract.nomutate (probably use a different name, maybe @icontract.readonly) that makes sure a method doesn?t mutate its own __dict__ (and maybe the __dict__ of the members of its __dict__). It wouldn?t be necessary to put the decorator on every read only function, just the ones your worried might mutate. >>>> >>>> Maybe a @icontract.nomutate(param=?paramname?) that ensures the __dict__ of all members of the param name have the same equality or identity before and after. The semantics would need to be worked out. 
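(To make that side note concrete: a hypothetical nomutate could be as crude as comparing a deep copy of self.__dict__ taken before the call against the state after the call. This is a sketch only, under the assumption that deep equality is the check we want; the name and the exact semantics are, as said, not worked out.)

    import copy
    import functools

    def nomutate(method):
        # Hypothetical sketch: fail if the method changed self.__dict__,
        # including mutations of the members stored in it.
        @functools.wraps(method)
        def wrapper(self, *args, **kwargs):
            before = copy.deepcopy(self.__dict__)
            result = method(self, *args, **kwargs)
            assert self.__dict__ == before, "%s mutated self" % method.__name__
            return result
        return wrapper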
>>>> >>>>> On Sep 26, 2018, at 8:58 AM, Marko Ristin-Kaufmann wrote: >>>>> >>>>> Hi James, >>>>> >>>>> Actually, following on #A4, you could also write those as multiple decorators: >>>>> @snpashot(lambda _, some_identifier: some_func(_, some_argument.some_attr) >>>>> @snpashot(lambda _, other_identifier: other_func(_.self)) >>>>> >>>>> Am I correct? >>>>> >>>>> "_" looks a bit hard to read for me (implying ignored arguments). >>>>> >>>>> Why uppercase "P" and not lowercase (uppercase implies a constant for me)? Then "O" for "old" and "P" for parameters in a condition: >>>>> @post(lambda O, P: ...) >>>>> ? >>>>> >>>>> It also has the nice property that it follows both the temporal and the alphabet order :) >>>>> >>>>>> On Wed, 26 Sep 2018 at 14:30, James Lu wrote: >>>>>> I still prefer snapshot, though capture is a good name too. We could use generator syntax and inspect the argument names. >>>>>> >>>>>> Instead of ?a?, perhaps use ?_?. Or maybe use ?A.?, for arguments. Some people might prefer ?P? for parameters, since parameters sometimes means the value received while the argument means the value passed. >>>>>> >>>>>> (#A1) >>>>>> >>>>>> from icontract import snapshot, __ >>>>>> @snapshot(some_func(_.some_argument.some_attr) for some_identifier, _ in __) >>>>>> >>>>>> Or (#A2) >>>>>> >>>>>> @snapshot(some_func(some_argument.some_attr) for some_identifier, _, some_argument in __) >>>>>> >>>>>> ? >>>>>> Or (#A3) >>>>>> >>>>>> @snapshot(lambda some_argument,_,some_identifier: some_func(some_argument.some_attr)) >>>>>> >>>>>> Or (#A4) >>>>>> >>>>>> @snapshot(lambda _,some_identifier: some_func(_.some_argument.some_attr)) >>>>>> @snapshot(lambda _,some_identifier, other_identifier: some_func(_.some_argument.some_attr), other_func(_.self)) >>>>>> >>>>>> I like #A4 the most because it?s fairly DRY and avoids the extra punctuation of >>>>>> >>>>>> @capture(lambda a: {"some_identifier": some_func(a.some_argument.some_attr)}) >>>>>> >>>>>> On Sep 26, 2018, at 12:23 AM, Marko Ristin-Kaufmann wrote: >>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> Franklin wrote: >>>>>>>> The name "before" is a confusing name. It's not just something that >>>>>>>> happens before. It's really a pre-`let`, adding names to the scope of >>>>>>>> things after it, but with values taken before the function call. Based >>>>>>>> on that description, other possible names are `prelet`, `letbefore`, >>>>>>>> `predef`, `defpre`, `beforescope`. Better a name that is clearly >>>>>>>> confusing than one that is obvious but misleading. >>>>>>> >>>>>>> James wrote: >>>>>>>> I suggest that instead of ?@before? it?s ?@snapshot? and instead of ?old? it?s ?snapshot?. >>>>>>> >>>>>>> >>>>>>> I like "snapshot", it's a bit clearer than prefixing/postfixing verbs with "pre" which might be misread (e.g., "prelet" has a meaning in Slavic languages and could be subconsciously misread, "predef" implies to me a pre-definition rather than prior-to-definition , "beforescope" is very clear for me, but it might be confusing for others as to what it actually refers to ). What about "@capture" (7 letters for captures versus 8 for snapshot)? I suppose "@let" would be playing with fire if Python with conflicting new keywords since I assume "let" to be one of the candidates. >>>>>>> >>>>>>> Actually, I think there is probably no way around a decorator that captures/snapshots the data before the function call with a lambda (or even a separate function). 
"Old" construct, if we are to parse it somehow from the condition function, would limit us only to shallow copies (and be complex to implement as soon as we are capturing out-of-argument values such as globals etc.). Moreove, what if we don't need shallow copies? I could imagine a dozen of cases where shallow copy is not what the programmer wants: for example, s/he might need to make deep copies, hash or otherwise transform the input data to hold only part of it instead of copying (e.g., so as to allow equality check without a double copy of the data, or capture only the value of certain property transformed in some way). >>>>>>> >>>>>>> I'd still go with the dictionary to allow for this extra freedom. We could have a convention: "a" denotes to the current arguments, and "b" denotes the captured values. It might make an interesting hint that we put "b" before "a" in the condition. You could also interpret "b" as "before" and "a" as "after", but also "a" as "arguments". >>>>>>> >>>>>>> @capture(lambda a: {"some_identifier": some_func(a.some_argument.some_attr)}) >>>>>>> @post(lambda b, a, result: b.some_identifier > result + a.another_argument.another_attr) >>>>>>> def some_func(some_argument: SomeClass, another_argument: AnotherClass) -> SomeResult: >>>>>>> ... >>>>>>> "b" can be omitted if it is not used. Under the hub, all the arguments to the condition would be passed by keywords. >>>>>>> >>>>>>> In case of inheritance, captures would be inherited as well. Hence the library would check at run-time that the returned dictionary with captured values has no identifier that has been already captured, and the linter checks that statically, before running the code. Reading values captured in the parent at the code of the child class might be a bit hard -- but that is case with any inherited methods/properties. In documentation, I'd list all the captures of both ancestor and the current class. >>>>>>> >>>>>>> I'm looking forward to reading your opinion on this and alternative suggestions :) >>>>>>> Marko >>>>>>> >>>>>>>> On Tue, 25 Sep 2018 at 18:12, Franklin? Lee wrote: >>>>>>>> On Sun, Sep 23, 2018 at 2:05 AM Marko Ristin-Kaufmann >>>>>>>> wrote: >>>>>>>> > >>>>>>>> > Hi, >>>>>>>> > >>>>>>>> > (I'd like to fork from a previous thread, "Pre-conditions and post-conditions", since it got long and we started discussing a couple of different things. Let's discuss in this thread the implementation of a library for design-by-contract and how to push it forward to hopefully add it to the standard library one day.) >>>>>>>> > >>>>>>>> > For those unfamiliar with contracts and current state of the discussion in the previous thread, here's a short summary. The discussion started by me inquiring about the possibility to add design-by-contract concepts into the core language. The idea was rejected by the participants mainly because they thought that the merit of the feature does not merit its costs. This is quite debatable and seems to reflect many a discussion about design-by-contract in general. Please see the other thread, "Why is design-by-contract not widely adopted?" if you are interested in that debate. >>>>>>>> > >>>>>>>> > We (a colleague of mine and I) decided to implement a library to bring design-by-contract to Python since we don't believe that the concept will make it into the core language anytime soon and we needed badly a tool to facilitate our work with a growing code base. >>>>>>>> > >>>>>>>> > The library is available at http://github.com/Parquery/icontract. 
The hope is to polish it so that the wider community could use it and once the quality is high enough, make a proposal to add it to the standard Python libraries. We do need a standard library for contracts, otherwise projects with conflicting contract libraries can not integrate (e.g., the contracts can not be inherited between two different contract libraries). >>>>>>>> > >>>>>>>> > So far, the most important bits have been implemented in icontract: >>>>>>>> > >>>>>>>> > Preconditions, postconditions, class invariants >>>>>>>> > Inheritance of the contracts (including strengthening and weakening of the inherited contracts) >>>>>>>> > Informative violation messages (including information about the values involved in the contract condition) >>>>>>>> > Sphinx extension to include contracts in the automatically generated documentation (sphinx-icontract) >>>>>>>> > Linter to statically check that the arguments of the conditions are correct (pyicontract-lint) >>>>>>>> > >>>>>>>> > We are successfully using it in our code base and have been quite happy about the implementation so far. >>>>>>>> > >>>>>>>> > There is one bit still missing: accessing "old" values in the postcondition (i.e., shallow copies of the values prior to the execution of the function). This feature is necessary in order to allow us to verify state transitions. >>>>>>>> > >>>>>>>> > For example, consider a new dictionary class that has "get" and "put" methods: >>>>>>>> > >>>>>>>> > from typing import Optional >>>>>>>> > >>>>>>>> > from icontract import post >>>>>>>> > >>>>>>>> > class NovelDict: >>>>>>>> > def length(self)->int: >>>>>>>> > ... >>>>>>>> > >>>>>>>> > def get(self, key: str) -> Optional[str]: >>>>>>>> > ... >>>>>>>> > >>>>>>>> > @post(lambda self, key, value: self.get(key) == value) >>>>>>>> > @post(lambda self, key: old(self.get(key)) is None and old(self.length()) + 1 == self.length(), >>>>>>>> > "length increased with a new key") >>>>>>>> > @post(lambda self, key: old(self.get(key)) is not None and old(self.length()) == self.length(), >>>>>>>> > "length stable with an existing key") >>>>>>>> > def put(self, key: str, value: str) -> None: >>>>>>>> > ... >>>>>>>> > >>>>>>>> > How could we possible implement this "old" function? >>>>>>>> > >>>>>>>> > Here is my suggestion. I'd introduce a decorator "before" that would allow you to store whatever values in a dictionary object "old" (i.e. an object whose properties correspond to the key/value pairs). The "old" is then passed to the condition. Here is it in code: >>>>>>>> > >>>>>>>> > # omitted contracts for brevity >>>>>>>> > class NovelDict: >>>>>>>> > def length(self)->int: >>>>>>>> > ... >>>>>>>> > >>>>>>>> > # omitted contracts for brevity >>>>>>>> > def get(self, key: str) -> Optional[str]: >>>>>>>> > ... >>>>>>>> > >>>>>>>> > @before(lambda self, key: {"length": self.length(), "get": self.get(key)}) >>>>>>>> > @post(lambda self, key, value: self.get(key) == value) >>>>>>>> > @post(lambda self, key, old: old.get is None and old.length + 1 == self.length(), >>>>>>>> > "length increased with a new key") >>>>>>>> > @post(lambda self, key, old: old.get is not None and old.length == self.length(), >>>>>>>> > "length stable with an existing key") >>>>>>>> > def put(self, key: str, value: str) -> None: >>>>>>>> > ... >>>>>>>> > >>>>>>>> > The linter would statically check that all attributes accessed in "old" have to be defined in the decorator "before" so that attribute errors would be caught early. 
The current implementation of the linter is fast enough to be run at save time so such errors should usually not happen with a properly set IDE. >>>>>>>> > >>>>>>>> > "before" decorator would also have "enabled" property, so that you can turn it off (e.g., if you only want to run a postcondition in testing). The "before" decorators can be stacked so that you can also have a more fine-grained control when each one of them is running (some during test, some during test and in production). The linter would enforce that before's "enabled" is a disjunction of all the "enabled"'s of the corresponding postconditions where the old value appears. >>>>>>>> > >>>>>>>> > Is this a sane approach to "old" values? Any alternative approach you would prefer? What about better naming? Is "before" a confusing name? >>>>>>>> >>>>>>>> The dict can be splatted into the postconditions, so that no special >>>>>>>> name is required. This would require either that the lambdas handle >>>>>>>> **kws, or that their caller inspect them to see what names they take. >>>>>>>> Perhaps add a function to functools which only passes kwargs that fit. >>>>>>>> Then the precondition mechanism can pass `self`, `key`, and `value` as >>>>>>>> kwargs instead of args. >>>>>>>> >>>>>>>> For functions that have *args and **kwargs, it may be necessary to >>>>>>>> pass them to the conditions as args and kwargs instead. >>>>>>>> >>>>>>>> The name "before" is a confusing name. It's not just something that >>>>>>>> happens before. It's really a pre-`let`, adding names to the scope of >>>>>>>> things after it, but with values taken before the function call. Based >>>>>>>> on that description, other possible names are `prelet`, `letbefore`, >>>>>>>> `predef`, `defpre`, `beforescope`. Better a name that is clearly >>>>>>>> confusing than one that is obvious but misleading. >>>>>>>> >>>>>>>> By the way, should the first postcondition be `self.get(key) is >>>>>>>> value`, checking for identity rather than equality? >>>>>>> _______________________________________________ >>>>>>> Python-ideas mailing list >>>>>>> Python-ideas at python.org >>>>>>> https://mail.python.org/mailman/listinfo/python-ideas >>>>>>> Code of Conduct: http://python.org/psf/codeofconduct/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jamtlu at gmail.com Fri Sep 28 19:04:32 2018 From: jamtlu at gmail.com (James Lu) Date: Fri, 28 Sep 2018 19:04:32 -0400 Subject: [Python-ideas] Exception handling in contracts Message-ID: Let?s get some ideas for how icontract can say ?it should throw an exception if this happens.? From jamtlu at gmail.com Fri Sep 28 19:16:18 2018 From: jamtlu at gmail.com (James Lu) Date: Fri, 28 Sep 2018 19:16:18 -0400 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: <5BAC807A.2070509@canterbury.ac.nz> Message-ID: Many editors highlight decorators in a different color that makes it easier to ignore and can also fold decorators. Contracts can also sometimes actively improve the flow of code. I personally find a formal contract easier to read than informal documentation. 
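For instance, compare the usual docstring sentence ("requests must not be empty; one status is returned per request") with a sketch in the icontract lambda style, where argument names in the condition are matched against the function's arguments (the example function and its names are made up purely for illustration):

    from icontract import pre, post

    @pre(lambda requests: len(requests) > 0)
    @post(lambda result, requests: len(result) == len(requests))
    def batch_status(requests: list) -> list:
        return ["ok" for _ in requests]
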
It also reduces the times where you need to spend time figuring out if documentation actually accurate and up to date From steve at pearwood.info Fri Sep 28 19:39:08 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Sat, 29 Sep 2018 09:39:08 +1000 Subject: [Python-ideas] Why is design-by-contracts not widely In-Reply-To: <20180928191854.Horde.xmUWrqDGOX18KB2KDkxRZbz@webmail.your-server.de> References: <20180928191854.Horde.xmUWrqDGOX18KB2KDkxRZbz@webmail.your-server.de> Message-ID: <20180928233908.GO19437@ando.pearwood.info> On Fri, Sep 28, 2018 at 07:18:54PM +0200, 2qdxy4rzwzuuilue at potatochowder.com wrote: > > On 9/28/18 12:45 PM, Steven D'Aprano wrote: > >On Tue, Sep 25, 2018 at 09:59:53PM +1000, Hugh Fisher wrote: > > > >>C and Python (currently) are known as simple languages. > > > >o_O > > > >That's a usage of "simple" I haven't come across before. Especially in > >the case of C, which is a minefield of *intentionally* underspecified > >behaviour which makes it near to impossible for the developer to tell > >what a piece of syntactically legal C code will actually do in practice. > > s/C/Python/ > > s/underspecified/dynamic/ > > ;-) I see the wink, but I don't see the relevance. Are you agreeing with me or disagreeing? Python is "simple" in the sense that the execution model is *relatively* simple, but its not a minimalist language by any definition. And as you say, the execution model is dynamic: we can't be sure what legal code will do until you know the runtime state. (Although we can often guess, based on assumptions about sensible, non-weird objects that don't do weird things.) But none of that compares to C undefined behaviour. People who think that they are equivalent, don't understand C undefined behaviour. https://blog.regehr.org/archives/213 http://blog.llvm.org/2011/05/what-every-c-programmer-should-know_14.html -- Steve From hugo.fisher at gmail.com Fri Sep 28 19:50:27 2018 From: hugo.fisher at gmail.com (Hugh Fisher) Date: Sat, 29 Sep 2018 09:50:27 +1000 Subject: [Python-ideas] Simplicity of C (was why is design-by-contracts not widely) In-Reply-To: References: Message-ID: > Date: Sat, 29 Sep 2018 02:45:06 +1000 > From: Steven D'Aprano > To: python-ideas at python.org > Subject: Re: [Python-ideas] Why is design-by-contracts not widely > Message-ID: <20180928164506.GN19437 at ando.pearwood.info> > Content-Type: text/plain; charset=us-ascii > > On Tue, Sep 25, 2018 at 09:59:53PM +1000, Hugh Fisher wrote: > > > C and Python (currently) are known as simple languages. > > o_O > > That's a usage of "simple" I haven't come across before. Especially in > the case of C, which is a minefield of *intentionally* underspecified > behaviour which makes it near to impossible for the developer to tell > what a piece of syntactically legal C code will actually do in practice. Oh FFS. You couldn't make the effort to read the very next sentence, let alone the next paragraph, before responding? -- cheers, Hugh Fisher From rosuav at gmail.com Fri Sep 28 20:12:27 2018 From: rosuav at gmail.com (Chris Angelico) Date: Sat, 29 Sep 2018 10:12:27 +1000 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: <5BAE9AA2.2020406@canterbury.ac.nz> References: <5BAC807A.2070509@canterbury.ac.nz> <5BAE9AA2.2020406@canterbury.ac.nz> Message-ID: On Sat, Sep 29, 2018 at 7:19 AM Greg Ewing wrote: > > Chris Angelico wrote: > > It is still fundamentally difficult to make assertions about the file > > system as pre/post contracts. 
> > When you consider race conditions, I'd say it's impossible. > A postcondition check involving the file system could fail > even if the function does its job perfectly. I guess that depends on the meaning of "contract". If it means "there is a guarantee that this is true after this function returns", then yes, the race condition means it's impossible. But as a part of the function's declared intent, it's fine to have a postcondition of "the directory will be empty" even though something could drop a file into it. But if it's the latter... contracts are just a different way of defining unit tests.... and we're right back where we started. ChrisA From 2QdxY4RzWzUUiLuE at potatochowder.com Fri Sep 28 20:22:03 2018 From: 2QdxY4RzWzUUiLuE at potatochowder.com (Dan Sommers) Date: Fri, 28 Sep 2018 20:22:03 -0400 Subject: [Python-ideas] Why is design-by-contracts not widely In-Reply-To: <20180928233908.GO19437@ando.pearwood.info> References: <20180928191854.Horde.xmUWrqDGOX18KB2KDkxRZbz@webmail.your-server.de> <20180928233908.GO19437@ando.pearwood.info> Message-ID: <82c32dc7-6887-6706-67a2-14a5824875d5@potatochowder.com> On 9/28/18 7:39 PM, Steven D'Aprano wrote: > On Fri, Sep 28, 2018 at 07:18:54PM +0200, 2qdxy4rzwzuuilue at potatochowder.com wrote: >> >> On 9/28/18 12:45 PM, Steven D'Aprano wrote: >>> On Tue, Sep 25, 2018 at 09:59:53PM +1000, Hugh Fisher wrote: >>> >>>> C and Python (currently) are known as simple languages. >>> >>> o_O >>> >>> That's a usage of "simple" I haven't come across before. Especially in >>> the case of C, which is a minefield of *intentionally* underspecified >>> behaviour which makes it near to impossible for the developer to tell >>> what a piece of syntactically legal C code will actually do in practice. >> >> s/C/Python/ >> >> s/underspecified/dynamic/ >> >> ;-) > > I see the wink, but I don't see the relevance. Are you agreeing with me > or disagreeing? I agree that Hugh's use of "simple" is unfamiliar. I disagree that C is is a bigger offender than Python when it comes to a developer telling what a piece of syntactically legal code will actually do in practice. If that's not what you meant by "Especially in the case of C...," then I mis-interpreted or read too much into your wording. > Python is "simple" in the sense that the execution model is *relatively* > simple, but its not a minimalist language by any definition. And as you > say, the execution model is dynamic: we can't be sure what legal code > will do until you know the runtime state. That's my point: What you emphasized about C can be applied equally to Python. In C, it's because the standard is intentionally underspecified; in Python, it's y because the language is intentionally dynamic. When you said "underspecified," I didn't make the leap to "undefined behaviour" (although I think I know from past experience how you feel about C and its undefined behaviour). Instead, I jumped to things like the ambiguity in the size of an int, or the freedom the compiler has to pack/align struct values or implement integers as something other than two's complement. > (Although we can often guess, based on assumptions about sensible, > non-weird objects that don't do weird things.) Again, the same is true of C. In Python, weird objects might override getattr; in C weird objects might point to hardware registers, or depend on implementation specific detail(s). > But none of that compares to C undefined behaviour. People who think > that they are equivalent, don't understand C undefined behaviour. 
Well, yes: Some syntactically legal C results in nasal demons, and some of that code is harder to spot than others. AFAIK, syntactically legal Python can only do that if the underlying C code invokes undefined behaviour. From greg.ewing at canterbury.ac.nz Fri Sep 28 20:30:45 2018 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sat, 29 Sep 2018 12:30:45 +1200 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: <5BAC807A.2070509@canterbury.ac.nz> <5BAE9AA2.2020406@canterbury.ac.nz> Message-ID: <5BAEC7B5.2000108@canterbury.ac.nz> Chris Angelico wrote: > But as a part of the > function's declared intent, it's fine to have a postcondition of "the > directory will be empty" even though something could drop a file into > it. If you only intend the contract to serve as documentation, I suppose that's okay, but it means you can't turn on runtime checking of contracts, otherwise your program could suffer spurious failures. -- Greg From rosuav at gmail.com Fri Sep 28 20:32:16 2018 From: rosuav at gmail.com (Chris Angelico) Date: Sat, 29 Sep 2018 10:32:16 +1000 Subject: [Python-ideas] Why is design-by-contracts not widely In-Reply-To: <82c32dc7-6887-6706-67a2-14a5824875d5@potatochowder.com> References: <20180928191854.Horde.xmUWrqDGOX18KB2KDkxRZbz@webmail.your-server.de> <20180928233908.GO19437@ando.pearwood.info> <82c32dc7-6887-6706-67a2-14a5824875d5@potatochowder.com> Message-ID: On Sat, Sep 29, 2018 at 10:22 AM Dan Sommers <2QdxY4RzWzUUiLuE at potatochowder.com> wrote: > > On 9/28/18 7:39 PM, Steven D'Aprano wrote: > > But none of that compares to C undefined behaviour. People who think > > that they are equivalent, don't understand C undefined behaviour. > > Well, yes: Some syntactically legal C results in nasal demons, and some > of that code is harder to spot than others. AFAIK, syntactically legal > Python can only do that if the underlying C code invokes undefined > behaviour. What should happen here? >>> import ctypes >>> ctypes.cast(id(1), ctypes.POINTER(ctypes.c_int))[6] = 0 >>> 1 Nothing here invokes C's undefined behaviour. Or what about here: >>> import sys; sys.setrecursionlimit(2147483647) >>> def f(): f() ... >>> f() Python has its own set of "well don't do that then" situations. In fact, I would say that *most* languages do. ChrisA From 2QdxY4RzWzUUiLuE at potatochowder.com Fri Sep 28 20:36:01 2018 From: 2QdxY4RzWzUUiLuE at potatochowder.com (Dan Sommers) Date: Fri, 28 Sep 2018 20:36:01 -0400 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: <5BAC807A.2070509@canterbury.ac.nz> <5BAE9AA2.2020406@canterbury.ac.nz> Message-ID: On 9/28/18 8:12 PM, Chris Angelico wrote: > On Sat, Sep 29, 2018 at 7:19 AM Greg Ewing wrote: >> >> Chris Angelico wrote: >>> It is still fundamentally difficult to make assertions about the file >>> system as pre/post contracts. >> >> When you consider race conditions, I'd say it's impossible. >> A postcondition check involving the file system could fail >> even if the function does its job perfectly. > > I guess that depends on the meaning of "contract". If it means "there > is a guarantee that this is true after this function returns", then > yes, the race condition means it's impossible. But as a part of the > function's declared intent, it's fine to have a postcondition of "the > directory will be empty" even though something could drop a file into > it. > > But if it's the latter... contracts are just a different way of > defining unit tests.... 
and we're right back where we started. At the risk of pedantry (on a Python list? I'm *shocked*): I call BS on any contract that requires on entry or guarantees on exit the state of the file system. At best, a function can guarantee that it will make (or made) a request to the OS, and that the OS returned "success" before the function continued. Then again, a function that guarantees to work in a particular way based on some condition of the file system would be okay. For example, a function might claim to create a temporary file with some content *if* some directory exists when the function tries to create the temporary file. But as I think both of you are claiming, the best that function can guarantee on exit is that it asked the OS to write to the file, and that the OS agreed to do so. The function cannot guarantee that the file or the content still exists when the function finally returns. From 2QdxY4RzWzUUiLuE at potatochowder.com Fri Sep 28 20:48:29 2018 From: 2QdxY4RzWzUUiLuE at potatochowder.com (Dan Sommers) Date: Fri, 28 Sep 2018 20:48:29 -0400 Subject: [Python-ideas] Why is design-by-contracts not widely In-Reply-To: References: <20180928191854.Horde.xmUWrqDGOX18KB2KDkxRZbz@webmail.your-server.de> <20180928233908.GO19437@ando.pearwood.info> <82c32dc7-6887-6706-67a2-14a5824875d5@potatochowder.com> Message-ID: On 9/28/18 8:32 PM, Chris Angelico wrote: > On Sat, Sep 29, 2018 at 10:22 AM Dan Sommers > <2QdxY4RzWzUUiLuE at potatochowder.com> wrote: >> >> On 9/28/18 7:39 PM, Steven D'Aprano wrote: >>> But none of that compares to C undefined behaviour. People who think >>> that they are equivalent, don't understand C undefined behaviour. >> >> Well, yes: Some syntactically legal C results in nasal demons, and some >> of that code is harder to spot than others. AFAIK, syntactically legal >> Python can only do that if the underlying C code invokes undefined >> behaviour. > > What should happen here? [examples of what Steven would call non-sensible, non-non-weird objects doing non-non-weird things snipped] AFAIK, "AFAIK" is a weasel word: It allows me to proclaim my own ignorance without providing further examples, evidence, or counter examples. :-) > Python has its own set of "well don't do that then" situations. In > fact, I would say that *most* languages do. Yep. From steve at pearwood.info Fri Sep 28 23:47:22 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Sat, 29 Sep 2018 13:47:22 +1000 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: <20180929034722.GQ19437@ando.pearwood.info> On Sun, Sep 23, 2018 at 07:09:37AM +0200, Marko Ristin-Kaufmann wrote: > After the discussion we had on the list and after browsing the internet a > bit, I'm still puzzled why design-by-contract was not more widely adopted > and why so few languages support it. [...] > *. *After properly reading about design-by-contract and getting deeper into > the topic, there is no rational argument against it and the benefits are > obvious. And still, people just wave their hand and continue without > formalizing the contracts in the code and keep on writing them in the > descriptions. > > * Why is that so? [...] You are asking a question about human psychology but expecting logical, technical answers. I think that's doomed to failure. There is no nice way to say this, because it isn't nice. Programmers and language designers don't use DbC because it is new and different and therefore scary and wrong. 
Programmers, as a class, are lazy (they even call laziness a virtue), deeply conservative, superstitious, and don't like change. They do what they do because they're used to it, not because it is the technically correct thing to do, unless it is by accident. (Before people's hackles raise too much, I'm speaking in generalities, not about you personally. Of course you are one of the 1% awesomely logical, technically correct programmers who know what you are doing and do it for the right reasons after careful analysis. I'm talking about the other 99%, you know the ones. You probably work with them. You've certainly read their answers on Stackoverflow or The Daily WTF and a million blogs.) They won't use it until there is a critical mass of people using it, and then it will suddenly flip from "that weird shit Eiffel does" to "oh yeah, this is a standard technique that everyone uses, although we don't make a big deal about it". Every innovation in programming goes through this. Whether the innovation goes mainstream or not depends on getting a critical mass, and that is all about psychology and network effects and nothing to do with the merit of the idea. Remember the wars over structured programming? Probably not. In 2018, the idea that people would *seriously argue against writing subroutines* seems insane, but they did. As late as 1999, a former acquaintance of mine was working on a BASIC project for a manager who insisted they use GOTO and GOSUB in preference to subroutines. Testing: the idea that we should have *repeatable automated tests* took a long time to be accepted, and is still resisted by both developers and their managers. What's wrong with just sitting a person down in front of the program and checking for bugs by hand? We still sometimes have to fight for an automated test suite, never mind something like test driven development. ML-based languages have had type inference for decades, and yet people still think of type checking in terms of C and Java declarations. Or they still think in terms of static VERSUS dynamic typing, instead of static PLUS dynamic typing. I could go on, but I think I've made my point. I can give you some technical reasons why I don't use contracts in my own Python code, even though I want to: (1) Many of my programs are too small to bother. If I'm writing a quick script, I don't even write tests. Sometimes "be lazy" is the right answer, when the cost of bugs is small enough and the effort to avoid them is greater. And that's fine. Nobody says that contracts must be mandatory. (2) Python doesn't make it easy to write contracts. None of the solutions I've seen are nice. Ironically, the least worst I've seen is a quick and dirty metaclass solution given by Guido in an essay way back in Python 1.5 days: https://www.python.org/doc/essays/metaclasses/ His solution relies only on a naming convention, no decorators, no lambdas: class C(Eiffel): def method(self, arg): return whatever def method_pre(self, arg): assert arg > 0 def method_post(self, Result, arg): assert Result > arg Still not pretty, but at least we get some block structure instead of a wall of decorators. Syntax matters. Without good syntax that makes it easy to write contracts, it will never be anything but niche. (3) In a sense, *of course I write contracts*. Or at least precondition checks. I just don't call them that, and I embed them in the implementation of the method, and have no way to turn them off. 
Nearly all of us have done the same, we start with a method like this: class C: def method(self, alist, astring, afloat): # do some work... using nothing but naming conventions and an informal (and often vague) specification in the docstring, and while that is sometimes enough in small projects, the third time we get bitten we start adding defensive checks so we get sensible exceptions: def method(self, alist, astring, afloat): if not isinstance(alist, list): raise TypeError('expected a list') if alist == []: raise ValueError('list must not be empty') # and so on... These are pre-conditions! We just don't call them that. And we can't disable them. They're not easy to extract from the source code and turn into specifications. And so much boilerplate! Let's invent syntax to make it more obvious what is going on: def method(self, alist, astring, afloat): requires: isinstance(alist, list) alist != [] isinstance(astring, str) number_of_vowels(astring) > 0 isinstance(afloat, float) not math.isnan(afloat) implementation: # code goes here Its easy to distinguish the precondition checks from the implementation, easy to ignore the checks if you don't care about them, and easy for an external tool to analyse. What's not to like about contracts? You're already using them, just in an ad hoc, ugly, informal way. And I think that is probably the crux of the matter. Most people are lazy, and don't like having to do things in a systematic manner. For years, programmers resisted writing functions, because unstructured code was easier. We still resist writing documentation, because "its obvious from the source code" is easier. We resist writing even loose specifications, because NOT writing them is easier. We resist writing tests unless the project demands it. We resist running a linter or static checker over our code ("it runs, that means there are no errors"). Until peer pressure and pain makes us do so, then we love them and could not imagine going back to the bad old days before static analysis and linters and unit tests. -- Steve From mikhailwas at gmail.com Fri Sep 28 23:57:45 2018 From: mikhailwas at gmail.com (Mikhail V) Date: Sat, 29 Sep 2018 06:57:45 +0300 Subject: [Python-ideas] "while:" for the loop In-Reply-To: <23468.29167.517011.19676@turnbull.sk.tsukuba.ac.jp> References: <23468.29167.517011.19676@turnbull.sk.tsukuba.ac.jp> Message-ID: I put the list of related discussion here, just in case. Same suggestion: https://mail.python.org/pipermail/python-dev/2005-July/054914.html Idea for the "loop" keyword: https://mail.python.org/pipermail/python-ideas/2014-June/028202.html (followed by the same suggestion from @Random832: https://mail.python.org/pipermail/python-ideas/2014-June/028220.html) Somewhat related PEP (rejected) + discussion links inside: https://legacy.python.org/dev/peps/pep-0315/ (I've meditated a bit on this - but could not get what was actually the point of that idea.) Plus some other related discussions, maybe not directly related, but something along the lines might be interesting. 
An old and lively one: https://mail.python.org/pipermail/python-list/1999-April/002557.html https://mail.python.org/pipermail/python-list/1999-May/008535.html Another one: https://mail.python.org/pipermail/python-list/2000-December/029972.html ( Particularly this: https://mail.python.org/pipermail/python-list/2000-December/052694.html ) Yet another "1" vs "True" debate: https://mail.python.org/pipermail/python-list/2012-January/618649.html New era: https://mail.python.org/pipermail/python-list/2017-April/721182.html From steve at pearwood.info Sat Sep 29 00:06:24 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Sat, 29 Sep 2018 14:06:24 +1000 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: <5BAEC7B5.2000108@canterbury.ac.nz> References: <5BAC807A.2070509@canterbury.ac.nz> <5BAE9AA2.2020406@canterbury.ac.nz> <5BAEC7B5.2000108@canterbury.ac.nz> Message-ID: <20180929040624.GR19437@ando.pearwood.info> On Sat, Sep 29, 2018 at 12:30:45PM +1200, Greg Ewing wrote: > Chris Angelico wrote: > >But as a part of the > >function's declared intent, it's fine to have a postcondition of "the > >directory will be empty" even though something could drop a file into > >it. > > If you only intend the contract to serve as documentation, > I suppose that's okay, but it means you can't turn on runtime > checking of contracts, otherwise your program could suffer > spurious failures. If your code can cope with a particular file system state ("directory isn't empty") then you don't need to specify it as a precondition. If it can't cope, then it isn't a spurious failure, its a real failure. You either get an error when the condition is checked, up front, or you get an error in the middle of processing the directory. Of course for many things (especially file system operations) you need to be prepared to handle I/O errors *even if the precondition passed*. So what? That hasn't changed, and nobody said that contracts were a cure for Time Of Check To Time Of Use bugs. Contracts are a tool to help developers write better code, not a magic wand. You still need to write your code in a way which isn't vulnerable to TOCTTOU failures, contracts or not. Anticipating an objection: why bother with the precondition check if the code has to handle an error anyway? Because it is better to get an upfront error than an error halfway through processing. In general you get a better, more informative error message, closer to the place where it matters, if you fail fast. Yes yes yes, in theory you might have a precondition requirement which would fail up front but resolve itself before it matters: directory not empty start running your program precondition check fails ... later directory is emptied by another process your program actually needs to use the empty directory but do you really think it is good practice to design your code on the basis of that? Better to write your code conservatively: if the directory isn't empty up front, don't hope it will empty itself, fail fast, but be prepared to handle the error case as well. 
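In code, the conservative version might look roughly like this, with icontract-style decorators used purely as a sketch (the precondition gives an informative failure up front, while the body still has to cope with I/O errors and with files appearing between the check and the use):

    import os
    from icontract import pre

    @pre(lambda path: not os.listdir(path), "directory must be empty")
    def populate(path: str) -> None:
        # still be prepared for OSError and for races after the check
        with open(os.path.join(path, "data.txt"), "w") as handle:
            handle.write("example contents\n")
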
-- Steve From marko.ristin at gmail.com Sat Sep 29 01:36:13 2018 From: marko.ristin at gmail.com (Marko Ristin-Kaufmann) Date: Sat, 29 Sep 2018 07:36:13 +0200 Subject: [Python-ideas] "old" values in postconditions In-Reply-To: References: <13186373-6FB6-4C8B-A8D8-2C3E028CDC3D@gmail.com> <3C33B6FF-FC19-47D6-AD2A-FC0B17C50A8D@gmail.com> <0061278F-4243-42BD-945D-A93B4A0FC21D@gmail.com> Message-ID: Hi James, I'm a bit short on time today, and would need some more time and attention to understand the proposal you wrote. I'll try to come back to you tomorrow. In any case, I need to refactor icontract's decorators to use conditions like lambda P: and lambda P, result: first before adding snapshot functionality. What about having @snapshot_with and @snapshot? @Snapshot_with does what you propose and @snapshot expects a lambda P, identifier: ? After the refactoring, maybe the same could be done for defining contracts as well? (Requires and requires_that?) If the documentation is clear, I'd expect the user to be able to distinguish the two. The first approach is shorter, and uses magic, but fails in some rare situations. The other method is more verbose, but always works. Cheers, Marko Le sam. 29 sept. 2018 ? 00:35, James Lu a ?crit : > I am fine with your proposed syntax. It?s certainly lucid. Perhaps it > would be a good idea to get people accustomed to ?non-magic? syntax. > > I still have a feeling that most developers would like to store the state > in many different custom ways. > > Please explain. (Expressions like thunk(all)(a == b for a, b in > P.arg.meth()) would be valid.) > > I'm thinking mostly about all the edge cases which we would not be able to > cover (and how complex that would be to cover them). > > > Except for a > b > c being one flat expression with 5 members, it seems > fairly easy to recreate an AST, which can then be compiled down to a code > object. The code object can be fun with a custom ?locals()? > > Below is my concept code for such a P object. > > from ast import * > > # not done: enforce Singleton property on EmptySymbolType > > class EmptySymbolType(object): ... > > EmptySymbol = EmptySymbolType() # empty symbols are placeholders > > class MockP(object): > > # "^" is xor > > @icontract.pre(lambda symbol, astnode: (symbol is None) ^ (astnode is > None)) > > def __init__(self, symbol=None, value=EmptySymbol, astnode=None, > initsymtable=(,)): > > self.symtable = dict(initsymtable) > > if symbol: > > self.expr = Expr(value=Name(id=symbol, ctx=Load())) > > self.symtable = {symbol: value} > > else: > > self.expr = astnode > > self.frozen = False > > def __add__(self, other): > > wrapped = MockP.wrap_value(other) > > return MockP(astnode=Expr(value=BinOp(self.expr, Add(), > wrapped.expr), initsymtable={**self.symtable, **wrapped.symtable}) > > def compile(self): ... > > def freeze(self): > > # frozen objects wouldn?t have an overrided getattr, allowing for > icontract to manipulate the MockP object using its public interface > > self.frozen = True > > @classmethod > > def wrap_value(cls, obj): > > # create a MockP object from a value. Generate a random identifier > and set that as the key in symtable, the AST node is the name of that > identifier, retrieving its value through simple expression evaluation. > > ... > > > thunk = MockP.wrap_value > > P = MockP('P') > > # elsewhere: ensure P is only accessed via valid ?dot attribute access? > inside @snapshot so contracts fail early, or don?t and allow Magic like > __dict__ to occur on P. 
> > On Sep 27, 2018, at 9:49 PM, Marko Ristin-Kaufmann > wrote: > > Hi James, > > I still have a feeling that most developers would like to store the state > in many different custom ways. I see also thunk and snapshot with wrapper > objects to be much more complicated to implement and maintain; I'm thinking > mostly about all the edge cases which we would not be able to cover (and > how complex that would be to cover them). Then the linters need also to > work around such wrappers... It might also scare users off since it looks > like too much magic. Another concern I also have is that it's probably very > hard to integrate these wrappers with mypy later -- but I don't really have > a clue about that, only my gut feeling? > > What about we accepted to repeat "lambda P, " prefix, and have something > like this: > > @snapshot( > lambda P, some_name: len(P.some_property), > lambda P, another_name: hash(P.another_property) > ) > > It's not too verbose for me and you can still explain in three-four > sentences what happens below the hub in the library's docs. A > pycharm/pydev/vim/emacs plugins could hide the verbose parts. > > I performed a small experiment to test how this solution plays with pylint > and it seems OK that arguments are not used in lambdas. > > Cheers, > Marko > > > On Thu, 27 Sep 2018 at 12:27, James Lu wrote: > >> Why couldn?t we record the operations done to a special object and >> replay them? >> >> Actually, I think there is probably no way around a decorator that >>> captures/snapshots the data before the function call with a lambda (or even >>> a separate function). "Old" construct, if we are to parse it somehow from >>> the condition function, would limit us only to shallow copies (and be >>> complex to implement as soon as we are capturing out-of-argument values >>> such as globals *etc.)*. Moreove, what if we don't need shallow copies? >>> I could imagine a dozen of cases where shallow copy is not what the >>> programmer wants: for example, s/he might need to make deep copies, hash or >>> otherwise transform the input data to hold only part of it instead of >>> copying (*e.g., *so as to allow equality check without a double copy of >>> the data, or capture only the value of certain property transformed in some >>> way). >>> >>> >> from icontract import snapshot, P, thunk >> @snapshot(some_identifier=P.self.some_method(P.some_argument.some_attr)) >> >> P is an object of our own type, let?s call the type MockP. MockP returns >> new MockP objects when any operation is done to it. MockP * MockP = MockP. >> MockP.attr = MockP. MockP objects remember all the operations done to them, >> and allow the owner of a MockP object to re-apply the same operations >> >> ?thunk? converts a function or object or class to a MockP object, storing >> the function or object for when the operation is done. >> >> thunk(function)() >> >> Of course, you could also thunk objects like so: thunk(3) * P.number. >> (Though it might be better to keep the 3 after P.number in this case so >> P.number?s __mult__ would be invoked before 3?s __mult__ is invokes. >> >> >> In most cases, you?d save any operations that can be done on a copy of >> the data as generated by @snapshot in @postcondiion. thunk is for rare >> scenarios where 1) it?s hard to capture the state, for example an object >> that manages network state (or database connectivity etc) and whose stage >> can only be read by an external classmethod 2) you want to avoid using >> copy.deepcopy. 
>> >> I?m sure there?s some way to override isinstance through a meta class or >> dunder subclasshook. >> >> I suppose this mocking method could be a shorthand for when you don?t >> need the full power of a lambda. It?s arguably more succinct and readable, >> though YMMV. >> >> I look forward to reading your opinion on this and any ideas you might >> have. >> >> On Sep 26, 2018, at 3:56 PM, James Lu wrote: >> >> Hi Marko, >> >> Actually, following on #A4, you could also write those as multiple >> decorators: >> @snpashot(lambda _, some_identifier: some_func(_, some_argument.some_attr) >> @snpashot(lambda _, other_identifier: other_func(_.self)) >> >> Yes, though if we?re talking syntax using kwargs would probably be better. >> Using ?P? instead of ?_?: (I agree that _ smells of ignored arguments) >> >> @snapshot(some_identifier=lambda P: ..., some_identifier2=lambda P: ...) >> >> Kwargs has the advantage that you can extend multiple lines without >> repeating @snapshot, though many lines of @capture would probably be more >> intuitive since each decorator captures one variable. >> >> Why uppercase "P" and not lowercase (uppercase implies a constant for me)? >> >> To me, the capital letters are more prominent and explicit- easier to see >> when reading code. It also implies its a constant for you- you shouldn?t be >> modifying it, because then you?d be interfering with the function itself. >> >> Side node: maybe it would be good to have an @icontract.nomutate >> (probably use a different name, maybe @icontract.readonly) that makes sure >> a method doesn?t mutate its own __dict__ (and maybe the __dict__ of the >> members of its __dict__). It wouldn?t be necessary to put the decorator on >> every read only function, just the ones your worried might mutate. >> >> Maybe a @icontract.nomutate(param=?paramname?) that ensures the __dict__ >> of all members of the param name have the same equality or identity before >> and after. The semantics would need to be worked out. >> >> On Sep 26, 2018, at 8:58 AM, Marko Ristin-Kaufmann < >> marko.ristin at gmail.com> wrote: >> >> Hi James, >> >> Actually, following on #A4, you could also write those as multiple >> decorators: >> @snpashot(lambda _, some_identifier: some_func(_, some_argument.some_attr) >> @snpashot(lambda _, other_identifier: other_func(_.self)) >> >> Am I correct? >> >> "_" looks a bit hard to read for me (implying ignored arguments). >> >> Why uppercase "P" and not lowercase (uppercase implies a constant for >> me)? Then "O" for "old" and "P" for parameters in a condition: >> @post(lambda O, P: ...) >> ? >> >> It also has the nice property that it follows both the temporal and the >> alphabet order :) >> >> On Wed, 26 Sep 2018 at 14:30, James Lu wrote: >> >>> I still prefer snapshot, though capture is a good name too. We could use >>> generator syntax and inspect the argument names. >>> >>> Instead of ?a?, perhaps use ?_?. Or maybe use ?A.?, for arguments. Some >>> people might prefer ?P? for parameters, since parameters sometimes means >>> the value received while the argument means the value passed. >>> >>> (#A1) >>> >>> from icontract import snapshot, __ >>> @snapshot(some_func(_.some_argument.some_attr) for some_identifier, _ in >>> __) >>> >>> Or (#A2) >>> >>> @snapshot(some_func(some_argument.some_attr) for some_identifier, _, >>> some_argument in __) >>> >>> ? 
>>> Or (#A3) >>> >>> @snapshot(lambda some_argument,_,some_identifier: >>> some_func(some_argument.some_attr)) >>> >>> Or (#A4) >>> >>> @snapshot(lambda _,some_identifier: some_func(_.some_argument.some_attr)) >>> @snapshot(lambda _,some_identifier, other_identifier: >>> some_func(_.some_argument.some_attr), other_func(_.self)) >>> >>> I like #A4 the most because it?s fairly DRY and avoids the extra >>> punctuation of >>> >>> @capture(lambda a: {"some_identifier": some_func(a.some_argument.some_attr)}) >>> >>> >>> On Sep 26, 2018, at 12:23 AM, Marko Ristin-Kaufmann < >>> marko.ristin at gmail.com> wrote: >>> >>> Hi, >>> >>> Franklin wrote: >>> >>>> The name "before" is a confusing name. It's not just something that >>>> happens before. It's really a pre-`let`, adding names to the scope of >>>> things after it, but with values taken before the function call. Based >>>> on that description, other possible names are `prelet`, `letbefore`, >>>> `predef`, `defpre`, `beforescope`. Better a name that is clearly >>>> confusing than one that is obvious but misleading. >>> >>> >>> James wrote: >>> >>>> I suggest that instead of ?@before? it?s ?@snapshot? and instead of ? >>>> old? it?s ?snapshot?. >>> >>> >>> I like "snapshot", it's a bit clearer than prefixing/postfixing verbs >>> with "pre" which might be misread (*e.g., *"prelet" has a meaning in >>> Slavic languages and could be subconsciously misread, "predef" implies to >>> me a pre-*definition* rather than prior-to-definition , "beforescope" >>> is very clear for me, but it might be confusing for others as to what it >>> actually refers to ). What about "@capture" (7 letters for captures *versus >>> *8 for snapshot)? I suppose "@let" would be playing with fire if Python >>> with conflicting new keywords since I assume "let" to be one of the >>> candidates. >>> >>> Actually, I think there is probably no way around a decorator that >>> captures/snapshots the data before the function call with a lambda (or even >>> a separate function). "Old" construct, if we are to parse it somehow from >>> the condition function, would limit us only to shallow copies (and be >>> complex to implement as soon as we are capturing out-of-argument values >>> such as globals *etc.)*. Moreove, what if we don't need shallow copies? >>> I could imagine a dozen of cases where shallow copy is not what the >>> programmer wants: for example, s/he might need to make deep copies, hash or >>> otherwise transform the input data to hold only part of it instead of >>> copying (*e.g., *so as to allow equality check without a double copy of >>> the data, or capture only the value of certain property transformed in some >>> way). >>> >>> I'd still go with the dictionary to allow for this extra freedom. We >>> could have a convention: "a" denotes to the current arguments, and "b" >>> denotes the captured values. It might make an interesting hint that we put >>> "b" before "a" in the condition. You could also interpret "b" as "before" >>> and "a" as "after", but also "a" as "arguments". >>> >>> @capture(lambda a: {"some_identifier": some_func(a.some_argument.some_attr)}) >>> @post(lambda b, a, result: b.some_identifier > result + a.another_argument.another_attr) >>> def some_func(some_argument: SomeClass, another_argument: AnotherClass) -> SomeResult: >>> ... >>> >>> "b" can be omitted if it is not used. Under the hub, all the arguments >>> to the condition would be passed by keywords. >>> >>> In case of inheritance, captures would be inherited as well. 
Hence the >>> library would check at run-time that the returned dictionary with captured >>> values has no identifier that has been already captured, and the linter >>> checks that statically, before running the code. Reading values captured in >>> the parent at the code of the child class might be a bit hard -- but that >>> is case with any inherited methods/properties. In documentation, I'd list >>> all the captures of both ancestor and the current class. >>> >>> I'm looking forward to reading your opinion on this and alternative >>> suggestions :) >>> Marko >>> >>> On Tue, 25 Sep 2018 at 18:12, Franklin? Lee < >>> leewangzhong+python at gmail.com> wrote: >>> >>>> On Sun, Sep 23, 2018 at 2:05 AM Marko Ristin-Kaufmann >>>> wrote: >>>> > >>>> > Hi, >>>> > >>>> > (I'd like to fork from a previous thread, "Pre-conditions and >>>> post-conditions", since it got long and we started discussing a couple of >>>> different things. Let's discuss in this thread the implementation of a >>>> library for design-by-contract and how to push it forward to hopefully add >>>> it to the standard library one day.) >>>> > >>>> > For those unfamiliar with contracts and current state of the >>>> discussion in the previous thread, here's a short summary. The discussion >>>> started by me inquiring about the possibility to add design-by-contract >>>> concepts into the core language. The idea was rejected by the participants >>>> mainly because they thought that the merit of the feature does not merit >>>> its costs. This is quite debatable and seems to reflect many a discussion >>>> about design-by-contract in general. Please see the other thread, "Why is >>>> design-by-contract not widely adopted?" if you are interested in that >>>> debate. >>>> > >>>> > We (a colleague of mine and I) decided to implement a library to >>>> bring design-by-contract to Python since we don't believe that the concept >>>> will make it into the core language anytime soon and we needed badly a tool >>>> to facilitate our work with a growing code base. >>>> > >>>> > The library is available at http://github.com/Parquery/icontract. >>>> The hope is to polish it so that the wider community could use it and once >>>> the quality is high enough, make a proposal to add it to the standard >>>> Python libraries. We do need a standard library for contracts, otherwise >>>> projects with conflicting contract libraries can not integrate (e.g., the >>>> contracts can not be inherited between two different contract libraries). >>>> > >>>> > So far, the most important bits have been implemented in icontract: >>>> > >>>> > Preconditions, postconditions, class invariants >>>> > Inheritance of the contracts (including strengthening and weakening >>>> of the inherited contracts) >>>> > Informative violation messages (including information about the >>>> values involved in the contract condition) >>>> > Sphinx extension to include contracts in the automatically generated >>>> documentation (sphinx-icontract) >>>> > Linter to statically check that the arguments of the conditions are >>>> correct (pyicontract-lint) >>>> > >>>> > We are successfully using it in our code base and have been quite >>>> happy about the implementation so far. >>>> > >>>> > There is one bit still missing: accessing "old" values in the >>>> postcondition (i.e., shallow copies of the values prior to the execution of >>>> the function). This feature is necessary in order to allow us to verify >>>> state transitions. 
>>>> > >>>> > For example, consider a new dictionary class that has "get" and "put" >>>> methods: >>>> > >>>> > from typing import Optional >>>> > >>>> > from icontract import post >>>> > >>>> > class NovelDict: >>>> > def length(self)->int: >>>> > ... >>>> > >>>> > def get(self, key: str) -> Optional[str]: >>>> > ... >>>> > >>>> > @post(lambda self, key, value: self.get(key) == value) >>>> > @post(lambda self, key: old(self.get(key)) is None and >>>> old(self.length()) + 1 == self.length(), >>>> > "length increased with a new key") >>>> > @post(lambda self, key: old(self.get(key)) is not None and >>>> old(self.length()) == self.length(), >>>> > "length stable with an existing key") >>>> > def put(self, key: str, value: str) -> None: >>>> > ... >>>> > >>>> > How could we possible implement this "old" function? >>>> > >>>> > Here is my suggestion. I'd introduce a decorator "before" that would >>>> allow you to store whatever values in a dictionary object "old" (i.e. an >>>> object whose properties correspond to the key/value pairs). The "old" is >>>> then passed to the condition. Here is it in code: >>>> > >>>> > # omitted contracts for brevity >>>> > class NovelDict: >>>> > def length(self)->int: >>>> > ... >>>> > >>>> > # omitted contracts for brevity >>>> > def get(self, key: str) -> Optional[str]: >>>> > ... >>>> > >>>> > @before(lambda self, key: {"length": self.length(), "get": >>>> self.get(key)}) >>>> > @post(lambda self, key, value: self.get(key) == value) >>>> > @post(lambda self, key, old: old.get is None and old.length + 1 >>>> == self.length(), >>>> > "length increased with a new key") >>>> > @post(lambda self, key, old: old.get is not None and old.length >>>> == self.length(), >>>> > "length stable with an existing key") >>>> > def put(self, key: str, value: str) -> None: >>>> > ... >>>> > >>>> > The linter would statically check that all attributes accessed in >>>> "old" have to be defined in the decorator "before" so that attribute errors >>>> would be caught early. The current implementation of the linter is fast >>>> enough to be run at save time so such errors should usually not happen with >>>> a properly set IDE. >>>> > >>>> > "before" decorator would also have "enabled" property, so that you >>>> can turn it off (e.g., if you only want to run a postcondition in testing). >>>> The "before" decorators can be stacked so that you can also have a more >>>> fine-grained control when each one of them is running (some during test, >>>> some during test and in production). The linter would enforce that before's >>>> "enabled" is a disjunction of all the "enabled"'s of the corresponding >>>> postconditions where the old value appears. >>>> > >>>> > Is this a sane approach to "old" values? Any alternative approach you >>>> would prefer? What about better naming? Is "before" a confusing name? >>>> >>>> The dict can be splatted into the postconditions, so that no special >>>> name is required. This would require either that the lambdas handle >>>> **kws, or that their caller inspect them to see what names they take. >>>> Perhaps add a function to functools which only passes kwargs that fit. >>>> Then the precondition mechanism can pass `self`, `key`, and `value` as >>>> kwargs instead of args. >>>> >>>> For functions that have *args and **kwargs, it may be necessary to >>>> pass them to the conditions as args and kwargs instead. >>>> >>>> The name "before" is a confusing name. It's not just something that >>>> happens before. 
It's really a pre-`let`, adding names to the scope of >>>> things after it, but with values taken before the function call. Based >>>> on that description, other possible names are `prelet`, `letbefore`, >>>> `predef`, `defpre`, `beforescope`. Better a name that is clearly >>>> confusing than one that is obvious but misleading. >>>> >>>> By the way, should the first postcondition be `self.get(key) is >>>> value`, checking for identity rather than equality? >>>> >>> _______________________________________________ >>> Python-ideas mailing list >>> Python-ideas at python.org >>> https://mail.python.org/mailman/listinfo/python-ideas >>> Code of Conduct: http://python.org/psf/codeofconduct/ >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadgetsteve at live.co.uk Sat Sep 29 02:31:46 2018 From: gadgetsteve at live.co.uk (Steve Barnes) Date: Sat, 29 Sep 2018 06:31:46 +0000 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN Message-ID: One of the strengths of the IEEE float, (to set against its many weaknesses), is the presence of the magic value NaN. Not a Number, or NaA, is especially useful in that it is a valid value in any mathematical operation, (always returning NaN), or comparison, (always returning False). In functional programming this is especially useful as it allows the chain to complete after an error while retaining the fact that an error occurred, (as we got NaN). In languages such as C integers can only be used to represent a limited range of values in integers and a less limited range of values, (but still limited), with a limited accuracy. However, one of Pythons strengths is that its integers can represent any whole number value, (up to the maximum available memory and in exchange for slow performance when numbers get huge). This is accomplished by Python Integers being objects rather than a fixed number of bytes. I think that it should be relatively simple to extend the Python integer class to have a NaN flag, possibly by having a bit length of 0, and have it follow the same rules for the handling of floating point NaN, i.e. any mathematical operation on an iNaN returns an iNaN and any comparison with one returns False. One specific use case that springs to mind would be for Libraries such as Pandas to return iNaN for entries that are not numbers in a column that it has been told to treat as integers. We would possibly need a flag to set this behaviour, rather than raising an Exception, or at the very least automatically (or provide a method to) set LHS integers to iNaN on such an exception. I thought that I would throw this out to Python Ideas for some discussion of whether such a feature is: a) Desirable? b) Possible, (I am sure that it could be done)? c) Likely to get me kicked off of the list? -- Steve (Gadget) Barnes Any opinions in this message are my personal opinions and do not reflect those of my employer. --- This email has been checked for viruses by AVG. https://www.avg.com From njs at pobox.com Sat Sep 29 02:52:22 2018 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 28 Sep 2018 23:52:22 -0700 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: References: Message-ID: On Fri, Sep 28, 2018 at 11:31 PM, Steve Barnes wrote: > One specific use case that springs to mind would be for Libraries such > as Pandas to return iNaN for entries that are not numbers in a column > that it has been told to treat as integers. 
Pandas doesn't use Python objects to store integers, though; it uses an array of unboxed machine integers. In places where you can use Python objects to represent numbers, can't you just use float("nan") instead of iNaN? -n -- Nathaniel J. Smith -- https://vorpus.org From mike at selik.org Sat Sep 29 03:18:04 2018 From: mike at selik.org (Michael Selik) Date: Sat, 29 Sep 2018 00:18:04 -0700 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: References: Message-ID: On Fri, Sep 28, 2018 at 11:32 PM Steve Barnes wrote: > One of the strengths of the IEEE float, (to set against its many > weaknesses), is the presence of the magic value NaN. Not a Number, or > NaA, is especially useful in that it is a valid value in any > mathematical operation, (always returning NaN), or comparison, (always > returning False). In functional programming this is especially useful as > it allows the chain to complete after an error while retaining the fact > that an error occurred, (as we got NaN). The inventor of "null reference" called it a billion-dollar mistake [0]. I appreciate the Zen of Python's encouragement that "errors should never pass silently." Rather than returning iNaN, I'd prefer my program raise an exception. Besides, you can use a None if you'd like. [0] https://en.wikipedia.org/wiki/Tony_Hoare From gadgetsteve at live.co.uk Sat Sep 29 03:23:12 2018 From: gadgetsteve at live.co.uk (Steve Barnes) Date: Sat, 29 Sep 2018 07:23:12 +0000 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: References: Message-ID: On 29/09/2018 07:52, Nathaniel Smith wrote: > On Fri, Sep 28, 2018 at 11:31 PM, Steve Barnes wrote: >> One specific use case that springs to mind would be for Libraries such >> as Pandas to return iNaN for entries that are not numbers in a column >> that it has been told to treat as integers. > > Pandas doesn't use Python objects to store integers, though; it uses > an array of unboxed machine integers. > > In places where you can use Python objects to represent numbers, can't > you just use float("nan") instead of iNaN? > > -n > It is a shame about Pandas not using integers, (speed considerations I would guess). Using float("nan") would possibly be incompatible with operations down the chain which might be expecting an integer or handling a float differently. -- Steve (Gadget) Barnes Any opinions in this message are my personal opinions and do not reflect those of my employer. --- This email has been checked for viruses by AVG. https://www.avg.com From storchaka at gmail.com Sat Sep 29 03:24:44 2018 From: storchaka at gmail.com (Serhiy Storchaka) Date: Sat, 29 Sep 2018 10:24:44 +0300 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: References: Message-ID: 29.09.18 09:31, Steve Barnes ????: > I think that it should be relatively simple to extend the Python integer > class to have a NaN flag, possibly by having a bit length of 0, and have > it follow the same rules for the handling of floating point NaN, i.e. > any mathematical operation on an iNaN returns an iNaN and any comparison > with one returns False. How does it differ from float('nan')? 
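As a concrete point of comparison, the proposed behaviour can be sketched
with a toy subclass of int. This is a proof-of-concept sketch only: the names
IntNaN and inan are made up here, and the bitwise and remaining arithmetic
operators are deliberately left open.

    class IntNaN(int):
        """Toy integer NaN: propagates through arithmetic, never compares equal."""

        __hash__ = int.__hash__  # defining __eq__ would otherwise drop hashability

        def __repr__(self):
            return "inan"

        def __eq__(self, other):   # like float('nan'): not equal to anything, even itself
            return False

        def __lt__(self, other):   # all ordered comparisons are False
            return False

        __le__ = __gt__ = __ge__ = __lt__

        def __add__(self, other):  # arithmetic propagates the NaN
            return self

        __radd__ = __sub__ = __rsub__ = __mul__ = __rmul__ = __add__

    inan = IntNaN()

Unlike float('nan'), such a value still passes integer type checks while
keeping NaN-like semantics:

    py> isinstance(inan, int)
    True
    py> inan + 1
    inan
    py> inan == inan
    False
    py> isinstance(float('nan'), int)
    False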
From gadgetsteve at live.co.uk Sat Sep 29 03:33:38 2018 From: gadgetsteve at live.co.uk (Steve Barnes) Date: Sat, 29 Sep 2018 07:33:38 +0000 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: References: Message-ID: On 29/09/2018 08:18, Michael Selik wrote: > On Fri, Sep 28, 2018 at 11:32 PM Steve Barnes wrote: >> One of the strengths of the IEEE float, (to set against its many >> weaknesses), is the presence of the magic value NaN. Not a Number, or >> NaA, is especially useful in that it is a valid value in any >> mathematical operation, (always returning NaN), or comparison, (always >> returning False). In functional programming this is especially useful as >> it allows the chain to complete after an error while retaining the fact >> that an error occurred, (as we got NaN). > > The inventor of "null reference" called it a billion-dollar mistake > [0]. I appreciate the Zen of Python's encouragement that "errors > should never pass silently." Rather than returning iNaN, I'd prefer my > program raise an exception. Besides, you can use a None if you'd like. > > [0] https://en.wikipedia.org/wiki/Tony_Hoare > In the embedded world, (where I have spent most of my career), it is often the case that you need your code to always finish and if an error occurred you throw it away at the end or display the fact that you could not get a sensible answer - I am reasonably sure that the same is true of functional programming. I am not asking that the original error pass silently, (unless explicitly silenced), but rather having the option, when silencing (and hopefully logging hat an error occurred) to have a value that will pass through the rest of the processing chain without raising additional exceptions which None would be likely to do unless expressly tested for everywhere. This simplifies the overall code structure while retaining the fact that an error occurred, (and the log needs to be checked), without the dangerous practice of returning a valid value and setting an error flag, (checking of which is often neglected). -- Steve (Gadget) Barnes Any opinions in this message are my personal opinions and do not reflect those of my employer. --- This email has been checked for viruses by AVG. https://www.avg.com From gadgetsteve at live.co.uk Sat Sep 29 03:35:31 2018 From: gadgetsteve at live.co.uk (Steve Barnes) Date: Sat, 29 Sep 2018 07:35:31 +0000 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: References: Message-ID: On 29/09/2018 08:24, Serhiy Storchaka wrote: > 29.09.18 09:31, Steve Barnes ????: >> I think that it should be relatively simple to extend the Python integer >> class to have a NaN flag, possibly by having a bit length of 0, and have >> it follow the same rules for the handling of floating point NaN, i.e. >> any mathematical operation on an iNaN returns an iNaN and any comparison >> with one returns False. > > How does it differ from float('nan')? > It is still an integer and would pass through any processing that expected an integer as one, (with a value of iNaN). -- Steve (Gadget) Barnes Any opinions in this message are my personal opinions and do not reflect those of my employer. --- This email has been checked for viruses by AVG. 
https://www.avg.com From storchaka at gmail.com Sat Sep 29 03:50:24 2018 From: storchaka at gmail.com (Serhiy Storchaka) Date: Sat, 29 Sep 2018 10:50:24 +0300 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: References: Message-ID: 29.09.18 10:35, Steve Barnes ????: > On 29/09/2018 08:24, Serhiy Storchaka wrote: >> 29.09.18 09:31, Steve Barnes ????: >>> I think that it should be relatively simple to extend the Python integer >>> class to have a NaN flag, possibly by having a bit length of 0, and have >>> it follow the same rules for the handling of floating point NaN, i.e. >>> any mathematical operation on an iNaN returns an iNaN and any comparison >>> with one returns False. >> >> How does it differ from float('nan')? >> > It is still an integer and would pass through any processing that > expected an integer as one, (with a value of iNaN). Python is dynamically typed language. What is such processing that would work with iNaN, but doesn't work with float('nan')? From steve at pearwood.info Sat Sep 29 04:20:10 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Sat, 29 Sep 2018 18:20:10 +1000 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: <20180929082010.GS19437@ando.pearwood.info> On Tue, Sep 25, 2018 at 08:01:28PM +1200, Robert Collins wrote: > On Mon, 24 Sep 2018 at 19:47, Marko Ristin-Kaufmann > wrote: [...] > > There are 150K projects on pypi.org. Each one of them would > > benefit if annotated with the contracts. > > You'll lose folks attention very quickly when you try to tell folk > what they do and don't understand. > > Claiming that DbC annotations will improve the documentation of every > single library on PyPI is an extraordinary claim, and such claims > require extraordinary proof. This is not a scientific paper, or an edited book. Its an email forum, and communication is not always as precise as it could be. We should read such statements charitably, not literally and pedantically. But having said that... would it be an "extraordinary claim" to state that all 150K projects on PyPI would benefit with better documentation, tests or error checking? I don't think so. I think it would be extraordinary to claim that there was even a single project that *wouldn't* benefit from at least one such patch. I'd like to see this glorious example of software perfection, and bow down in awe to its author. Contracts combine error checking, documentation and testing all in one. Ergo, if a project would benefit from any of those things, it would benefit from a contract. Anticipating an objection: like tests and documentation and error checking, contracts do not need to be "all or nothing". A contract can be incomplete and still provide benefit. We can add contracts incrementally. Even a single pre- or post-condition check is better than no check at all. So I'm with Marko here: every project on PyPI would, in principle, benefit from some contracts. I say in principle only because in practice, the available APIs for adding contracts to Python code are so clunky that the cost of contracts are excessive. We're paying a syntax tax on contracts which blows the cost all out of proportion. Just as languages like Java pay a syntax tax on static typing (compared to languages like ML and Haskell with type inference and few declarations needed), and languages like COBOL have a syntax tax on, well, everything. 
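To put a rough shape on that syntax tax, here is the same one-line promise
written as a plain assert and as a decorator-based contract in the style used
elsewhere in this thread. This is illustrative only; the decorator form
follows the lambda convention of the icontract examples quoted above, not
necessarily its exact API.

    # Status quo: the check is one line, but it lives inside the body,
    # invisible to documentation tools and lost in overriding subclasses.
    def clip(values, limit):
        result = [min(v, limit) for v in values]
        assert all(v <= limit for v in result)
        return result

    # Decorator-based contract: the same promise, attached to the function
    # where tools can see it, paid for with a lambda, repeated parameter
    # names and an extra level of indirection.
    from icontract import post

    @post(lambda values, limit, result: all(v <= limit for v in result))
    def clip(values, limit):
        return [min(v, limit) for v in values]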
> I can think of many libraries where necessary pre and post conditions > (such as 'self is still locked') are going to be noisy, and at risk of > reducing comprehension if the DbC checks are used to enhance/extended > documentation. So long as my editor lets me collapse the contract blocks, I don't need to read them unless I want to. And automatically generated documentation always tends towards the verbose. Just look at the output of help() in the Python REPL. We learn to skim, and dig deeper only when needed. > Some of the examples you've been giving would be better expressed with > a more capable type system in my view (e.g. Rust's), but I have no > good idea about adding that into Python :/. Indeed. A lot of things which could be preconditions in Python would be type-checks in Eiffel. With gradual typing and type annotations, we could move some pre-condition and post-condition checks into the type checker. (Static typing is another thing which doesn't need to be "all or nothing".) > Anyhow, the thing I value most about python is its pithyness: its > extremely compact, allowing great developer efficiency, None of that changes. The beauty of contracts is that you can have as many or as few as make sense for each specific class or function, or for that matter each project. If your project is sufficiently lightweight and the cost of bugs is small enough, you might not bother with tests or error checking, or contracts. -- Steve From gadgetsteve at live.co.uk Sat Sep 29 04:43:44 2018 From: gadgetsteve at live.co.uk (Steve Barnes) Date: Sat, 29 Sep 2018 08:43:44 +0000 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: References: Message-ID: On 29/09/2018 08:50, Serhiy Storchaka wrote: > 29.09.18 10:35, Steve Barnes ????: >> On 29/09/2018 08:24, Serhiy Storchaka wrote: >>> 29.09.18 09:31, Steve Barnes ????: >>>> I think that it should be relatively simple to extend the Python >>>> integer >>>> class to have a NaN flag, possibly by having a bit length of 0, and >>>> have >>>> it follow the same rules for the handling of floating point NaN, i.e. >>>> any mathematical operation on an iNaN returns an iNaN and any >>>> comparison >>>> with one returns False. >>> >>> How does it differ from float('nan')? >>> >> It is still an integer and would pass through any processing that >> expected an integer as one, (with a value of iNaN). > > Python is dynamically typed language. What is such processing that would > work with iNaN, but doesn't work with float('nan')? > One simplistic example would be print(int(float('nan'))) (gives a ValueError) while print(int(iNaN)) should give 'nan' or maybe 'inan'. -- Steve (Gadget) Barnes Any opinions in this message are my personal opinions and do not reflect those of my employer. --- This email has been checked for viruses by AVG. https://www.avg.com From storchaka at gmail.com Sat Sep 29 04:56:59 2018 From: storchaka at gmail.com (Serhiy Storchaka) Date: Sat, 29 Sep 2018 11:56:59 +0300 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: References: Message-ID: 29.09.18 11:43, Steve Barnes ????: > On 29/09/2018 08:50, Serhiy Storchaka wrote: >> Python is dynamically typed language. What is such processing that would >> work with iNaN, but doesn't work with float('nan')? >> > One simplistic example would be print(int(float('nan'))) (gives a > ValueError) while print(int(iNaN)) should give 'nan' or maybe 'inan'. Why do you convert to int when you need a string representation? Just print(float('nan')). 
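For reference, the behaviour under discussion looks like this in current
CPython:

    py> print(float('nan'))
    nan
    py> int(float('nan'))
    Traceback (most recent call last):
      ...
    ValueError: cannot convert float NaN to integer

The example only fails at the int() conversion step, which is exactly where
an integer NaN would have to change the rules.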
From steve at pearwood.info Sat Sep 29 05:06:44 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Sat, 29 Sep 2018 19:06:44 +1000 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: Message-ID: <20180929090644.GT19437@ando.pearwood.info> On Wed, Sep 26, 2018 at 05:40:45AM +1000, Chris Angelico wrote: > > There are 150K projects on pypi.org. Each one of them would benefit > > if annotated with the contracts. > > This is the extraordinary claim. To justify it, you have to show that > virtually ANY project would benefit from contracts. So far, I haven't > seen any such proof. As per my previous email, I think the extraordinary claim is that there exists even a single project which wouldn't benefit from at least one contract. Honestly, you sound almost like somebody saying "Projects would benefit from getting an automated test suite? Ridiculous!" But to give you a charitable interpretation, I'll grant that given the cost to benefit ratio of code churn, human effort, refactoring etc, it is certainly possible that adding contracts to some especially mature and high-quality projects, or quick-and-dirty low-quality projects where nobody cares about bugs, would cost more than the benefit gained. There's benefit, but not *nett* benefit. That goes especially for Python code since the available interfaces for contracts are so poor. But that's why we're talking about this on Python-Ideas. I just wish we didn't have to fight so hard to justify the very idea of contracts themselves. That's like having to justify the idea of test suites, documentation and error checking. -- Steve From solipsis at pitrou.net Sat Sep 29 06:05:43 2018 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 29 Sep 2018 12:05:43 +0200 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN References: Message-ID: <20180929120543.0e5a3af2@fsol> On Fri, 28 Sep 2018 23:52:22 -0700 Nathaniel Smith wrote: > On Fri, Sep 28, 2018 at 11:31 PM, Steve Barnes wrote: > > One specific use case that springs to mind would be for Libraries such > > as Pandas to return iNaN for entries that are not numbers in a column > > that it has been told to treat as integers. > > Pandas doesn't use Python objects to store integers, though; it uses > an array of unboxed machine integers. > > In places where you can use Python objects to represent numbers, can't > you just use float("nan") instead of iNaN? Or simply None ;-) Regards Antoine. From steve at pearwood.info Sat Sep 29 07:19:25 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Sat, 29 Sep 2018 21:19:25 +1000 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: <37e7961b-528d-309f-114f-7194b9051892@kynesim.co.uk> References: <37e7961b-528d-309f-114f-7194b9051892@kynesim.co.uk> Message-ID: <20180929111925.GU19437@ando.pearwood.info> On Wed, Sep 26, 2018 at 04:03:16PM +0100, Rhodri James wrote: > Let's assume that the contracts are meaningful and useful (which I'm > pretty sure won't be 100% true; some people are bound to assume that > writing contracts means they don't have to think). Contracts are a tool. We shouldn't refuse effective tools because some developers are too DailyWTF-worthy to use them. Why should the rest of us miss out because of their incompetence? Contracts are not rocket- science: there's nothing in them that most of us aren't already doing in an ad-hoc, clumsy manner by embedding the contracts in the docstring, inside the body of our methods, in external tests etc. 
> Assuming that you > aren't doing some kind of wide-ranging static analysis (which doesn't > seem to be what we're talking about), all that the contracts have bought > you is the assurance that *this* invocation of the function with *these* > parameters giving *this* result is what you expected. It does not say > anything about the reliability of the function in general. This is virtually the complete opposite of what contracts give us. What you are describing is the problem with *unit testing*, not contracts. Unit tests only tell us that our function works with the specific input the test uses. In contrast, contracts test the function with *every* input the function is invoked with. (Up to the point that you disable checking, of course. Which is under your control: you decide when you are satisfied that the software is sufficiently bug-free to risk turning off checking.) Both are a form of testing, of course. As they say, tests can only reveal the presence of bugs, they can't prove the absence of bugs. But correctness checkers are out of scope for this discussion. > It seems to me that a lot of the DbC philosophy seems to assume that > functions are complex black-boxes whose behaviours are difficult to > grasp. I can't imagine how you draw that conclusion. That's like saying that unit tests and documentation requires the assumption that functions are complex and difficult to grasp. This introduction to DbC shows that contracts work with simple methods: https://www.eiffel.com/values/design-by-contract/introduction/ and here's an example: put (x: ELEMENT; key: STRING) is -- Insert x so that it will be retrievable through key. require count <= capacity not key.empty do ... Some insertion algorithm ... ensure has (x) item (key) = x count = old count + 1 end Two pre-conditions, and three post-conditions. That's hardly complex. [Aside: and there's actually a bug in this. What if the key already exists? But this is from a tutorial, not production code. Cut them some slack.] If I were writing this in Python, I'd write something like this: def put(self, x, key): """Insert x so that it will be retrievable through key.""" # Input checks are pre-conditions! if self.count > capacity: raise DatabaseFullError if not key: raise ValueError # .. Some insertion algorithm ... and then stick the post-conditions in a unit test, usually in a completely different file: class InsertTests(TestCase): def test_return_result(self): db = SomeDatabase() db.put("value", "key") self.AssertTrue("value" in db.values()) self.AssertEqual(db["key"], "value") self.AssertEqual(db.count, 1) Notice that my unit test is not actually checking at least one of the post-conditions, but a weaker, more specific version of it. The post-condition is that the count goes up by one on each insertion. My test only checks that the count is 1 after inserting into an empty database. So what's wrong with the status quo? - The pre-condition checks are embedded right there in the method implementation, mixing up the core algorithm with the associated error checking. - Which in turn makes it hard to distinguish the checks from the implementation, and impossible to do so automatically. - Half of the checks are very far away, in a separate file, assuming I even remembered or bothered to write the test. - The post-conditions aren't checked unless I run my test suite, and then they only check the canned input in the test suite. - The pre-conditions can't be easily disabled in production. - No class invariants. - Inheritance is not handled correctly. 
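A minimal sketch of that last point about inheritance, using nothing but the
status-quo tools: a precondition written as an assert in the base class
simply vanishes when a subclass overrides the method, and nothing warns you.

    class Base:
        def put(self, key):
            assert key, "precondition: key must not be empty"
            # ... insertion algorithm ...

    class Child(Base):
        def put(self, key):
            # The override forgets (or never knew about) the base precondition.
            # ... different insertion algorithm ...
            pass

    Child().put("")   # accepted silently; no check ever runs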
The status quo is all so very ad-hoc and messy. Design By Contract syntax would allow (not force, allow!) us to add some structure to the code: - requirements of the function - the implementation of the function - the promise made by the function Most of us already think about these as three separate things, and document them as such. Our code should reflect the structure of how we think about the code. > In my experience this is very rarely true. Most functions I > write are fairly short and easily grokked, even if they do complicated > things. That's part of the skill of breaking a problem down, IMHO; if > the function is long and horrible-looking, I've already got it wrong and > no amount of protective scaffolding like DbC is going to help. That's like saying that if a function is horrible-looking, then there's no point in writing tests for it. I'm not saying that contracts are only for horrible functions, but horrible functions are the ones which probably benefit the most from specifying exactly what they promise to do, and checking on every invocation that they live up to that promise. > >It's the reason why type checking exists, > > Except Python doesn't type check so much as try operations and see if > they work. Python (the interpreter) does type checking. Any time you get a TypeError, that's a failed type check. And with type annotations, we can run a static type checker on our code too, which will catch many of these failures before we run the code. Python code sometimes does type checking too, usually with isinstance. That's following the principle of Fail Fast, rather than waiting for some arbitrary exception deep inside the body of your function, you should fail early on bad input. -- Steve From steve at pearwood.info Sat Sep 29 07:42:18 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Sat, 29 Sep 2018 21:42:18 +1000 Subject: [Python-ideas] Why is design-by-contracts not widely In-Reply-To: References: <20180928191854.Horde.xmUWrqDGOX18KB2KDkxRZbz@webmail.your-server.de> <20180928233908.GO19437@ando.pearwood.info> <82c32dc7-6887-6706-67a2-14a5824875d5@potatochowder.com> Message-ID: <20180929114218.GV19437@ando.pearwood.info> On Sat, Sep 29, 2018 at 10:32:16AM +1000, Chris Angelico wrote: > What should happen here? > > >>> import ctypes > >>> ctypes.cast(id(1), ctypes.POINTER(ctypes.c_int))[6] = 0 > >>> 1 > > Nothing here invokes C's undefined behaviour. Or what about here: > > >>> import sys; sys.setrecursionlimit(2147483647) > >>> def f(): f() > ... > >>> f() I'm not fussed about such edge cases. ctypes is not "Python", and other implementations don't support it. Mucking about with the interpreter internals is clear "Here be dragons" territory, and that's okay. > Python has its own set of "well don't do that then" situations. In > fact, I would say that *most* languages do. Indeed, but that's not really what I'm talking about. As I said earlier, Python's semantics are relatively simply in the sense that it is governed by a particular execution model (a virtual machine) that isn't hugely complex, but there are some corner cases which are complex. Overall Python is very rich and non-minimalist. I wouldn't describe it as "simple" except in comparison to languages like C++. BASIC is a simple language, with only two scalar data types (strings and numbers) and one array type, and only a handful of fairly simple flow control tools: if, loops, GOTO/GOSUB. (Of course, modern BASICs may be more complex.) 
Python has a rich set of builtin data types (ints, floats, strings, bytes, lists, tuples, dicts...), even more in the standard library, OOP, and on top of the familiar BASIC-like flow control (for loops, while loops, if...else) we have comprehensions, with statements and exception handling. (Did I miss anything?) But not GOTO :-) -- Steve From mertz at gnosis.cx Sat Sep 29 07:47:33 2018 From: mertz at gnosis.cx (David Mertz) Date: Sat, 29 Sep 2018 07:47:33 -0400 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: References: Message-ID: On Sat, Sep 29, 2018 at 2:32 AM Steve Barnes wrote: > One of the strengths of the IEEE float, (to set against its many > weaknesses), is the presence of the magic value NaN. Not a Number, or > NaA, is especially useful in that it is a valid value in any > mathematical operation, (always returning NaN), or comparison, (always > returning False). >>> nan = float('nan') >>> nan**0 1.0 ... most operations. :-) -- Keeping medicines from the bloodstreams of the sick; food from the bellies of the hungry; books from the hands of the uneducated; technology from the underdeveloped; and putting advocates of freedom in prisons. Intellectual property is to the 21st century what the slave trade was to the 16th. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosuav at gmail.com Sat Sep 29 08:15:42 2018 From: rosuav at gmail.com (Chris Angelico) Date: Sat, 29 Sep 2018 22:15:42 +1000 Subject: [Python-ideas] Why is design-by-contracts not widely In-Reply-To: <20180929114218.GV19437@ando.pearwood.info> References: <20180928191854.Horde.xmUWrqDGOX18KB2KDkxRZbz@webmail.your-server.de> <20180928233908.GO19437@ando.pearwood.info> <82c32dc7-6887-6706-67a2-14a5824875d5@potatochowder.com> <20180929114218.GV19437@ando.pearwood.info> Message-ID: On Sat, Sep 29, 2018 at 9:43 PM Steven D'Aprano wrote: > > On Sat, Sep 29, 2018 at 10:32:16AM +1000, Chris Angelico wrote: > > > What should happen here? > > > > >>> import ctypes > > >>> ctypes.cast(id(1), ctypes.POINTER(ctypes.c_int))[6] = 0 > > >>> 1 > > > > Nothing here invokes C's undefined behaviour. Or what about here: > > > > >>> import sys; sys.setrecursionlimit(2147483647) > > >>> def f(): f() > > ... > > >>> f() > > I'm not fussed about such edge cases. ctypes is not "Python", and other > implementations don't support it. Mucking about with the interpreter > internals is clear "Here be dragons" territory, and that's okay. As are all the things that are "undefined behaviour" in C, like the result of integer overflow in a signed variable. They are "Here be dragons" territory, but somehow that's not okay for you. I don't understand why you can hate on C for having behaviours where you're told "don't do that, we can't promise anything", but it's perfectly acceptable for Python to have the exact same thing. AIUI, the only difference is that C compilers are more aggressive about assuming you won't invoke undefined behaviour, whereas there are no known Python interpreters that make such expectations. ChrisA From steve at pearwood.info Sat Sep 29 08:19:38 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Sat, 29 Sep 2018 22:19:38 +1000 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: References: Message-ID: <20180929121938.GW19437@ando.pearwood.info> On Sat, Sep 29, 2018 at 10:50:24AM +0300, Serhiy Storchaka wrote: > >>How does it differ from float('nan')? 
> >> > >It is still an integer and would pass through any processing that > >expected an integer as one, (with a value of iNaN). > > Python is dynamically typed language. What is such processing that would > work with iNaN, but doesn't work with float('nan')? The most obvious difference is that any code which checks for isinstance(x, int) will fail with a float NAN. If you use MyPy for static type checking, passing a float NAN to something annotated to only accept ints will be flagged as an error. Bitwise operators don't work: py> NAN = float("nan") py> NAN & 1 Traceback (most recent call last): File "", line 1, in TypeError: unsupported operand type(s) for &: 'float' and 'int' Now I'm not sure what Steve expects NANs to do with bitwise operators. But raising TypeError is probably not what we want. A few more operations which aren't supported by floats: NAN.numerator NAN.denominator NAN.from_bytes NAN.bit_length NAN.to_bytes -- Steve From storchaka at gmail.com Sat Sep 29 08:56:40 2018 From: storchaka at gmail.com (Serhiy Storchaka) Date: Sat, 29 Sep 2018 15:56:40 +0300 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: <20180929121938.GW19437@ando.pearwood.info> References: <20180929121938.GW19437@ando.pearwood.info> Message-ID: 29.09.18 15:19, Steven D'Aprano ????: > On Sat, Sep 29, 2018 at 10:50:24AM +0300, Serhiy Storchaka wrote: >>>> How does it differ from float('nan')? >>>> >>> It is still an integer and would pass through any processing that >>> expected an integer as one, (with a value of iNaN). >> >> Python is dynamically typed language. What is such processing that would >> work with iNaN, but doesn't work with float('nan')? > > The most obvious difference is that any code which checks for > isinstance(x, int) will fail with a float NAN. Yes, an explicit check. But why do you need an explicit check? What will you do with True returned for iNaN? Can you convert it to a machine integer or use it as length or index? > If you use MyPy for > static type checking, passing a float NAN to something annotated to only > accept ints will be flagged as an error. I think that passing iNaN to most of functions which expect int is an error. Does MyPy supports something like "int | iNaN"? Than it should be used for functions which accept int and iNaN. > Bitwise operators don't work: > > py> NAN = float("nan") > py> NAN & 1 > Traceback (most recent call last): > File "", line 1, in > TypeError: unsupported operand type(s) for &: 'float' and 'int' > > > Now I'm not sure what Steve expects NANs to do with bitwise operators. > But raising TypeError is probably not what we want. Since these operations make no sense, it makes no sense to discuss them. > A few more operations which aren't supported by floats: > > NAN.numerator > NAN.denominator Do you often use these attributes of ints? > NAN.from_bytes > NAN.bit_length > NAN.to_bytes What is the meaning of this? From steve at pearwood.info Sat Sep 29 09:00:08 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Sat, 29 Sep 2018 23:00:08 +1000 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: References: Message-ID: <20180929130008.GX19437@ando.pearwood.info> On Sat, Sep 29, 2018 at 06:31:46AM +0000, Steve Barnes wrote: > One of the strengths of the IEEE float, (to set against its many > weaknesses), o_O Since I'm old enough to (just barely) remember the chaos and horror of numeric programming before IEEE-754, I find that comment rather shocking. 
I'm curious what you think those weaknesses are. > is the presence of the magic value NaN. Not a Number, or > NaA, is especially useful in that it is a valid value in any > mathematical operation, (always returning NaN), or comparison, (always > returning False). Almost so. But the exceptions don't matter for this discussion. > In functional programming this is especially useful as > it allows the chain to complete after an error while retaining the fact > that an error occurred, (as we got NaN). Not just functional programming. [...] > I think that it should be relatively simple to extend the Python integer > class to have a NaN flag, possibly by having a bit length of 0, and have > it follow the same rules for the handling of floating point NaN, i.e. > any mathematical operation on an iNaN returns an iNaN and any comparison > with one returns False. Alas, a bit length of 0 is zero: py> (0).bit_length() 0 I too have often wished that integers would include three special values, namely plus and minus infinity and a NAN. On the other hand, that would add some complexity to the type, and make them harder to learn and use. Perhaps it would be better to subclass int and put the special values in the subclass. A subclass could be written in Python, and would act as a good proof of concept, demonstrating the behaviour of iNAN. For example, what would you expect iNAN & 1 to return? Back in the 1990s, Apple Computers introduced their implementation of IEEE-754, called "SANE" (Standard Apple Numeric Environment). It included a 64-bit integer format "comp", which included a single NAN value (but no infinities), so there is definately prior art to having an integer iNAN value. Likewise, R includes a special NA value, distinct from IEEE-754 NANs, which we could think of as something very vaguely like an integer NAN. https://stat.ethz.ch/R-manual/R-devel/library/base/html/NA.html -- Steve From Richard at Damon-Family.org Sat Sep 29 09:11:41 2018 From: Richard at Damon-Family.org (Richard Damon) Date: Sat, 29 Sep 2018 09:11:41 -0400 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: References: Message-ID: <605e45d0-8dac-602b-96c1-8c67f341cc21@Damon-Family.org> On 9/29/18 2:31 AM, Steve Barnes wrote: > One of the strengths of the IEEE float, (to set against its many > weaknesses), is the presence of the magic value NaN. Not a Number, or > NaA, is especially useful in that it is a valid value in any > mathematical operation, (always returning NaN), or comparison, (always > returning False). In functional programming this is especially useful as > it allows the chain to complete after an error while retaining the fact > that an error occurred, (as we got NaN). > > In languages such as C integers can only be used to represent a limited > range of values in integers and a less limited range of values, (but > still limited), with a limited accuracy. However, one of Pythons > strengths is that its integers can represent any whole number value, (up > to the maximum available memory and in exchange for slow performance > when numbers get huge). This is accomplished by Python Integers being > objects rather than a fixed number of bytes. > > I think that it should be relatively simple to extend the Python integer > class to have a NaN flag, possibly by having a bit length of 0, and have > it follow the same rules for the handling of floating point NaN, i.e. > any mathematical operation on an iNaN returns an iNaN and any comparison > with one returns False. 
> > One specific use case that springs to mind would be for Libraries such > as Pandas to return iNaN for entries that are not numbers in a column > that it has been told to treat as integers. > > We would possibly need a flag to set this behaviour, rather than raising > an Exception, or at the very least automatically (or provide a method > to) set LHS integers to iNaN on such an exception. > > I thought that I would throw this out to Python Ideas for some > discussion of whether such a feature is: > a) Desirable? > b) Possible, (I am sure that it could be done)? > c) Likely to get me kicked off of the list? I would think that a possibly better solution would be the creation of a NAN type (similar? to NONE) that implement this sort of property. That way the feature can be added to integers, rationals, and any other numeric types that exist (why do just integers need this addition). -- Richard Damon From steve at pearwood.info Sat Sep 29 10:20:55 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Sun, 30 Sep 2018 00:20:55 +1000 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: <605e45d0-8dac-602b-96c1-8c67f341cc21@Damon-Family.org> References: <605e45d0-8dac-602b-96c1-8c67f341cc21@Damon-Family.org> Message-ID: <20180929142055.GA19437@ando.pearwood.info> On Sat, Sep 29, 2018 at 09:11:41AM -0400, Richard Damon wrote: > I would think that a possibly better solution would be the creation of a > NAN type (similar? to NONE) that implement this sort of property. That > way the feature can be added to integers, rationals, and any other > numeric types that exist (why do just integers need this addition). Having NAN be a seperate type wouldn't help. If x needs to be an int, it can't be a separate NAN object because that's the wrong type. If ints had a NAN value, then rationals would automatically also get a NAN value, simply by using NAN/1. That's similar to the way that the complex type automatically gets NANs, on the basis that either the real or imaginary part can be a float NAN: py> complex(1, float('nan')) (1+nanj) -- Steve From steve at pearwood.info Sat Sep 29 10:43:11 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Sun, 30 Sep 2018 00:43:11 +1000 Subject: [Python-ideas] Why is design-by-contracts not widely In-Reply-To: References: <20180928191854.Horde.xmUWrqDGOX18KB2KDkxRZbz@webmail.your-server.de> <20180928233908.GO19437@ando.pearwood.info> <82c32dc7-6887-6706-67a2-14a5824875d5@potatochowder.com> <20180929114218.GV19437@ando.pearwood.info> Message-ID: <20180929144311.GB19437@ando.pearwood.info> On Sat, Sep 29, 2018 at 10:15:42PM +1000, Chris Angelico wrote: [...] > As are all the things that are "undefined behaviour" in C, like the > result of integer overflow in a signed variable. They are "Here be > dragons" territory, but somehow that's not okay for you. I don't > understand why you can hate on C for having behaviours where you're > told "don't do that, we can't promise anything", but it's perfectly > acceptable for Python to have the exact same thing. They're not the same thing, not even close to the same thing. Undefined behaviour in C is a radically different concept to the *implementation-defined behaviour* you describe in Python and most (all?) other languages. I don't know how to communicate that message any better than the pages I linked to before. 
> AIUI, the only difference is that C compilers are more aggressive > about assuming you won't invoke undefined behaviour, whereas there are > no known Python interpreters that make such expectations. I don't know any other language which has the same concept of undefined behaviour as C, neither before nor after. What does that tell you? If C undefined behaviour is such a good idea, why don't more languages do the same thing? Undefined behaviour allows C compilers to generate really fast code, even if the code does something completely and radically different from what the source code says. Consequently, undefined behaviour in C is a HUGE source of bugs, including critical security bugs, and the C language is full of landmines for the unwary and inexpert, code which looks correct but could do *absolutely anything at all*. The C language philosophy is to give up correctness in favour of speed. I hate that idea. If there was a Zen of C, it would say "Errors should not just be silent, they're an opportunity to win benchmark competitions." -- Steve From turnbull.stephen.fw at u.tsukuba.ac.jp Sat Sep 29 11:14:36 2018 From: turnbull.stephen.fw at u.tsukuba.ac.jp (Stephen J. Turnbull) Date: Sun, 30 Sep 2018 00:14:36 +0900 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: <5BAC807A.2070509@canterbury.ac.nz> Message-ID: <23471.38620.405932.775143@turnbull.sk.tsukuba.ac.jp> Marko Ristin-Kaufmann writes: > I annotated pathlib with contracts: > https://github.com/mristin/icontract-pathlib-poc. I zipped the HTML docs Thank your for completing this task so quickly. I'm sorry, but I didn't find it convincing. I'll leave others to discuss the docs, as they are clearly "proof of concept" and I expect will be improved greatly. Part of the problem is the style of contracts, which can probably be improved with syntactic support. For example, the many double negatives of the form "not (not (X)) or Y". I guess the idea is to express all implications "X implies Y" in the equivalent form "not X or Y". I tried a few in the form "Y if X else True" but didn't find that persuasive. I also found the lambdas annoying. I conclude that there's good reason to prefer the style where the condition is expressed as a str, and eval'ed by the contract machinery. This would get rid of the lambdas, and allow writing a contract parser so you could write implications in a more natural form. I haven't tried rewriting that way, and of course, I don't have a parser to actually run stuff. > - not list(self.iterdir()) (??? There must be a way to check > this more optimally) You might define a function like this: def is_empty_dir(d): for _ in os.scandir(d): return False return True or the function could catch the StopIteration from next() directly (but I think that's considered bad style). I don't think there's any way to do this without a function call. Note that the implementation of iterdir calls os.listdir, so this function isn't necessarily much more efficient than "not list(self.iterdir())". (I'm pretty sure that in this case the iterator keeps a copy of the list internally, and listification probably involves object creation overhead, a memory allocation and a machine-language block copy.) I'm not sure whether scandir is any better. And I don't know how big a directory needs to be before the overhead of function call and exception handling become great enough to make it worthwhile. 
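For completeness, the next()-based variant alluded to above can be written
with a default so that StopIteration never needs to be handled explicitly.
A sketch, with the same caveats about whether the saving matters in practice:

    import os

    def is_empty_dir(d):
        # next() with a default returns None instead of raising StopIteration
        # when the directory yields no entries.
        with os.scandir(d) as it:
            return next(it, None) is None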
> [You] want to convey the message: dear user, if you are iterating > through a list of paths, use this function to decide if you should > call rmdir() or unlink(). Analogously with the first contract: dear > user, please check if the directory is empty before calling rmdir() > and this is what you need to call to actually check that. I don't understand this logic. I don't think I'd be looking at random contracts for individual to learn how to handle filesystem objects. I'd be more likely to do this kind of thing in most of my code: def rmtree(path: Path) -> None: try: path.unlink() except PermissionError: for p in path.iterdir(): rmtree(p) path.rmdir() I wrote all of that without being a Path user and without checking docs. (I cheated on PermissionError by testing in the interpreter, I'd probably just use Exception if I didn't have an interpreter to hand already.) Note that this function is incorrect: PermissionError could occur because I don't have write permission on the parent directory. I also don't learn anything about PermissionError from your code, so your contract is incomplete. *DbC is just not a guarantee of anything* -- if there's a guarantee, it derives from the quality of the development organization that uses DbC. > I also finally assembled the puzzle. Most of you folks are all > older and were exposed to DbC in the 80ies championed by DbC > zealots who advertised it as *the *tool for software > development. You were repulsed [...]. Please don't make statements that require mind-reading. You do not know that (I for example am 60, and have *heard of* design-by-contract before but this is the first time I've seen it *seriously proposed for use* or implementation in any project I participate in). More important, several posters have explained why they don't see a need for DbC in Python. Two common themes are proliferation of conditions making both code and documentation hard to read, and already using methods of similar power such as TDD or other systems using unit tests. It's bad form to discount such explicit statements. > And that's why I said that the libraries on pypi meant to be used by > multiple people and which already have type annotations would obviously > benefit from contracts -- while you were imagining that all of these > libraries need to be DbC'ed 100%, I was imagining something much more > humble. Thus the misunderstanding. No, the misunderstanding is caused by your wording and by your lack of attention to the costs of writing contracts. Criticizing yourself is a much more effective strategy: you can control your words, but not how others understand them. Your best bet is to choose words that are hard to misunderstand. > Similarly with rmdir() -- "the directory must be empty" -- but how > exactly am I supposed to check that? There's no reason to suppose the contract is a good place to look for that. "not list(path.iterdir())" is the obvious and easy-to-read test (but in your style, a nonempty condition would be "not not list(path.iterdir()", which is not not not the way to do it in code ;-). But this may not be efficient for "large enough" directories. However, if contracts are expected to be disabled in production code, there's strong reason to write readable rather than efficient code for contracts. Please trim unneeded text. Fixed it for you this time. From turnbull.stephen.fw at u.tsukuba.ac.jp Sat Sep 29 11:15:39 2018 From: turnbull.stephen.fw at u.tsukuba.ac.jp (Stephen J. 
Turnbull) Date: Sun, 30 Sep 2018 00:15:39 +0900 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: <20180929090644.GT19437@ando.pearwood.info> References: <20180929090644.GT19437@ando.pearwood.info> Message-ID: <23471.38683.601205.544283@turnbull.sk.tsukuba.ac.jp> Steven D'Aprano writes: > I just wish we didn't have to fight so hard to justify the very > idea of contracts themselves. You don't. You need to justify putting them in the stdlib. That's a pretty high bar. And as crappy as the contracts in Marko's pathlib rewrite look to me, I suspect you'll need syntactic support (eg, for "implies", see my post replying to Marko). If people want checking in production where reasonable (seems to be one of the nice things about contracts), you'll probably want a language change to support that syntax efficiently, which is usually a prohibitively high bar. From turnbull.stephen.fw at u.tsukuba.ac.jp Sat Sep 29 11:26:46 2018 From: turnbull.stephen.fw at u.tsukuba.ac.jp (Stephen J. Turnbull) Date: Sun, 30 Sep 2018 00:26:46 +0900 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: <20180929111925.GU19437@ando.pearwood.info> References: <37e7961b-528d-309f-114f-7194b9051892@kynesim.co.uk> <20180929111925.GU19437@ando.pearwood.info> Message-ID: <23471.39350.891499.861435@turnbull.sk.tsukuba.ac.jp> Steven D'Aprano writes: > put (x: ELEMENT; key: STRING) is > -- Insert x so that it will be retrievable through key. > require > count <= capacity > not key.empty > do > ... Some insertion algorithm ... > ensure > has (x) > item (key) = x > count = old count + 1 > end > > Two pre-conditions, and three post-conditions. That's hardly > complex. You can already do this: def put(self, x: Element, key: str) -> None: """Insert x so that it will be retrievable through key.""" # CHECKING PRECONDITIONS _old_count = self.count assert self.count <= self.capacity, assert key # IMPLEMENTATION ... some assertion algorithm ... # CHECKING POSTCONDITIONS assert x in self assert self[key] == x assert self.count == _old_count return I don't see a big advantage to having syntax, unless the syntax allows you to do things like turn off "expensive" contracts only. Granted, you save a little bit of typing and eye movement (you can omit "assert" and have syntax instead of an assignment for checking postconditions dependent on initial state). A document generator can look for the special comments (as with encoding cookies), and suck in all the asserts following until a non-assert line of code (or the next special comment). The assignments will need special handling, an additional special comment or something. With PEP 572, I think you could even do this: assert ((_old_count := self.count),) to get the benefit of python -O here. > If I were writing this in Python, I'd write something like this: > > def put(self, x, key): > """Insert x so that it will be retrievable through key.""" > # Input checks are pre-conditions! > if self.count > capacity: > raise DatabaseFullError > if not key: > raise ValueError > # .. Some insertion algorithm ... But this is quite different, as I understand it. Nothing I've seen in the discussion so far suggests that a contract violation allows raising differentiated exceptions, and it seems very unlikely from the syntax in your example above. 
I could easily see both of these errors being retryable:

    for _ in range(3):
        try:
            db.put(x, key)
        except DatabaseFullError:
            db.resize(expansion_factor=1.5)
            db.put(x, key)
        except ValueError:
            db.put(x, alternative_key)

> and then stick the post-conditions in a unit test, usually in a
> completely different file:

If you like the contract-writing style, why would you do either of these instead of something like the code I wrote above?

> So what's wrong with the status quo?
>
> - The pre-condition checks are embedded right there in the
>   method implementation, mixing up the core algorithm with the
>   associated error checking.

You don't need syntax to separate them, you can use a convention, as I did above.

> - Which in turn makes it hard to distinguish the checks from
>   the implementation, and impossible to do so automatically.

sed can do it, why can't we?

> - Half of the checks are very far away, in a separate file,
>   assuming I even remembered or bothered to write the test.

That was your choice. There's nothing about the assert statement that says you're not allowed to use it at the end of a definition.

> - The post-conditions aren't checked unless I run my test suite, and
>   then they only check the canned input in the test suite.

Ditto.

> - The pre-conditions can't be easily disabled in production.

What's so hard about python -O?

> - No class invariants.

Examples?

> - Inheritance is not handled correctly.

Examples? Mixins and classes with additional functionality should work fine AFAICS. I guess you'd have to write the contracts in each subclass of an abstract class, which is definitely a minus for some of the contracts. But I don't see offhand why you would expect that the full contract of a method of a parent class would typically make sense without change for an overriding implementation, and might not make sense for a class with restricted functionality.

> The status quo is all so very ad-hoc and messy. Design By Contract
> syntax would allow (not force, allow!) us to add some structure to the
> code:
>
> - requirements of the function
> - the implementation of the function
> - the promise made by the function

Possible already as far as I can see. OK, you could have the compiler enforce the structure to some extent, but the real problem IMO is going to be like documentation and testing: programmers just won't do it regardless of syntax to make it nice and compiler checkable.

> Most of us already think about these as three separate things, and
> document them as such. Our code should reflect the structure of how we
> think about the code.

But what's the need for syntax? How about the common (in this thread) complaint that even as decorators, the contract is annoying, verbose, and distracts the reader from understanding the code?

Note: I think that, as with static typing, this could be mitigated by allowing contracts to be optionally specified in a stub file. As somebody pointed out, it shouldn't be hard to write contract strippers and contract folding in many editors. (As always, we have to admit it's very difficult to get people to change their editor!)

> > In my experience this is very rarely true. Most functions I
> > write are fairly short and easily grokked, even if they do complicated
> > things. That's part of the skill of breaking a problem down, IMHO; if
> > the function is long and horrible-looking, I've already got it wrong and
> > no amount of protective scaffolding like DbC is going to help.
> > That's like saying that if a function is horrible-looking, then there's > no point in writing tests for it. > > I'm not saying that contracts are only for horrible functions, but > horrible functions are the ones which probably benefit the most from > specifying exactly what they promise to do, and checking on every > invocation that they live up to that promise. I think you're missing the point then: ISTM that the implicit claim here is that the time spent writing contracts for a horrible function would be better spent refactoring it. As you mention in connection with the Eiffel example, it's not easy to get all the relevant contracts, and for a horrible function it's going to be hard to get some of the ones you do write correct. > Python (the interpreter) does type checking. Any time you get a > TypeError, that's a failed type check. And with type annotations, we can > run a static type checker on our code too, which will catch many of > these failures before we run the code. But an important strength of contracts is that they are *always* run, on any input you actually give the function. From marko.ristin at gmail.com Sat Sep 29 11:55:01 2018 From: marko.ristin at gmail.com (Marko Ristin-Kaufmann) Date: Sat, 29 Sep 2018 17:55:01 +0200 Subject: [Python-ideas] "old" values in postconditions In-Reply-To: References: <13186373-6FB6-4C8B-A8D8-2C3E028CDC3D@gmail.com> <3C33B6FF-FC19-47D6-AD2A-FC0B17C50A8D@gmail.com> <0061278F-4243-42BD-945D-A93B4A0FC21D@gmail.com> Message-ID: Hi James, What about a tool that we discussed, to convert contracts back and forth to readable form on IDe save/load with the following syntax: def some_func(arg1:int, arg2:int)-> int: # typing on the phone so no indent With requiring: Assert arg1 < arg2, "some message" With snapshotting: Var1= some_func(arg1) With ensuring: If some_enabling_condition: Assert arg1 + arg2 < var1 If no snapshot, with ensuring is dedented. Only simple assignments allowed in snapshots, only asserts and ifs allowed in require/ensure blocks. Result is reserved for the result of the function. No statements allowed in require/ensure. The same with class invariants. Works with ast and autocomplete in pycharm. Sorry for the hasty message :) Marko Le sam. 29 sept. 2018 ? 07:36, Marko Ristin-Kaufmann a ?crit : > Hi James, > I'm a bit short on time today, and would need some more time and attention > to understand the proposal you wrote. I'll try to come back to you > tomorrow. > > In any case, I need to refactor icontract's decorators to use conditions > like lambda P: and lambda P, result: first before adding snapshot > functionality. > > What about having @snapshot_with and @snapshot? @Snapshot_with does what > you propose and @snapshot expects a lambda P, identifier: ? > > After the refactoring, maybe the same could be done for defining contracts > as well? (Requires and requires_that?) > > If the documentation is clear, I'd expect the user to be able to > distinguish the two. The first approach is shorter, and uses magic, but > fails in some rare situations. The other method is more verbose, but always > works. > > Cheers, > Marko > > Le sam. 29 sept. 2018 ? 00:35, James Lu a ?crit : > >> I am fine with your proposed syntax. It?s certainly lucid. Perhaps it >> would be a good idea to get people accustomed to ?non-magic? syntax. >> >> I still have a feeling that most developers would like to store the state >> in many different custom ways. >> >> Please explain. (Expressions like thunk(all)(a == b for a, b in >> P.arg.meth()) would be valid.) 
>> >> I'm thinking mostly about all the edge cases which we would not be able >> to cover (and how complex that would be to cover them). >> >> >> Except for a > b > c being one flat expression with 5 members, it seems >> fairly easy to recreate an AST, which can then be compiled down to a code >> object. The code object can be fun with a custom ?locals()? >> >> Below is my concept code for such a P object. >> >> from ast import * >> >> # not done: enforce Singleton property on EmptySymbolType >> >> class EmptySymbolType(object): ... >> >> EmptySymbol = EmptySymbolType() # empty symbols are placeholders >> >> class MockP(object): >> >> # "^" is xor >> >> @icontract.pre(lambda symbol, astnode: (symbol is None) ^ (astnode is >> None)) >> >> def __init__(self, symbol=None, value=EmptySymbol, astnode=None, >> initsymtable=(,)): >> >> self.symtable = dict(initsymtable) >> >> if symbol: >> >> self.expr = Expr(value=Name(id=symbol, ctx=Load())) >> >> self.symtable = {symbol: value} >> >> else: >> >> self.expr = astnode >> >> self.frozen = False >> >> def __add__(self, other): >> >> wrapped = MockP.wrap_value(other) >> >> return MockP(astnode=Expr(value=BinOp(self.expr, Add(), >> wrapped.expr), initsymtable={**self.symtable, **wrapped.symtable}) >> >> def compile(self): ... >> >> def freeze(self): >> >> # frozen objects wouldn?t have an overrided getattr, allowing for >> icontract to manipulate the MockP object using its public interface >> >> self.frozen = True >> >> @classmethod >> >> def wrap_value(cls, obj): >> >> # create a MockP object from a value. Generate a random identifier >> and set that as the key in symtable, the AST node is the name of that >> identifier, retrieving its value through simple expression evaluation. >> >> ... >> >> >> thunk = MockP.wrap_value >> >> P = MockP('P') >> >> # elsewhere: ensure P is only accessed via valid ?dot attribute access? >> inside @snapshot so contracts fail early, or don?t and allow Magic like >> __dict__ to occur on P. >> >> On Sep 27, 2018, at 9:49 PM, Marko Ristin-Kaufmann < >> marko.ristin at gmail.com> wrote: >> >> Hi James, >> >> I still have a feeling that most developers would like to store the state >> in many different custom ways. I see also thunk and snapshot with wrapper >> objects to be much more complicated to implement and maintain; I'm thinking >> mostly about all the edge cases which we would not be able to cover (and >> how complex that would be to cover them). Then the linters need also to >> work around such wrappers... It might also scare users off since it looks >> like too much magic. Another concern I also have is that it's probably very >> hard to integrate these wrappers with mypy later -- but I don't really have >> a clue about that, only my gut feeling? >> >> What about we accepted to repeat "lambda P, " prefix, and have something >> like this: >> >> @snapshot( >> lambda P, some_name: len(P.some_property), >> lambda P, another_name: hash(P.another_property) >> ) >> >> It's not too verbose for me and you can still explain in three-four >> sentences what happens below the hub in the library's docs. A >> pycharm/pydev/vim/emacs plugins could hide the verbose parts. >> >> I performed a small experiment to test how this solution plays with >> pylint and it seems OK that arguments are not used in lambdas. >> >> Cheers, >> Marko >> >> >> On Thu, 27 Sep 2018 at 12:27, James Lu wrote: >> >>> Why couldn?t we record the operations done to a special object and >>> replay them? 
>>> >>> Actually, I think there is probably no way around a decorator that >>>> captures/snapshots the data before the function call with a lambda (or even >>>> a separate function). "Old" construct, if we are to parse it somehow from >>>> the condition function, would limit us only to shallow copies (and be >>>> complex to implement as soon as we are capturing out-of-argument values >>>> such as globals *etc.)*. Moreove, what if we don't need shallow >>>> copies? I could imagine a dozen of cases where shallow copy is not what the >>>> programmer wants: for example, s/he might need to make deep copies, hash or >>>> otherwise transform the input data to hold only part of it instead of >>>> copying (*e.g., *so as to allow equality check without a double copy >>>> of the data, or capture only the value of certain property transformed in >>>> some way). >>>> >>>> >>> from icontract import snapshot, P, thunk >>> @snapshot(some_identifier=P.self.some_method(P.some_argument.some_attr)) >>> >>> P is an object of our own type, let?s call the type MockP. MockP returns >>> new MockP objects when any operation is done to it. MockP * MockP = MockP. >>> MockP.attr = MockP. MockP objects remember all the operations done to them, >>> and allow the owner of a MockP object to re-apply the same operations >>> >>> ?thunk? converts a function or object or class to a MockP object, >>> storing the function or object for when the operation is done. >>> >>> thunk(function)() >>> >>> Of course, you could also thunk objects like so: thunk(3) * P.number. >>> (Though it might be better to keep the 3 after P.number in this case so >>> P.number?s __mult__ would be invoked before 3?s __mult__ is invokes. >>> >>> >>> In most cases, you?d save any operations that can be done on a copy of >>> the data as generated by @snapshot in @postcondiion. thunk is for rare >>> scenarios where 1) it?s hard to capture the state, for example an object >>> that manages network state (or database connectivity etc) and whose stage >>> can only be read by an external classmethod 2) you want to avoid using >>> copy.deepcopy. >>> >>> I?m sure there?s some way to override isinstance through a meta class or >>> dunder subclasshook. >>> >>> I suppose this mocking method could be a shorthand for when you don?t >>> need the full power of a lambda. It?s arguably more succinct and readable, >>> though YMMV. >>> >>> I look forward to reading your opinion on this and any ideas you might >>> have. >>> >>> On Sep 26, 2018, at 3:56 PM, James Lu wrote: >>> >>> Hi Marko, >>> >>> Actually, following on #A4, you could also write those as multiple >>> decorators: >>> @snpashot(lambda _, some_identifier: some_func(_, >>> some_argument.some_attr) >>> @snpashot(lambda _, other_identifier: other_func(_.self)) >>> >>> Yes, though if we?re talking syntax using kwargs would probably be >>> better. >>> Using ?P? instead of ?_?: (I agree that _ smells of ignored arguments) >>> >>> @snapshot(some_identifier=lambda P: ..., some_identifier2=lambda P: ...) >>> >>> Kwargs has the advantage that you can extend multiple lines without >>> repeating @snapshot, though many lines of @capture would probably be more >>> intuitive since each decorator captures one variable. >>> >>> Why uppercase "P" and not lowercase (uppercase implies a constant for >>> me)? >>> >>> To me, the capital letters are more prominent and explicit- easier to >>> see when reading code. 
It also implies its a constant for you- you >>> shouldn?t be modifying it, because then you?d be interfering with the >>> function itself. >>> >>> Side node: maybe it would be good to have an @icontract.nomutate >>> (probably use a different name, maybe @icontract.readonly) that makes sure >>> a method doesn?t mutate its own __dict__ (and maybe the __dict__ of the >>> members of its __dict__). It wouldn?t be necessary to put the decorator on >>> every read only function, just the ones your worried might mutate. >>> >>> Maybe a @icontract.nomutate(param=?paramname?) that ensures the __dict__ >>> of all members of the param name have the same equality or identity before >>> and after. The semantics would need to be worked out. >>> >>> On Sep 26, 2018, at 8:58 AM, Marko Ristin-Kaufmann < >>> marko.ristin at gmail.com> wrote: >>> >>> Hi James, >>> >>> Actually, following on #A4, you could also write those as multiple >>> decorators: >>> @snpashot(lambda _, some_identifier: some_func(_, >>> some_argument.some_attr) >>> @snpashot(lambda _, other_identifier: other_func(_.self)) >>> >>> Am I correct? >>> >>> "_" looks a bit hard to read for me (implying ignored arguments). >>> >>> Why uppercase "P" and not lowercase (uppercase implies a constant for >>> me)? Then "O" for "old" and "P" for parameters in a condition: >>> @post(lambda O, P: ...) >>> ? >>> >>> It also has the nice property that it follows both the temporal and the >>> alphabet order :) >>> >>> On Wed, 26 Sep 2018 at 14:30, James Lu wrote: >>> >>>> I still prefer snapshot, though capture is a good name too. We could >>>> use generator syntax and inspect the argument names. >>>> >>>> Instead of ?a?, perhaps use ?_?. Or maybe use ?A.?, for arguments. Some >>>> people might prefer ?P? for parameters, since parameters sometimes means >>>> the value received while the argument means the value passed. >>>> >>>> (#A1) >>>> >>>> from icontract import snapshot, __ >>>> @snapshot(some_func(_.some_argument.some_attr) for some_identifier, _ >>>> in __) >>>> >>>> Or (#A2) >>>> >>>> @snapshot(some_func(some_argument.some_attr) for some_identifier, _, >>>> some_argument in __) >>>> >>>> ? >>>> Or (#A3) >>>> >>>> @snapshot(lambda some_argument,_,some_identifier: >>>> some_func(some_argument.some_attr)) >>>> >>>> Or (#A4) >>>> >>>> @snapshot(lambda _,some_identifier: >>>> some_func(_.some_argument.some_attr)) >>>> @snapshot(lambda _,some_identifier, other_identifier: >>>> some_func(_.some_argument.some_attr), other_func(_.self)) >>>> >>>> I like #A4 the most because it?s fairly DRY and avoids the extra >>>> punctuation of >>>> >>>> @capture(lambda a: {"some_identifier": some_func(a.some_argument.some_attr)}) >>>> >>>> >>>> On Sep 26, 2018, at 12:23 AM, Marko Ristin-Kaufmann < >>>> marko.ristin at gmail.com> wrote: >>>> >>>> Hi, >>>> >>>> Franklin wrote: >>>> >>>>> The name "before" is a confusing name. It's not just something that >>>>> happens before. It's really a pre-`let`, adding names to the scope of >>>>> things after it, but with values taken before the function call. Based >>>>> on that description, other possible names are `prelet`, `letbefore`, >>>>> `predef`, `defpre`, `beforescope`. Better a name that is clearly >>>>> confusing than one that is obvious but misleading. >>>> >>>> >>>> James wrote: >>>> >>>>> I suggest that instead of ?@before? it?s ?@snapshot? and instead of ? >>>>> old? it?s ?snapshot?. 
>>>> >>>> >>>> I like "snapshot", it's a bit clearer than prefixing/postfixing verbs >>>> with "pre" which might be misread (*e.g., *"prelet" has a meaning in >>>> Slavic languages and could be subconsciously misread, "predef" implies to >>>> me a pre-*definition* rather than prior-to-definition , "beforescope" >>>> is very clear for me, but it might be confusing for others as to what it >>>> actually refers to ). What about "@capture" (7 letters for captures *versus >>>> *8 for snapshot)? I suppose "@let" would be playing with fire if >>>> Python with conflicting new keywords since I assume "let" to be one of the >>>> candidates. >>>> >>>> Actually, I think there is probably no way around a decorator that >>>> captures/snapshots the data before the function call with a lambda (or even >>>> a separate function). "Old" construct, if we are to parse it somehow from >>>> the condition function, would limit us only to shallow copies (and be >>>> complex to implement as soon as we are capturing out-of-argument values >>>> such as globals *etc.)*. Moreove, what if we don't need shallow >>>> copies? I could imagine a dozen of cases where shallow copy is not what the >>>> programmer wants: for example, s/he might need to make deep copies, hash or >>>> otherwise transform the input data to hold only part of it instead of >>>> copying (*e.g., *so as to allow equality check without a double copy >>>> of the data, or capture only the value of certain property transformed in >>>> some way). >>>> >>>> I'd still go with the dictionary to allow for this extra freedom. We >>>> could have a convention: "a" denotes to the current arguments, and "b" >>>> denotes the captured values. It might make an interesting hint that we put >>>> "b" before "a" in the condition. You could also interpret "b" as "before" >>>> and "a" as "after", but also "a" as "arguments". >>>> >>>> @capture(lambda a: {"some_identifier": some_func(a.some_argument.some_attr)}) >>>> @post(lambda b, a, result: b.some_identifier > result + a.another_argument.another_attr) >>>> def some_func(some_argument: SomeClass, another_argument: AnotherClass) -> SomeResult: >>>> ... >>>> >>>> "b" can be omitted if it is not used. Under the hub, all the arguments >>>> to the condition would be passed by keywords. >>>> >>>> In case of inheritance, captures would be inherited as well. Hence the >>>> library would check at run-time that the returned dictionary with captured >>>> values has no identifier that has been already captured, and the linter >>>> checks that statically, before running the code. Reading values captured in >>>> the parent at the code of the child class might be a bit hard -- but that >>>> is case with any inherited methods/properties. In documentation, I'd list >>>> all the captures of both ancestor and the current class. >>>> >>>> I'm looking forward to reading your opinion on this and alternative >>>> suggestions :) >>>> Marko >>>> >>>> On Tue, 25 Sep 2018 at 18:12, Franklin? Lee < >>>> leewangzhong+python at gmail.com> wrote: >>>> >>>>> On Sun, Sep 23, 2018 at 2:05 AM Marko Ristin-Kaufmann >>>>> wrote: >>>>> > >>>>> > Hi, >>>>> > >>>>> > (I'd like to fork from a previous thread, "Pre-conditions and >>>>> post-conditions", since it got long and we started discussing a couple of >>>>> different things. Let's discuss in this thread the implementation of a >>>>> library for design-by-contract and how to push it forward to hopefully add >>>>> it to the standard library one day.) 
>>>>> > >>>>> > For those unfamiliar with contracts and current state of the >>>>> discussion in the previous thread, here's a short summary. The discussion >>>>> started by me inquiring about the possibility to add design-by-contract >>>>> concepts into the core language. The idea was rejected by the participants >>>>> mainly because they thought that the merit of the feature does not merit >>>>> its costs. This is quite debatable and seems to reflect many a discussion >>>>> about design-by-contract in general. Please see the other thread, "Why is >>>>> design-by-contract not widely adopted?" if you are interested in that >>>>> debate. >>>>> > >>>>> > We (a colleague of mine and I) decided to implement a library to >>>>> bring design-by-contract to Python since we don't believe that the concept >>>>> will make it into the core language anytime soon and we needed badly a tool >>>>> to facilitate our work with a growing code base. >>>>> > >>>>> > The library is available at http://github.com/Parquery/icontract. >>>>> The hope is to polish it so that the wider community could use it and once >>>>> the quality is high enough, make a proposal to add it to the standard >>>>> Python libraries. We do need a standard library for contracts, otherwise >>>>> projects with conflicting contract libraries can not integrate (e.g., the >>>>> contracts can not be inherited between two different contract libraries). >>>>> > >>>>> > So far, the most important bits have been implemented in icontract: >>>>> > >>>>> > Preconditions, postconditions, class invariants >>>>> > Inheritance of the contracts (including strengthening and weakening >>>>> of the inherited contracts) >>>>> > Informative violation messages (including information about the >>>>> values involved in the contract condition) >>>>> > Sphinx extension to include contracts in the automatically generated >>>>> documentation (sphinx-icontract) >>>>> > Linter to statically check that the arguments of the conditions are >>>>> correct (pyicontract-lint) >>>>> > >>>>> > We are successfully using it in our code base and have been quite >>>>> happy about the implementation so far. >>>>> > >>>>> > There is one bit still missing: accessing "old" values in the >>>>> postcondition (i.e., shallow copies of the values prior to the execution of >>>>> the function). This feature is necessary in order to allow us to verify >>>>> state transitions. >>>>> > >>>>> > For example, consider a new dictionary class that has "get" and >>>>> "put" methods: >>>>> > >>>>> > from typing import Optional >>>>> > >>>>> > from icontract import post >>>>> > >>>>> > class NovelDict: >>>>> > def length(self)->int: >>>>> > ... >>>>> > >>>>> > def get(self, key: str) -> Optional[str]: >>>>> > ... >>>>> > >>>>> > @post(lambda self, key, value: self.get(key) == value) >>>>> > @post(lambda self, key: old(self.get(key)) is None and >>>>> old(self.length()) + 1 == self.length(), >>>>> > "length increased with a new key") >>>>> > @post(lambda self, key: old(self.get(key)) is not None and >>>>> old(self.length()) == self.length(), >>>>> > "length stable with an existing key") >>>>> > def put(self, key: str, value: str) -> None: >>>>> > ... >>>>> > >>>>> > How could we possible implement this "old" function? >>>>> > >>>>> > Here is my suggestion. I'd introduce a decorator "before" that would >>>>> allow you to store whatever values in a dictionary object "old" (i.e. an >>>>> object whose properties correspond to the key/value pairs). The "old" is >>>>> then passed to the condition. 
Here is it in code: >>>>> > >>>>> > # omitted contracts for brevity >>>>> > class NovelDict: >>>>> > def length(self)->int: >>>>> > ... >>>>> > >>>>> > # omitted contracts for brevity >>>>> > def get(self, key: str) -> Optional[str]: >>>>> > ... >>>>> > >>>>> > @before(lambda self, key: {"length": self.length(), "get": >>>>> self.get(key)}) >>>>> > @post(lambda self, key, value: self.get(key) == value) >>>>> > @post(lambda self, key, old: old.get is None and old.length + 1 >>>>> == self.length(), >>>>> > "length increased with a new key") >>>>> > @post(lambda self, key, old: old.get is not None and old.length >>>>> == self.length(), >>>>> > "length stable with an existing key") >>>>> > def put(self, key: str, value: str) -> None: >>>>> > ... >>>>> > >>>>> > The linter would statically check that all attributes accessed in >>>>> "old" have to be defined in the decorator "before" so that attribute errors >>>>> would be caught early. The current implementation of the linter is fast >>>>> enough to be run at save time so such errors should usually not happen with >>>>> a properly set IDE. >>>>> > >>>>> > "before" decorator would also have "enabled" property, so that you >>>>> can turn it off (e.g., if you only want to run a postcondition in testing). >>>>> The "before" decorators can be stacked so that you can also have a more >>>>> fine-grained control when each one of them is running (some during test, >>>>> some during test and in production). The linter would enforce that before's >>>>> "enabled" is a disjunction of all the "enabled"'s of the corresponding >>>>> postconditions where the old value appears. >>>>> > >>>>> > Is this a sane approach to "old" values? Any alternative approach >>>>> you would prefer? What about better naming? Is "before" a confusing name? >>>>> >>>>> The dict can be splatted into the postconditions, so that no special >>>>> name is required. This would require either that the lambdas handle >>>>> **kws, or that their caller inspect them to see what names they take. >>>>> Perhaps add a function to functools which only passes kwargs that fit. >>>>> Then the precondition mechanism can pass `self`, `key`, and `value` as >>>>> kwargs instead of args. >>>>> >>>>> For functions that have *args and **kwargs, it may be necessary to >>>>> pass them to the conditions as args and kwargs instead. >>>>> >>>>> The name "before" is a confusing name. It's not just something that >>>>> happens before. It's really a pre-`let`, adding names to the scope of >>>>> things after it, but with values taken before the function call. Based >>>>> on that description, other possible names are `prelet`, `letbefore`, >>>>> `predef`, `defpre`, `beforescope`. Better a name that is clearly >>>>> confusing than one that is obvious but misleading. >>>>> >>>>> By the way, should the first postcondition be `self.get(key) is >>>>> value`, checking for identity rather than equality? >>>>> >>>> _______________________________________________ >>>> Python-ideas mailing list >>>> Python-ideas at python.org >>>> https://mail.python.org/mailman/listinfo/python-ideas >>>> Code of Conduct: http://python.org/psf/codeofconduct/ >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gadgetsteve at live.co.uk Sat Sep 29 14:25:37 2018 From: gadgetsteve at live.co.uk (Steve Barnes) Date: Sat, 29 Sep 2018 18:25:37 +0000 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: <20180929130008.GX19437@ando.pearwood.info> References: <20180929130008.GX19437@ando.pearwood.info> Message-ID: On 29/09/2018 14:00, Steven D'Aprano wrote: > On Sat, Sep 29, 2018 at 06:31:46AM +0000, Steve Barnes wrote: > >> One of the strengths of the IEEE float, (to set against its many >> weaknesses), > > o_O > > Since I'm old enough to (just barely) remember the chaos and horror of > numeric programming before IEEE-754, I find that comment rather > shocking. > > I'm curious what you think those weaknesses are. > I am likewise old enough - the weaknesses that I am thinking of include: - Variable precision - Non-linearity around zero - Common real world, (decimal), values being irrational, 0.1 anybody? - In many cases being very limited range (then number of people who get caught out on statistical calculations involving permutations). > >> is the presence of the magic value NaN. Not a Number, or >> NaA, is especially useful in that it is a valid value in any >> mathematical operation, (always returning NaN), or comparison, (always >> returning False). > > Almost so. But the exceptions don't matter for this discussion. Indeed, (and possibly those exceptions should be matched in many cases). > > >> In functional programming this is especially useful as >> it allows the chain to complete after an error while retaining the fact >> that an error occurred, (as we got NaN). > > Not just functional programming. > True, functional programming was the most obvious to spring to mind but in a lot of my code I need to return some value even in the case of an exception so as to allow the system to carry on running. > > [...] >> I think that it should be relatively simple to extend the Python integer >> class to have a NaN flag, possibly by having a bit length of 0, and have >> it follow the same rules for the handling of floating point NaN, i.e. >> any mathematical operation on an iNaN returns an iNaN and any comparison >> with one returns False. > > Alas, a bit length of 0 is zero: > > py> (0).bit_length() > 0 > I still think that iNaN.bit_length() should return 0 but obviously that would not be enough in itself to denote iNaN. > I too have often wished that integers would include three special > values, namely plus and minus infinity and a NAN. On the other hand, > that would add some complexity to the type, and make them harder to > learn and use. Perhaps it would be better to subclass int and put the > special values in the subclass. > I thought of including pINF & nINF in the original email and then decided to take on a single dragon at a time. > A subclass could be written in Python, and would act as a good proof of > concept, demonstrating the behaviour of iNAN. For example, what would > you expect iNAN & 1 to return? > I am thinking of trying to put together an overload of integer with iNaN overloads for all of the dunder operations as a proof of concept. > Back in the 1990s, Apple Computers introduced their implementation of > IEEE-754, called "SANE" (Standard Apple Numeric Environment). It > included a 64-bit integer format "comp", which included a single NAN > value (but no infinities), so there is definately prior art to having an > integer iNAN value. > I had forgotten this. 
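A minimal proof of concept along those lines might look like the sketch below. This is only an illustration of the idea, not a proposed implementation: the name INaNType is made up, only a few dunder methods are shown, and a real version would have to cover the full numeric protocol.

    class INaNType(int):
        """Singleton integer-NaN sketch: an int subclass that absorbs arithmetic."""

        _instance = None

        def __new__(cls):
            # Singleton, so every arithmetic result can return the same object.
            if cls._instance is None:
                cls._instance = super().__new__(cls, 0)
            return cls._instance

        def __repr__(self):
            return "iNaN"

        def bit_length(self):
            return 0

        # Arithmetic involving iNaN yields iNaN (only a handful of dunders shown).
        def _absorb(self, other):
            return self
        __add__ = __radd__ = __sub__ = __rsub__ = _absorb
        __mul__ = __rmul__ = __floordiv__ = __rfloordiv__ = _absorb

        # Comparisons with iNaN are always False, like float('nan').
        def _false(self, other):
            return False
        __eq__ = __lt__ = __le__ = __gt__ = __ge__ = _false
        __hash__ = int.__hash__

    iNaN = INaNType()

    assert isinstance(iNaN, int)
    assert repr(iNaN + 5) == "iNaN"
    assert not (iNaN == iNaN)

One obvious wart of piggybacking on the int value 0 is that bool(iNaN) is False, which a serious implementation would have to address.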
> Likewise, R includes a special NA value, distinct from IEEE-754 NANs, > which we could think of as something very vaguely like an integer NAN. > > https://stat.ethz.ch/R-manual/R-devel/library/base/html/NA.html > > Not done enough R programming to have come across it. > Thanks! -- Steve (Gadget) Barnes Any opinions in this message are my personal opinions and do not reflect those of my employer. --- This email has been checked for viruses by AVG. https://www.avg.com From gadgetsteve at live.co.uk Sat Sep 29 14:38:10 2018 From: gadgetsteve at live.co.uk (Steve Barnes) Date: Sat, 29 Sep 2018 18:38:10 +0000 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: References: Message-ID: On 29/09/2018 09:56, Serhiy Storchaka wrote: > 29.09.18 11:43, Steve Barnes ????: >> On 29/09/2018 08:50, Serhiy Storchaka wrote: >>> Python is dynamically typed language. What is such processing that would >>> work with iNaN, but doesn't work with float('nan')? >>> >> One simplistic example would be print(int(float('nan'))) (gives a >> ValueError) while print(int(iNaN)) should give 'nan' or maybe 'inan'. > > Why do you convert to int when you need a string representation? Just > print(float('nan')). > I converted to int because I needed a whole number, this was intended to represent some more complex process where a value is converted to a whole number down in the depths of the processing. -- Steve (Gadget) Barnes Any opinions in this message are my personal opinions and do not reflect those of my employer. --- This email has been checked for viruses by AVG. https://www.avg.com From marko.ristin at gmail.com Sat Sep 29 14:56:59 2018 From: marko.ristin at gmail.com (Marko Ristin-Kaufmann) Date: Sat, 29 Sep 2018 20:56:59 +0200 Subject: [Python-ideas] "old" values in postconditions In-Reply-To: References: <13186373-6FB6-4C8B-A8D8-2C3E028CDC3D@gmail.com> <3C33B6FF-FC19-47D6-AD2A-FC0B17C50A8D@gmail.com> <0061278F-4243-42BD-945D-A93B4A0FC21D@gmail.com> Message-ID: Hi James, Just a PS to the previous syntax: with contracts: with preconditions: assert arg1 < arg2 with snapshot as S: S.var1 = some_func(arg1) with postconditions, \ result: # result would be annotated with "# type:" if return type is annotated. assert arg1 < S.var1 < arg2 For classes: class SomeClass: with invariants, selfie as self: # type: SomeClass assert 0 < self.x < sqrt(self.x) The advantage: no variable shadowing, valid python code, autocomplete works in Pycharm, even mypy could be made to work. "With contracts" makes it easier and less error prone to group preconditions and postconditions. The converter would check that there is no "with contracts" in the body of the function except in the first statement and the same for class invariants. icontract.dummies would provide these dummy context managers (all of them would raise exceptions on enter so that the code can not run by accident). The converter would add/remove these imports automatically. Cheers, Marko Le sam. 29 sept. 2018 ? 17:55, Marko Ristin-Kaufmann a ?crit : > Hi James, > What about a tool that we discussed, to convert contracts back and forth > to readable form on IDe save/load with the following syntax: > > def some_func(arg1:int, arg2:int)-> int: > # typing on the phone so no indent > With requiring: > Assert arg1 < arg2, "some message" > With snapshotting: > Var1= some_func(arg1) > > With ensuring: > If some_enabling_condition: > Assert arg1 + arg2 < var1 > > If no snapshot, with ensuring is dedented. 
Only simple assignments allowed > in snapshots, only asserts and ifs allowed in require/ensure blocks. > Result is reserved for the result of the function. > > No statements allowed in require/ensure. > > The same with class invariants. > > Works with ast and autocomplete in pycharm. > > Sorry for the hasty message :) > Marko > > > > Le sam. 29 sept. 2018 ? 07:36, Marko Ristin-Kaufmann < > marko.ristin at gmail.com> a ?crit : > >> Hi James, >> I'm a bit short on time today, and would need some more time and >> attention to understand the proposal you wrote. I'll try to come back to >> you tomorrow. >> >> In any case, I need to refactor icontract's decorators to use conditions >> like lambda P: and lambda P, result: first before adding snapshot >> functionality. >> >> What about having @snapshot_with and @snapshot? @Snapshot_with does what >> you propose and @snapshot expects a lambda P, identifier: ? >> >> After the refactoring, maybe the same could be done for defining >> contracts as well? (Requires and requires_that?) >> >> If the documentation is clear, I'd expect the user to be able to >> distinguish the two. The first approach is shorter, and uses magic, but >> fails in some rare situations. The other method is more verbose, but always >> works. >> >> Cheers, >> Marko >> >> Le sam. 29 sept. 2018 ? 00:35, James Lu a ?crit : >> >>> I am fine with your proposed syntax. It?s certainly lucid. Perhaps it >>> would be a good idea to get people accustomed to ?non-magic? syntax. >>> >>> I still have a feeling that most developers would like to store the >>> state in many different custom ways. >>> >>> Please explain. (Expressions like thunk(all)(a == b for a, b in >>> P.arg.meth()) would be valid.) >>> >>> I'm thinking mostly about all the edge cases which we would not be able >>> to cover (and how complex that would be to cover them). >>> >>> >>> Except for a > b > c being one flat expression with 5 members, it seems >>> fairly easy to recreate an AST, which can then be compiled down to a code >>> object. The code object can be fun with a custom ?locals()? >>> >>> Below is my concept code for such a P object. >>> >>> from ast import * >>> >>> # not done: enforce Singleton property on EmptySymbolType >>> >>> class EmptySymbolType(object): ... >>> >>> EmptySymbol = EmptySymbolType() # empty symbols are placeholders >>> >>> class MockP(object): >>> >>> # "^" is xor >>> >>> @icontract.pre(lambda symbol, astnode: (symbol is None) ^ (astnode >>> is None)) >>> >>> def __init__(self, symbol=None, value=EmptySymbol, astnode=None, >>> initsymtable=(,)): >>> >>> self.symtable = dict(initsymtable) >>> >>> if symbol: >>> >>> self.expr = Expr(value=Name(id=symbol, ctx=Load())) >>> >>> self.symtable = {symbol: value} >>> >>> else: >>> >>> self.expr = astnode >>> >>> self.frozen = False >>> >>> def __add__(self, other): >>> >>> wrapped = MockP.wrap_value(other) >>> >>> return MockP(astnode=Expr(value=BinOp(self.expr, Add(), >>> wrapped.expr), initsymtable={**self.symtable, **wrapped.symtable}) >>> >>> def compile(self): ... >>> >>> def freeze(self): >>> >>> # frozen objects wouldn?t have an overrided getattr, allowing >>> for icontract to manipulate the MockP object using its public interface >>> >>> self.frozen = True >>> >>> @classmethod >>> >>> def wrap_value(cls, obj): >>> >>> # create a MockP object from a value. Generate a random >>> identifier and set that as the key in symtable, the AST node is the name of >>> that identifier, retrieving its value through simple expression evaluation. >>> >>> ... 
>>> >>> >>> thunk = MockP.wrap_value >>> >>> P = MockP('P') >>> >>> # elsewhere: ensure P is only accessed via valid ?dot attribute access? >>> inside @snapshot so contracts fail early, or don?t and allow Magic like >>> __dict__ to occur on P. >>> >>> On Sep 27, 2018, at 9:49 PM, Marko Ristin-Kaufmann < >>> marko.ristin at gmail.com> wrote: >>> >>> Hi James, >>> >>> I still have a feeling that most developers would like to store the >>> state in many different custom ways. I see also thunk and snapshot with >>> wrapper objects to be much more complicated to implement and maintain; I'm >>> thinking mostly about all the edge cases which we would not be able to >>> cover (and how complex that would be to cover them). Then the linters need >>> also to work around such wrappers... It might also scare users off since it >>> looks like too much magic. Another concern I also have is that it's >>> probably very hard to integrate these wrappers with mypy later -- but I >>> don't really have a clue about that, only my gut feeling? >>> >>> What about we accepted to repeat "lambda P, " prefix, and have something >>> like this: >>> >>> @snapshot( >>> lambda P, some_name: len(P.some_property), >>> lambda P, another_name: hash(P.another_property) >>> ) >>> >>> It's not too verbose for me and you can still explain in three-four >>> sentences what happens below the hub in the library's docs. A >>> pycharm/pydev/vim/emacs plugins could hide the verbose parts. >>> >>> I performed a small experiment to test how this solution plays with >>> pylint and it seems OK that arguments are not used in lambdas. >>> >>> Cheers, >>> Marko >>> >>> >>> On Thu, 27 Sep 2018 at 12:27, James Lu wrote: >>> >>>> Why couldn?t we record the operations done to a special object and >>>> replay them? >>>> >>>> Actually, I think there is probably no way around a decorator that >>>>> captures/snapshots the data before the function call with a lambda (or even >>>>> a separate function). "Old" construct, if we are to parse it somehow from >>>>> the condition function, would limit us only to shallow copies (and be >>>>> complex to implement as soon as we are capturing out-of-argument values >>>>> such as globals *etc.)*. Moreove, what if we don't need shallow >>>>> copies? I could imagine a dozen of cases where shallow copy is not what the >>>>> programmer wants: for example, s/he might need to make deep copies, hash or >>>>> otherwise transform the input data to hold only part of it instead of >>>>> copying (*e.g., *so as to allow equality check without a double copy >>>>> of the data, or capture only the value of certain property transformed in >>>>> some way). >>>>> >>>>> >>>> from icontract import snapshot, P, thunk >>>> @snapshot(some_identifier=P.self.some_method(P.some_argument.some_attr)) >>>> >>>> P is an object of our own type, let?s call the type MockP. MockP >>>> returns new MockP objects when any operation is done to it. MockP * MockP = >>>> MockP. MockP.attr = MockP. MockP objects remember all the operations done >>>> to them, and allow the owner of a MockP object to re-apply the same >>>> operations >>>> >>>> ?thunk? converts a function or object or class to a MockP object, >>>> storing the function or object for when the operation is done. >>>> >>>> thunk(function)() >>>> >>>> Of course, you could also thunk objects like so: thunk(3) * P.number. >>>> (Though it might be better to keep the 3 after P.number in this case so >>>> P.number?s __mult__ would be invoked before 3?s __mult__ is invokes. 
>>>> >>>> >>>> In most cases, you?d save any operations that can be done on a copy of >>>> the data as generated by @snapshot in @postcondiion. thunk is for rare >>>> scenarios where 1) it?s hard to capture the state, for example an object >>>> that manages network state (or database connectivity etc) and whose stage >>>> can only be read by an external classmethod 2) you want to avoid using >>>> copy.deepcopy. >>>> >>>> I?m sure there?s some way to override isinstance through a meta class >>>> or dunder subclasshook. >>>> >>>> I suppose this mocking method could be a shorthand for when you don?t >>>> need the full power of a lambda. It?s arguably more succinct and readable, >>>> though YMMV. >>>> >>>> I look forward to reading your opinion on this and any ideas you might >>>> have. >>>> >>>> On Sep 26, 2018, at 3:56 PM, James Lu wrote: >>>> >>>> Hi Marko, >>>> >>>> Actually, following on #A4, you could also write those as multiple >>>> decorators: >>>> @snpashot(lambda _, some_identifier: some_func(_, >>>> some_argument.some_attr) >>>> @snpashot(lambda _, other_identifier: other_func(_.self)) >>>> >>>> Yes, though if we?re talking syntax using kwargs would probably be >>>> better. >>>> Using ?P? instead of ?_?: (I agree that _ smells of ignored arguments) >>>> >>>> @snapshot(some_identifier=lambda P: ..., some_identifier2=lambda P: ...) >>>> >>>> Kwargs has the advantage that you can extend multiple lines without >>>> repeating @snapshot, though many lines of @capture would probably be more >>>> intuitive since each decorator captures one variable. >>>> >>>> Why uppercase "P" and not lowercase (uppercase implies a constant for >>>> me)? >>>> >>>> To me, the capital letters are more prominent and explicit- easier to >>>> see when reading code. It also implies its a constant for you- you >>>> shouldn?t be modifying it, because then you?d be interfering with the >>>> function itself. >>>> >>>> Side node: maybe it would be good to have an @icontract.nomutate >>>> (probably use a different name, maybe @icontract.readonly) that makes sure >>>> a method doesn?t mutate its own __dict__ (and maybe the __dict__ of the >>>> members of its __dict__). It wouldn?t be necessary to put the decorator on >>>> every read only function, just the ones your worried might mutate. >>>> >>>> Maybe a @icontract.nomutate(param=?paramname?) that ensures the >>>> __dict__ of all members of the param name have the same equality or >>>> identity before and after. The semantics would need to be worked out. >>>> >>>> On Sep 26, 2018, at 8:58 AM, Marko Ristin-Kaufmann < >>>> marko.ristin at gmail.com> wrote: >>>> >>>> Hi James, >>>> >>>> Actually, following on #A4, you could also write those as multiple >>>> decorators: >>>> @snpashot(lambda _, some_identifier: some_func(_, >>>> some_argument.some_attr) >>>> @snpashot(lambda _, other_identifier: other_func(_.self)) >>>> >>>> Am I correct? >>>> >>>> "_" looks a bit hard to read for me (implying ignored arguments). >>>> >>>> Why uppercase "P" and not lowercase (uppercase implies a constant for >>>> me)? Then "O" for "old" and "P" for parameters in a condition: >>>> @post(lambda O, P: ...) >>>> ? >>>> >>>> It also has the nice property that it follows both the temporal and the >>>> alphabet order :) >>>> >>>> On Wed, 26 Sep 2018 at 14:30, James Lu wrote: >>>> >>>>> I still prefer snapshot, though capture is a good name too. We could >>>>> use generator syntax and inspect the argument names. >>>>> >>>>> Instead of ?a?, perhaps use ?_?. Or maybe use ?A.?, for arguments. 
>>>>> Some people might prefer ?P? for parameters, since parameters sometimes >>>>> means the value received while the argument means the value passed. >>>>> >>>>> (#A1) >>>>> >>>>> from icontract import snapshot, __ >>>>> @snapshot(some_func(_.some_argument.some_attr) for some_identifier, _ >>>>> in __) >>>>> >>>>> Or (#A2) >>>>> >>>>> @snapshot(some_func(some_argument.some_attr) for some_identifier, _, >>>>> some_argument in __) >>>>> >>>>> ? >>>>> Or (#A3) >>>>> >>>>> @snapshot(lambda some_argument,_,some_identifier: >>>>> some_func(some_argument.some_attr)) >>>>> >>>>> Or (#A4) >>>>> >>>>> @snapshot(lambda _,some_identifier: >>>>> some_func(_.some_argument.some_attr)) >>>>> @snapshot(lambda _,some_identifier, other_identifier: >>>>> some_func(_.some_argument.some_attr), other_func(_.self)) >>>>> >>>>> I like #A4 the most because it?s fairly DRY and avoids the extra >>>>> punctuation of >>>>> >>>>> @capture(lambda a: {"some_identifier": some_func(a.some_argument.some_attr)}) >>>>> >>>>> >>>>> On Sep 26, 2018, at 12:23 AM, Marko Ristin-Kaufmann < >>>>> marko.ristin at gmail.com> wrote: >>>>> >>>>> Hi, >>>>> >>>>> Franklin wrote: >>>>> >>>>>> The name "before" is a confusing name. It's not just something that >>>>>> happens before. It's really a pre-`let`, adding names to the scope of >>>>>> things after it, but with values taken before the function call. Based >>>>>> on that description, other possible names are `prelet`, `letbefore`, >>>>>> `predef`, `defpre`, `beforescope`. Better a name that is clearly >>>>>> confusing than one that is obvious but misleading. >>>>> >>>>> >>>>> James wrote: >>>>> >>>>>> I suggest that instead of ?@before? it?s ?@snapshot? and instead of ? >>>>>> old? it?s ?snapshot?. >>>>> >>>>> >>>>> I like "snapshot", it's a bit clearer than prefixing/postfixing verbs >>>>> with "pre" which might be misread (*e.g., *"prelet" has a meaning in >>>>> Slavic languages and could be subconsciously misread, "predef" implies to >>>>> me a pre-*definition* rather than prior-to-definition , "beforescope" >>>>> is very clear for me, but it might be confusing for others as to what it >>>>> actually refers to ). What about "@capture" (7 letters for captures *versus >>>>> *8 for snapshot)? I suppose "@let" would be playing with fire if >>>>> Python with conflicting new keywords since I assume "let" to be one of the >>>>> candidates. >>>>> >>>>> Actually, I think there is probably no way around a decorator that >>>>> captures/snapshots the data before the function call with a lambda (or even >>>>> a separate function). "Old" construct, if we are to parse it somehow from >>>>> the condition function, would limit us only to shallow copies (and be >>>>> complex to implement as soon as we are capturing out-of-argument values >>>>> such as globals *etc.)*. Moreove, what if we don't need shallow >>>>> copies? I could imagine a dozen of cases where shallow copy is not what the >>>>> programmer wants: for example, s/he might need to make deep copies, hash or >>>>> otherwise transform the input data to hold only part of it instead of >>>>> copying (*e.g., *so as to allow equality check without a double copy >>>>> of the data, or capture only the value of certain property transformed in >>>>> some way). >>>>> >>>>> I'd still go with the dictionary to allow for this extra freedom. We >>>>> could have a convention: "a" denotes to the current arguments, and "b" >>>>> denotes the captured values. It might make an interesting hint that we put >>>>> "b" before "a" in the condition. 
You could also interpret "b" as "before" >>>>> and "a" as "after", but also "a" as "arguments". >>>>> >>>>> @capture(lambda a: {"some_identifier": some_func(a.some_argument.some_attr)}) >>>>> @post(lambda b, a, result: b.some_identifier > result + a.another_argument.another_attr) >>>>> def some_func(some_argument: SomeClass, another_argument: AnotherClass) -> SomeResult: >>>>> ... >>>>> >>>>> "b" can be omitted if it is not used. Under the hub, all the arguments >>>>> to the condition would be passed by keywords. >>>>> >>>>> In case of inheritance, captures would be inherited as well. Hence the >>>>> library would check at run-time that the returned dictionary with captured >>>>> values has no identifier that has been already captured, and the linter >>>>> checks that statically, before running the code. Reading values captured in >>>>> the parent at the code of the child class might be a bit hard -- but that >>>>> is case with any inherited methods/properties. In documentation, I'd list >>>>> all the captures of both ancestor and the current class. >>>>> >>>>> I'm looking forward to reading your opinion on this and alternative >>>>> suggestions :) >>>>> Marko >>>>> >>>>> On Tue, 25 Sep 2018 at 18:12, Franklin? Lee < >>>>> leewangzhong+python at gmail.com> wrote: >>>>> >>>>>> On Sun, Sep 23, 2018 at 2:05 AM Marko Ristin-Kaufmann >>>>>> wrote: >>>>>> > >>>>>> > Hi, >>>>>> > >>>>>> > (I'd like to fork from a previous thread, "Pre-conditions and >>>>>> post-conditions", since it got long and we started discussing a couple of >>>>>> different things. Let's discuss in this thread the implementation of a >>>>>> library for design-by-contract and how to push it forward to hopefully add >>>>>> it to the standard library one day.) >>>>>> > >>>>>> > For those unfamiliar with contracts and current state of the >>>>>> discussion in the previous thread, here's a short summary. The discussion >>>>>> started by me inquiring about the possibility to add design-by-contract >>>>>> concepts into the core language. The idea was rejected by the participants >>>>>> mainly because they thought that the merit of the feature does not merit >>>>>> its costs. This is quite debatable and seems to reflect many a discussion >>>>>> about design-by-contract in general. Please see the other thread, "Why is >>>>>> design-by-contract not widely adopted?" if you are interested in that >>>>>> debate. >>>>>> > >>>>>> > We (a colleague of mine and I) decided to implement a library to >>>>>> bring design-by-contract to Python since we don't believe that the concept >>>>>> will make it into the core language anytime soon and we needed badly a tool >>>>>> to facilitate our work with a growing code base. >>>>>> > >>>>>> > The library is available at http://github.com/Parquery/icontract. >>>>>> The hope is to polish it so that the wider community could use it and once >>>>>> the quality is high enough, make a proposal to add it to the standard >>>>>> Python libraries. We do need a standard library for contracts, otherwise >>>>>> projects with conflicting contract libraries can not integrate (e.g., the >>>>>> contracts can not be inherited between two different contract libraries). 
>>>>>> > >>>>>> > So far, the most important bits have been implemented in icontract: >>>>>> > >>>>>> > Preconditions, postconditions, class invariants >>>>>> > Inheritance of the contracts (including strengthening and weakening >>>>>> of the inherited contracts) >>>>>> > Informative violation messages (including information about the >>>>>> values involved in the contract condition) >>>>>> > Sphinx extension to include contracts in the automatically >>>>>> generated documentation (sphinx-icontract) >>>>>> > Linter to statically check that the arguments of the conditions are >>>>>> correct (pyicontract-lint) >>>>>> > >>>>>> > We are successfully using it in our code base and have been quite >>>>>> happy about the implementation so far. >>>>>> > >>>>>> > There is one bit still missing: accessing "old" values in the >>>>>> postcondition (i.e., shallow copies of the values prior to the execution of >>>>>> the function). This feature is necessary in order to allow us to verify >>>>>> state transitions. >>>>>> > >>>>>> > For example, consider a new dictionary class that has "get" and >>>>>> "put" methods: >>>>>> > >>>>>> > from typing import Optional >>>>>> > >>>>>> > from icontract import post >>>>>> > >>>>>> > class NovelDict: >>>>>> > def length(self)->int: >>>>>> > ... >>>>>> > >>>>>> > def get(self, key: str) -> Optional[str]: >>>>>> > ... >>>>>> > >>>>>> > @post(lambda self, key, value: self.get(key) == value) >>>>>> > @post(lambda self, key: old(self.get(key)) is None and >>>>>> old(self.length()) + 1 == self.length(), >>>>>> > "length increased with a new key") >>>>>> > @post(lambda self, key: old(self.get(key)) is not None and >>>>>> old(self.length()) == self.length(), >>>>>> > "length stable with an existing key") >>>>>> > def put(self, key: str, value: str) -> None: >>>>>> > ... >>>>>> > >>>>>> > How could we possible implement this "old" function? >>>>>> > >>>>>> > Here is my suggestion. I'd introduce a decorator "before" that >>>>>> would allow you to store whatever values in a dictionary object "old" (i.e. >>>>>> an object whose properties correspond to the key/value pairs). The "old" is >>>>>> then passed to the condition. Here is it in code: >>>>>> > >>>>>> > # omitted contracts for brevity >>>>>> > class NovelDict: >>>>>> > def length(self)->int: >>>>>> > ... >>>>>> > >>>>>> > # omitted contracts for brevity >>>>>> > def get(self, key: str) -> Optional[str]: >>>>>> > ... >>>>>> > >>>>>> > @before(lambda self, key: {"length": self.length(), "get": >>>>>> self.get(key)}) >>>>>> > @post(lambda self, key, value: self.get(key) == value) >>>>>> > @post(lambda self, key, old: old.get is None and old.length + 1 >>>>>> == self.length(), >>>>>> > "length increased with a new key") >>>>>> > @post(lambda self, key, old: old.get is not None and old.length >>>>>> == self.length(), >>>>>> > "length stable with an existing key") >>>>>> > def put(self, key: str, value: str) -> None: >>>>>> > ... >>>>>> > >>>>>> > The linter would statically check that all attributes accessed in >>>>>> "old" have to be defined in the decorator "before" so that attribute errors >>>>>> would be caught early. The current implementation of the linter is fast >>>>>> enough to be run at save time so such errors should usually not happen with >>>>>> a properly set IDE. >>>>>> > >>>>>> > "before" decorator would also have "enabled" property, so that you >>>>>> can turn it off (e.g., if you only want to run a postcondition in testing). 
>>>>>> The "before" decorators can be stacked so that you can also have a more >>>>>> fine-grained control when each one of them is running (some during test, >>>>>> some during test and in production). The linter would enforce that before's >>>>>> "enabled" is a disjunction of all the "enabled"'s of the corresponding >>>>>> postconditions where the old value appears. >>>>>> > >>>>>> > Is this a sane approach to "old" values? Any alternative approach >>>>>> you would prefer? What about better naming? Is "before" a confusing name? >>>>>> >>>>>> The dict can be splatted into the postconditions, so that no special >>>>>> name is required. This would require either that the lambdas handle >>>>>> **kws, or that their caller inspect them to see what names they take. >>>>>> Perhaps add a function to functools which only passes kwargs that fit. >>>>>> Then the precondition mechanism can pass `self`, `key`, and `value` as >>>>>> kwargs instead of args. >>>>>> >>>>>> For functions that have *args and **kwargs, it may be necessary to >>>>>> pass them to the conditions as args and kwargs instead. >>>>>> >>>>>> The name "before" is a confusing name. It's not just something that >>>>>> happens before. It's really a pre-`let`, adding names to the scope of >>>>>> things after it, but with values taken before the function call. Based >>>>>> on that description, other possible names are `prelet`, `letbefore`, >>>>>> `predef`, `defpre`, `beforescope`. Better a name that is clearly >>>>>> confusing than one that is obvious but misleading. >>>>>> >>>>>> By the way, should the first postcondition be `self.get(key) is >>>>>> value`, checking for identity rather than equality? >>>>>> >>>>> _______________________________________________ >>>>> Python-ideas mailing list >>>>> Python-ideas at python.org >>>>> https://mail.python.org/mailman/listinfo/python-ideas >>>>> Code of Conduct: http://python.org/psf/codeofconduct/ >>>>> >>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From storchaka at gmail.com Sat Sep 29 15:05:39 2018 From: storchaka at gmail.com (Serhiy Storchaka) Date: Sat, 29 Sep 2018 22:05:39 +0300 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: References: Message-ID: 29.09.18 21:38, Steve Barnes ????: > On 29/09/2018 09:56, Serhiy Storchaka wrote: >> 29.09.18 11:43, Steve Barnes ????: >>> On 29/09/2018 08:50, Serhiy Storchaka wrote: >>>> Python is dynamically typed language. What is such processing that would >>>> work with iNaN, but doesn't work with float('nan')? >>>> >>> One simplistic example would be print(int(float('nan'))) (gives a >>> ValueError) while print(int(iNaN)) should give 'nan' or maybe 'inan'. >> >> Why do you convert to int when you need a string representation? Just >> print(float('nan')). >> I converted to int because I needed a whole number, this was intended to > represent some more complex process where a value is converted to a > whole number down in the depths of the processing. float('nan') is a number (in Python sense). No need to convert it. From marko.ristin at gmail.com Sat Sep 29 15:13:21 2018 From: marko.ristin at gmail.com (Marko Ristin-Kaufmann) Date: Sat, 29 Sep 2018 21:13:21 +0200 Subject: [Python-ideas] "old" values in postconditions In-Reply-To: References: <13186373-6FB6-4C8B-A8D8-2C3E028CDC3D@gmail.com> <3C33B6FF-FC19-47D6-AD2A-FC0B17C50A8D@gmail.com> <0061278F-4243-42BD-945D-A93B4A0FC21D@gmail.com> Message-ID: P.p.s. 
to raise a custom exception: if not (arg1 < S.var1 < arg2): "Some description" raise SomeException(arg1, S.var1, arg2) The converter enforces that only "if not" statement is allowed, only a string description (optional) followed by a raise in the body of if-statement. This is later at back-conversion to python easy to transform into a lambda. Cheers, Marko Le sam. 29 sept. 2018 ? 20:56, Marko Ristin-Kaufmann a ?crit : > Hi James, > Just a PS to the previous syntax: > > with contracts: > with preconditions: > assert arg1 < arg2 > > with snapshot as S: > S.var1 = some_func(arg1) > with postconditions, \ > result: > # result would be annotated with "# type:" if return type > is annotated. > assert arg1 < S.var1 < arg2 > > For classes: > class SomeClass: > with invariants, > selfie as self: # type: SomeClass > assert 0 < self.x < sqrt(self.x) > > The advantage: no variable shadowing, valid python code, autocomplete > works in Pycharm, even mypy could be made to work. "With contracts" makes > it easier and less error prone to group preconditions and postconditions. > The converter would check that there is no "with contracts" in the body of > the function except in the first statement and the same for class > invariants. > > icontract.dummies would provide these dummy context managers (all of them > would raise exceptions on enter so that the code can not run by accident). > The converter would add/remove these imports automatically. > > Cheers, > Marko > > > Le sam. 29 sept. 2018 ? 17:55, Marko Ristin-Kaufmann < > marko.ristin at gmail.com> a ?crit : > >> Hi James, >> What about a tool that we discussed, to convert contracts back and forth >> to readable form on IDe save/load with the following syntax: >> >> def some_func(arg1:int, arg2:int)-> int: >> # typing on the phone so no indent >> With requiring: >> Assert arg1 < arg2, "some message" >> With snapshotting: >> Var1= some_func(arg1) >> >> With ensuring: >> If some_enabling_condition: >> Assert arg1 + arg2 < var1 >> >> If no snapshot, with ensuring is dedented. Only simple assignments >> allowed in snapshots, only asserts and ifs allowed in require/ensure >> blocks. Result is reserved for the result of the function. >> >> No statements allowed in require/ensure. >> >> The same with class invariants. >> >> Works with ast and autocomplete in pycharm. >> >> Sorry for the hasty message :) >> Marko >> >> >> >> Le sam. 29 sept. 2018 ? 07:36, Marko Ristin-Kaufmann < >> marko.ristin at gmail.com> a ?crit : >> >>> Hi James, >>> I'm a bit short on time today, and would need some more time and >>> attention to understand the proposal you wrote. I'll try to come back to >>> you tomorrow. >>> >>> In any case, I need to refactor icontract's decorators to use conditions >>> like lambda P: and lambda P, result: first before adding snapshot >>> functionality. >>> >>> What about having @snapshot_with and @snapshot? @Snapshot_with does what >>> you propose and @snapshot expects a lambda P, identifier: ? >>> >>> After the refactoring, maybe the same could be done for defining >>> contracts as well? (Requires and requires_that?) >>> >>> If the documentation is clear, I'd expect the user to be able to >>> distinguish the two. The first approach is shorter, and uses magic, but >>> fails in some rare situations. The other method is more verbose, but always >>> works. >>> >>> Cheers, >>> Marko >>> >>> Le sam. 29 sept. 2018 ? 00:35, James Lu a ?crit : >>> >>>> I am fine with your proposed syntax. It?s certainly lucid. 
Perhaps it >>>> would be a good idea to get people accustomed to ?non-magic? syntax. >>>> >>>> I still have a feeling that most developers would like to store the >>>> state in many different custom ways. >>>> >>>> Please explain. (Expressions like thunk(all)(a == b for a, b in >>>> P.arg.meth()) would be valid.) >>>> >>>> I'm thinking mostly about all the edge cases which we would not be able >>>> to cover (and how complex that would be to cover them). >>>> >>>> >>>> Except for a > b > c being one flat expression with 5 members, it seems >>>> fairly easy to recreate an AST, which can then be compiled down to a code >>>> object. The code object can be fun with a custom ?locals()? >>>> >>>> Below is my concept code for such a P object. >>>> >>>> from ast import * >>>> >>>> # not done: enforce Singleton property on EmptySymbolType >>>> >>>> class EmptySymbolType(object): ... >>>> >>>> EmptySymbol = EmptySymbolType() # empty symbols are placeholders >>>> >>>> class MockP(object): >>>> >>>> # "^" is xor >>>> >>>> @icontract.pre(lambda symbol, astnode: (symbol is None) ^ (astnode >>>> is None)) >>>> >>>> def __init__(self, symbol=None, value=EmptySymbol, astnode=None, >>>> initsymtable=(,)): >>>> >>>> self.symtable = dict(initsymtable) >>>> >>>> if symbol: >>>> >>>> self.expr = Expr(value=Name(id=symbol, ctx=Load())) >>>> >>>> self.symtable = {symbol: value} >>>> >>>> else: >>>> >>>> self.expr = astnode >>>> >>>> self.frozen = False >>>> >>>> def __add__(self, other): >>>> >>>> wrapped = MockP.wrap_value(other) >>>> >>>> return MockP(astnode=Expr(value=BinOp(self.expr, Add(), >>>> wrapped.expr), initsymtable={**self.symtable, **wrapped.symtable}) >>>> >>>> def compile(self): ... >>>> >>>> def freeze(self): >>>> >>>> # frozen objects wouldn?t have an overrided getattr, allowing >>>> for icontract to manipulate the MockP object using its public interface >>>> >>>> self.frozen = True >>>> >>>> @classmethod >>>> >>>> def wrap_value(cls, obj): >>>> >>>> # create a MockP object from a value. Generate a random >>>> identifier and set that as the key in symtable, the AST node is the name of >>>> that identifier, retrieving its value through simple expression evaluation. >>>> >>>> ... >>>> >>>> >>>> thunk = MockP.wrap_value >>>> >>>> P = MockP('P') >>>> >>>> # elsewhere: ensure P is only accessed via valid ?dot attribute access? >>>> inside @snapshot so contracts fail early, or don?t and allow Magic like >>>> __dict__ to occur on P. >>>> >>>> On Sep 27, 2018, at 9:49 PM, Marko Ristin-Kaufmann < >>>> marko.ristin at gmail.com> wrote: >>>> >>>> Hi James, >>>> >>>> I still have a feeling that most developers would like to store the >>>> state in many different custom ways. I see also thunk and snapshot with >>>> wrapper objects to be much more complicated to implement and maintain; I'm >>>> thinking mostly about all the edge cases which we would not be able to >>>> cover (and how complex that would be to cover them). Then the linters need >>>> also to work around such wrappers... It might also scare users off since it >>>> looks like too much magic. Another concern I also have is that it's >>>> probably very hard to integrate these wrappers with mypy later -- but I >>>> don't really have a clue about that, only my gut feeling? 
>>>> >>>> What about we accepted to repeat "lambda P, " prefix, and have >>>> something like this: >>>> >>>> @snapshot( >>>> lambda P, some_name: len(P.some_property), >>>> lambda P, another_name: hash(P.another_property) >>>> ) >>>> >>>> It's not too verbose for me and you can still explain in three-four >>>> sentences what happens below the hub in the library's docs. A >>>> pycharm/pydev/vim/emacs plugins could hide the verbose parts. >>>> >>>> I performed a small experiment to test how this solution plays with >>>> pylint and it seems OK that arguments are not used in lambdas. >>>> >>>> Cheers, >>>> Marko >>>> >>>> >>>> On Thu, 27 Sep 2018 at 12:27, James Lu wrote: >>>> >>>>> Why couldn?t we record the operations done to a special object and >>>>> replay them? >>>>> >>>>> Actually, I think there is probably no way around a decorator that >>>>>> captures/snapshots the data before the function call with a lambda (or even >>>>>> a separate function). "Old" construct, if we are to parse it somehow from >>>>>> the condition function, would limit us only to shallow copies (and be >>>>>> complex to implement as soon as we are capturing out-of-argument values >>>>>> such as globals *etc.)*. Moreove, what if we don't need shallow >>>>>> copies? I could imagine a dozen of cases where shallow copy is not what the >>>>>> programmer wants: for example, s/he might need to make deep copies, hash or >>>>>> otherwise transform the input data to hold only part of it instead of >>>>>> copying (*e.g., *so as to allow equality check without a double copy >>>>>> of the data, or capture only the value of certain property transformed in >>>>>> some way). >>>>>> >>>>>> >>>>> from icontract import snapshot, P, thunk >>>>> >>>>> @snapshot(some_identifier=P.self.some_method(P.some_argument.some_attr)) >>>>> >>>>> P is an object of our own type, let?s call the type MockP. MockP >>>>> returns new MockP objects when any operation is done to it. MockP * MockP = >>>>> MockP. MockP.attr = MockP. MockP objects remember all the operations done >>>>> to them, and allow the owner of a MockP object to re-apply the same >>>>> operations >>>>> >>>>> ?thunk? converts a function or object or class to a MockP object, >>>>> storing the function or object for when the operation is done. >>>>> >>>>> thunk(function)() >>>>> >>>>> Of course, you could also thunk objects like so: thunk(3) * P.number. >>>>> (Though it might be better to keep the 3 after P.number in this case so >>>>> P.number?s __mult__ would be invoked before 3?s __mult__ is invokes. >>>>> >>>>> >>>>> In most cases, you?d save any operations that can be done on a copy of >>>>> the data as generated by @snapshot in @postcondiion. thunk is for rare >>>>> scenarios where 1) it?s hard to capture the state, for example an object >>>>> that manages network state (or database connectivity etc) and whose stage >>>>> can only be read by an external classmethod 2) you want to avoid using >>>>> copy.deepcopy. >>>>> >>>>> I?m sure there?s some way to override isinstance through a meta class >>>>> or dunder subclasshook. >>>>> >>>>> I suppose this mocking method could be a shorthand for when you don?t >>>>> need the full power of a lambda. It?s arguably more succinct and readable, >>>>> though YMMV. >>>>> >>>>> I look forward to reading your opinion on this and any ideas you might >>>>> have. 
>>>>> >>>>> On Sep 26, 2018, at 3:56 PM, James Lu wrote: >>>>> >>>>> Hi Marko, >>>>> >>>>> Actually, following on #A4, you could also write those as multiple >>>>> decorators: >>>>> @snpashot(lambda _, some_identifier: some_func(_, >>>>> some_argument.some_attr) >>>>> @snpashot(lambda _, other_identifier: other_func(_.self)) >>>>> >>>>> Yes, though if we?re talking syntax using kwargs would probably be >>>>> better. >>>>> Using ?P? instead of ?_?: (I agree that _ smells of ignored arguments) >>>>> >>>>> @snapshot(some_identifier=lambda P: ..., some_identifier2=lambda P: >>>>> ...) >>>>> >>>>> Kwargs has the advantage that you can extend multiple lines without >>>>> repeating @snapshot, though many lines of @capture would probably be more >>>>> intuitive since each decorator captures one variable. >>>>> >>>>> Why uppercase "P" and not lowercase (uppercase implies a constant for >>>>> me)? >>>>> >>>>> To me, the capital letters are more prominent and explicit- easier to >>>>> see when reading code. It also implies its a constant for you- you >>>>> shouldn?t be modifying it, because then you?d be interfering with the >>>>> function itself. >>>>> >>>>> Side node: maybe it would be good to have an @icontract.nomutate >>>>> (probably use a different name, maybe @icontract.readonly) that makes sure >>>>> a method doesn?t mutate its own __dict__ (and maybe the __dict__ of the >>>>> members of its __dict__). It wouldn?t be necessary to put the decorator on >>>>> every read only function, just the ones your worried might mutate. >>>>> >>>>> Maybe a @icontract.nomutate(param=?paramname?) that ensures the >>>>> __dict__ of all members of the param name have the same equality or >>>>> identity before and after. The semantics would need to be worked out. >>>>> >>>>> On Sep 26, 2018, at 8:58 AM, Marko Ristin-Kaufmann < >>>>> marko.ristin at gmail.com> wrote: >>>>> >>>>> Hi James, >>>>> >>>>> Actually, following on #A4, you could also write those as multiple >>>>> decorators: >>>>> @snpashot(lambda _, some_identifier: some_func(_, >>>>> some_argument.some_attr) >>>>> @snpashot(lambda _, other_identifier: other_func(_.self)) >>>>> >>>>> Am I correct? >>>>> >>>>> "_" looks a bit hard to read for me (implying ignored arguments). >>>>> >>>>> Why uppercase "P" and not lowercase (uppercase implies a constant for >>>>> me)? Then "O" for "old" and "P" for parameters in a condition: >>>>> @post(lambda O, P: ...) >>>>> ? >>>>> >>>>> It also has the nice property that it follows both the temporal and >>>>> the alphabet order :) >>>>> >>>>> On Wed, 26 Sep 2018 at 14:30, James Lu wrote: >>>>> >>>>>> I still prefer snapshot, though capture is a good name too. We could >>>>>> use generator syntax and inspect the argument names. >>>>>> >>>>>> Instead of ?a?, perhaps use ?_?. Or maybe use ?A.?, for arguments. >>>>>> Some people might prefer ?P? for parameters, since parameters sometimes >>>>>> means the value received while the argument means the value passed. >>>>>> >>>>>> (#A1) >>>>>> >>>>>> from icontract import snapshot, __ >>>>>> @snapshot(some_func(_.some_argument.some_attr) for some_identifier, _ >>>>>> in __) >>>>>> >>>>>> Or (#A2) >>>>>> >>>>>> @snapshot(some_func(some_argument.some_attr) for some_identifier, _, >>>>>> some_argument in __) >>>>>> >>>>>> ? 
>>>>>> Or (#A3) >>>>>> >>>>>> @snapshot(lambda some_argument,_,some_identifier: >>>>>> some_func(some_argument.some_attr)) >>>>>> >>>>>> Or (#A4) >>>>>> >>>>>> @snapshot(lambda _,some_identifier: >>>>>> some_func(_.some_argument.some_attr)) >>>>>> @snapshot(lambda _,some_identifier, other_identifier: >>>>>> some_func(_.some_argument.some_attr), other_func(_.self)) >>>>>> >>>>>> I like #A4 the most because it?s fairly DRY and avoids the extra >>>>>> punctuation of >>>>>> >>>>>> @capture(lambda a: {"some_identifier": some_func(a.some_argument.some_attr)}) >>>>>> >>>>>> >>>>>> On Sep 26, 2018, at 12:23 AM, Marko Ristin-Kaufmann < >>>>>> marko.ristin at gmail.com> wrote: >>>>>> >>>>>> Hi, >>>>>> >>>>>> Franklin wrote: >>>>>> >>>>>>> The name "before" is a confusing name. It's not just something that >>>>>>> happens before. It's really a pre-`let`, adding names to the scope of >>>>>>> things after it, but with values taken before the function call. >>>>>>> Based >>>>>>> on that description, other possible names are `prelet`, `letbefore`, >>>>>>> `predef`, `defpre`, `beforescope`. Better a name that is clearly >>>>>>> confusing than one that is obvious but misleading. >>>>>> >>>>>> >>>>>> James wrote: >>>>>> >>>>>>> I suggest that instead of ?@before? it?s ?@snapshot? and instead of >>>>>>> ?old? it?s ?snapshot?. >>>>>> >>>>>> >>>>>> I like "snapshot", it's a bit clearer than prefixing/postfixing verbs >>>>>> with "pre" which might be misread (*e.g., *"prelet" has a meaning in >>>>>> Slavic languages and could be subconsciously misread, "predef" implies to >>>>>> me a pre-*definition* rather than prior-to-definition , >>>>>> "beforescope" is very clear for me, but it might be confusing for others as >>>>>> to what it actually refers to ). What about "@capture" (7 letters for >>>>>> captures *versus *8 for snapshot)? I suppose "@let" would be playing >>>>>> with fire if Python with conflicting new keywords since I assume "let" to >>>>>> be one of the candidates. >>>>>> >>>>>> Actually, I think there is probably no way around a decorator that >>>>>> captures/snapshots the data before the function call with a lambda (or even >>>>>> a separate function). "Old" construct, if we are to parse it somehow from >>>>>> the condition function, would limit us only to shallow copies (and be >>>>>> complex to implement as soon as we are capturing out-of-argument values >>>>>> such as globals *etc.)*. Moreove, what if we don't need shallow >>>>>> copies? I could imagine a dozen of cases where shallow copy is not what the >>>>>> programmer wants: for example, s/he might need to make deep copies, hash or >>>>>> otherwise transform the input data to hold only part of it instead of >>>>>> copying (*e.g., *so as to allow equality check without a double copy >>>>>> of the data, or capture only the value of certain property transformed in >>>>>> some way). >>>>>> >>>>>> I'd still go with the dictionary to allow for this extra freedom. We >>>>>> could have a convention: "a" denotes to the current arguments, and "b" >>>>>> denotes the captured values. It might make an interesting hint that we put >>>>>> "b" before "a" in the condition. You could also interpret "b" as "before" >>>>>> and "a" as "after", but also "a" as "arguments". >>>>>> >>>>>> @capture(lambda a: {"some_identifier": some_func(a.some_argument.some_attr)}) >>>>>> @post(lambda b, a, result: b.some_identifier > result + a.another_argument.another_attr) >>>>>> def some_func(some_argument: SomeClass, another_argument: AnotherClass) -> SomeResult: >>>>>> ... 
>>>>>> >>>>>> "b" can be omitted if it is not used. Under the hub, all the >>>>>> arguments to the condition would be passed by keywords. >>>>>> >>>>>> In case of inheritance, captures would be inherited as well. Hence >>>>>> the library would check at run-time that the returned dictionary with >>>>>> captured values has no identifier that has been already captured, and the >>>>>> linter checks that statically, before running the code. Reading values >>>>>> captured in the parent at the code of the child class might be a bit hard >>>>>> -- but that is case with any inherited methods/properties. In >>>>>> documentation, I'd list all the captures of both ancestor and the current >>>>>> class. >>>>>> >>>>>> I'm looking forward to reading your opinion on this and alternative >>>>>> suggestions :) >>>>>> Marko >>>>>> >>>>>> On Tue, 25 Sep 2018 at 18:12, Franklin? Lee < >>>>>> leewangzhong+python at gmail.com> wrote: >>>>>> >>>>>>> On Sun, Sep 23, 2018 at 2:05 AM Marko Ristin-Kaufmann >>>>>>> wrote: >>>>>>> > >>>>>>> > Hi, >>>>>>> > >>>>>>> > (I'd like to fork from a previous thread, "Pre-conditions and >>>>>>> post-conditions", since it got long and we started discussing a couple of >>>>>>> different things. Let's discuss in this thread the implementation of a >>>>>>> library for design-by-contract and how to push it forward to hopefully add >>>>>>> it to the standard library one day.) >>>>>>> > >>>>>>> > For those unfamiliar with contracts and current state of the >>>>>>> discussion in the previous thread, here's a short summary. The discussion >>>>>>> started by me inquiring about the possibility to add design-by-contract >>>>>>> concepts into the core language. The idea was rejected by the participants >>>>>>> mainly because they thought that the merit of the feature does not merit >>>>>>> its costs. This is quite debatable and seems to reflect many a discussion >>>>>>> about design-by-contract in general. Please see the other thread, "Why is >>>>>>> design-by-contract not widely adopted?" if you are interested in that >>>>>>> debate. >>>>>>> > >>>>>>> > We (a colleague of mine and I) decided to implement a library to >>>>>>> bring design-by-contract to Python since we don't believe that the concept >>>>>>> will make it into the core language anytime soon and we needed badly a tool >>>>>>> to facilitate our work with a growing code base. >>>>>>> > >>>>>>> > The library is available at http://github.com/Parquery/icontract. >>>>>>> The hope is to polish it so that the wider community could use it and once >>>>>>> the quality is high enough, make a proposal to add it to the standard >>>>>>> Python libraries. We do need a standard library for contracts, otherwise >>>>>>> projects with conflicting contract libraries can not integrate (e.g., the >>>>>>> contracts can not be inherited between two different contract libraries). 
>>>>>>> > >>>>>>> > So far, the most important bits have been implemented in icontract: >>>>>>> > >>>>>>> > Preconditions, postconditions, class invariants >>>>>>> > Inheritance of the contracts (including strengthening and >>>>>>> weakening of the inherited contracts) >>>>>>> > Informative violation messages (including information about the >>>>>>> values involved in the contract condition) >>>>>>> > Sphinx extension to include contracts in the automatically >>>>>>> generated documentation (sphinx-icontract) >>>>>>> > Linter to statically check that the arguments of the conditions >>>>>>> are correct (pyicontract-lint) >>>>>>> > >>>>>>> > We are successfully using it in our code base and have been quite >>>>>>> happy about the implementation so far. >>>>>>> > >>>>>>> > There is one bit still missing: accessing "old" values in the >>>>>>> postcondition (i.e., shallow copies of the values prior to the execution of >>>>>>> the function). This feature is necessary in order to allow us to verify >>>>>>> state transitions. >>>>>>> > >>>>>>> > For example, consider a new dictionary class that has "get" and >>>>>>> "put" methods: >>>>>>> > >>>>>>> > from typing import Optional >>>>>>> > >>>>>>> > from icontract import post >>>>>>> > >>>>>>> > class NovelDict: >>>>>>> > def length(self)->int: >>>>>>> > ... >>>>>>> > >>>>>>> > def get(self, key: str) -> Optional[str]: >>>>>>> > ... >>>>>>> > >>>>>>> > @post(lambda self, key, value: self.get(key) == value) >>>>>>> > @post(lambda self, key: old(self.get(key)) is None and >>>>>>> old(self.length()) + 1 == self.length(), >>>>>>> > "length increased with a new key") >>>>>>> > @post(lambda self, key: old(self.get(key)) is not None and >>>>>>> old(self.length()) == self.length(), >>>>>>> > "length stable with an existing key") >>>>>>> > def put(self, key: str, value: str) -> None: >>>>>>> > ... >>>>>>> > >>>>>>> > How could we possible implement this "old" function? >>>>>>> > >>>>>>> > Here is my suggestion. I'd introduce a decorator "before" that >>>>>>> would allow you to store whatever values in a dictionary object "old" (i.e. >>>>>>> an object whose properties correspond to the key/value pairs). The "old" is >>>>>>> then passed to the condition. Here is it in code: >>>>>>> > >>>>>>> > # omitted contracts for brevity >>>>>>> > class NovelDict: >>>>>>> > def length(self)->int: >>>>>>> > ... >>>>>>> > >>>>>>> > # omitted contracts for brevity >>>>>>> > def get(self, key: str) -> Optional[str]: >>>>>>> > ... >>>>>>> > >>>>>>> > @before(lambda self, key: {"length": self.length(), "get": >>>>>>> self.get(key)}) >>>>>>> > @post(lambda self, key, value: self.get(key) == value) >>>>>>> > @post(lambda self, key, old: old.get is None and old.length + >>>>>>> 1 == self.length(), >>>>>>> > "length increased with a new key") >>>>>>> > @post(lambda self, key, old: old.get is not None and >>>>>>> old.length == self.length(), >>>>>>> > "length stable with an existing key") >>>>>>> > def put(self, key: str, value: str) -> None: >>>>>>> > ... >>>>>>> > >>>>>>> > The linter would statically check that all attributes accessed in >>>>>>> "old" have to be defined in the decorator "before" so that attribute errors >>>>>>> would be caught early. The current implementation of the linter is fast >>>>>>> enough to be run at save time so such errors should usually not happen with >>>>>>> a properly set IDE. >>>>>>> > >>>>>>> > "before" decorator would also have "enabled" property, so that you >>>>>>> can turn it off (e.g., if you only want to run a postcondition in testing). 
>>>>>>> The "before" decorators can be stacked so that you can also have a more >>>>>>> fine-grained control when each one of them is running (some during test, >>>>>>> some during test and in production). The linter would enforce that before's >>>>>>> "enabled" is a disjunction of all the "enabled"'s of the corresponding >>>>>>> postconditions where the old value appears. >>>>>>> > >>>>>>> > Is this a sane approach to "old" values? Any alternative approach >>>>>>> you would prefer? What about better naming? Is "before" a confusing name? >>>>>>> >>>>>>> The dict can be splatted into the postconditions, so that no special >>>>>>> name is required. This would require either that the lambdas handle >>>>>>> **kws, or that their caller inspect them to see what names they take. >>>>>>> Perhaps add a function to functools which only passes kwargs that >>>>>>> fit. >>>>>>> Then the precondition mechanism can pass `self`, `key`, and `value` >>>>>>> as >>>>>>> kwargs instead of args. >>>>>>> >>>>>>> For functions that have *args and **kwargs, it may be necessary to >>>>>>> pass them to the conditions as args and kwargs instead. >>>>>>> >>>>>>> The name "before" is a confusing name. It's not just something that >>>>>>> happens before. It's really a pre-`let`, adding names to the scope of >>>>>>> things after it, but with values taken before the function call. >>>>>>> Based >>>>>>> on that description, other possible names are `prelet`, `letbefore`, >>>>>>> `predef`, `defpre`, `beforescope`. Better a name that is clearly >>>>>>> confusing than one that is obvious but misleading. >>>>>>> >>>>>>> By the way, should the first postcondition be `self.get(key) is >>>>>>> value`, checking for identity rather than equality? >>>>>>> >>>>>> _______________________________________________ >>>>>> Python-ideas mailing list >>>>>> Python-ideas at python.org >>>>>> https://mail.python.org/mailman/listinfo/python-ideas >>>>>> Code of Conduct: http://python.org/psf/codeofconduct/ >>>>>> >>>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From marko.ristin at gmail.com Sat Sep 29 15:22:57 2018 From: marko.ristin at gmail.com (Marko Ristin-Kaufmann) Date: Sat, 29 Sep 2018 21:22:57 +0200 Subject: [Python-ideas] "old" values in postconditions In-Reply-To: References: <13186373-6FB6-4C8B-A8D8-2C3E028CDC3D@gmail.com> <3C33B6FF-FC19-47D6-AD2A-FC0B17C50A8D@gmail.com> <0061278F-4243-42BD-945D-A93B4A0FC21D@gmail.com> Message-ID: Hi James, I reread the proposal with MockP. I still don't get the details, but if I think I understand the basic idea. You put a placeholder and whenever one of its methods is called (including dunders), you record it and finally assemble an AST and compile a lambda function to be executed at actual call later. But that would still fail if you want to have: @snapshot(var1=some_func(MockP.arg1, MockP.arg2)) , right? Or there is a way to record that? Cheers, Marko Le sam. 29 sept. 2018 ? 00:35, James Lu a ?crit : > I am fine with your proposed syntax. It?s certainly lucid. Perhaps it > would be a good idea to get people accustomed to ?non-magic? syntax. > > I still have a feeling that most developers would like to store the state > in many different custom ways. > > Please explain. (Expressions like thunk(all)(a == b for a, b in > P.arg.meth()) would be valid.) > > I'm thinking mostly about all the edge cases which we would not be able to > cover (and how complex that would be to cover them). 
> > > Except for a > b > c being one flat expression with 5 members, it seems > fairly easy to recreate an AST, which can then be compiled down to a code > object. The code object can be fun with a custom ?locals()? > > Below is my concept code for such a P object. > > from ast import * > > # not done: enforce Singleton property on EmptySymbolType > > class EmptySymbolType(object): ... > > EmptySymbol = EmptySymbolType() # empty symbols are placeholders > > class MockP(object): > > # "^" is xor > > @icontract.pre(lambda symbol, astnode: (symbol is None) ^ (astnode is > None)) > > def __init__(self, symbol=None, value=EmptySymbol, astnode=None, > initsymtable=(,)): > > self.symtable = dict(initsymtable) > > if symbol: > > self.expr = Expr(value=Name(id=symbol, ctx=Load())) > > self.symtable = {symbol: value} > > else: > > self.expr = astnode > > self.frozen = False > > def __add__(self, other): > > wrapped = MockP.wrap_value(other) > > return MockP(astnode=Expr(value=BinOp(self.expr, Add(), > wrapped.expr), initsymtable={**self.symtable, **wrapped.symtable}) > > def compile(self): ... > > def freeze(self): > > # frozen objects wouldn?t have an overrided getattr, allowing for > icontract to manipulate the MockP object using its public interface > > self.frozen = True > > @classmethod > > def wrap_value(cls, obj): > > # create a MockP object from a value. Generate a random identifier > and set that as the key in symtable, the AST node is the name of that > identifier, retrieving its value through simple expression evaluation. > > ... > > > thunk = MockP.wrap_value > > P = MockP('P') > > # elsewhere: ensure P is only accessed via valid ?dot attribute access? > inside @snapshot so contracts fail early, or don?t and allow Magic like > __dict__ to occur on P. > > On Sep 27, 2018, at 9:49 PM, Marko Ristin-Kaufmann > wrote: > > Hi James, > > I still have a feeling that most developers would like to store the state > in many different custom ways. I see also thunk and snapshot with wrapper > objects to be much more complicated to implement and maintain; I'm thinking > mostly about all the edge cases which we would not be able to cover (and > how complex that would be to cover them). Then the linters need also to > work around such wrappers... It might also scare users off since it looks > like too much magic. Another concern I also have is that it's probably very > hard to integrate these wrappers with mypy later -- but I don't really have > a clue about that, only my gut feeling? > > What about we accepted to repeat "lambda P, " prefix, and have something > like this: > > @snapshot( > lambda P, some_name: len(P.some_property), > lambda P, another_name: hash(P.another_property) > ) > > It's not too verbose for me and you can still explain in three-four > sentences what happens below the hub in the library's docs. A > pycharm/pydev/vim/emacs plugins could hide the verbose parts. > > I performed a small experiment to test how this solution plays with pylint > and it seems OK that arguments are not used in lambdas. > > Cheers, > Marko > > > On Thu, 27 Sep 2018 at 12:27, James Lu wrote: > >> Why couldn?t we record the operations done to a special object and >> replay them? >> >> Actually, I think there is probably no way around a decorator that >>> captures/snapshots the data before the function call with a lambda (or even >>> a separate function). 
"Old" construct, if we are to parse it somehow from >>> the condition function, would limit us only to shallow copies (and be >>> complex to implement as soon as we are capturing out-of-argument values >>> such as globals *etc.)*. Moreove, what if we don't need shallow copies? >>> I could imagine a dozen of cases where shallow copy is not what the >>> programmer wants: for example, s/he might need to make deep copies, hash or >>> otherwise transform the input data to hold only part of it instead of >>> copying (*e.g., *so as to allow equality check without a double copy of >>> the data, or capture only the value of certain property transformed in some >>> way). >>> >>> >> from icontract import snapshot, P, thunk >> @snapshot(some_identifier=P.self.some_method(P.some_argument.some_attr)) >> >> P is an object of our own type, let?s call the type MockP. MockP returns >> new MockP objects when any operation is done to it. MockP * MockP = MockP. >> MockP.attr = MockP. MockP objects remember all the operations done to them, >> and allow the owner of a MockP object to re-apply the same operations >> >> ?thunk? converts a function or object or class to a MockP object, storing >> the function or object for when the operation is done. >> >> thunk(function)() >> >> Of course, you could also thunk objects like so: thunk(3) * P.number. >> (Though it might be better to keep the 3 after P.number in this case so >> P.number?s __mult__ would be invoked before 3?s __mult__ is invokes. >> >> >> In most cases, you?d save any operations that can be done on a copy of >> the data as generated by @snapshot in @postcondiion. thunk is for rare >> scenarios where 1) it?s hard to capture the state, for example an object >> that manages network state (or database connectivity etc) and whose stage >> can only be read by an external classmethod 2) you want to avoid using >> copy.deepcopy. >> >> I?m sure there?s some way to override isinstance through a meta class or >> dunder subclasshook. >> >> I suppose this mocking method could be a shorthand for when you don?t >> need the full power of a lambda. It?s arguably more succinct and readable, >> though YMMV. >> >> I look forward to reading your opinion on this and any ideas you might >> have. >> >> On Sep 26, 2018, at 3:56 PM, James Lu wrote: >> >> Hi Marko, >> >> Actually, following on #A4, you could also write those as multiple >> decorators: >> @snpashot(lambda _, some_identifier: some_func(_, some_argument.some_attr) >> @snpashot(lambda _, other_identifier: other_func(_.self)) >> >> Yes, though if we?re talking syntax using kwargs would probably be better. >> Using ?P? instead of ?_?: (I agree that _ smells of ignored arguments) >> >> @snapshot(some_identifier=lambda P: ..., some_identifier2=lambda P: ...) >> >> Kwargs has the advantage that you can extend multiple lines without >> repeating @snapshot, though many lines of @capture would probably be more >> intuitive since each decorator captures one variable. >> >> Why uppercase "P" and not lowercase (uppercase implies a constant for me)? >> >> To me, the capital letters are more prominent and explicit- easier to see >> when reading code. It also implies its a constant for you- you shouldn?t be >> modifying it, because then you?d be interfering with the function itself. 
>> >> Side node: maybe it would be good to have an @icontract.nomutate >> (probably use a different name, maybe @icontract.readonly) that makes sure >> a method doesn?t mutate its own __dict__ (and maybe the __dict__ of the >> members of its __dict__). It wouldn?t be necessary to put the decorator on >> every read only function, just the ones your worried might mutate. >> >> Maybe a @icontract.nomutate(param=?paramname?) that ensures the __dict__ >> of all members of the param name have the same equality or identity before >> and after. The semantics would need to be worked out. >> >> On Sep 26, 2018, at 8:58 AM, Marko Ristin-Kaufmann < >> marko.ristin at gmail.com> wrote: >> >> Hi James, >> >> Actually, following on #A4, you could also write those as multiple >> decorators: >> @snpashot(lambda _, some_identifier: some_func(_, some_argument.some_attr) >> @snpashot(lambda _, other_identifier: other_func(_.self)) >> >> Am I correct? >> >> "_" looks a bit hard to read for me (implying ignored arguments). >> >> Why uppercase "P" and not lowercase (uppercase implies a constant for >> me)? Then "O" for "old" and "P" for parameters in a condition: >> @post(lambda O, P: ...) >> ? >> >> It also has the nice property that it follows both the temporal and the >> alphabet order :) >> >> On Wed, 26 Sep 2018 at 14:30, James Lu wrote: >> >>> I still prefer snapshot, though capture is a good name too. We could use >>> generator syntax and inspect the argument names. >>> >>> Instead of ?a?, perhaps use ?_?. Or maybe use ?A.?, for arguments. Some >>> people might prefer ?P? for parameters, since parameters sometimes means >>> the value received while the argument means the value passed. >>> >>> (#A1) >>> >>> from icontract import snapshot, __ >>> @snapshot(some_func(_.some_argument.some_attr) for some_identifier, _ in >>> __) >>> >>> Or (#A2) >>> >>> @snapshot(some_func(some_argument.some_attr) for some_identifier, _, >>> some_argument in __) >>> >>> ? >>> Or (#A3) >>> >>> @snapshot(lambda some_argument,_,some_identifier: >>> some_func(some_argument.some_attr)) >>> >>> Or (#A4) >>> >>> @snapshot(lambda _,some_identifier: some_func(_.some_argument.some_attr)) >>> @snapshot(lambda _,some_identifier, other_identifier: >>> some_func(_.some_argument.some_attr), other_func(_.self)) >>> >>> I like #A4 the most because it?s fairly DRY and avoids the extra >>> punctuation of >>> >>> @capture(lambda a: {"some_identifier": some_func(a.some_argument.some_attr)}) >>> >>> >>> On Sep 26, 2018, at 12:23 AM, Marko Ristin-Kaufmann < >>> marko.ristin at gmail.com> wrote: >>> >>> Hi, >>> >>> Franklin wrote: >>> >>>> The name "before" is a confusing name. It's not just something that >>>> happens before. It's really a pre-`let`, adding names to the scope of >>>> things after it, but with values taken before the function call. Based >>>> on that description, other possible names are `prelet`, `letbefore`, >>>> `predef`, `defpre`, `beforescope`. Better a name that is clearly >>>> confusing than one that is obvious but misleading. >>> >>> >>> James wrote: >>> >>>> I suggest that instead of ?@before? it?s ?@snapshot? and instead of ? >>>> old? it?s ?snapshot?. 
>>> >>> >>> I like "snapshot", it's a bit clearer than prefixing/postfixing verbs >>> with "pre" which might be misread (*e.g., *"prelet" has a meaning in >>> Slavic languages and could be subconsciously misread, "predef" implies to >>> me a pre-*definition* rather than prior-to-definition , "beforescope" >>> is very clear for me, but it might be confusing for others as to what it >>> actually refers to ). What about "@capture" (7 letters for captures *versus >>> *8 for snapshot)? I suppose "@let" would be playing with fire if Python >>> with conflicting new keywords since I assume "let" to be one of the >>> candidates. >>> >>> Actually, I think there is probably no way around a decorator that >>> captures/snapshots the data before the function call with a lambda (or even >>> a separate function). "Old" construct, if we are to parse it somehow from >>> the condition function, would limit us only to shallow copies (and be >>> complex to implement as soon as we are capturing out-of-argument values >>> such as globals *etc.)*. Moreove, what if we don't need shallow copies? >>> I could imagine a dozen of cases where shallow copy is not what the >>> programmer wants: for example, s/he might need to make deep copies, hash or >>> otherwise transform the input data to hold only part of it instead of >>> copying (*e.g., *so as to allow equality check without a double copy of >>> the data, or capture only the value of certain property transformed in some >>> way). >>> >>> I'd still go with the dictionary to allow for this extra freedom. We >>> could have a convention: "a" denotes to the current arguments, and "b" >>> denotes the captured values. It might make an interesting hint that we put >>> "b" before "a" in the condition. You could also interpret "b" as "before" >>> and "a" as "after", but also "a" as "arguments". >>> >>> @capture(lambda a: {"some_identifier": some_func(a.some_argument.some_attr)}) >>> @post(lambda b, a, result: b.some_identifier > result + a.another_argument.another_attr) >>> def some_func(some_argument: SomeClass, another_argument: AnotherClass) -> SomeResult: >>> ... >>> >>> "b" can be omitted if it is not used. Under the hub, all the arguments >>> to the condition would be passed by keywords. >>> >>> In case of inheritance, captures would be inherited as well. Hence the >>> library would check at run-time that the returned dictionary with captured >>> values has no identifier that has been already captured, and the linter >>> checks that statically, before running the code. Reading values captured in >>> the parent at the code of the child class might be a bit hard -- but that >>> is case with any inherited methods/properties. In documentation, I'd list >>> all the captures of both ancestor and the current class. >>> >>> I'm looking forward to reading your opinion on this and alternative >>> suggestions :) >>> Marko >>> >>> On Tue, 25 Sep 2018 at 18:12, Franklin? Lee < >>> leewangzhong+python at gmail.com> wrote: >>> >>>> On Sun, Sep 23, 2018 at 2:05 AM Marko Ristin-Kaufmann >>>> wrote: >>>> > >>>> > Hi, >>>> > >>>> > (I'd like to fork from a previous thread, "Pre-conditions and >>>> post-conditions", since it got long and we started discussing a couple of >>>> different things. Let's discuss in this thread the implementation of a >>>> library for design-by-contract and how to push it forward to hopefully add >>>> it to the standard library one day.) 
>>>> > >>>> > For those unfamiliar with contracts and current state of the >>>> discussion in the previous thread, here's a short summary. The discussion >>>> started by me inquiring about the possibility to add design-by-contract >>>> concepts into the core language. The idea was rejected by the participants >>>> mainly because they thought that the merit of the feature does not merit >>>> its costs. This is quite debatable and seems to reflect many a discussion >>>> about design-by-contract in general. Please see the other thread, "Why is >>>> design-by-contract not widely adopted?" if you are interested in that >>>> debate. >>>> > >>>> > We (a colleague of mine and I) decided to implement a library to >>>> bring design-by-contract to Python since we don't believe that the concept >>>> will make it into the core language anytime soon and we needed badly a tool >>>> to facilitate our work with a growing code base. >>>> > >>>> > The library is available at http://github.com/Parquery/icontract. >>>> The hope is to polish it so that the wider community could use it and once >>>> the quality is high enough, make a proposal to add it to the standard >>>> Python libraries. We do need a standard library for contracts, otherwise >>>> projects with conflicting contract libraries can not integrate (e.g., the >>>> contracts can not be inherited between two different contract libraries). >>>> > >>>> > So far, the most important bits have been implemented in icontract: >>>> > >>>> > Preconditions, postconditions, class invariants >>>> > Inheritance of the contracts (including strengthening and weakening >>>> of the inherited contracts) >>>> > Informative violation messages (including information about the >>>> values involved in the contract condition) >>>> > Sphinx extension to include contracts in the automatically generated >>>> documentation (sphinx-icontract) >>>> > Linter to statically check that the arguments of the conditions are >>>> correct (pyicontract-lint) >>>> > >>>> > We are successfully using it in our code base and have been quite >>>> happy about the implementation so far. >>>> > >>>> > There is one bit still missing: accessing "old" values in the >>>> postcondition (i.e., shallow copies of the values prior to the execution of >>>> the function). This feature is necessary in order to allow us to verify >>>> state transitions. >>>> > >>>> > For example, consider a new dictionary class that has "get" and "put" >>>> methods: >>>> > >>>> > from typing import Optional >>>> > >>>> > from icontract import post >>>> > >>>> > class NovelDict: >>>> > def length(self)->int: >>>> > ... >>>> > >>>> > def get(self, key: str) -> Optional[str]: >>>> > ... >>>> > >>>> > @post(lambda self, key, value: self.get(key) == value) >>>> > @post(lambda self, key: old(self.get(key)) is None and >>>> old(self.length()) + 1 == self.length(), >>>> > "length increased with a new key") >>>> > @post(lambda self, key: old(self.get(key)) is not None and >>>> old(self.length()) == self.length(), >>>> > "length stable with an existing key") >>>> > def put(self, key: str, value: str) -> None: >>>> > ... >>>> > >>>> > How could we possible implement this "old" function? >>>> > >>>> > Here is my suggestion. I'd introduce a decorator "before" that would >>>> allow you to store whatever values in a dictionary object "old" (i.e. an >>>> object whose properties correspond to the key/value pairs). The "old" is >>>> then passed to the condition. 
Here is it in code: >>>> > >>>> > # omitted contracts for brevity >>>> > class NovelDict: >>>> > def length(self)->int: >>>> > ... >>>> > >>>> > # omitted contracts for brevity >>>> > def get(self, key: str) -> Optional[str]: >>>> > ... >>>> > >>>> > @before(lambda self, key: {"length": self.length(), "get": >>>> self.get(key)}) >>>> > @post(lambda self, key, value: self.get(key) == value) >>>> > @post(lambda self, key, old: old.get is None and old.length + 1 >>>> == self.length(), >>>> > "length increased with a new key") >>>> > @post(lambda self, key, old: old.get is not None and old.length >>>> == self.length(), >>>> > "length stable with an existing key") >>>> > def put(self, key: str, value: str) -> None: >>>> > ... >>>> > >>>> > The linter would statically check that all attributes accessed in >>>> "old" have to be defined in the decorator "before" so that attribute errors >>>> would be caught early. The current implementation of the linter is fast >>>> enough to be run at save time so such errors should usually not happen with >>>> a properly set IDE. >>>> > >>>> > "before" decorator would also have "enabled" property, so that you >>>> can turn it off (e.g., if you only want to run a postcondition in testing). >>>> The "before" decorators can be stacked so that you can also have a more >>>> fine-grained control when each one of them is running (some during test, >>>> some during test and in production). The linter would enforce that before's >>>> "enabled" is a disjunction of all the "enabled"'s of the corresponding >>>> postconditions where the old value appears. >>>> > >>>> > Is this a sane approach to "old" values? Any alternative approach you >>>> would prefer? What about better naming? Is "before" a confusing name? >>>> >>>> The dict can be splatted into the postconditions, so that no special >>>> name is required. This would require either that the lambdas handle >>>> **kws, or that their caller inspect them to see what names they take. >>>> Perhaps add a function to functools which only passes kwargs that fit. >>>> Then the precondition mechanism can pass `self`, `key`, and `value` as >>>> kwargs instead of args. >>>> >>>> For functions that have *args and **kwargs, it may be necessary to >>>> pass them to the conditions as args and kwargs instead. >>>> >>>> The name "before" is a confusing name. It's not just something that >>>> happens before. It's really a pre-`let`, adding names to the scope of >>>> things after it, but with values taken before the function call. Based >>>> on that description, other possible names are `prelet`, `letbefore`, >>>> `predef`, `defpre`, `beforescope`. Better a name that is clearly >>>> confusing than one that is obvious but misleading. >>>> >>>> By the way, should the first postcondition be `self.get(key) is >>>> value`, checking for identity rather than equality? >>>> >>> _______________________________________________ >>> Python-ideas mailing list >>> Python-ideas at python.org >>> https://mail.python.org/mailman/listinfo/python-ideas >>> Code of Conduct: http://python.org/psf/codeofconduct/ >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rosuav at gmail.com Sat Sep 29 15:46:39 2018 From: rosuav at gmail.com (Chris Angelico) Date: Sun, 30 Sep 2018 05:46:39 +1000 Subject: [Python-ideas] Why is design-by-contracts not widely In-Reply-To: <20180929144311.GB19437@ando.pearwood.info> References: <20180928191854.Horde.xmUWrqDGOX18KB2KDkxRZbz@webmail.your-server.de> <20180928233908.GO19437@ando.pearwood.info> <82c32dc7-6887-6706-67a2-14a5824875d5@potatochowder.com> <20180929114218.GV19437@ando.pearwood.info> <20180929144311.GB19437@ando.pearwood.info> Message-ID: On Sun, Sep 30, 2018 at 12:43 AM Steven D'Aprano wrote: > > On Sat, Sep 29, 2018 at 10:15:42PM +1000, Chris Angelico wrote: > > [...] > > As are all the things that are "undefined behaviour" in C, like the > > result of integer overflow in a signed variable. They are "Here be > > dragons" territory, but somehow that's not okay for you. I don't > > understand why you can hate on C for having behaviours where you're > > told "don't do that, we can't promise anything", but it's perfectly > > acceptable for Python to have the exact same thing. > > They're not the same thing, not even close to the same thing. > > Undefined behaviour in C is a radically different concept to the > *implementation-defined behaviour* you describe in Python and most > (all?) other languages. I don't know how to communicate that message any > better than the pages I linked to before. Considering that many people here, myself included, still haven't understood what you're so het up about, I think you may need to work on communicating that better. It's still just "stuff you should never do". The compiler is allowed to assume you will never do it. How is that even slightly different from "the CPython interpreter is allowed to assume you won't use ctypes to change a reference count"? ChrisA From oscar.j.benjamin at gmail.com Sat Sep 29 16:43:42 2018 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Sat, 29 Sep 2018 21:43:42 +0100 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: References: Message-ID: On Sat, 29 Sep 2018 at 19:38, Steve Barnes wrote: > > On 29/09/2018 09:56, Serhiy Storchaka wrote: > > 29.09.18 11:43, Steve Barnes ????: > >> On 29/09/2018 08:50, Serhiy Storchaka wrote: > >>> Python is dynamically typed language. What is such processing that would > >>> work with iNaN, but doesn't work with float('nan')? > >>> > >> One simplistic example would be print(int(float('nan'))) (gives a > >> ValueError) while print(int(iNaN)) should give 'nan' or maybe 'inan'. > > > > Why do you convert to int when you need a string representation? Just > > print(float('nan')). > > I converted to int because I needed a whole number, this was intended to > represent some more complex process where a value is converted to a > whole number down in the depths of the processing. Your requirement to have a whole number cannot meaningfully be satisfied if your input is nan so an exception is the most useful result. 
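(If the value reaching that conversion really can be nan, the explicit version of "the caller decides" looks something like this -- just a sketch, with the helper name purely illustrative:

    import math
    from typing import Optional

    def to_whole(x: float) -> Optional[int]:
        # nan has no whole-number value; surface that explicitly rather
        # than inventing an integer NaN for it
        if math.isnan(x):
            return None  # or raise, if "no value" should be fatal here
        return int(x)

so the nan case gets handled at the point where the information to handle it exists.)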
-- Oscar From steve at pearwood.info Sat Sep 29 21:00:53 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Sun, 30 Sep 2018 11:00:53 +1000 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: References: Message-ID: <20180930010053.GF19437@ando.pearwood.info> On Sat, Sep 29, 2018 at 09:43:42PM +0100, Oscar Benjamin wrote: > On Sat, 29 Sep 2018 at 19:38, Steve Barnes wrote: > > > I converted to int because I needed a whole number, this was intended to > > represent some more complex process where a value is converted to a > > whole number down in the depths of the processing. > > Your requirement to have a whole number cannot meaningfully be > satisfied if your input is nan so an exception is the most useful > result. Not to Steve it isn't. Be careful about making value judgements like that: Steve is asking for an integer NAN because for *him* an integer NAN is more useful than an exception. You shouldn't tell him that he is wrong, unless you know his use-case and his code, which you don't. -- Steve From steve at pearwood.info Sat Sep 29 21:07:26 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Sun, 30 Sep 2018 11:07:26 +1000 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: References: Message-ID: <20180930010726.GG19437@ando.pearwood.info> On Sat, Sep 29, 2018 at 10:05:39PM +0300, Serhiy Storchaka wrote: > 29.09.18 21:38, Steve Barnes wrote: [...] > >>Why do you convert to int when you need a string representation? Just > >>print(float('nan')). > >> > >>I converted to int because I needed a whole number, this was intended to > >represent some more complex process where a value is converted to a > >whole number down in the depths of the processing. > > float('nan') is a number (in Python sense). No need to convert it. Steve just told you that he doesn't need a number, he needs a whole number (an integer), and that this represents a more complex process that includes a call to int. Why do you dismiss that and say there is no need to call int when you don't know the process involved? It *may* be that Steve could use math.floor() or math.ceil() instead, neither of which have the same meaning as calling int(). But more likely he DOES need to convert it by calling int, just as he says. Telling people that they don't understand their own code when you don't know their code is not very productive. -- Steve From greg.ewing at canterbury.ac.nz Sat Sep 29 21:46:01 2018 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sun, 30 Sep 2018 14:46:01 +1300 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: <20180930010726.GG19437@ando.pearwood.info> References: <20180930010726.GG19437@ando.pearwood.info> Message-ID: <5BB02AD9.8050401@canterbury.ac.nz> Something to consider in all of this is that Python floats often *don't* produce NaNs for undefined operations, but raise exceptions instead:

>>> 1.0/0.0
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ZeroDivisionError: float division by zero
>>> math.sqrt(-1.0)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: math domain error

So achieving the OP's goals would not only entail adding an integer version of NaN, but either making int arithmetic behave differently from floats, or changing the way float arithmetic behaves, to produce NaNs instead of exceptions.
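The asymmetry shows up as soon as a nan does get into the system: arithmetic on it propagates quietly, but anything that needs an integer out of it raises (output from a typical CPython 3 session):

>>> nan = float('nan')
>>> nan + 1, nan * 0.0, nan == nan
(nan, nan, False)
>>> import math
>>> math.floor(nan)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: cannot convert float NaN to integer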
-- Greg From steve at pearwood.info Sat Sep 29 21:54:13 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Sun, 30 Sep 2018 11:54:13 +1000 Subject: [Python-ideas] Simplicity of C (was why is design-by-contracts not widely) In-Reply-To: References: Message-ID: <20180930015413.GH19437@ando.pearwood.info> On Sat, Sep 29, 2018 at 09:50:27AM +1000, Hugh Fisher wrote: > Oh FFS. You couldn't make the effort to read the very next sentence, > let alone the next paragraph, before responding? Hugh, your mind-reading powers are fading. I read not just the next sentence and paragraph but the entire post. They aren't relevant to my comment: I still disagreed with your description of C (in particular) as "simple" although I'm willing to accept that Python is *relatively* simple in some ways. Just because I disagree with you doesn't mean I didn't read your post. The next sentence was: [Hugh] When starting a programming project in C or Python, there's maybe a brief discussion about C99 or C11, or Python 3.5 or 3.6, but that's it. There's one way to do it. followed by criticism of C++ for being "designed with a shovel rather than a chisel", and the comment: [Hugh] C++ programming projects often start by specifying exactly which bits of the language the programming team will be allowed to use. And in-house style guides for Python often do the same. For example, Google's style-guide for Python bans the use of "Power features" such as custom metaclasses, access to bytecode, on-the-fly compilation, dynamic inheritance, object reparenting, import hacks, reflection, modification of system internals, etc. https://github.com/google/styleguide/blob/gh-pages/pyguide.md#219-power-features Other choices include whether to use a functional style or object- oriented style or both. Design By Contract is a methodology. People already decide whether to use TDD or design up front (or don't decide on any methodology at all and wing it). They can already decide on using Design By Contract, if they like the existing solutions for it. This discussion is for those of us who would like to include DbC in our projects but don't like existing solutions. C++ being designed with a shovel is not relevant. (Except in the sense that we should always be careful about piling on feature upon feature into Python.) -- Steve From rosuav at gmail.com Sat Sep 29 22:17:25 2018 From: rosuav at gmail.com (Chris Angelico) Date: Sun, 30 Sep 2018 12:17:25 +1000 Subject: [Python-ideas] Simplicity of C (was why is design-by-contracts not widely) In-Reply-To: <20180930015413.GH19437@ando.pearwood.info> References: <20180930015413.GH19437@ando.pearwood.info> Message-ID: On Sun, Sep 30, 2018 at 11:54 AM Steven D'Aprano wrote: > This discussion is for those of us who would like to include DbC in our > projects but don't like existing solutions. C++ being designed with a > shovel is not relevant. > > (Except in the sense that we should always be careful about piling on > feature upon feature into Python.) And as such, I do not want to see dedicated syntax for no purpose other than contracts. What I'm interested in is (a) whether something can and should be added to the stdlib, and (b) whether some specific (and probably small) aspect of it could benefit from language support. As a parallel example, consider type hints. The language has ZERO support for special syntax for a language of types. What you have is simple, straight-forward names like "List", and the normal behaviours that we already can do such as subscripting. 
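Concretely, the "language of types" is just ordinary runtime objects, and the only dedicated syntax is the annotation that attaches them to a function or argument -- e.g. (function purely illustrative):

    from typing import List

    def mean(xs: List[float]) -> float:
        return sum(xs) / len(xs)

    # The hints are plain attached expressions, inspectable at runtime:
    # mean.__annotations__ is roughly {'xs': List[float], 'return': <class 'float'>}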
There is language support, however, for attaching expressions to functions and their arguments. At the moment, I'm seeing decorator-based contracts as a clunky version of unit tests. We already have "inline unit testing" - it's called doctest - and I haven't seen anything pinned down as "hey, this is what it'd take to make contracts more viable". Certainly nothing that couldn't be done as a third-party package. But I'm still open to being swayed on that point. ChrisA From steve at pearwood.info Sun Sep 30 00:26:35 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Sun, 30 Sep 2018 14:26:35 +1000 Subject: [Python-ideas] Simplicity of C (was why is design-by-contracts not widely) In-Reply-To: References: <20180930015413.GH19437@ando.pearwood.info> Message-ID: <20180930042635.GI19437@ando.pearwood.info> On Sun, Sep 30, 2018 at 12:17:25PM +1000, Chris Angelico wrote: > On Sun, Sep 30, 2018 at 11:54 AM Steven D'Aprano wrote: > > This discussion is for those of us who would like to include DbC in our > > projects but don't like existing solutions. C++ being designed with a > > shovel is not relevant. > > > > (Except in the sense that we should always be careful about piling on > > feature upon feature into Python.) > > And as such, I do not want to see dedicated syntax for no purpose > other than contracts. That's a reasonable objection, except that contracts are a BIG purpose. Would you object to dedicated syntax for object oriented programming? (Classes and methods.) I hope not. Imagine if OOP in Python was limited to an API like this: MyClass = type(name="MyClass", parent=object) MyClass.add_method(__init__=lambda self, arg: setattr(self, "arg", arg)) MyClass.add_method(__str__=lambda self: "MyClass(%r)" % (self.arg,)) MyClass.add_method(spam=lambda self, x, y: (self.arg + x)/y) MyClass.add_method(eggs=lambda self, x, y: self.arg*x - y) MyClass.add_member(cheese='Cheddar') MyClass.add_member(aardvark=None) That's the situation we're in right now for contracts. It sucks and blows at the same time. Syntax matters, and sometimes without the right syntax, certain techniques and methodologies aren't practical. I know that adding syntax is a big step, and should be considered a last resort for when a library or even a built-in won't work. But adding contracts isn't a small benefit. Its not a magic bullet, nobody says that, but I would say that contracts as a feature is *much* bigger and more important than (say) docstrings, and we have dedicated syntax for docstrings. > What I'm interested in is (a) whether something > can and should be added to the stdlib, and (b) whether some specific > (and probably small) aspect of it could benefit from language support. > As a parallel example, consider type hints. The language has ZERO > support for special syntax for a language of types. That's a weird use of the word "ZERO" :-) def spam(x: int) -> float: y: int I count three special syntax forms for a language of types: - parameter type hints; - return type hints; - variable type hints. (Yes, I'm aware that *technically* we can put anything we like in the hints, they don't have to be used as type hints, but Guido has made it clear that such uses are definitely of second-rate importance and only grudgingly supported.) > What you have is > simple, straight-forward names like "List", and the normal behaviours > that we already can do such as subscripting. There is language > support, however, for attaching expressions to functions and their > arguments. 
Your point is taken that there is no separate syntax for referring to the types *themselves*, but then there's no need for such. int is int, whether you refer to the class "int" or the static type "int".

> At the moment, I'm seeing decorator-based contracts as a clunky
> version of unit tests.

Contracts are not unit tests.

Contracts and unit tests are complementary, and overlap somewhat, but they are not the same. Unit tests only test the canned values you write in your tests. Contracts test real data you pass to your application.

-- Steve

From cs at cskk.id.au Sun Sep 30 00:44:22 2018
From: cs at cskk.id.au (Cameron Simpson)
Date: Sun, 30 Sep 2018 14:44:22 +1000
Subject: [Python-ideas] Simplicity of C (was why is design-by-contracts not widely)
In-Reply-To: References:
Message-ID: <20180930044422.GA57629@cskk.homeip.net>

On 30Sep2018 12:17, Chris Angelico wrote:
>At the moment, I'm seeing decorator-based contracts as a clunky
>version of unit tests. We already have "inline unit testing" - it's
>called doctest - and I haven't seen anything pinned down as "hey, this
>is what it'd take to make contracts more viable". Certainly nothing
>that couldn't be done as a third-party package. But I'm still open to
>being swayed on that point.

Decorator based contracts are very little like clunky unit tests to me. I'm basing my opinion on the icontracts pip package, which I'm going to start using.

In case you've been looking at something different, it provides a small number of decorators including @pre(test-function) and @post(test-function) and the class invariant decorator @inv, with good error messages for violations.

They are _functionally_ like putting assertions in your code at the start and end of your functions, but have some advantages:

- they're exposed, not buried inside the function, where they're easy to see and can be considered as contracts

- they run on _every_ function call, not just during your testing, and get turned off just like assertions do: when you run Python with the -O (optimise) option. (There's some more tuning available too.)

- the assertions make qualitative statements about the object/parameter state in the form "the state is consistent if these things apply"; tests tend to say "here's a situation, do these things and examine these results". You need to invent the situations and the results, rather than making general statements about the purpose and functional semantics of the class.

They're different to both unit tests _and_ doctests because they get exercised during normal code execution. Both unit tests and doctests run _only_ during your test phase, with only whatever test scenarios you have devised.

The difficulty with unit tests and doctests (both of which I use) and also integration tests is making something small enough to run but big/wide enough to cover all the relevant cases. They _do not_ run against all your real world data. It can be quite hard to apply them to your real world data.

Also, all the @pre/@post/@inv stuff will run _during_ your unit tests and doctests as well, so they get included in your test regime for free.

I've got a few classes which have a selftest method whose purpose is to confirm correctness of the instance state, and I call that from a few methods at start and end, particularly those for which unit tests have been hard to write or I know are inadequately covered (and probably never will be because devising a sufficient test case is impractical, especially for hard to envisage corner cases).
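To make the shape of that concrete, here is a minimal self-contained sketch (my own toy decorators for illustration, not the actual icontract API): the checks sit visibly above the function, run on every call, and are stripped under python -O just like assert statements.

import functools

def pre(check):
    """Run `check` on the call arguments before the function body."""
    def deco(func):
        if not __debug__:               # mirror assert: disabled by python -O
            return func
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            assert check(*args, **kwargs), "precondition failed"
            return func(*args, **kwargs)
        return wrapper
    return deco

def post(check):
    """Run `check` on the return value after the function body."""
    def deco(func):
        if not __debug__:
            return func
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            result = func(*args, **kwargs)
            assert check(result), "postcondition failed"
            return result
        return wrapper
    return deco

@pre(lambda items: len(items) > 0)
@post(lambda result: result >= 0)
def smallest_gap(items):
    """Smallest difference between consecutive items in sorted order."""
    s = sorted(items)
    return min((b - a for a, b in zip(s, s[1:])), default=0)

Nothing fancy is needed to get the basic behaviour; the value of a real package is in the error messages, the class invariants and the inheritance handling.
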
The icontracts module will be very helpful to me here: I can pull out the self-test function as the class invariant, and make a bunch of @pre/@post assertions corresponding the the method semantic definition. The flip side of this is that there's no case for language changes in what I say above: the decorators look pretty good to my eye. Cheers, Cameron Simpson From rosuav at gmail.com Sun Sep 30 00:50:28 2018 From: rosuav at gmail.com (Chris Angelico) Date: Sun, 30 Sep 2018 14:50:28 +1000 Subject: [Python-ideas] Simplicity of C (was why is design-by-contracts not widely) In-Reply-To: <20180930042635.GI19437@ando.pearwood.info> References: <20180930015413.GH19437@ando.pearwood.info> <20180930042635.GI19437@ando.pearwood.info> Message-ID: On Sun, Sep 30, 2018 at 2:27 PM Steven D'Aprano wrote: > > On Sun, Sep 30, 2018 at 12:17:25PM +1000, Chris Angelico wrote: > > On Sun, Sep 30, 2018 at 11:54 AM Steven D'Aprano wrote: > > > This discussion is for those of us who would like to include DbC in our > > > projects but don't like existing solutions. C++ being designed with a > > > shovel is not relevant. > > > > > > (Except in the sense that we should always be careful about piling on > > > feature upon feature into Python.) > > > > And as such, I do not want to see dedicated syntax for no purpose > > other than contracts. > > That's a reasonable objection, except that contracts are a BIG purpose. I think that's a matter of debate, but sure, let's accept that contracts are big. > Would you object to dedicated syntax for object oriented programming? > (Classes and methods.) I hope not. Imagine if OOP in Python was limited > to an API like this: TBH I don't think contracts are nearly as big a tool as classes are. > MyClass = type(name="MyClass", parent=object) > MyClass.add_method(__init__=lambda self, arg: setattr(self, "arg", arg)) > MyClass.add_method(__str__=lambda self: "MyClass(%r)" % (self.arg,)) > MyClass.add_method(spam=lambda self, x, y: (self.arg + x)/y) > MyClass.add_method(eggs=lambda self, x, y: self.arg*x - y) > MyClass.add_member(cheese='Cheddar') > MyClass.add_member(aardvark=None) Actually, that's not far from the way JavaScript's class hierarchy was, up until ES2015. Which suggests that a class keyword is a convenience, but not actually essential. > I know that adding syntax is a big step, and should be considered a last > resort for when a library or even a built-in won't work. But adding > contracts isn't a small benefit. Its not a magic bullet, nobody says > that, but I would say that contracts as a feature is *much* bigger and > more important than (say) docstrings, and we have dedicated syntax for > docstrings. Hmm, what we really have is just string literals, plus a bit of magic that says "if the first thing in a function/class is a string literal, we save it". But sure. > > What I'm interested in is (a) whether something > > can and should be added to the stdlib, and (b) whether some specific > > (and probably small) aspect of it could benefit from language support. > > As a parallel example, consider type hints. The language has ZERO > > support for special syntax for a language of types. > > That's a weird use of the word "ZERO" :-) > > def spam(x: int) -> float: > y: int > > I count three special syntax forms for a language of types: > > - parameter type hints; > - return type hints; > - variable type hints. 
> > (Yes, I'm aware that *technically* we can put anything we like in the > hints, they don't have to be used as type hints, but Guido has made it > clear that such uses are definitely of second-rate importance and only > grudgingly supported.) That's exactly my point though. The syntax is for "attach this thing to that function". There is no syntax for a language of types, a type algebra syntax. We don't have a syntax that says "tuple containing int, two strings, and float". The syntactic support is the very smallest it can be - and, as you say, it's not actually restricted to type hints at all. If I'm reading the dates correctly, annotations were completely non-specific from 2006 (PEP 3107) until 2014 (PEP 484). So what is the smallest piece of syntactic support that would enable decent DbC? And "none at all" is a valid response, although I think in this case it's incorrect. > > What you have is > > simple, straight-forward names like "List", and the normal behaviours > > that we already can do such as subscripting. There is language > > support, however, for attaching expressions to functions and their > > arguments. > > Your point is taken that there is no separate syntax for referring to > the types *themselves*, but then there's no need for such. int is int, > whether you refer to the class "int" or the static type "int". The static type "int" is one of the simplest possible type declarations. There are much more complicated options. > > At the moment, I'm seeing decorator-based contracts as a clunky > > version of unit tests. > > Contracts are not unit tests. > > Contracts and unit tests are complementary, and overlap somewhat, but > they are the same. Unit tests only test the canned values you write in > you tests. Contracts test real data you pass to your application. > And yet all the examples I've seen have just been poor substitutes for unit tests. Can we get some examples that actually do a better job of selling contracts? ChrisA From marko.ristin at gmail.com Sun Sep 30 01:28:25 2018 From: marko.ristin at gmail.com (Marko Ristin-Kaufmann) Date: Sun, 30 Sep 2018 07:28:25 +0200 Subject: [Python-ideas] "old" values in postconditions In-Reply-To: References: <13186373-6FB6-4C8B-A8D8-2C3E028CDC3D@gmail.com> <3C33B6FF-FC19-47D6-AD2A-FC0B17C50A8D@gmail.com> <0061278F-4243-42BD-945D-A93B4A0FC21D@gmail.com> Message-ID: Hi James, I copy/pasted the discussion re the readability tool to an issue on github: https://github.com/Parquery/icontract/issues/48 Would you mind opening a separate issue and copy/pasting what you find relevant re MockP approach in a separate issue? I think it's time to fork the issues and have separate threads with code highlighting etc. Is that OK with you? Cheers, Marko On Sat, 29 Sep 2018 at 21:22, Marko Ristin-Kaufmann wrote: > Hi James, > I reread the proposal with MockP. I still don't get the details, but if I > think I understand the basic idea. You put a placeholder and whenever one > of its methods is called (including dunders), you record it and finally > assemble an AST and compile a lambda function to be executed at actual call > later. > > But that would still fail if you want to have: > @snapshot(var1=some_func(MockP.arg1, MockP.arg2)) > , right? Or there is a way to record that? > > Cheers, > Marko > > Le sam. 29 sept. 2018 ? 00:35, James Lu a ?crit : > >> I am fine with your proposed syntax. It?s certainly lucid. Perhaps it >> would be a good idea to get people accustomed to ?non-magic? syntax. 
>> >> I still have a feeling that most developers would like to store the state >> in many different custom ways. >> >> Please explain. (Expressions like thunk(all)(a == b for a, b in >> P.arg.meth()) would be valid.) >> >> I'm thinking mostly about all the edge cases which we would not be able >> to cover (and how complex that would be to cover them). >> >> >> Except for a > b > c being one flat expression with 5 members, it seems >> fairly easy to recreate an AST, which can then be compiled down to a code >> object. The code object can be fun with a custom ?locals()? >> >> Below is my concept code for such a P object. >> >> from ast import * >> >> # not done: enforce Singleton property on EmptySymbolType >> >> class EmptySymbolType(object): ... >> >> EmptySymbol = EmptySymbolType() # empty symbols are placeholders >> >> class MockP(object): >> >> # "^" is xor >> >> @icontract.pre(lambda symbol, astnode: (symbol is None) ^ (astnode is >> None)) >> >> def __init__(self, symbol=None, value=EmptySymbol, astnode=None, >> initsymtable=(,)): >> >> self.symtable = dict(initsymtable) >> >> if symbol: >> >> self.expr = Expr(value=Name(id=symbol, ctx=Load())) >> >> self.symtable = {symbol: value} >> >> else: >> >> self.expr = astnode >> >> self.frozen = False >> >> def __add__(self, other): >> >> wrapped = MockP.wrap_value(other) >> >> return MockP(astnode=Expr(value=BinOp(self.expr, Add(), >> wrapped.expr), initsymtable={**self.symtable, **wrapped.symtable}) >> >> def compile(self): ... >> >> def freeze(self): >> >> # frozen objects wouldn?t have an overrided getattr, allowing for >> icontract to manipulate the MockP object using its public interface >> >> self.frozen = True >> >> @classmethod >> >> def wrap_value(cls, obj): >> >> # create a MockP object from a value. Generate a random identifier >> and set that as the key in symtable, the AST node is the name of that >> identifier, retrieving its value through simple expression evaluation. >> >> ... >> >> >> thunk = MockP.wrap_value >> >> P = MockP('P') >> >> # elsewhere: ensure P is only accessed via valid ?dot attribute access? >> inside @snapshot so contracts fail early, or don?t and allow Magic like >> __dict__ to occur on P. >> >> On Sep 27, 2018, at 9:49 PM, Marko Ristin-Kaufmann < >> marko.ristin at gmail.com> wrote: >> >> Hi James, >> >> I still have a feeling that most developers would like to store the state >> in many different custom ways. I see also thunk and snapshot with wrapper >> objects to be much more complicated to implement and maintain; I'm thinking >> mostly about all the edge cases which we would not be able to cover (and >> how complex that would be to cover them). Then the linters need also to >> work around such wrappers... It might also scare users off since it looks >> like too much magic. Another concern I also have is that it's probably very >> hard to integrate these wrappers with mypy later -- but I don't really have >> a clue about that, only my gut feeling? >> >> What about we accepted to repeat "lambda P, " prefix, and have something >> like this: >> >> @snapshot( >> lambda P, some_name: len(P.some_property), >> lambda P, another_name: hash(P.another_property) >> ) >> >> It's not too verbose for me and you can still explain in three-four >> sentences what happens below the hub in the library's docs. A >> pycharm/pydev/vim/emacs plugins could hide the verbose parts. >> >> I performed a small experiment to test how this solution plays with >> pylint and it seems OK that arguments are not used in lambdas. 
>> >> Cheers, >> Marko >> >> >> On Thu, 27 Sep 2018 at 12:27, James Lu wrote: >> >>> Why couldn?t we record the operations done to a special object and >>> replay them? >>> >>> Actually, I think there is probably no way around a decorator that >>>> captures/snapshots the data before the function call with a lambda (or even >>>> a separate function). "Old" construct, if we are to parse it somehow from >>>> the condition function, would limit us only to shallow copies (and be >>>> complex to implement as soon as we are capturing out-of-argument values >>>> such as globals *etc.)*. Moreove, what if we don't need shallow >>>> copies? I could imagine a dozen of cases where shallow copy is not what the >>>> programmer wants: for example, s/he might need to make deep copies, hash or >>>> otherwise transform the input data to hold only part of it instead of >>>> copying (*e.g., *so as to allow equality check without a double copy >>>> of the data, or capture only the value of certain property transformed in >>>> some way). >>>> >>>> >>> from icontract import snapshot, P, thunk >>> @snapshot(some_identifier=P.self.some_method(P.some_argument.some_attr)) >>> >>> P is an object of our own type, let?s call the type MockP. MockP returns >>> new MockP objects when any operation is done to it. MockP * MockP = MockP. >>> MockP.attr = MockP. MockP objects remember all the operations done to them, >>> and allow the owner of a MockP object to re-apply the same operations >>> >>> ?thunk? converts a function or object or class to a MockP object, >>> storing the function or object for when the operation is done. >>> >>> thunk(function)() >>> >>> Of course, you could also thunk objects like so: thunk(3) * P.number. >>> (Though it might be better to keep the 3 after P.number in this case so >>> P.number?s __mult__ would be invoked before 3?s __mult__ is invokes. >>> >>> >>> In most cases, you?d save any operations that can be done on a copy of >>> the data as generated by @snapshot in @postcondiion. thunk is for rare >>> scenarios where 1) it?s hard to capture the state, for example an object >>> that manages network state (or database connectivity etc) and whose stage >>> can only be read by an external classmethod 2) you want to avoid using >>> copy.deepcopy. >>> >>> I?m sure there?s some way to override isinstance through a meta class or >>> dunder subclasshook. >>> >>> I suppose this mocking method could be a shorthand for when you don?t >>> need the full power of a lambda. It?s arguably more succinct and readable, >>> though YMMV. >>> >>> I look forward to reading your opinion on this and any ideas you might >>> have. >>> >>> On Sep 26, 2018, at 3:56 PM, James Lu wrote: >>> >>> Hi Marko, >>> >>> Actually, following on #A4, you could also write those as multiple >>> decorators: >>> @snpashot(lambda _, some_identifier: some_func(_, >>> some_argument.some_attr) >>> @snpashot(lambda _, other_identifier: other_func(_.self)) >>> >>> Yes, though if we?re talking syntax using kwargs would probably be >>> better. >>> Using ?P? instead of ?_?: (I agree that _ smells of ignored arguments) >>> >>> @snapshot(some_identifier=lambda P: ..., some_identifier2=lambda P: ...) >>> >>> Kwargs has the advantage that you can extend multiple lines without >>> repeating @snapshot, though many lines of @capture would probably be more >>> intuitive since each decorator captures one variable. >>> >>> Why uppercase "P" and not lowercase (uppercase implies a constant for >>> me)? 
>>> >>> To me, the capital letters are more prominent and explicit- easier to >>> see when reading code. It also implies its a constant for you- you >>> shouldn?t be modifying it, because then you?d be interfering with the >>> function itself. >>> >>> Side node: maybe it would be good to have an @icontract.nomutate >>> (probably use a different name, maybe @icontract.readonly) that makes sure >>> a method doesn?t mutate its own __dict__ (and maybe the __dict__ of the >>> members of its __dict__). It wouldn?t be necessary to put the decorator on >>> every read only function, just the ones your worried might mutate. >>> >>> Maybe a @icontract.nomutate(param=?paramname?) that ensures the __dict__ >>> of all members of the param name have the same equality or identity before >>> and after. The semantics would need to be worked out. >>> >>> On Sep 26, 2018, at 8:58 AM, Marko Ristin-Kaufmann < >>> marko.ristin at gmail.com> wrote: >>> >>> Hi James, >>> >>> Actually, following on #A4, you could also write those as multiple >>> decorators: >>> @snpashot(lambda _, some_identifier: some_func(_, >>> some_argument.some_attr) >>> @snpashot(lambda _, other_identifier: other_func(_.self)) >>> >>> Am I correct? >>> >>> "_" looks a bit hard to read for me (implying ignored arguments). >>> >>> Why uppercase "P" and not lowercase (uppercase implies a constant for >>> me)? Then "O" for "old" and "P" for parameters in a condition: >>> @post(lambda O, P: ...) >>> ? >>> >>> It also has the nice property that it follows both the temporal and the >>> alphabet order :) >>> >>> On Wed, 26 Sep 2018 at 14:30, James Lu wrote: >>> >>>> I still prefer snapshot, though capture is a good name too. We could >>>> use generator syntax and inspect the argument names. >>>> >>>> Instead of ?a?, perhaps use ?_?. Or maybe use ?A.?, for arguments. Some >>>> people might prefer ?P? for parameters, since parameters sometimes means >>>> the value received while the argument means the value passed. >>>> >>>> (#A1) >>>> >>>> from icontract import snapshot, __ >>>> @snapshot(some_func(_.some_argument.some_attr) for some_identifier, _ >>>> in __) >>>> >>>> Or (#A2) >>>> >>>> @snapshot(some_func(some_argument.some_attr) for some_identifier, _, >>>> some_argument in __) >>>> >>>> ? >>>> Or (#A3) >>>> >>>> @snapshot(lambda some_argument,_,some_identifier: >>>> some_func(some_argument.some_attr)) >>>> >>>> Or (#A4) >>>> >>>> @snapshot(lambda _,some_identifier: >>>> some_func(_.some_argument.some_attr)) >>>> @snapshot(lambda _,some_identifier, other_identifier: >>>> some_func(_.some_argument.some_attr), other_func(_.self)) >>>> >>>> I like #A4 the most because it?s fairly DRY and avoids the extra >>>> punctuation of >>>> >>>> @capture(lambda a: {"some_identifier": some_func(a.some_argument.some_attr)}) >>>> >>>> >>>> On Sep 26, 2018, at 12:23 AM, Marko Ristin-Kaufmann < >>>> marko.ristin at gmail.com> wrote: >>>> >>>> Hi, >>>> >>>> Franklin wrote: >>>> >>>>> The name "before" is a confusing name. It's not just something that >>>>> happens before. It's really a pre-`let`, adding names to the scope of >>>>> things after it, but with values taken before the function call. Based >>>>> on that description, other possible names are `prelet`, `letbefore`, >>>>> `predef`, `defpre`, `beforescope`. Better a name that is clearly >>>>> confusing than one that is obvious but misleading. >>>> >>>> >>>> James wrote: >>>> >>>>> I suggest that instead of ?@before? it?s ?@snapshot? and instead of ? >>>>> old? it?s ?snapshot?. 
>>>> >>>> >>>> I like "snapshot", it's a bit clearer than prefixing/postfixing verbs >>>> with "pre" which might be misread (*e.g., *"prelet" has a meaning in >>>> Slavic languages and could be subconsciously misread, "predef" implies to >>>> me a pre-*definition* rather than prior-to-definition , "beforescope" >>>> is very clear for me, but it might be confusing for others as to what it >>>> actually refers to ). What about "@capture" (7 letters for captures *versus >>>> *8 for snapshot)? I suppose "@let" would be playing with fire if >>>> Python with conflicting new keywords since I assume "let" to be one of the >>>> candidates. >>>> >>>> Actually, I think there is probably no way around a decorator that >>>> captures/snapshots the data before the function call with a lambda (or even >>>> a separate function). "Old" construct, if we are to parse it somehow from >>>> the condition function, would limit us only to shallow copies (and be >>>> complex to implement as soon as we are capturing out-of-argument values >>>> such as globals *etc.)*. Moreove, what if we don't need shallow >>>> copies? I could imagine a dozen of cases where shallow copy is not what the >>>> programmer wants: for example, s/he might need to make deep copies, hash or >>>> otherwise transform the input data to hold only part of it instead of >>>> copying (*e.g., *so as to allow equality check without a double copy >>>> of the data, or capture only the value of certain property transformed in >>>> some way). >>>> >>>> I'd still go with the dictionary to allow for this extra freedom. We >>>> could have a convention: "a" denotes to the current arguments, and "b" >>>> denotes the captured values. It might make an interesting hint that we put >>>> "b" before "a" in the condition. You could also interpret "b" as "before" >>>> and "a" as "after", but also "a" as "arguments". >>>> >>>> @capture(lambda a: {"some_identifier": some_func(a.some_argument.some_attr)}) >>>> @post(lambda b, a, result: b.some_identifier > result + a.another_argument.another_attr) >>>> def some_func(some_argument: SomeClass, another_argument: AnotherClass) -> SomeResult: >>>> ... >>>> >>>> "b" can be omitted if it is not used. Under the hub, all the arguments >>>> to the condition would be passed by keywords. >>>> >>>> In case of inheritance, captures would be inherited as well. Hence the >>>> library would check at run-time that the returned dictionary with captured >>>> values has no identifier that has been already captured, and the linter >>>> checks that statically, before running the code. Reading values captured in >>>> the parent at the code of the child class might be a bit hard -- but that >>>> is case with any inherited methods/properties. In documentation, I'd list >>>> all the captures of both ancestor and the current class. >>>> >>>> I'm looking forward to reading your opinion on this and alternative >>>> suggestions :) >>>> Marko >>>> >>>> On Tue, 25 Sep 2018 at 18:12, Franklin? Lee < >>>> leewangzhong+python at gmail.com> wrote: >>>> >>>>> On Sun, Sep 23, 2018 at 2:05 AM Marko Ristin-Kaufmann >>>>> wrote: >>>>> > >>>>> > Hi, >>>>> > >>>>> > (I'd like to fork from a previous thread, "Pre-conditions and >>>>> post-conditions", since it got long and we started discussing a couple of >>>>> different things. Let's discuss in this thread the implementation of a >>>>> library for design-by-contract and how to push it forward to hopefully add >>>>> it to the standard library one day.) 
>>>>> > >>>>> > For those unfamiliar with contracts and current state of the >>>>> discussion in the previous thread, here's a short summary. The discussion >>>>> started by me inquiring about the possibility to add design-by-contract >>>>> concepts into the core language. The idea was rejected by the participants >>>>> mainly because they thought that the merit of the feature does not merit >>>>> its costs. This is quite debatable and seems to reflect many a discussion >>>>> about design-by-contract in general. Please see the other thread, "Why is >>>>> design-by-contract not widely adopted?" if you are interested in that >>>>> debate. >>>>> > >>>>> > We (a colleague of mine and I) decided to implement a library to >>>>> bring design-by-contract to Python since we don't believe that the concept >>>>> will make it into the core language anytime soon and we needed badly a tool >>>>> to facilitate our work with a growing code base. >>>>> > >>>>> > The library is available at http://github.com/Parquery/icontract. >>>>> The hope is to polish it so that the wider community could use it and once >>>>> the quality is high enough, make a proposal to add it to the standard >>>>> Python libraries. We do need a standard library for contracts, otherwise >>>>> projects with conflicting contract libraries can not integrate (e.g., the >>>>> contracts can not be inherited between two different contract libraries). >>>>> > >>>>> > So far, the most important bits have been implemented in icontract: >>>>> > >>>>> > Preconditions, postconditions, class invariants >>>>> > Inheritance of the contracts (including strengthening and weakening >>>>> of the inherited contracts) >>>>> > Informative violation messages (including information about the >>>>> values involved in the contract condition) >>>>> > Sphinx extension to include contracts in the automatically generated >>>>> documentation (sphinx-icontract) >>>>> > Linter to statically check that the arguments of the conditions are >>>>> correct (pyicontract-lint) >>>>> > >>>>> > We are successfully using it in our code base and have been quite >>>>> happy about the implementation so far. >>>>> > >>>>> > There is one bit still missing: accessing "old" values in the >>>>> postcondition (i.e., shallow copies of the values prior to the execution of >>>>> the function). This feature is necessary in order to allow us to verify >>>>> state transitions. >>>>> > >>>>> > For example, consider a new dictionary class that has "get" and >>>>> "put" methods: >>>>> > >>>>> > from typing import Optional >>>>> > >>>>> > from icontract import post >>>>> > >>>>> > class NovelDict: >>>>> > def length(self)->int: >>>>> > ... >>>>> > >>>>> > def get(self, key: str) -> Optional[str]: >>>>> > ... >>>>> > >>>>> > @post(lambda self, key, value: self.get(key) == value) >>>>> > @post(lambda self, key: old(self.get(key)) is None and >>>>> old(self.length()) + 1 == self.length(), >>>>> > "length increased with a new key") >>>>> > @post(lambda self, key: old(self.get(key)) is not None and >>>>> old(self.length()) == self.length(), >>>>> > "length stable with an existing key") >>>>> > def put(self, key: str, value: str) -> None: >>>>> > ... >>>>> > >>>>> > How could we possible implement this "old" function? >>>>> > >>>>> > Here is my suggestion. I'd introduce a decorator "before" that would >>>>> allow you to store whatever values in a dictionary object "old" (i.e. an >>>>> object whose properties correspond to the key/value pairs). The "old" is >>>>> then passed to the condition. 
Here is it in code: >>>>> > >>>>> > # omitted contracts for brevity >>>>> > class NovelDict: >>>>> > def length(self)->int: >>>>> > ... >>>>> > >>>>> > # omitted contracts for brevity >>>>> > def get(self, key: str) -> Optional[str]: >>>>> > ... >>>>> > >>>>> > @before(lambda self, key: {"length": self.length(), "get": >>>>> self.get(key)}) >>>>> > @post(lambda self, key, value: self.get(key) == value) >>>>> > @post(lambda self, key, old: old.get is None and old.length + 1 >>>>> == self.length(), >>>>> > "length increased with a new key") >>>>> > @post(lambda self, key, old: old.get is not None and old.length >>>>> == self.length(), >>>>> > "length stable with an existing key") >>>>> > def put(self, key: str, value: str) -> None: >>>>> > ... >>>>> > >>>>> > The linter would statically check that all attributes accessed in >>>>> "old" have to be defined in the decorator "before" so that attribute errors >>>>> would be caught early. The current implementation of the linter is fast >>>>> enough to be run at save time so such errors should usually not happen with >>>>> a properly set IDE. >>>>> > >>>>> > "before" decorator would also have "enabled" property, so that you >>>>> can turn it off (e.g., if you only want to run a postcondition in testing). >>>>> The "before" decorators can be stacked so that you can also have a more >>>>> fine-grained control when each one of them is running (some during test, >>>>> some during test and in production). The linter would enforce that before's >>>>> "enabled" is a disjunction of all the "enabled"'s of the corresponding >>>>> postconditions where the old value appears. >>>>> > >>>>> > Is this a sane approach to "old" values? Any alternative approach >>>>> you would prefer? What about better naming? Is "before" a confusing name? >>>>> >>>>> The dict can be splatted into the postconditions, so that no special >>>>> name is required. This would require either that the lambdas handle >>>>> **kws, or that their caller inspect them to see what names they take. >>>>> Perhaps add a function to functools which only passes kwargs that fit. >>>>> Then the precondition mechanism can pass `self`, `key`, and `value` as >>>>> kwargs instead of args. >>>>> >>>>> For functions that have *args and **kwargs, it may be necessary to >>>>> pass them to the conditions as args and kwargs instead. >>>>> >>>>> The name "before" is a confusing name. It's not just something that >>>>> happens before. It's really a pre-`let`, adding names to the scope of >>>>> things after it, but with values taken before the function call. Based >>>>> on that description, other possible names are `prelet`, `letbefore`, >>>>> `predef`, `defpre`, `beforescope`. Better a name that is clearly >>>>> confusing than one that is obvious but misleading. >>>>> >>>>> By the way, should the first postcondition be `self.get(key) is >>>>> value`, checking for identity rather than equality? >>>>> >>>> _______________________________________________ >>>> Python-ideas mailing list >>>> Python-ideas at python.org >>>> https://mail.python.org/mailman/listinfo/python-ideas >>>> Code of Conduct: http://python.org/psf/codeofconduct/ >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kenlhilton at gmail.com Sun Sep 30 02:05:04 2018 From: kenlhilton at gmail.com (Ken Hilton) Date: Sun, 30 Sep 2018 14:05:04 +0800 Subject: [Python-ideas] Make None a subclass of int [alternative to iNaN] Message-ID: Hi all, Reading the iNaN discussion, most of the opposition seems to be that adding iNaN would add a new special value to integers and therefore add new complexity. I propose, instead, that we make None a subclass of int (or even a certain value of int) to represent iNaN. Therefore: >>> None + 1, None - 1, None * 2, None / 2, None // 2 (None, None, None, nan, None) # mathematical operations on NaN return NaN >>> None & 1, None | 1, None ^ 1 # I'm not sure about this one. The following could be plausible: (0, 1, 1) # or this might make more sense, as this *is* NaN we're talking about: (None, None, None) >>> isinstance(None, int) True # the whole point of this idea >>> issubclass(type(None), int) True # no matter whether None *is* an int or just a subclass, this will be true as issubclass(int, int) is True I know this is a crazy idea, but I thought it could have some merit, so why not throw it out here? Sharing, Ken Hilton; -------------- next part -------------- An HTML attachment was scrubbed... URL: From marko.ristin at gmail.com Sun Sep 30 02:17:49 2018 From: marko.ristin at gmail.com (Marko Ristin-Kaufmann) Date: Sun, 30 Sep 2018 08:17:49 +0200 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: <23471.39350.891499.861435@turnbull.sk.tsukuba.ac.jp> References: <37e7961b-528d-309f-114f-7194b9051892@kynesim.co.uk> <20180929111925.GU19437@ando.pearwood.info> <23471.39350.891499.861435@turnbull.sk.tsukuba.ac.jp> Message-ID: Hi, I compiled a couple of issues on github to provide a more structured ground for discussions on icontract features: https://github.com/Parquery/icontract/issues (@David Maertz: I also included the issue with automatically generated __doc__ in case you are still interested in it). Cheers, Marko On Sat, 29 Sep 2018 at 17:27, Stephen J. Turnbull < turnbull.stephen.fw at u.tsukuba.ac.jp> wrote: > Steven D'Aprano writes: > > > put (x: ELEMENT; key: STRING) is > > -- Insert x so that it will be retrievable through key. > > require > > count <= capacity > > not key.empty > > do > > ... Some insertion algorithm ... > > ensure > > has (x) > > item (key) = x > > count = old count + 1 > > end > > > > Two pre-conditions, and three post-conditions. That's hardly > > complex. > > You can already do this: > > def put(self, x: Element, key: str) -> None: > """Insert x so that it will be retrievable through key.""" > > # CHECKING PRECONDITIONS > _old_count = self.count > assert self.count <= self.capacity, > assert key > > # IMPLEMENTATION > ... some assertion algorithm ... > > # CHECKING POSTCONDITIONS > assert x in self > assert self[key] == x > assert self.count == _old_count > > return > > I don't see a big advantage to having syntax, unless the syntax allows > you to do things like turn off "expensive" contracts only. Granted, > you save a little bit of typing and eye movement (you can omit > "assert" and have syntax instead of an assignment for checking > postconditions dependent on initial state). > > A document generator can look for the special comments (as with > encoding cookies), and suck in all the asserts following until a > non-assert line of code (or the next special comment). The > assignments will need special handling, an additional special comment > or something. 
With PEP 572, I think you could even do this: > > assert ((_old_count := self.count),) > > to get the benefit of python -O here. > > > If I were writing this in Python, I'd write something like this: > > > > def put(self, x, key): > > """Insert x so that it will be retrievable through key.""" > > # Input checks are pre-conditions! > > if self.count > capacity: > > raise DatabaseFullError > > if not key: > > raise ValueError > > # .. Some insertion algorithm ... > > But this is quite different, as I understand it. Nothing I've seen in > the discussion so far suggests that a contract violation allows > raising differentiated exceptions, and it seems very unlikely from the > syntax in your example above. I could easily see both of these errors > being retryable: > > for _ in range(3): > try: > db.put(x, key) > except DatabaseFullError: > db.resize(expansion_factor=1.5) > db.put(x, key) > except ValueError: > db.put(x, alternative_key) > > > and then stick the post-conditions in a unit test, usually in a > > completely different file: > > If you like the contract-writing style, why would you do either of > these instead of something like the code I wrote above? > > > So what's wrong with the status quo? > > > > - The pre-condition checks are embedded right there in the > > method implementation, mixing up the core algorithm with the > > associated error checking. > > You don't need syntax to separate them, you can use a convention, as I > did above. > > > - Which in turn makes it hard to distinguish the checks from > > the implementation, and impossible to do so automatically. > > sed can do it, why can't we? > > > - Half of the checks are very far away, in a separate file, > > assuming I even remembered or bothered to write the test. > > That was your choice. There's nothing about the assert statement that > says you're not allowed to use it at the end of a definition. > > > - The post-conditions aren't checked unless I run my test suite, and > > then they only check the canned input in the test suite. > > Ditto. > > > - The pre-conditions can't be easily disabled in production. > > What's so hard about python -O? > > > - No class invariants. > > Examples? > > > - Inheritance is not handled correctly. > > Examples? Mixins and classes with additional functionality should > work fine AFAICS. I guess you'd have to write the contracts in each > subclass of an abstract class, which is definitely a minus for some of > the contracts. But I don't see offhand why you would expect that the > full contract of a method of a parent class would typically make sense > without change for an overriding implementation, and might not make > sense for a class with restricted functionality. > > > The status quo is all so very ad-hoc and messy. Design By Contract > > syntax would allow (not force, allow!) us to add some structure to the > > code: > > > > - requirements of the function > > - the implementation of the function > > - the promise made by the function > > Possible already as far as I can see. OK, you could have the compiler > enforce the structure to some extent, but the real problem IMO is > going to be like documentation and testing: programmers just won't do > it regardless of syntax to make it nice and compiler checkable. > > > Most of us already think about these as three separate things, and > > document them as such. Our code should reflect the structure of how we > > think about the code. > > But what's the need for syntax? 
How about the common (in this thread) > complaint that even as decorators, the contract is annoying, verbose, > and distracts the reader from understanding the code? Note: I think > that, as with static typing, this could be mitigated by allowing > contracts to be optionally specified in a stub file. As somebody > pointed out, it shouldn't be hard to write contract strippers and > contract folding in many editors. (As always, we have to admit it's > very difficult to get people to change their editor!) > > > > In my experience this is very rarely true. Most functions I > > > write are fairly short and easily grokked, even if they do > complicated > > > things. That's part of the skill of breaking a problem down, IMHO; > if > > > the function is long and horrible-looking, I've already got it wrong > and > > > no amount of protective scaffolding like DbC is going to help. > > > > That's like saying that if a function is horrible-looking, then there's > > no point in writing tests for it. > > > > I'm not saying that contracts are only for horrible functions, but > > horrible functions are the ones which probably benefit the most from > > specifying exactly what they promise to do, and checking on every > > invocation that they live up to that promise. > > I think you're missing the point then: ISTM that the implicit claim > here is that the time spent writing contracts for a horrible function > would be better spent refactoring it. As you mention in connection > with the Eiffel example, it's not easy to get all the relevant > contracts, and for a horrible function it's going to be hard to get > some of the ones you do write correct. > > > Python (the interpreter) does type checking. Any time you get a > > TypeError, that's a failed type check. And with type annotations, we > can > > run a static type checker on our code too, which will catch many of > > these failures before we run the code. > > But an important strength of contracts is that they are *always* run, > on any input you actually give the function. > > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marko.ristin at gmail.com Sun Sep 30 02:19:23 2018 From: marko.ristin at gmail.com (Marko Ristin-Kaufmann) Date: Sun, 30 Sep 2018 08:19:23 +0200 Subject: [Python-ideas] Simplicity of C (was why is design-by-contracts not widely) In-Reply-To: <20180930044422.GA57629@cskk.homeip.net> References: <20180930044422.GA57629@cskk.homeip.net> Message-ID: Hi Cameron, Just a word of caution: I made a mistake and badly designed interface to icontract. It will change in the near future from: @post(lambda arg1, arg2, result: arg1 < result < arg2) most probably to: @ensures(lambda P: P.arg1 < result < P.arg2) This avoids any name conflicts with "result" if it's in the arguments and also many conflicts with web frameworks which frequently use "post". We will also add snapshotting before the function execution: @snapshot(lambda P, var2: set(arg2)) @ensures(lambda O, P: P.arg1 < result and result in O.var2) so that postcondition can deal with state transitions. There are also some other approaches in discussion. The library name will also need to change. When I started developing it, I was not aware of Java icontract library. 
It will be probably renamed to "pcontract" or any other suggested better name :) Please see the github issues for more details and current discussions: https://github.com/Parquery/icontract/issues On Sun, 30 Sep 2018 at 06:44, Cameron Simpson wrote: > On 30Sep2018 12:17, Chris Angelico wrote: > >At the moment, I'm seeing decorator-based contracts as a clunky > >version of unit tests. We already have "inline unit testing" - it's > >called doctest - and I haven't seen anything pinned down as "hey, this > >is what it'd take to make contracts more viable". Certainly nothing > >that couldn't be done as a third-party package. But I'm still open to > >being swayed on that point. > > Decorator based contracts are very little like clunky unit tests to me. > I'm > basing my opinion on the icontracts pip package, which I'm going to start > using. > > In case you've been looking at something different, it provides a small > number > of decorators including @pre(test-function) and @post(test-function) and > the > class invariant decorator @inv, with good error messages for violations. > > They are _functionally_ like putting assertions in your code at the start > and > end of your functions, but have some advantages: > > - they're exposed, not buried inside the function, where they're easy to > see > and can be considered as contracts > > - they run on _every_ function call, not just during your testing, and get > turned off just like assertions do: when you run Python with the -O > (optimise) option. (There's some more tuning available too.) > > - the assertions make qualitative statements about the object/parameter > state > in the form "the state is consistent if these things apply"; > tests tend to say "here's a situation, do these things and examine these > results". You need to invent the situations and the results, rather than > making general statements about the purpose and functional semantics of > the > class. > > They're different to both unit tests _and_ doctests because they get > exercised > during normal code execution. Both unit tests and doctests run _only_ > during > your test phase, with only whatever test scenarios you have devised. > > The difficulty with unit tests and doctests (both of which I use) and also > integration tests is making something small enough to run but big/wide > enough > to cover all the relevant cases. They _do not_ run against all your real > world > data. It can be quite hard to apply them to your real world data. > > Also, all the @pre/@post/@inv stuff will run _during_ your unit tests and > doctests as well, so they get included in your test regime for free. > > I've got a few classes which have a selftest method whose purpose is to > confirm > correctness of the instance state, and I call that from a few methods at > start > and end, particularly those for which unit tests have been hard to write > or I > know are inadequately covered (and probably never will be because devising > a > sufficient test case is impractical, especially for hard to envisage > corner > cases). > > The icontracts module will be very helpful to me here: I can pull out the > self-test function as the class invariant, and make a bunch of @pre/@post > assertions corresponding the the method semantic definition. > > The flip side of this is that there's no case for language changes in what > I > say above: the decorators look pretty good to my eye. 
> > Cheers, > Cameron Simpson > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at pearwood.info Sun Sep 30 04:02:49 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Sun, 30 Sep 2018 18:02:49 +1000 Subject: [Python-ideas] Simplicity of C (was why is design-by-contracts not widely) In-Reply-To: References: <20180930015413.GH19437@ando.pearwood.info> <20180930042635.GI19437@ando.pearwood.info> Message-ID: <20180930080249.GJ19437@ando.pearwood.info> On Sun, Sep 30, 2018 at 02:50:28PM +1000, Chris Angelico wrote: > And yet all the examples I've seen have just been poor substitutes for > unit tests. Can we get some examples that actually do a better job of > selling contracts? In no particular order... (1) Distance Unit tests are far away from the code you are looking at. Things which go together ought to be together, but typically unit tests are not just seperate from the thing they are testing, but in a completely different file. Contracts are right there, next to the thing they belong with, but without being mixed into the implementation of the method or function. (2) Self-documenting code Contracts are self-documenting code. Unit tests are not. Unit tests are full of boilerplate, creating instances, setting up test data, checking the result. Contracts simply cut to the chase and state the requirements and the promises made: the input must be a non-empty list the result will be a string starting with "Aardvark" as executable code. (3) The "Have you performed the *right* tests?" problem Unit tests are great, but they have a serious problem: they test ONLY the canned data you put in your test. For non-trivial functions, unit tests tell you nothing about the general behaviour of your function, only the specific behaviour with the given input: The key problem with testing is that a test (of any kind) that uses one particular set of inputs tells you nothing at all about the behaviour of the system or component when it is given a different set of inputs. http://thinkrelevance.com/blog/2013/11/26/better-than-unit-tests Contracts are one (partial) solution to this problem. If you have a unit test that does this: def test_spam(self): self.AssertEqual(spam(2).count("eggs"), 2) then it tests ONLY that spam(2) contains "eggs" twice, and that's it. It tells you nothing about whether spam(3) or spam(1028374527601) is correct. In fact, under Test Driven Development, it would be normal to write the test first, and then implement spam as a stub that does this: def spam(n): return "eggs eggs" proving my point that a passing test with one input doesn't mean the function is correct for another input. In contrast, the post-condition: def spam(n): ensure: result.count("eggs") == max(0, n) # implementation is tested on every invocation of spam (up to the point that you decide to disable post-condition checks). There is no need for separate tests for spam(2) and spam(3) and spam(1028374527601) unless you have some specific need for them. (Say, a regression test after fixing a particular bug.) (4) Inheritance Contracts are inherited, unit tests are not. (5) Unit tests and contracts are complementary, not alternatives Unit tests and contracts do overlap, and in the areas of overlap contracts are generally superior. 
But sometimes it is too hard to specify a contract in sufficient detail, and so unit tests are more appropriate. And for some especially simple functions, you can't specify the post-condition except by duplicating the implementation: def max(a, b): """Return the maximum of a and b.""" ensure: result == a if a >= b else b implementation: return a if a >= b else b In that case, a post-condition is a waste of time, and one should just unit test it. https://sebnozzi.github.io/362/contracts-replace-unit-tests/ Another way to think about it is that unit tests and contracts have different purposes. Pre-conditions and class invariants are a form of defensive programming that ensures that your prerequisites are met, that code is called with the correct parameters, etc. Unit tests are a way of doing spot tests that the code works with certain specified inputs. (6) Separation of concerns: function algorithm versus error checking Functions ought to validate their input, but doing so obfuscates the function implementation. Making that input validation a pre-condition separates the error checking and input validation from the algorithm proper, which helps make the code self-documenting. (7) You can't unit test loop invariants Some languages have support for testing loop invariants. You can't unit test loop invariants at all. https://en.wikipedia.org/wiki/Loop_invariant#Programming_language_support (8) Executable documentation Contracts are executable code that document what input is valid and what result is returned. Executable code is preferrable to dead comments because comments rot: "At Resolver we've found it useful to short-circuit any doubt and just refer to comments in code as 'lies'. " -- Michael Foord paraphrases Christian Muirhead on python-dev, 2009-03-22 Contracts can also be extracted by static tools. -- Steve From rosuav at gmail.com Sun Sep 30 04:33:08 2018 From: rosuav at gmail.com (Chris Angelico) Date: Sun, 30 Sep 2018 18:33:08 +1000 Subject: [Python-ideas] Simplicity of C (was why is design-by-contracts not widely) In-Reply-To: <20180930080249.GJ19437@ando.pearwood.info> References: <20180930015413.GH19437@ando.pearwood.info> <20180930042635.GI19437@ando.pearwood.info> <20180930080249.GJ19437@ando.pearwood.info> Message-ID: On Sun, Sep 30, 2018 at 6:03 PM Steven D'Aprano wrote: > > On Sun, Sep 30, 2018 at 02:50:28PM +1000, Chris Angelico wrote: > > > And yet all the examples I've seen have just been poor substitutes for > > unit tests. Can we get some examples that actually do a better job of > > selling contracts? > > In no particular order... > > (1) Distance Don't doctests deal with this too? With the exact same downsides? > (2) Self-documenting code Ditto > (3) The "Have you performed the *right* tests?" problem Great if your contracts can actually be perfectly defined. Every time a weird case is mentioned, those advocating contracts (mainly Marko) give examples showing "hey, contracts can do that too", and they're just testing specifics. > (4) Inheritance Okay, that one I 100% grant you. > (5) Unit tests and contracts are complementary, not alternatives That I agree with. > (6) Separation of concerns: function algorithm versus error checking Agreed, so long as you can define the contract in a way that isn't just duplicating the function's own body. > (7) You can't unit test loop invariants Sure. > (8) Executable documentation Granted, but there are many forms of that. 
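A doctest, for instance, is already executable documentation in this sense - a small illustrative sketch, not taken from the posts above:

def clamp(value, lo, hi):
    """Clamp value into the inclusive range [lo, hi].

    >>> clamp(5, 0, 3)
    3
    >>> clamp(-1, 0, 3)
    0
    >>> clamp(2, 0, 3)
    2
    """
    return max(lo, min(hi, value))

if __name__ == "__main__":
    import doctest
    doctest.testmod()
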
Contracts are great for some situations, but I'm seeing a lot of cases where they're just plain not, yet advocates still say "use contracts, use contracts". Why? ChrisA From storchaka at gmail.com Sun Sep 30 05:04:17 2018 From: storchaka at gmail.com (Serhiy Storchaka) Date: Sun, 30 Sep 2018 12:04:17 +0300 Subject: [Python-ideas] Make None a subclass of int [alternative to iNaN] In-Reply-To: References: Message-ID: 30.09.18 09:05, Ken Hilton ????: > Reading the iNaN discussion, most of the opposition seems to be that > adding iNaN would add a new special value to integers and therefore add > new complexity. > > I propose, instead, that we make None a subclass of int (or even a > certain value of int) to represent iNaN. Therefore: > > ? ? >>> None?+ 1, None - 1, None * 2, None / 2, None // 2 > ? ? (None, None, None, nan, None) # mathematical operations on NaN > return NaN > ? ? >>> None & 1, None | 1, None ^ 1 > ? ? # I'm not sure about this one. The following could be plausible: > ? ? (0, 1, 1) > ? ? # or this might make more sense, as this *is* NaN we're talking about: > ? ? (None, None, None) > ? ? >>> isinstance(None, int) > ? ? True # the whole point of this idea > ? ? >>> issubclass(type(None), int) > ? ? True # no matter whether None *is* an int or just a subclass, this > will be true as issubclass(int, int) is True > > I know this is a crazy idea, but I thought it could have some merit, so > why not throw it out here? This will make some errors passing silently (instead of raising a TypeError or AttributeError earlier) and either cause errors far from the initial place or producing an incorrect result. From storchaka at gmail.com Sun Sep 30 05:09:45 2018 From: storchaka at gmail.com (Serhiy Storchaka) Date: Sun, 30 Sep 2018 12:09:45 +0300 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: <20180930010726.GG19437@ando.pearwood.info> References: <20180930010726.GG19437@ando.pearwood.info> Message-ID: 30.09.18 04:07, Steven D'Aprano ????: > Telling people that they don't understand their own code when you don't > know their code is not very productive. I can't tell him what he should do with his (not working) code, but it doesn't look like a good justification for changes in the Python core. From turnbull.stephen.fw at u.tsukuba.ac.jp Sun Sep 30 08:11:27 2018 From: turnbull.stephen.fw at u.tsukuba.ac.jp (Stephen J. Turnbull) Date: Sun, 30 Sep 2018 21:11:27 +0900 Subject: [Python-ideas] Simplicity of C (was why is design-by-contracts not widely) In-Reply-To: <20180930080249.GJ19437@ando.pearwood.info> References: <20180930015413.GH19437@ando.pearwood.info> <20180930042635.GI19437@ando.pearwood.info> <20180930080249.GJ19437@ando.pearwood.info> Message-ID: <23472.48495.587491.729759@turnbull.sk.tsukuba.ac.jp> Steven D'Aprano writes: > (1) Distance > (2) Self-documenting code > (3) The "Have you performed the *right* tests?" problem > (4) Inheritance > > Contracts are inherited, unit tests are not. What does "inherited" mean? Just that methods that are not overridden retain their contracts? > (6) Separation of concerns: function algorithm versus error checking > > Functions ought to validate their input, but doing so obfuscates the > function implementation. Making that input validation a pre-condition > separates the error checking and input validation from the algorithm > proper, which helps make the code self-documenting. There's nothing in the Eiffel syntax that distinguishes *which* contract(s) was (were) violated. Is there some magic? 
Or does the Eiffel process just die? (ISTR that is a typical error "recovery" approach in systems implemented in Eiffel, maybe from the Beautiful Code book?) > (7) You can't unit test loop invariants I don't see how a loop invariant can be elegantly specified without mixing it in to the implementation. Can you show an example of code written in a language with support for loop invariants *not* mixed into the implementation? > (8) Executable documentation I don't see how any of your points fail to be satisfied by use of asserts with a convention that (except for loop invariants) they're placed either at the beginning or the end of the function, depending on whether they're pre- or post-conditions. This requires a single- exit style which may sometimes be unnatural, of course. AFAICS you can program in contract style already in Python. Contracts involving both beginning and end state would require annoying local variables, definitely. But other than that, all I see is a desire for unnecessary syntax, or stdlib, support. From elazarg at gmail.com Sun Sep 30 08:32:16 2018 From: elazarg at gmail.com (Elazar) Date: Sun, 30 Sep 2018 15:32:16 +0300 Subject: [Python-ideas] Simplicity of C (was why is design-by-contracts not widely) In-Reply-To: <23472.48495.587491.729759@turnbull.sk.tsukuba.ac.jp> References: <20180930015413.GH19437@ando.pearwood.info> <20180930042635.GI19437@ando.pearwood.info> <20180930080249.GJ19437@ando.pearwood.info> <23472.48495.587491.729759@turnbull.sk.tsukuba.ac.jp> Message-ID: On Sun, Sep 30, 2018, 15:12 Stephen J. Turnbull < turnbull.stephen.fw at u.tsukuba.ac.jp> wrote: > Steven D'Aprano writes: > > > (4) Inheritance > > > > Contracts are inherited, unit tests are not. > > What does "inherited" mean? Just that methods that are not overridden > retain their contracts? > Contracts are attached to interfaces, not to specifications. So when you have abstract base class, it defines contracts, and implementing classes must adhere to these contracts - the can only strengthen it, not weaken it. This way the user code need pnly be aware of the specification, not the implementation. So method that _are_ overridden retain their contracts. This is precisely like with types, since types are contracts (and vice versa, in a way). Elazar -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at pearwood.info Sun Sep 30 08:55:42 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Sun, 30 Sep 2018 22:55:42 +1000 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: References: <20180930010726.GG19437@ando.pearwood.info> Message-ID: <20180930125542.GL19437@ando.pearwood.info> On Sun, Sep 30, 2018 at 12:09:45PM +0300, Serhiy Storchaka wrote: > 30.09.18 04:07, Steven D'Aprano ????: > >Telling people that they don't understand their own code when you don't > >know their code is not very productive. > > I can't tell him what he should do with his (not working) code, but it > doesn't look like a good justification for changes in the Python core. You don't know that his code is not working. For all you know, Steve has working code that works around the lack of an int NAN in some other, more clumsy, less elegant, ugly and slow way. NANs are useful for when you don't want a calculation to halt on certain errors, or on missing data. That ability of a NAN to propogate through the calculation instead of halting can be useful when your data are ints, not just floats or Decimals. 
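To make the propagation point concrete with floats, where NANs already exist (a small illustration, not part of any proposal):

import math

readings = [2.5, float("nan"), 7.0]     # one bad/missing data point
scaled = [r * 10 for r in readings]     # no exception is raised anywhere
total = sum(scaled)
print(total, math.isnan(total))         # nan True -- the NAN flowed to the end

There is currently no way to get that behaviour when the data are ints.
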
Earlier, I suggested that this proposal would probably be best done as a
subclass of int. It certainly should be prototyped as a subclass before
we consider making a builtin int NAN. Since Steve has already agreed to
work on that first, I think any further discussion would be pointless
until he comes back to us. He may decide that a subclass solves his
problem and no longer want a builtin int NAN.


-- 
Steve
but not the same Steve as above...

From gadgetsteve at live.co.uk  Sun Sep 30 09:41:24 2018
From: gadgetsteve at live.co.uk (Steve Barnes)
Date: Sun, 30 Sep 2018 13:41:24 +0000
Subject: [Python-ideas] Suggestion: Extend integers to include iNaN
In-Reply-To: <20180930125542.GL19437@ando.pearwood.info>
References: <20180930010726.GG19437@ando.pearwood.info>
 <20180930125542.GL19437@ando.pearwood.info>
Message-ID: 

On 30/09/2018 13:55, Steven D'Aprano wrote:
> On Sun, Sep 30, 2018 at 12:09:45PM +0300, Serhiy Storchaka wrote:
>> 30.09.18 04:07, Steven D'Aprano wrote:
>>> Telling people that they don't understand their own code when you don't
>>> know their code is not very productive.
>>
>> I can't tell him what he should do with his (not working) code, but it
>> doesn't look like a good justification for changes in the Python core.
>
> You don't know that his code is not working. For all you know, Steve has
> working code that works around the lack of an int NAN in some other,
> more clumsy, less elegant, ugly and slow way.
>
> NANs are useful for when you don't want a calculation to halt on certain
> errors, or on missing data. That ability of a NAN to propagate through
> the calculation instead of halting can be useful when your data are
> ints, not just floats or Decimals.
>
> Earlier, I suggested that this proposal would probably be best done as a
> subclass of int. It certainly should be prototyped as a subclass before
> we consider making a builtin int NAN. Since Steve has already agreed to
> work on that first, I think any further discussion would be pointless
> until he comes back to us. He may decide that a subclass solves his
> problem and no longer want a builtin int NAN.
>
>
I have had (over the years) a lot of working code with lots of checks in
and a huge number of paths through, due to the lack of something like
iNaN, or something to return for "that didn't work": floats & complex
have NaN, strings have the empty string, lists and sets can be empty,
but there is no such option for integers. Hence the suggestion. I am
heartened that the authors of the Decimal library also felt the need for
NaN (as well as INF & -INF).

I am roughing out such a class and some test cases which will hopefully
include some cases where the hoped-for advantages can be realised.

My thinking on bitwise operations is to do the same as arithmetic
operations, i.e. (anything op iNaN) = iNaN and likewise for shift
operations.
-- 
Steve (Gadget) Barnes
Any opinions in this message are my personal opinions and do not reflect
those of my employer.

---
This email has been checked for viruses by AVG.
https://www.avg.com

From mertz at gnosis.cx  Sun Sep 30 09:55:58 2018
From: mertz at gnosis.cx (David Mertz)
Date: Sun, 30 Sep 2018 09:55:58 -0400
Subject: [Python-ideas] Suggestion: Extend integers to include iNaN
In-Reply-To: 
References: 
 <20180930010726.GG19437@ando.pearwood.info>
 <20180930125542.GL19437@ando.pearwood.info>
Message-ID: 

Notwithstanding my observation of one case where a 'nan' float doesn't
stay a nan, I definitely want something like iNaN. Btw, are there other
operations on NaNs that do not produce NaNs?
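For concreteness, a minimal sketch of the kind of prototype being talked
about here (all names are illustrative, only a couple of operations are
covered, and a real class would need far more care):

    class _IntNaN:
        """Illustrative sentinel that absorbs integer arithmetic, like float('nan')."""
        def __add__(self, other):
            return self
        __radd__ = __sub__ = __rsub__ = __mul__ = __rmul__ = __add__
        def __eq__(self, other):
            return False    # NaN compares unequal to everything, including itself
        def __hash__(self):
            return 0
        def __bool__(self):
            return False    # one possible answer to the truthiness question raised later in the thread
        def __repr__(self):
            return 'iNaN'

    iNaN = _IntNaN()

    class NaNAwareInt(int):
        """Illustrative int subclass whose failing operations return iNaN instead of raising."""
        def __floordiv__(self, other):
            try:
                return NaNAwareInt(int.__floordiv__(self, other))
            except ZeroDivisionError:
                return iNaN

    >>> NaNAwareInt(7) // 2 + 1
    4
    >>> NaNAwareInt(1) // 0
    iNaN
    >>> (NaNAwareInt(1) // 0) + 5
    iNaN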
I suspect a NaNAwareInt subclass is the easiest way to get there, but I'm agnostic on that detail. For the very same reasons that other numeric types benefit from NaN, ints would also. I.e. I want to do a series of numeric operations on a bunch of input numbers, and it's less cumbersome to check if we went to NaN-land at the end than it is to try/except around every op. On Sun, Sep 30, 2018, 9:42 AM Steve Barnes wrote: > > > On 30/09/2018 13:55, Steven D'Aprano wrote: > > On Sun, Sep 30, 2018 at 12:09:45PM +0300, Serhiy Storchaka wrote: > >> 30.09.18 04:07, Steven D'Aprano ????: > >>> Telling people that they don't understand their own code when you don't > >>> know their code is not very productive. > >> > >> I can't tell him what he should do with his (not working) code, but it > >> doesn't look like a good justification for changes in the Python core. > > > > You don't know that his code is not working. For all you know, Steve has > > working code that works around the lack of an int NAN in some other, > > more clumsy, less elegant, ugly and slow way. > > > > NANs are useful for when you don't want a calculation to halt on certain > > errors, or on missing data. That ability of a NAN to propogate through > > the calculation instead of halting can be useful when your data are > > ints, not just floats or Decimals. > > > > Earlier, I suggested that this proposal would probably be best done as a > > subclass of int. It certainly should be prototyped as a subclass before > > we consider making a builtin int NAN. Since Steve has already agreed to > > work on that first, I think any further discussion would be pointless > > until he comes back to us. He may decide that a subclass solves his > > problem and no longer want a builtin int NAN. > > > > > I have had (over the years) a lot of working code with lots of checks in > and a huge number of paths through due to the lack of such of iNaN, or > something to return for "that didn't work", floats & complex have NaN, > strings have empty string list and sets can be empty but there is no > such option for integers. Hence the suggestion. I am hartened that the > authors of the Decimal library also felt the need for NaN (as well as > INF & -INF). > > I am roughing out such a class and some test cases which will hopefully > include some cases where the hoped for advantages can be realised. > > My thinking on bitwise operations is to do the same as arithmetic > operations, i.e. (anything op iNaN) = iNaN and likewise for shift > operations. > -- > Steve (Gadget) Barnes > Any opinions in this message are my personal opinions and do not reflect > those of my employer. > > --- > This email has been checked for viruses by AVG. > https://www.avg.com > > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mertz at gnosis.cx Sun Sep 30 10:15:52 2018 From: mertz at gnosis.cx (David Mertz) Date: Sun, 30 Sep 2018 10:15:52 -0400 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: References: <20180930010726.GG19437@ando.pearwood.info> <20180930125542.GL19437@ando.pearwood.info> Message-ID: For similar reasons, I'd like an iInf too, FWIW. It's good for an overflow value, although it's hard to get there in Python ints (would 'NaNAwareInt(1)/0' be an exception or iInf?). 
Bonus points for anyone who knows the actual maximum size of Python ints :-). However, the main use I'd have for iInf is simply as a starting value in a minimization loop. E.g. minimum = NaNAwareInt('inf') for i in the_data: minimum = min(i, minimum) other_stuff(i, minimum, a, b, c) I've written that code a fair number of times; usually I just pick a placeholder value that is "absurdly large relative to my domain", but a clean infinity would be slightly better. E.g. 'minimum = 10**100'. On Sun, Sep 30, 2018 at 9:55 AM David Mertz wrote: > Notwithstanding my observation of one case where 'nan float' doesn't > stay a nan, I definitely want something like iNaN. Btw are there other > operations on NaN's do not produce NaN's? > > I suspect a NaNAwareInt subclass is the easiest way to get there, but I'm > agnostic on that detail. > > For the very same reasons that other numeric types benefit from NaN, ints > would also. I.e. I want to do a series of numeric operations on a bunch of > input numbers, and it's less cumbersome to check if we went to NaN-land at > the end than it is to try/except around every op. > > > On Sun, Sep 30, 2018, 9:42 AM Steve Barnes wrote: > >> >> >> On 30/09/2018 13:55, Steven D'Aprano wrote: >> > On Sun, Sep 30, 2018 at 12:09:45PM +0300, Serhiy Storchaka wrote: >> >> 30.09.18 04:07, Steven D'Aprano ????: >> >>> Telling people that they don't understand their own code when you >> don't >> >>> know their code is not very productive. >> >> >> >> I can't tell him what he should do with his (not working) code, but it >> >> doesn't look like a good justification for changes in the Python core. >> > >> > You don't know that his code is not working. For all you know, Steve has >> > working code that works around the lack of an int NAN in some other, >> > more clumsy, less elegant, ugly and slow way. >> > >> > NANs are useful for when you don't want a calculation to halt on certain >> > errors, or on missing data. That ability of a NAN to propogate through >> > the calculation instead of halting can be useful when your data are >> > ints, not just floats or Decimals. >> > >> > Earlier, I suggested that this proposal would probably be best done as a >> > subclass of int. It certainly should be prototyped as a subclass before >> > we consider making a builtin int NAN. Since Steve has already agreed to >> > work on that first, I think any further discussion would be pointless >> > until he comes back to us. He may decide that a subclass solves his >> > problem and no longer want a builtin int NAN. >> > >> > >> I have had (over the years) a lot of working code with lots of checks in >> and a huge number of paths through due to the lack of such of iNaN, or >> something to return for "that didn't work", floats & complex have NaN, >> strings have empty string list and sets can be empty but there is no >> such option for integers. Hence the suggestion. I am hartened that the >> authors of the Decimal library also felt the need for NaN (as well as >> INF & -INF). >> >> I am roughing out such a class and some test cases which will hopefully >> include some cases where the hoped for advantages can be realised. >> >> My thinking on bitwise operations is to do the same as arithmetic >> operations, i.e. (anything op iNaN) = iNaN and likewise for shift >> operations. >> -- >> Steve (Gadget) Barnes >> Any opinions in this message are my personal opinions and do not reflect >> those of my employer. >> >> --- >> This email has been checked for viruses by AVG. 
>> https://www.avg.com >> >> _______________________________________________ >> Python-ideas mailing list >> Python-ideas at python.org >> https://mail.python.org/mailman/listinfo/python-ideas >> Code of Conduct: http://python.org/psf/codeofconduct/ >> > -- Keeping medicines from the bloodstreams of the sick; food from the bellies of the hungry; books from the hands of the uneducated; technology from the underdeveloped; and putting advocates of freedom in prisons. Intellectual property is to the 21st century what the slave trade was to the 16th. -------------- next part -------------- An HTML attachment was scrubbed... URL: From klahnakoski at mozilla.com Sun Sep 30 10:19:28 2018 From: klahnakoski at mozilla.com (Kyle Lahnakoski) Date: Sun, 30 Sep 2018 10:19:28 -0400 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: References: <20180930010726.GG19437@ando.pearwood.info> <20180930125542.GL19437@ando.pearwood.info> Message-ID: <6cb6e355-f0a0-677d-14fc-78ebb9f68513@mozilla.com> On 2018-09-30 09:41, Steve Barnes wrote: > I am roughing out such a class and some test cases which will hopefully > include some cases where the hoped for advantages can be realised. > > My thinking on bitwise operations is to do the same as arithmetic > operations, i.e. (anything op iNaN) = iNaN and likewise for shift > operations. Steve, While you are extending a number system, can every int be truthy, while only iNan be falsey?? I found that behaviour more useful because checking if there is a value is more common than checking if it is a zero value. Thank you From klahnakoski at mozilla.com Sun Sep 30 10:19:27 2018 From: klahnakoski at mozilla.com (Kyle Lahnakoski) Date: Sun, 30 Sep 2018 10:19:27 -0400 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: References: <20180930010726.GG19437@ando.pearwood.info> <20180930125542.GL19437@ando.pearwood.info> Message-ID: <91cbff66-68bc-8453-559c-03ec27387ebf@mozilla.com> On 2018-09-30 09:41, Steve Barnes wrote: > I am roughing out such a class and some test cases which will hopefully > include some cases where the hoped for advantages can be realised. > > My thinking on bitwise operations is to do the same as arithmetic > operations, i.e. (anything op iNaN) = iNaN and likewise for shift > operations. Steve, While you are extending a number system, can every int be truthy, while only iNan be falsey?? I found that behaviour more useful because checking if there is a value is more common than checking if it is a zero value. Thank you From rosuav at gmail.com Sun Sep 30 10:22:07 2018 From: rosuav at gmail.com (Chris Angelico) Date: Mon, 1 Oct 2018 00:22:07 +1000 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: References: <20180930010726.GG19437@ando.pearwood.info> <20180930125542.GL19437@ando.pearwood.info> Message-ID: On Mon, Oct 1, 2018 at 12:18 AM David Mertz wrote: > > For similar reasons, I'd like an iInf too, FWIW. It's good for an overflow value, although it's hard to get there in Python ints (would 'NaNAwareInt(1)/0' be an exception or iInf?). Bonus points for anyone who knows the actual maximum size of Python ints :-). > Whatever the maximum is, it's insanely huge. I basically consider that a Python int is as large as your computer has memory to store. I can work with numbers so large that converting to string takes a notable amount of time (never mind about actually printing it to a console, just 'x = str(x)' pauses the interpreter for ages). 
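A quick way to see that effect, assuming nothing beyond the standard
library (CPython's int-to-decimal-string conversion is roughly quadratic
in the number of digits, so doubling the bit length roughly quadruples the
str() time; the helper name here is purely illustrative):

    import time

    def str_time(bits):
        n = 1 << bits                      # an integer with bits+1 binary digits
        start = time.perf_counter()
        text = str(n)                      # the expensive step being discussed
        return len(text), time.perf_counter() - start

    for bits in (250_000, 500_000, 1_000_000):
        digits, seconds = str_time(bits)
        print(f"{bits:>9} bits -> {digits} digits in {seconds:.3f}s")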
If there's a limit, it'll probably be described as something like 2**2**2**N for some ridiculously large N. Want to share what the maximum actually is? I'm very curious! ChrisA From steve at pearwood.info Sun Sep 30 10:28:45 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Mon, 1 Oct 2018 00:28:45 +1000 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: <5BAC807A.2070509@canterbury.ac.nz> Message-ID: <20180930142845.GM19437@ando.pearwood.info> On Fri, Sep 28, 2018 at 01:49:01PM +0100, Paul Moore wrote: > There's clearly a number of trade-offs going on here: > > * Conditions should be short, to avoid clutter > * Writing helper functions that are *only* used in conditions is more > code to test or get wrong This is no different from any other refactoring. If you have a function that checks its input: def spam(arg): if condition(arg) and other_condition(arg) or alt_condition(arg): raise ValueError and refactor it to a helper: def invalid(arg): return condition(arg) and other_condition(arg) or alt_condition(arg) def spam(arg): if invalid(arg): raise ValueError how is that a bad thing just because we call it a "precondition" instead of calling it "error checking"? Of course we don't necessarily want the proliferation of masses and masses of tiny helper functions, but nor should we fear them. Helpers should carry their own weight, and if they do, we should use them. Whether they are used for contracts or not makes zero difference. > * Sometimes it's just plain hard to express a verbal constraint in code Indeed. People seem to be arguing against some strawman of "Design By Contract is a magic bullet that solves every imaginable problem". Of course it doesn't. Some constraints are too hard to specify as code. Okay, then don't do that. DbC isn't "all or nothing". If you can't write a contract for something, don't. You still get value from the contracts you do write. [...] > But given that *all* the examples I've seen of contracts have this > issue (difficult to read expressions) I suspect the problem is > inherent. Are you specifically talking about *Python* examples? Or contracts in general? I don't know Eiffel very well, but I find this easy to read and understand (almost as easy as Python). The trickiest thing is the implicit "self". put (x: ELEMENT; key: STRING) is -- Insert x so that it will be retrievable through key. require count <= capacity not key.empty do ... Some insertion algorithm ... ensure has (x) item (key) = x count = old count + 1 end https://www.eiffel.com/values/design-by-contract/introduction/ Here are a couple of examples from Cobra: def fraction( numer as int, denom as int) as float require numer > 0 denom <> 0 body ... def bumpState( incr as int) as int require incr > 0 ensure result >= incr .state = old.state + incr body .state += incr return .state http://cobra-language.com/trac/cobra/wiki/Contracts If you find them difficult to read, I don't know what to say :-) > Another thing that I haven't yet seen clearly explained. How do these > contracts get *run*? Are they checked on every call to the function, Yes, that's the point of them. In development they're always on. Every time you run your dev code, it tests itself. > even in production code? That's up to you, but typical practice is to check pre-conditions (your input) but not post-conditions (your output) in production. > Is there a means to turn them off? 
What's the > runtime overhead of a "turned off" contract (even if it's just an > if-test per condition, that can still add up)? Other languages may offer different options, but in Eiffel, contracts checking can be set to: no: assertions have no run-time effect. require: monitor preconditions only, on routine entry. ensure: preconditions on entry, postconditions on exit. invariant: same as ensure, plus class invariant on both entry and exit for qualified calls. all: same as invariant, plus check instructions, loop invariants and loop variants. You can set the checking level globally, or class-by-class. The default is to check only preconditions. That is, for methods to validate their inputs. Quoting from the Eiffel docs: When releasing the final version of a system, it is usually appropriate to turn off assertion monitoring, or bring it down to the ``require`` level. The exact policy depends on the circumstances; it is a trade off between efficiency considerations, the potential cost of mistakes, and how much the developers and quality assurance team trust the product. When developing the software, however, you should always assume -- to avoid loosening your guard -- that in the end monitoring will be turned off. https://www.eiffel.org/doc/eiffel/ET-_Design_by_Contract_%28tm%29%2C_Assertions_and_Exceptions The intention for Python would be similar: - we ought to be able disable contract checking globally; - and preferrably on a case-by-case basis; - a disabled contact ought to be like a disabled assertion, that is, completely gone with no runtime effect at all; - but due to the nature of Python's execution model, there will probably be some (small) overhead at the time the function is created, but not when the function is called. Of course the overhead will depend on the implementation. > And what happens if a > contract fails - is it an exception/traceback (which is often > unacceptable in production code such as services)? What happens when any piece of error checking code fails and raises an exception? In the case of Python, a failed contract would be an exception. What you do with the exception is up to you. The *intention* is that a failed contract is a bug, so what you *ought to do* is fix the bug. But you could catch it, retry the operation, restart the service, or whatever. That's worth repeating: *contracts aren't for testing end-user input* but for checking internal program state. A failed contract ought to be considered a bug. Any *expected error state* (such as a missing file, or bad user input, or a network outage) shouldn't be treated as a contract. -- Steve From mertz at gnosis.cx Sun Sep 30 10:29:50 2018 From: mertz at gnosis.cx (David Mertz) Date: Sun, 30 Sep 2018 10:29:50 -0400 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: <20180929111925.GU19437@ando.pearwood.info> References: <37e7961b-528d-309f-114f-7194b9051892@kynesim.co.uk> <20180929111925.GU19437@ando.pearwood.info> Message-ID: On Sat, Sep 29, 2018 at 7:20 AM Steven D'Aprano wrote: > On Wed, Sep 26, 2018 at 04:03:16PM +0100, Rhodri James wrote: > > Assuming that you > > aren't doing some kind of wide-ranging static analysis (which doesn't > > seem to be what we're talking about), all that the contracts have bought > > you is the assurance that *this* invocation of the function with *these* > > parameters giving *this* result is what you expected. It does not say > > anything about the reliability of the function in general. 
> > This is virtually the complete opposite of what contracts give us. What > you are describing is the problem with *unit testing*, not contracts. > I think Steven's is backwards in its own way. - Contracts test the space of arguments *actually used during testing period* (or during initial production if the performance hit is acceptable). - Unit tests test the space of arguments *thought of by the developers*. *A priori,* either one of those can cover cases not addressed by the other. If unit tests use the hypothesis library or similar approaches, unit tests might very well examine arguments unlikely to be encountered in real-world (or test phase) use... these are nonetheless edge cases that are important to assure correct behavior on ("correct" can mean various things, of course: exceptions, recovery, default values whatever). In contrast, contracts might well find arguments that the developers of unit tests had not thought of. I tend to think someone sitting down trying to think of edge cases is going to be able to write more thorough tests than the accident of "what did we see during this run" ... but it could go either way. Of course... my own approach to this concern would more likely be to use a logging decorator rather than a DbC one. Then collect those logs that show all the arguments that were passed to a given function during a testing period, and roll those back into the unit tests. My approach is a bit more manual work, but also more flexible and more powerful. - Half of the checks are very far away, in a separate file, assuming > I even remembered or bothered to write the test. > To me, this is the GREATEST VIRTUE of unit tests over DbC. It puts the tests far away where they don't distract from reading and understanding the function itself. I rarely want my checks proximate since I wear a very different hat when thinking about checks than when writing functionality (ideally, a lot of the time, I wear the unit test hat *before* I write the implementation; TDD is usually good practice). > - The post-conditions aren't checked unless I run my test suite, and > then they only check the canned input in the test suite. > Yes, this is a great advantage of unit tests. No cost until you explicitly run them. > - No class invariants. > - Inheritance is not handled correctly. > These are true. Also they are things I care very little about. -- Keeping medicines from the bloodstreams of the sick; food from the bellies of the hungry; books from the hands of the uneducated; technology from the underdeveloped; and putting advocates of freedom in prisons. Intellectual property is to the 21st century what the slave trade was to the 16th. -------------- next part -------------- An HTML attachment was scrubbed... URL: From klahnakoski at mozilla.com Sun Sep 30 10:31:11 2018 From: klahnakoski at mozilla.com (Kyle Lahnakoski) Date: Sun, 30 Sep 2018 10:31:11 -0400 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: References: <20180930010726.GG19437@ando.pearwood.info> <20180930125542.GL19437@ando.pearwood.info> Message-ID: On 2018-09-30 10:15, David Mertz wrote: > For similar?reasons, I'd like an iInf too, FWIW.? It's good for an > overflow value, although it's hard to get there in Python ints (would > 'NaNAwareInt(1)/0' be an exception or iInf?).? Bonus points for anyone > who knows the actual maximum size of Python ints :-). > > However, the main use I'd have for iInf is simply as a starting value > in a minimization loop.? E.g. 
> > minimum = NaNAwareInt('inf') > for i in the_data: > ? ? minimum = min(i, minimum) > > ? ? other_stuff(i, minimum, a, b, c) > > > I've written that code a fair number of times; usually I just pick a > placeholder value that is "absurdly large relative to my domain", but > a clean infinity would be slightly better.? E.g. 'minimum = 10**100'. If we conceptualize iNan as "not an integer", then we can define operators in two manners: Let |?| be any operator: 1) "conservative" - operators that define a|? |b==iNaN if either a or b is iNan 2) "decisive" - operators that never return iNan With a decisive min(a, b), you can write the code you want without needing iINF ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From mertz at gnosis.cx Sun Sep 30 10:43:54 2018 From: mertz at gnosis.cx (David Mertz) Date: Sun, 30 Sep 2018 10:43:54 -0400 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: References: <20180930010726.GG19437@ando.pearwood.info> <20180930125542.GL19437@ando.pearwood.info> Message-ID: On Sun, Sep 30, 2018 at 10:23 AM Chris Angelico wrote: > On Mon, Oct 1, 2018 at 12:18 AM David Mertz wrote: > > Bonus points for anyone who knows the actual maximum size of Python ints > :-). > > Whatever the maximum is, it's insanely huge. > Want to share what the maximum actually is? I'm very curious! > Indeed. It's a lot bigger than any machine that will exist in my lifetime can hold. int.bit_length() is stored as a system-native integer, e.g. 64-bit, rather than recursively as a Python int. So the largest Python int is '2**sys.maxsize` (e.g. '2**(2**63-1)'). I may possibly have an off-by-one or off-by-power-of-two in there :-). -- Keeping medicines from the bloodstreams of the sick; food from the bellies of the hungry; books from the hands of the uneducated; technology from the underdeveloped; and putting advocates of freedom in prisons. Intellectual property is to the 21st century what the slave trade was to the 16th. -------------- next part -------------- An HTML attachment was scrubbed... URL: From boxed at killingar.net Sun Sep 30 10:45:19 2018 From: boxed at killingar.net (=?utf-8?Q?Anders_Hovm=C3=B6ller?=) Date: Sun, 30 Sep 2018 16:45:19 +0200 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: <91cbff66-68bc-8453-559c-03ec27387ebf@mozilla.com> References: <20180930010726.GG19437@ando.pearwood.info> <20180930125542.GL19437@ando.pearwood.info> <91cbff66-68bc-8453-559c-03ec27387ebf@mozilla.com> Message-ID: >> I am roughing out such a class and some test cases which will hopefully >> include some cases where the hoped for advantages can be realised. >> >> My thinking on bitwise operations is to do the same as arithmetic >> operations, i.e. (anything op iNaN) = iNaN and likewise for shift >> operations. > > Steve, > > While you are extending a number system, can every int be truthy, while > only iNan be falsey? I found that behaviour more useful because > checking if there is a value is more common than checking if it is a > zero value. I?m not saying you?re wrong in principle but such a change to Python seems extremely disruptive. And if we?re talking about robustness of code then truthiness would be better like in Java (!) imo, where only true is true and false is false and everything else is an error. If we?re actually talking about changing the truth table of Python for basic types then this is the logical next step. But making any change to the basic types truth table is a big -1 from me. 
This seems like a Python 2-3 transition to me. / Anders From rosuav at gmail.com Sun Sep 30 10:48:12 2018 From: rosuav at gmail.com (Chris Angelico) Date: Mon, 1 Oct 2018 00:48:12 +1000 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: References: <20180930010726.GG19437@ando.pearwood.info> <20180930125542.GL19437@ando.pearwood.info> Message-ID: On Mon, Oct 1, 2018 at 12:44 AM David Mertz wrote: > > On Sun, Sep 30, 2018 at 10:23 AM Chris Angelico wrote: >> >> On Mon, Oct 1, 2018 at 12:18 AM David Mertz wrote: >> > Bonus points for anyone who knows the actual maximum size of Python ints :-). >> >> Whatever the maximum is, it's insanely huge. >> Want to share what the maximum actually is? I'm very curious! > > > Indeed. It's a lot bigger than any machine that will exist in my lifetime can hold. > > int.bit_length() is stored as a system-native integer, e.g. 64-bit, rather than recursively as a Python int. So the largest Python int is '2**sys.maxsize` (e.g. '2**(2**63-1)'). I may possibly have an off-by-one or off-by-power-of-two in there :-). > Hah. Is that a fundamental limit based on the underlying representation, or would it mean that bit_length would bomb with an exception if the number is larger than that? I'm not sure what's going on. I have a Py3 busily calculating 2**(2**65) and it's pegging a CPU core while progressively consuming memory, but it responds to Ctrl-C, which suggests that Python bytecode is still being executed. ChrisA From rosuav at gmail.com Sun Sep 30 10:48:48 2018 From: rosuav at gmail.com (Chris Angelico) Date: Mon, 1 Oct 2018 00:48:48 +1000 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: References: <20180930010726.GG19437@ando.pearwood.info> <20180930125542.GL19437@ando.pearwood.info> <91cbff66-68bc-8453-559c-03ec27387ebf@mozilla.com> Message-ID: On Mon, Oct 1, 2018 at 12:45 AM Anders Hovm?ller wrote: > But making any change to the basic types truth table is a big -1 from me. This seems like a Python 2-3 transition to me. > Far FAR worse than anything that changed in Py2->Py3. ChrisA From steve at pearwood.info Sun Sep 30 10:53:02 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Mon, 1 Oct 2018 00:53:02 +1000 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: References: <20180930010726.GG19437@ando.pearwood.info> <20180930125542.GL19437@ando.pearwood.info> Message-ID: <20180930145302.GN19437@ando.pearwood.info> On Sun, Sep 30, 2018 at 09:55:58AM -0400, David Mertz wrote: > Notwithstanding my observation of one case where 'nan float' doesn't > stay a nan, I definitely want something like iNaN. Btw are there other > operations on NaN's do not produce NaN's? Yes. The (informal?) rule applied by IEEE-754 is that if a function takes multiple arguments, and the result is entirely determined by all the non-NAN inputs, then that value ought to be returned. For example: py> math.hypot(INF, NAN) inf py> 1**NAN 1.0 But generally, any operation (apart from comparisons) on a NAN is usually going to return a NAN. > I suspect a NaNAwareInt subclass is the easiest way to get there, but I'm > agnostic on that detail. 
I think that adding a NAN to int itself will be too controversial to be accepted :-) -- Steve From mertz at gnosis.cx Sun Sep 30 10:57:58 2018 From: mertz at gnosis.cx (David Mertz) Date: Sun, 30 Sep 2018 10:57:58 -0400 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: References: <20180930010726.GG19437@ando.pearwood.info> <20180930125542.GL19437@ando.pearwood.info> Message-ID: On Sun, Sep 30, 2018 at 10:49 AM Chris Angelico wrote: > > int.bit_length() is stored as a system-native integer, e.g. 64-bit, > rather than recursively as a Python int. So the largest Python int is > '2**sys.maxsize` (e.g. '2**(2**63-1)'). I may possibly have an off-by-one > or off-by-power-of-two in there :-). > > Hah. Is that a fundamental limit based on the underlying > representation, or would it mean that bit_length would bomb with an > exception if the number is larger than that? > It's implementation specific. In concept, a version of Python other than CPython 3.7 could store bit-length as either a Python int or a system-native int, to whatever recursive depth was needed to prevent overflows. Or perhaps as a linked list of native ints. Or something else. There's no sane reason to bother doing that, but there's never been a *promise* in Python semantics not to represent numbers with more than 1e19 bits in them. > I'm not sure what's going on. I have a Py3 busily calculating > 2**(2**65) and it's pegging a CPU core while progressively consuming > memory, but it responds to Ctrl-C, which suggests that Python bytecode > is still being executed. > I'm not quite sure, but my guess is that at SOME POINT you'll get an overflow exception when the current value gets too big to store as a native int. Or maybe it'll be a segfault; I don't know. I'm also not sure if you'll see this error before or after the heat death of the universe ;-). I *am* sure that your swap space on your puny few terabyte disk will fill up before you complete the calculation, so that might be a system level crash not a caught exception. -- Keeping medicines from the bloodstreams of the sick; food from the bellies of the hungry; books from the hands of the uneducated; technology from the underdeveloped; and putting advocates of freedom in prisons. Intellectual property is to the 21st century what the slave trade was to the 16th. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadgetsteve at live.co.uk Sun Sep 30 11:01:49 2018 From: gadgetsteve at live.co.uk (Steve Barnes) Date: Sun, 30 Sep 2018 15:01:49 +0000 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: References: <20180930010726.GG19437@ando.pearwood.info> <20180930125542.GL19437@ando.pearwood.info> Message-ID: On 30/09/2018 15:15, David Mertz wrote: > For similar?reasons, I'd like an iInf too, FWIW.? It's good for an > overflow value, although it's hard to get there in Python ints (would > 'NaNAwareInt(1)/0' be an exception or iInf?).? Bonus points for anyone > who knows the actual maximum size of Python ints :-). > > However, the main use I'd have for iInf is simply as a starting value in > a minimization loop.? E.g. > > minimum = NaNAwareInt('inf') > for i in the_data: > ? ? minimum = min(i, minimum) > > ? ? other_stuff(i, minimum, a, b, c) > > > I've written that code a fair number of times; usually I just pick a > placeholder value that is "absurdly large relative to my domain", but a > clean infinity would be slightly better.? E.g. 'minimum = 10**100'. 
> The official maximum for a Python integer is x where x.bit_length()/8 == total_available_memory, (notice the word available which includes addressing limitations, other memory constraints, etc.). Adding inf & -inf would be nice but to do so we would need a better name than NaNAwareInt. It would also be nice if Decimal(NaNAwareInt('nan')) = Decimal('NaN'), float(NaNAwareInt('nan')) = float('nan'), etc. I have been doing some reading up on Signalling vs. Quiet NaN and think that this convention could be well worth following, (and possibly storing some information about where the NaN was raised on first encountering a Signalling NaN (and converting it to Quiet). -- Steve (Gadget) Barnes Any opinions in this message are my personal opinions and do not reflect those of my employer. --- This email has been checked for viruses by AVG. https://www.avg.com From rosuav at gmail.com Sun Sep 30 11:03:11 2018 From: rosuav at gmail.com (Chris Angelico) Date: Mon, 1 Oct 2018 01:03:11 +1000 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: References: <20180930010726.GG19437@ando.pearwood.info> <20180930125542.GL19437@ando.pearwood.info> Message-ID: On Mon, Oct 1, 2018 at 12:58 AM David Mertz wrote: >> I'm not sure what's going on. I have a Py3 busily calculating >> 2**(2**65) and it's pegging a CPU core while progressively consuming >> memory, but it responds to Ctrl-C, which suggests that Python bytecode >> is still being executed. > > > I'm not quite sure, but my guess is that at SOME POINT you'll get an overflow exception when the current value gets too big to store as a native int. Or maybe it'll be a segfault; I don't know. > > I'm also not sure if you'll see this error before or after the heat death of the universe ;-). > > I *am* sure that your swap space on your puny few terabyte disk will fill up before you complete the calculation, so that might be a system level crash not a caught exception. Hahahaha. I was trying to compare to this: >>> "a" * (2**63 - 1) Traceback (most recent call last): File "", line 1, in MemoryError Bam, instant. (Interestingly, trying to use 2**63 says "OverflowError: cannot fit 'int' into an index-sized integer", suggesting that "index-sized integer" is signed, even though a size can and should be unsigned.) Were there some kind of hard limit, it would be entirely possible to exceed that and get an instant error, without actually calculating all the way up there. But it looks like that doesn't happen. In any case, the colloquial definition that I usually cite ("Python can store infinitely big integers" or "integers can be as big as you have RAM to store") is within epsilon of correct :D Thanks for the info. Cool to know! ChrisA From mertz at gnosis.cx Sun Sep 30 11:10:23 2018 From: mertz at gnosis.cx (David Mertz) Date: Sun, 30 Sep 2018 11:10:23 -0400 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: References: <20180930010726.GG19437@ando.pearwood.info> <20180930125542.GL19437@ando.pearwood.info> Message-ID: On Sun, Sep 30, 2018 at 11:04 AM Chris Angelico wrote: > On Mon, Oct 1, 2018 at 12:58 AM David Mertz wrote: > >> I'm not sure what's going on. I have a Py3 busily calculating > >> 2**(2**65) and it's pegging a CPU core while progressively consuming > >> memory, but it responds to Ctrl-C, which suggests that Python bytecode > >> is still being executed. 
> > I'm not quite sure, but my guess is that at SOME POINT you'll get an > overflow exception when the current value gets too big to store as a native > int. Or maybe it'll be a segfault; I don't know. > >>> "a" * (2**63 - 1) > Traceback (most recent call last): > File "", line 1, in > MemoryError > > Bam, instant. (Interestingly, trying to use 2**63 says "OverflowError: > cannot fit 'int' into an index-sized integer", suggesting that > "index-sized integer" is signed, even though a size can and should be > unsigned.) Were there some kind of hard limit, it would be entirely > possible to exceed that and get an instant error, without actually > calculating all the way up there. But it looks like that doesn't > happen. > Sure, it wouldn't be THAT hard to do bounds checking in the Python implementation to make '2**(2**65))' an instance error rather than a wait-to-exhaust-swap one. But it's a corner case, and probably not worth the overhead for all the non-crazy uses of integer arithmetic. > In any case, the colloquial definition that I usually cite ("Python > can store infinitely big integers" or "integers can be as big as you > have RAM to store") is within epsilon of correct :D > I teach the same thing. For beginners or intermediate students, I just say "unbounded ints." Occasionally for advanced students I add the footnote. -- Keeping medicines from the bloodstreams of the sick; food from the bellies of the hungry; books from the hands of the uneducated; technology from the underdeveloped; and putting advocates of freedom in prisons. Intellectual property is to the 21st century what the slave trade was to the 16th. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mertz at gnosis.cx Sun Sep 30 11:13:19 2018 From: mertz at gnosis.cx (David Mertz) Date: Sun, 30 Sep 2018 11:13:19 -0400 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: References: <20180930010726.GG19437@ando.pearwood.info> <20180930125542.GL19437@ando.pearwood.info> Message-ID: On Sun, Sep 30, 2018 at 11:01 AM Steve Barnes wrote: > Adding inf & -inf would be nice but to do so we would need a better name > than NaNAwareInt. > My placeholder name is deliberately awkward. I think it gestures at the concept for discussion purposes though. > It would also be nice if Decimal(NaNAwareInt('nan')) = Decimal('NaN'), > float(NaNAwareInt('nan')) = float('nan'), etc. This seems like bad behavior given (per IEEE-754 spec): >>> float('nan') == float('nan') False >>> nan = float('nan') >>> nan == nan False -- Keeping medicines from the bloodstreams of the sick; food from the bellies of the hungry; books from the hands of the uneducated; technology from the underdeveloped; and putting advocates of freedom in prisons. Intellectual property is to the 21st century what the slave trade was to the 16th. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadgetsteve at live.co.uk Sun Sep 30 11:13:42 2018 From: gadgetsteve at live.co.uk (Steve Barnes) Date: Sun, 30 Sep 2018 15:13:42 +0000 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: References: <20180930010726.GG19437@ando.pearwood.info> <20180930125542.GL19437@ando.pearwood.info> <91cbff66-68bc-8453-559c-03ec27387ebf@mozilla.com> Message-ID: On 30/09/2018 15:48, Chris Angelico wrote: > On Mon, Oct 1, 2018 at 12:45 AM Anders Hovm?ller wrote: >> But making any change to the basic types truth table is a big -1 from me. This seems like a Python 2-3 transition to me. 
>> > > Far FAR worse than anything that changed in Py2->Py3. > > ChrisA > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > https://mail.python.org/mailman/listinfo/python-ideas > Code of Conduct: http://python.org/psf/codeofconduct/ > I can see that breaking a LOT of existing code so would not think that it would be a practical option, I am thinking of having a isnan &/or isvalue. If the behaviour of integers were changed so that all NaN producing operations that took non-NaN inputs were to produce a signalling NaN, i.e. also produce an Exception but a pass on that Exception (explicitly changed Signalling NaN to Quiet NaN) I don't see any non-NaN aware code being forced to change. -- Steve (Gadget) Barnes Any opinions in this message are my personal opinions and do not reflect those of my employer. --- This email has been checked for viruses by AVG. https://www.avg.com From gadgetsteve at live.co.uk Sun Sep 30 11:17:51 2018 From: gadgetsteve at live.co.uk (Steve Barnes) Date: Sun, 30 Sep 2018 15:17:51 +0000 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: References: <20180930010726.GG19437@ando.pearwood.info> <20180930125542.GL19437@ando.pearwood.info> Message-ID: On 30/09/2018 16:13, David Mertz wrote: > On Sun, Sep 30, 2018 at 11:01 AM Steve Barnes > wrote: > > Adding inf & -inf would be nice but to do so we would need a better > name > than NaNAwareInt. > > > My placeholder name is deliberately awkward.? I think it gestures at the > concept for discussion purposes though. > > It would also be nice if Decimal(NaNAwareInt('nan')) = Decimal('NaN'), > float(NaNAwareInt('nan')) = float('nan'), etc. > > > This seems like bad behavior given (per IEEE-754 spec): > > >>> float('nan') == float('nan') > False > >>> nan = float('nan') > >>> nan == nan > False > > -- > Keeping medicines from the bloodstreams of the sick; food > from the bellies of the hungry; books from the hands of the > uneducated; technology from the underdeveloped; and putting > advocates of freedom in prisons.? Intellectual property is > to the 21st century what the slave trade was to the 16th. David, Note that my statements above had a single = i.e. float(NaNAwareInt('nan')) produces float('nan'), etc., as does: In [42]: nan = decimal.Decimal('nan') In [43]: decimal.Decimal(nan) Out[43]: Decimal('NaN') In [44]: float(nan) Out[44]: nan and vice-versa. -- Steve (Gadget) Barnes Any opinions in this message are my personal opinions and do not reflect those of my employer. --- This email has been checked for viruses by AVG. https://www.avg.com From mertz at gnosis.cx Sun Sep 30 11:26:01 2018 From: mertz at gnosis.cx (David Mertz) Date: Sun, 30 Sep 2018 11:26:01 -0400 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: References: <20180930010726.GG19437@ando.pearwood.info> <20180930125542.GL19437@ando.pearwood.info> Message-ID: On Sun, Sep 30, 2018 at 11:17 AM Steve Barnes wrote: > Note that my statements above had a single = i.e. > float(NaNAwareInt('nan')) produces float('nan'), etc., as does: > > In [42]: nan = decimal.Decimal('nan') > In [43]: decimal.Decimal(nan) > Out[43]: Decimal('NaN') > In [44]: float(nan) > Out[44]: nan > I think this explanation is still a little confusing. I take it what you're getting at is that a "NaN" of any particular type (float, Decimal, complex, NanAwareInt) should be a perfectly good initializer to create a NaN of a different type using its constructor. 
I think that is sensible (not sure about complex). Currently we have: >>> complex(nan) (nan+0j) >>> float(complex('nan')) Traceback (most recent call last): File "", line 1, in float(complex('nan')) TypeError: can't convert complex to float >>> complex(float('nan')) (nan+0j) >>> float(complex('nan')) Traceback (most recent call last): File "", line 1, in float(complex('nan')) TypeError: can't convert complex to float >>> from decimal import Decimal >>> Decimal('nan') Decimal('NaN') >>> float(Decimal('nan')) nan >>> Decimal(float('nan')) Decimal('NaN') >>> complex(Decimal('nan')) (nan+0j) >>> Decimal(complex('nan')) Traceback (most recent call last): File "", line 1, in Decimal(complex('nan')) TypeError: conversion from complex to Decimal is not supported I don't think we can change the "cast-from-complex" behavior... even though I think it maybe should have been different from the start. -- Keeping medicines from the bloodstreams of the sick; food from the bellies of the hungry; books from the hands of the uneducated; technology from the underdeveloped; and putting advocates of freedom in prisons. Intellectual property is to the 21st century what the slave trade was to the 16th. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadgetsteve at live.co.uk Sun Sep 30 11:31:36 2018 From: gadgetsteve at live.co.uk (Steve Barnes) Date: Sun, 30 Sep 2018 15:31:36 +0000 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: References: <20180930010726.GG19437@ando.pearwood.info> <20180930125542.GL19437@ando.pearwood.info> Message-ID: On 30/09/2018 16:26, David Mertz wrote: > On Sun, Sep 30, 2018 at 11:17 AM Steve Barnes > wrote: > > Note that my statements above had a single = i.e. > float(NaNAwareInt('nan')) produces float('nan'), etc., as does: > > In [42]: nan = decimal.Decimal('nan') > In [43]: decimal.Decimal(nan) > Out[43]: Decimal('NaN') > In [44]: float(nan) > Out[44]: nan > > > I think this explanation is still a little confusing.? I take it what > you're getting at is that a "NaN" of any particular type (float, > Decimal, complex, NanAwareInt) should be a perfectly good initializer to > create a NaN of a different type using its constructor. > > I think that is sensible (not sure about complex).? Currently we have: > > >>> complex(nan) > (nan+0j) > >>> float(complex('nan')) > Traceback (most recent call last): > ? File "", line 1, in > ? ? float(complex('nan')) > TypeError: can't convert complex to float > > >>> complex(float('nan')) > (nan+0j) > >>> float(complex('nan')) > Traceback (most recent call last): > ? File "", line 1, in > ? ? float(complex('nan')) > TypeError: can't convert complex to float > > >>> from decimal import Decimal > >>> Decimal('nan') > Decimal('NaN') > >>> float(Decimal('nan')) > nan > >>> Decimal(float('nan')) > Decimal('NaN') > >>> complex(Decimal('nan')) > (nan+0j) > >>> Decimal(complex('nan')) > Traceback (most recent call last): > ? File "", line 1, in > ? ? Decimal(complex('nan')) > TypeError: conversion from complex to Decimal is not supported > > > I don't think we can change the "cast-from-complex" behavior... even > though I think it maybe should have been different from the start. > No complex can be converted to float without accessing either the real or imag component. 
In [51]: cn=complex(4, float('nan')) In [52]: cn Out[52]: (4+nanj) In [53]: cn.real Out[53]: 4.0 In [54]: cn.imag Out[54]: nan In [55]: float(cn.imag) Out[55]: nan -- Steve (Gadget) Barnes Any opinions in this message are my personal opinions and do not reflect those of my employer. --- This email has been checked for viruses by AVG. https://www.avg.com From steve at pearwood.info Sun Sep 30 11:31:32 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Mon, 1 Oct 2018 01:31:32 +1000 Subject: [Python-ideas] Why is design-by-contracts not widely adopted? In-Reply-To: References: <37e7961b-528d-309f-114f-7194b9051892@kynesim.co.uk> <20180929111925.GU19437@ando.pearwood.info> Message-ID: <20180930153132.GO19437@ando.pearwood.info> On Sun, Sep 30, 2018 at 10:29:50AM -0400, David Mertz wrote: > I think Steven's is backwards in its own way. > > - Contracts test the space of arguments *actually used during testing > period* (or during initial production if the performance hit is > acceptable). > - Unit tests test the space of arguments *thought of by the developers*. > > *A priori,* either one of those can cover cases not addressed by the > other. Fair point. But given that in general unit tests tend to only exercise a handful of values (have a look at the tests in the Python stdlib) I think it is fair to say that in practice unit tests typically do not have anywhere near the coverage of live data used during alpha and beta testing. > If unit tests use the hypothesis library or similar approaches, > unit tests might very well examine arguments unlikely to be encountered in > real-world (or test phase) use... Indeed. We can consider all of these things as complementary: - doctests give us confidence that the documentation hasn't rotted; - unit tests give us confidence that corner cases are tested; - contracts give us confidence that regular and common cases are tested; - regression tests give us confidence that bugs aren't re-introduced; - smoke tests give us confidence that the software at least will run; - static type checking allows us to drop type checks from our unit tests and contracts; but of course there can be overlap. And that's perfectly fine. [...] > - Half of the checks are very far away, in a separate file, assuming > > I even remembered or bothered to write the test. > > > > To me, this is the GREATEST VIRTUE of unit tests over DbC. It puts the > tests far away where they don't distract from reading and understanding the > function itself. I rarely want my checks proximate since I wear a very > different hat when thinking about checks than when writing functionality > (ideally, a lot of the time, I wear the unit test hat *before* I write the > implementation; TDD is usually good practice). I'm curious. When you write a function or method, do you include input checks? Here's an example from the Python stdlib (docstring removed for brevity): # bisect.py def insort_right(a, x, lo=0, hi=None): if lo < 0: raise ValueError('lo must be non-negative') if hi is None: hi = len(a) while lo < hi: mid = (lo+hi)//2 if x < a[mid]: hi = mid else: lo = mid+1 a.insert(lo, x) Do you consider that check for lo < 0 to be disruptive? How would you put that in a unit test? That check is effectively a pre-condition. 
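For comparison, a minimal sketch of the unit test being alluded to
(assuming pytest; the test names are illustrative and none of this is from
the original post):

    import pytest
    from bisect import insort_right

    def test_insort_right_rejects_negative_lo():
        # the precondition check lives far from the function, in a separate test file
        with pytest.raises(ValueError):
            insort_right([1, 2, 3], 4, -1)   # lo=-1

    def test_insort_right_inserts_in_order():
        a = [1, 3, 5]
        insort_right(a, 4)
        assert a == [1, 3, 4, 5]

The checks themselves are easy enough to write; the point of contention is
where they live and when they run.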
Putting aside the question of which exception should be raised (AssertionError or ValueError), we could re-write that as a contract: def insort_right(a, x, lo=0, hi=None): require: lo >= 0 # implementation follows, as above if hi is None: hi = len(a) while lo < hi: mid = (lo+hi)//2 if x < a[mid]: hi = mid else: lo = mid+1 a.insert(lo, x) Do you consider that precondition check for lo >= to be disruptive? More or less disruptive than when it was in the body of the function implementation? > > - The post-conditions aren't checked unless I run my test suite, and > > then they only check the canned input in the test suite. > > > > Yes, this is a great advantage of unit tests. No cost until you explicitly > run them. If you're worried about the cost of verifying your program does the right thing during testing and development, I think you're doing something wrong :-) If there are specific functions/classes where the tests are insanely expensive, that's one thing. I have some code that wants to verify that a number is prime as part of an informal post-condition check, but if it is a *big* prime that check is too costly so I skip it. But in general, if I'm testing or under active development, what do I care if the program takes 3 seconds to run instead of 2.5 seconds? Either way, its finished by the time I come back from making my coffee :-) But more seriously, fine, if a particular contract is too expensive to run, disable it or remove it and add some unit tests. And then your devs will complain that the unit tests are too slow, and stop running them, and that's why we can't have nice things... *wink* -- Steve From rosuav at gmail.com Sun Sep 30 11:34:44 2018 From: rosuav at gmail.com (Chris Angelico) Date: Mon, 1 Oct 2018 01:34:44 +1000 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: References: <20180930010726.GG19437@ando.pearwood.info> <20180930125542.GL19437@ando.pearwood.info> Message-ID: On Mon, Oct 1, 2018 at 1:32 AM Steve Barnes wrote: > > No complex can be converted to float without accessing either the real > or imag component. > Or taking its absolute value, which will return nan if either part is nan. ChrisA From mertz at gnosis.cx Sun Sep 30 11:36:38 2018 From: mertz at gnosis.cx (David Mertz) Date: Sun, 30 Sep 2018 11:36:38 -0400 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: References: <20180930010726.GG19437@ando.pearwood.info> <20180930125542.GL19437@ando.pearwood.info> Message-ID: On Sun, Sep 30, 2018 at 11:31 AM Steve Barnes wrote: > No complex can be converted to float without accessing either the real > or imag component. > Sure. Not in Python 3.7. But mathematically, it seems really straightforward to say that Complex numbers that lie on the Real line (i.e. imaginary part is zero) map in an obvious way to Real numbers. I haven't done an inventory, but I'd guess most?but not all?other PLs do the same thing Python does. -- Keeping medicines from the bloodstreams of the sick; food from the bellies of the hungry; books from the hands of the uneducated; technology from the underdeveloped; and putting advocates of freedom in prisons. Intellectual property is to the 21st century what the slave trade was to the 16th. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mertz at gnosis.cx Sun Sep 30 11:46:10 2018 From: mertz at gnosis.cx (David Mertz) Date: Sun, 30 Sep 2018 11:46:10 -0400 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: References: <20180930010726.GG19437@ando.pearwood.info> <20180930125542.GL19437@ando.pearwood.info> Message-ID: On Sun, Sep 30, 2018 at 11:35 AM Chris Angelico wrote: > On Mon, Oct 1, 2018 at 1:32 AM Steve Barnes > wrote: > > No complex can be converted to float without accessing either the real > > or imag component. > Or taking its absolute value, which will return nan if either part is nan. > Well, various other operations as well as abs(). Anything that reduces a complex to a float already... I guess you could argue that behind the scenest hese functions all access .real and/or .imag. >>> float(abs(1+1j)) 1.4142135623730951 >>> float(cmath.phase(1+1j)) 0.7853981633974483 >>> float(cmath.isfinite(1+1j)) 1.0 -- Keeping medicines from the bloodstreams of the sick; food from the bellies of the hungry; books from the hands of the uneducated; technology from the underdeveloped; and putting advocates of freedom in prisons. Intellectual property is to the 21st century what the slave trade was to the 16th. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadgetsteve at live.co.uk Sun Sep 30 12:21:08 2018 From: gadgetsteve at live.co.uk (Steve Barnes) Date: Sun, 30 Sep 2018 16:21:08 +0000 Subject: [Python-ideas] Suggestion: Extend integers to include iNaN In-Reply-To: References: <20180930010726.GG19437@ando.pearwood.info> <20180930125542.GL19437@ando.pearwood.info> Message-ID: On 30/09/2018 16:36, David Mertz wrote: > On Sun, Sep 30, 2018 at 11:31 AM Steve Barnes > wrote: > > No complex can be converted to float without accessing either the real > or imag component. > > > Sure. Not in Python 3.7.? But mathematically, it seems really > straightforward to say that Complex numbers that lie on the Real line > (i.e. imaginary part is zero) map in an obvious way to Real numbers. > > I haven't done an inventory, but I'd guess most?but not all?other PLs do > the same thing Python does. > Personally I agree that float(2.0+0j) should possibly be a valid value (2.0) but there is the complication, as always, of how near zero is zero. But that is a battle for another time. -- Steve (Gadget) Barnes Any opinions in this message are my personal opinions and do not reflect those of my employer. --- This email has been checked for viruses by AVG. https://www.avg.com From jamtlu at gmail.com Sun Sep 30 11:30:29 2018 From: jamtlu at gmail.com (James Lu) Date: Sun, 30 Sep 2018 11:30:29 -0400 Subject: [Python-ideas] "old" values in postconditions In-Reply-To: References: <13186373-6FB6-4C8B-A8D8-2C3E028CDC3D@gmail.com> <3C33B6FF-FC19-47D6-AD2A-FC0B17C50A8D@gmail.com> <0061278F-4243-42BD-945D-A93B4A0FC21D@gmail.com> Message-ID: <769938C2-9DE6-42B2-99E7-FED5325F8510@gmail.com> Hi Marko, Going back to your proposal on repeating lambda P as a convention. I do find @snapshot(some_identifier=P -> P.self(P.arg1), some_identifier2=P -> P.arg1 + P.arg2) acceptable. Should we keep some kind of document to keep track of all the different proposals? I?m thinking an editable document like HackMD where we can label all the different ideas to keep them straight in our head. 
From jamtlu at gmail.com Sun Sep 30 11:30:32 2018
From: jamtlu at gmail.com (James Lu)
Date: Sun, 30 Sep 2018 11:30:32 -0400
Subject: [Python-ideas] "old" values in postconditions
In-Reply-To:
References: <13186373-6FB6-4C8B-A8D8-2C3E028CDC3D@gmail.com>
 <3C33B6FF-FC19-47D6-AD2A-FC0B17C50A8D@gmail.com>
 <0061278F-4243-42BD-945D-A93B4A0FC21D@gmail.com>
Message-ID: <8A3D0CF3-80F3-4F5A-9ECC-C30D8896FCE1@gmail.com>

Hi Marko,

> If the documentation is clear, I'd expect the user to be able to distinguish the two. The first approach is shorter, and uses magic, but fails in some rare situations. The other method is more verbose, but always works.

I like this idea.

James Lu

> On Sep 29, 2018, at 1:36 AM, Marko Ristin-Kaufmann wrote:
>
> If the documentation is clear, I'd expect the user to be able to distinguish the two. The first approach is shorter, and uses magic, but fails in some rare situations. The other method is more verbose, but always works.

From jamtlu at gmail.com Sun Sep 30 11:30:33 2018
From: jamtlu at gmail.com (James Lu)
Date: Sun, 30 Sep 2018 11:30:33 -0400
Subject: [Python-ideas] "old" values in postconditions
In-Reply-To:
References: <13186373-6FB6-4C8B-A8D8-2C3E028CDC3D@gmail.com>
 <3C33B6FF-FC19-47D6-AD2A-FC0B17C50A8D@gmail.com>
 <0061278F-4243-42BD-945D-A93B4A0FC21D@gmail.com>
Message-ID: <42A7E6C2-CF3D-4DDC-A855-74C973768BF7@gmail.com>

Hi Marko,

Regarding the "transpile into Python" syntax with with statements:

Can I see an example of this syntax when used in pathlib? I'm a bit worried this syntax is too long and "in the way", unlike decorators which are before the function body.

Or do you mean that both MockP and your syntax should be supported?

Would

with requiring:
    assert arg1 < arg2, "message"

be the code you type or the code that's actually run?

James Lu

> On Sep 29, 2018, at 2:56 PM, Marko Ristin-Kaufmann wrote:
>
> Just

From jamtlu at gmail.com Sun Sep 30 11:30:27 2018
From: jamtlu at gmail.com (James Lu)
Date: Sun, 30 Sep 2018 11:30:27 -0400
Subject: [Python-ideas] "old" values in postconditions
In-Reply-To:
References: <13186373-6FB6-4C8B-A8D8-2C3E028CDC3D@gmail.com>
 <3C33B6FF-FC19-47D6-AD2A-FC0B17C50A8D@gmail.com>
 <0061278F-4243-42BD-945D-A93B4A0FC21D@gmail.com>
Message-ID: <559132CB-DD2B-4422-853C-B97EA7FCF2D2@gmail.com>

Hi Marko,

I just found the time to reply to these.

> I reread the proposal with MockP. I still don't get the details, but if I think I understand the basic idea. You put a placeholder and whenever one of its methods is called (including dunders), you record it and finally assemble an AST and compile a lambda function to be executed at actual call later.

Precisely.

> But that would still fail if you want to have:
> @snapshot(var1=some_func(MockP.arg1, MockP.arg2))
> , right? Or there is a way to record that?

This would still fail. You would record it like this:

@snapshot(var1=thunk(some_func)(MockP.arg1, MockP.arg2))

thunk stores the function for later and produces another MockP object that listens for __call__.

By the way, MockP is the class while P is a virgin instance of MockP. MockP instances are immutable, so any operation on a MockP instance creates a new object or MockP instance.

I'm also beginning to lean towards

@snapshot(var1=...)
@snapshot(var2=...)

I suspect this would deal better with VCS. This syntax does have a nice visual alignment. I'm not entirely sure what kind of indentation PEP 8 recommends and editors give, so the point may be moot if the natural indentation also gives the same visual alignment.
Though both should be supported so the best syntax may win.

James Lu

> On Sep 29, 2018, at 3:22 PM, Marko Ristin-Kaufmann wrote:
>
> I reread the proposal with MockP. I still don't get the details, but if I think I understand the basic idea. You put a placeholder and whenever one of its methods is called (including dunders), you record it and finally assemble an AST and compile a lambda function to be executed at actual call later.
>
> But that would still fail if you want to have:
> @snapshot(var1=some_func(MockP.arg1, MockP.arg2))
> , right? Or there is a way to record that?

From marko.ristin at gmail.com Sun Sep 30 16:32:23 2018
From: marko.ristin at gmail.com (Marko Ristin-Kaufmann)
Date: Sun, 30 Sep 2018 22:32:23 +0200
Subject: [Python-ideas] "old" values in postconditions
In-Reply-To: <769938C2-9DE6-42B2-99E7-FED5325F8510@gmail.com>
References: <13186373-6FB6-4C8B-A8D8-2C3E028CDC3D@gmail.com>
 <3C33B6FF-FC19-47D6-AD2A-FC0B17C50A8D@gmail.com>
 <0061278F-4243-42BD-945D-A93B4A0FC21D@gmail.com>
 <769938C2-9DE6-42B2-99E7-FED5325F8510@gmail.com>
Message-ID:

Hi James,
(I'm just about to go to sleep, so I'll answer the other messages tomorrow.)

Should we keep some kind of document to keep track of all the different
> proposals? I'm thinking an editable document like HackMD where we can label
> all the different ideas to keep them straight in our head.
>

I thought github issues would be a suitable place for that:
https://github.com/Parquery/icontract/issues

It reads a bit easier as a discussion rather than a single document -- if anybody else needs to follow. What do you think?

On Sun, 30 Sep 2018 at 22:07, James Lu wrote:

> Hi Marko,
>
> Going back to your proposal on repeating lambda P as a convention.
>
> I do find
>
> @snapshot(some_identifier=P -> P.self(P.arg1),
> some_identifier2=P -> P.arg1 + P.arg2)
>
> acceptable.
>
> Should we keep some kind of document to keep track of all the different
> proposals? I'm thinking an editable document like HackMD where we can label
> all the different ideas to keep them straight in our head.
>
> James Lu

From mital.vaja at googlemail.com Sun Sep 30 16:39:51 2018
From: mital.vaja at googlemail.com (Mital Ashok)
Date: Sun, 30 Sep 2018 21:39:51 +0100
Subject: [Python-ideas] Suggestion: Extend integers to include iNaN
Message-ID:

As others have said, I do not think directly changing int is a good idea. It would break so much existing code if an unexpected iNaN appeared.

I also think having a nan-aware subclass should be the other way around. If you are expecting a nan-aware int, a regular int would work fine. If you are expecting an int, a nan-aware int might not work. It seems more like int is a subclass of nan-aware int.

Another idea is having it as a completely different class. int can be made a virtual subclass of it. This can be implemented in pure Python too.

However, I do not think there is a strong enough use case for this. The example given for an integer infinity could have it replaced with a float infinity. And how iNaN would define operations on it can be completely unexpected. Throwing an error would be more useful in those places, but if not possible, float('nan'), None or a custom object could be used (which one used would be documented). Because there would be too much for Python to "decide" for how iNaNs would work, it should be left up to user code.
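The "completely different class" variant mentioned above can indeed be sketched in pure Python. The following is only a rough illustration of that idea; NanAwareInt and iNaN are invented names, only a handful of operations are covered, and the behaviour chosen for each operation is exactly the kind of decision the message says would have to be left to user code.

from abc import ABC

class NanAwareInt(ABC):
    """Either a plain int or the iNaN marker."""

NanAwareInt.register(int)   # every ordinary int is accepted where a
                            # nan-aware int is expected

class _IntNaN(NanAwareInt):
    """Singleton 'not a number' marker for integer-like results."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

    def __repr__(self):
        return 'iNaN'

    # One possible choice: propagate through arithmetic, like float('nan').
    def __add__(self, other):
        return self
    __radd__ = __sub__ = __rsub__ = __mul__ = __rmul__ = __add__

    # Another choice copied from float('nan'): never equal to anything.
    def __eq__(self, other):
        return False

    def __hash__(self):
        return 0

iNaN = _IntNaN()

assert isinstance(5, NanAwareInt)   # int as a virtual subclass
assert (iNaN + 3) is iNaN           # arithmetic propagates the marker
assert not (iNaN == iNaN)           # NaN-style equality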
From eric at trueblade.com Sun Sep 30 18:19:11 2018
From: eric at trueblade.com (Eric V. Smith)
Date: Sun, 30 Sep 2018 18:19:11 -0400
Subject: [Python-ideas] Simplicity of C (was why is design-by-contracts not widely)
In-Reply-To: <23472.48495.587491.729759@turnbull.sk.tsukuba.ac.jp>
References: <20180930015413.GH19437@ando.pearwood.info>
 <20180930042635.GI19437@ando.pearwood.info>
 <20180930080249.GJ19437@ando.pearwood.info>
 <23472.48495.587491.729759@turnbull.sk.tsukuba.ac.jp>
Message-ID: <99d136ef-4492-968a-e7e2-10576ac9ffa0@trueblade.com>

On 9/30/2018 8:11 AM, Stephen J. Turnbull wrote:
> Steven D'Aprano writes:
> > (7) You can't unit test loop invariants
>
> I don't see how a loop invariant can be elegantly specified without
> mixing it in to the implementation. Can you show an example of code
> written in a language with support for loop invariants *not* mixed
> into the implementation?

I'd be very interested in this, too. Any pointers?

Eric

From oscar.j.benjamin at gmail.com Sun Sep 30 18:52:40 2018
From: oscar.j.benjamin at gmail.com (Oscar Benjamin)
Date: Sun, 30 Sep 2018 23:52:40 +0100
Subject: [Python-ideas] Suggestion: Extend integers to include iNaN
In-Reply-To: <20180930010053.GF19437@ando.pearwood.info>
References: <20180930010053.GF19437@ando.pearwood.info>
Message-ID:

On Sun, 30 Sep 2018 at 02:01, Steven D'Aprano wrote:
>
> On Sat, Sep 29, 2018 at 09:43:42PM +0100, Oscar Benjamin wrote:
> > On Sat, 29 Sep 2018 at 19:38, Steve Barnes wrote:
> >
> > > I converted to int because I needed a whole number, this was intended to
> > > represent some more complex process where a value is converted to a
> > > whole number down in the depths of the processing.
> >
> > Your requirement to have a whole number cannot meaningfully be
> > satisfied if your input is nan so an exception is the most useful
> > result.
>
> Not to Steve it isn't.
>
> Be careful about making value judgements like that: Steve is asking for
> an integer NAN because for *him* an integer NAN is more useful than an
> exception. You shouldn't tell him that he is wrong, unless you know his
> use-case and his code, which you don't.

Then he can catch the exception and do something else. If I called int(x) because my subsequent code "needed a whole number" then I would definitely not want to end up with a nan. The proposal requested is that int(x) could return something other than a well defined integer. That would break a lot of code!

In what way is iNaN superior to a plain nan? In C this sort of thing makes sense but in Python there's no reason you can't just use float('nan'). (This was raised by Serhiy earlier in the thread, resulting in Steve saying that he wants int(float('nan')) to return iNaN which then results in the quoted context above).

I don't mean to make a judgment about Steve's use-cases: I have read the messages in this thread and I haven't yet seen a use-case for this proposal.

-- 
Oscar
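For reference, the status quo this exchange argues about looks like the following today (checked against CPython 3.x); to_whole is only an illustrative helper for the "catch the exception and do something else" option, not something proposed in the thread.

>>> int(float('nan'))
Traceback (most recent call last):
  ...
ValueError: cannot convert float NaN to integer
>>> def to_whole(x, default=None):
...     try:
...         return int(x)
...     except (ValueError, OverflowError):  # NaN and infinities
...         return default
...
>>> to_whole(3.7)
3
>>> to_whole(float('nan')) is None
True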
From rosuav at gmail.com Sun Sep 30 19:00:00 2018
From: rosuav at gmail.com (Chris Angelico)
Date: Mon, 1 Oct 2018 09:00:00 +1000
Subject: [Python-ideas] Suggestion: Extend integers to include iNaN
In-Reply-To:
References: <20180930010053.GF19437@ando.pearwood.info>
Message-ID:

On Mon, Oct 1, 2018 at 8:53 AM Oscar Benjamin wrote:
>
> On Sun, 30 Sep 2018 at 02:01, Steven D'Aprano wrote:
> >
> > On Sat, Sep 29, 2018 at 09:43:42PM +0100, Oscar Benjamin wrote:
> > > On Sat, 29 Sep 2018 at 19:38, Steve Barnes wrote:
> > >
> > > > I converted to int because I needed a whole number, this was intended to
> > > > represent some more complex process where a value is converted to a
> > > > whole number down in the depths of the processing.
> > >
> > > Your requirement to have a whole number cannot meaningfully be
> > > satisfied if your input is nan so an exception is the most useful
> > > result.
> >
> > Not to Steve it isn't.
> >
> > Be careful about making value judgements like that: Steve is asking for
> > an integer NAN because for *him* an integer NAN is more useful than an
> > exception. You shouldn't tell him that he is wrong, unless you know his
> > use-case and his code, which you don't.
>
> Then he can catch the exception and do something else. If I called
> int(x) because my subsequent code "needed a whole number" then I would
> definitely not want to end up with a nan. The proposal requested is
> that int(x) could return something other than a well defined integer.
> That would break a lot of code!

At no point was the behaviour of int(x) ever proposed to be changed. Don't overreact here. The recommended use-case was for a library to return iNaN instead of None when it is unable to return an actual value.

ChrisA

From jamtlu at gmail.com Sun Sep 30 21:52:36 2018
From: jamtlu at gmail.com (James Lu)
Date: Sun, 30 Sep 2018 21:52:36 -0400
Subject: [Python-ideas] Upgrade to Mailman 3
Message-ID:

It has a nice GUI for people who spectate a discussion to read emails without having to subscribe to the list.

http://docs.mailman3.org/en/latest/migration.html

From jamtlu at gmail.com Sun Sep 30 22:04:29 2018
From: jamtlu at gmail.com (James Lu)
Date: Sun, 30 Sep 2018 22:04:29 -0400
Subject: [Python-ideas] "old" values in postconditions
In-Reply-To:
References: <13186373-6FB6-4C8B-A8D8-2C3E028CDC3D@gmail.com>
 <3C33B6FF-FC19-47D6-AD2A-FC0B17C50A8D@gmail.com>
 <0061278F-4243-42BD-945D-A93B4A0FC21D@gmail.com>
 <769938C2-9DE6-42B2-99E7-FED5325F8510@gmail.com>
Message-ID:

Hi Marko,

Regarding switching over to GitHub issues:

* I copy-pasted the MockP original code to GitHub issues.
* There's a clunky way to view the discussion at https://mail.python.org/pipermail/python-ideas/2018-September/subject.html#start .
* The less clunky way to view the discussion is to subscribe to the mailing list and use Gmail to move all the messages from python-ideas to a python-ideas label and all the messages from the discussions we have to a "contracts" label and view the discussion with your email client.
* A week earlier I didn't think I'd be saying this, but I like email for discussion better. It works on mobile and I can send messages offline, and I send 90% of my messages on my phone and when I'm offline. Unless you know an alternative (WhatsApp, maybe?) that fits my use cases off the top of your head, I think we should stick to email.
* My proposal: We split the discussion into a new email thread, we keep the latest agreed upon proposal on GitHub issues.

On Sun, Sep 30, 2018 at 4:32 PM Marko Ristin-Kaufmann < marko.ristin at gmail.com> wrote:

> Hi James,
> (I'm just about to go to sleep, so I'll answer the other messages
> tomorrow.)
>
> Should we keep some kind of document to keep track of all the different
>> proposals? I'm thinking an editable document like HackMD where we can label
>> all the different ideas to keep them straight in our head.
>>
>
> I thought github issues would be a suitable place for that:
> https://github.com/Parquery/icontract/issues
>
> It reads a bit easier as a discussion rather than a single document -- if
> anybody else needs to follow. What do you think?
>
> On Sun, 30 Sep 2018 at 22:07, James Lu wrote:
>
>> Hi Marko,
>>
>> Going back to your proposal on repeating lambda P as a convention.
>>
>> I do find
>>
>> @snapshot(some_identifier=P -> P.self(P.arg1),
>> some_identifier2=P -> P.arg1 + P.arg2)
>>
>> acceptable.
>>
>> Should we keep some kind of document to keep track of all the different
>> proposals? I'm thinking an editable document like HackMD where we can label
>> all the different ideas to keep them straight in our head.
>>
>> James Lu
>