From mwm at mired.org Tue Mar 1 00:19:20 2011 From: mwm at mired.org (Mike Meyer) Date: Mon, 28 Feb 2011 18:19:20 -0500 Subject: [Python-ideas] New pattern-matching library (was: str.split with multiple individual split characters) In-Reply-To: References: <4D6AE8D4.5080709@insectnation.org> <4D6AEE04.4030103@mrabarnett.plus.com> <1D473466-7593-450E-A6AD-039246BE014A@gmail.com> <4D6B83C9.2080503@pearwood.info> <20110228110406.3ae7fec5@bhuda.mired.org> Message-ID: <20110228181920.1fde25a0@bhuda.mired.org> On Tue, 1 Mar 2011 08:18:43 +1000 Nick Coghlan wrote: > On Tue, Mar 1, 2011 at 3:15 AM, Guido van Rossum wrote: > > On the third hand, I could see this as an area where a pure > > library-based approach will always be doomed, and where a proposal to > > add new syntax would actually make sense. Of course that still has the > > same problems due to release time and policy. > I suspect one of the core issues isn't so much that regex syntax is > arcane, ugly and hard to remember (although those don't help), but the > fact that fully general string pattern matching is inherently hard to > remember due to the wide range of options. There's a reason glob-style > matching is limited to a couple of simple wildcard characters. I disagree. Fully general string pattern matching has a few fundamental operations: sequence, alternation, and repetition. Modern regexp libraries have lots of features that provide shorthands for special cases of those. The "options" tend to either be things that can be duplicated by proper use of the three fundamental features, or for changing the handling of newlines and string ends. Even things like greedy vs. non-greedy can be handled by defining those fundamental operations properly (e.g. - define {m,n} as trying the matches from m to n, rather than just matching from m to n, so {n,m} and {m,n} would be the same match with different greediness). 
In other words, the problem isn't that fully general string pattern matching is hard, it's that our regular expression language started from an academic tool of formal language and automata theory, and has grown features ad hoc since then. Worse yet, there are multiple implementations with slightly different behaviors, and some have multiple modes that also change the syntax. > As far as code-based alternatives to regexes go, the one I see come up > most often as a suggested, working, alternative is pyparsing (although > I've never tried it myself). For example: > http://stackoverflow.com/questions/3673388/python-replacing-regex-with-bnf-or-pyparsing I played with an early version of the snobol library now in PyPI, and it worked well for what I tried. However, I don't think these will be generally successful, because they aren't more powerful than regexps, just more readable. That winds up hurting them, because writing a book about using them is overkill, but the existence of such a book for regexps favors regexps. One of the more interesting features of pattern matching is backtracking. I.e., if a match fails, you start working backwards through the pattern until you find an element that has untried alternatives, go to the next alternative, and then start working forward again. Icon lifts that capability into the language proper, allowing for some interesting possibilities. I think the best alternative to replacing the regexp library would be new syntax to provide that facility, then building string matching on top of it. http://www.mired.org/consulting.html Independent Software developer/SCM consultant, email for more information. 
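[The backtracking behavior Mike describes falls out naturally if the three fundamental operations are written as generator-based combinators. A toy sketch for illustration only; the names here are invented and are not the API of the snobol library he mentions:]

```python
# Toy pattern combinators: each pattern is a function taking the subject
# string and a start index, and yielding every index at which a match
# could end. Backtracking is just resuming the generators: when a later
# element fails, the earlier ones are asked for their next alternative.

def lit(t):
    def match(s, i):
        if s.startswith(t, i):
            yield i + len(t)
    return match

def seq(*pats):  # sequence
    def match(s, i):
        def go(j, remaining):
            if not remaining:
                yield j
                return
            for k in remaining[0](s, j):
                yield from go(k, remaining[1:])
        yield from go(i, pats)
    return match

def alt(*pats):  # alternation: try each pattern in order
    def match(s, i):
        for p in pats:
            yield from p(s, i)
    return match

def rep(pat):  # repetition, zero or more, longest alternatives first
    def match(s, i):
        for j in pat(s, i):
            if j > i:  # guard against looping on empty matches
                yield from match(s, j)
        yield i
    return match

def matches(pat, s):
    return any(end == len(s) for end in pat(s, 0))
```

Because `rep` yields longer matches before shorter ones, greediness is just the order in which alternatives are tried, echoing Mike's point about `{m,n}` versus `{n,m}`.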
O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From greg.ewing at canterbury.ac.nz Tue Mar 1 01:07:36 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Tue, 01 Mar 2011 13:07:36 +1300 Subject: [Python-ideas] class ModuleNotFoundError(ImportError) In-Reply-To: References: Message-ID: <4D6C38C8.1080302@canterbury.ac.nz> Nick Coghlan wrote: > Perhaps it is worth revisiting the old "import x or y or z as > whatever" syntax proposal for 3.3 +1, as the suggested idiom is getting rather long-winded. Also it gets worse when there are more than two alternatives, since you end up with another nesting level for each fallback. > (although deciding what, if anything, to do for "from" style > imports is a hassle) I don't think it would be too bad: from x or y or z import foo, spam, eggs This would first try to find one of the listed modules, and having found it, import it and attempt to bind the specified names. Failures during the binding phase should probably *not* trigger a fallback to the next module, to avoid ending up with a situation where some of the names are imported from one module and some from another. Since the internals of the modules are probably incompatible with each other, that would be a bad thing. -- Greg > > Cheers, > Nick. > From greg.ewing at canterbury.ac.nz Tue Mar 1 01:18:23 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Tue, 01 Mar 2011 13:18:23 +1300 Subject: [Python-ideas] class ModuleNotFoundError(ImportError) In-Reply-To: References: Message-ID: <4D6C3B4F.4020205@canterbury.ac.nz> cool-RR wrote: > I think modules sometimes raise `ImportError` because of problematic > circular imports. It might be more logical if the case where the module is found but a requested name is not present in it raised AttributeError or NameError instead of ImportError. I don't think I've ever had a situation where conflating them both into ImportError was helpful. 
ImportError itself would then have the meaning of the proposed ModuleNotFoundError. -- Greg From ncoghlan at gmail.com Tue Mar 1 10:50:44 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 1 Mar 2011 19:50:44 +1000 Subject: [Python-ideas] New pattern-matching library (was: str.split with multiple individual split characters) In-Reply-To: <20110228181920.1fde25a0@bhuda.mired.org> References: <4D6AE8D4.5080709@insectnation.org> <4D6AEE04.4030103@mrabarnett.plus.com> <1D473466-7593-450E-A6AD-039246BE014A@gmail.com> <4D6B83C9.2080503@pearwood.info> <20110228110406.3ae7fec5@bhuda.mired.org> <20110228181920.1fde25a0@bhuda.mired.org> Message-ID: On Tue, Mar 1, 2011 at 9:19 AM, Mike Meyer wrote: > I disagree. Fully general string pattern matching has a few > fundamental operations: sequence, alternation, and repetition. I agree that the fundamental operations are simple in principle. However, I still believe that the elaboration of those operations into fully general pattern matching is a complex combinatorial operation that is difficult to master. Regexes certainly make it harder than it needs to be, but anything with similar expressive power is still going to be tricky to completely wrap your head around. Cheers, Nick. P.S. I'm guessing this is the Icon-based library you mentioned in the original message: http://www.wilmott.ca/python/patternmatching.html Certainly an interesting read. -- Nick Coghlan | ncoghlan at gmail.com | 
Brisbane, Australia From ncoghlan at gmail.com Tue Mar 1 13:23:06 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 1 Mar 2011 22:23:06 +1000 Subject: [Python-ideas] class ModuleNotFoundError(ImportError) In-Reply-To: <4D6C38C8.1080302@canterbury.ac.nz> References: <4D6C38C8.1080302@canterbury.ac.nz> Message-ID: On Tue, Mar 1, 2011 at 10:07 AM, Greg Ewing wrote: > Nick Coghlan wrote: > >> Perhaps it is worth revisiting the old "import x or y or z as >> whatever" syntax proposal for 3.3 > > +1, as the suggested idiom is getting rather long-winded. Also > it gets worse when there are more than two alternatives, since > you end up with another nesting level for each fallback. > >> (although deciding what, if anything, to do for "from" style >> imports is a hassle) > > I don't think it would be too bad: > > from x or y or z import foo, spam, eggs > > This would first try to find one of the listed modules, and > having found it, import it and attempt to bind the specified > names. True, I guess it is really only the module naming that differs in cases like ElementTree, which would be handled just fine by the simple approach: import lxml.etree or element.ElementTree or xml.etree.ElementTree as etree from lxml.etree or element.ElementTree or xml.etree.ElementTree import Element If the internal APIs of the resolved modules differ to the point where the latter doesn't work then the longhand form remains available. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | 
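[For reference, the "longhand form" being abbreviated here is the familiar nested try/except chain. Module names follow Nick's ElementTree example, with `elementtree` standing in for the old standalone package name:]

```python
# What "import lxml.etree or elementtree.ElementTree or
# xml.etree.ElementTree as etree" would abbreviate today:
try:
    import lxml.etree as etree
except ImportError:
    try:
        import elementtree.ElementTree as etree  # old standalone package
    except ImportError:
        import xml.etree.ElementTree as etree  # stdlib fallback

# Whichever module was found first is bound to the same name.
root = etree.fromstring("<doc><item/></doc>")
```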
Brisbane, Australia From mwm at mired.org Tue Mar 1 18:05:11 2011 From: mwm at mired.org (Mike Meyer) Date: Tue, 1 Mar 2011 12:05:11 -0500 Subject: [Python-ideas] New pattern-matching library (was: str.split with multiple individual split characters) In-Reply-To: References: <4D6AE8D4.5080709@insectnation.org> <4D6AEE04.4030103@mrabarnett.plus.com> <1D473466-7593-450E-A6AD-039246BE014A@gmail.com> <4D6B83C9.2080503@pearwood.info> <20110228110406.3ae7fec5@bhuda.mired.org> <20110228181920.1fde25a0@bhuda.mired.org> Message-ID: <20110301120511.1a90f4bb@bhuda.mired.org> On Tue, 1 Mar 2011 19:50:44 +1000 Nick Coghlan wrote: > On Tue, Mar 1, 2011 at 9:19 AM, Mike Meyer wrote: > > I disagree. Fully general string pattern matching has a few > > fundamental operations: sequence, alternation, and repetition. > > I agree that the fundamental operations are simple in principle. > > However, I still believe that the elaboration of those operations into > fully general pattern matching is a complex combinatorial operation > that is difficult to master. regex's certainly make it harder than it > needs to be, but anything with similar expressive power is still going > to be tricky to completely wrap your head around. True. But I think that the problem - if properly expressed - is like the game of Go: a few simple rules that combine to produce a complex system that is difficult to master. With regexp notation, what we've got is more like 3d chess: multiple complex (just slightly different) sets of operations that do more to obscure the underlying simple rules than to help master the system. http://www.mired.org/consulting.html Independent Software developer/SCM consultant, email for more information. 
O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From guido at python.org Tue Mar 1 19:30:45 2011 From: guido at python.org (Guido van Rossum) Date: Tue, 1 Mar 2011 10:30:45 -0800 Subject: [Python-ideas] New pattern-matching library (was: str.split with multiple individual split characters) In-Reply-To: <20110301120511.1a90f4bb@bhuda.mired.org> References: <4D6AE8D4.5080709@insectnation.org> <4D6AEE04.4030103@mrabarnett.plus.com> <1D473466-7593-450E-A6AD-039246BE014A@gmail.com> <4D6B83C9.2080503@pearwood.info> <20110228110406.3ae7fec5@bhuda.mired.org> <20110228181920.1fde25a0@bhuda.mired.org> <20110301120511.1a90f4bb@bhuda.mired.org> Message-ID: On Tue, Mar 1, 2011 at 9:05 AM, Mike Meyer wrote: > On Tue, 1 Mar 2011 19:50:44 +1000 > Nick Coghlan wrote: > >> On Tue, Mar 1, 2011 at 9:19 AM, Mike Meyer wrote: >> > I disagree. Fully general string pattern matching has a few >> > fundamental operations: sequence, alternation, and repetition. >> >> I agree that the fundamental operations are simple in principle. >> >> However, I still believe that the elaboration of those operations into >> fully general pattern matching is a complex combinatorial operation >> that is difficult to master. regex's certainly make it harder than it >> needs to be, but anything with similar expressive power is still going >> to be tricky to completely wrap your head around. > > True. But I think that the problem - if properly expressed - is like > the game of Go: a few simple rules that combine to produce a complex > system that is difficult to master. With regexp notation, what we've > got is more like 3d chess: multiple complex (just slightly different) > sets of operations that do more to obscure the underlying simple rules > than to help master the system. I'm not sure those are the right analogies (though they may not be all that wrong either). 
If you ask me there are two problems with regexps: (a) The notation is cryptic and error-prone, its use of \ conflicts with Python strings (using r'...' helps but is yet another gotcha), and the parser is primitive. Until your brain has learned to parse regexps, it will have a hard time understanding examples, which are often the key to solving programming problems. Somehow the regexp syntax is not "natural" for the text parsers we have in our brain -- contrast this with Python's syntax, which was explicitly designed to go with the flow. Perhaps another problem is with composability -- if you know how to solve two simple problems using regexps, that doesn't mean your solutions can be combined to solve a combination of those problems. (b) There often isn't all that great of a match between the high-level goals of the user (e.g. "extract a list of email addresses from a file") and the available primitive operations. It's like writing an operating system for a Turing machine -- we have mathematical proof that it's possible, but that doesn't make it easy. The additional operations provided by modern, Perl-derived (which includes Python's re module) regexp notation are meant to help, but they just extend the basic premises of regexp notation, rather than providing a new, higher-level abstraction layer that is better matched to the way the typical user thinks about the problem. All in all I think it would be a good use of somebody's time to try and come up with something better. But it won't be easy. 
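[Guido's example goal in (b), "extract a list of email addresses from a file", shows the gap concretely: even a deliberately simplified, nowhere-near-RFC-compliant matcher takes character-class gymnastics, though keeping the pieces named rather than pasting one opaque blob helps with the composability problem in (a). The names `atom` and `domain` below are ad-hoc choices for illustration:]

```python
import re

# A simplified email matcher built from named, independently readable
# pieces instead of one opaque pattern string.
atom = r"[A-Za-z0-9._%+-]+"                               # local part
domain = r"[A-Za-z0-9-]+(?:\.[A-Za-z0-9-]+)*\.[A-Za-z]{2,}"  # dotted host
email = re.compile(atom + "@" + domain)

text = "Mail guido@example.org or webmaster@mail.python.org for details."
found = email.findall(text)
```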
-- --Guido van Rossum (python.org/~guido) From debatem1 at gmail.com Tue Mar 1 20:53:26 2011 From: debatem1 at gmail.com (geremy condra) Date: Tue, 1 Mar 2011 11:53:26 -0800 Subject: [Python-ideas] New pattern-matching library (was: str.split with multiple individual split characters) In-Reply-To: References: <4D6AE8D4.5080709@insectnation.org> <4D6AEE04.4030103@mrabarnett.plus.com> <1D473466-7593-450E-A6AD-039246BE014A@gmail.com> <4D6B83C9.2080503@pearwood.info> <20110228110406.3ae7fec5@bhuda.mired.org> <20110228181920.1fde25a0@bhuda.mired.org> <20110301120511.1a90f4bb@bhuda.mired.org> Message-ID: On Tue, Mar 1, 2011 at 10:30 AM, Guido van Rossum wrote: > On Tue, Mar 1, 2011 at 9:05 AM, Mike Meyer wrote: >> On Tue, 1 Mar 2011 19:50:44 +1000 >> Nick Coghlan wrote: >> >>> On Tue, Mar 1, 2011 at 9:19 AM, Mike Meyer wrote: >>> > I disagree. Fully general string pattern matching has a few >>> > fundamental operations: sequence, alternation, and repetition. >>> >>> I agree that the fundamental operations are simple in principle. >>> >>> However, I still believe that the elaboration of those operations into >>> fully general pattern matching is a complex combinatorial operation >>> that is difficult to master. regex's certainly make it harder than it >>> needs to be, but anything with similar expressive power is still going >>> to be tricky to completely wrap your head around. >> >> True. But I think that the problem - if properly expressed - is like >> the game of Go: a few simple rules that combine to produce a complex >> system that is difficult to master. With regexp notation, what we've >> got is more like 3d chess: multiple complex (just slightly different) >> sets of operations that do more to obscure the underlying simple rules >> than to help master the system. > > I'm not sure those are the right analogies (though they may not be all > that wrong either). 
If you ask me there are two problems with regexps: > > (a) The notation is cryptic and error-prone, its use of \ conflicts > with Python strings (using r'...' helps but is yet another gotcha), > and the parser is primitive. Until your brain has learned to parse > regexps, it will have a hard time understanding examples, which are > often the key to solving programming problems. Somehow the regexp > syntax is not "natural" for the text parsers we have in our brain -- > contrast this with Python's syntax, which was explicitly designed to > go with the flow. Perhaps another problem is with composability -- if > you know how to solve two simple problems using regexps, that doesn't > mean your solutions can be combined to solve a combination of those > problems. > > (b) There often isn't all that great of a match between the high-level > goals of the user (e.g. "extract a list of email addresses from a > file") and the available primitive operations. It's like writing an > operating system for a Turing machine -- we have mathematical proof > that it's possible, but that doesn't make it easy. The additional > operations provided by modern, Perl-derived (which includes Python's > re module) regexp notation are meant to help, but they just extend the > basic premises of regexp notation, rather than providing a new, > higher-level abstraction layer that is better matched to the way the > typical user thinks about the problem. > > All in all I think it would be a good use of somebody's time to try > and come up with something better. But it won't be easy. > > -- > --Guido van Rossum (python.org/~guido) It's unfortunate that there isn't a good way to do this kind of long-range work within the auspices of Python. I can imagine a number of projects like this that fail to attract interest due to low perceived chances of success and a dearth of community feedback. 
Geremy Condra From taleinat at gmail.com Tue Mar 1 21:25:56 2011 From: taleinat at gmail.com (Tal Einat) Date: Tue, 1 Mar 2011 22:25:56 +0200 Subject: [Python-ideas] New pattern-matching library (was: str.split with multiple individual split characters) In-Reply-To: References: <4D6AE8D4.5080709@insectnation.org> <4D6AEE04.4030103@mrabarnett.plus.com> <1D473466-7593-450E-A6AD-039246BE014A@gmail.com> <4D6B83C9.2080503@pearwood.info> <20110228110406.3ae7fec5@bhuda.mired.org> <20110228181920.1fde25a0@bhuda.mired.org> <20110301120511.1a90f4bb@bhuda.mired.org> Message-ID: On Tue, Mar 1, 2011 at 9:53 PM, geremy condra wrote: > It's unfortunate that there isn't a good way to do this kind of > long-range work within the auspices of Python. I can imagine a number > of projects like this that fail to attract interest due to low > perceived chances of success and a dearth of community feedback. > Once a good library had a solid foundation, it could plug itself into some widely used Python programs and gain publicity and support from there, before pushing for inclusion in the stdlib. A good example is Django's URL mapping, which currently uses regexps. I think it would be possible to get Django to support an alternate pattern matching method, in addition to regexps, since this would make learning Django easier for developers who don't grok regexps. - Tal Einat 
From guido at python.org Tue Mar 1 22:23:19 2011 From: guido at python.org (Guido van Rossum) Date: Tue, 1 Mar 2011 13:23:19 -0800 Subject: [Python-ideas] New pattern-matching library (was: str.split with multiple individual split characters) In-Reply-To: References: <4D6AE8D4.5080709@insectnation.org> <4D6AEE04.4030103@mrabarnett.plus.com> <1D473466-7593-450E-A6AD-039246BE014A@gmail.com> <4D6B83C9.2080503@pearwood.info> <20110228110406.3ae7fec5@bhuda.mired.org> <20110228181920.1fde25a0@bhuda.mired.org> <20110301120511.1a90f4bb@bhuda.mired.org> Message-ID: On Tue, Mar 1, 2011 at 12:25 PM, Tal Einat wrote: > On Tue, Mar 1, 2011 at 9:53 PM, geremy condra wrote: >> >> It's unfortunate that there isn't a good way to do this kind of >> long-range work within the auspices of Python. I can imagine a number >> of projects like this that fail to attract interest due to low >> perceived chances of success and a dearth of community feedback. > > Once a good library had a solid foundation, it could plug itself into some > widely used Python programs and gain publicity and support from there, > before pushing for inclusion in the stdlib. > > A good example is Django's URL mapping, which currently uses regexps. I > think it would be possible to get Django to support an alternate pattern > matching method, in addition to regexps, since this would make learning > Django easier for developers who don't grok regexps. Ah, but geremy is complaining about work that cannot be done as a library, e.g. syntax changes. This is because I suggested a better approach to matching would probably require syntax changes. I don't have an answer -- it may be easier to create a whole new language and experiment with matching syntax than it is to get a PEP approved for a matching syntax extension to Python... That's just how it goes for mature languages. Try getting new syntax added to C++, Java or JavaScript... 
:-) -- --Guido van Rossum (python.org/~guido) From greg.ewing at canterbury.ac.nz Tue Mar 1 23:31:29 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Wed, 02 Mar 2011 11:31:29 +1300 Subject: [Python-ideas] New pattern-matching library In-Reply-To: References: <4D6AE8D4.5080709@insectnation.org> <4D6AEE04.4030103@mrabarnett.plus.com> <1D473466-7593-450E-A6AD-039246BE014A@gmail.com> <4D6B83C9.2080503@pearwood.info> <20110228110406.3ae7fec5@bhuda.mired.org> Message-ID: <4D6D73C1.4020802@canterbury.ac.nz> Guido van Rossum wrote: > It's been tried before without much success. I think it may have been > a decade ago that Ka-Ping Yee created a pattern matching library that > used function calls ... It didn't get much use. That may largely be due to marketing issues. A potential user would have to know that Ka-Ping's module existed, or be sufficiently dissatisfied with the status quo to go looking for something like it. Probably it has never even occurred to many people familiar with REs from other contexts that there might be another way. Whereas if there were a set of constructor functions available right at hand in the re module, prominently featured in the examples and reference docs, I suspect they would be used quite a lot. I know that *I* would use them all the time, whereas I've never been motivated enough to pull in another module to get this functionality. Perhaps the best way to think of this is not as a complete replacement for traditional RE syntax, but as a set of convenience functions for building up REs out of smaller REs. It's not entirely straightforward to do that correctly, taking into account escaping, operator precedence, etc., so having some functions available for it makes a lot of sense. They would make it much easier to write readable code involving complicated REs. Since we're a community of people who believe that "readability counts", there shouldn't be any argument that this is a desirable goal. 
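[A minimal sketch of what such constructor functions might look like; the names here are invented for illustration, and note how `re.escape` and non-capturing groups take care of the escaping and precedence issues Greg mentions:]

```python
import re

# Hypothetical helpers for building REs out of smaller REs.
# Wrapping every operand in a non-capturing group before applying an
# operator makes precedence explicit, and re.escape handles escaping
# in exactly one place.

def grp(p):
    return "(?:%s)" % p

def literal(text):
    return re.escape(text)          # no manual backslash bookkeeping

def either(*pats):                  # alternation
    return grp("|".join(grp(p) for p in pats))

def chain(*pats):                   # sequence
    return "".join(grp(p) for p in pats)

def repeated(p, minimum=1):         # repetition
    return grp(p) + ("+" if minimum else "*")

# "digits, then a '.' or ',' decimal separator, then digits"
number = chain(repeated(r"\d"), either(literal("."), literal(",")),
               repeated(r"\d"))
```

Because every operand is parenthesized before an operator is applied, pieces compose without the author having to remember RE precedence rules.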
> On the third hand, I could see this as an area where a pure > library-based approach will always be doomed, and where a proposal to > add new syntax would actually make sense. I don't think new syntax is necessary -- functions are quite adequate for the task. But they need to be available right at your fingertips when you're working with REs. Having to seek out and obtain a third party library is too high a barrier to entry. -- Greg From debatem1 at gmail.com Tue Mar 1 23:50:45 2011 From: debatem1 at gmail.com (geremy condra) Date: Tue, 1 Mar 2011 14:50:45 -0800 Subject: [Python-ideas] New pattern-matching library (was: str.split with multiple individual split characters) In-Reply-To: References: <4D6AE8D4.5080709@insectnation.org> <4D6AEE04.4030103@mrabarnett.plus.com> <1D473466-7593-450E-A6AD-039246BE014A@gmail.com> <4D6B83C9.2080503@pearwood.info> <20110228110406.3ae7fec5@bhuda.mired.org> <20110228181920.1fde25a0@bhuda.mired.org> <20110301120511.1a90f4bb@bhuda.mired.org> Message-ID: On Tue, Mar 1, 2011 at 1:23 PM, Guido van Rossum wrote: > On Tue, Mar 1, 2011 at 12:25 PM, Tal Einat wrote: >> On Tue, Mar 1, 2011 at 9:53 PM, geremy condra wrote: >>> >>> It's unfortunate that there isn't a good way to do this kind of >>> long-range work within the auspices of Python. I can imagine a number >>> of projects like this that fail to attract interest due to low >>> perceived chances of success and a dearth of community feedback. >> >> Once a good library had a solid foundation, it could plug itself into some >> widely used Python programs and gain publicity and support from there, >> before pushing for inclusion in the stdlib. >> >> A good example is Django's URL mapping, which currently uses regexps. I >> think it would be possible to get Django to support an alternate pattern >> matching method, in addition to regexps, since this would make learning >> Django easier for developers who don't grok regexps. 
> > Ah, but geremy is complaining about work that cannot be done as a > library, e.g. syntax changes. This is because I suggested a better > approach to matching would probably require syntax changes. I don't > have an answer -- it may be easier to create a whole new language and > experiment with matching syntax than it is to get a PEP approved for a > matching syntax extension to Python... That's just how it goes for > mature languages. Try getting new syntax added to C++, Java or > JavaScript... :-) Erm... this actually isn't what I was talking about at all. I was basically just saying that I think it would be good if Python had better tools to bring attention to issues that might be considered for inclusion if a better way could be found. Geremy Condra From mal at egenix.com Wed Mar 2 00:21:38 2011 From: mal at egenix.com (M.-A. Lemburg) Date: Wed, 02 Mar 2011 00:21:38 +0100 Subject: [Python-ideas] New pattern-matching library In-Reply-To: <4D6D73C1.4020802@canterbury.ac.nz> References: <4D6AE8D4.5080709@insectnation.org> <4D6AEE04.4030103@mrabarnett.plus.com> <1D473466-7593-450E-A6AD-039246BE014A@gmail.com> <4D6B83C9.2080503@pearwood.info> <20110228110406.3ae7fec5@bhuda.mired.org> <4D6D73C1.4020802@canterbury.ac.nz> Message-ID: <4D6D7F82.1040802@egenix.com> Greg Ewing wrote: > Guido van Rossum wrote: > >> It's been tried before without much success. I think it may have been >> a decade ago that Ka-Ping Yee created a pattern matching library that >> used function calls ... It didn't get much use. > > That may largely be due to marketing issues. A potential > user would have to know that Ka-Ping's module existed, or > be sufficiently dissatisfied with the status quo to go > looking for something like it. Probably it has never even > occurred to many people familiar with REs from other contexts > that there might be another way. 
If someone wants to experiment with these things, I suggest you use mxTextTools' tagging engine as basis: http://www.egenix.com/products/python/mxBase/mxTextTools/ It provides a really fast matching machine which can be programmed from Python using simple tuples. There are already a few libraries that use it as basis for e.g. grammar-based parsing. It's flexible enough for many different kinds of parsing approaches, can parse a lot more than what you can do with REs and would also allow creating toy-language implementations that implement parsing in ways different than REs. We've used it to parse HTML (including broken HTML), XML, custom macro languages similar to the Excel VBA macros, RTF, various templating languages, etc. The BioPython project uses it to parse genome data. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Mar 02 2011) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try our new mxODBC.Connect Python Database Interface for free ! :::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. 
Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ From guido at python.org Wed Mar 2 00:22:14 2011 From: guido at python.org (Guido van Rossum) Date: Tue, 1 Mar 2011 15:22:14 -0800 Subject: [Python-ideas] New pattern-matching library (was: str.split with multiple individual split characters) In-Reply-To: References: <4D6AE8D4.5080709@insectnation.org> <4D6AEE04.4030103@mrabarnett.plus.com> <1D473466-7593-450E-A6AD-039246BE014A@gmail.com> <4D6B83C9.2080503@pearwood.info> <20110228110406.3ae7fec5@bhuda.mired.org> <20110228181920.1fde25a0@bhuda.mired.org> <20110301120511.1a90f4bb@bhuda.mired.org> Message-ID: On Tue, Mar 1, 2011 at 2:50 PM, geremy condra wrote: > On Tue, Mar 1, 2011 at 1:23 PM, Guido van Rossum wrote: >> On Tue, Mar 1, 2011 at 12:25 PM, Tal Einat wrote: >>> On Tue, Mar 1, 2011 at 9:53 PM, geremy condra wrote: >>>> >>>> It's unfortunate that there isn't a good way to do this kind of >>>> long-range work within the auspices of Python. I can imagine a number >>>> of projects like this that fail to attract interest due to low >>>> perceived chances of success and a dearth of community feedback. >>> >>> Once a good library had a solid foundation, it could plug itself into some >>> widely used Python programs and gain publicity and support from there, >>> before pushing for inclusion in the stdlib. >>> >>> A good example is Django's URL mapping, which currently uses regexps. I >>> think it would be possible to get Django to support an alternate pattern >>> matching method, in addition to regexps, since this would make learning >>> Django easier for developers who don't grok regexps. >> >> Ah, but geremy is complaining about work that cannot be done as a >> library, e.g. syntax changes. This is because I suggested a better >> approach to matching would probably require syntax changes. 
I don't >> have an answer -- it may be easier to create a whole new language and >> experiment with matching syntax than it is to get a PEP approved for a >> matching syntax extension to Python... That's just how it goes for >> mature languages. Try getting new syntax added to C++, Java or >> JavaScript... :-) > > Erm... this actually isn't what I was talking about at all. I was > basically just saying that I think it would be good if Python had > better tools to bring attention to issues that might be considered for > inclusion if a better way could be found. Ok, sorry. But that sounds so general as to be devoid of meaning. Can you clarify your wish with a few examples? -- --Guido van Rossum (python.org/~guido) From debatem1 at gmail.com Wed Mar 2 01:23:33 2011 From: debatem1 at gmail.com (geremy condra) Date: Tue, 1 Mar 2011 16:23:33 -0800 Subject: [Python-ideas] New pattern-matching library (was: str.split with multiple individual split characters) In-Reply-To: References: <4D6AE8D4.5080709@insectnation.org> <4D6AEE04.4030103@mrabarnett.plus.com> <1D473466-7593-450E-A6AD-039246BE014A@gmail.com> <4D6B83C9.2080503@pearwood.info> <20110228110406.3ae7fec5@bhuda.mired.org> <20110228181920.1fde25a0@bhuda.mired.org> <20110301120511.1a90f4bb@bhuda.mired.org> Message-ID: On Tue, Mar 1, 2011 at 3:22 PM, Guido van Rossum wrote: > On Tue, Mar 1, 2011 at 2:50 PM, geremy condra wrote: >> On Tue, Mar 1, 2011 at 1:23 PM, Guido van Rossum wrote: >>> On Tue, Mar 1, 2011 at 12:25 PM, Tal Einat wrote: >>>> On Tue, Mar 1, 2011 at 9:53 PM, geremy condra wrote: >>>>> >>>>> It's unfortunate that there isn't a good way to do this kind of >>>>> long-range work within the auspices of Python. I can imagine a number >>>>> of projects like this that fail to attract interest due to low >>>>> perceived chances of success and a dearth of community feedback. 
>>>> >>>> Once a good library had a solid foundation, it could plug itself into some >>>> widely used Python programs and gain publicity and support from there, >>>> before pushing for inclusion in the stdlib. >>>> >>>> A good example is Django's URL mapping, which currently uses regexps. I >>>> think it would be possible to get Django to support an alternate pattern >>>> matching method, in addition to regexps, since this would make learning >>>> Django easier for developers who don't grok regexps. >>> >>> Ah, but geremy is complaining about work that cannot be done as a >>> library, e.g. syntax changes. This is because I suggested a better >>> approach to matching would probably require syntax changes. I don't >>> have an answer -- it may be easier to create a whole new language and >>> experiment with matching syntax than it is to get a PEP approved for a >>> matching syntax extension to Python... That's just how it goes for >>> mature languages. Try getting new syntax added to C++, Java or >>> JavaScript... :-) >> >> Erm... this actually isn't what I was talking about at all. I was >> basically just saying that I think it would be good if Python had >> better tools to bring attention to issues that might be considered for >> inclusion if a better way could be found. > > Ok, sorry. But that sounds so general as to be devoid of meaning. Can > you clarify your wish with a few examples? Well, you've noticed yourself how many times the same ideas and questions show up on python-ideas, and how often people think they're the first ones to come up with it. You've also noted that there are more productive problems that people interested in contributing could solve. ISTM that there may be an opportunity to kill two birds with one stone in that. Specifically, I'd suggest starting by putting together a wishlist and a do-not-want-list from some of the core devs and putting it in a prominent place on python.org. 
That should be fairly easy, and if it doesn't seem to be getting the amount of traffic that it would need to succeed there are a number of good ways to tie it in to other venues- adding tickets to the bug tracker, putting it in a newsletter, having this list spit back an email mentioning it whenever someone starts a new thread, mentioning it on slashdot, etc. It might also be a good way to take advantage of the sprints board, by specifically asking groups that have done successful sprints in the past to look at these ideas and see if they can come up with good ways to solve them. None of that requires a huge outlay of cash or resources.

If this were successful, it might be a good idea to look at providing some in-Python support for those working on the wishlist items. With the hg transition already underway it seems like this should be fairly easy- just create an hg repo for the project in question and link it to a page on PyPI. Depending on the size of the project, amount of interest, timescale, and stage of maturity, development discussion could take place either on the wiki, here, stdlib-sig, in their own google group, etc. Again, nothing requiring substantial outlay or time. The only investment required would be the effort of marketing the list as a whole.

From there, it would just be a question of what direction to take. I can envision a lot of projects like this or Raymond Hettinger's idea for a stats module eventually seeing inclusion, but there are also a lot of possible tools where maintaining a relationship similar to the Apache Foundation and its projects might be for the best.

I suspect it goes without saying, but I'd be happy to help out with this, and especially with PyCon coming up it's a good time to put many eyes on problems like these.
Geremy Condra From guido at python.org Wed Mar 2 01:47:06 2011 From: guido at python.org (Guido van Rossum) Date: Tue, 1 Mar 2011 16:47:06 -0800 Subject: [Python-ideas] New pattern-matching library (was: str.split with multiple individual split characters) In-Reply-To: References: <4D6AE8D4.5080709@insectnation.org> <4D6AEE04.4030103@mrabarnett.plus.com> <1D473466-7593-450E-A6AD-039246BE014A@gmail.com> <4D6B83C9.2080503@pearwood.info> <20110228110406.3ae7fec5@bhuda.mired.org> <20110228181920.1fde25a0@bhuda.mired.org> <20110301120511.1a90f4bb@bhuda.mired.org> Message-ID: On Tue, Mar 1, 2011 at 4:23 PM, geremy condra wrote: > Well, you've noticed yourself how many times the same ideas and > questions show up on python-ideas, and how often people think they're > the first ones to come up with it. You've also noted that there are > more productive problems that people interested in contributing could > solve. ISTM that there may be an opportunity to kill two birds with > one stone in that. > > Specifically, I'd suggest starting by putting together a wishlist and > a do-not-want-list from some of the core devs and putting it in a > prominent place on python.org. That should be fairly easy, and if it > doesn't seem to be getting the amount of traffic that it would need to > succeed there are a number of good ways to tie it in to other venues- > adding tickets to the bug tracker, putting it in a newsletter, having > this list spit back an email mentioning it whenever someone starts a > new thread, mentioning it on slashdot, etc. It might also be a good > way to take advantage of the sprints board, by specifically asking > groups that have done successful sprints in the past to look at these > ideas and see if they can come up with good ways to solve them. None > of that requires a huge outlay of cash or resources. > > If this was successful, it might be a good idea to look at providing > some in-Python support for those working on the wishlist items. 
With > the hg transition already underway it seems like this should be fairly > easy- just create an hg repo for the project in question and link it > to a page on PyPI. Depending on the size of the project, amount of > interest, timescale, and stage of maturity development discussion > could take place either on the wiki, here, stdlib-sig, in their own > google group, etc. Again, nothing requiring substantial outlay or > time. The only investment required would be the effort of marketing > the list as a whole. > > From there, it would just be a question of what direction to take. I > can envision a lot of projects like this or Raymond Hettinger's idea > for a stats module eventually seeing inclusion, but there are also a > lot of possible tools where maintaining a relationship similar to the > Apache Foundation and its projects might be for the best. > > I suspect it goes without saying, but I'd be happy to help out with > this, and especially with PyCon coming up its a good time to put many > eyes on problems like these. Okay, I get it now. I don't know how many core developers are actually following python-ideas. If you are serious about putting time into this yourself, maybe the best thing you could do would be to start a draft for such a document, put it in the Wiki (with some kind of "draft" or "tentative" disclaimer) and post it to python-dev (as well as here) to get the core devs' attention. 
-- --Guido van Rossum (python.org/~guido) From jnoller at gmail.com Wed Mar 2 03:47:53 2011 From: jnoller at gmail.com (Jesse Noller) Date: Tue, 1 Mar 2011 21:47:53 -0500 Subject: [Python-ideas] New pattern-matching library (was: str.split with multiple individual split characters) In-Reply-To: References: <4D6AE8D4.5080709@insectnation.org> <4D6AEE04.4030103@mrabarnett.plus.com> <1D473466-7593-450E-A6AD-039246BE014A@gmail.com> <4D6B83C9.2080503@pearwood.info> <20110228110406.3ae7fec5@bhuda.mired.org> <20110228181920.1fde25a0@bhuda.mired.org> <20110301120511.1a90f4bb@bhuda.mired.org> Message-ID: On Tue, Mar 1, 2011 at 7:47 PM, Guido van Rossum wrote: > On Tue, Mar 1, 2011 at 4:23 PM, geremy condra wrote: >> Well, you've noticed yourself how many times the same ideas and >> questions show up on python-ideas, and how often people think they're >> the first ones to come up with it. You've also noted that there are >> more productive problems that people interested in contributing could >> solve. ISTM that there may be an opportunity to kill two birds with >> one stone in that. >> >> Specifically, I'd suggest starting by putting together a wishlist and >> a do-not-want-list from some of the core devs and putting it in a >> prominent place on python.org. That should be fairly easy, and if it >> doesn't seem to be getting the amount of traffic that it would need to >> succeed there are a number of good ways to tie it in to other venues- >> adding tickets to the bug tracker, putting it in a newsletter, having >> this list spit back an email mentioning it whenever someone starts a >> new thread, mentioning it on slashdot, etc. It might also be a good >> way to take advantage of the sprints board, by specifically asking >> groups that have done successful sprints in the past to look at these >> ideas and see if they can come up with good ways to solve them. None >> of that requires a huge outlay of cash or resources. 
>> >> If this was successful, it might be a good idea to look at providing >> some in-Python support for those working on the wishlist items. With >> the hg transition already underway it seems like this should be fairly >> easy- just create an hg repo for the project in question and link it >> to a page on PyPI. Depending on the size of the project, amount of >> interest, timescale, and stage of maturity development discussion >> could take place either on the wiki, here, stdlib-sig, in their own >> google group, etc. Again, nothing requiring substantial outlay or >> time. The only investment required would be the effort of marketing >> the list as a whole. >> >> From there, it would just be a question of what direction to take. I >> can envision a lot of projects like this or Raymond Hettinger's idea >> for a stats module eventually seeing inclusion, but there are also a >> lot of possible tools where maintaining a relationship similar to the >> Apache Foundation and its projects might be for the best. >> >> I suspect it goes without saying, but I'd be happy to help out with >> this, and especially with PyCon coming up its a good time to put many >> eyes on problems like these. > > Okay, I get it now. I don't know how many core developers are actually > following python-ideas. If you are serious about putting time into > this yourself, maybe the best thing you could do would be to start a > draft for such a document, put it in the Wiki (with some kind of > "draft" or "tentative" disclaimer) and post it to python-dev (as well > as here) to get the core devs' attention. > It also might work as an appendix to the dev guide, though that's Brett's call From martin.chilvers at gmail.com Wed Mar 2 10:05:11 2011 From: martin.chilvers at gmail.com (Martin Chilvers) Date: Wed, 02 Mar 2011 09:05:11 +0000 Subject: [Python-ideas] The Descriptor Protocol... Message-ID: <4D6E0847.5060304@gmail.com> G'day! 
Please excuse me if I have missed something obvious, but I have a question about the implementation of the descriptor protocol, and more specifically about the arguments passed to the '__get__' and '__set__' methods.

According to Raymond Hettinger's "How-To Guide for Descriptors" at:-

http://users.rcn.com/python/download/Descriptor.htm#invoking-descriptors

The *pseudo* implementation of __getattribute__ is as follows:-

    def __getattribute__(self, key):
        "Emulate type_getattro() in Objects/typeobject.c"
        v = object.__getattribute__(self, key)
        if hasattr(v, '__get__'):
            return v.__get__(None, self)
        return v

As mentioned above, this is obviously only pseudo-Python, but it serves to illustrate my question, which is: why isn't the 'key' argument passed through to the '__get__' (and similarly, '__set__') methods?

It seems to me that:-

1) In terms of API design/information flow through '__getattribute__' it feels like we 'drop' the 'key' when we call a descriptor's '__get__' method. In other words, if '__getattribute__' gets to use the key when working out what to return, it also seems natural to give the descriptor the same information.

2) It makes the implementation of some descriptor-based tools much uglier. Take for example a type specification tool, we might have something like:-

    class Person(object):
        name    = Str()
        address = Str()

where 'Str' is a descriptor. It would be nice to know in the '__get__' and '__set__' methods of 'Str' which attribute is being accessed. Of course, I can get around this by either:-

a) using a metaclass to harvest the descriptors and set the attribute name. This is fine, but:-
   - it forces me to use a metaclass ;^)
   - it means that I can't share descriptors because they are bound to a particular attribute name, which has obvious scalability implications.
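For reference, the metaclass harvesting in (a) can be sketched roughly as follows — 'HarvestMeta' and the internals of 'Str' here are hypothetical, invented purely for illustration:

```python
# Hypothetical sketch of workaround (a): a metaclass "harvests" the
# descriptors from the class namespace and tells each one the attribute
# name it was assigned to, so the name never has to be repeated.
class Str:
    def __init__(self):
        self.key = None  # filled in by the metaclass below

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return obj.__dict__.get(self.key, '')

    def __set__(self, obj, value):
        obj.__dict__[self.key] = str(value)

class HarvestMeta(type):
    def __new__(mcls, clsname, bases, namespace):
        for key, value in namespace.items():
            if isinstance(value, Str):
                value.key = key  # bind the descriptor to its attribute name
        return super().__new__(mcls, clsname, bases, namespace)

class Person(metaclass=HarvestMeta):
    name = Str()
    address = Str()

p = Person()
p.name = 'Guido'
print(p.name)  # prints 'Guido'
```

The sketch also makes the sharing problem concrete: each 'Str' instance ends up bound to exactly one attribute name, so the same descriptor instance cannot serve two differently named attributes.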
b) make the developer duplicate the attribute name when constructing the descriptor:-

    class Person(object):
        name    = Str('name')
        address = Str('address')

which, well, just smells, and conflicts with step 3 of TDD ;^)

Again, apologies if I've missed the obvious - I've trawled the usual places and not found this mentioned anywhere...

Thanks folks!

Martin

From ncoghlan at gmail.com Wed Mar 2 12:20:32 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 2 Mar 2011 21:20:32 +1000
Subject: [Python-ideas] New pattern-matching library (was: str.split with multiple individual split characters)
In-Reply-To: References: <4D6AE8D4.5080709@insectnation.org> <4D6AEE04.4030103@mrabarnett.plus.com> <1D473466-7593-450E-A6AD-039246BE014A@gmail.com> <4D6B83C9.2080503@pearwood.info> <20110228110406.3ae7fec5@bhuda.mired.org> <20110228181920.1fde25a0@bhuda.mired.org> <20110301120511.1a90f4bb@bhuda.mired.org>
Message-ID:

On Wed, Mar 2, 2011 at 10:47 AM, Guido van Rossum wrote:
> Okay, I get it now. I don't know how many core developers are actually
> following python-ideas. If you are serious about putting time into
> this yourself, maybe the best thing you could do would be to start a
> draft for such a document, put it in the Wiki (with some kind of
> "draft" or "tentative" disclaimer) and post it to python-dev (as well
> as here) to get the core devs' attention.

One specific idea I was considering along these lines when I get back to my PEP 0 fiddling was to separate the big pile of Deferred/Rejected/Withdrawn/Finished PEPs a bit more. In particular, the Deferred PEPs are generally things where the idea being proposed is seen as having some merit, but there are fundamental issues with the proposal which prevent it from being accepted. We could separate those out and expand them to cover "wish list" PEPs which spec out something we would like to do, but don't really have any idea as to how yet.

Cheers,
Nick.

-- Nick Coghlan | ncoghlan at gmail.com |
Brisbane, Australia

From raymond.hettinger at gmail.com Wed Mar 2 17:10:15 2011
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Wed, 2 Mar 2011 08:10:15 -0800
Subject: [Python-ideas] The Descriptor Protocol...
In-Reply-To: <4D6E0847.5060304@gmail.com>
References: <4D6E0847.5060304@gmail.com>
Message-ID: <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com>

On Mar 2, 2011, at 1:05 AM, Martin Chilvers wrote:
> G'day!
>
> Please excuse me if I have missed something obvious, but I have a question about the implementation of the descriptor protocol, and more specifically about the arguments passed to the '__get__' and '__set__' methods.
>
> According to Raymond Hettinger's "How-To Guide for Descriptors" at:-
>
> http://users.rcn.com/python/download/Descriptor.htm#invoking-descriptors
>
> The *pseudo* implementation of __getattribute__ is as follows:-
>
>     def __getattribute__(self, key):
>         "Emulate type_getattro() in Objects/typeobject.c"
>         v = object.__getattribute__(self, key)
>         if hasattr(v, '__get__'):
>             return v.__get__(None, self)
>         return v
>
> As mentioned above, this is obviously only pseudo-Python, but it serves to illustrate my question which is, why isn't the 'key' argument passed through to the '__get__' (and similarly, '__set__') methods?

I've gotten this question a couple of times. The usual answers are:

* Why? Because Guido implemented it that way ;-)

* Why did he pick a signature that dropped the "key"? Most likely, it is because none of the use-cases he had in mind required it. The keyless API suffices to implement property(), super(), classmethod(), staticmethod(), bound methods, __slots__, various features of new-style classes, and a reimplementation of old-style classes.

> 2) It makes the implementation of some descriptor-based tools much uglier. Take for example a type specification tool, we might have something like:-
>
>     class Person(object):
>         name    = Str()
>         address = Str()
>
> where 'Str' is a descriptor.
> It would be nice to know in the '__get__' and '__set__' methods of 'Str' which attribute is being accessed. Of course, I can get around this by either:-
>
> a) using a metaclass to harvest the descriptors and set the attribute name. This is fine, but:-
>    - it forces me to use a metaclass ;^)
>    - it means that I can't share descriptors because they are bound to
>      a particular attribute name, which has obvious scalability
>      implications.
>
> b) make the developer duplicate the attribute name when constructing the descriptor:-
>
>     class Person(object):
>         name    = Str('name')
>         address = Str('address')
>
> which, well, just smells, and conflicts with step 3 of TDD ;^)

Yes, option b is the usual way to do it (there are a number of recipes that use this approach).

For the most part, that shouldn't be an unfamiliar pattern to Python developers. For example, named tuples repeat the name: Point = namedtuple('Point', ('x', 'y')).

Sometimes the language provides syntax to do the work for us, like "class A: ..." instead of "A = type('A', (), {})", but for other situations, it isn't unusual to pass in a name as a string literal so that an object will know its own name.

For Python 4, perhaps you can convince Guido to add a key argument to the signature for __get__ and __set__. It would make a handful of recipes a bit prettier at the expense of being a total waste for all the other use cases that don't need it.

I think everyone who writes a descriptor that needs the "key" will chafe at the current design. It bugged me at first, but the workaround is easy and after a while it doesn't seem as bothersome.

Raymond

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From martin.chilvers at gmail.com Wed Mar 2 18:09:22 2011
From: martin.chilvers at gmail.com (Martin Chilvers)
Date: Wed, 02 Mar 2011 17:09:22 +0000
Subject: [Python-ideas] The Descriptor Protocol...
In-Reply-To: <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com>
References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com>
Message-ID: <4D6E79C2.2050405@gmail.com>

Hi Raymond,

On 02/03/2011 16:10, Raymond Hettinger wrote:
> I've gotten this question a couple of times. The usual answers are:
>
> * Why? Because Guido implemented it that way ;-)
>
> * Why did he pick a signature that dropped the "key"? Most likely, it
> is because none of the use-cases he had in mind required it. The keyless
> API suffices to implement property(), super(), classmethod(),
> staticmethod(), bound methods, __slots__, various features of new-style
> classes, and a reimplementation of old-style classes.

Probably so, but it still smells in terms of the information flow through the various levels of the API :^)

> Yes, option b is the usual way to do it (there are a number of recipes
> that use this approach).
>
> For the most part, that shouldn't be an unfamiliar pattern to Python
> developers. For example, named tuples repeat the name:
> Point = namedtuple('Point', ('x', 'y')).
>
> Sometimes the language provides syntax to do the work for us, like
> "class A: ..." instead of "A = type('A', (), {})", but for other
> situations, it isn't unusual to pass in a name as a string literal so
> that an object will know its own name.
>
> For Python 4, perhaps you can convince Guido to add a key argument to
> the signature for __get__ and __set__. It would make a handful of
> recipes a bit prettier at the expense of being a total waste for all the
> other use cases that don't need it.

Just because some use cases don't use it doesn't mean it is a total waste ;^) The current design means that descriptors can't be (sensibly) shared across differently named attributes, which has major implications for scalability...

> I think everyone who writes a descriptor that needs the "key" will chafe
> at the current design. It bugged me at first, but the workaround is easy
> and after a while it doesn't seem as bothersome.

Yep - workarounds generally are easy, but if the workarounds appear often enough it's usually a symptom that the underlying code might be improved...

Martin

From guido at python.org Wed Mar 2 18:59:57 2011
From: guido at python.org (Guido van Rossum)
Date: Wed, 2 Mar 2011 09:59:57 -0800
Subject: [Python-ideas] The Descriptor Protocol...
In-Reply-To: <4D6E79C2.2050405@gmail.com>
References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6E79C2.2050405@gmail.com>
Message-ID:

On Wed, Mar 2, 2011 at 9:09 AM, Martin Chilvers wrote:
> Hi Raymond,
>
> On 02/03/2011 16:10, Raymond Hettinger wrote:
>>
>> I've gotten this question a couple of times. The usual answers are:
>>
>> * Why? Because Guido implemented it that way ;-)
>>
>> * Why did he pick a signature that dropped the "key"? Most likely, it
>> is because none of the use-cases he had in mind required it. The keyless
>> API suffices to implement property(), super(), classmethod(),
>> staticmethod(), bound methods, __slots__, various features of new-style
>> classes, and a reimplementation of old-style classes.
>
> Probably so, but it still smells in terms of the information flow through
> the various levels of the API :^)

It never occurred to me to think of the descriptor protocol the way you do, and it never even occurred to me that the key would ever be needed. You can invoke principles of information flow until you're blue in the face, but IMO it is totally reasonable to drop information that has been used already as a request passes through layers of abstraction.

FWIW, rhetorical questions like "why doesn't Python do X" are usually a poor way to start a discussion about a feature request.
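As an illustration of dropping information that has already been used: a minimal property()-style descriptor — a simplified sketch for this thread, not CPython's actual implementation — never needs the attribute name inside '__get__' or '__set__':

```python
# Simplified sketch of a property()-like descriptor. Note that neither
# __get__ nor __set__ ever sees the attribute name it is stored under;
# the key has already done its job inside __getattribute__.
class MyProperty:
    def __init__(self, fget=None, fset=None):
        self.fget = fget
        self.fset = fset

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self              # class access returns the descriptor
        if self.fget is None:
            raise AttributeError("unreadable attribute")
        return self.fget(obj)        # the instance alone is enough

    def __set__(self, obj, value):
        if self.fset is None:
            raise AttributeError("can't set attribute")
        self.fset(obj, value)

class Circle:
    def __init__(self, radius):
        self._radius = radius

    @MyProperty
    def radius(self):
        return self._radius

print(Circle(3).radius)  # prints 3
```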
-- --Guido van Rossum (python.org/~guido) From martin.chilvers at gmail.com Wed Mar 2 19:25:23 2011 From: martin.chilvers at gmail.com (Martin Chilvers) Date: Wed, 02 Mar 2011 18:25:23 +0000 Subject: [Python-ideas] The Descriptor Protocol... In-Reply-To: References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6E79C2.2050405@gmail.com> Message-ID: <4D6E8B93.4080309@gmail.com> G'day, On 02/03/2011 17:59, Guido van Rossum wrote: > It never occurred to me to think of the descriptor protocol the way > you do, and it never even occurred to me to that the key would ever be > needed. You can invoke principles of information flow until you're > blue in the face, but IMO it is totally reasonable to drop information > that has been used already as a request passes through layers of > abstraction. True, but in this case the 'droppage' ;^) is at the first layer of abstraction and the first hook point... > FWIW, rhetorical questions like "why doesn't Python do X" > are usually a poor way to start a discussion about a feature request. Which is why my original post started with:- "Please excuse me if I have missed something obvious, but I have a question about the implementation of the descriptor protocol, and more specifically about the arguments passed to the '__get__' and '__set__' methods." and finished with:- "Again, apologies if I've missed the obvious" Which, I don't think, is confrontational in any way, and simply asked what I firmly believe is a legitimate question about the API. Martin From guido at python.org Wed Mar 2 20:33:27 2011 From: guido at python.org (Guido van Rossum) Date: Wed, 2 Mar 2011 11:33:27 -0800 Subject: [Python-ideas] The Descriptor Protocol... 
In-Reply-To: <4D6E8B93.4080309@gmail.com> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6E79C2.2050405@gmail.com> <4D6E8B93.4080309@gmail.com> Message-ID: On Wed, Mar 2, 2011 at 10:25 AM, Martin Chilvers wrote: > G'day, > > On 02/03/2011 17:59, Guido van Rossum wrote: >> >> It never occurred to me to think of the descriptor protocol the way >> you do, and it never even occurred to me to that the key would ever be >> needed. You can invoke principles of information flow until you're >> blue in the face, but IMO it is totally reasonable to drop information >> that has been used already as a request passes through layers of >> abstraction. > > True, but in this case the 'droppage' ;^) is at the first layer of > abstraction and the first hook point... > >> FWIW, rhetorical questions like "why doesn't Python do X" >> are usually a poor way to start a discussion about a feature request. > > Which is why my original post started with:- > > "Please excuse me if I have missed something obvious, but I have a question > about the implementation of the descriptor protocol, and more specifically > about the arguments passed to the '__get__' and '__set__' methods." > > and finished with:- > > "Again, apologies if I've missed the obvious" > > Which, I don't think, is confrontational in any way, and simply asked what I > firmly believe is a legitimate question about the API. Which has been answered, and yet you don't appear happy with the answer. :-) -- --Guido van Rossum (python.org/~guido) From raymond.hettinger at gmail.com Wed Mar 2 20:35:00 2011 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Wed, 2 Mar 2011 11:35:00 -0800 Subject: [Python-ideas] The Descriptor Protocol... 
In-Reply-To: <4D6E79C2.2050405@gmail.com> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6E79C2.2050405@gmail.com> Message-ID: <4581EEFD-8DEB-464F-946C-201DDABB885F@gmail.com> On Mar 2, 2011, at 9:09 AM, Martin Chilvers wrote: > Hi Raymond, > > On 02/03/2011 16:10, Raymond Hettinger wrote: >> I've gotten this question a couple times. There usual answers are: >> >> * Why? Because Guido implemented it that way ;-) >> >> * Why did he pick a signature that dropped the "key"? Mostly likely, it >> is because none of the use-cases he had in mind required it. The keyless >> API suffices to implement property(), super(), classmethod(), >> staticmethod(), bound method, __slots__, various features of new-style >> classes, and a reimplementation old-style classes. > > Probably so, but it still smells in terms of the information flow through the various levels of the API :^) Maybe. You'll have to take it up with Guido. He invented it. I just documented it. A huge class of problems he was trying to solve required only a hammer and screwdriver, not the whole toolbox ;-) >> Yes, option b is the usual way to do it (there are a number of recipes >> that use this approach). >> >> For the most part, that shouldn't be an unfamiliar pattern to Python >> developers. For example, named tuples repeat the name: >> Point = namedtuple('Point', ('x', 'y')). >> >> Sometimes the language provides syntax to do the work for us, like >> "class A: ..." instead of "A = type('A', (), {})", but for other >> situations, it isn't unusual to pass in a name as a string literal so >> that an object will know its own name. >> >> For Python 4, perhaps you can convince Guido to add a key argument to >> the signature for __get__ and __set__. It would make a handful of >> recipes a bit prettier at the expense of being a total waste for all the >> other use cases that don't need it. 
>
> Just because some use cases don't use it doesn't mean it is a total waste ;^)

Okay, let's say "mostly wasted", meaning that all the common use cases (just about everything currently implemented with descriptors) would pay a price (an extra argument being passed around) in order to benefit rare use cases (none of the recipes needing a key have yet found their way into the standard library despite having been around since Py2.2).

> The current design means that descriptors can't be (sensibly) shared across differently named attributes, which has major implications for scalability...
>
>> I think everyone who writes a descriptor that needs the "key" will chafe
>> at the current design. It bugged me at first, but the workaround is easy
>> and after a while it doesn't seem as bothersome.
>
> Yep - workarounds generally are easy, but if the workarounds appear often enough it's usually a symptom that the underlying code might be improved...

"... might be improved" suggests that the API can be changed without breaking every piece of descriptor code currently in existence. Until Python 4.0, the point is moot. So, a better phrasing might be, "oh I wish the API had been originally designed differently".

Also, I don't agree with the antecedent, "if the workarounds appear often enough". I'm a heavy user of descriptors and have needed this only twice; both times, the workaround idiom sufficed for getting the job done. The implementation of CPython is itself a heavy user of descriptors and has not needed this functionality even once.

All that being said, if the key were passed along, I would find uses for it. So, I sympathize with your post.

Raymond

P.S. Python is very customizable. The descriptor protocol is implemented by __getattribute__, so it's not a hard exercise to write your own __getattribute__ to implement your own variant of the descriptor protocol.
Here's a quick and dirty example:

    class Str:
        'Descriptor using a custom protocol'
        def __my_get__(self, obj, key):
            print(key * 5)
            return obj

    class A:

        @property          # old protocol
        def x(self):
            return 10

        y = Str()          # new protocol

        def __getattribute__(self, key):
            'Implement an alternative descriptor protocol'
            attr = object.__getattribute__(self, key)
            if hasattr(attr, '__my_get__'):
                return attr.__my_get__(attr, key)
            return attr

    a = A()
    print(a.x)
    print(a.y)

From martin.chilvers at gmail.com Thu Mar 3 01:17:55 2011
From: martin.chilvers at gmail.com (Martin Chilvers)
Date: Thu, 03 Mar 2011 00:17:55 +0000
Subject: [Python-ideas] The Descriptor Protocol...
In-Reply-To: References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6E79C2.2050405@gmail.com> <4D6E8B93.4080309@gmail.com>
Message-ID: <4D6EDE33.3030606@gmail.com>

G'day,

On 02/03/2011 19:33, Guido van Rossum wrote:
> On Wed, Mar 2, 2011 at 10:25 AM, Martin Chilvers
> wrote:
>> G'day,
>>
>> On 02/03/2011 17:59, Guido van Rossum wrote:
>>>
>>> It never occurred to me to think of the descriptor protocol the way
>>> you do, and it never even occurred to me that the key would ever be
>>> needed. You can invoke principles of information flow until you're
>>> blue in the face, but IMO it is totally reasonable to drop information
>>> that has been used already as a request passes through layers of
>>> abstraction.
>>
>> True, but in this case the 'droppage' ;^) is at the first layer of
>> abstraction and the first hook point...
>>
>>> FWIW, rhetorical questions like "why doesn't Python do X"
>>> are usually a poor way to start a discussion about a feature request.
>>
>> Which is why my original post started with:-
>>
>> "Please excuse me if I have missed something obvious, but I have a question
>> about the implementation of the descriptor protocol, and more specifically
>> about the arguments passed to the '__get__' and '__set__' methods."
>>
>> and finished with:-
>>
>> "Again, apologies if I've missed the obvious"
>>
>> Which, I don't think, is confrontational in any way, and simply asked what I
>> firmly believe is a legitimate question about the API.
>
> Which has been answered, and yet you don't appear happy with the answer. :-)

The answer was just fine and I quite honestly only raised the issue in the spirit of the "continuous improvement" of the APIs.

Martin

From ncoghlan at gmail.com Thu Mar 3 12:36:02 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 3 Mar 2011 21:36:02 +1000
Subject: [Python-ideas] The Descriptor Protocol...
In-Reply-To: <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com>
References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com>
Message-ID:

On Thu, Mar 3, 2011 at 2:10 AM, Raymond Hettinger wrote:
> b) make the developer duplicate the attribute name when constructing the
> descriptor:-
>
> class Person(object):
>     name    = Str('name')
>     address = Str('address')
>
> which, well, just smells, and conflicts with step 3 of TDD ;^)
>
> Yes, option b is the usual way to do it (there are a number of recipes that
> use this approach).
> For the most part, that shouldn't be an unfamiliar pattern to Python
> developers. For example, named tuples repeat the name:
>     Point = namedtuple('Point', ('x', 'y')).
> Sometimes the language provides syntax to do the work for us, like "class A:
> ..." instead of "A = type('A', (), {})", but for other situations, it isn't
> unusual to pass in a name as a string literal so that an object will know
> its own name.
> For Python 4, perhaps you can convince Guido to add a key argument to the
> signature for __get__ and __set__. It would make a handful of recipes a bit
> prettier at the expense of being a total waste for all the other use cases
> that don't need it.
> I think everyone who writes a descriptor that needs the "key" will chafe at
> the current design.
> It bugged me at first, but the workaround is easy and
> after a while it doesn't seem as bothersome.

Rather than a descriptor-specific answer, something that covered the namedtuple use case as well would be nice. That is, something that made it feasible to reference the name on the left-hand side of a simple assignment without needing to repeat it as a string with the same contents.

    # Straw-man idea
    class Person:
        as name = Str(as)
        as address = Str(as)

    as Point = namedtuple(as, 'x y')

Basically just a variant on the ordinary assignment statement that makes the target name available for reference in the source expression. I'm dubious about the merits of that particular suggestion, but I think it is a much better problem to try to tackle than changing the descriptor protocol.

Cheers,
Nick.

-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From martin.chilvers at gmail.com Thu Mar 3 12:52:38 2011
From: martin.chilvers at gmail.com (Martin Chilvers)
Date: Thu, 03 Mar 2011 11:52:38 +0000
Subject: [Python-ideas] The Descriptor Protocol...
In-Reply-To: <4581EEFD-8DEB-464F-946C-201DDABB885F@gmail.com>
References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6E79C2.2050405@gmail.com> <4581EEFD-8DEB-464F-946C-201DDABB885F@gmail.com>
Message-ID: <4D6F8106.3030908@gmail.com>

Hi Raymond,

Thanks for your replies. We currently use the traditional __getattribute__ hook, and the original question came up as part of a debate about the various merits of descriptors vs __getattribute__.

Martin

On 02/03/2011 19:35, Raymond Hettinger wrote:
> P.S. Python is very customizable. The descriptor protocol is
> implemented by __getattribute__, so it's not a hard exercise to write
> your own __getattribute__ to implement your own variant of the
> descriptor protocol.
> Here's a quick and dirty example:
>
>     class Str:
>         'Descriptor using a custom protocol'
>         def __my_get__(self, obj, key):
>             print(key * 5)
>             return obj
>
>     class A:
>
>         @property              # old protocol
>         def x(self):
>             return 10
>
>         y = Str()              # new protocol
>
>         def __getattribute__(self, key):
>             'Implement an alternative descriptor protocol'
>             attr = object.__getattribute__(self, key)
>             if hasattr(attr, '__my_get__'):
>                 return attr.__my_get__(attr, key)
>             return attr
>
>     a = A()
>     print(a.x)
>     print(a.y)

From andrew.svetlov at gmail.com Thu Mar 3 12:53:56 2011 From: andrew.svetlov at gmail.com (Andrew Svetlov) Date: Thu, 3 Mar 2011 13:53:56 +0200 Subject: [Python-ideas] The Descriptor Protocol... In-Reply-To: References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> Message-ID: Something like:

    Point -> var_name = namedtuple(var_name, 'xy')

??? Take into account, the `->` operator is not allowed in regular Python code, only in function annotations as the annotation for the result value. Maybe that's close enough to our case?

From andrew.svetlov at gmail.com Thu Mar 3 12:57:28 2011 From: andrew.svetlov at gmail.com (Andrew Svetlov) Date: Thu, 3 Mar 2011 13:57:28 +0200 Subject: [Python-ideas] The Descriptor Protocol... In-Reply-To: References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> Message-ID: But... I'm still repeating myself... :(

From ncoghlan at gmail.com Thu Mar 3 12:59:26 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 3 Mar 2011 21:59:26 +1000 Subject: [Python-ideas] The Descriptor Protocol... In-Reply-To: References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> Message-ID: On Thu, Mar 3, 2011 at 9:53 PM, Andrew Svetlov wrote:
> Something like:
> Point -> var_name = namedtuple(var_name, 'xy')
> ???
>
> Take into account, the `->` operator is not allowed in regular Python
> code, only in function annotations as the annotation for the result value.
> Maybe that's close enough to our case?
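For concreteness, the status-quo "option b" pattern this thread wants to improve on (a descriptor that must be told its own attribute name as a string) can be rendered as a short runnable sketch. `Str` here is an illustrative stand-in, not a class from any of the libraries mentioned in the thread:

```python
class Str:
    """Descriptor that must be told its own attribute name up front --
    the repetition the thread is complaining about."""
    def __init__(self, name):
        self.name = name

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return obj.__dict__.get(self.name, '')

    def __set__(self, obj, value):
        if not isinstance(value, str):
            raise TypeError('%s expects a string' % self.name)
        obj.__dict__[self.name] = value

class Person(object):
    name    = Str('name')        # attribute name repeated as a string literal
    address = Str('address')

p = Person()
p.name = 'Guido'
print(p.name)        # -> Guido
```

Renaming the attribute means editing both the target and the string literal, which is exactly the DRY violation the proposals below try to eliminate.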
Not a bad idea at all, although I would use "as" for the purpose (it's already a keyword we use for funny not-quite-assignment operations).

    class Person:
        name as n = Str(n)
        address as a = Str(a)

    Point as p = namedtuple(p, 'x y')

Might be too magic for most people's tastes, but it would definitely reduce the repetition problem. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From andrew.svetlov at gmail.com Thu Mar 3 13:12:54 2011 From: andrew.svetlov at gmail.com (Andrew Svetlov) Date: Thu, 3 Mar 2011 14:12:54 +0200 Subject: [Python-ideas] The Descriptor Protocol... In-Reply-To: References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> Message-ID:

> Not a bad idea at all, although I would use "as" for the purpose (it's
> already a keyword we use for funny not-quite-assignment operations).

"as" is definitely better. But in

> class Person:
>     name as n = Str(n)

we can still see the name duplication. On the other hand, personally I don't want to see any magic name like name = Str($_)

From ncoghlan at gmail.com Thu Mar 3 13:23:40 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 3 Mar 2011 22:23:40 +1000 Subject: [Python-ideas] The Descriptor Protocol... In-Reply-To: References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> Message-ID: On Thu, Mar 3, 2011 at 10:12 PM, Andrew Svetlov wrote:
>> Not a bad idea at all, although I would use "as" for the purpose (it's
>> already a keyword we use for funny not-quite-assignment operations).
> "as" is definitely better.
>
> But in
>> class Person:
>>     name as n = Str(n)
> we can still see the name duplication.

However, compared to the status quo:

1. The backreference can use an arbitrarily short name, as it won't form part of the API*
2. This can be designed so the compiler helps out in detecting typos
3.
If you later decide to change the public name (the real bane of the repeat-as-a-string workaround), it only needs to be changed in one place

*This can actually be done by using a pre-AST transform, such that the "n" variable never appears in the AST, instead being replaced by the constant string "name"

I don't personally believe the use case is frequent enough to justify the complexity of a whole new statement type, but it's an interesting idea to consider regardless. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From andrew.svetlov at gmail.com Thu Mar 3 14:39:00 2011 From: andrew.svetlov at gmail.com (Andrew Svetlov) Date: Thu, 3 Mar 2011 15:39:00 +0200 Subject: [Python-ideas] The Descriptor Protocol... In-Reply-To: References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> Message-ID: On Thu, Mar 3, 2011 at 2:23 PM, Nick Coghlan wrote:
>> But in
>>> class Person:
>>>     name as n = Str(n)
>> we can still see the name duplication.
> [...]
> However, compared to the status quo:
>
> 1. The backreference can use an arbitrarily short name, as it won't
> form part of the API*
> 2. This can be designed so the compiler helps out in detecting typos
> 3. If you later decide to change the public name (the real bane of the
> repeat-as-a-string workaround), it only needs to be changed in one
> place

That's true.

> *This can actually be done by using a pre-AST transform, such that the
> "n" variable never appears in the AST, instead being replaced by the
> constant string "name"

Not sure. Other usages of "as" put the new variable in the current scope: import, with, except...

> I don't personally believe the use case is frequent enough to justify
> the complexity of a whole new statement type, but it's an interesting
> idea to consider regardless.

I use named descriptors (I mean objects aware of their own name) from time to time in my data structures. The last usage was about two weeks ago.
For examples of such descriptors, see SQLAlchemy's declarative_base, models and views from Django, etc. From my perspective, named descriptors are widely used. Of course, the current workaround with a metaclass is good enough that I'm not starving for a better solution. I can say the same about things like namedtuple. The current notation annoys me a bit, but I agree to live with it.

From greg.ewing at canterbury.ac.nz Thu Mar 3 21:45:45 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 04 Mar 2011 09:45:45 +1300 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> Message-ID: <4D6FFDF9.3060507@canterbury.ac.nz> Nick Coghlan wrote:
> That is, something that
> made it feasible to reference the name on the left hand side of a
> simple assignment without needing to repeat it as a string with the
> same contents.

I think we should have assignment decorators.

    @decorator
    lhs = rhs

would be equivalent to

    lhs = decorator('lhs', rhs)

-- Greg

From ethan at stoneleaf.us Thu Mar 3 22:04:30 2011 From: ethan at stoneleaf.us (Ethan Furman) Date: Thu, 03 Mar 2011 13:04:30 -0800 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: <4D6FFDF9.3060507@canterbury.ac.nz> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> Message-ID: <4D70025E.9020503@stoneleaf.us> Greg Ewing wrote:
> Nick Coghlan wrote:
>> That is, something that
>> made it feasible to reference the name on the left hand side of a
>> simple assignment without needing to repeat it as a string with the
>> same contents.
>
> I think we should have assignment decorators.
>
>     @decorator
>     lhs = rhs
>
> would be equivalent to
>
>     lhs = decorator('lhs', rhs)

That seems very verbose -- part of the issue for me is all the extra typing involved in specifying the name twice, and I don't think this is going to save many, if any, keystrokes. ~Ethan~

From andrew.svetlov at gmail.com Thu Mar 3 22:07:54 2011 From: andrew.svetlov at gmail.com (Andrew Svetlov) Date: Thu, 3 Mar 2011 23:07:54 +0200 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: <4D70025E.9020503@stoneleaf.us> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D70025E.9020503@stoneleaf.us> Message-ID: Is there any idea how to extend the decorator notation to accept a right-hand argument? From my perspective, a decorator is just some function, converting the left argument to something reasonable in the same context.

From greg.ewing at canterbury.ac.nz Thu Mar 3 23:40:21 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 04 Mar 2011 11:40:21 +1300 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: <4D70025E.9020503@stoneleaf.us> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D70025E.9020503@stoneleaf.us> Message-ID: <4D7018D5.9060701@canterbury.ac.nz> Ethan Furman wrote:
> Greg Ewing wrote:
>
>> I think we should have assignment decorators.
>>
>> @decorator
>> lhs = rhs
>
> That seems very verbose -- part of the issue for me is all the extra
> typing involved in specifying the name twice, and I don't think this is
> going to save many, if any, keystrokes.

I don't follow. It costs an @ and a newline, but it saves one instance of the name, plus two quotes, a comma and perhaps a space. Anyway, the main issue for me in violating DRY isn't the number of keystrokes.
The main issues are: * It's harder to read -- the repeated name is just noise that doesn't convey any useful information to the reader. * It's harder to maintain -- if you change the name, you have to remember to change it in both places. -- Greg From greg.ewing at canterbury.ac.nz Thu Mar 3 23:54:26 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 04 Mar 2011 11:54:26 +1300 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D70025E.9020503@stoneleaf.us> Message-ID: <4D701C22.50002@canterbury.ac.nz> Andrew Svetlov wrote: > Is there any idea how to extend decorator notation up to accepting > right-hand argument? Well, I just showed one. It's a little different from the way function and class decorators work, but it has to be -- otherwise it wouldn't provide any advantage over just writing a function call and assignment. I've also had another idea on how to approach the problem without involving the decorator concept: def name as something(arg, ...) would be equivalent to name = something('name', arg, ...) The idea here is that we're generalising the way that 'def' defines an object that knows the name it was defined with. It's notable that namedtuple could be used as-is with either of these: @namedtuple Fred = 'x y z' or def Fred as namedtuple('x y z') Critiquing my own suggestions, I find that the second one looks more elegant and less confusing, but then I've never really liked the decorator syntax much in the first place. -- Greg From jsbueno at python.org.br Fri Mar 4 00:01:59 2011 From: jsbueno at python.org.br (Joao S. O. Bueno) Date: Thu, 3 Mar 2011 20:01:59 -0300 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) 
In-Reply-To: <4D7018D5.9060701@canterbury.ac.nz> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D70025E.9020503@stoneleaf.us> <4D7018D5.9060701@canterbury.ac.nz> Message-ID: On Thu, Mar 3, 2011 at 7:40 PM, Greg Ewing wrote:
> Ethan Furman wrote:
>> Greg Ewing wrote:
>>
>>> I think we should have assignment decorators.
>>>
>>> @decorator
>>> lhs = rhs
>>
>> That seems very verbose -- part of the issue for me is all the extra
>> typing involved in specifying the name twice, and I don't think this is
>> going to save many, if any, keystrokes.
>
> I don't follow. It costs an @ and a newline, but it saves
> one instance of the name, plus two quotes, a comma and
> perhaps a space.
>
> Anyway, the main issue for me in violating DRY isn't the
> number of keystrokes. The main issues are:

I liked the idea of an assignment decorator for these cases. Even the constants that made an appearance on python-dev a few months ago (and I suppose that at some point they should come into Python 3.3) could benefit from that. Let's think a little more on this syntax for assignment decorators. There _is_ currently a way of doing a similar thing, without new syntax -- but that would involve changing the f_locals of the calling frame in the constructor of the named object (which is just DoublePlusUngood). What use cases could we have for it? If there is a syntax change, can it do more things than an implicit first parameter to a function call? Or would that suffice? Regards, js -><-

> * It's harder to read -- the repeated name is just noise
> that doesn't convey any useful information to the reader.
>
> * It's harder to maintain -- if you change the name, you
> have to remember to change it in both places.
> > -- > Greg > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > http://mail.python.org/mailman/listinfo/python-ideas > From ethan at stoneleaf.us Fri Mar 4 00:13:37 2011 From: ethan at stoneleaf.us (Ethan Furman) Date: Thu, 03 Mar 2011 15:13:37 -0800 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: <4D7018D5.9060701@canterbury.ac.nz> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D70025E.9020503@stoneleaf.us> <4D7018D5.9060701@canterbury.ac.nz> Message-ID: <4D7020A1.8080702@stoneleaf.us> Greg Ewing wrote: > Ethan Furman wrote: >> Greg Ewing wrote: >> >>> I think we should have assignment decorators. >>> >>> @decorator >>> lhs = rhs >> >> That seems very verbose -- part of the issue for me is all the extra >> typing involved in specifying the name twice, and I don't think this >> is going to save many, if any, keystrokes. > > I don't follow. It costs an @ and a newline, but it saves > one instance of the name, plus two quotes, a comma and > perhaps a space. Ah -- I thought 'decorator' was a stand-in for an actual decorator name. Still not in love with it though... an @ on a line all by itself? Ick. (How's that for a reasoned argument? ;) > Anyway, the main issue for me in violating DRY isn't the > number of keystrokes. The main issues are: > > * It's harder to read -- the repeated name is just noise > that doesn't convey any useful information to the reader. > > * It's harder to maintain -- if you change the name, you > have to remember to change it in both places. Yup, those are the main issues. Didn't someone say that meta-classes can be used to solve this issue? ~Ethan~ From guido at python.org Fri Mar 4 00:15:51 2011 From: guido at python.org (Guido van Rossum) Date: Thu, 3 Mar 2011 15:15:51 -0800 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) 
In-Reply-To: <4D7018D5.9060701@canterbury.ac.nz> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D70025E.9020503@stoneleaf.us> <4D7018D5.9060701@canterbury.ac.nz> Message-ID: Greg, what is your use case for passing the lhs name into the function? Because that seems to be your purpose. Also it seems that one couldn't decorate all but the simplest assignments: @dec lhs = rhs # lhs = dec('lhs', rhs) @dec lhs.attr = rhs # lhs.attr = dec('lhs.attr', rhs) ??? @dec lhs[i] = rhs # lhs[i] = dec('??????', rhs) @dec lhs1, lhs2, lhs3 = rhs # lhs1, lhs2, lhs3 = dec('???????????', rhs) The use case I can think of for the first example would be to declare fields in a model class that know their own field name, as in Django and AppEngine models, and many other libraries. But that use case is covered pretty well by metaclasses, and the @decorator syntax has the disadvantage of requiring at least two lines per field, whereas the current solution requires only one line (and there is no explicit repetition of the field name -- the metaclass takes care of it). --Guido On Thu, Mar 3, 2011 at 2:40 PM, Greg Ewing wrote: > Ethan Furman wrote: >> >> Greg Ewing wrote: >> >>> I think we should have assignment decorators. >>> >>> @decorator >>> lhs = rhs >> >> That seems very verbose -- part of the issue for me is all the extra >> typing involved in specifying the name twice, and I don't think this is >> going to save many, if any, keystrokes. > > I don't follow. It costs an @ and a newline, but it saves > one instance of the name, plus two quotes, a comma and > perhaps a space. > > Anyway, the main issue for me in violating DRY isn't the > number of keystrokes. The main issues are: > > * It's harder to read -- the repeated name is just noise > ?that doesn't convey any useful information to the reader. > > * It's harder to maintain -- if you change the name, you > ?have to remember to change it in both places. 
> > -- > Greg > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > http://mail.python.org/mailman/listinfo/python-ideas > -- --Guido van Rossum (python.org/~guido)

From ethan at stoneleaf.us Fri Mar 4 00:25:14 2011 From: ethan at stoneleaf.us (Ethan Furman) Date: Thu, 03 Mar 2011 15:25:14 -0800 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: <4D701C22.50002@canterbury.ac.nz> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D70025E.9020503@stoneleaf.us> <4D701C22.50002@canterbury.ac.nz> Message-ID: <4D70235A.1070608@stoneleaf.us> Greg Ewing wrote:
> I've also had another idea on how to approach the problem
> without involving the decorator concept:
>
> def name as something(arg, ...)
>
> would be equivalent to
>
> name = something('name', arg, ...)
> [snip]
>
> or
>
> def Fred as namedtuple('x y z')

This I like. Do we even need the def?

    class Person(object):
        name as Str()
        address as Str()

I don't think I've seen it mentioned yet (my apologies if I missed it), but we could also add more magic and make the assigned name an available attribute to the function when it's called. ~Ethan~

From guido at python.org Fri Mar 4 00:25:21 2011 From: guido at python.org (Guido van Rossum) Date: Thu, 3 Mar 2011 15:25:21 -0800 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: <4D70235A.1070608@stoneleaf.us> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D70025E.9020503@stoneleaf.us> <4D701C22.50002@canterbury.ac.nz> <4D70235A.1070608@stoneleaf.us> Message-ID: On Thu, Mar 3, 2011 at 3:25 PM, Ethan Furman wrote:
>> def Fred as namedtuple('x y z')
>
> This I like. Do we even need the def?

Eek! All other uses of 'as' in Python have the target on the right.
-- --Guido van Rossum (python.org/~guido) From tjreedy at udel.edu Fri Mar 4 00:30:55 2011 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 03 Mar 2011 18:30:55 -0500 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D70025E.9020503@stoneleaf.us> <4D7018D5.9060701@canterbury.ac.nz> Message-ID: On 3/3/2011 6:15 PM, Guido van Rossum wrote: > The use case I can think of for the first example would be to declare > fields in a model class that know their own field name, as in Django > and AppEngine models, and many other libraries. But that use case is > covered pretty well by metaclasses, and the @decorator syntax has the > disadvantage of requiring at least two lines per field, whereas the > current solution requires only one line (and there is no explicit > repetition of the field name -- the metaclass takes care of it). It seems that there are a number of things one can do with a metaclass that are not immediately obvious to everyone. This is not the first time I have seen that sort of answer. Perhaps a metaclasslib, to go with contextlib, would be useful. Or a HOWTO. -- Terry Jan Reedy From jsbueno at python.org.br Fri Mar 4 01:06:31 2011 From: jsbueno at python.org.br (Joao S. O. Bueno) Date: Thu, 3 Mar 2011 21:06:31 -0300 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D70025E.9020503@stoneleaf.us> <4D7018D5.9060701@canterbury.ac.nz> Message-ID: Hi Guido - On Thu, Mar 3, 2011 at 8:15 PM, Guido van Rossum wrote: > Greg, what is your use case for passing the lhs name into the > function? Because that seems to be your purpose. 
> > Also it seems that one couldn't decorate all but the simplest assignments:
> >
> > @dec
> > lhs = rhs
> > # lhs = dec('lhs', rhs)

Really, I don't perceive these as issues - on the one hand, it might be desirable that only simple assignments work, and the kinds that would result in complicated implementations or hard-to-read code could always raise a syntax error. But I could imagine:

> @dec
> lhs.attr = rhs
> # lhs.attr = dec('lhs.attr', rhs) ???

setattr(lhs, 'attr', dec('attr', rhs))

> @dec
> lhs[i] = rhs
> # lhs[i] = dec('??????', rhs)

Maybe this could simply raise a syntax error, but lhs.__setitem__(i, dec(str(i), rhs)) is not unthinkable

> @dec
> lhs1, lhs2, lhs3 = rhs
> # lhs1, lhs2, lhs3 = dec('???????????', rhs)

lhs1, lhs2, lhs3 = dec(('lhs1', 'lhs2', 'lhs3'), rhs)

> The use case I can think of for the first example would be to declare
> fields in a model class that know their own field name, as in Django
> and AppEngine models, and many other libraries. But that use case is
> covered pretty well by metaclasses,

The problem I see with the way this is currently done is that it is invisible - as in "not explicit". The metaclass sets an object attribute in a way that can only be seen in the documentation of the metaclass (which is usually presented as documentation of the superclass in these libraries). Moreover, the mechanism to do this through metaclasses is a bit complex, so even reasonably proficient programmers perceive this kind of behavior as magic that "just works".

> and the @decorator syntax has the
> disadvantage of requiring at least two lines per field, whereas the
> current solution requires only one line (and there is no explicit
> repetition of the field name -- the metaclass takes care of it).

As for the syntax, we can figure out which one works best - be it decorators, the "def ... as ..." approach, or something else - but I think there are use cases for that.
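The metaclass mechanism under discussion here - the step that "invisibly" tells each descriptor the name it was bound to at class-creation time - can be sketched minimally as follows. The names `Str` and `NamedDescriptors` are illustrative only; this is not Django's or SQLAlchemy's actual implementation:

```python
class Str:
    """Descriptor whose name is filled in by the metaclass below."""
    def __init__(self):
        self.name = None                  # set at class-creation time

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return obj.__dict__.get(self.name)

    def __set__(self, obj, value):
        obj.__dict__[self.name] = value

class NamedDescriptors(type):
    """The 'magic that just works': walk the class namespace and tell
    each descriptor the attribute name it was bound to."""
    def __new__(mcls, clsname, bases, namespace):
        for attr, value in namespace.items():
            if isinstance(value, Str):
                value.name = attr
        return super().__new__(mcls, clsname, bases, namespace)

class Person(metaclass=NamedDescriptors):
    name = Str()         # no repeated string -- the metaclass fills it in
    address = Str()

p = Person()
p.name = 'Raymond'
print(Person.__dict__['name'].name)   # -> name
```

The naming step happens entirely inside `NamedDescriptors.__new__`, which is exactly the "not explicit" behavior being objected to: nothing at the `name = Str()` call site hints that the descriptor learns its own name.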
As I pointed out in the other message, the "constants" that have been discussed recently on python-dev could also benefit from a feature like this. regards, js -><-

> > --Guido >

From greg.ewing at canterbury.ac.nz Fri Mar 4 01:19:19 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 04 Mar 2011 13:19:19 +1300 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: <4D7020A1.8080702@stoneleaf.us> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D70025E.9020503@stoneleaf.us> <4D7018D5.9060701@canterbury.ac.nz> <4D7020A1.8080702@stoneleaf.us> Message-ID: <4D703007.7050701@canterbury.ac.nz> Ethan Furman wrote:
> Ah -- I thought 'decorator' was a stand-in for an actual decorator name.

Yes, it is -- but it's a name that you would otherwise have to write somewhere else. For example, currently you write

    Fred = namedtuple('Fred', 'x y z')

This would become

    @namedtuple
    Fred = 'x y z'

> Didn't someone say that meta-classes
> can be used to solve this issue?

Probably they can, but at the expense of abusing 'class' to define something that in general is nothing like a class. For example, in PyGUI I make heavy use of a custom property descriptor which is currently used like this:

    fred = overridable_property('fred', "This is the fred property.")

Using an assignment decorator, this would become

    @overridable_property
    fred = "This is the fred property."

There is probably some way of arranging things so that it can be written

    class fred(overridable_property):
        "This is the fred property."

but this would be massively confusing, because it's not defining a class at all, or anything remotely like a class. What we really want is a construct that works like 'class' in some ways, but has a completely neutral name that doesn't suggest any particular semantics.
The word 'def' would actually fill that bill if it weren't already ingrained in everyone's brains that it defines functions in particular. -- Greg From greg.ewing at canterbury.ac.nz Fri Mar 4 01:39:52 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 04 Mar 2011 13:39:52 +1300 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D70025E.9020503@stoneleaf.us> <4D701C22.50002@canterbury.ac.nz> <4D70235A.1070608@stoneleaf.us> Message-ID: <4D7034D8.4030103@canterbury.ac.nz> Guido van Rossum wrote: > Eek! All other uses of 'as' in Python have the target on the right. Well, it doesn't necessarily have to be 'as'. It could be def Fred = namedtuple('x y z') but that wouldn't give as much of a clue that something special is happening in the middle. Is the reversal really all that much of a problem? It makes sense when you read it as an English sentence: "Define Fred as a named tuple." Just like all the other uses of "as" mean what they seem to mean. Given that each existing use of 'as' has its own unique quirks, forcing 'as' to always bind on the right regardless of anything else might be seen as a foolish consistency. -- Greg From greg.ewing at canterbury.ac.nz Fri Mar 4 02:40:41 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 04 Mar 2011 14:40:41 +1300 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D70025E.9020503@stoneleaf.us> <4D7018D5.9060701@canterbury.ac.nz> Message-ID: <4D704319.5000608@canterbury.ac.nz> Guido van Rossum wrote: > Greg, what is your use case for passing the lhs name into the > function? 
I've found two so far: * Named tuples * OverridableProperty instances in PyGUI > Also it seems that one couldn't decorate all but the simplest assignments: > > @dec > lhs.attr = rhs > # lhs.attr = dec('lhs.attr', rhs) ??? That would be disallowed -- the target would be restricted to a bare name. This is no worse than the corresponding restriction for 'def' and 'class'. > The use case I can think of for the first example would be to declare > fields in a model class that know their own field name > ... the @decorator syntax has the > disadvantage of requiring at least two lines per field, whereas the > current solution requires only one line Yes, that's a valid criticism. The 'def' version would address it. > But that use case is > covered pretty well by metaclasses, I'm not convinced that metaclasses are a general solution to this kind of problem. I had an interesting experience recently with SqlAlchemy, where there is a Table class having a metaclass that does magical things with the contents of the class dict when the class is created. It turns out you can't subclass the Table class using the normal Python techniques, because the metaclass magic gets triggered too soon. To work around this they provide a flag you can use to suppress the magic, but it's an ugly kludge. There's also the problem that you can only use *one* metaclass at a time, so if you want a class that makes use of features provided by two different metaclasses, you're out of luck. For example, if I were relying on a metaclass to set up my OverridableProperty descriptors, and I wanted a Django model class to have some of those properties, I wouldn't be able to do it, because the metaclasses would conflict. -- Greg From raymond.hettinger at gmail.com Fri Mar 4 02:59:40 2011 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Thu, 3 Mar 2011 17:59:40 -0800 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) 
In-Reply-To: <4D703007.7050701@canterbury.ac.nz> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D70025E.9020503@stoneleaf.us> <4D7018D5.9060701@canterbury.ac.nz> <4D7020A1.8080702@stoneleaf.us> <4D703007.7050701@canterbury.ac.nz> Message-ID: <32506B21-8CED-421F-8BC9-2228AA6BFFB5@gmail.com> On Mar 3, 2011, at 4:19 PM, Greg Ewing wrote:
> . For example, currently you write
>
> Fred = namedtuple('Fred', 'x y z')
>
> This would become
>
> @namedtuple
> Fred = 'x y z'

How are tuples handled?

    @deco
    C = a, b, c

Is this deco('C', a, b, c) or deco('C', (a, b, c))? All in all this seems like too much firepower (a language syntax change) for too little benefit (how much do we really care that 'C' got typed twice?). If there were going to be only one syntax change for Python 3.3, why not use it for something that adds a lot more expressive power:

    x = a!name        # x = getattr(a, name)

That bit of syntactic sugar might greatly expand our minds when it comes to dynamically creating and accessing attributes. Almost any syntax would do:

    a::name
    a$name
    a[[name]]

etc. Using external data to build classes/structs on the fly is an under-utilized technique in Python:

    # create a new sub-category
    k = type(kind, parents, {})
    for trait, value in traits.items():
        k$trait = value

    # access some trait in the hierarchy of traits
    print(k$trait)

Another expressive bit of syntax would be a way to do a def into a target namespace. Currently, to implement prototype OO, we need to do something like this:

    o = Object()              # base prototype OO class
    o.balance = 0             # populate the prototype instance
    o.name = 'empty'

    def deposit(self, amt):
        self.balance += amt
    o.deposit = deposit
    del deposit

    p = o.clone()             # make a new account
    p.name = 'john q public'

    def custom_rule():
        if self.balance < 500:
            notify_low_balance(self)
    p.custom_rule = custom_rule
    del custom_rule

Wouldn't it be better to write directly into a target namespace?
    o = Object()              # base prototype OO class
    o.balance = 0             # prototype instance
    o.name = 'empty'

    def o.deposit(self, amt):
        self.balance += amt

    p = o.clone()             # make a new account
    p.name = 'john q public'

    def p.custom_rule():
        if self.balance < 500:
            notify_low_balance(self)

This would also be useful for handler dicts and dispatch dicts:

    actions = {}

    def actions.shoot():
        print('Fire!')

    def actions.retreat():
        print('Run away!')

    for action in commands:
        actions[action]()
        history.update(action)
        ...

I think if a new syntax gets added, it should do something bold and expressive. I don't think that assignment decorators add a significant new capability. AFAICT, it just saves a few characters but doesn't change the way we think or work. If someone were proposing a C macro that did exactly the same thing, someone else would complain that it was cryptic, had too little benefit, and hid what was really going on behind a layer of abstraction. Raymond

From guido at python.org Fri Mar 4 03:00:30 2011 From: guido at python.org (Guido van Rossum) Date: Thu, 3 Mar 2011 18:00:30 -0800 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: <4D704319.5000608@canterbury.ac.nz> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D70025E.9020503@stoneleaf.us> <4D7018D5.9060701@canterbury.ac.nz> <4D704319.5000608@canterbury.ac.nz> Message-ID: On Thu, Mar 3, 2011 at 5:40 PM, Greg Ewing wrote:
> Guido van Rossum wrote:
>
>> Greg, what is your use case for passing the lhs name into the
>> function?
>
> I've found two so far:
>
> * Named tuples
> * OverridableProperty instances in PyGUI

So a metaclass doesn't work for the latter?

>> Also it seems that one couldn't decorate all but the simplest assignments:
>>
>> @dec
>> lhs.attr = rhs
>> # lhs.attr = dec('lhs.attr', rhs) ???
> > That would be disallowed -- the target would be restricted to > a bare name. This is no worse than the corresponding restriction > for 'def' and 'class'. It is worse, because undecorated function and class definitions also have that restriction; but undecorated assignments don't. Though I suppose I could live with it because I don't see the use cases. >> The use case I can think of for the first example would be to declare >> fields in a model class that know their own field name > >> ... the @decorator syntax has the >> disadvantage of requiring at least two lines per field, whereas the >> current solution requires only one line > > Yes, that's a valid criticism. The 'def' version would address it. > >> But that use case is >> covered pretty well by metaclasses, > > I'm not convinced that metaclasses are a general solution to this > kind of problem. It's indeed not for the namedtuple case. I think it can work for PyGUI but I don't know much about it. > I had an interesting experience recently with SqlAlchemy, where there > is a Table class having a metaclass that does magical things with the > contents of the class dict when the class is created. It turns out > you can't subclass the Table class using the normal Python techniques, > because the metaclass magic gets triggered too soon. To work around > this they provide a flag you can use to suppress the magic, but it's > an ugly kludge. Hm. Maybe that's because their metaclass has some other side effects that have nothing to do with the property definition patchup? The examples of using metaclasses for field/property definitions that I'm familiar with (in Django and App Engine) are both fine with subclassing. > There's also the problem that you can only use *one* metaclass at > a time, so if you want a class that makes use of features provided > by two different metaclasses, you're out of luck.
For example, if > I were relying on a metaclass to set up my OverridableProperty > descriptors, and I wanted a Django model class to have some of > those properties, I wouldn't be able to do it, because the > metaclasses would conflict. With enough patience you can actually combine metaclasses using multiple inheritance. The book I used to guide my way through metaclasses (http://www.amazon.com/Putting-Metaclasses-Work-Ira-Forman/dp/0201433052) even automatically constructed the combined metaclass (they use C++); in Python you have to work at it a little harder, but the same approach can be used. However I've never come across this, and I would surely prefer to avoid it if possible. I guess you need to enlighten me more about OverridableProperty; apparently (like namedtuple) it is a pretty generic thing? I know you are not really after saving the keystrokes but more after the DRY principle, but ISTM that passing the name in as a string literal seems a pretty small price to pay for the benefit derived, and the cost of getting the assignment decorator designed, implemented and documented seems pretty high. -- --Guido van Rossum (python.org/~guido) From python at mrabarnett.plus.com Fri Mar 4 03:20:39 2011 From: python at mrabarnett.plus.com (MRAB) Date: Fri, 04 Mar 2011 02:20:39 +0000 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: <32506B21-8CED-421F-8BC9-2228AA6BFFB5@gmail.com> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D70025E.9020503@stoneleaf.us> <4D7018D5.9060701@canterbury.ac.nz> <4D7020A1.8080702@stoneleaf.us> <4D703007.7050701@canterbury.ac.nz> <32506B21-8CED-421F-8BC9-2228AA6BFFB5@gmail.com> Message-ID: <4D704C77.8050605@mrabarnett.plus.com> On 04/03/2011 01:59, Raymond Hettinger wrote: > > On Mar 3, 2011, at 4:19 PM, Greg Ewing wrote: >> . 
For example, currently you write >> >> Fred = namedtuple('Fred', 'x y z') >> >> This would become >> >> @namedtuple >> Fred = 'x y z' > > How are tuples handled? > > @deco > C = a, b, c > > Is this C('C', a, b, c) or C('C', (a, b, c))? > > All in all this seems like too much firepower (a language syntax > change) for too little benefit (how much do we really care that 'C' > got typed twice?). > > If there were going to be only one syntax change for Python 3.3, why > not use it for something that adds a lot more expressive power: > > x = a!name # x = getattr(a, name) > [snip] Or: x = a.(name) From raymond.hettinger at gmail.com Fri Mar 4 03:25:05 2011 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Thu, 3 Mar 2011 18:25:05 -0800 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: <4D704319.5000608@canterbury.ac.nz> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D70025E.9020503@stoneleaf.us> <4D7018D5.9060701@canterbury.ac.nz> <4D704319.5000608@canterbury.ac.nz> Message-ID: <667F2E95-B123-442E-A91A-2135F2275A55@gmail.com> On Mar 3, 2011, at 5:40 PM, Greg Ewing wrote: > Guido van Rossum wrote: > >> Greg, what is your use case for passing the lhs name into the >> function? > > I've found two so far: > > * Named tuples > * OverridableProperty instances in PyGUI I wouldn't like to see named tuples used as justification for the proposed syntax change. In a typical use of a named tuple, the name gets referred to many times and the name gets included in docs and docstrings: Point = namedtuple('Point', 'x, y, z, color') . . . p = Point(ax, ay, az, acolor) q = Point(qx, qy, qz, qcolor) ... def midpoint(u, v, mixer): 'Create a new Point half-way between u and v.
Mix the colors with the given function' return Point(avg(u.x, v.x), avg(u.y, v.y), avg(u.z, v.z), mixer(u.color, v.color)) So, the syntax change would save only one out of very many uses. It's not much different than class definitions which get defined once, become part of the API, and are accessed in many places. Optimizing a few characters in the definition line is not a win. If you need to change the name later, it will need to get changed in many places (global substitution). Named tuples do most of their work after the definition is made (instantiating new instances or accessing their parts using attribute lookup). In a typical module using named tuples, fewer than a dozen characters would be saved out of the entire file. I don't see that as much of a win. The last concern is that the proposal may change the way a person thinks about named tuples. It suggests that the only way to use them is to assign them. But there are other ways: class Point(namedtuple('Point', 'x y')): def __repr__(self): return '<{:6.3f} | {:6.3f}>'.format(*self) There may be other compelling use cases for assignment decorators but I don't think named tuples are among them. Raymond From raymond.hettinger at gmail.com Fri Mar 4 03:32:31 2011 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Thu, 3 Mar 2011 18:32:31 -0800 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...)
In-Reply-To: <4D704C77.8050605@mrabarnett.plus.com> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D70025E.9020503@stoneleaf.us> <4D7018D5.9060701@canterbury.ac.nz> <4D7020A1.8080702@stoneleaf.us> <4D703007.7050701@canterbury.ac.nz> <32506B21-8CED-421F-8BC9-2228AA6BFFB5@gmail.com> <4D704C77.8050605@mrabarnett.plus.com> Message-ID: <516B8811-D854-401E-9BD3-C031E223FAA4@gmail.com> On Mar 3, 2011, at 6:20 PM, MRAB wrote: > On 04/03/2011 01:59, Raymond Hettinger wrote: >> >> On Mar 3, 2011, at 4:19 PM, Greg Ewing wrote: >>> . For example, currently you write >>> >>> Fred = namedtuple('Fred', 'x y z') >>> >>> This would become >>> >>> @namedtuple >>> Fred = 'x y z' >> >> [snip] >> All in all this seems like too much firepower (a language syntax change) for too little benefit (how much do we really care that 'C' got typed twice?). >> >> If there were going to be only one syntax change for Python 3.3, why not use it for something that adds a lot more expressive power: >> >> x = a!name # x = getattr(a, name) >> > [snip] > > Or: > x = a.(name) FWIW, I'm not proposing either of those syntax changes. Those were just examples. The main point is that a new syntax change is a big deal, so it ought to be saved for something that greatly expands our expressive power, adds substantive improvements to existing programs, or is mind-expanding in some way. The with-statement was a good example of something that met all of those criteria. If you're going to change the language syntax, make it count and do something cool :-) Raymond From greg.ewing at canterbury.ac.nz Fri Mar 4 04:14:06 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 04 Mar 2011 16:14:06 +1300 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) 
In-Reply-To: <32506B21-8CED-421F-8BC9-2228AA6BFFB5@gmail.com> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D70025E.9020503@stoneleaf.us> <4D7018D5.9060701@canterbury.ac.nz> <4D7020A1.8080702@stoneleaf.us> <4D703007.7050701@canterbury.ac.nz> <32506B21-8CED-421F-8BC9-2228AA6BFFB5@gmail.com> Message-ID: <4D7058FE.4050804@canterbury.ac.nz> Raymond Hettinger wrote: > How are tuples handled? > > @deco > C = a, b, c > > Is this C('C', a, b, c) or C('C', (a, b, c))? Good question. The former is potentially more useful, but less obvious. This is another thing that the 'def' version would be better at. Because it's transforming an existing function call rather than manufacturing a new one, you get to use all the parameter passing machinery in an obvious way. So you can write def C = deco(a, b, c) if you want them as separate parameters. > All in all this seems like too much firepower (a language syntax change) > for too little benefit (how much do we really care that 'C' got typed > twice?). It actually annoys me quite a lot every time I bump into something like this, because the language virtually forces a DRY violation on me that's impossible, or at least extremely awkward, to avoid. Python is generally very good at not reserving special powers for itself, but this is one area where there is a mechanism (for defining a thing that knows its own name) that works in certain specific cases (def and class) but is not open to the programmer for easy use in a general way. That strikes me as a wart. > If there were going to be only one syntax change for Python 3.3, If we only get one syntax change per major release, then for 3.3 it's probably going to be yield-from, which Guido recently said he would like to move forward.
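[Editor's illustration: the 'def C = deco(a, b, c)' form discussed above is hypothetical syntax, but the name-harvesting it relies on can be approximated in today's Python by decorating a throwaway def and reading its __name__. This is a sketch only; the `named` helper is invented here and was not proposed in the thread.]

```python
from collections import namedtuple

def named(callable_, *args, **kwargs):
    """Decorator factory: replace a throwaway def with
    callable_(name, *args, **kwargs), where name is the def's own name."""
    def deco(func):
        return callable_(func.__name__, *args, **kwargs)
    return deco

@named(namedtuple, 'x y z')
def Fred():
    pass  # body is never run; only the name is harvested

p = Fred(1, 2, 3)
print(p)  # Fred(x=1, y=2, z=3)
```

The def's body never executes: the decorator immediately rebinds the name to the constructed object, so 'Fred' is written only once.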
> why not > use it for something that adds a lot more expressive power: > > x = a!name # x = getattr(a, name) That's something worth considering, but I'm not sure it's obviously a more pressing issue. > Another expressive bit a syntax would be for a way to do a def into a > target namespace. > > actions = {} > def actions.shoot(): > print('Fire!') > def actions:retreat(): > print('Run away!) That's a worthy idea, too. Although if we're counting syntax changes, I'd call that about 0.5 of a change, since it's really just relaxing a restriction to allow you to write something that it looks like you should have been able to write all along! > I don't think that assignment decorators add a significant > new capability. AFAICT, it just saves a few characters but doesn't > change the way we think or work. As I've tried to point out, it's about more than just saving characters. It's about allowing the programmer to follow DRY, which is an important software engineering issue. It's also about improving readability, because that counts, you know. -- Greg From greg.ewing at canterbury.ac.nz Fri Mar 4 05:00:37 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 04 Mar 2011 17:00:37 +1300 Subject: [Python-ideas] Computed attribute names (Re: Assignment decorators) In-Reply-To: <4D704C77.8050605@mrabarnett.plus.com> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D70025E.9020503@stoneleaf.us> <4D7018D5.9060701@canterbury.ac.nz> <4D7020A1.8080702@stoneleaf.us> <4D703007.7050701@canterbury.ac.nz> <32506B21-8CED-421F-8BC9-2228AA6BFFB5@gmail.com> <4D704C77.8050605@mrabarnett.plus.com> Message-ID: <4D7063E5.7050906@canterbury.ac.nz> MRAB wrote: > Or: > x = a.(name) That's what I'd go for as well, if I were to go for something like this at all. 
FWIW, there is one thing you could do with this that is currently rather painful: x.(name) += 1 Doing that with getattr/setattr currently requires about 3 lines of code. -- Greg From greg.ewing at canterbury.ac.nz Fri Mar 4 01:29:46 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 04 Mar 2011 13:29:46 +1300 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: <4D70235A.1070608@stoneleaf.us> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D70025E.9020503@stoneleaf.us> <4D701C22.50002@canterbury.ac.nz> <4D70235A.1070608@stoneleaf.us> Message-ID: <4D70327A.10108@canterbury.ac.nz> Ethan Furman wrote: > Do we even need the def? > > class Person(object): > name as Str() > address as Str() I think that would be paring it down a bit too far -- there's not much left to suggest that a name is being bound. At least 'def' implies that something is being defined rather than just referred to. > we could also add more magic and make the assigned name an available > attribute to the function when it's called. I don't understand. Attribute of what? -- Greg From jackdied at gmail.com Fri Mar 4 05:53:07 2011 From: jackdied at gmail.com (Jack Diederich) Date: Thu, 3 Mar 2011 23:53:07 -0500 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: <4D6FFDF9.3060507@canterbury.ac.nz> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> Message-ID: On Thu, Mar 3, 2011 at 3:45 PM, Greg Ewing wrote: > Nick Coghlan wrote: >> >> That is, something that >> made it feasible to reference the name on the left hand side of a >> simple assignment without needing to repeat it as a string with the >> same contents. > > I think we should have assignment decorators.
> > @decorator > lhs = rhs > > would be equivalent to > > lhs = decorator('lhs', rhs) A recurring question on python-list is "how do I find out the name of my variable?" and the recurring answer is "why does it matter?" or "what do you mean by _the_ name?" In general I'm wary of adding features whose only purpose is making DSLs easier to write in pure Python. Function/class decorators are currently just syntactic sugar for things you could already do. Speaking of decorators, after this change what would function and class decorators look like? Currently the identity decorator is def decorator(ob): return ob Would the new syntax change that to def decorator(name, ob): return ob ? Or change it to that only for assignments but not classes and functions? -Jack From steve at pearwood.info Fri Mar 4 06:09:24 2011 From: steve at pearwood.info (Steven D'Aprano) Date: Fri, 04 Mar 2011 16:09:24 +1100 Subject: [Python-ideas] Computed attribute names (Re: Assignment decorators) In-Reply-To: <4D7063E5.7050906@canterbury.ac.nz> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D70025E.9020503@stoneleaf.us> <4D7018D5.9060701@canterbury.ac.nz> <4D7020A1.8080702@stoneleaf.us> <4D703007.7050701@canterbury.ac.nz> <32506B21-8CED-421F-8BC9-2228AA6BFFB5@gmail.com> <4D704C77.8050605@mrabarnett.plus.com> <4D7063E5.7050906@canterbury.ac.nz> Message-ID: <4D707404.8020705@pearwood.info> Greg Ewing wrote: > MRAB wrote: > >> Or: >> x = a.(name) > > That's what I'd go for as well, if I were to go for > something like this at all. > > FWIW, there is one thing you could do with this that > is currently rather painful: > > x.(name) += 1 > > Doing that with getattr/setattr currently requires > about 3 lines of code. Is there a global newline shortage I haven't heard about? *wink* In any case: setattr(x, name, getattr(x, name, 0)+1) looks like one line to me.
If you don't like the DRY violation, the obvious helper function makes it easy to write that as (say) incattr(x, name, 1) -- Steven From greg.ewing at canterbury.ac.nz Fri Mar 4 06:11:01 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 04 Mar 2011 18:11:01 +1300 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D70025E.9020503@stoneleaf.us> <4D7018D5.9060701@canterbury.ac.nz> <4D704319.5000608@canterbury.ac.nz> Message-ID: <4D707465.1080006@canterbury.ac.nz> Guido van Rossum wrote: >>* OverridableProperty instances in PyGUI > > So a metaclass doesn't work for the latter? It could be made to work, but it would be overkill and introduce an unnecessary and undesirable coupling between the property and the class that it resides in. > It is worse, because undecorated function and class definitions also > have that restriction; but undecorated assignments don't. The 'def' version would address that, because it wouldn't be surprising for it to inherit the same restrictions as 'def' for functions. > Hm. Maybe that's because [SqlAlchemy's] metaclass has some other side effects > that has nothing to do with the property definition patchup? No, it's directly related. Whenever a Table subclass is created, the metaclass goes looking for field definitions, and if it doesn't find any it throws an exception. That's fine if you're defining an actual table, but it also happens if you're just trying to create a specialised version of Table that you will then further subclass to create actual tables. The root of the problem is that two different things -- creating a new kind of Table and creating an actual Table -- are both done the same way, by subclassing, and the metaclass can't distinguish between them. 
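[Editor's illustration: the failure mode described here can be reproduced with a toy metaclass. The names `TableMeta` and `Field` are invented for the sketch and are not SqlAlchemy's actual code; the sketch applies the "do nothing when there are no fields" fix so that abstract intermediate subclasses keep working.]

```python
class Field:
    """Marker for a column-like class attribute."""
    pass

class TableMeta(type):
    def __init__(cls, name, bases, ns):
        super().__init__(name, bases, ns)
        fields = [k for k, v in ns.items() if isinstance(v, Field)]
        if bases and not fields:
            # An "eager" metaclass would raise here, breaking ordinary
            # subclassing; doing nothing lets abstract refinements exist.
            return
        cls._fields = fields

class Table(metaclass=TableMeta):
    pass

class AuditedTable(Table):   # a new *kind* of Table: no fields of its own
    pass

class Accounts(AuditedTable):  # an *actual* table
    balance = Field()

print(Accounts._fields)  # ['balance']
```

The metaclass still cannot truly tell "new kind of Table" from "actual table"; it merely guesses from the absence of fields, which is exactly the ambiguity being discussed.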
In this particular case, there's a fairly straightforward solution that could have been used (but wasn't): if there are no fields, simply do nothing instead of raising an exception. But in general it illustrates that throwing large amounts of magic around can easily lead to unintended consequences. > With enough patience you can actually combine metaclasses using > multiple inheritance. ... > in Python you have to work at it a little harder, It's *harder* in Python than it is in C++? Somehow that fails to fill me with confidence. :-( > I guess you need to enlighten me more about OverridableProperty; It's quite simple, really. It turns accesses to a property called 'xxx' into calls of methods called 'get_xxx' and 'set_xxx' on the containing class. Subclasses can then override those methods, make super calls, etc. instead of having to mess around building a new property and extracting __get__ and __set__ methods out of the inherited property. It's defined like this: def overridable_property(name, doc = None): """Creates a property which calls methods get_xxx and set_xxx of the underlying object to get and set the property value, so that the property's behaviour may be easily overridden by subclasses.""" getter_name = intern('get_' + name) setter_name = intern('set_' + name) return property( lambda self: getattr(self, getter_name)(), lambda self, value: getattr(self, setter_name)(value), None, doc) It needs to know the name of the property so that it can build the correct method names. That's an implementation detail that the user shouldn't have to care about, but I'm forced to expose it because of a deficiency in the language. It's like a house with pieces of plumbing sticking out of the walls instead of being hidden away. It works, but it's ugly and annoys you every time you look at it. I *could* hide the plumbing by using a metaclass, but then I would be restricted to using these properties only in classes having that metaclass. 
That would be disappointing, because the property itself is quite self-contained and will happily work in any class. It would also cause me considerable headaches if I wanted to use them in a class already having a custom metaclass, such as a Django model or an SqlAlchemy table. If I studied the metaclass in question closely enough, I might be able to find a way to merge them together, but I don't want to have to do that. Arcane details of someone else's metaclass are the *last* thing I want to be bothered with! > ISTM that passing the name in as a string literal seems > a pretty small price to pay for the benefit derived, It's a small price each time you pay it, yes, but it mounts up. Every time I write one I think "There must surely be a better way of doing this... oh, no, that's right, there isn't." If the ancient Chinese had had computers, I'm sure they would have invented a form of torture based on making programmers repeatedly solve problems that the language doesn't *quite* have the tools to do elegantly... -- Greg From greg.ewing at canterbury.ac.nz Fri Mar 4 06:23:42 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 04 Mar 2011 18:23:42 +1300 Subject: [Python-ideas] Computed attribute names (Re: Assignment decorators) In-Reply-To: <4D707404.8020705@pearwood.info> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D70025E.9020503@stoneleaf.us> <4D7018D5.9060701@canterbury.ac.nz> <4D7020A1.8080702@stoneleaf.us> <4D703007.7050701@canterbury.ac.nz> <32506B21-8CED-421F-8BC9-2228AA6BFFB5@gmail.com> <4D704C77.8050605@mrabarnett.plus.com> <4D7063E5.7050906@canterbury.ac.nz> <4D707404.8020705@pearwood.info> Message-ID: <4D70775E.3010304@canterbury.ac.nz> Steven D'Aprano wrote: > setattr(x, name, getattr(x, name, 0)+1) > > looks like one line to me. No, that's not quite the same thing, because it doesn't invoke the *in-place* + operation. 
It's true that it could be written as setattr(x, name, getattr(x, name, 0).__iadd__(1)) but that's getting really hard to read. The three lines come from trying to write it in a way that doesn't obscure what's being done: y = getattr(x, name) y += 1 setattr(x, name, y) > the obvious helper function makes it easy to write that as (say) > > incattr(x, name, 1) That takes care of +=. Now you just need another one for each of the 20-odd other in-place operations. :-) -- Greg From raymond.hettinger at gmail.com Fri Mar 4 06:24:51 2011 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Thu, 3 Mar 2011 21:24:51 -0800 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: <4D7058FE.4050804@canterbury.ac.nz> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D70025E.9020503@stoneleaf.us> <4D7018D5.9060701@canterbury.ac.nz> <4D7020A1.8080702@stoneleaf.us> <4D703007.7050701@canterbury.ac.nz> <32506B21-8CED-421F-8BC9-2228AA6BFFB5@gmail.com> <4D7058FE.4050804@canterbury.ac.nz> Message-ID: <3D29D5C0-9A3B-4927-AD1B-DDAE4486D540@gmail.com> On Mar 3, 2011, at 7:14 PM, Greg Ewing wrote: > >> I don't think that assignment decorators add a significant new capability. AFAICT, it just saves a few characters but doesn't change the way we think or work. > > As I've tried to point out, it's about more than just saving > characters. It's about allowing the programmer to follow DRY, > which is an important software engineering issue. It's also > about improving readability, because that counts, you know. We both agree about the virtues of readability and DRY. I just disagree that either applies in the case of named tuples. The following seems weird to me and causes a mental hiccup when reading it: @namedtuple Point = 'x y' The existing Point = namedtuple('Point', 'x y') says what it does and my eyes just breeze over it. So, I don't see a readability win.
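[Editor's aside on the repetition being debated: nothing in today's spelling ties the string to the binding, so the two names can silently disagree. A small runnable illustration:]

```python
from collections import namedtuple

# A typo in the string goes unnoticed at definition time and only
# surfaces later, in reprs and debug output.
Point = namedtuple('Pont', 'x y')   # note the typo in the string
p = Point(1, 2)
print(type(p).__name__)  # prints Pont, not Point
```

This is the DRY hazard Greg raises; whether it matters in practice is exactly what the thread disputes.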
Also, I don't think the DRY principle applies. That is all about factoring out a piece of knowledge so that it is used only once. However, the use for named tuples is to define the class and then use that class name many times. So, you still end up using the name over and over again. It is deeply embedded in the program. You can't factor out all of them with the proposed syntax, so you don't get any of the purported benefits of DRY: '''When the DRY principle is applied successfully, a modification of any single element of a system does not change other logically-unrelated elements''' To change the name of the class, you still have to do a global substitution, hitting the definition, every invocation, and its appearance in docs. IOW, the class name is not a piece of knowledge that can be abstracted away. Another software engineering principle, the "Rule of Three", suggests that the proposed abstraction doesn't meet the minimum threshold of usefulness. Your idea may be valid and useful in other contexts, but named tuples aren't a good motivating use case, IMO. Raymond P.S. The current approach to assignments has the virtue that it is readable to people coming from other languages. The decorator approach creates a piece of Python arcana that would only be decipherable to a smaller number of people. -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg.ewing at canterbury.ac.nz Fri Mar 4 06:50:52 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 04 Mar 2011 18:50:52 +1300 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...)
In-Reply-To: <3D29D5C0-9A3B-4927-AD1B-DDAE4486D540@gmail.com> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D70025E.9020503@stoneleaf.us> <4D7018D5.9060701@canterbury.ac.nz> <4D7020A1.8080702@stoneleaf.us> <4D703007.7050701@canterbury.ac.nz> <32506B21-8CED-421F-8BC9-2228AA6BFFB5@gmail.com> <4D7058FE.4050804@canterbury.ac.nz> <3D29D5C0-9A3B-4927-AD1B-DDAE4486D540@gmail.com> Message-ID: <4D707DBC.80405@canterbury.ac.nz> Raymond Hettinger wrote: > We both agree about the virtues of readability and DRY. I just disagree > that either applies in the case of named tuples. Named tuples aren't really the main motivation for me -- they're just something that it could be used with if you wanted. The main use case for me is the overridable_property descriptor that I described earlier. There's a difference between them: with named tuples, the name argument is really only for cosmetics -- if it doesn't match the name it's bound to, nothing really bad happens. The program still works; you just see a different name in debug output, etc. But with overridable_property, the name is an integral part of the way it works, and it seems wrong to have to leave it up to the user to get it right each and every time he uses an overridable_property. Imagine what it would be like if every class definition you wrote had to look like this: class Fred('Fred'): ... and furthermore, the code *wouldn't work* if the name passed in was different from the name the class was bound to. You complain about this, and keep getting told "It's not *that* much of a problem, really -- you only have to write it once." What would you think of that argument? > The following seems weird to me and causes a mental hiccup when reading it: > > @namedtuple > Point = 'x y' Yes, it seems weird to me, too, and I'm not really pushing for it. 
I'd much rather see something like def fred = overridable_property("The fredness of the wongle") [changing the example so we don't get too hung up on named tuples.] > The existing, Point = namedtuple('Point', 'x y') says what it does and > my eyes just breeze over it. So, I don't see a readability win. > > Also, I don't think the DRY principle applies. ... you still end-up > using the name over and over again. But still, it involves being forced to write something more than once when the repetition doesn't convey any extra information. That's inefficient, whether you call it DRY or not. > Another software engineering principle, the "Rule of Three" suggests > that the proposed abstraction doesn't meet the minimum threshold of > usefulness. Hmmm, I need to google that one. Let's see... Rule of three (writing) - Wikipedia, the free encyclopedia The rule of three is a principle in writing that suggests that things that come in threes are inherently funnier, more satisfying, or more effective than ... ...er, no, not that one... Rule of Three - Wikipedia, the free encyclopedia Rule of three may refer to: Rule of three (medicine), for calculating a ... ...nope... [tries again using "rule of three software engineering"] ...okay, here's something that looks relevant: "Even if something appears to have all the requisite pattern elements, it should not be considered a pattern until it has been verified to be a recurring phenomenon (preferably in at least three existing systems -- this is often called the rule of three)." (from http://www.cmcrossroads.com/bradapp/docs/patterns-intro.html) So I guess what you're saying is that we need to find two more use cases? -- Greg From greg.ewing at canterbury.ac.nz Fri Mar 4 07:00:35 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 04 Mar 2011 19:00:35 +1300 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) 
In-Reply-To: <4D707DBC.80405@canterbury.ac.nz> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D70025E.9020503@stoneleaf.us> <4D7018D5.9060701@canterbury.ac.nz> <4D7020A1.8080702@stoneleaf.us> <4D703007.7050701@canterbury.ac.nz> <32506B21-8CED-421F-8BC9-2228AA6BFFB5@gmail.com> <4D7058FE.4050804@canterbury.ac.nz> <3D29D5C0-9A3B-4927-AD1B-DDAE4486D540@gmail.com> <4D707DBC.80405@canterbury.ac.nz> Message-ID: <4D708003.1030504@canterbury.ac.nz> I wrote: > So I guess what you're saying is that we need to find two more > use cases? ...and it's just occurred to me that there are two use cases *already in the language*: classes and functions are two things for which it is very handy for them to know the name they're initially bound to. My overridable_property is a third, and named tuples are a fourth if you allow them. -- Greg From guido at python.org Fri Mar 4 07:03:05 2011 From: guido at python.org (Guido van Rossum) Date: Thu, 3 Mar 2011 22:03:05 -0800 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: <4D707465.1080006@canterbury.ac.nz> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D70025E.9020503@stoneleaf.us> <4D7018D5.9060701@canterbury.ac.nz> <4D704319.5000608@canterbury.ac.nz> <4D707465.1080006@canterbury.ac.nz> Message-ID: On Thu, Mar 3, 2011 at 9:11 PM, Greg Ewing wrote: > def overridable_property(name, doc = None): > """Creates a property which calls methods get_xxx and set_xxx of > the underlying object to get and set the property value, so that > the property's behaviour may be easily overridden by subclasses.""" > > getter_name = intern('get_' + name) > setter_name = intern('set_' + name) > return property( > lambda self: getattr(self, getter_name)(), > lambda self, value: getattr(self, setter_name)(value), > None, >
doc) > > It needs to know the name of the property so that it can build > the correct method names. That's an implementation detail that > the user shouldn't have to care about, but I'm forced to expose > it because of a deficiency in the language. It's like a house > with pieces of plumbing sticking out of the walls instead of > being hidden away. It works, but it's ugly and annoys you every > time you look at it. I agree it's not a good use case for a metaclass. I still don't think it's worth adding decorated assignment for; the annoyance is pretty minor and there are surely many other language warts that one might like to smooth over (but not with decorators). I don't even think it's a great use case for a decorated assignment; I don't think that @overridable_property foo = "Whatever" looks particularly elegant. Especially if you had a bunch of these in a row, I think I'd prefer the one-liner-with-string-literal over the two-liner-with-decorator. Though I don't know if these frequently occur in bunches. (The field/property definitions in Django and App Engine definitely do.) -- --Guido van Rossum (python.org/~guido) From greg.ewing at canterbury.ac.nz Fri Mar 4 07:59:04 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 04 Mar 2011 19:59:04 +1300 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D70025E.9020503@stoneleaf.us> <4D7018D5.9060701@canterbury.ac.nz> <4D704319.5000608@canterbury.ac.nz> <4D707465.1080006@canterbury.ac.nz> Message-ID: <4D708DB8.4020804@canterbury.ac.nz> Guido van Rossum wrote: > I think I'd prefer the one-liner-with-string-literal over the > two-liner-with-decorator. Though I don't know if these frequently > occur in bunches. Yes, they do, which is why I'd prefer a one-line solution as well. Thinking about that, I've realised something else.
The docstrings often take up most of a line, so it's galling to have to spend horizontal space on repeating the name.

-- Greg

From cmjohnson.mailinglist at gmail.com Fri Mar 4 08:17:09 2011
From: cmjohnson.mailinglist at gmail.com (Carl M. Johnson)
Date: Thu, 3 Mar 2011 21:17:09 -1000
Subject: [Python-ideas] Different interface for namedtuple?
Message-ID: 

I've been following the discussion on the list about decorators for assignments and whatnot, and while I do think it would be interesting if there were some way to abstract what happens in the creation of classes and defs into a more generalizable structure, for the immediate use case of NamedTuple, wouldn't an interface something like this solve the DRY problem? --

class Point(NewNamedTuple): x, y

This interface couldn't have worked in Python 2.6 when namedtuple was introduced, but using the __prepare__ method in Python 3, it's trivial to implement. As a bonus, NewNamedTuple could also support type checking with the following syntax:

class Point(NewNamedTuple): x, y = int, int  # or whatever the ABC is for int-like numbers

In fact, since as it stands namedtuple is called "namedtuple" in lowercase, we could just add a camel-cased "NamedTuple" to collections instead of using the NewNamedTuple name or whatever. Or, and I haven't done any research on this, it may also be possible to implement NewNamedTuple with backwards compatibility so it can also be used in the traditional way.

-- Carl Johnson

From cmjohnson.mailinglist at gmail.com Fri Mar 4 09:32:13 2011
From: cmjohnson.mailinglist at gmail.com (Carl M. Johnson)
Date: Thu, 3 Mar 2011 22:32:13 -1000
Subject: [Python-ideas] Different interface for namedtuple?
In-Reply-To: 
References: 
Message-ID: 

For those interested here's a working implementation, minus type checking but with support for the kwargs of namedtuple:

>>> from collections import namedtuple
>>>
>>> # The custom dictionary
... class member_table:
...     def __init__(self):
...         self.member_names = []
...
...     def __setitem__(self, key, value):
...         self.member_names.append(key)
...
...     def __getitem__(self, key):
...         self.member_names.append(key)
...
>>> # The metaclass
... class TupleNamer(type):
...     @classmethod
...     def __prepare__(metacls, name, bases, **kwargs):
...         return member_table()
...
...     def __new__(cls, name, bases, clsdict, **kwargs):
...         # So that the MetaClass is inheritable, don't apply
...         # to things without a base
...         if not bases:
...             return type.__new__(cls, name, bases, {})
...         # The first two things in member_names are always
...         # "__name__" and "__module__", so don't pass those on
...         return namedtuple(name, ' '.join(clsdict.member_names[2:]), **kwargs)
...
...     def __init__(cls, name, bases, classdict, **kwargs):
...         type.__init__(cls, name, bases, classdict)
...
>>> class NamedTuple(metaclass=TupleNamer): pass
...
>>> class Point(NamedTuple, rename=True): x, y, x
...
>>> Point(1, 2, 3)
Point(x=1, y=2, _2=3)

From larry at hastings.org Fri Mar 4 10:17:11 2011
From: larry at hastings.org (Larry Hastings)
Date: Fri, 04 Mar 2011 01:17:11 -0800
Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...)
In-Reply-To: <32506B21-8CED-421F-8BC9-2228AA6BFFB5@gmail.com>
References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D70025E.9020503@stoneleaf.us> <4D7018D5.9060701@canterbury.ac.nz> <4D7020A1.8080702@stoneleaf.us> <4D703007.7050701@canterbury.ac.nz> <32506B21-8CED-421F-8BC9-2228AA6BFFB5@gmail.com>
Message-ID: <4D70AE17.2080309@hastings.org>

On 03/03/2011 05:59 PM, Raymond Hettinger discussed a possible language change (but later made it clear he wasn't actually proposing it):

> If there were going to be only one syntax change for Python 3.3, why
> not use it for something that adds a lot more expressive power:
>
> x = a!name # x = getattr(a, name)
>
> That bit of syntactic sugar might greatly expand our minds when it
> comes to dynamically creating and accessing attributes.

We ran that up the ol' flagpole back in February 2007: http://mail.python.org/pipermail/python-dev/2007-February/071040.html

It was ultimately rejected: http://mail.python.org/pipermail/python-dev/2007-February/071107.html

Dynamic attribute access is rare in real-world code. So folks who need it are currently well-served by getattr(). I think it's a bad idea to add new syntax that adds no new functionality and which we already know will be rarely used. Why manufacture dusty corners of the language? If one felt getattr() was too clumsy or ugly, one could use the attrview() class as proposed in that thread:

aview = attrview(a)
aview[name]  # getattr(a, name)

That's very pretty and requires no new syntax. As Raymond himself said just a few messages ago:

> If you're going to change the language syntax, make it count and do something cool :-)

/larry/

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ncoghlan at gmail.com Fri Mar 4 10:46:54 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Fri, 4 Mar 2011 19:46:54 +1000
Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...)
In-Reply-To: <4D7058FE.4050804@canterbury.ac.nz>
References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D70025E.9020503@stoneleaf.us> <4D7018D5.9060701@canterbury.ac.nz> <4D7020A1.8080702@stoneleaf.us> <4D703007.7050701@canterbury.ac.nz> <32506B21-8CED-421F-8BC9-2228AA6BFFB5@gmail.com> <4D7058FE.4050804@canterbury.ac.nz>
Message-ID: 

On Fri, Mar 4, 2011 at 1:14 PM, Greg Ewing wrote:
> It actually annoys me quite a lot every time I bump into
> something like this, because the language virtually forces
> a DRY violation on me that's impossible, or at least
> extremely awkward, to avoid.
>
> Python is generally very good at not reserving special
> powers for itself, but this is one area where there is
> a mechanism (for defining a thing that knows its own name)
> that works in certain specific cases (def and class) but
> is not open to the programmer for easy use in a general way.
> That strikes me as a wart.

It's probably worth reminding people of Steven Bethard's attempt at addressing this issue several years back: PEP 359's "make" statement, which was specifically about invoking functions of the form f(name, tuple, namespace) to reduce abuses of class statements for that purpose.

The language has moved on quite a bit since then (e.g. with the addition of abstract base classes, the __prepare__ method in the metaclass protocol and the new instance methods on property objects allowing them to be constructed in stages), but the basic question of how to construct arbitrary objects with an "official" name still doesn't have a great answer.

To hit a few highlights from the thread:

- '=' has its target on the left and the source expression on the right. Be cautious in proposing it be used any other way.
- 'as' has its target on the right and something indirectly related to its source on the left. Be cautious in proposing it be used any other way.
- I agree with Raymond that new syntax needs serious justification that this discussion hasn't uncovered yet (see [1] from me and [2] from the C# design team). Something comparable in power to PEP 359 would get closer to the threshold needed, since it would eliminate a number of uses of advanced metaclass hackery in favour of comparatively straightforward function invocations.

Cheers, Nick.

[1] http://www.boredomandlaziness.org/2011/02/justifying-python-language-changes.html
[2] http://blogs.msdn.com/b/ericgu/archive/2004/01/12/57985.aspx

-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From raymond.hettinger at gmail.com Fri Mar 4 11:15:29 2011
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Fri, 4 Mar 2011 02:15:29 -0800
Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...)
In-Reply-To: <4D70AE17.2080309@hastings.org>
References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D70025E.9020503@stoneleaf.us> <4D7018D5.9060701@canterbury.ac.nz> <4D7020A1.8080702@stoneleaf.us> <4D703007.7050701@canterbury.ac.nz> <32506B21-8CED-421F-8BC9-2228AA6BFFB5@gmail.com> <4D70AE17.2080309@hastings.org>
Message-ID: <112B4DAD-00C0-45E9-AB94-0B2A3CD9A890@gmail.com>

On Mar 4, 2011, at 1:17 AM, Larry Hastings wrote:
>
> On 03/03/2011 05:59 PM, Raymond Hettinger discussed a possible language change (but later made it clear he wasn't actually proposing it):
>>
>> If there were going to be only one syntax change for Python 3.3, why not use it for something that adds a lot more expressive power:
>>
>> x = a!name # x = getattr(a, name)
>>
>> That bit of syntactic sugar might greatly expand our minds when it comes to dynamically creating and accessing attributes.
> > We ran that up the ol' flagpole back in February 2007:

IIRC, the idea for a __getattr__ syntax was favorably received at first, but it then drowned in a sea of syntax bikeshedding which precluded any serious discussion of use cases and benefits. Also remember that not all dead proposals have to stay dead. When generator expressions were first proposed, the PEP was rejected. The same was true for generator exceptions and for pushing data into running generators, yet these were ultimately accepted in the form of throw() and send(). For expressive power, I think Nick is on the right track by reviving the discussion about the make-statement.

Raymond

From p.f.moore at gmail.com Fri Mar 4 12:37:48 2011
From: p.f.moore at gmail.com (Paul Moore)
Date: Fri, 4 Mar 2011 11:37:48 +0000
Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...)
In-Reply-To: <112B4DAD-00C0-45E9-AB94-0B2A3CD9A890@gmail.com>
References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D70025E.9020503@stoneleaf.us> <4D7018D5.9060701@canterbury.ac.nz> <4D7020A1.8080702@stoneleaf.us> <4D703007.7050701@canterbury.ac.nz> <32506B21-8CED-421F-8BC9-2228AA6BFFB5@gmail.com> <4D70AE17.2080309@hastings.org> <112B4DAD-00C0-45E9-AB94-0B2A3CD9A890@gmail.com>
Message-ID: 

On 4 March 2011 10:15, Raymond Hettinger wrote:
> Also remember that not all dead proposals have to stay dead.

"It's pinin' for the fjords!"

> For expressive power, I think Nick is on the right track by reviving the discussion about the
> make-statement.

+1
Paul.

From solipsis at pitrou.net Fri Mar 4 12:59:15 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Fri, 4 Mar 2011 12:59:15 +0100
Subject: [Python-ideas] Different interface for namedtuple?
References: 
Message-ID: <20110304125915.6b779cf1@pitrou.net>

On Thu, 3 Mar 2011 22:32:13 -1000 "Carl M. Johnson" wrote:
> >>>
> >>> # The custom dictionary
> ... class member_table:
> ...     def __init__(self):
> ...         self.member_names = []
> ...
> ...     def __setitem__(self, key, value):
> ...         self.member_names.append(key)
> ...
> ...     def __getitem__(self, key):
> ...         self.member_names.append(key)

IMO, this looks like too much magic for a stdlib class. Especially the __getitem__, which will remember any reference made from within the class definition, at the risk of surprising behaviour. Also, your notation is actually less powerful, since you can't define anything other than members with it (you can't add any methods or properties to your namedtuple-derived class, except by subclassing it even further).

Regards
Antoine.

From jsbueno at python.org.br Fri Mar 4 14:15:41 2011
From: jsbueno at python.org.br (Joao S. O. Bueno)
Date: Fri, 4 Mar 2011 10:15:41 -0300
Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...)
In-Reply-To: <3D29D5C0-9A3B-4927-AD1B-DDAE4486D540@gmail.com>
References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D70025E.9020503@stoneleaf.us> <4D7018D5.9060701@canterbury.ac.nz> <4D7020A1.8080702@stoneleaf.us> <4D703007.7050701@canterbury.ac.nz> <32506B21-8CED-421F-8BC9-2228AA6BFFB5@gmail.com> <4D7058FE.4050804@canterbury.ac.nz> <3D29D5C0-9A3B-4927-AD1B-DDAE4486D540@gmail.com>
Message-ID: 

On Fri, Mar 4, 2011 at 2:24 AM, Raymond Hettinger wrote:
>
> To change the name of the class, you still have to do a global substitution,
> hitting the definition, every invocation, and its appearance in docs. IOW,
> the class name is not a piece of knowledge that can be abstracted away.
> Another software engineering principle, the "Rule of Three" suggests that
> the proposed abstraction doesn't meet the minimum threshold of usefulness.
> Your idea may be valid and useful in other contexts, but named tuples aren't
> a good motivating use case, IMO.
Besides named tuples, I'd like to point out that the lack of such a mechanism has already made its way into other libraries as things that are hard to read, and generally used without a full understanding of what is happening: exactly the creation of fields for ORM-style frameworks like we already have in Django and App Engine. So, named tuples are far from being the only or, IMHO, the most relevant use case for such a feature.

js -><-

> > Raymond
> > P.S. The current approach to assignments has the virtue that it is readable
> > to people coming from other languages. The decorator approach creates a
> > piece of Python arcana that would only be decipherable to a smaller number
> > of people.

I am not arguing necessarily for the decorator approach. But I think it is relevant to have a syntactic language feature that allows the creation of general objects that are aware of their "primary" names.

js -><-

From guido at python.org Fri Mar 4 18:36:03 2011
From: guido at python.org (Guido van Rossum)
Date: Fri, 4 Mar 2011 09:36:03 -0800
Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...)
In-Reply-To: 
References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D70025E.9020503@stoneleaf.us> <4D7018D5.9060701@canterbury.ac.nz> <4D7020A1.8080702@stoneleaf.us> <4D703007.7050701@canterbury.ac.nz> <32506B21-8CED-421F-8BC9-2228AA6BFFB5@gmail.com> <4D7058FE.4050804@canterbury.ac.nz> <3D29D5C0-9A3B-4927-AD1B-DDAE4486D540@gmail.com>
Message-ID: 

On Fri, Mar 4, 2011 at 5:15 AM, Joao S. O. Bueno wrote:
> I am not arguing necessarily for the decorator approach. But I think
> it is relevant to have a syntactic language feature that allows the
> creation of general objects that are aware of their "primary" names.

I think that's a reasonable thought to pursue further.
If I could write

class Person(Model):
    name = StringProperty()
    age = IntegerProperty()

without Model needing to have a custom metaclass that goes over the __dict__ and tells each Property instance its name, I would take it.

In other news, exploring the pros and cons of x.(foo) and its alternatives and variations would also be a fine idea. I find myself writing getattr() a lot. Although OTOH I also find myself telling people in code reviews a lot how they can avoid the need for using getattr(). And OT3H my most common use of getattr() is probably the 3-argument variant, as a best-practice alternative to using hasattr().

-- 
--Guido van Rossum (python.org/~guido)

From ethan at stoneleaf.us Fri Mar 4 20:06:23 2011
From: ethan at stoneleaf.us (Ethan Furman)
Date: Fri, 04 Mar 2011 11:06:23 -0800
Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...)
In-Reply-To: <4D70327A.10108@canterbury.ac.nz>
References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D70025E.9020503@stoneleaf.us> <4D701C22.50002@canterbury.ac.nz> <4D70235A.1070608@stoneleaf.us> <4D70327A.10108@canterbury.ac.nz>
Message-ID: <4D71382F.5040303@stoneleaf.us>

Greg Ewing wrote:
> Ethan Furman wrote:
>> Do we even need the def?
>>
>> class Person(object):
>>     name as Str()
>>     address as Str()
>
> I think that would be paring it down a bit too far -- there's
> not much left to suggest that a name is being bound. At
> least 'def' implies that something is being defined rather
> than just referred to.
>
>> we could also add more magic and make the assigned name an available
>> attribute to the function when it's called.
>
> I don't understand. Attribute of what?

Not entirely sure... maybe something along the lines of __name__, etc., like __var_name__, which is either the target of the lhs, or None.
class Person():
    name = Str()
    address = Str()

def Str():
    whatever = __var_name__  # 'name' in first call, 'address' in second
    ...

~Ethan~

From jimjjewett at gmail.com Fri Mar 4 21:56:49 2011
From: jimjjewett at gmail.com (Jim Jewett)
Date: Fri, 4 Mar 2011 15:56:49 -0500
Subject: [Python-ideas] names as objects [was: The Descriptor Protocol...]
Message-ID: 

On Wed, Mar 2, 2011 at 4:05 AM, Martin Chilvers wrote:
...
> def __getattribute__(self, key):
>     "Emulate type_getattro() in Objects/typeobject.c"
>     v = object.__getattribute__(self, key)
>     if hasattr(v, '__get__'):
>         return v.__get__(None, self)
>     return v
...
> why isn't the 'key' argument passed through
> to the '__get__' (and similarly, '__set__') methods?
...
> class Person(object):
>     name    = Str()

This thread (and child threads) raised similar examples, such as named properties, where varname=something('varname', ...). Is this really *only* about names, or are there other ways we might want to decorate an attribute? I assume that examples like Str() suggest a desire for more information -- and possibly more persistent information -- than just the name.

Would your use cases (and the similar cases in the attribute decoration thread) be solved by making the name bindings themselves into something more than pointers? Using "<-" for "the name, not the value", would the following be helpful?

class Foo:
    x = 47
    x<-type = int  # But is this enforced? Same questions
                   # as with decorators.
    print(x<-__name__)  # __name__ probably is a little magic

-jJ

From arnodel at gmail.com Fri Mar 4 22:34:07 2011
From: arnodel at gmail.com (Arnaud Delobelle)
Date: Fri, 4 Mar 2011 21:34:07 +0000
Subject: [Python-ideas] Different interface for namedtuple?
In-Reply-To: 
References: 
Message-ID: <265ECDA0-F91F-488F-821E-596EB84C4951@gmail.com>

On 4 Mar 2011, at 07:17, Carl M.
Johnson wrote:
> I've been following the discussion on the list about decorators for
> assignments and whatnot, and while I do think it would be interesting
> if there were some way to abstract what happens in the creation of
> classes and defs into a more generalizable structure, for the
> immediate use case of NamedTuple, wouldn't an interface something like
> this solve the DRY problem? --
>
> class Point(NewNamedTuple): x, y
>
> This interface couldn't have worked in Python 2.6 when namedtuple was
> introduced, but using the __prepare__ statement in Python 3, it's
> trivial to implement. As a bonus, NewNamedTuple could also support
> type checking with the following syntax:
>
> class Point(NewNamedTuple): x, y = int, int #or whatever the ABC is
> for int-like numbers
>
> In fact, since as it stands namedtuple is called "namedtuple" in
> lowercase, we could just camel case "NamedTuple" to collections
> instead of using the NewNamedTuple or whatever. Or, and I haven't done
> any research on this, it may also be possible to implement
> NewNamedTuple with backwards compatibility so it can also be used in
> the traditional way.

It reminds me that a while ago (I think at the time of python 2.4), before the introduction of namedtuple, I had my own implementation of a "struct" decorator to create named tuples that enforced DRY and didn't require any metaclass magic. It worked as follows:

>>> @struct
... def Point(x=0, y=0):
...     "Two dimensional point with x and y coordinates"
...     return x, y
...
>>> p = Point(1, 2)
>>> p
Point(1, 2)
>>> tuple(p)
(1, 2)
>>> p.x, p.y
(1, 2)
>>> type(p)
>>> Point.__doc__
'Two dimensional point with x and y coordinates'
>>> Point(y=3)
(0, 3)

As you can see it abused "def", which at the time was the only way to create decorable objects that were aware of their own name.
It was implemented as follows:

def struct(f):
    classname = f.__name__
    prop_names = f.func_code.co_varnames[:f.func_code.co_argcount]
    def _new(cls, *args, **kwargs):
        return tuple.__new__(cls, f(*args, **kwargs))
    def _repr(self):
        return '%s%s' % (type(self).__name__, tuple(self))
    def prop(i):
        return property(lambda self: self[i])
    attrs = {
        '__slots__': (),
        '__new__': _new,
        '__repr__': _repr,
        '__doc__': f.__doc__
    }
    attrs.update((name, prop(i)) for i, name in enumerate(prop_names))
    return type(classname, (tuple,), attrs)

-- Arnaud

From cmjohnson.mailinglist at gmail.com Sat Mar 5 05:24:04 2011
From: cmjohnson.mailinglist at gmail.com (Carl M. Johnson)
Date: Fri, 4 Mar 2011 18:24:04 -1000
Subject: [Python-ideas] Different interface for namedtuple?
In-Reply-To: <265ECDA0-F91F-488F-821E-596EB84C4951@gmail.com>
References: <265ECDA0-F91F-488F-821E-596EB84C4951@gmail.com>
Message-ID: 

I tend to agree that the trouble with my proposed interface is that it says "class" but you can't do normal class sorts of things like create methods for the namedtuple subclass. There's also the inelegance that NamedTuple is treated as a special case by the TupleNamer metaclass, different from subclasses of NamedTuple. There's a similar issue with ORMs where normally a subclass of Table is a description of a table and its fields, but sometimes you want to actually create a new class that's like Table and you can't get that done simply by subclassing.

A lot of these problems could be addressed by something like the proposed "make" keyword. Imagine if the interfaces for NamedTuple and Table were:

make NamedTuple(rename=True) Point:
    x, y

make Table() Author:
    firstname, lastname, DOB = Str(), Str(), Date()

Those strike me as nice enough declarative syntaxes. But working out exactly how a make statement would work tends to give me a headache. I suppose at a minimum, a making-object should have a __prepare__ method and an __init__ method (or perhaps an __enter__ method, and an __exit__ method).
But the more one thinks about all the aspects that would be nice for a good making-object, the more confusing it becomes...

-- Carl Johnson

From raymond.hettinger at gmail.com Sat Mar 5 06:02:02 2011
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Fri, 4 Mar 2011 21:02:02 -0800
Subject: [Python-ideas] Different interface for namedtuple?
In-Reply-To: 
References: <265ECDA0-F91F-488F-821E-596EB84C4951@gmail.com>
Message-ID: 

On Mar 4, 2011, at 8:24 PM, Carl M. Johnson wrote:
> Imagine if the interfaces for NamedTuple and
> Table were:
>
> make NamedTuple(rename=True) Point:
>     x, y
>
> make Table() Author:
>     firstname, lastname, DOB = Str(), Str(), Date()

To my eyes, that doesn't even look like Python anymore. It looks a little bit like a function annotation that lost its "->".

I think this thread has completely lost perspective. The problem being solved is just a minor inelegance in the signature of a factory function. That signature would be substantially the same in most commonly used languages. It's not a deep problem crying out for a syntactic atrocity to solve it.

Raymond

P.S. It's "namedtuple", not NamedTuple. The latter suggests a class rather than a factory function.

From cmjohnson.mailinglist at gmail.com Sat Mar 5 07:00:42 2011
From: cmjohnson.mailinglist at gmail.com (Carl M. Johnson)
Date: Fri, 4 Mar 2011 20:00:42 -1000
Subject: [Python-ideas] Different interface for namedtuple?
In-Reply-To: 
References: <265ECDA0-F91F-488F-821E-596EB84C4951@gmail.com>
Message-ID: 

On Fri, Mar 4, 2011 at 7:02 PM, Raymond Hettinger wrote:
> P.S. It's "namedtuple", not NamedTuple. The latter suggests
> a class rather than a factory function.

"NamedTuple" is my name for a hypothetical replacement for namedtuple that uses metaclasses rather than being a factory function.
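[For reference, the transcript-style implementation earlier in this thread can be condensed into a plain module. This is a sketch of the idea under discussion, not a stdlib API; the MemberTable spelling and the dunder-name filtering are my own additions (the original relied on the first two recorded names always being "__name__" and "__module__").]

```python
from collections import namedtuple


class MemberTable(dict):
    """Class-body namespace that records each name as it is used."""

    def __init__(self):
        super().__init__()
        self.member_names = []

    def __setitem__(self, key, value):
        if key not in self.member_names:
            self.member_names.append(key)
        super().__setitem__(key, value)

    def __getitem__(self, key):
        # A bare "x, y" statement in the class body looks the names up;
        # record them and return a dummy value instead of raising KeyError.
        if key not in self.member_names:
            self.member_names.append(key)
        return None


class TupleNamer(type):
    @classmethod
    def __prepare__(metacls, name, bases, **kwargs):
        return MemberTable()

    def __new__(cls, name, bases, clsdict, **kwargs):
        if not bases:
            # This is the NamedTuple base class itself, not a user subclass.
            return type.__new__(cls, name, bases, {})
        # Drop dunder names written by the class machinery (__module__,
        # __qualname__, ...); what remains are the recorded field names.
        fields = [n for n in clsdict.member_names if not n.startswith('__')]
        return namedtuple(name, fields, **kwargs)


class NamedTuple(metaclass=TupleNamer):
    pass


class Point(NamedTuple):
    x, y


p = Point(1, 2)
print(p)  # Point(x=1, y=2)
```

[Note that, as Antoine points out above, the resulting Point class is a plain namedtuple class: it does not actually inherit from NamedTuple, so methods defined in the class body would be lost.]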
From g.rodola at gmail.com Sat Mar 5 14:21:32 2011
From: g.rodola at gmail.com (Giampaolo Rodolà)
Date: Sat, 5 Mar 2011 14:21:32 +0100
Subject: [Python-ideas] @run_as_thread decorator
Message-ID: 

>>> import time, threading
>>>
>>> @threading.run_as_thread
... def foo():
...     time.sleep(100)
...     return 1
...
>>> t = foo()
>>> t.isAlive()
True
>>> t.join()
>>> t.isAlive()
False
>>>

The same thing could be done for the multiprocessing module. Would this be acceptable for inclusion?

--- Giampaolo
http://code.google.com/p/pyftpdlib/
http://code.google.com/p/psutil/

From ncoghlan at gmail.com Sat Mar 5 15:10:56 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 6 Mar 2011 00:10:56 +1000
Subject: [Python-ideas] @run_as_thread decorator
In-Reply-To: 
References: 
Message-ID: 

On Sat, Mar 5, 2011 at 11:21 PM, Giampaolo Rodolà wrote:
> >>> import time, threading
> >>>
> >>> @threading.run_as_thread
> ... def foo():
> ...     time.sleep(100)
> ...     return 1
> ...
> >>> t = foo()
> >>> t.isAlive()
> True
> >>> t.join()
> >>> t.isAlive()
> False
> >>>
>
> The same thing could be done for multiprocessing module.
> Would this be acceptable for inclusion?

So basically:

def run_as_thread(f):
    @functools.wraps(f)
    def wrapped(*args, **kwds):
        t = threading.Thread(target=f, args=args, kwargs=kwds)
        t.start()
        return t
    return wrapped

Something like that would make defining worker threads *really* easy. A similar idea may make sense as an addition to the concurrent.futures.Executor ABC. For example:

def autosubmit(self):
    def decorator(f):
        @functools.wraps(f)
        def wrapped(*args, **kwds):
            return self.submit(f, *args, **kwds)
        return wrapped
    return decorator

Cheers, Nick.

-- Nick Coghlan | ncoghlan at gmail.com |
Brisbane, Australia

From solipsis at pitrou.net Sat Mar 5 15:50:10 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat, 5 Mar 2011 15:50:10 +0100
Subject: [Python-ideas] @run_as_thread decorator
References: 
Message-ID: <20110305155010.46b224d8@pitrou.net>

On Sun, 6 Mar 2011 00:10:56 +1000 Nick Coghlan wrote:
>
> So basically:
>
> def run_as_thread(f):
>     @functools.wraps(f)
>     def wrapped(*args, **kwds):
>         t = threading.Thread(target=f, args=args, kwargs=kwds)
>         t.start()
>         return t
>     return wrapped
>
> Something like that would make defining worker threads *really* easy.

I don't really agree. First, as you guess, there's already a rather obvious one-liner:

threading.Thread(target=f).start()

Second, any decorator that implicitly spawns a thread is a very bad idea (especially when used at module level...). I'm rather opposed to this, it's a useless addition to the API with no real point. Calling the Thread() constructor works basically ok.

Regards
Antoine.

From masklinn at masklinn.net Sat Mar 5 15:26:37 2011
From: masklinn at masklinn.net (Masklinn)
Date: Sat, 5 Mar 2011 15:26:37 +0100
Subject: [Python-ideas] @run_as_thread decorator
In-Reply-To: 
References: 
Message-ID: 

On 2011-03-05, at 14:21 , Giampaolo Rodolà wrote:
>>>> import time, threading
>>>>
>>>> @threading.run_as_thread
> ... def foo():
> ...     time.sleep(100)
> ...     return 1
> ...
>>>> t = foo()
>>>> t.isAlive()
> True
>>>> t.join()
>>>> t.isAlive()
> False
>>>>
>
> The same thing could be done for multiprocessing module.
> Would this be acceptable for inclusion?

That looks good, though I think run_as_thread needs to take arguments:

* daemonifying needs to be performed *before* the thread is started, so it needs at least one argument `daemon=False` (which runs a daemonified thread if set to true)
* maybe run_as_thread could take a second argument `start=True` to know whether the function should generate a started thread or a thread to start?
Not sure about that one, are there situations where you'd *need* a yet-to-be-started thread apart from daemonification?

* threads can take names, not sure if this is often used, should it be handled by run_as_thread? This is not as important as daemon because I think thread names can be set after start()
* what are the semantics of the function's return value? None and it's basically ignored (as with regular target semantics)?

From g.brandl at gmx.net Sat Mar 5 16:56:21 2011
From: g.brandl at gmx.net (Georg Brandl)
Date: Sat, 05 Mar 2011 16:56:21 +0100
Subject: [Python-ideas] @run_as_thread decorator
In-Reply-To: <20110305155010.46b224d8@pitrou.net>
References: <20110305155010.46b224d8@pitrou.net>
Message-ID: 

On 05.03.2011 15:50, Antoine Pitrou wrote:
> On Sun, 6 Mar 2011 00:10:56 +1000
> Nick Coghlan wrote:
>>
>> So basically:
>>
>> def run_as_thread(f):
>>     @functools.wraps(f)
>>     def wrapped(*args, **kwds):
>>         t = threading.Thread(target=f, args=args, kwargs=kwds)
>>         t.start()
>>         return t
>>     return wrapped
>>
>> Something like that would make defining worker threads *really* easy.
>
> I don't really agree.
> First, as you guess, there's already a rather obvious one-liner:
>
> threading.Thread(target=f).start()
>
> Second, any decorator that implicitly spawns a thread is a very bad
> idea (especially when used at module level...).

It doesn't spawn the thread on definition, but at function call time. That's not as bad, but I agree that it is too magical for the stdlib.

> I'm rather opposed to this, it's a useless addition to the API with no
> real point. Calling the Thread() constructor works basically ok.

Problem is, the one-liner doesn't give you a reference to the Thread object.
Georg From solipsis at pitrou.net Sat Mar 5 17:08:16 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 5 Mar 2011 17:08:16 +0100 Subject: [Python-ideas] @run_as_thread decorator References: <20110305155010.46b224d8@pitrou.net> Message-ID: <20110305170816.6e5c6c28@pitrou.net> On Sat, 05 Mar 2011 16:56:21 +0100 Georg Brandl wrote: > > > I'm rather opposed to this, it's a useless addition to the API with no > > real point. Calling the Thread() constructor works basically ok. > > Problem is, the one-liner doesn't give you a reference to the Thread object. It does if you don't call start() :) From g.brandl at gmx.net Sat Mar 5 17:15:37 2011 From: g.brandl at gmx.net (Georg Brandl) Date: Sat, 05 Mar 2011 17:15:37 +0100 Subject: [Python-ideas] @run_as_thread decorator In-Reply-To: <20110305170816.6e5c6c28@pitrou.net> References: <20110305155010.46b224d8@pitrou.net> <20110305170816.6e5c6c28@pitrou.net> Message-ID: On 05.03.2011 17:08, Antoine Pitrou wrote: > On Sat, 05 Mar 2011 16:56:21 +0100 > Georg Brandl wrote: >> >> > I'm rather opposed to this, it's a useless addition to the API with no >> > real point. Calling the Thread() constructor works basically ok. >> >> Problem is, the one-liner doesn't give you a reference to the Thread object. > > It does if you don't call start() :) Then it's not a one-liner anymore :) Georg From solipsis at pitrou.net Sat Mar 5 17:32:11 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 5 Mar 2011 17:32:11 +0100 Subject: [Python-ideas] @run_as_thread decorator References: <20110305155010.46b224d8@pitrou.net> <20110305170816.6e5c6c28@pitrou.net> Message-ID: <20110305173211.1496ddfa@pitrou.net> On Sat, 05 Mar 2011 17:15:37 +0100 Georg Brandl wrote: > On 05.03.2011 17:08, Antoine Pitrou wrote: > > On Sat, 05 Mar 2011 16:56:21 +0100 > > Georg Brandl wrote: > >> > >> > I'm rather opposed to this, it's a useless addition to the API with no > >> > real point. Calling the Thread() constructor works basically ok. 
> >> > >> Problem is, the one-liner doesn't give you a reference to the Thread object.
> >
> > It does if you don't call start() :)
>
> Then it's not a one-liner anymore :)

But neither is the original proposal!

@run_in_thread
def foo():
    ...

t = foo()

From g.brandl at gmx.net Sat Mar 5 17:43:53 2011
From: g.brandl at gmx.net (Georg Brandl)
Date: Sat, 05 Mar 2011 17:43:53 +0100
Subject: [Python-ideas] @run_as_thread decorator
In-Reply-To: <20110305173211.1496ddfa@pitrou.net>
References: <20110305155010.46b224d8@pitrou.net> <20110305170816.6e5c6c28@pitrou.net> <20110305173211.1496ddfa@pitrou.net>
Message-ID: 

On 05.03.2011 17:32, Antoine Pitrou wrote:
> On Sat, 05 Mar 2011 17:15:37 +0100
> Georg Brandl wrote:
>> On 05.03.2011 17:08, Antoine Pitrou wrote:
>> > On Sat, 05 Mar 2011 16:56:21 +0100
>> > Georg Brandl wrote:
>> >>
>> >> > I'm rather opposed to this, it's a useless addition to the API with no
>> >> > real point. Calling the Thread() constructor works basically ok.
>> >>
>> >> Problem is, the one-liner doesn't give you a reference to the Thread object.
>> >
>> > It does if you don't call start() :)
>>
>> Then it's not a one-liner anymore :)
>
> But neither is the original proposal!
>
> @run_in_thread
> def foo():
>     ...
>
> t = foo()

It's an asymptotic one-liner: every invocation after the first one is a true one-liner :)

Georg

From jnoller at gmail.com Sat Mar 5 18:16:40 2011
From: jnoller at gmail.com (Jesse Noller)
Date: Sat, 5 Mar 2011 12:16:40 -0500
Subject: [Python-ideas] @run_as_thread decorator
In-Reply-To: 
References: 
Message-ID: 

On Sat, Mar 5, 2011 at 8:21 AM, Giampaolo Rodolà wrote:
> >>> import time, threading
> >>>
> >>> @threading.run_as_thread
> ... def foo():
> ...     time.sleep(100)
> ...     return 1
> ...
> >>> t = foo()
> >>> t.isAlive()
> True
> >>> t.join()
> >>> t.isAlive()
> False
> >>>
>
> The same thing could be done for multiprocessing module.
> Would this be acceptable for inclusion?
> > I've long wanted to put something into the stdlib like this, but as others in the thread have pointed out - there's some semantics that remain to be hashed out and the behavior is dangerous (imo), and magical to have in the stdlib right now. In this case, I would recommend building out a library that contains these decorators (both threads and processes) building from the futures (concurrent.futures.Executor ABC) library as possible, and let's see how it pans out. I've struggled with really liking/wanting this and the fact that it's dangerous, and surprising. jesse From debatem1 at gmail.com Sat Mar 5 18:48:55 2011 From: debatem1 at gmail.com (geremy condra) Date: Sat, 5 Mar 2011 09:48:55 -0800 Subject: [Python-ideas] @run_as_thread decorator In-Reply-To: References: Message-ID: On Sat, Mar 5, 2011 at 9:16 AM, Jesse Noller wrote: > On Sat, Mar 5, 2011 at 8:21 AM, Giampaolo Rodol? wrote: >> ?>>> import time, threading >> ?>>> >> ?>>> @threading.run_as_thread >> ?... def foo(): >> ?... ? ? time.sleep(100) >> ?... ? ? return 1 >> ?... >> ?>>> t = foo() >> ?>>> t.isAlive() >> ?True >> ?>>> t.join() >> ?>>> t.isAlive() >> ?False >> ?>>> >> >> The same thing could be done for multiprocessing module. >> Would this be acceptable for inclusion? >> >> > > I've long wanted to put something into the stdlib like this, but as > others in the thread have pointed out - there's some semantics that > remain to be hashed out and the behavior is dangerous (imo), and > magical to have in the stdlib right now. > > In this case, I would recommend building out a library that contains > these decorators (both threads and processes) building from the > futures (concurrent.futures.Executor ABC) library as possible, and > let's see how it pans out. I've struggled with really liking/wanting > this and the fact that it's dangerous, and surprising. > > jesse I've personally written this about five times. 
Having said that, what I'd really like would be a context manager that executed the contained block of code in a new thread or process, and I haven't gotten that to work the way I'd like so far. Geremy Condra From g.rodola at gmail.com Sat Mar 5 19:00:52 2011 From: g.rodola at gmail.com (=?ISO-8859-1?Q?Giampaolo_Rodol=E0?=) Date: Sat, 5 Mar 2011 19:00:52 +0100 Subject: [Python-ideas] @run_as_thread decorator In-Reply-To: References: Message-ID: I agree it should be possible to pass the same arguments of the original constructor, in fact this is the code I use across the various "private" projects I work on: def run_as_thread(group=None, name=None, verbose=None, daemon=None): """Decorator to run a callable in a thread returning a thread instance. >>> @run_as_thread ... def foo(): ... time.sleep(100) ... return "done" ... >>> t = foo() >>> t.is_alive() True >>> t.join() >>> t.is_alive() False """ def outer(fun): def inner(*args, **kwargs): t = threading.Thread(target=fun, args=args, kwargs=kwargs, group=group, name=name, verbose=verbose) t.start() return t return inner if hasattr(group, '__call__'): # defined as @run_as_thread rather than @run_as_thread(arg=...) fun = group group = None outer = outer(fun) return outer I don't know whether it is a good idea to provide such a thing natively, but I can't even figure out what exactly is wrong/weird with this exactly. This is why I opened this discussion, basically. =) --- Giampaolo http://code.google.com/p/pyftpdlib/ http://code.google.com/p/psutil/ 2011/3/5 Masklinn : > On 2011-03-05, at 14:21 , Giampaolo Rodol? wrote: >>>>> import time, threading >>>>> >>>>> @threading.run_as_thread >> ... def foo(): >> ... ? ? time.sleep(100) >> ... ? ? return 1 >> ... >>>>> t = foo() >>>>> t.isAlive() >> True >>>>> t.join() >>>>> t.isAlive() >> False >>>>> >> >> The same thing could be done for multiprocessing module. >> Would this be acceptable for inclusion? 
> That looks good, though I think run_as_thread needs to take arguments: > * daemonifying needs to be performed *before* the thread is started, so it needs at least one argument `daemon=False` (which runs a daemonified thread if set to true) > * maybe run_as_thread could take a second argument `start=True` to know whether the function should generate a started thread or a thread to start? Not sure about that one, are there situations where you'd *need* a yet-to-be-started thread apart from daemonification? > * threads can take names, not sure if this is often used, should it be handled by run_as_thread? This is not as important as daemon because I think thread names can be set after start() > * what are the semantics of the function's return value? None and it's basically ignored (as with regular target semantics)? > > From g.rodola at gmail.com Sat Mar 5 19:04:39 2011 From: g.rodola at gmail.com (=?ISO-8859-1?Q?Giampaolo_Rodol=E0?=) Date: Sat, 5 Mar 2011 19:04:39 +0100 Subject: [Python-ideas] @run_as_thread decorator In-Reply-To: References: Message-ID: 2011/3/5 geremy condra : > I've personally written this about five times. Having said that, what > I'd really like would be a context manager that executed the contained > block of code in a new thread or process, and I haven't gotten that to > work the way I'd like so far. I've been thinking about that as well but... would that even be possible? --- Giampaolo http://code.google.com/p/pyftpdlib/ http://code.google.com/p/psutil/ From solipsis at pitrou.net Sat Mar 5 19:32:40 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 5 Mar 2011 19:32:40 +0100 Subject: [Python-ideas] @run_as_thread decorator References: Message-ID: <20110305193240.5d18bca0@pitrou.net> On Sat, 5 Mar 2011 19:00:52 +0100 Giampaolo Rodolà wrote: > > I don't know whether it is a good idea to provide such a thing > natively, but I can't even figure out what exactly is wrong/weird with > this exactly. 
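Giampaolo's implementation above doubles as a bare decorator and a decorator factory. That dual-use pattern, updated to set `daemon` before `start()` as Masklinn notes is required, can be sketched like this (Python 3; the Python-2-era `verbose` argument is omitted, and the parameter set is illustrative):

```python
import threading

def run_as_thread(func=None, *, daemon=False, name=None):
    # Dual-use decorator sketch: works both bare (@run_as_thread) and
    # with keyword arguments (@run_as_thread(daemon=True)).
    def decorator(fun):
        def wrapper(*args, **kwargs):
            t = threading.Thread(target=fun, args=args, kwargs=kwargs, name=name)
            t.daemon = daemon   # must be set before t.start()
            t.start()
            return t
        return wrapper
    if func is not None:        # used bare: func is the decorated function
        return decorator(func)
    return decorator            # used with arguments: return the real decorator

@run_as_thread(daemon=True, name="worker")
def tick():
    pass
```

The `if func is not None` dance replaces Giampaolo's `hasattr(group, '__call__')` check; keyword-only parameters make the two call forms unambiguous.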
I don't understand what this two line variant brings over the other two-line variant: t = threading.Thread(target=func) t.start() Basically you are proposing to complicate the API for no real benefit except that it "feels good". It also makes things harder to learn for beginners since there are two abstractions stacked one over the other. It doesn't sound like a sensible addition. Regards Antoine. From bruce at leapyear.org Sat Mar 5 19:37:02 2011 From: bruce at leapyear.org (Bruce Leban) Date: Sat, 5 Mar 2011 10:37:02 -0800 Subject: [Python-ideas] @run_as_thread decorator In-Reply-To: References: Message-ID: If there's going to be support for turning a function into a thread, I think it should still be useful as a function. Here's how I'd change the original proposal here: >>> @threading.run_as_thread ... def foo(): ... time.sleep(100) ... return 1 ... >>> t = foo() >>> t.result() None >>> t.isAlive() True >>> t.join() >>> t.isAlive() False >>> t.result() 1 >>> foo().result(join=True) 1 >>> foo().result(join=True, timeout=1) None # as with Thread.join, in this case you cannot tell from the return value if # the join happened (unless you know the function cannot return None) -------------- next part -------------- An HTML attachment was scrubbed... URL: From g.rodola at gmail.com Sat Mar 5 20:25:20 2011 From: g.rodola at gmail.com (=?ISO-8859-1?Q?Giampaolo_Rodol=E0?=) Date: Sat, 5 Mar 2011 20:25:20 +0100 Subject: [Python-ideas] @run_as_thread decorator In-Reply-To: <20110305193240.5d18bca0@pitrou.net> References: <20110305193240.5d18bca0@pitrou.net> Message-ID: It is probably a bad idea, I don't know. I'm the first one being skeptical about it. Maybe a real world example can bring some contribution to this discussion since using this kind of approach at module-level doesn't bring real benefits over using threading.Thread(target=func).start(). 
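Bruce's `result()` behavior can be approximated with a small `Thread` subclass that captures the target's return value. This is a sketch, not an existing stdlib API; `Thread` has no `result()` method:

```python
import threading

class ResultThread(threading.Thread):
    # Hypothetical helper: a Thread that remembers what its target returned.
    def __init__(self, target, args=(), kwargs=None):
        super().__init__()
        self._target_fn = target
        self._args = args
        self._kwargs = kwargs or {}
        self._result = None

    def run(self):
        self._result = self._target_fn(*self._args, **self._kwargs)

    def result(self, join=False, timeout=None):
        if join:
            self.join(timeout)
        return self._result   # None until the target returns, as in Bruce's example

t = ResultThread(target=lambda: 1)
t.start()
print(t.result(join=True))   # 1
```

As Bruce's last example shows, a plain `None` default leaves `result()` ambiguous after a timed-out join, which is exactly the gap the sentinel discussion below the proposal is about.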
In my experience, I've found this extremely elegant when I was developing a web application in which I wanted to start a series of long-running tasks involving the db, and didn't care about the function return value, nor did I want to return anything relevant to the user other than a simple "task started - this may take some time" message. The code looked like this: from utils import run_as_thread class Admin: @run_as_thread def refresh_persons(self): ... @run_as_thread def refresh_voyages(self): ... @run_as_thread def refresh_addresses(self): ... > Basically you are proposing to complicate the API for no real benefit > except that it "feels good". I'd say there's a benefit in terms of elegance if this is used in a certain way. On the other hand I understand your complaints, and this probably fits better in a "util" module rather than threading. I like the function return value example proposed by Bruce. Is there a reason why this is not provided by the base Thread class? http://code.activestate.com/recipes/84317/ ...and: http://www.google.com/#sclient=psy&hl=en&q=python+thread+return+value&aq=f&aqi=g1&aql=&oq=&pbx=1&bav=on.2,or.&fp=369c8973645261b8 ...suggest that users tend to require this feature. --- Giampaolo http://code.google.com/p/pyftpdlib/ http://code.google.com/p/psutil/ 2011/3/5 Antoine Pitrou : > On Sat, 5 Mar 2011 19:00:52 +0100 > Giampaolo Rodolà > wrote: >> >> I don't know whether it is a good idea to provide such a thing >> natively, but I can't even figure out what exactly is wrong/weird with >> this exactly. > > I don't understand what this two line variant brings over the other > two-line variant: > > t = threading.Thread(target=func) > t.start() > > Basically you are proposing to complicate the API for no real benefit > except that it "feels good". It also makes things harder to learn for > beginners since there are two abstractions stacked one over the other. > It doesn't sound like a sensible addition. > > Regards > > Antoine. 
> > > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > http://mail.python.org/mailman/listinfo/python-ideas > From solipsis at pitrou.net Sat Mar 5 20:31:18 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 05 Mar 2011 20:31:18 +0100 Subject: [Python-ideas] @run_as_thread decorator In-Reply-To: References: <20110305193240.5d18bca0@pitrou.net> Message-ID: <1299353478.3692.2.camel@localhost.localdomain> Le samedi 05 mars 2011 à 20:25 +0100, Giampaolo Rodolà a écrit : > from utils import run_as_thread > > class Admin: > > @run_as_thread > def refresh_persons(self): > ... That strikes me as a bad idea, because another module or library calling refresh_persons() has no clue that this implicitly starts a separate thread (instead of being a "normal" method). Regards Antoine. From debatem1 at gmail.com Sat Mar 5 20:32:15 2011 From: debatem1 at gmail.com (geremy condra) Date: Sat, 5 Mar 2011 11:32:15 -0800 Subject: [Python-ideas] @run_as_thread decorator In-Reply-To: References: Message-ID: On Sat, Mar 5, 2011 at 10:37 AM, Bruce Leban wrote: > If there's going to be support for turning a function into a thread, I think > it should still be useful as a function. Here's how I'd change the original > proposal here: > >>>> @threading.run_as_thread > ... def foo(): > ...     time.sleep(100) > ...     return 1 > ... >>>> t = foo() >>>> t.result() > None >>>> t.isAlive() > True >>>> t.join() >>>> t.isAlive() > False >>>> t.result() > 1 >>>> foo().result(join=True) > 1 >>>> foo().result(join=True, timeout=1) > None > # as with Thread.join, in this case you cannot tell from the return value if > # the join happened (unless you know the function cannot return None) Is there a reason that a dedicated sentinel value can't be used here? 
Geremy Condra From debatem1 at gmail.com Sat Mar 5 20:36:25 2011 From: debatem1 at gmail.com (geremy condra) Date: Sat, 5 Mar 2011 11:36:25 -0800 Subject: [Python-ideas] @run_as_thread decorator In-Reply-To: <1299353478.3692.2.camel@localhost.localdomain> References: <20110305193240.5d18bca0@pitrou.net> <1299353478.3692.2.camel@localhost.localdomain> Message-ID: On Sat, Mar 5, 2011 at 11:31 AM, Antoine Pitrou wrote: > > Le samedi 05 mars 2011 ? 20:25 +0100, Giampaolo Rodol? a ?crit : >> from utils import run_as_thread >> >> class Admin: >> >> ? ? @run_as_thread >> ? ? def refresh_persons(self): >> ? ? ? ? ... > > That strikes me as a bad idea, because another module or library calling > refresh_persons() has no clue that this implictly starts a separate > thread (instead of being a "normal" method). > I just don't really find this that convincing; you're calling the function, you should probably know what it does. That's doubly true in this case, since you have to code around the fact that it's working in a separate thread if you want to use the return value. Geremy Condra From bruce at leapyear.org Sat Mar 5 20:39:54 2011 From: bruce at leapyear.org (Bruce Leban) Date: Sat, 5 Mar 2011 11:39:54 -0800 Subject: [Python-ideas] @run_as_thread decorator In-Reply-To: <1299353478.3692.2.camel@localhost.localdomain> References: <20110305193240.5d18bca0@pitrou.net> <1299353478.3692.2.camel@localhost.localdomain> Message-ID: On Sat, Mar 5, 2011 at 11:31 AM, Antoine Pitrou wrote: > > That strikes me as a bad idea, because another module or library calling > refresh_persons() has no clue that this implictly starts a separate > thread (instead of being a "normal" method). > > You can always call functions that don't do what you think. 
There's a common pattern for this: def foo(bar): return x(y(z(bar))) @threading.run_as_thread def foo_async(bar): return foo(bar) --- Bruce New Puzzazz newsletter: http://j.mp/puzzazz-news-2011-02 Make your web app more secure: http://j.mp/gruyere-security -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Sat Mar 5 20:44:16 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 05 Mar 2011 20:44:16 +0100 Subject: [Python-ideas] @run_as_thread decorator In-Reply-To: References: <20110305193240.5d18bca0@pitrou.net> <1299353478.3692.2.camel@localhost.localdomain> Message-ID: <1299354256.3692.4.camel@localhost.localdomain> Le samedi 05 mars 2011 ? 11:36 -0800, geremy condra a ?crit : > On Sat, Mar 5, 2011 at 11:31 AM, Antoine Pitrou wrote: > > > > Le samedi 05 mars 2011 ? 20:25 +0100, Giampaolo Rodol? a ?crit : > >> from utils import run_as_thread > >> > >> class Admin: > >> > >> @run_as_thread > >> def refresh_persons(self): > >> ... > > > > That strikes me as a bad idea, because another module or library calling > > refresh_persons() has no clue that this implictly starts a separate > > thread (instead of being a "normal" method). > > > > I just don't really find this that convincing; you're calling the > function, you should probably know what it does. Sure... My point is that the stdlib shouldn't really encourage this, especially when it doesn't subtantially reduce typing. This proposal seems similar in principle to the proposal of having "implicit" asynchronous function calls without writing "yield", which has also been shot down several times. Regards Antoine. From greg.ewing at canterbury.ac.nz Sat Mar 5 23:53:31 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sun, 06 Mar 2011 11:53:31 +1300 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) 
In-Reply-To: References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D70025E.9020503@stoneleaf.us> <4D7018D5.9060701@canterbury.ac.nz> <4D7020A1.8080702@stoneleaf.us> <4D703007.7050701@canterbury.ac.nz> <32506B21-8CED-421F-8BC9-2228AA6BFFB5@gmail.com> <4D7058FE.4050804@canterbury.ac.nz> <3D29D5C0-9A3B-4927-AD1B-DDAE4486D540@gmail.com> Message-ID: <4D72BEEB.9030201@canterbury.ac.nz> Guido van Rossum wrote: > I think that's a reasonable thought to pursue further. If I could write > > class Person(Model): > name = StringProperty() > age = IntegerProperty() > > without Model needing to have a custom metaclass that goes over the > __dict__ and tells each Property instance its name I would take it. You mean without *any* new syntax? I suppose that could be done with a suitable protocol, e.g. whenever assigning something to a bare name, if it has a __setname__ method, call it with the name being assigned to. That would incur overhead on every assignment to a bare name, although if __setname__ is given a type slot it might be possible to make the overhead small enough to ignore. Another problem is that you really only want this to happen on the *first* assignment. Another approach might be to make it a standard part of the class creation process to go through the attribute dict looking for objects with __setname__ methods and calling them. That would mean the feature would only be available for objects residing in classes, though, so you wouldn't be able to use it to declare named tuples at module level, for example. -- Greg > > In other news, exploring the pros and cons of x.(foo) and its > alternatives and variations would also be a fine idea. I find myself > writing getattr() a lot. Although OTOH I also find myself telling > people in code reviews a lot how they can avoid the need for using > getattr(). 
And OT3H my most common use of getattr() is probably the > 3-argument variant, as a best-practice alternative to using hasattr(). > From greg.ewing at canterbury.ac.nz Sun Mar 6 00:01:26 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sun, 06 Mar 2011 12:01:26 +1300 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: <4D71382F.5040303@stoneleaf.us> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D70025E.9020503@stoneleaf.us> <4D701C22.50002@canterbury.ac.nz> <4D70235A.1070608@stoneleaf.us> <4D70327A.10108@canterbury.ac.nz> <4D71382F.5040303@stoneleaf.us> Message-ID: <4D72C0C6.5080107@canterbury.ac.nz> Ethan Furman wrote: > class Person(): > name = Str() > address = Str() > > def Str(): > whatever = __var_name__ # 'name' in first call, 'address' in second > ... Hmmm, that seems rather convoluted, and it's not clear what should happen in more complex cases. What if there is more than one function call in the rhs expression? Which one gets the __var_name__? Do any of them? I suppose the answer would be that it only applies when the top level of the rhs consists of a single function call. It's also not clear how to implement this. Setting it as an attribute of the function itself would be wrong -- it really needs to be injected into the namespace of the frame somehow during the call. -- Greg From greg.ewing at canterbury.ac.nz Sun Mar 6 00:18:04 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sun, 06 Mar 2011 12:18:04 +1300 Subject: [Python-ideas] Make-statement [Re: Different interface for namedtuple?] In-Reply-To: References: <265ECDA0-F91F-488F-821E-596EB84C4951@gmail.com> Message-ID: <4D72C4AC.7040302@canterbury.ac.nz> Carl M. 
Johnson wrote: > make NamedTuple(rename=True) Point: > x, y > > make Table() Author: > firstname, lastname, DOB = Str(), Str(), Date() I did some thinking about this kind of thing once when I was considering Python as a language for interactive fiction. When writing IF, you tend to have a lot of unique objects with special behaviour, and to support this, dedicated IF languages such as TADS usually have a construct that's somewhere between a class declaration and an instantiation. For Python, I postulated an "instance" statement that would be used something like this: instance Wardrobe(Thing): name = "wardrobe" description = "A nice mahogany double-door wardrobe." def take(self): print "The wardrobe is too heavy to pick up." What this would do is first create an anonymous subclass of Thing, and then instantiate that subclass. This isn't quite what we're after here, though, because for use cases such as namedtuple, we don't usually want a subclass, and we *do* usually want to pass parameters to the constructor. We'd also like it to fit on one line in simple cases. It's hard to come up with a syntax that incorporates all of these possibilites at the same time without becoming rather long and confusing. The base class list and the construction parameters both want to go in parentheses, and how do you make it obvious which is which? -- Greg From python at mrabarnett.plus.com Sun Mar 6 01:32:21 2011 From: python at mrabarnett.plus.com (MRAB) Date: Sun, 06 Mar 2011 00:32:21 +0000 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) 
In-Reply-To: <4D72BEEB.9030201@canterbury.ac.nz> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D70025E.9020503@stoneleaf.us> <4D7018D5.9060701@canterbury.ac.nz> <4D7020A1.8080702@stoneleaf.us> <4D703007.7050701@canterbury.ac.nz> <32506B21-8CED-421F-8BC9-2228AA6BFFB5@gmail.com> <4D7058FE.4050804@canterbury.ac.nz> <3D29D5C0-9A3B-4927-AD1B-DDAE4486D540@gmail.com> <4D72BEEB.9030201@canterbury.ac.nz> Message-ID: <4D72D615.1040101@mrabarnett.plus.com> On 05/03/2011 22:53, Greg Ewing wrote: > Guido van Rossum wrote: > >> I think that's a reasonable thought to pursue further. If I could write >> >> class Person(Model): >> name = StringProperty() >> age = IntegerProperty() >> >> without Model needing to have a custom metaclass that goes over the >> __dict__ and tells each Property instance its name I would take it. > > You mean without *any* new syntax? > > I suppose that could be done with a suitable protocol, e.g. > whenever assigning something to a bare name, if it has a > __setname__ method, call it with the name being assigned to. > > That would incur overhead on every assignment to a bare name, > although if __setname__ is given a type slot it might be > possible to make the overhead small enough to ignore. > > Another problem is that you really only want this to happen > on the *first* assignment. > > Another approach might be to make it a standard part of the > class creation process to go through the attribute dict > looking for objects with __setname__ methods and calling > them. That would mean the feature would only be available > for objects residing in classes, though, so you wouldn't > be able to use it to declare named tuples at module level, > for example. > With classes, couldn't that be done just as easily now with a decorator? 
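MRAB's suggestion can be sketched directly: a class decorator that walks the class namespace and calls the hypothetical `__setname__` hook on any attribute that defines it, with no custom metaclass needed. (Python 3.6 later standardized a very similar hook spelled `__set_name__`.)

```python
def resolve_names(cls):
    # Class decorator sketch: tell each attribute its name, per the
    # hypothetical __setname__ protocol discussed in this thread.
    for name, attr in vars(cls).items():
        setname = getattr(attr, "__setname__", None)
        if setname is not None:
            setname(name)
    return cls

class Property:
    def __setname__(self, name):
        self.name = name

@resolve_names
class Person:
    name = Property()
    age = Property()

print(Person.age.name)   # 'age'
```

As Greg notes below, the cost is that every class using such properties must remember to apply the decorator.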
From ncoghlan at gmail.com Sun Mar 6 01:55:11 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 6 Mar 2011 10:55:11 +1000 Subject: [Python-ideas] @run_as_thread decorator In-Reply-To: References: Message-ID: On Sun, Mar 6, 2011 at 3:16 AM, Jesse Noller wrote: > I've long wanted to put something into the stdlib like this, but as > others in the thread have pointed out - there's some semantics that > remain to be hashed out and the behavior is dangerous (imo), and > magical to have in the stdlib right now. > > In this case, I would recommend building out a library that contains > these decorators (both threads and processes) building from the > futures (concurrent.futures.Executor ABC) library as possible, and > let's see how it pans out. I've struggled with really liking/wanting > this and the fact that it's dangerous, and surprising. Well said, especially the last line :) However, I suspect this is one of those things where: - rewriting it yourself is easier than finding a library for it, so a PyPI module would gather little interest or feedback - doing it "right" in the stdlib would eliminate the temptation to develop custom not-quite-right implementations (e.g. ones where the decorator actually *creates* and starts the thread) It would make a good topic for a PEP, IMO. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From arnodel at gmail.com Sun Mar 6 01:56:03 2011 From: arnodel at gmail.com (Arnaud Delobelle) Date: Sun, 6 Mar 2011 00:56:03 +0000 Subject: [Python-ideas] Make-statement [Re: Different interface for namedtuple?] In-Reply-To: <4D72C4AC.7040302@canterbury.ac.nz> References: <265ECDA0-F91F-488F-821E-596EB84C4951@gmail.com> <4D72C4AC.7040302@canterbury.ac.nz> Message-ID: <5BFF6041-A10C-4E1C-B68A-76D56C2D2A31@gmail.com> On 5 Mar 2011, at 23:18, Greg Ewing wrote: > Carl M. 
Johnson wrote: > >> make NamedTuple(rename=True) Point: >> x, y >> make Table() Author: >> firstname, lastname, DOB = Str(), Str(), Date() > > I did some thinking about this kind of thing once when I was > considering Python as a language for interactive fiction. When > writing IF, you tend to have a lot of unique objects with > special behaviour, and to support this, dedicated IF languages > such as TADS usually have a construct that's somewhere between > a class declaration and an instantiation. > > For Python, I postulated an "instance" statement that would > be used something like this: > > instance Wardrobe(Thing): > > name = "wardrobe" > description = "A nice mahogany double-door wardrobe." > > def take(self): > print "The wardrobe is too heavy to pick up." > > What this would do is first create an anonymous subclass of > Thing, and then instantiate that subclass. That's interesting, I once wrote a simple IF engine in Python and I wanted to achieve this almost exactly! I solved it as follows: instances are declared as classes inheriting from their type and the special type named Instance. So the the class definition below binds "House" to an instance of Location. class House(Instance, Location): description = "a small country house" long_description = "You are in a small country house." 
objects = RedKey, WoodenChest, BackDoor def go_south(): if BackDoor.is_open: message('you step through the door into the garden.') location = Garden else: message("You can't go THROUGH the door, it's closed!") Instance was defined as follows: class MetaInstance(MetaObject): def __init__(cls, name, bases, attrs): pass class Instance(object): __metaclass__ = MetaInstance @staticmethod def MetaInstance__new__(meta, name, bases, attrs): bases = list(bases) bases.remove(Instance) cls = bases.pop() return cls(**attrs) MetaInstance.__new__ = MetaInstance__new__ -- Arnaud From jnoller at gmail.com Sun Mar 6 04:25:28 2011 From: jnoller at gmail.com (Jesse Noller) Date: Sat, 5 Mar 2011 22:25:28 -0500 Subject: [Python-ideas] @run_as_thread decorator In-Reply-To: References: Message-ID: On Sat, Mar 5, 2011 at 7:55 PM, Nick Coghlan wrote: > On Sun, Mar 6, 2011 at 3:16 AM, Jesse Noller wrote: >> I've long wanted to put something into the stdlib like this, but as >> others in the thread have pointed out - there's some semantics that >> remain to be hashed out and the behavior is dangerous (imo), and >> magical to have in the stdlib right now. >> >> In this case, I would recommend building out a library that contains >> these decorators (both threads and processes) building from the >> futures (concurrent.futures.Executor ABC) library as possible, and >> let's see how it pans out. I've struggled with really liking/wanting >> this and the fact that it's dangerous, and surprising. > > Well said, especially the last line :) > > However, I suspect this is one of those things where: > - rewriting it yourself is easier than finding a library for it, so a > PyPI module would gather little interest or feedback > - doing it "right" in the stdlib would eliminate the temptation to > develop custom not-quite-right implementations (e.g. ones where the > decorator actually *creates* and starts the thread) > > It would make a good topic for a PEP, IMO. > > Cheers, > Nick. 
Well, why stop at one-function in a thread - why not have: @threading.pool(10): def func(me): .... runs func in a threadpool of 10 Or with threading.pool(10) as pool: pool.in.put(blah) pool.out.get(blah) And so on - I can totally buy that it's pep fodder to do it "right" once, for the stdlib, I've just struggled and debated with a lot of people about how helpful this really is and the right way of doing it. The original proposal seems OK at first - just run this function in a thread - at first glance it seems like it's a harmless decorator to throw a function into a thread or process. But then what about the "surprise" that a function call forked a new thread - or new process? Maybe you're right - maybe this is good PEP territory. If nothing more we could all hash out the various way of biting yourself in the butt with these :) jesse ps: i love controlling thread/process pools with context managers. I'm ill. From guido at python.org Sun Mar 6 04:31:51 2011 From: guido at python.org (Guido van Rossum) Date: Sat, 5 Mar 2011 19:31:51 -0800 Subject: [Python-ideas] @run_as_thread decorator In-Reply-To: References: Message-ID: On Sat, Mar 5, 2011 at 7:25 PM, Jesse Noller wrote: > On Sat, Mar 5, 2011 at 7:55 PM, Nick Coghlan wrote: >> On Sun, Mar 6, 2011 at 3:16 AM, Jesse Noller wrote: >>> I've long wanted to put something into the stdlib like this, but as >>> others in the thread have pointed out - there's some semantics that >>> remain to be hashed out and the behavior is dangerous (imo), and >>> magical to have in the stdlib right now. >>> >>> In this case, I would recommend building out a library that contains >>> these decorators (both threads and processes) building from the >>> futures (concurrent.futures.Executor ABC) library as possible, and >>> let's see how it pans out. I've struggled with really liking/wanting >>> this and the fact that it's dangerous, and surprising. 
>> >> Well said, especially the last line :) >> >> However, I suspect this is one of those things where: >> - rewriting it yourself is easier than finding a library for it, so a >> PyPI module would gather little interest or feedback >> - doing it "right" in the stdlib would eliminate the temptation to >> develop custom not-quite-right implementations (e.g. ones where the >> decorator actually *creates* and starts the thread) >> >> It would make a good topic for a PEP, IMO. >> >> Cheers, >> Nick. > > Well, why stop at one-function in a thread - why not have: > > @threading.pool(10): > def func(me): > ? ?.... runs func in a threadpool of 10 > > Or > > with threading.pool(10) as pool: > ? ?pool.in.put(blah) > ? ?pool.out.get(blah) > > And so on - I can totally buy that it's pep fodder to do it "right" > once, for the stdlib, I've just struggled and debated with a lot of > people about how helpful this really is and the right way of doing it. > The original proposal seems OK at first - just run this function in a > thread - at first glance it seems like it's a harmless decorator to > throw a function into a thread or process. But then what about the > "surprise" that a function call forked a new thread - or new process? > > Maybe you're right - maybe this is good PEP territory. If nothing more > we could all hash out the various way of biting yourself in the butt > with these :) I'm thinking that since all of this is all pretty simple it's something that applications or 3rd party libraries can easily have in a "utilities" module of their own. The bar is a lot lower that way. > jesse > > ps: i love controlling thread/process pools with context managers. I'm ill. Sick, you mean. 
:-) -- --Guido van Rossum (python.org/~guido) From jnoller at gmail.com Sun Mar 6 04:38:41 2011 From: jnoller at gmail.com (Jesse Noller) Date: Sat, 5 Mar 2011 22:38:41 -0500 Subject: [Python-ideas] @run_as_thread decorator In-Reply-To: References: Message-ID: On Sat, Mar 5, 2011 at 10:31 PM, Guido van Rossum wrote: > On Sat, Mar 5, 2011 at 7:25 PM, Jesse Noller wrote: >> On Sat, Mar 5, 2011 at 7:55 PM, Nick Coghlan wrote: >>> On Sun, Mar 6, 2011 at 3:16 AM, Jesse Noller wrote: >>>> I've long wanted to put something into the stdlib like this, but as >>>> others in the thread have pointed out - there's some semantics that >>>> remain to be hashed out and the behavior is dangerous (imo), and >>>> magical to have in the stdlib right now. >>>> >>>> In this case, I would recommend building out a library that contains >>>> these decorators (both threads and processes) building from the >>>> futures (concurrent.futures.Executor ABC) library as possible, and >>>> let's see how it pans out. I've struggled with really liking/wanting >>>> this and the fact that it's dangerous, and surprising. >>> >>> Well said, especially the last line :) >>> >>> However, I suspect this is one of those things where: >>> - rewriting it yourself is easier than finding a library for it, so a >>> PyPI module would gather little interest or feedback >>> - doing it "right" in the stdlib would eliminate the temptation to >>> develop custom not-quite-right implementations (e.g. ones where the >>> decorator actually *creates* and starts the thread) >>> >>> It would make a good topic for a PEP, IMO. >>> >>> Cheers, >>> Nick. >> >> Well, why stop at one-function in a thread - why not have: >> >> @threading.pool(10): >> def func(me): >> ? ?.... runs func in a threadpool of 10 >> >> Or >> >> with threading.pool(10) as pool: >> ? ?pool.in.put(blah) >> ? 
?pool.out.get(blah) >> >> And so on - I can totally buy that it's pep fodder to do it "right" >> once, for the stdlib, I've just struggled and debated with a lot of >> people about how helpful this really is and the right way of doing it. >> The original proposal seems OK at first - just run this function in a >> thread - at first glance it seems like it's a harmless decorator to >> throw a function into a thread or process. But then what about the >> "surprise" that a function call forked a new thread - or new process? >> >> Maybe you're right - maybe this is good PEP territory. If nothing more >> we could all hash out the various way of biting yourself in the butt >> with these :) > > I'm thinking that since all of this is all pretty simple it's > something that applications or 3rd party libraries can easily have in > a "utilities" module of their own. The bar is a lot lower that way. True, they're trivial to implement, and Nick may be right that there's no way for a package to be able to gain traction that's solely based on this idea. But something that builds on decorator and context manager usage of threads and pools / futures might be able to gain a fair amount of it. >> ps: i love controlling thread/process pools with context managers. I'm ill. > > Sick, you mean. :-) Yes, this is true as well. From greg.ewing at canterbury.ac.nz Sun Mar 6 05:10:11 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sun, 06 Mar 2011 17:10:11 +1300 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) 
In-Reply-To: <4D72D615.1040101@mrabarnett.plus.com> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D70025E.9020503@stoneleaf.us> <4D7018D5.9060701@canterbury.ac.nz> <4D7020A1.8080702@stoneleaf.us> <4D703007.7050701@canterbury.ac.nz> <32506B21-8CED-421F-8BC9-2228AA6BFFB5@gmail.com> <4D7058FE.4050804@canterbury.ac.nz> <3D29D5C0-9A3B-4927-AD1B-DDAE4486D540@gmail.com> <4D72BEEB.9030201@canterbury.ac.nz> <4D72D615.1040101@mrabarnett.plus.com> Message-ID: <4D730923.9080804@canterbury.ac.nz> MRAB wrote: > On 05/03/2011 22:53, Greg Ewing wrote: > >> Another approach might be to make it a standard part of the >> class creation process to go through the attribute dict >> looking for objects with __setname__ methods and calling >> them. That would mean the feature would only be available > > With classes, couldn't that be done just as easily now with a decorator? I suppose it could, but then the burden is on the user to apply the decorator to each class in which he wants to use things that rely on it. That doesn't seem very satisfactory. -- Greg From cmjohnson.mailinglist at gmail.com Sun Mar 6 07:27:33 2011 From: cmjohnson.mailinglist at gmail.com (Carl M. Johnson) Date: Sat, 5 Mar 2011 20:27:33 -1000 Subject: [Python-ideas] Make-statement [Re: Different interface for namedtuple?] In-Reply-To: <4D72C4AC.7040302@canterbury.ac.nz> References: <265ECDA0-F91F-488F-821E-596EB84C4951@gmail.com> <4D72C4AC.7040302@canterbury.ac.nz> Message-ID: If you go back and re-read PEP 359, a lot of the motivating examples -- creating namespaces, setting GUI properties, templating HTML -- are still compelling. If you add in the examples of ORM tables and interactive fiction processors, I think what unites all the examples is that the make statement is a way of creating a DSL from within Python. Can using a DSL within Python be pythonic ,though? That's a difficult question to answer. 
But, on the other hand, just because there's no clean, non-magical way of making a DSL within Python doesn't mean people won't try and indeed, we can see the results of their trying out there today. For example, see Biwako a recent project which abuses metaclasses to create a declarative syntax for processing file formats. Here's an excerpt from that page: > For example, here?s a very simple Biwako class that will can parse part of the > GIF file format, allowing you to easily get to the width and height of any GIF image. > > from biwako import bin > > class GIF(bin.Structure, endianness=bin.LittleEndian, encoding='ascii'): > tag = bin.FixedString('GIF') > version = bin.String(size=3) > width = bin.Integer(size=2) > height = bin.Integer(size=2) > > Now you have a class that can accept any GIF image as a file (or any file-like > object that?s readable) and parse it into the attributes shown on this class. > > >>> image = GIF(open('example.gif', 'rb')) > >>> image.width, image.height > (400, 300) So basically, the author of this project is calling GIF a "class" but it's not something that really operates the way a normal class does, because it's subclassing the magical bin.Structure class and inheriting its metaclass. With a little searching, you can find similar examples of abuse that are centered around the with statement rather than metaclasses. People have made the with statement into an XML generator or an anonymous block handler . Seeing examples like these make me think a re-examination of PEP 359 would be a good idea. Of course, I do think it needs a little more work (in particular, I think the make statement should have an equivalent of a __prepare__ method and should receive the BLOCK as a callback instead of automatically executing it), but the core idea is worth taking another look at. 
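To make the discussion concrete: the metaclass machinery these declarative DSLs rely on is actually quite small. Here is a rough sketch (hypothetical names, not Biwako's actual code) of collecting declared fields at class-creation time:

```python
class Field:
    # hypothetical stand-in for descriptors like bin.Integer / bin.String
    _counter = 0

    def __init__(self, size):
        self.size = size
        # record declaration order without relying on dict ordering
        Field._counter += 1
        self._order = Field._counter


class StructureMeta(type):
    def __new__(mcls, name, bases, ns):
        cls = super().__new__(mcls, name, bases, ns)
        # collect all Field attributes, sorted into declaration order
        cls._fields = sorted(
            ((attr, f) for attr, f in ns.items() if isinstance(f, Field)),
            key=lambda pair: pair[1]._order)
        return cls


class Structure(metaclass=StructureMeta):
    pass


class GIF(Structure):
    version = Field(size=3)
    width = Field(size=2)
    height = Field(size=2)


print([name for name, _ in GIF._fields])
```

The counter is what gives the fields a definition order; a __prepare__ method returning an ordered mapping is the other way to get it, which is exactly why a revived make statement would want a __prepare__ equivalent too.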
-- Carl From digitalxero at gmail.com Sun Mar 6 14:39:53 2011 From: digitalxero at gmail.com (Dj Gilcrease) Date: Sun, 6 Mar 2011 08:39:53 -0500 Subject: [Python-ideas] windows dispatcher exe for python Message-ID: On Sun, Mar 6, 2011 at 8:10 AM, Mark Hammond wrote: >> Something I have been thinking about recently though is outside the >> scope of the pep is writing a python.exe, to replace the python.bat, >> that would ?try to read the shebang line of the file to send it to the >> right version of python. Then I just associate py files with the >> dispatcher exe and everything should work as intended. > > But where would such a python.exe live and how would that directory end up > on the PATH? System32 or C:\Program Files\Python (that exists regardless of the choses install location) I like the latter but the installer would need to add that to the PATH if it wasnt already there > > On the more general idea though, it could have legs as it would solve the > file association issue for files which include the shebang and arrange for > the status-quo (or better) for files which don't... > > But this sounds like a different PEP ;) Yes absolutely a different pep so I split this to python-ideas From mrts.pydev at gmail.com Sun Mar 6 19:32:07 2011 From: mrts.pydev at gmail.com (=?ISO-8859-1?Q?Mart_S=F5mermaa?=) Date: Sun, 6 Mar 2011 20:32:07 +0200 Subject: [Python-ideas] str.split() oddness In-Reply-To: References: Message-ID: First, sorry for such a big delay in replying. On Mon, Feb 28, 2011 at 2:13 AM, Guido van Rossum wrote: > Does Ruby in general leave out empty strings from the result? What > does it return when "x,,y" is split on "," ? ["x", "", "y"] or ["x", "y"]? >> "x,,y".split(",") => ["x", "", "y"] But let me remind that the behaviour of foo.split(x) where foo is not an empty string is not questioned at all, only behaviour when splitting the empty string is. 
              Python            Ruby
join1    [''] => ''        [''] => ''
join2    [  ] => ''        [  ] => ''

              Python            Ruby
split    [''] <= ''        [  ] <= ''

As you can see, join1 and join2 are identical in both languages. Python has chosen to make split the inverse of join1, Ruby, on the other hand, the inverse of join2.

> In Python the generalization is that since "xx".split(",") is ["xx"],
> and "x",split(",") is ["x"], it naturally follows that "".split(",")
> is [""].

That is one line of reasoning that emphasizes the "string-nature" of ''. However, I myself, the Ruby folks and Nick would rather emphasize the "zero-element-nature" [1] of ''. Both approaches are based on solid reasoning, the latter just happens to be more practical. And I would still claim that

"Applying the split operator to the zero element of strings should result in the zero element of lists"

wins on theoretical grounds as well. The general problem stems from the fact that my initial expectation that

  f_a(x) = x.join(a).split(x), where x in lists, a in strings

should be an identity function can not be satisfied as join is non-injective (because of the surjective example above).

[1] http://en.wikipedia.org/wiki/Zero_element

From g.brandl at gmx.net Sun Mar 6 20:35:44 2011 From: g.brandl at gmx.net (Georg Brandl) Date: Sun, 06 Mar 2011 20:35:44 +0100 Subject: [Python-ideas] str.split() oddness In-Reply-To: References: Message-ID: On 06.03.2011 19:32, Mart Sõmermaa wrote:

>> In Python the generalization is that since "xx".split(",") is ["xx"],
>> and "x",split(",") is ["x"], it naturally follows that "".split(",")
>> is [""].
>
> That is one line of reasoning that emphasizes the
> "string-nature" of ''.
>
> However, I myself, the Ruby folks and Nick would rather
> emphasize the "zero-element-nature" [1] of ''.
>
> Both approaches are based on solid reasoning, the latter
> just happens to be more practical.

I think we haven't seen any proof of that (and no, the property of x.join(a).split(x) == a is not; show me why it would be practical).
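For the record, both sides' claims are easy to check at the interpreter. Python's column of the table, and the non-injectivity of join that Mart mentions, come out like this:

```python
# Python's actual behaviour; Ruby returns [] for "".split(",") instead
assert "".split(",") == [""]
assert "x,,y".split(",") == ["x", "", "y"]

# join is non-injective: both [''] and [] join to ''
assert ",".join([""]) == ""
assert ",".join([]) == ""

# sep.join(text.split(sep)) == text holds for every text, including ''
for text in ["", "x", "x,,y"]:
    assert ",".join(text.split(",")) == text

# but sep.join(data).split(sep) == data cannot hold for both [] and ['']:
# Python maps [] back to ['']
assert ",".join([]).split(",") == [""]
```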
Georg From mrts.pydev at gmail.com Sun Mar 6 22:06:08 2011 From: mrts.pydev at gmail.com (=?ISO-8859-1?Q?Mart_S=F5mermaa?=) Date: Sun, 6 Mar 2011 23:06:08 +0200 Subject: [Python-ideas] str.split() oddness In-Reply-To: References: Message-ID: On Sun, Mar 6, 2011 at 9:35 PM, Georg Brandl wrote: > On 06.03.2011 19:32, Mart S?mermaa wrote: > >>> In Python the generalization is that since "xx".split(",") is ["xx"], >>> and "x",split(",") is ["x"], it naturally follows that "".split(",") >>> is [""]. >> >> That is one line of reasoning that emphasizes the >> "string-nature" of ''. >> >> However, I myself, the Ruby folks and Nick would rather >> emphasize the "zero-element-nature" [1] of ''. >> >> Both approaches are based on solid reasoning, the latter >> just happens to be more practical. > > I think we haven't seen any proof of that (and no, the property > of x.join(a).split(x) == a is not show me why it would be practical). I referred to the practical example in my first message, but let me repeat it. Which do you prefer: bar = dict(chunk.split('=') for chunk in foo.split(",")) or bar = (dict(chunk.split('=') for chunk in foo.split(",")) if foo else {}) ? I'm afraid there are other people besides me that fail to think of the `if foo else {}` part the on the first shot (assuming there will be an empty list when foo='' and that `for` will not be entered at all). Best, Mart S?mermaa From jsbueno at python.org.br Sun Mar 6 22:54:48 2011 From: jsbueno at python.org.br (Joao S. O. Bueno) Date: Sun, 6 Mar 2011 18:54:48 -0300 Subject: [Python-ideas] str.split() oddness In-Reply-To: References: Message-ID: On Sun, Mar 6, 2011 at 6:06 PM, Mart S?mermaa wrote: > On Sun, Mar 6, 2011 at 9:35 PM, Georg Brandl wrote: >> On 06.03.2011 19:32, Mart S?mermaa wrote: >> >>>> In Python the generalization is that since "xx".split(",") is ["xx"], >>>> and "x",split(",") is ["x"], it naturally follows that "".split(",") >>>> is [""]. 
>>>
>>> That is one line of reasoning that emphasizes the
>>> "string-nature" of ''.
>>>
>>> However, I myself, the Ruby folks and Nick would rather
>>> emphasize the "zero-element-nature" [1] of ''.
>>>
>>> Both approaches are based on solid reasoning, the latter
>>> just happens to be more practical.
>>
>> I think we haven't seen any proof of that (and no, the property
>> of x.join(a).split(x) == a is not; show me why it would be practical).
>
> I referred to the practical example in my first message,
> but let me repeat it.
>
> Which do you prefer:
>
>   bar = dict(chunk.split('=') for chunk in foo.split(","))
>
> or
>
>   bar = (dict(chunk.split('=') for chunk in foo.split(",")) if foo else {})
>
> ?
>
> I'm afraid there are other people besides me that fail to think
> of the `if foo else {}` part on the first shot (assuming there will be an
> empty list when foo='' and that `for` will not be entered at all).

Mart, I don't know about you, but in my code, for example, there are plenty, and I mean __plenty__, of places where I assume that after a split I will have at least one element in a list. Python simply does not break backwards compatibility like that, least of all for such little things as this. Such a behavior as you describe, while apparently not bad, simply is not the way Python works, and cannot be changed without a break of compatibility. The current behavior has advantages as well: one can always refer to the 1st ( [0] ) element of the split return value. If I want to strip "# " style comments in a simple file:

  line = line.split("#")[0]

Under your new and modified method, this code would break, and would have to contain one extra "if" upon rewriting. In my opinion it makes no sense to break the rules for such a small change.
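The comment-stripping idiom above is a concrete instance of the guarantee being discussed; a quick check of what today's semantics give, and what the proposed change would do to it:

```python
def strip_comment(line):
    # safe today: str.split always returns at least one element
    return line.split("#")[0].rstrip()

assert strip_comment("x = 1  # set x") == "x = 1"
assert strip_comment("# whole-line comment") == ""
assert strip_comment("") == ""  # only works because "".split("#") == [""]

# under the proposed Ruby-style semantics, "".split("#") would be [],
# and the [0] above would raise IndexError on every empty line:
ruby_style_result = []
try:
    ruby_style_result[0]
except IndexError:
    pass  # this is where the idiom would break
```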
Moreover, if you come to think of it, while parsing lines in a text file, that might contain some kind of assignment interspersed with blank lines, as the one you describe, nearly any code dealing with that will have to check for blank lines containing white spaces as well. And in this case, with or without your changes: a = line.split("=") if len(a)==2: ... The situation you hit where you avoid writing that "if" int he generator expression is more likely very peculiar to the program you were writing in that moment - it is not the case I encounter in real day to day coding. js -><- > Best, > Mart S?mermaa > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > http://mail.python.org/mailman/listinfo/python-ideas > From steve at pearwood.info Mon Mar 7 00:21:52 2011 From: steve at pearwood.info (Steven D'Aprano) Date: Mon, 7 Mar 2011 10:21:52 +1100 Subject: [Python-ideas] str.split() oddness In-Reply-To: References: Message-ID: <201103071021.52967.steve@pearwood.info> On Mon, 7 Mar 2011 08:06:08 am Mart S?mermaa wrote: > Which do you prefer: > > bar = dict(chunk.split('=') for chunk in foo.split(",")) > > or > > bar = (dict(chunk.split('=') for chunk in foo.split(",")) if foo > else {}) > > ? Which would you prefer? line = line.split("#")[0].rstrip() line = line.split("#")[0].rstrip() if line else "" Whichever behaviour we give split, we're going to complicate something. Since there's no overwhelming reason to prefer one use-case over the other, the status quo wins. Any change to the behaviour of split will break code which is currently working, and that alone is reason enough to stick with the current behaviour. By the way, your dict() examples are not robust against minor whitespace changes in foo. 
Consider what happens with either of:

  foo = "x=1, y = 4, z = 2"
  foo = "x=1,y=4,z=2,"

-- Steven D'Aprano

From ncoghlan at gmail.com Mon Mar 7 00:24:23 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 7 Mar 2011 09:24:23 +1000 Subject: [Python-ideas] str.split() oddness In-Reply-To: References: Message-ID: On Mon, Mar 7, 2011 at 4:32 AM, Mart Sõmermaa wrote:

> However, I myself, the Ruby folks and Nick would rather
> emphasize the "zero-element-nature" [1] of ''.

I did say maybe. As Jesse notes, there's another pattern-based line of argument that goes:

  len('..'.split('.')) == 3
  len('.'.split('.')) == 2
  len(''.split('.')) == ???

(Well, 1 "obviously", since the pattern suggests that even when there is no other text in the string, the length of the split result is always 1 more than the number of separators occurring in the string)

There are reasonable arguments for "''.split(sep)" as the inverse of either "sep.join([''])" or "sep.join([])", but once *either* has been chosen for a given language, none of the arguments are strong enough to justify switching to the other behaviour.

Note that, independent of which is chosen, the following identity will hold for an explicit separator:

  sep.join(text.split(sep)) == text

It's only composing them the other way around as "sep.join(data).split(sep)" that will convert either [] to [''] (as in Python) or [''] to [] (as in Ruby).

Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From tjreedy at udel.edu Mon Mar 7 01:51:02 2011 From: tjreedy at udel.edu (Terry Reedy) Date: Sun, 06 Mar 2011 19:51:02 -0500 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...)
In-Reply-To: <4D730923.9080804@canterbury.ac.nz> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D70025E.9020503@stoneleaf.us> <4D7018D5.9060701@canterbury.ac.nz> <4D7020A1.8080702@stoneleaf.us> <4D703007.7050701@canterbury.ac.nz> <32506B21-8CED-421F-8BC9-2228AA6BFFB5@gmail.com> <4D7058FE.4050804@canterbury.ac.nz> <3D29D5C0-9A3B-4927-AD1B-DDAE4486D540@gmail.com> <4D72BEEB.9030201@canterbury.ac.nz> <4D72D615.1040101@mrabarnett.plus.com> <4D730923.9080804@canterbury.ac.nz> Message-ID: On 3/5/2011 11:10 PM, Greg Ewing wrote: > MRAB wrote: >> On 05/03/2011 22:53, Greg Ewing wrote: >> >>> Another approach might be to make it a standard part of the >>> class creation process to go through the attribute dict >>> looking for objects with __setname__ methods and calling >>> them. That would mean the feature would only be available > > >> With classes, couldn't that be done just as easily now with a decorator? > > I suppose it could, but then the burden is on the user to > apply the decorator to each class in which he wants to use > things that rely on it. That doesn't seem very satisfactory. Much more satisfactory than an 'assignment decorator', which put more burden on the user and break the current meaning of 'decorator'. -- Terry Jan Reedy From tjreedy at udel.edu Mon Mar 7 03:07:36 2011 From: tjreedy at udel.edu (Terry Reedy) Date: Sun, 06 Mar 2011 21:07:36 -0500 Subject: [Python-ideas] str.split() oddness In-Reply-To: References: Message-ID: On 3/6/2011 1:32 PM, Mart S?mermaa wrote: > On Mon, Feb 28, 2011 at 2:13 AM, Guido van Rossum wrote: Two minutes before that, I posted a more extensive reply and refutation that you have not replied to. > But let me remind that the behaviour of foo.split(x) where > foo is not an empty string is not questioned at all, only > behaviour when splitting the empty string is. 
> > Python Ruby > join1 [''] => '' [''] => '' > join2 [ ] => '' [ ] => '' > > Python Ruby > split ['']<= '' [ ]<= '' > > As you can see, join1 and join2 are identical in both > languages. Python has chosen to make split the inverse of > join1, Ruby, on the other hand, the inverse of join2. > >> In Python the generalization is that since "xx".split(",") is ["xx"], >> and "x",split(",") is ["x"], it naturally follows that "".split(",") >> is [""]. Which I wrote as: (n*c).split(c) == (n+1)*[''] The generalization: len(s.split(c)) == s.count(c)+1 You want to change these into (n*c).split(c) == (n+1)*[''] if n else [] len(s.split(c)) == s.count(c)+1 if s else 0 which is to say, you want to add an easily forgotten conditional and alternative to definition of split. > That is one line of reasoning that emphasizes the > "string-nature" of ''. I do not see that particularly. I emphasize the algorithmic nature of functions and prefer simpler definitions/algorithms to more complicated ones with unnecessary special cases. > However, I myself, the Ruby folks and Nick would rather > emphasize the "zero-element-nature" [1] of ''. Which says nothing it itself. Saying that one member of the domain of a function is the identify element under some particular operation (concatenation, in this case) says nothing about what that member should be mapped to by any particular function. You seem to emphasize the mapping (set of ordered pairs) nature of functions and are hence willing to change one of the mappings (ordered pairs) without regard to its relation to all the other pairs. This is a consequence of the set view, which by itself denies any relation between its members (the mapping pairs). > "Applying the split operator to the zero element of > strings should result in the zero element of lists" To repeat, 'should' has no justification; it is just hand waving. 
Would you really say that every function should map identities to identities (and what if domain and range have more than one)? I hope not. Would you even say that every *string* function should map '' to the identity elememt of the range set? Or more specifically, should every string->list function map '' to []? Nonsense. It depends on the function. To also repeat, if split produced an iterable, then there would be no 'zero element of lists' to talk about. Anyway, it is a moot point as change would break code. > The general problem stems from the fact that my initial > expectation that > > f_a(x) = x.join(a).split(x), where x in lists, a in strings > > should be an identity function can not be satisfied as join > is non-injective (because of the surjective example above). Since I was the first to point this out, I am glad you now agree. -- Terry Jan Reedy From tjreedy at udel.edu Mon Mar 7 03:13:30 2011 From: tjreedy at udel.edu (Terry Reedy) Date: Sun, 06 Mar 2011 21:13:30 -0500 Subject: [Python-ideas] str.split() oddness In-Reply-To: References: Message-ID: On 3/6/2011 4:06 PM, Mart S?mermaa wrote: > Which do you prefer: > bar = dict(chunk.split('=') for chunk in foo.split(",")) > or > bar = (dict(chunk.split('=') for chunk in foo.split(",")) if foo else {}) Others have pointed out that one example is not representative of the universe of use cases of split. However, the irony of this example is the *you* are the one who prefers to add 'if s != '' else []' to the definition of s.split(c) ;-). -- Terry Jan Reedy From guido at python.org Mon Mar 7 04:11:47 2011 From: guido at python.org (Guido van Rossum) Date: Sun, 6 Mar 2011 19:11:47 -0800 Subject: [Python-ideas] Make-statement [Re: Different interface for namedtuple?] In-Reply-To: References: <265ECDA0-F91F-488F-821E-596EB84C4951@gmail.com> <4D72C4AC.7040302@canterbury.ac.nz> Message-ID: On Sat, Mar 5, 2011 at 10:27 PM, Carl M. 
Johnson wrote: > If you go back and re-read PEP 359, a lot of the motivating examples > -- creating namespaces, setting GUI properties, templating HTML -- are > still compelling. If you add in the examples of ORM tables and > interactive fiction processors, I think what unites all the examples > is that the make statement is a way of creating a DSL from within > Python. Can using a DSL within Python be pythonic ,though? That's a > difficult question to answer. But, on the other hand, just because > there's no clean, non-magical way of making a DSL within Python > doesn't mean people won't try and indeed, we can see the results of > their trying out there today. > > For example, see Biwako a > recent project which abuses metaclasses to create a declarative syntax > for processing file formats. Here's an excerpt from that page: > >> For example, here?s a very simple Biwako class that will can parse part of the >> GIF file format, allowing you to easily get to the width and height of any GIF image. >> >> from biwako import bin >> >> class GIF(bin.Structure, endianness=bin.LittleEndian, encoding='ascii'): >> ? ?tag = bin.FixedString('GIF') >> ? ?version = bin.String(size=3) >> ? ?width = bin.Integer(size=2) >> ? ?height = bin.Integer(size=2) >> >> Now you have a class that can accept any GIF image as a file (or any file-like >> object that?s readable) and parse it into the attributes shown on this class. >> >> >>> image = GIF(open('example.gif', 'rb')) >> >>> image.width, image.height >> (400, 300) > > So basically, the author of this project is calling GIF a "class" but > it's not something that really operates the way a normal class does, > because it's subclassing the magical bin.Structure class and > inheriting its metaclass. Hm... I find that example pretty clear and don't think it is in much need of improvement. 
I also don't think there's anything wrong with using class -- after all each call to GIF() returns a new object whose attributes are defined by the definition. I'd assume that adding methods to the class would just work and be utterly unsurprising to anyone familiar with basic classes. > With a little searching, you can find similar examples of abuse that > are centered around the with statement rather than metaclasses. People > have made the with statement into an XML generator > > or an anonymous block handler > . TBH I find such abuse of 'with' much more troubling. > Seeing examples like these make me think a re-examination of PEP 359 > would be a good idea. Of course, I do think it needs a little more > work (in particular, I think the make statement should have an > equivalent of a __prepare__ method and should receive the BLOCK as a > callback instead of automatically executing it), but the core idea is > worth taking another look at. This I agree with. I have a feeling that last time around it sunk primarily because people were trying to pile too many different semantics onto the same syntax -- hopefully the infatuation with Ruby anonymous blocks has deflated somewhat by now. -- --Guido van Rossum (python.org/~guido) From guido at python.org Mon Mar 7 04:27:31 2011 From: guido at python.org (Guido van Rossum) Date: Sun, 6 Mar 2011 19:27:31 -0800 Subject: [Python-ideas] str.split() oddness In-Reply-To: References: Message-ID: Well, I'm sorry, but this is not going to change, so I don't see much point in continuing to discuss it. We can explain the reasoning that leads to the current behavior (as you note, it's solid), we can discuss an alternative that could be considered just as solid, but it can't prevail in this universe. The cost of change is just too high, so we'll just have to live with the current behavior (and we might as well accept that it's solid instead of trying to fight it). 
--Guido

On Sun, Mar 6, 2011 at 10:32 AM, Mart Sõmermaa wrote:
> First, sorry for such a big delay in replying.
>
> On Mon, Feb 28, 2011 at 2:13 AM, Guido van Rossum wrote:
>> Does Ruby in general leave out empty strings from the result? What
>> does it return when "x,,y" is split on "," ? ["x", "", "y"] or ["x", "y"]?
>
>>> "x,,y".split(",")
> => ["x", "", "y"]
>
> But let me remind that the behaviour of foo.split(x) where
> foo is not an empty string is not questioned at all, only
> behaviour when splitting the empty string is.
>
>               Python            Ruby
> join1    [''] => ''        [''] => ''
> join2    [  ] => ''        [  ] => ''
>
>               Python            Ruby
> split    [''] <= ''        [  ] <= ''
>
> As you can see, join1 and join2 are identical in both
> languages. Python has chosen to make split the inverse of
> join1, Ruby, on the other hand, the inverse of join2.
>
>> In Python the generalization is that since "xx".split(",") is ["xx"],
>> and "x",split(",") is ["x"], it naturally follows that "".split(",")
>> is [""].
>
> That is one line of reasoning that emphasizes the
> "string-nature" of ''.
>
> However, I myself, the Ruby folks and Nick would rather
> emphasize the "zero-element-nature" [1] of ''.
>
> Both approaches are based on solid reasoning, the latter
> just happens to be more practical. And I would still claim
> that
>
> "Applying the split operator to the zero element of
> strings should result in the zero element of lists"
>
> wins on theoretical grounds as well.
>
> The general problem stems from the fact that my initial
> expectation that
>
>   f_a(x) = x.join(a).split(x), where x in lists, a in strings
>
> should be an identity function can not be satisfied as join
> is non-injective (because of the surjective example above).
> > [1] http://en.wikipedia.org/wiki/Zero_element > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > http://mail.python.org/mailman/listinfo/python-ideas > -- --Guido van Rossum (python.org/~guido) From bruce at leapyear.org Mon Mar 7 07:06:21 2011 From: bruce at leapyear.org (Bruce Leban) Date: Sun, 6 Mar 2011 22:06:21 -0800 Subject: [Python-ideas] Make-statement [Re: Different interface for namedtuple?] In-Reply-To: References: <265ECDA0-F91F-488F-821E-596EB84C4951@gmail.com> <4D72C4AC.7040302@canterbury.ac.nz> Message-ID: On Sun, Mar 6, 2011 at 7:11 PM, Guido van Rossum wrote: > On Sat, Mar 5, 2011 at 10:27 PM, Carl M. Johnson > wrote:> With a little searching, you can > find similar examples of abuse that > > are centered around the with statement rather than metaclasses. People > > have made the with statement into an XML generator > > < > http://langexplr.blogspot.com/2009/02/writing-xml-with-ironpython-xmlwriter.html > > > > or an anonymous block handler > > . > > TBH I find such abuse of 'with' much more troubling. I'm curious if you are troubled by both of these or one more than the other. Personally, the xml writer seems like a reasonable use to me. While I really don't like the anonymous block hack (either the use or the implementation). --- Bruce -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrts.pydev at gmail.com Mon Mar 7 07:50:21 2011 From: mrts.pydev at gmail.com (=?ISO-8859-1?Q?Mart_S=F5mermaa?=) Date: Mon, 7 Mar 2011 08:50:21 +0200 Subject: [Python-ideas] str.split() oddness In-Reply-To: References: Message-ID: That's a well-balanced summary that I entirely agree with. However, I suggest that we keep the pros and cons in mind and perhaps re-discuss the behaviour during the Python 4 design phase. 
Thank you all for your input, best regards, MS On Mon, Mar 7, 2011 at 5:27 AM, Guido van Rossum wrote: > Well, I'm sorry, but this is not going to change, so I don't see much > point in continuing to discuss it. We can explain the reasoning that > leads to the current behavior (as you note, it's solid), we can > discuss an alternative that could be considered just as solid, but it > can't prevail in this universe. The cost of change is just too high, > so we'll just have to live with the current behavior (and we might as > well accept that it's solid instead of trying to fight it). > > --Guido From raymond.hettinger at gmail.com Mon Mar 7 08:24:28 2011 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Sun, 6 Mar 2011 23:24:28 -0800 Subject: [Python-ideas] Make-statement [Re: Different interface for namedtuple?] In-Reply-To: References: <265ECDA0-F91F-488F-821E-596EB84C4951@gmail.com> <4D72C4AC.7040302@canterbury.ac.nz> Message-ID: On Mar 6, 2011, at 10:06 PM, Bruce Leban wrote: > . Personally, the xml writer seems like a reasonable use to me. I'm surprised that you like the XML writer. To me it seems much more awkward to type the python code than the XML it generates: w = XmlWriter.Create(System.Console.Out,XmlWriterSettings(Indent=True)) x = XWriter(w) with x.element('tableofcontents'): with x.element('section',{'page' : '10'}): x.text('Introduction') with x.element('section',{'page' : '12'}): x.text('Main topic') with x.element('section',{'page' : '14'}): x.text('Extra topic') Generates:
<tableofcontents>
  <section page="10">Introduction</section>
  <section page="12">Main topic</section>
  <section page="14">Extra topic</section>
</tableofcontents>
At least in this example, it seems to me that the XML writer created more work and more complexity than it saved. Raymond -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Mon Mar 7 08:28:06 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 7 Mar 2011 17:28:06 +1000 Subject: [Python-ideas] Make-statement [Re: Different interface for namedtuple?] In-Reply-To: References: <265ECDA0-F91F-488F-821E-596EB84C4951@gmail.com> <4D72C4AC.7040302@canterbury.ac.nz> Message-ID: On Mon, Mar 7, 2011 at 4:06 PM, Bruce Leban wrote: > On Sun, Mar 6, 2011 at 7:11 PM, Guido van Rossum wrote: >> On Sat, Mar 5, 2011 at 10:27 PM, Carl M. Johnson >> wrote:> With a little searching, you can >> find similar examples of abuse that >> > are centered around the with statement rather than metaclasses. People >> > have made the with statement into an XML generator >> > >> > >> > or an anonymous block handler >> > . >> >> TBH I find such abuse of 'with' much more troubling. > > I'm curious if you are troubled by both of these or one more than the other. > Personally, the xml writer seems like a reasonable use to me. While I really > don't like the anonymous block hack (either the use or the implementation). The XML example is fine, since it is just another application of "before-and-after" coding that the with statement was designed (that kind of thing was mentioned explicitly in the PEP 343 discussions, although never elaborated to anything like that degree). The bytecode hackery involved in AnonymousBlocksInPython and the withhacks module makes for fun toys to play with, but nothing that should even be remotely contemplated for a production system. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? 
Brisbane, Australia From bruce at leapyear.org Mon Mar 7 09:20:37 2011 From: bruce at leapyear.org (Bruce Leban) Date: Mon, 7 Mar 2011 00:20:37 -0800 Subject: [Python-ideas] Make-statement [Re: Different interface for namedtuple?] In-Reply-To: References: <265ECDA0-F91F-488F-821E-596EB84C4951@gmail.com> <4D72C4AC.7040302@canterbury.ac.nz> Message-ID: On Sun, Mar 6, 2011 at 11:24 PM, Raymond Hettinger < raymond.hettinger at gmail.com> wrote: > > On Mar 6, 2011, at 10:06 PM, Bruce Leban wrote: > > . Personally, the xml writer seems like a reasonable use to me. > > > I'm surprised that you like the XML writer. To me it seems much more > awkward to type the python code than the XML it generates: > > > > At least in this example, it seems to me that the XML writer created more > work and more complexity than it saved. > I agree for this example. In real code, it wouldn't all be static. It would be like:

with x.element('foo'):
    for a in stuff:
        with x.element('bar'):
            a.render(x)

I like that better than something like this:

x.write(x.element('foo', [x.element('bar', a.render()) for a in stuff]))

--- Bruce -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Mon Mar 7 09:31:26 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 7 Mar 2011 18:31:26 +1000 Subject: [Python-ideas] Make-statement [Re: Different interface for namedtuple?] In-Reply-To: References: <265ECDA0-F91F-488F-821E-596EB84C4951@gmail.com> <4D72C4AC.7040302@canterbury.ac.nz> Message-ID: On Mon, Mar 7, 2011 at 5:24 PM, Raymond Hettinger wrote: > > On Mar 6, 2011, at 10:06 PM, Bruce Leban wrote: > > . Personally, the xml writer seems like a reasonable use to me. > > I'm surprised that you like the XML writer.
To me it seems much more > awkward to type the python code than the XML it generates: >
> w = XmlWriter.Create(System.Console.Out,XmlWriterSettings(Indent=True))
> x = XWriter(w)
>
> with x.element('tableofcontents'):
>     with x.element('section',{'page' : '10'}):
>         x.text('Introduction')
>     with x.element('section',{'page' : '12'}):
>         x.text('Main topic')
>     with x.element('section',{'page' : '14'}):
>         x.text('Extra topic')

However, appropriate use of keyword arguments would clean that up quite a bit:

def toc(**kwds):
    return x.element('tableofcontents', kwds)

def section(**kwds):
    return x.element('section', kwds)

with toc():
    with section(page='10'):
        x.text(intro_text)
    with section(page='12'):
        x.text('Main topic')
    with section(page='14'):
        x.text('Extra topic')

The CM (context manager) style also lets you do a lot of nice things like providing required-by-schema default values, as well as providing conveniences to make within-document cross references easier to create. I don't think the linked page is a particularly great exemplar of the style, but you *can* do good things with the approach. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From bruce at leapyear.org Mon Mar 7 09:48:48 2011 From: bruce at leapyear.org (Bruce Leban) Date: Mon, 7 Mar 2011 00:48:48 -0800 Subject: [Python-ideas] str.split() oddness In-Reply-To: References: Message-ID: On Sun, Mar 6, 2011 at 7:27 PM, Guido van Rossum wrote: > Well, I'm sorry, but this is not going to change ... The cost of change is > just too high, > so we'll just have to live with the current behavior (and we might as > well accept that it's solid instead of trying to fight it). > > Completely agree. It's interesting that the one thing that annoys me about string.split hasn't been mentioned here. I'm not bothered by the inconsistency in handling of the degenerate cases because frequently I need code to handle the degenerate case specially anyway.
What *does* annoy me is the inconsistency of what the count parameter means between different languages. That is, str.split(delimiter, count) means different things in different languages:

Python/Javascript = max number of splits
Java/C#/Ruby = max number of results

Obviously, it would break things badly to switch from one to the other (in any language). An alternative would be first changing to:

str.split(sep, maxsplits=None)

and modifying pylint to complain if maxsplits is used as a non-keyword argument. Eventually, change to:

str.split(sep, deprecated=None, maxsplits=None)

where this throws an exception if deprecated is not None. This would also open up having a maxresults keyword if it's desirable to allow either variant. --- Bruce -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Mon Mar 7 17:33:08 2011 From: guido at python.org (Guido van Rossum) Date: Mon, 7 Mar 2011 08:33:08 -0800 Subject: [Python-ideas] Make-statement [Re: Different interface for namedtuple?] In-Reply-To: References: <265ECDA0-F91F-488F-821E-596EB84C4951@gmail.com> <4D72C4AC.7040302@canterbury.ac.nz> Message-ID: On Sun, Mar 6, 2011 at 11:28 PM, Nick Coghlan wrote: > On Mon, Mar 7, 2011 at 4:06 PM, Bruce Leban wrote: >> On Sun, Mar 6, 2011 at 7:11 PM, Guido van Rossum wrote: >>> On Sat, Mar 5, 2011 at 10:27 PM, Carl M. Johnson >>> wrote:> With a little searching, you can >>> find similar examples of abuse that >>> > are centered around the with statement rather than metaclasses. People >>> > have made the with statement into an XML generator >>> > >>> > >>> > or an anonymous block handler >>> > . >>> >>> TBH I find such abuse of 'with' much more troubling. >> >> I'm curious if you are troubled by both of these or one more than the other. >> Personally, the xml writer seems like a reasonable use to me. While I really >> don't like the anonymous block hack (either the use or the implementation).
> > The XML example is fine, since it is just another application of > > "before-and-after" coding that the with statement was designed for (that > > kind of thing was mentioned explicitly in the PEP 343 discussions, > > although never elaborated to anything like that degree). I don't think it's fine. The quoted example looks like it depends too much on implicit side effects. It is possible that I still (after all these years, and after enthusiastically supporting its introduction) don't fully appreciate the with-statement. I find it useful when I can clearly visualize the alternative code, which typically involves extra flow control such as a try/finally block. In this example it looks more like the alternative is just more calls. I expect that in reality the generation of XML is much more dynamic than in the example, and that the with-statement won't provide much help (there may even be cases where it could get in the way). I was going to say that I don't have much experience with generating XML, but that's not really true -- I have plenty of experience generating HTML, which is a sufficiently similar experience. In fact I just hacked together a prototype for an HTML generating library which uses nested function calls instead of nested with-statements to represent the nested structure of HTML. I think I like that approach better because it makes it easier to do some parts of the generation out of order: I can construct various sub-lists of HTML elements and then splice them together. E.g.

row1 = []
row2 = []
for x in :
    row1.append(libhtml.td(f1(x)))
    row2.append(libhtml.td(f2(x)))
t = libhtml.table(libhtml.tr(row1), libhtml.tr(row2))
# Etc.
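Guido's `libhtml` was a private prototype, so the snippet above can't be run as-is. The following is a minimal, hypothetical stand-in (the `_tag` and `_render` helpers are invented for illustration, not Guido's actual library) showing the same nested-call, build-rows-then-splice style:

```python
# Hypothetical stand-in for the "libhtml" prototype described above.
# Each tag helper returns a rendered string; nested lists are flattened.

def _render(content):
    # Flatten nested lists of already-rendered fragments into one string.
    if isinstance(content, str):
        return content
    return "".join(_render(c) for c in content)

def _tag(name):
    def make(*children):
        return "<%s>%s</%s>" % (name, _render(list(children)), name)
    return make

td, tr, table = _tag("td"), _tag("tr"), _tag("table")

# Building the two rows independently, out of order, then splicing them:
data = [1, 2, 3]
row1 = [td(str(x)) for x in data]        # stand-in for f1(x)
row2 = [td(str(x * x)) for x in data]    # stand-in for f2(x)
t = table(tr(row1), tr(row2))
print(t)
```

The point of the style survives the toy implementation: `row1` and `row2` are assembled as plain lists first and only wrapped in `tr`/`table` at the end, which is exactly the out-of-order generation the with-statement version makes awkward.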
-- --Guido van Rossum (python.org/~guido) From janssen at parc.com Mon Mar 7 18:17:20 2011 From: janssen at parc.com (Bill Janssen) Date: Mon, 7 Mar 2011 09:17:20 PST Subject: [Python-ideas] Bring back callable() In-Reply-To: References: <20101124000104.573a85a9@pitrou.net> Message-ID: <90271.1299518240@parc.com> Terry Reedy wrote: > On 11/23/2010 6:01 PM, Antoine Pitrou wrote: > > > The substitute of writing `isinstance(x, collections.Callable)` is > > not good, 1) because it's wordier 2) because collections is really not > > an intuitive place where to look for a Callable ABC. > > I think it should be in the abc module, along with WeakSet Me too. Bill From jsbueno at python.org.br Mon Mar 7 21:09:24 2011 From: jsbueno at python.org.br (Joao S. O. Bueno) Date: Mon, 7 Mar 2011 17:09:24 -0300 Subject: [Python-ideas] Bring back callable() In-Reply-To: <90271.1299518240@parc.com> References: <20101124000104.573a85a9@pitrou.net> <90271.1299518240@parc.com> Message-ID: On Mon, Mar 7, 2011 at 2:17 PM, Bill Janssen wrote: > Terry Reedy wrote: > >> On 11/23/2010 6:01 PM, Antoine Pitrou wrote: >> >> > The substitute of writing `isinstance(x, collections.Callable)` is >> > not good, 1) because it's wordier 2) because collections is really not >> > an intuitive place where to look for a Callable ABC. >> >> I think it should be in the abc module, along with WeakSet > > Me too. I was not around here, and although I looked for it, I didn't find this on the web: What was the reasoning for removing "callable" in the first place? js -><- > > Bill > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > http://mail.python.org/mailman/listinfo/python-ideas > From jimjjewett at gmail.com Mon Mar 7 21:10:25 2011 From: jimjjewett at gmail.com (Jim Jewett) Date: Mon, 7 Mar 2011 15:10:25 -0500 Subject: [Python-ideas] Make-statement [Re: Different interface for namedtuple?]
In-Reply-To: References: <265ECDA0-F91F-488F-821E-596EB84C4951@gmail.com> <4D72C4AC.7040302@canterbury.ac.nz> Message-ID: On Mon, Mar 7, 2011 at 11:33 AM, Guido van Rossum wrote: >>>> On Sat, Mar 5, 2011 at 10:27 PM, Carl M. Johnson wrote:> >>>> > People have made the with statement >>>> > into an XML generator >>>> > > I don't think it's fine. The quoted example looks > like it depends too much on implicit side effects. What side effects? > I find it useful when I can clearly visualize the > alternative code, which typically involves extra flow > control such as a try/finally block. I had thought of the finally as the element-close tags... > In this example it looks more like the > alternative is just more calls. Sure, but the close tag comes arbitrarily later, just like the close of a file -- which was one of the canonical use cases. > ... I just hacked together a prototype for an HTML > generating library which uses nested function calls > instead of nested with-statements to represent > the nested structure of HTML. I think I like that > approach better because it makes it easier to do > some parts of the generation out of order: I can > construct various sub-lists of HTML elements > and then splice them together. E.g. > > row1 = [] > row2 = [] > for x in : > row1.append(libhtml.td(f1(x))) > row2.append(libhtml.td(f2(x))) > t = libhtml.table(libhtml.tr(row1), libhtml.tr(row2)) I think the main appeal of the above library is that you *don't* have to follow this pattern -- you can create the broad outline before you have the innermost details. -jJ From fuzzyman at gmail.com Mon Mar 7 21:14:55 2011 From: fuzzyman at gmail.com (Michael Foord) Date: Mon, 7 Mar 2011 20:14:55 +0000 Subject: [Python-ideas] Bring back callable() In-Reply-To: References: <20101124000104.573a85a9@pitrou.net> <90271.1299518240@parc.com> Message-ID: On 7 March 2011 20:09, Joao S. O.
Bueno wrote: > On Mon, Mar 7, 2011 at 2:17 PM, Bill Janssen wrote: > > Terry Reedy wrote: > > > >> On 11/23/2010 6:01 PM, Antoine Pitrou wrote: > >> > >> > The substitute of writing `isinstance(x, collections.Callable)` is > >> > not good, 1) because it's wordier 2) because collections is really not > >> > an intuitive place where to look for a Callable ABC. > >> > >> I think it should be in the abc module, along with WeakSet > > > > Me too. > Too late. :-) > > I was not around here, and although I looked for it, I didn't find this on the > web: > > What was the reasoning for removing "callable" in the first place? > > It didn't always do exactly the right thing (but was usually "good enough") and was thought easy to replace with a more accurate alternative (isinstance(foo, abc.Callable) I *think*) which proved annoying in practice. All the best, Michael js > -><- > > > > > Bill > > _______________________________________________ > > Python-ideas mailing list > > Python-ideas at python.org > > http://mail.python.org/mailman/listinfo/python-ideas > > > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > http://mail.python.org/mailman/listinfo/python-ideas > -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From janssen at parc.com Mon Mar 7 22:06:23 2011 From: janssen at parc.com (Bill Janssen) Date: Mon, 7 Mar 2011 13:06:23 PST Subject: [Python-ideas] Bring back callable() In-Reply-To: References: <20101124000104.573a85a9@pitrou.net> <90271.1299518240@parc.com> Message-ID: <96397.1299531983@parc.com> Michael Foord wrote: > > >> I think it should be in the abc module, along with WeakSet > > > > > > Me too. > > > > Too late.
:-) My mail tool seems to have an instinctive sense of when I've not had coffee yet, and it takes that as an opportunity to show me old unread email from last year :-). Bill From steve at pearwood.info Mon Mar 7 23:16:01 2011 From: steve at pearwood.info (Steven D'Aprano) Date: Tue, 8 Mar 2011 09:16:01 +1100 Subject: [Python-ideas] Bring back callable() In-Reply-To: <96397.1299531983@parc.com> References: <20101124000104.573a85a9@pitrou.net> <96397.1299531983@parc.com> Message-ID: <201103080916.01343.steve@pearwood.info> On Tue, 8 Mar 2011 08:06:23 am Bill Janssen wrote: > Michael Foord wrote: > > > >> I thnk it should be in the abc module, along with WeakSet > > > > > > > > Me too. > > > > Too late. :-) > > My mail tool seems to have an instinctive sense of when I've not had > coffee yet, and it takes that as an opportunity to show me old unread > email from last year :-). It wasn't just you. I got it too. -- Steven D'Aprano From greg.ewing at canterbury.ac.nz Tue Mar 8 00:46:37 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Tue, 08 Mar 2011 12:46:37 +1300 Subject: [Python-ideas] Make-statement [Re: Different interface for namedtuple?] In-Reply-To: References: <265ECDA0-F91F-488F-821E-596EB84C4951@gmail.com> <4D72C4AC.7040302@canterbury.ac.nz> Message-ID: <4D756E5D.1020302@canterbury.ac.nz> Jim Jewett wrote: > I had thought of the finally as the element-close tags... But generation of the closing tags doesn't really have to be done in a finally block, unless you're somehow wanting to support throwing an exception in the middle of your xml generation and still have it generate well-formed xml. In the absence of such a requirement, using a with-statement seems like overkill. -- Greg From larry at hastings.org Tue Mar 8 01:56:25 2011 From: larry at hastings.org (Larry Hastings) Date: Mon, 07 Mar 2011 19:56:25 -0500 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) 
In-Reply-To: <4D6FFDF9.3060507@canterbury.ac.nz> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> Message-ID: <4D757EB9.8080308@hastings.org> On 03/03/2011 03:45 PM, Greg Ewing wrote: > I think we should have assignment decorators. > > @decorator > lhs = rhs > > would be equivalent to > > lhs = decorator('lhs', rhs) > I timidly propose an alternate syntax. What I don't like about the above proposal: assignment is no longer a one-liner. So let's try it inline. Example 1: lhs = @decorator is equivalent to lhs = decorator(classobject, 'lhs', None) Example 2: lhs = @dec1 @dec2 is equivalent to lhs = dec2(classobject, 'lhs', dec1(classobject, 'lhs', None)) Example 3: lhs = @dec1('string', 3.14) is equivalent to lhs = dec1('string', 3.14)(classobject, 'lhs', None) (Here you are presumed to return a callable closure with the default values baked in.) Outside class scope, classobject is None. I think you want the classobject there so you can cache information in it, like using the variable declarations to build up a per-class database schema. I'm not confident any of this is a good idea; luckily this isn't the python-good-ideas-only list. Phew! /larry/ From ckaynor at zindagigames.com Tue Mar 8 02:08:46 2011 From: ckaynor at zindagigames.com (Chris Kaynor) Date: Mon, 7 Mar 2011 17:08:46 -0800 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: <4D757EB9.8080308@hastings.org> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D757EB9.8080308@hastings.org> Message-ID: On Mon, Mar 7, 2011 at 4:56 PM, Larry Hastings wrote: > > On 03/03/2011 03:45 PM, Greg Ewing wrote: > >> I think we should have assignment decorators. >> >> @decorator >> lhs = rhs >> >> would be equivalent to >> >> lhs = decorator('lhs', rhs) >> >> > I timidly propose an alternate syntax. 
What I don't like about the above > proposal: assignment is no longer a one-liner. So let's try it inline. > > Example 1: > > lhs = @decorator > > is equivalent to > > lhs = decorator(classobject, 'lhs', None) > > > Example 2: > > lhs = @dec1 @dec2 > > is equivalent to > > lhs = dec2(classobject, 'lhs', dec1(classobject, 'lhs', None)) > > > Example 3: > > lhs = @dec1('string', 3.14) > > is equivalent to > > lhs = dec1('string', 3.14)(classobject, 'lhs', None) > You seemed to have missed the example of: lhs = @dec1 123 which would produce dec1(classobject, 'lhs', 123) correct? If so, the separation between dec1 and 123 seems too little - it looks like typos like: lhs = @dec1123 lhs = @dec123 would be easy to make and difficult to detect. > > (Here you are presumed to return a callable closure with the default values > baked in.) > > Outside class scope, classobject is None. I think you want the classobject > there so you can cache information in it, like using the variable > declarations to build up a per-class database schema. > > > I'm not confident any of this is a good idea; luckily this isn't the > python-good-ideas-only list. Phew! > > > /larry/ > > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > http://mail.python.org/mailman/listinfo/python-ideas > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsbueno at python.org.br Tue Mar 8 02:08:53 2011 From: jsbueno at python.org.br (Joao S. O. Bueno) Date: Mon, 7 Mar 2011 22:08:53 -0300 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: <4D757EB9.8080308@hastings.org> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D757EB9.8080308@hastings.org> Message-ID: On Mon, Mar 7, 2011 at 9:56 PM, Larry Hastings wrote: > Outside class scope, classobject is None. 
?I think you want the classobject > there so you can cache information in it, like using the variable > declarations to build up a per-class database schema. > What you are calling "classobject" simply don't exist at this stage wen creating a new class. Although, the local variables dictionary collect the information needed to create the class. We could think if there are actually use cases for having this dictionary alongside the variable name to the callable to use. js -><- This is Python, where function calls has "(" . From cs at zip.com.au Tue Mar 8 02:10:27 2011 From: cs at zip.com.au (Cameron Simpson) Date: Tue, 8 Mar 2011 12:10:27 +1100 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: <4D7034D8.4030103@canterbury.ac.nz> References: <4D7034D8.4030103@canterbury.ac.nz> Message-ID: <20110308011027.GA24112@cskk.homeip.net> On 04Mar2011 13:39, Greg Ewing wrote: | Guido van Rossum wrote: | | >Eek! All other uses of 'as' in Python have the target on the right. | | Well, it doesn't necessarily have to be 'as'. It could be | | def Fred = namedtuple('x y z') | | but that wouldn't give as much of a clue that something | special is happening in the middle. | | Is the reversal really all that much of a problem? It makes | sense when you read it as an English sentence: "Define | Fred as a named tuple." Just like all the other uses of | "as" mean what they seem to mean. No, just the reverse of what all the other uses of "as" mean:-( This: def namedtuple('x y z') as Fred is consistent with current uses. It is ugly/confusing to my eye, but I really need to read this thread more thoroughly. | Given that each existing use of 'as' has its own unique | quirks, forcing 'as' to always bind on the right regardless | of anything else might be seen as a foolish consistency. Doesn't seem foolish to me. Seems grammatically sound and highly desirable. 
Cheers, -- Cameron Simpson DoD#743 http://www.cskk.ezoshosting.com/cs/ Hofstadter's Law: It always takes longer than you expect, even when you take into account Hofstadter's Law. - Douglas Hosfstadter, Godel, Escher, Bach: an Eternal Golden Braid From larry at hastings.org Tue Mar 8 02:24:49 2011 From: larry at hastings.org (Larry Hastings) Date: Mon, 07 Mar 2011 20:24:49 -0500 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D757EB9.8080308@hastings.org> Message-ID: <4D758561.1060901@hastings.org> On 03/07/2011 08:08 PM, Chris Kaynor wrote: > You seemed to have missed the example of: > > lhs = @dec1 123 > > which would produce > > dec1(classobject, 'lhs', 123) > > correct? No, that was deliberately not part of my proposal. My thinking: most of the time, this will be used in class scope for creating objects that have their own reasonable default values. Why force people to type in a default value that 99% of the time you don't care about? Allowing you to override this default value would be done by making the decorator callable and return a callable, like lhs = @dec1(123) Also, I thought the "@dec1 123" style just looked too weird. I admit it isn't that much weirder than what I already proposed. However, if there are great uses for assignment decorators outside class scope, perhaps we need this original rhs. Other folks have pointed out--what does something like this mean? c = Class() c.member = @dec Does dec get 'c.member'? Is this useful for anything? Shall we narrow the use of assignment decorators to just class scope? On 03/07/2011 08:08 PM, Joao S. O. Bueno wrote: > What you are calling "classobject" simply don't exist at this stage > wen creating a new class. Toldja it probably wasn't a good idea! > This is Python, where function calls has "(" . 
I'm not sure if this is part of a default signature or a rejoinder to me. If the latter, let me point out that class/function decorators are already function calls without parentheses; it seems natural enough that assignment decorators would similarly lack them. /larry/ From raymond.hettinger at gmail.com Tue Mar 8 04:50:41 2011 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Mon, 7 Mar 2011 19:50:41 -0800 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: <4D757EB9.8080308@hastings.org> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D757EB9.8080308@hastings.org> Message-ID: <299818FC-EC33-4F7F-B6AD-2193BE6BE8CC@gmail.com> On Mar 7, 2011, at 4:56 PM, Larry Hastings wrote: > > On 03/03/2011 03:45 PM, Greg Ewing wrote: >> I think we should have assignment decorators. >> >> @decorator >> lhs = rhs >> >> would be equivalent to >> >> lhs = decorator('lhs', rhs) >> > > I timidly propose an alternate syntax. What I don't like about the above proposal: assignment is no longer a one-liner. So let's try it inline. > > Example 1: > > lhs = @decorator > > is equivalent to > > lhs = decorator(classobject, 'lhs', None) > . . . > > I'm not confident any of this is a good idea; luckily this isn't the python-good-ideas-only list. Phew! Just for the fun of it, here's my offering:

    a := gizmo(arg1, arg2)

is equivalent to

    a = gizmo(arg1, arg2, __name__='a')

Advantages:

* Doesn't rewrite the order of arguments
* Keep the current '@' notation unambiguous
* Still looks like an assignment.
* Would give a meaningful error message if gizmo() weren't expecting a name
* Doesn't look like perl
* Doesn't twist your mind into a pretzel
* No new keywords
* Easy to adapt any existing tools that need to know their own name
* Doesn't seem like magic or spooky action at a distance
* The program still looks like a Python program

Disadvantage:

* I'm still not sure that the "problem" is worth solving.
* Between this and function annotations, it looks like Pascal is coming back.

Raymond From p.f.moore at gmail.com Tue Mar 8 10:13:41 2011 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 8 Mar 2011 09:13:41 +0000 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: <299818FC-EC33-4F7F-B6AD-2193BE6BE8CC@gmail.com> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D757EB9.8080308@hastings.org> <299818FC-EC33-4F7F-B6AD-2193BE6BE8CC@gmail.com> Message-ID: On 8 March 2011 03:50, Raymond Hettinger wrote: > Just for the fun of it, here's my offering: > > > a := gizmo(arg1, arg2) > > is equivalent to > > a = gizmo(arg1, arg2, __name__='a') This is my favourite so far. I was wondering when someone would mention :=... > > Advantages: > > * Doesn't rewrite the order of arguments > * Keep the current '@' notation unambiguous > * Still looks like an assignment. > * Would give a meaningful error message if gizmo() weren't expecting a name Good point > * Doesn't look like perl > * Doesn't twist your mind into a pretzel Speak for yourself :-) > * No new keywords > * Easy to adapt any existing tools that need to know their own name > * Doesn't seem like magic or spooky action at a distance > * The program still looks like a Python program > > Disadvantage: > > * I'm still not sure that the "problem" is worth solving. > * Between this and function annotations, it looks like Pascal is coming back. Didn't Algol also use :=?
If so, we can think of it as Algol coming back, and add call by name as well, for bonus fun :-) * Also, still doesn't give any guidance as to what should be allowed on the LHS (which is orthogonal to the syntax discussion, I know, but still a significant sticking point for any proposal). My view is bare names only, because if you allow anything else, the called function ends up having to parse a string representation. * Actually, also add that the syntax is very (surprisingly) restrictive. The RHS *has* to be a bare function call, near enough. So that makes it even less like an assignment... I understand the frustration that drives people to want a solution, but I don't think any of the proposals so far are unambiguous enough to qualify (even if the problem were important enough to justify new syntax). Paul. From steve at pearwood.info Tue Mar 8 14:07:33 2011 From: steve at pearwood.info (Steven D'Aprano) Date: Wed, 09 Mar 2011 00:07:33 +1100 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: <299818FC-EC33-4F7F-B6AD-2193BE6BE8CC@gmail.com> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D757EB9.8080308@hastings.org> <299818FC-EC33-4F7F-B6AD-2193BE6BE8CC@gmail.com> Message-ID: <4D762A15.3090105@pearwood.info> Raymond Hettinger wrote: > Just for the fun of it, here's my offering: > > > a := gizmo(arg1, arg2) > > is equivalent to > > a = gimzo(arg1, arg2, __name__='a') For what it's worth, this is the only proposal so far that I like. > Disadvantage: > > * I'm still not sure that the "problem" is worth solving. > * Between this and function annotations, it looks like Pascal is coming back. Surely that's an advantage? 
*wink* Cut-my-teeth-on-Pascal-and-it-didn't-do-me-any-harm-ly y'rs, -- Steven From anikom15 at gmail.com Tue Mar 8 15:35:55 2011 From: anikom15 at gmail.com (Westley =?ISO-8859-1?Q?Mart=EDnez?=) Date: Tue, 08 Mar 2011 06:35:55 -0800 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: <299818FC-EC33-4F7F-B6AD-2193BE6BE8CC@gmail.com> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D757EB9.8080308@hastings.org> <299818FC-EC33-4F7F-B6AD-2193BE6BE8CC@gmail.com> Message-ID: <1299594955.27260.2.camel@localhost.localdomain> On Mon, 2011-03-07 at 19:50 -0800, Raymond Hettinger wrote: > On Mar 7, 2011, at 4:56 PM, Larry Hastings wrote: > > > > > On 03/03/2011 03:45 PM, Greg Ewing wrote: > >> I think we should have assignment decorators. > >> > >> @decorator > >> lhs = rhs > >> > >> would be equivalent to > >> > >> lhs = decorator('lhs', rhs) > >> > > > > I timidly propose an alternate syntax. What I don't like about the above proposal: assignment is no longer a one-liner. So let's try it inline. > > > > Example 1: > > > > lhs = @decorator > > > > is equivalent to > > > > lhs = decorator(classobject, 'lhs', None) > > > . . . > > > > I'm not confident any of this is a good idea; luckily this isn't the python-good-ideas-only list. Phew! > > Just for the fun of it, here's my offering: > > > a := gizmo(arg1, arg2) > > is equivalent to > > a = gimzo(arg1, arg2, __name__='a') > > Advantages: > > * Doesn't rewrite the order of arguments > * Keep the current '@' notation unambiguous > * Still looks like an assignment. 
> * Would give a meaningful error message if gizmo() weren't expecting a name > * Doesn't look like perl > * Doesn't twist your mind into a pretzel > * No new keywords > * Easy to adapt any existing tools that need to know their own name > * Doesn't seem like magic or spooky action at a distance > * The program still looks like a Python program > > Disadvantage: > > * I'm still not sure that the "problem" is worth solving. > * Between this and function annotations, it looks like Pascal is coming back. > > > Raymond This is the worst suggestion I've seen so far. := is ambiguous, gives no hint as to what it is doing, and its use in other languages would add confusion. := could be used for any kind of "funky assignment". := also just looks weird. From larry at hastings.org Tue Mar 8 20:22:50 2011 From: larry at hastings.org (Larry Hastings) Date: Tue, 08 Mar 2011 14:22:50 -0500 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: <299818FC-EC33-4F7F-B6AD-2193BE6BE8CC@gmail.com> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D757EB9.8080308@hastings.org> Message-ID: <4D76820A.1010007@hastings.org> On 03/07/2011 10:50 PM, Raymond Hettinger wrote: > Just for the fun of it, here's my offering: > > a := gizmo(arg1, arg2) > > is equivalent to > > a = gizmo(arg1, arg2, __name__='a') How do you call two assignment decorators with this syntax? > Advantages: > > * Keep the current '@' notation unambiguous I don't think any of the proposed annotation syntaxes have been ambiguous. > * The program still looks like a Python program A debatable matter of taste. For better or worse, decorators in Python use '@'--that ship has sailed. I'd suggest that adding a second, never-before-seen syntax just for assignment decoration seems less Pythonic to me than reusing '@' in a new context.
> Disadvantage: > > * I'm still not sure that the "problem" is worth solving. Hear hear! Certainly all the proposals so far have struck me as being worse than the problem. Including mine! I'd be very interested in someone demonstrating a killer app for this hypothetical feature, /larry/ From steve at pearwood.info Tue Mar 8 22:43:41 2011 From: steve at pearwood.info (Steven D'Aprano) Date: Wed, 09 Mar 2011 08:43:41 +1100 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: <4D76820A.1010007@hastings.org> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D757EB9.8080308@hastings.org> <299818FC-EC33-4F7F-B6AD-2193BE6BE8CC@gmail.com> <4D76820A.1010007@hastings.org> Message-ID: <4D76A30D.2090706@pearwood.info> Larry Hastings wrote: > > On 03/07/2011 10:50 PM, Raymond Hettinger wrote: >> Just for the fun of it, here's my offering: >> >> a := gizmo(arg1, arg2) >> >> is equivalent to >> >> a = gizmo(arg1, arg2, __name__='a') > > How do you call two assignment decorators with this syntax? The obvious way: a, b := gizmo(arg1, arg2), widget(arg3) # a, b = gizmo(arg1, arg2, __name__='a'), widget(arg3, __name__='b') A more important question: what happens if we do this? a := obj >> Advantages: >> >> * Keep the current '@' notation unambiguous > > I don't think any of the proposed annotation syntaxes have been ambiguous. Perhaps ambiguous is the wrong word, but it certainly overloads the meaning of @ and gives it a second, quite different meaning. So much so that I claim that "assignment decorator" is completely the wrong term to use to describe it. The feature we're discussing is not an enhanced form of decoration, it is an enhanced form of *assignment*. I think Raymond has got it right, the only appropriate syntax is a variation of the assignment syntax.
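To make Raymond's proposed equivalence concrete, here is the desugared form written out by hand as it would have to be spelled today. `gizmo` is the thread's hypothetical name-aware factory; this class is only a sketch to make the call runnable:

```python
# 'gizmo' is hypothetical -- a factory that wants to know the name it is
# being bound to, as in the proposed syntax:  a := gizmo(arg1, arg2)
class gizmo:
    def __init__(self, arg1, arg2, __name__=None):
        self.args = (arg1, arg2)
        self.name = __name__  # the binding name the compiler would pass

# What the compiler would emit for:  a := gizmo(1, 2)
a = gizmo(1, 2, __name__='a')
print(a.name)  # -> a
```

The emulation shows both what the sugar buys (the name is written once instead of twice) and what it costs: every name-aware factory must grow an extra keyword parameter in its signature.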
In current Python, decoration refers to the idiom: x = something x = decorator(x) Before the @ syntax was available, we still used decorators like this: def x(): pass x = decorator(x) Notice that it's the *def* that gives the magic "knows its own name" behaviour, not the decorator. The decorator is just a function. You can decorate objects that aren't functions or classes, just without the advantage of nice syntax. Critical to the idea of decoration is that the decorator function is a function that wraps or otherwise modifies an existing object. The proposed behaviour is nothing like decoration. It doesn't wrap an existing object, but changes the compiler's behaviour when creating it in the first place. What is fundamental to this proposed functionality is the idea of the right hand side of assignment being aware of the name which is the target of that assignment. There are currently three things which already do this: * defining a class and assigning it to a name using the class keyword * defining a function and assigning it to a name using the def keyword * applying a decorator using the @ syntax What these three things have in common is *assignment*, not decoration. -- Steven From mwm at mired.org Tue Mar 8 23:02:10 2011 From: mwm at mired.org (Mike Meyer) Date: Tue, 8 Mar 2011 17:02:10 -0500 Subject: [Python-ideas] Literate python? Message-ID: <20110308170210.71762b78@bhuda.mired.org> Wild idea, swiped directly from haskell/ghc: How about making the python interpreter just a little bit smarter, to support literate programming? Add a 'literate python' mode, triggered by file type being .pyl, a '-l' option, or the interpreter being run as 'lpython', then the compiler does a little bit of filtering before compiling (and potentially saving .pyc/.pyo) the file. 
If the first non-white-space character after the shebang line (if present) is a backslash, then the compiler ignores lines until it sees a line consisting of \begin{code} (which could be the first line), then compiles lines until it sees a line consisting of \end{code}, after which it switches back to searching for \begin{code}. Otherwise, all lines (again, after the shebang line, if present) are treated as comments until the compiler sees a line starting with "> " (that's greater than followed by space) following an empty line, which causes the compiler to start stripping the "> " from lines and compiles them until it finds a line that doesn't start with "> ". http://www.mired.org/consulting.html Independent Software developer/SCM consultant, email for more information. O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From ryan.freckleton at gmail.com Tue Mar 8 23:07:58 2011 From: ryan.freckleton at gmail.com (Ryan Freckleton) Date: Tue, 8 Mar 2011 15:07:58 -0700 Subject: [Python-ideas] Literate python? In-Reply-To: <20110308170210.71762b78@bhuda.mired.org> References: <20110308170210.71762b78@bhuda.mired.org> Message-ID: I absolutely love the idea of literate programming and have investigated several existing projects that attempt this. There already exist several third-party packages for literate programming in python: - PyLit (http://pylit.berlios.de/) - PyWeb (http://pywebtool.sourceforge.net/) - Leo (http://webpages.charter.net/edreamleo/front.html) PyLit approaches it by transforming between RestructuredText and source, which allows line numbers to match for debugging. PyWeb takes the more traditional noweb/tangle approach of creating source from a document. The biggest issue is making exceptions and debugging nice if you allow code re-ordering. Interpreter support may help with this, but I think there's still a lot to be explored by third party libraries (importlib, ast transforms, etc.) 
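As a quick illustration of how light-weight the proposed filtering is, here is a sketch of both modes in pure Python. It is a reading of Mike's description, not an implementation; in particular, replacing non-code lines with blank lines so that tracebacks still point at the right lines of the original file is my own assumption:

```python
def filter_literate(text):
    """Sketch of the proposed 'literate python' preprocessing step."""
    lines = text.splitlines(True)
    # Look past an optional shebang line for the style marker.
    body = lines[1:] if lines and lines[0].startswith('#!') else lines
    first = next((line for line in body if line.strip()), '')
    out = []
    if first.lstrip().startswith('\\'):
        # TeX style: only lines between \begin{code} and \end{code} are code.
        in_code = False
        for line in lines:
            if line.strip() == r'\begin{code}':
                in_code = True
                out.append('\n')
            elif line.strip() == r'\end{code}':
                in_code = False
                out.append('\n')
            else:
                out.append(line if in_code else '\n')
    else:
        # Bird-track style: "> " lines, begun after a blank line, are code.
        in_code = False
        prev_blank = True
        for line in lines:
            if line.startswith('> ') and (in_code or prev_blank):
                in_code = True
                out.append(line[2:])
            else:
                in_code = False
                out.append('\n')
            prev_blank = not line.strip()
    return ''.join(out)
```

A real implementation would hook something like this into the import machinery (e.g. via importlib) rather than calling exec() on the result by hand.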
I'm not sure if python-ideas is the appropriate venue, but if you'd like to develop this idea more, please feel free to email me. Cheers ===== --Ryan E. Freckleton On Tue, Mar 8, 2011 at 3:02 PM, Mike Meyer wrote: > Wild idea, swiped directly from haskell/ghc: > > How about making the python interpreter just a little bit smarter, to > support literate programming? Add a 'literate python' mode, triggered > by file type being .pyl, a '-l' option, or the interpreter being run > as 'lpython', then the compiler does a little bit of filtering before > compiling (and potentially saving .pyc/.pyo) the file. > > If the first non-white-space character after the shebang line (if > present) is a backslash, then the compiler ignores lines until it sees > a line consisting of \begin{code} (which could be the first line), > then compiles lines until it sees a line consisting of \end{code}, > after which it switches back to searching for \begin{code}. > > Otherwise, all lines (again, after the shebang line, if present) are > treated as comments until the compiler sees a line starting with "> " > (that's greater than followed by space) following an empty line, which > causes the compiler to start stripping the "> " from lines and > compiles them until it finds a line that doesn't start with "> ". > > -- > Mike Meyer > http://www.mired.org/consulting.html > Independent Software developer/SCM consultant, email for more information. > > O< ascii ribbon campaign - stop html mail - www.asciiribbon.org > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > http://mail.python.org/mailman/listinfo/python-ideas > -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg.ewing at canterbury.ac.nz Tue Mar 8 23:18:09 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Wed, 09 Mar 2011 11:18:09 +1300 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) 
In-Reply-To: <20110308011027.GA24112@cskk.homeip.net> References: <4D7034D8.4030103@canterbury.ac.nz> <20110308011027.GA24112@cskk.homeip.net> Message-ID: <4D76AB21.4010206@canterbury.ac.nz> Cameron Simpson wrote: > def namedtuple('x y z') as Fred > > is consistent with current uses. I still think insisting that all uses of the word 'as' bind a name on the right is a foolish consistency. To my mind it's more important that the name being defined appear as close to the left as possible. This is something that Pascal got right and C got wrong. Putting the defined name on the left makes it easier to visually scan down the code looking for the definition of something. It also makes it easier to write regular expressions etc. that search for definitions. > Seems grammatically sound and highly > desirable. But the other way is equally grammatically sound -- in fact it's even more so in this case. To my ears, "define x as somethingorother" sounds much more natural than "define somethingorother as x". -- Greg From greg.ewing at canterbury.ac.nz Tue Mar 8 23:35:33 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Wed, 09 Mar 2011 11:35:33 +1300 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: <299818FC-EC33-4F7F-B6AD-2193BE6BE8CC@gmail.com> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D757EB9.8080308@hastings.org> <299818FC-EC33-4F7F-B6AD-2193BE6BE8CC@gmail.com> Message-ID: <4D76AF35.4010607@canterbury.ac.nz> Raymond Hettinger wrote: > a := gizmo(arg1, arg2) I thought about that, but I already have another secret plan that I'd like to reserve := for. Also it doesn't provide any clue that naming is involved, as 'def' would. 
-- Greg From greg.ewing at canterbury.ac.nz Wed Mar 9 00:38:45 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Wed, 09 Mar 2011 12:38:45 +1300 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: <4D76A30D.2090706@pearwood.info> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D757EB9.8080308@hastings.org> <299818FC-EC33-4F7F-B6AD-2193BE6BE8CC@gmail.com> <4D76820A.1010007@hastings.org> <4D76A30D.2090706@pearwood.info> Message-ID: <4D76BE05.1050903@canterbury.ac.nz> Steven D'Aprano wrote: > The feature we're discussing is not an enhanced form > of decoration, it is an enhanced form of *assignment*. I think Raymond > has got it right, the only appropriate syntax is a variation of the > assignment syntax. I don't entirely agree with that. There are currently two existing constructs in Python for creating objects that know their own name, 'def' and 'class'. They don't look like forms of assignment -- rather, they follow the pattern This leads me to think that any further such constructs should follow a similar pattern. -- Greg From larry at hastings.org Wed Mar 9 00:50:59 2011 From: larry at hastings.org (Larry Hastings) Date: Tue, 08 Mar 2011 18:50:59 -0500 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: <4D76BE05.1050903@canterbury.ac.nz> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D757EB9.8080308@hastings.org> <299818FC-EC33-4F7F-B6AD-2193BE6BE8CC@gmail.com> <4D76820A.1010007@hastings.org> <4D76A30D.2090706@pearwood.info> <4D76BE05.1050903@canterbury.ac.nz> Message-ID: <4D76C0E3.6020101@hastings.org> On 03/08/2011 06:38 PM, Greg Ewing wrote: > There are currently two > existing constructs in Python for creating objects that know > their own name, 'def' and 'class'. 
They don't look like forms > of assignment -- rather, they follow the pattern > > > > This leads me to think that any further such constructs should > follow a similar pattern. > I like this avenue of thought. But here we run into a problem: what should the keyword be? The same criteria that makes a good keyword (short, meaningful, in this case a noun) also makes for a good identifier. So any good obvious choice ("var", "field", "value") is going to have lots of uses as an identifier in existing code. Not that this proposal was doing that well so far, /larry/ From tjreedy at udel.edu Wed Mar 9 02:34:08 2011 From: tjreedy at udel.edu (Terry Reedy) Date: Tue, 08 Mar 2011 20:34:08 -0500 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: <4D76BE05.1050903@canterbury.ac.nz> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D757EB9.8080308@hastings.org> <299818FC-EC33-4F7F-B6AD-2193BE6BE8CC@gmail.com> <4D76820A.1010007@hastings.org> <4D76A30D.2090706@pearwood.info> <4D76BE05.1050903@canterbury.ac.nz> Message-ID: On 3/8/2011 6:38 PM, Greg Ewing wrote: > Steven D'Aprano wrote: >> The feature we're discussing is not an enhanced form of decoration, it >> is an enhanced form of *assignment*. I think Raymond has got it right, >> the only appropriate syntax is a variation of the assignment syntax. > > I don't entirely agree with that. There are currently two > existing constructs in Python for creating objects that know > their own name, 'def' and 'class'. They don't look like forms > of assignment -- rather, they follow the pattern > > > > This leads me to think that any further such constructs should > follow a similar pattern. 
There is a third such construct and it follows the same pattern in its basic form import name All are syntactic sugar for name = makeob('name', *arg, **kwds) For classes and modules there are visibly such alternatives: cls = type('cls',bases, classdict) mod = __import__('mod', ...) There is also a function in inspect that makes functions. So your point is correct: name = makeob('name', ...) # becomes keywd name .... but we do not really want a new keyword for every new type of object with a definition name. Can we do with just one? -- Terry Jan Reedy From tjreedy at udel.edu Wed Mar 9 03:01:25 2011 From: tjreedy at udel.edu (Terry Reedy) Date: Tue, 08 Mar 2011 21:01:25 -0500 Subject: [Python-ideas] Literate python? In-Reply-To: <20110308170210.71762b78@bhuda.mired.org> References: <20110308170210.71762b78@bhuda.mired.org> Message-ID: On 3/8/2011 5:02 PM, Mike Meyer wrote: > Wild idea, swiped directly from haskell/ghc: > > How about making the python interpreter just a little bit smarter, It already is ;-) Though not exactly well known, expression statements that consist of a literal (number or string) are ignored -- except for string literals in docstring position (and then, they are attached as attributes, rather than being in the code object. def f(): 'doc' # to .__doc__ 1 # ignored (1,2,3) # will not be ignored, even though constand and unused # comments are ignored 'same as comment' ''' multiline comment ''' from dis import dis dis(f) >>> 4 0 LOAD_CONST 5 ((1, 2, 3)) 3 POP_TOP 10 4 LOAD_CONST 4 (None) 7 RETURN_VALUE > If the first non-white-space character after the shebang line (if > present) is a backslash, then the compiler ignores lines until it sees > a line consisting of \begin{code} (which could be the first line), > then compiles lines until it sees a line consisting of \end{code}, > after which it switches back to searching for \begin{code}. So this appears unnecessary. Just use quotes. 
The main problem is that program editors are generally not smart enough to do auto text wrapping within multiline strings. -- Terry Jan Reedy From jimjjewett at gmail.com Wed Mar 9 03:10:41 2011 From: jimjjewett at gmail.com (Jim Jewett) Date: Tue, 8 Mar 2011 21:10:41 -0500 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: <4D76C0E3.6020101@hastings.org> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D757EB9.8080308@hastings.org> <299818FC-EC33-4F7F-B6AD-2193BE6BE8CC@gmail.com> <4D76820A.1010007@hastings.org> <4D76A30D.2090706@pearwood.info> <4D76BE05.1050903@canterbury.ac.nz> <4D76C0E3.6020101@hastings.org> Message-ID: On Tue, Mar 8, 2011 at 6:50 PM, Larry Hastings wrote: > On 03/08/2011 06:38 PM, Greg Ewing wrote: >> existing constructs in Python for creating objects that know >> their own name, 'def' and 'class'. ... >> >... criteria that makes a good keyword (short, > meaningful, in this case a noun) also makes for a > good identifier. It doesn't have to be a noun; the documentation already refers to binding names, so ... bind x = whatever(..., __name__=x) Though I personally suspect that if the name of the variable is needed, it is really a slot rather than a discardable name, and it really *should* be a slightly different object -- a decorable pointer instead of just a pointer. boundname x = whatever(..., __name__=x) Obviously, the interpreter writes the __name__=x for you, or the proposal loses value. Note that once there is such an object (type boundname) decorators do make sense again... @Str boundname x = "initval" -jJ From ncoghlan at gmail.com Wed Mar 9 05:43:26 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 8 Mar 2011 23:43:26 -0500 Subject: [Python-ideas] Make-statement [Re: Different interface for namedtuple?]
In-Reply-To: <4D756E5D.1020302@canterbury.ac.nz> References: <265ECDA0-F91F-488F-821E-596EB84C4951@gmail.com> <4D72C4AC.7040302@canterbury.ac.nz> <4D756E5D.1020302@canterbury.ac.nz> Message-ID: On Mon, Mar 7, 2011 at 6:46 PM, Greg Ewing wrote: > Jim Jewett wrote: > >> I had thought of the finally as the element-close tags... > > But generation of the closing tags doesn't really have to > be done in a finally block, unless you're somehow wanting > to support throwing an exception in the middle of your > xml generation and still have it generate well-formed xml. > > In the absence of such a requirement, using a with-statement > seems like overkill. You don't use it as a finally block for that style of thing - if you hit any exception, you just reraise it (really easy with @contextmanager - simply don't put a try/finally around the yield statement). Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From stephen at xemacs.org Wed Mar 9 07:39:24 2011 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Wed, 09 Mar 2011 15:39:24 +0900 Subject: [Python-ideas] Literate python? In-Reply-To: <20110308170210.71762b78@bhuda.mired.org> References: <20110308170210.71762b78@bhuda.mired.org> Message-ID: <87bp1kkdb7.fsf@uwakimon.sk.tsukuba.ac.jp> Mike Meyer writes: > Wild idea, swiped directly from haskell/ghc: Note that at least the Darcs people are purging literate Haskell from their code base, I'm not sure about the rationale though. From masklinn at masklinn.net Wed Mar 9 09:06:31 2011 From: masklinn at masklinn.net (Masklinn) Date: Wed, 9 Mar 2011 09:06:31 +0100 Subject: [Python-ideas] Literate python? 
In-Reply-To: References: <20110308170210.71762b78@bhuda.mired.org> Message-ID: On 9 mars 2011, at 03:01, Terry Reedy wrote: > On 3/8/2011 5:02 PM, Mike Meyer wrote: >> Wild idea, swiped directly from haskell/ghc: >> >> How about making the python interpreter just a little bit smarter, > > It already is ;-) > Though not exactly well known, expression statements that consist of a literal (number or string) are ignored -- except for string literals in docstring position (and then, they are attached as attributes, rather than being in the code object. > > def f(): > 'doc' # to .__doc__ > 1 # ignored > (1,2,3) # will not be ignored, even though constand and unused > # comments are ignored > 'same as comment' > ''' > multiline > comment > ''' > > from dis import dis > dis(f) > >>> > 4 0 LOAD_CONST 5 ((1, 2, 3)) > 3 POP_TOP > > 10 4 LOAD_CONST 4 (None) > 7 RETURN_VALUE > >> If the first non-white-space character after the shebang line (if >> present) is a backslash, then the compiler ignores lines until it sees >> a line consisting of \begin{code} (which could be the first line), >> then compiles lines until it sees a line consisting of \end{code}, >> after which it switches back to searching for \begin{code}. > > So this appears unnecessary. Just use quotes. > > The main problems is that program editors are generally not smart enough to do auto text wrapping within multiline strings. > > -- > Terry Jan Reedy > That's a far cry from lhs though (to say nothing of lit prog). Abusing doctests would be much closer to literate haskell (just about identical if you find a terse way to ignore outputs), if still a long way from knuth's literate programming. -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at pearwood.info Wed Mar 9 14:28:51 2011 From: steve at pearwood.info (Steven D'Aprano) Date: Thu, 10 Mar 2011 00:28:51 +1100 Subject: [Python-ideas] Literate python? 
In-Reply-To: <87bp1kkdb7.fsf@uwakimon.sk.tsukuba.ac.jp> References: <20110308170210.71762b78@bhuda.mired.org> <87bp1kkdb7.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: <4D778093.2020206@pearwood.info> Stephen J. Turnbull wrote: > Mike Meyer writes: > > > Wild idea, swiped directly from haskell/ghc: > > Note that at least the Darcs people are purging literate Haskell from > their code base, I'm not sure about the rationale though. "If it was hard to write, it should be hard to read" perhaps? *wink* -- Steven From p.f.moore at gmail.com Wed Mar 9 15:05:32 2011 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 9 Mar 2011 14:05:32 +0000 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D757EB9.8080308@hastings.org> <299818FC-EC33-4F7F-B6AD-2193BE6BE8CC@gmail.com> <4D76820A.1010007@hastings.org> <4D76A30D.2090706@pearwood.info> <4D76BE05.1050903@canterbury.ac.nz> Message-ID: On 9 March 2011 01:34, Terry Reedy wrote: > So your point is correct: > > name = makeob('name', ...) # becomes > > keywd name .... > > but we do not really want a new keyword for every new type of object with a > definition name. Can we do with just one? >From what I recall, that's the basic idea behind the "make keyword" PEP. Sorry, I can't recall the PEP number or details here. It was rejected at the time, but it has been suggested (by Guido, I think!) that it might be worth reviving it. Paul. From ncoghlan at gmail.com Wed Mar 9 16:15:26 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 9 Mar 2011 10:15:26 -0500 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) 
In-Reply-To: References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D757EB9.8080308@hastings.org> <299818FC-EC33-4F7F-B6AD-2193BE6BE8CC@gmail.com> <4D76820A.1010007@hastings.org> <4D76A30D.2090706@pearwood.info> <4D76BE05.1050903@canterbury.ac.nz> Message-ID: On Wed, Mar 9, 2011 at 9:05 AM, Paul Moore wrote: > On 9 March 2011 01:34, Terry Reedy wrote: >> So your point is correct: >> >> name = makeob('name', ...) # becomes >> >> keywd name ....Si >> >> but we do not really want a new keyword for every new type of object with a >> definition name. Can we do with just one? > > >From what I recall, that's the basic idea behind the "make keyword" > PEP. Sorry, I can't recall the PEP number or details here. It was > rejected at the time, but it has been suggested (by Guido, I think!) > that it might be worth reviving it. PEP 359. The hard part is coming up with something that offers sufficient benefits over a custom metaclass to justify new syntax. Areas for exploration: 1. Easier to define (just define a function with an appropriate signature) 2. Ordered namespace by default 3. Significantly easier to compose (combining metaclasses can be something of a pain) Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From ubershmekel at gmail.com Wed Mar 9 22:39:04 2011 From: ubershmekel at gmail.com (Yuval Greenfield) Date: Wed, 9 Mar 2011 23:39:04 +0200 Subject: [Python-ideas] Python package file type Message-ID: Most of pypi is source distributions (zip/tgz), some of it is eggs, some are msi/exe windows installers. When I started with python I found it a bit confusing, which distribution do I need? "setup.py install" isn't obvious. Do you need an installer for each and every OS? Would it be nice if all of python packages behaved a bit more like firefox extensions (just as an example). Let's say all the packages are still zip/tgz files but with the extension "pypack". 
Whenever python 4 (a future version :) is installed it registers to handle the package filetype with the operating system. So on pypi each package can have one big and pretty "Install" button. After clicking - you get a nice and native installation wizard which asks: 1. Express Install (here it'll automatically install the package to all installed & compatible python versions) 2. Custom (here you can extract the package somewhere or other advanced package stuff) All packages would have the same installer framework that would simply be a GUI for setup.py. What do you guys think, could this be useful? --Yuval -------------- next part -------------- An HTML attachment was scrubbed... URL: From digitalxero at gmail.com Wed Mar 9 22:57:54 2011 From: digitalxero at gmail.com (Dj Gilcrease) Date: Wed, 9 Mar 2011 16:57:54 -0500 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: <299818FC-EC33-4F7F-B6AD-2193BE6BE8CC@gmail.com> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D757EB9.8080308@hastings.org> <299818FC-EC33-4F7F-B6AD-2193BE6BE8CC@gmail.com> Message-ID: On Mon, Mar 7, 2011 at 10:50 PM, Raymond Hettinger wrote: > Just for the fun of it, here's my offering: > ? ?a := gizmo(arg1, arg2) > > is equivalent to > > ? ?a = gimzo(arg1, arg2, __name__='a') Why not just use the normal decorator syntax where assignment decorators must have the following signature def attr_dec(obj, left, right): ... and leave it up to the specific attribute decorator to decide how it handles the various assignment types. This is a much more general purpose syntax that would let you achieve the goal of name passing as well as more. From matthew at woodcraft.me.uk Wed Mar 9 23:32:53 2011 From: matthew at woodcraft.me.uk (Matthew Woodcraft) Date: Wed, 09 Mar 2011 22:32:53 +0000 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) 
In-Reply-To: References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D757EB9.8080308@hastings.org> <299818FC-EC33-4F7F-B6AD-2193BE6BE8CC@gmail.com> <4D76820A.1010007@hastings.org> <4D76A30D.2090706@pearwood.info> <4D76BE05.1050903@canterbury.ac.nz> Message-ID: On 2011-03-09 01:34, Terry Reedy wrote: > All are syntactic sugar for > > name = makeob('name', *arg, **kwds) > > For classes and modules there are visibly such alternatives: > > cls = type('cls',bases, classdict) > mod = __import__('mod', ...) > > There is also a function in inspect that makes functions. > > So your point is correct: > > name = makeob('name', ...) # becomes > > keywd name .... > > but we do not really want a new keyword for every new type of object > with a definition name. Can we do with just one? Thinking along those lines suggests def(CharField) foo(size=10, nullable=False) as sugar for foo = CharField(size=10, nullable=False, __name__='foo') This seems quite a good parallel to class definitions, though less close to function definitions and import statements. -M- From greg.ewing at canterbury.ac.nz Thu Mar 10 00:01:43 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 10 Mar 2011 12:01:43 +1300 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D757EB9.8080308@hastings.org> <299818FC-EC33-4F7F-B6AD-2193BE6BE8CC@gmail.com> <4D76820A.1010007@hastings.org> <4D76A30D.2090706@pearwood.info> <4D76BE05.1050903@canterbury.ac.nz> Message-ID: <4D7806D7.8070804@canterbury.ac.nz> Terry Reedy wrote: > but we do not really want a new keyword for every new type of object > with a definition name. Can we do with just one? Well, my suggestion is to re-use 'def', whose English meaning is much more general than its current use in Python. 
I don't like 'make' so much, because I'd prefer something that sounds more declarative. -- Greg From python at mrabarnett.plus.com Thu Mar 10 00:12:20 2011 From: python at mrabarnett.plus.com (MRAB) Date: Wed, 09 Mar 2011 23:12:20 +0000 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D757EB9.8080308@hastings.org> <299818FC-EC33-4F7F-B6AD-2193BE6BE8CC@gmail.com> <4D76820A.1010007@hastings.org> <4D76A30D.2090706@pearwood.info> <4D76BE05.1050903@canterbury.ac.nz> Message-ID: <4D780954.4000408@mrabarnett.plus.com> On 09/03/2011 22:32, Matthew Woodcraft wrote: > On 2011-03-09 01:34, Terry Reedy wrote: >> All are syntactic sugar for >> >> name = makeob('name', *arg, **kwds) >> >> For classes and modules there are visibly such alternatives: >> >> cls = type('cls',bases, classdict) >> mod = __import__('mod', ...) >> >> There is also a function in inspect that makes functions. >> >> So your point is correct: >> >> name = makeob('name', ...) # becomes >> >> keywd name .... >> >> but we do not really want a new keyword for every new type of object >> with a definition name. Can we do with just one? > > Thinking along those lines suggests > > def(CharField) foo(size=10, nullable=False) > > as sugar for > > foo = CharField(size=10, nullable=False, __name__='foo') > > > This seems quite a good parallel to class definitions, though less close > to function definitions and import statements. > In class and function definitions the name immediately follows the keyword. To me it would be clearer to write it: def foo = CharField(size=10, nullable=False) or similar. From mwm at mired.org Thu Mar 10 00:34:51 2011 From: mwm at mired.org (Mike Meyer) Date: Wed, 9 Mar 2011 18:34:51 -0500 Subject: [Python-ideas] Literate python? 
In-Reply-To: References: <20110308170210.71762b78@bhuda.mired.org> Message-ID: <20110309183451.7ae2e97b@bhuda.mired.org> On Tue, 08 Mar 2011 21:01:25 -0500 Terry Reedy wrote: > On 3/8/2011 5:02 PM, Mike Meyer wrote: > > Wild idea, swiped directly from haskell/ghc: > > > > How about making the python interpreter just a little bit smarter, > > It already is ;-) > Though not exactly well known, expression statements that consist of a > literal (number or string) are ignored -- except for string literals in > docstring position (and then, they are attached as attributes, rather > than being in the code object. [Examples elided] > > If the first non-white-space character after the shebang line (if > > present) is a backslash, then the compiler ignores lines until it sees > > a line consisting of \begin{code} (which could be the first line), > > then compiles lines until it sees a line consisting of \end{code}, > > after which it switches back to searching for \begin{code}. > So this appears unnecessary. Just use quotes. That works fine for the '> ' variant. But the point of the \...{code} version is that the resulting source could be run through both lpython and TeX without preprocessing. How does using quotes play with TeX? > The main problems is that program editors are generally not smart enough > to do auto text wrapping within multiline strings. Emacs MMM-mode (http://www.xemacs.org/Documentation/packages/html/mmm.html) should work for this - or the two variants I suggested (switching from Python to TeX mode dynamically). http://www.mired.org/consulting.html Independent Software developer/SCM consultant, email for more information. O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From guido at python.org Thu Mar 10 01:29:18 2011 From: guido at python.org (Guido van Rossum) Date: Wed, 9 Mar 2011 19:29:18 -0500 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) 
In-Reply-To: <4D780954.4000408@mrabarnett.plus.com> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D757EB9.8080308@hastings.org> <299818FC-EC33-4F7F-B6AD-2193BE6BE8CC@gmail.com> <4D76820A.1010007@hastings.org> <4D76A30D.2090706@pearwood.info> <4D76BE05.1050903@canterbury.ac.nz> <4D780954.4000408@mrabarnett.plus.com> Message-ID: On Wed, Mar 9, 2011 at 6:12 PM, MRAB wrote: > ? ?def foo = CharField(size=10, nullable=False) Eek, it would be really easy to misread that. But as long as we're trying to reuse 'def', I think a decorator using introspection could make this form work: @CharField def foo(size=10, nullable=False): "docstring goes here" Not that I seriously recommend this. :-) -- --Guido van Rossum (python.org/~guido) From ncoghlan at gmail.com Thu Mar 10 01:35:24 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 9 Mar 2011 19:35:24 -0500 Subject: [Python-ideas] Python package file type In-Reply-To: References: Message-ID: On Wed, Mar 9, 2011 at 4:39 PM, Yuval Greenfield wrote: > What do you guys think, could this be useful? Yes, but there's a lot of heavy lifting to be done before something like that is possible in practice. Currently, a lot of the associated developer energy is focused on the "distribute" package, as well as other projects such as pip and virtualenv. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From greg.ewing at canterbury.ac.nz Thu Mar 10 01:52:42 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 10 Mar 2011 13:52:42 +1300 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) 
In-Reply-To: References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D757EB9.8080308@hastings.org> <299818FC-EC33-4F7F-B6AD-2193BE6BE8CC@gmail.com> <4D76820A.1010007@hastings.org> <4D76A30D.2090706@pearwood.info> <4D76BE05.1050903@canterbury.ac.nz> <4D76C0E3.6020101@hastings.org> Message-ID: <4D7820DA.5010401@canterbury.ac.nz> Jim Jewett wrote: > Though I personally suspect that if the name of the variable is > needed, it is really a slot rather than a discardable name, and it > really *should* be a slightly different object -- a decorable pointer > instead of just a pointer. Sorry, but I'm unable to extract any coherent meaning from that paragraph. :-( If you mean that the thing passed in as the name should be some object other than a string, can you provide a use case? -- Greg From ncoghlan at gmail.com Thu Mar 10 02:02:08 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 9 Mar 2011 20:02:08 -0500 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D757EB9.8080308@hastings.org> <299818FC-EC33-4F7F-B6AD-2193BE6BE8CC@gmail.com> <4D76820A.1010007@hastings.org> <4D76A30D.2090706@pearwood.info> <4D76BE05.1050903@canterbury.ac.nz> <4D780954.4000408@mrabarnett.plus.com> Message-ID: On Wed, Mar 9, 2011 at 7:29 PM, Guido van Rossum wrote: > On Wed, Mar 9, 2011 at 6:12 PM, MRAB wrote: >> ? ?def foo = CharField(size=10, nullable=False) > > Eek, it would be really easy to misread that. > > But as long as we're trying to reuse 'def', I think a decorator using > introspection could make this form work: I don't like the suggestion as written, but I quite like it as a superior syntax proposal for PEP 359. 
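Such an introspection-based decorator might be sketched like this. The names field_from_def and the toy CharField below are hypothetical, and Guido's version would presumably build the introspection directly into CharField itself:

```python
import inspect

def field_from_def(factory):
    """Hypothetical decorator factory: harvest a def's name, keyword
    defaults and docstring, and hand them to a field factory.
    The decorated function itself is never called."""
    def decorate(func):
        params = inspect.signature(func).parameters
        defaults = {name: p.default for name, p in params.items()
                    if p.default is not inspect.Parameter.empty}
        return factory(__name__=func.__name__, doc=func.__doc__, **defaults)
    return decorate

def CharField(__name__=None, doc=None, **options):
    # Toy stand-in for the CharField used in the examples above.
    return {'name': __name__, 'doc': doc, **options}

@field_from_def(CharField)
def foo(size=10, nullable=False):
    "docstring goes here"

# foo is now the CharField record, not a function:
# {'name': 'foo', 'doc': 'docstring goes here', 'size': 10, 'nullable': False}
```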
def (builder) name(param_spec):
    code_block

Unlike PEP 359 though, the builder could be given the compiled code block as a code object (compiled as a closure) and the param_spec would be a description of the full parameter spec (ala PEP 362). The invocation of the builder would then be along the lines of:

name = builder("name", param_spec, code_block, globals_dict)

Such a statement would:
1. differ from "class" as it would be a true closure
2. differ from bare "def" with a decorator as the builder is given the current globals, raw parameter spec and code object, rather than having to extract them from a pre-built function object.
3. allow "global" and "nonlocal" to work just as they do for a function
4. "class" and "bare def" would become optimised versions of such a general def for creating functions, generators and classes
5. With an appropriate builder, it would be possible to force creation of a generator without using a dummy yield in dead code
6. "exec" would likely need updates in order to be able to properly execute compiled closures and code objects expecting arguments

In a lot of ways, this would be resurrecting the "anonymous blocks" part of PEP 340 (only without the anonymity). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From python at mrabarnett.plus.com Thu Mar 10 02:37:52 2011 From: python at mrabarnett.plus.com (MRAB) Date: Thu, 10 Mar 2011 01:37:52 +0000 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...)
In-Reply-To: References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D757EB9.8080308@hastings.org> <299818FC-EC33-4F7F-B6AD-2193BE6BE8CC@gmail.com> <4D76820A.1010007@hastings.org> <4D76A30D.2090706@pearwood.info> <4D76BE05.1050903@canterbury.ac.nz> <4D780954.4000408@mrabarnett.plus.com> Message-ID: <4D782B70.3060709@mrabarnett.plus.com> On 10/03/2011 01:02, Nick Coghlan wrote: > On Wed, Mar 9, 2011 at 7:29 PM, Guido van Rossum wrote: >> On Wed, Mar 9, 2011 at 6:12 PM, MRAB wrote: >>> def foo = CharField(size=10, nullable=False) >> >> Eek, it would be really easy to misread that. >> >> But as long as we're trying to reuse 'def', I think a decorator using >> introspection could make this form work: > > I don't like the suggestion as written, but I quite like it as a > superior syntax proposal for PEP 359. > > def (builder) name(param_spec): > code_block > I think what I don't like is that the name is buried in the middle of the line and not near the start as in the "class" or "def" statement. > Unlike PEP 359 though, the builder could be given the compiled code > block as a code object (compiled as a closure) and the param_spec > would be a description of the full parameter spec (ala PEP 362). The > invocation of the builder would then be along the lines of: > > name = builder("name", param_spec, code_block, globals_dict) > > Such a statement would: > 1. differs from "class" as it would be a true closure > 2. differ from bare "def" with a decorator as the builder is given the > current globals, raw parameter spec and code object, rather than > having to extract them from a pre-built function object. > 3. allow "global": and "nonlocal" to work just as they do for a function > 4. "class" and "bare def" would be become optimised versions of such a > general def for creating functions, generators and classes > 5. 
With an appropriate builder, it would be possible to force creation > of a generator without using a dummy yield in dead code > 6. "exec" would likely need updates in order to be able to properly > execute compiled closures and code objects expecting arguments > > In a lot of ways, this would be resurrecting the "anonymous blocks" > part of PEP 340 (only without the anonymity). > From bruce at leapyear.org Thu Mar 10 05:15:46 2011 From: bruce at leapyear.org (Bruce Leban) Date: Wed, 9 Mar 2011 20:15:46 -0800 Subject: [Python-ideas] Literate python? In-Reply-To: <20110309183451.7ae2e97b@bhuda.mired.org> References: <20110308170210.71762b78@bhuda.mired.org> <20110309183451.7ae2e97b@bhuda.mired.org> Message-ID: At the cost of an extraneous # at the beginning you can do something like this:

#\def\py#1{} \py{
...python...
"""#}
...TeX...
\py{"""
...python...
#}

This isn't completely right since a } in a string or python comment will mess it up. That can be handled by a slightly more complicated definition which changes the catcodes of #, " and ' so that they in turn change the definitions of }, \ and newline. I started to write this but it's complicated so I'll leave it as an exercise. :-) (If you can't figure it out, I'll be happy to help.) --- Bruce On Wed, Mar 9, 2011 at 3:34 PM, Mike Meyer wrote: > On Tue, 08 Mar 2011 21:01:25 -0500 > Terry Reedy wrote: > > > On 3/8/2011 5:02 PM, Mike Meyer wrote: > > > Wild idea, swiped directly from haskell/ghc: > > > > > > How about making the python interpreter just a little bit smarter, > > > > It already is ;-) > > Though not exactly well known, expression statements that consist of a > > literal (number or string) are ignored -- except for string literals in > > docstring position (and then, they are attached as attributes, rather > > than being in the code object.
> > [Examples elided] > > > > If the first non-white-space character after the shebang line (if > > > present) is a backslash, then the compiler ignores lines until it sees > > > a line consisting of \begin{code} (which could be the first line), > > > then compiles lines until it sees a line consisting of \end{code}, > > > after which it switches back to searching for \begin{code}. > > So this appears unnecessary. Just use quotes. > > That works fine for the '> ' variant. But the point of the \...{code} > version is that the resulting source could be run through both lpython > and TeX without preprocessing. How does using quotes play with TeX? > > > The main problems is that program editors are generally not smart enough > > to do auto text wrapping within multiline strings. > > Emacs MMM-mode > (http://www.xemacs.org/Documentation/packages/html/mmm.html) should > work for this - or the two variants I suggested (switching from Python > to TeX mode dynamically). > > > -- > Mike Meyer > http://www.mired.org/consulting.html > Independent Software developer/SCM consultant, email for more information. > > O< ascii ribbon campaign - stop html mail - www.asciiribbon.org > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > http://mail.python.org/mailman/listinfo/python-ideas > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsbueno at python.org.br Thu Mar 10 05:51:19 2011 From: jsbueno at python.org.br (Joao S. O. Bueno) Date: Thu, 10 Mar 2011 01:51:19 -0300 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) 
In-Reply-To: References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D757EB9.8080308@hastings.org> <299818FC-EC33-4F7F-B6AD-2193BE6BE8CC@gmail.com> <4D76820A.1010007@hastings.org> <4D76A30D.2090706@pearwood.info> <4D76BE05.1050903@canterbury.ac.nz> <4D780954.4000408@mrabarnett.plus.com> Message-ID: On Wed, Mar 9, 2011 at 9:29 PM, Guido van Rossum wrote: > On Wed, Mar 9, 2011 at 6:12 PM, MRAB wrote: >>     def foo = CharField(size=10, nullable=False) > > Eek, it would be really easy to misread that. > > But as long as we're trying to reuse 'def', I think a decorator using > introspection could make this form work: > > @CharField > def foo(size=10, nullable=False): "docstring goes here" > > Not that I seriously recommend this. :-)

Though it would be easy to do for most cases, with nothing new needed:

from collections import namedtuple

class MetaAssignmentDecorator(type):
    def __call__(cls, func):
        parameters = dict(zip(func.__code__.co_varnames, func.__defaults__))
        return type.__call__(cls, func.__name__, **parameters)

class NamedTuple(metaclass=MetaAssignmentDecorator):
    def __new__(cls, name, values):
        return namedtuple(name, values)

# Allows for
@NamedTuple
def Point(values="x y"): pass

p = Point(127, 87)

>>> print(p.x)
127

> > -- > --Guido van Rossum (python.org/~guido) > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > http://mail.python.org/mailman/listinfo/python-ideas > From benrudiak at googlemail.com Thu Mar 10 07:28:06 2011 From: benrudiak at googlemail.com (Ben Rudiak-Gould) Date: Wed, 9 Mar 2011 22:28:06 -0800 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) Message-ID: (This will probably be unthreaded, because I don't have the original thread root in my mailbox and I didn't want to reply to a random deeply nested message.)
All syntax aside, I'm unhappy with the semantics of this idea, and I'd like some clarification. Specifically, I'd like to know what a name is. First, namedtuple:

>>> import collections
>>> def f():
...     foo = collections.namedtuple('foo', 'a b')
...     return foo, foo(a=1, b=2)
...
>>> f()
(<class '__main__.foo'>, foo(a=1, b=2))
>>>

The namedtuple I defined is not __main__.foo; there's no such symbol. And it's not foo either, by the time it's asked for its repr(). The name argument to namedtuple has never been more than a hack that kinda-sorta works in the obvious cases. It's useful for debugging, not so much for production code. The same is true of class and function names, and they do have special syntax, but even so, this is not the sort of thing I'd want to enshrine in new syntax. If there's going to be new syntax, it should be part of a new general facility for runtime introspection of assignment targets, not a hacky way of passing a stringified identifier that has no meaning outside of the context of the caller. In Python right now, you can do this:

def deftuple(name, fields):
    globals()[name] = collections.namedtuple(name, fields)

deftuple('foo', 'a b')

or this:

def namedtupleclass(name, args, body):
    fields = body.pop('fields')
    module = body.pop('__module__')
    assert not args and not body
    return collections.namedtuple(name, fields)

class foo(metaclass=namedtupleclass):
    fields = 'a b'

I like these better than new syntax, because their level of hackiness roughly matches the intrinsic hackiness of the thing that they are doing. (By the way, I first wrote my metaclass to pass (module + '.' + name) as the first argument to namedtuple, but discovered that namedtuple doesn't support dotted tuple names. So much for that pathetic attempt at proper scoping.) overridable_property seems quite different, if I understand it correctly. In this case no string representation of the name is ever leaked to the outside world, and there's no problem of scope naming.
Instead, the issue is that you want to create several related attributes that can refer to each other, from a common base name. It seems to me that the most straightforward way of doing this is:

def def_overridable_property(name, classdict):
    classdict['get_' + name] = ...
    classdict['set_' + name] = ...
    classdict[name] = ...

class Foo:
    def_overridable_property('foo', locals())

I realize that this is illegal by the current documentation of locals(), but there's no reason that it couldn't be legal and documented, is there? It might slow down the execution of class definitions; do those need to be fast? I wouldn't propose such a thing for function locals, but class locals get turned into a dictionary in the end anyway. Again, I like this solution because it seems roughly as hacky as the thing that it's designed to do. -- Ben From p.f.moore at gmail.com Thu Mar 10 12:20:45 2011 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 10 Mar 2011 11:20:45 +0000 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: <4D782B70.3060709@mrabarnett.plus.com> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D757EB9.8080308@hastings.org> <299818FC-EC33-4F7F-B6AD-2193BE6BE8CC@gmail.com> <4D76820A.1010007@hastings.org> <4D76A30D.2090706@pearwood.info> <4D76BE05.1050903@canterbury.ac.nz> <4D780954.4000408@mrabarnett.plus.com> <4D782B70.3060709@mrabarnett.plus.com> Message-ID: On 10 March 2011 01:37, MRAB wrote: >> I don't like the suggestion as written, but I quite like it as a >> superior syntax proposal for PEP 359. >> >> def (builder) name(param_spec): >>     code_block >> > I think what I don't like is that the name is buried in the middle of > the line and not near the start as in the "class" or "def" statement. Syntactically, I have the same concern - when I first read that line I didn't spot "name" at all.
But I like the semantics - it strikes me as something that could have a lot of different uses (rather than being solely focused on letting property classes know the names they are bound to, which is where this started). So I'm +1 on the semantics, and happy to suggest alternative colours for the bikeshed :-) Paul. From ncoghlan at gmail.com Thu Mar 10 13:49:41 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 10 Mar 2011 07:49:41 -0500 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D757EB9.8080308@hastings.org> <299818FC-EC33-4F7F-B6AD-2193BE6BE8CC@gmail.com> <4D76820A.1010007@hastings.org> <4D76A30D.2090706@pearwood.info> <4D76BE05.1050903@canterbury.ac.nz> <4D780954.4000408@mrabarnett.plus.com> <4D782B70.3060709@mrabarnett.plus.com> Message-ID: On Thu, Mar 10, 2011 at 6:20 AM, Paul Moore wrote: > On 10 March 2011 01:37, MRAB wrote: >>> I don't like the suggestion as written, but I quite like it as a >>> superior syntax proposal for PEP 359. >>> >>> def (builder) name(param_spec): >>>     code_block >>> >> I think what I don't like is that the name is buried in the middle of >> the line and not near the start as in the "class" or "def" statement. > > Syntactically, I have the same concern - when I first read that line I > didn't spot "name" at all. I actually agree it is a major weakness of the syntax. You can play games with "from" to rearrange the line. For example:

def name from builder(param_spec):
    code_block

as sugar for:

name = builder("name", param_spec_obj, code_obj)

Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From jimjjewett at gmail.com Thu Mar 10 16:46:32 2011 From: jimjjewett at gmail.com (Jim Jewett) Date: Thu, 10 Mar 2011 10:46:32 -0500 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...)
In-Reply-To: <4D7820DA.5010401@canterbury.ac.nz> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D757EB9.8080308@hastings.org> <299818FC-EC33-4F7F-B6AD-2193BE6BE8CC@gmail.com> <4D76820A.1010007@hastings.org> <4D76A30D.2090706@pearwood.info> <4D76BE05.1050903@canterbury.ac.nz> <4D76C0E3.6020101@hastings.org> <4D7820DA.5010401@canterbury.ac.nz> Message-ID: On Wed, Mar 9, 2011 at 7:52 PM, Greg Ewing wrote: > Jim Jewett wrote: > >> Though I personally suspect that if the name of the variable is >> needed, it is really a slot rather than a discardable name, and it >> really *should* be a slightly different object -- a decorable pointer >> instead of just a pointer. > If you mean that the thing passed in as the name should be > some object other than a string, can you provide a use case? No, I mean that the name itself should be more than just a pointer to an object. It should of course *include* a pointer to an object (the bound value), but it should *also* have its own independent existence, regardless of whatever it happens to be pointing to. Effectively, it should be a proxy object, rather than a simple pointer. That way, the pointer-holder can be constrained (e.g., hold only a string) or asked for its name, or annotated with documentation, or ... -jJ From p.f.moore at gmail.com Thu Mar 10 17:41:48 2011 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 10 Mar 2011 16:41:48 +0000 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) 
In-Reply-To: References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D757EB9.8080308@hastings.org> <299818FC-EC33-4F7F-B6AD-2193BE6BE8CC@gmail.com> <4D76820A.1010007@hastings.org> <4D76A30D.2090706@pearwood.info> <4D76BE05.1050903@canterbury.ac.nz> <4D780954.4000408@mrabarnett.plus.com> <4D782B70.3060709@mrabarnett.plus.com> Message-ID: On 10 March 2011 12:49, Nick Coghlan wrote: > I actually agree it is a major weakness of the syntax. You can play > games with "from" to rearrange the line. For example: > > def name from builder(param_spec): >     code_block > > as sugar for: > > name = builder("name", param_spec_obj, code_obj) Yes, I like that better... Paul. From ericsnowcurrently at gmail.com Thu Mar 10 19:22:29 2011 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Thu, 10 Mar 2011 11:22:29 -0700 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D757EB9.8080308@hastings.org> <299818FC-EC33-4F7F-B6AD-2193BE6BE8CC@gmail.com> <4D76820A.1010007@hastings.org> <4D76A30D.2090706@pearwood.info> <4D76BE05.1050903@canterbury.ac.nz> <4D780954.4000408@mrabarnett.plus.com> <4D782B70.3060709@mrabarnett.plus.com> Message-ID: This is the first iteration of the idea that doesn't seem confusing. The other approaches did not seem intuitive. -eric On Thu, Mar 10, 2011 at 5:49 AM, Nick Coghlan wrote: > On Thu, Mar 10, 2011 at 6:20 AM, Paul Moore wrote: > > On 10 March 2011 01:37, MRAB wrote: > >>> I don't like the suggestion as written, but I quite like it as a > >>> superior syntax proposal for PEP 359. > >>> > >>> def (builder) name(param_spec): > >>> code_block > >>> > >> I think what I don't like is that the name is buried in the middle of > >> the line and not near the start as in the "class" or "def" statement.
> > Syntactically, I have the same concern - when I first read that line I > didn't spot "name" at all. > > I actually agree it is a major weakness of the syntax. You can play > games with "from" to rearrange the line. For example: > > def name from builder(param_spec): > code_block > > as sugar for: > > name = builder("name", param_spec_obj, code_obj) > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > http://mail.python.org/mailman/listinfo/python-ideas > -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Thu Mar 10 20:02:23 2011 From: guido at python.org (Guido van Rossum) Date: Thu, 10 Mar 2011 14:02:23 -0500 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D757EB9.8080308@hastings.org> <299818FC-EC33-4F7F-B6AD-2193BE6BE8CC@gmail.com> <4D76820A.1010007@hastings.org> <4D76A30D.2090706@pearwood.info> <4D76BE05.1050903@canterbury.ac.nz> <4D780954.4000408@mrabarnett.plus.com> <4D782B70.3060709@mrabarnett.plus.com> Message-ID: On Thu, Mar 10, 2011 at 11:41 AM, Paul Moore wrote: > On 10 March 2011 12:49, Nick Coghlan wrote: >> I actually agree it is a major weakness of the syntax. You can play >> games with "from" to rearrange the line. For example: >> >> def name from builder(param_spec): >>     code_block >> >> as sugar for: >> >> name = builder("name", param_spec_obj, code_obj) > > Yes, I like that better... I'd like it better if it came with a default builder implementation that would create regular functions, so that def name(): was equivalent to def name from (): But I don't see a reasonable way to do that. Also I think it's confusing to have both

@some_magic
def name(): ...
and

def name from some_magic(

with different semantics. References: Message-ID: I agree, it's definitely not a simple task. I'd say to be somehow viable these would be the minimum requirements:

* Registration of the file type
* Detection of installed python runtimes.
* Installer user interface that wraps the main setup.py flows.

I might find the time to work on a POC for windows though I'd rather hear some python-ideas feedback before I embark on such a journey. One gripe that comes to mind is the fact that some people might like the .tar.gz file type and never want to switch over. --Yuval -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjreedy at udel.edu Fri Mar 11 00:03:59 2011 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 10 Mar 2011 18:03:59 -0500 Subject: [Python-ideas] Python package file type In-Reply-To: References: Message-ID: On 3/10/2011 4:43 PM, Yuval Greenfield wrote: > I agree, it's definitely not a simple task. I'd say to be somehow viable > these would be the minimum requirements: > > * Registration of the file type > * Detection of installed python runtimes. > * Installer user interface that wraps the main setup.py flows. > > I might find the time to work on a POC for windows though I'd rather > hear some python-ideas feedback before I embark on such a journey. One > gripe that comes to mind is the fact that some people might like the > .tar.gz file type and never want to switch over. Part of the problem is that there are at least two categories of use cases.

Simple: one has one package (in the import sense). This can include a README and other non-python files. Zip it, ship it. Recipient unzips into site-packages. All done.

Complex: anything else. -- Terry Jan Reedy From ncoghlan at gmail.com Fri Mar 11 03:31:46 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 10 Mar 2011 21:31:46 -0500 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...)
In-Reply-To: References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D757EB9.8080308@hastings.org> <299818FC-EC33-4F7F-B6AD-2193BE6BE8CC@gmail.com> <4D76820A.1010007@hastings.org> <4D76A30D.2090706@pearwood.info> <4D76BE05.1050903@canterbury.ac.nz> <4D780954.4000408@mrabarnett.plus.com> <4D782B70.3060709@mrabarnett.plus.com> Message-ID: On Thu, Mar 10, 2011 at 2:02 PM, Guido van Rossum wrote: > On Thu, Mar 10, 2011 at 11:41 AM, Paul Moore wrote: >> On 10 March 2011 12:49, Nick Coghlan wrote: >>> I actually agree it is a major weakness of the syntax. You can play >>> games with "from" to rearrange the line. For example: >>> >>> def name from builder(param_spec): >>>     code_block >>> >>> as sugar for: >>> >>> name = builder("name", param_spec_obj, code_obj) >> >> Yes, I like that better... > > I'd like it better if it came with a default builder implementation > that would create regular functions, so that > > def name(): > ... > > was equivalent to > > def name from (): > ... Yeah, I was thinking that having builder equivalents for functions and generators would be an interesting way to go. Builder objects may need to be passed some extra parameters to make that feasible, though (specifically, a reference to globals(), as well as the closure fulfilment details). One interesting side effect of that is the ability to have a never-yields generator without having to insert a dummy never-executed yield statement anywhere. > But I don't see a reasonable way to do that. > > Also I think it's confusing to have both > > @some_magic > def name(): ... > > and > > def name from some_magic( > with different semantics. Don't forget:

@more_magic
def name from some_magic(params):
    code

I'm *far* from convinced this is a good idea, but it at least meets the holder of making something new possible.
If exec() was enhanced to allow correct execution of closures and code blocks expecting arguments, you could experiment with this using a decorator and extracting the various pieces from a function object after it had already been created. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Fri Mar 11 03:33:37 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 10 Mar 2011 21:33:37 -0500 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D757EB9.8080308@hastings.org> <299818FC-EC33-4F7F-B6AD-2193BE6BE8CC@gmail.com> <4D76820A.1010007@hastings.org> <4D76A30D.2090706@pearwood.info> <4D76BE05.1050903@canterbury.ac.nz> <4D780954.4000408@mrabarnett.plus.com> <4D782B70.3060709@mrabarnett.plus.com> Message-ID: On Thu, Mar 10, 2011 at 9:31 PM, Nick Coghlan wrote: > I'm *far* from convinced this is a good idea, but it at least meets > the holder of making something new possible. holder? How did my fingers think that was even close to what I meant? "meets the bar", that should have been. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From python at mrabarnett.plus.com Fri Mar 11 04:33:43 2011 From: python at mrabarnett.plus.com (MRAB) Date: Fri, 11 Mar 2011 03:33:43 +0000 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...)
In-Reply-To: References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D757EB9.8080308@hastings.org> <299818FC-EC33-4F7F-B6AD-2193BE6BE8CC@gmail.com> <4D76820A.1010007@hastings.org> <4D76A30D.2090706@pearwood.info> <4D76BE05.1050903@canterbury.ac.nz> <4D780954.4000408@mrabarnett.plus.com> <4D782B70.3060709@mrabarnett.plus.com> Message-ID: <4D799817.5000506@mrabarnett.plus.com> On 10/03/2011 19:02, Guido van Rossum wrote: > On Thu, Mar 10, 2011 at 11:41 AM, Paul Moore wrote: >> On 10 March 2011 12:49, Nick Coghlan wrote: >>> I actually agree it is a major weakness of the syntax. You can play >>> games with "from" to rearrange the line. For example: >>> >>> def name from builder(param_spec): >>> code_block >>> >>> as sugar for: >>> >>> name = builder("name", param_spec_obj, code_obj) >> >> Yes, I like that better... > > I'd like it better if it came with a default builder implementation > that would create regular functions, so that > > def name(): > > > was equivalent to > > def name from(): > > > But I don't see a reasonable way to do that. > > Also I think it's confusing to have both > > @some_magic > def name(): ... > > and > > def name from some_magic( > with different semantics. > Talking about different semantics, I had an (admittedly vague) idea about passing the body of a def statement to a builder as a string at runtime, allowing 'foreign' code to be embedded more easily. In a statement of the form: def name(...) from builder: ... the body of the def wouldn't be parsed by Python at compile time, but would, as I said, be passed to the builder as a string at runtime. The builder would be able to parse the string, compiling it to a callable object with hooks into Python so that it could access variables, call functions, etc, when it was actually executed. 
For example, this: c.execute("""insert into stocks values (?, ?, ?, ?, ?)""", ('2006-01-05', 'BUY', 'RHAT',100, 35.14)) could become (assuming that sqlite3 had a magic __compile__ function): def insert(date, trans, symbol, qty, price) from sqlite3: insert into stocks values (date, trans, symbol, qty, price) insert('2006-01-05', 'BUY', 'RHAT',100, 35.14) When run, the SQL code would fetch the values of the parameters as needed. Actually, a closer match to the original code would be: c.insert('2006-01-05', 'BUY', 'RHAT',100, 35.14) Clearly it's a method, so the instance would be passed as 'self', something like this: def insert(self, date, trans, symbol, qty, price) from sqlite3: insert into stocks values (date, trans, symbol, qty, price) but what happens to 'self' within the definition? It's not referred to in the SQL statement itself. From cesare.di.mauro at gmail.com Fri Mar 11 07:38:30 2011 From: cesare.di.mauro at gmail.com (Cesare Di Mauro) Date: Fri, 11 Mar 2011 07:38:30 +0100 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: <4D799817.5000506@mrabarnett.plus.com> References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D757EB9.8080308@hastings.org> <299818FC-EC33-4F7F-B6AD-2193BE6BE8CC@gmail.com> <4D76820A.1010007@hastings.org> <4D76A30D.2090706@pearwood.info> <4D76BE05.1050903@canterbury.ac.nz> <4D780954.4000408@mrabarnett.plus.com> <4D782B70.3060709@mrabarnett.plus.com> <4D799817.5000506@mrabarnett.plus.com> Message-ID: 2011/3/11 MRAB > > Talking about different semantics, I had an (admittedly vague) idea > about passing the body of a def statement to a builder as a string at > runtime, allowing 'foreign' code to be embedded more easily. > > In a statement of the form: > > def name(...) from builder: > ... > > the body of the def wouldn't be parsed by Python at compile time, but > would, as I said, be passed to the builder as a string at runtime. 
The > builder would be able to parse the string, compiling it to a callable > object with hooks into Python so that it could access variables, call > functions, etc, when it was actually executed. > > For example, this: > > c.execute("""insert into stocks values (?, ?, ?, ?, ?)""", > ('2006-01-05', 'BUY', 'RHAT',100, 35.14)) > > could become (assuming that sqlite3 had a magic __compile__ function): > > def insert(date, trans, symbol, qty, price) from sqlite3: > insert into stocks values (date, trans, symbol, qty, price) > > insert('2006-01-05', 'BUY', 'RHAT',100, 35.14) > > When run, the SQL code would fetch the values of the parameters as > needed. > > Actually, a closer match to the original code would be: > > c.insert('2006-01-05', 'BUY', 'RHAT',100, 35.14) > > Clearly it's a method, so the instance would be passed as 'self', > something like this: > > def insert(self, date, trans, symbol, qty, price) from sqlite3: > insert into stocks values (date, trans, symbol, qty, price) > > but what happens to 'self' within the definition? It's not referred to > in the SQL statement itself. I already do something like this:

DB.stocks += '2006-01-05', 'BUY', 'RHAT', 100, 35.14

DB.stocks -= ID == 123

DB.stocks[ID == 123] = qty == qty + 1, price == 46.51

print DB.stocks[date, 'COUNT(*)'].GroupBy[date].OrderBy[date].Where[date['2011-01-01' : '2011-02-28']].List()

with Python 2.5+. So there's no need for a new syntax to implement such things in a "pythonic" way. However some changes can improve ORM-like code. For example, len() always checks that __len__ returns an integer, but "relaxing" it can help a bit (actually I revert to something like this: symbol.len()). Cesare -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg.ewing at canterbury.ac.nz Sat Mar 12 00:07:02 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sat, 12 Mar 2011 12:07:02 +1300 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...)
In-Reply-To: References: <4D6E0847.5060304@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D757EB9.8080308@hastings.org> <299818FC-EC33-4F7F-B6AD-2193BE6BE8CC@gmail.com> <4D76820A.1010007@hastings.org> <4D76A30D.2090706@pearwood.info> <4D76BE05.1050903@canterbury.ac.nz> <4D780954.4000408@mrabarnett.plus.com> <4D782B70.3060709@mrabarnett.plus.com> Message-ID: <4D7AAB16.8070207@canterbury.ac.nz> Nick Coghlan wrote: > holder? How did my fingers think that was even close to what I meant? > > "meets the bar", that should have been. Well, bars can hold things, so it sort of makes sense... (I've been doing a lot of cryptic crosswords lately, so my brain is working in unusual ways at the moment.) -- Greg From bborcic at gmail.com Sat Mar 12 14:22:31 2011 From: bborcic at gmail.com (Boris Borcic) Date: Sat, 12 Mar 2011 14:22:31 +0100 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D757EB9.8080308@hastings.org> <299818FC-EC33-4F7F-B6AD-2193BE6BE8CC@gmail.com> <4D76820A.1010007@hastings.org> <4D76A30D.2090706@pearwood.info> <4D76BE05.1050903@canterbury.ac.nz> <4D780954.4000408@mrabarnett.plus.com> <4D782B70.3060709@mrabarnett.plus.com> <4D799817.5000506@mrabarnett.plus.com> Message-ID: Cesare Di Mauro wrote: > 2011/3/11 MRAB > [...] > Clearly it's a method, so the instance would be passed as 'self', > something like this: > > def insert(self, date, trans, symbol, qty, price) from sqlite3: > insert into stocks values (date, trans, symbol, qty, price) > > but what happens to 'self' within the definition? It's not referred to > in the SQL statement itself.
> > > I already do something like this: > > DB.stocks += '2006-01-05', 'BUY', 'RHAT',100, 35.14 > > DB.stocks -= ID == 123 > > DB.stocks[ID == 123] = qty == qty + 1, price == 46.51 > > print DB.stocks[date, > 'COUNT(*)'].GroupBy[date].OrderBy[date].Where[date['2011-01-01' : > '2011-02-28']].List() > > with Python 2.5+. > > So there's no need for a new syntax to implement such things in a > "pythonic" way. As neat as this looks, IMO this is a misunderstanding on (1) a pythonic way to invite foreign language code and syntax into python source code (2) a pythonic way to hide SQL semantics under python syntax. Tkinter did for tcl what SQLAlchemy does for SQL, and I feel both reveal a challenge for python to be more open to foreign code syntax. iow return to Caesar what's Caesar's. Cheers, BB From greg.ewing at canterbury.ac.nz Sun Mar 13 00:16:36 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sun, 13 Mar 2011 12:16:36 +1300 Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...) In-Reply-To: References: <4D6E0847.5060304@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D757EB9.8080308@hastings.org> <299818FC-EC33-4F7F-B6AD-2193BE6BE8CC@gmail.com> <4D76820A.1010007@hastings.org> <4D76A30D.2090706@pearwood.info> <4D76BE05.1050903@canterbury.ac.nz> <4D780954.4000408@mrabarnett.plus.com> <4D782B70.3060709@mrabarnett.plus.com> <4D799817.5000506@mrabarnett.plus.com> Message-ID: <4D7BFED4.4090009@canterbury.ac.nz> Boris Borcic wrote: > Tkinter did for tcl what SQLAlchemy does for SQL, and I feel both reveal > a challenge for python to be more open to foreign code syntax. I tend to think it's a mistake to put more than one syntax in the same source file. It confuses the heck out of code editors and other tools that need to understand the code to some extent. 
-- Greg From dsdale24 at gmail.com Sun Mar 13 16:18:00 2011 From: dsdale24 at gmail.com (Darren Dale) Date: Sun, 13 Mar 2011 11:18:00 -0400 Subject: [Python-ideas] Would it possible to define abstract read/write properties with decorators? Message-ID: I'm a big fan of the decorator syntax introduced in python-2.6 to define properties: class Foo(object): @property def bar(self): return 1 @bar.setter def bar(self, val): pass Lately I've been learning about ABCs at http://www.python.org/dev/peps/pep-3119/ and http://docs.python.org/py3k/library/abc.html . The documentation states that abstract read-only properties can be defined with the decorator syntax, but the "long-form" [bar=property(getbar, setbar)] property declaration is required for read/write properties. Would it be possible to support the decorator syntax for read/write properties in python-3.3? In python-3.2, this is valid code that (sort-of) produces an abstract read/write property: class Foo(metaclass=ABCMeta): @abstractproperty def bar(self): return 1 @bar.setter def bar(self, val): pass but subclasses of Foo can be instantiated even if they do not define a bar.setter: class Baz(Foo): @property def bar(self): return 1 baz=Baz() which must have been the reason for requiring the long-form property declaration syntax. It seems like it should be possible for Python to support the decorator syntax for declaring abstract read/write properties. The most elegant approach might be the following, if it could be supported: class Foo(metaclass=ABCMeta): # Note the use of @property rather than @abstractproperty: @property @abstractmethod def bar(self): return 1 @bar.setter @abstractmethod def bar(self, val): pass I thought that would work with Python-3.2, but Foo is instantiable even though there are abstractmethods. 
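An illustrative sketch of why this fails (not part of Darren's message): 3.2's property did not copy the wrapped function's __isabstractmethod__ flag, so ABCMeta, which only inspects the namespace values themselves, never saw the getter as abstract; later releases taught property to propagate the flag. The stand-in descriptor naive_property below is invented to reproduce the 3.2 behaviour on any Python 3:

```python
import abc

class naive_property:
    """A stripped-down stand-in for a 3.2-era property: it wraps a getter
    but does NOT copy the getter's __isabstractmethod__ flag."""
    def __init__(self, fget):
        self.fget = fget
    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return self.fget(obj)

class Foo(metaclass=abc.ABCMeta):
    @naive_property
    @abc.abstractmethod
    def bar(self):
        return 1

# The flag is buried on the wrapped function, where ABCMeta never looks:
print(getattr(Foo.bar, '__isabstractmethod__', False))  # False
print(Foo.bar.fget.__isabstractmethod__)                # True
print(Foo().bar)  # instantiation succeeds despite the "abstract" getter
```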
If python's property could be tweaked to recognize those abstract methods and raise the usual TypeError, then we could subclass the abstract base class Foo in the usual way:

class Baz(Foo):
    @Foo.bar.getter
    def bar(self):
        return 2

this is not yet instantiable, Baz.bar.setter is still abstract...

    @bar.setter
    def bar(self, val):
        pass

Now Baz could be instantiated. Is this feasible?

Darren

From dsdale24 at gmail.com Sun Mar 13 17:49:28 2011
From: dsdale24 at gmail.com (Darren Dale)
Date: Sun, 13 Mar 2011 12:49:28 -0400
Subject: [Python-ideas] Would it possible to define abstract read/write properties with decorators?
In-Reply-To: References: Message-ID: 
On Sun, Mar 13, 2011 at 11:18 AM, Darren Dale wrote:
[...]
> It seems like it should be possible for Python to support the
> decorator syntax for declaring abstract read/write properties. The
> most elegant approach might be the following, if it could be
> supported:
>
> class Foo(metaclass=ABCMeta):
>     # Note the use of @property rather than @abstractproperty:
>     @property
>     @abstractmethod
>     def bar(self):
>         return 1
>     @bar.setter
>     @abstractmethod
>     def bar(self, val):
>         pass
>
> I thought that would work with Python-3.2, but Foo is instantiable
> even though there are abstractmethods.
If python's property could be
> tweaked to recognize those abstract methods and raise the usual
> TypeError, then we could subclass the abstract base class Foo in the
> usual way:

Here is a working example!:

import abc

class Property(object):
    def __init__(self, getter, setter=None):
        self._getter = getter
        self._setter = setter
        if (getattr(getter, '__isabstractmethod__', False) or
                getattr(setter, '__isabstractmethod__', False)):
            self.__isabstractmethod__ = True
    def __get__(self, instance, owner):
        if instance is None:
            return self
        return self._getter(instance)
    def __set__(self, instance, value):
        return self._setter(instance, value)
    def getter(self, func):
        return Property(func, self._setter)
    def setter(self, func):
        return Property(self._getter, func)

class C(metaclass=abc.ABCMeta):
    @Property
    @abc.abstractmethod
    def x(self):
        return 1
    @x.setter
    @abc.abstractmethod
    def x(self, val):
        pass

try:
    c = C()
except TypeError as e:
    print(e)

class D(C):
    @C.x.getter
    def x(self):
        return 2

try:
    d = D()
except TypeError as e:
    print(e)

class E(D):
    @D.x.setter
    def x(self, val):
        pass

print(E())

From bborcic at gmail.com Mon Mar 14 10:25:48 2011
From: bborcic at gmail.com (Boris Borcic)
Date: Mon, 14 Mar 2011 10:25:48 +0100
Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...)
In-Reply-To: <4D7BFED4.4090009@canterbury.ac.nz>
References: <4D6E0847.5060304@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D757EB9.8080308@hastings.org> <299818FC-EC33-4F7F-B6AD-2193BE6BE8CC@gmail.com> <4D76820A.1010007@hastings.org> <4D76A30D.2090706@pearwood.info> <4D76BE05.1050903@canterbury.ac.nz> <4D780954.4000408@mrabarnett.plus.com> <4D782B70.3060709@mrabarnett.plus.com> <4D799817.5000506@mrabarnett.plus.com> <4D7BFED4.4090009@canterbury.ac.nz>
Message-ID: 
Greg Ewing wrote:
> Boris Borcic wrote:
>
>> Tkinter did for tcl what SQLAlchemy does for SQL, and I feel both
>> reveal a challenge for python to be more open to foreign code syntax.
> > I tend to think it's a mistake to put more than one > syntax in the same source file. It confuses the > heck out of code editors and other tools that need > to understand the code to some extent. > Of course the challenge implies easy delimitation of what's what. "Putting more than one syntax in the same source" is not the right way to put it, unless you count as an example the case of a triple-quoted SQL query string (what should count as the "plain vanilla" realization of said challenge). In text processing documents it's been 20 years since the easy wysiwyg embedding of images, drawings, movie clips etc, together with click-and-point specialized editors. Cheers, BB From facundobatista at gmail.com Tue Mar 15 19:03:21 2011 From: facundobatista at gmail.com (Facundo Batista) Date: Tue, 15 Mar 2011 14:03:21 -0400 Subject: [Python-ideas] Automatic comparisons by default Message-ID: Two very related proposals: 1. On "!=", if Python doesn't find __ne__, use "not __eq__()". 2. On "<=", if Python doesn't find __le__, use "__eq__() or __lt__()". The same for ">=", of course. Some considerations: - You can give decorators that fill the spaces, but I think that would be less error prone, more clean and less surprising with those rules (specially the first one). - If somebody wants for != not to be the opposite of ==, both methods can be defined. The same for __le__ and __ge__. - All that would be needed for comparisons (for those normal objects where those rules apply) would be __le__ and __eq__, which is what you need to replace __cmp__. - *I* don't know if this behaviour will break something (couldn't find a case). What do you think? Regards, -- .? ? 
Facundo Blog: http://www.taniquetil.com.ar/plog/ PyAr: http://www.python.org/ar/ From guido at python.org Tue Mar 15 19:47:01 2011 From: guido at python.org (Guido van Rossum) Date: Tue, 15 Mar 2011 11:47:01 -0700 Subject: [Python-ideas] Automatic comparisons by default In-Reply-To: References: Message-ID: On Tue, Mar 15, 2011 at 11:03 AM, Facundo Batista wrote: > Two very related proposals: > > 1. On "!=", if Python doesn't find __ne__, use "not __eq__()". +1 on this one. I cannot count how often I have written a base class for this sole purpose. And I cannot think of any cases where it would be the wrong thing, *except* those damn IEEE NaNs (which we can special-case). > 2. On "<=", if Python doesn't find __le__, use "__eq__() or __lt__()". > The same for ">=", of course. Big -1 for this. Inequalities (orderings) are much more subtle than equalities. See e.g. sets. I'd be okay with offering a standard base class to supply this (#2) behavior though. > Some considerations: > > - You can give decorators that fill the spaces, but I think that would > be less error prone, more clean and less surprising with those rules > (specially the first one). > > - If somebody wants for != not to be the opposite of ==, both methods > can be defined. The same for __le__ and __ge__. > > - All that would be needed for comparisons (for those normal objects > where those rules apply) would be __le__ and __eq__, which is what you > need to replace __cmp__. > > - *I* don't know if this behaviour will break something (couldn't find a case). > > What do you think? -- --Guido van Rossum (python.org/~guido) From ziade.tarek at gmail.com Tue Mar 15 20:11:56 2011 From: ziade.tarek at gmail.com (=?ISO-8859-1?Q?Tarek_Ziad=E9?=) Date: Tue, 15 Mar 2011 15:11:56 -0400 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. 
Message-ID: 
Hey,

As I told Doug during Pycon, I think it would be a good idea to link his PyMOTW pages to our modules documentation in docs.python.org so people have more examples etc.

Cheers
Tarek
-- 
Tarek Ziadé | http://ziade.org

From facundobatista at gmail.com Tue Mar 15 20:12:53 2011
From: facundobatista at gmail.com (Facundo Batista)
Date: Tue, 15 Mar 2011 15:12:53 -0400
Subject: [Python-ideas] Automatic comparisons by default
In-Reply-To: References: Message-ID: 
On Tue, Mar 15, 2011 at 2:47 PM, Guido van Rossum wrote:

>> 2. On "<=", if Python doesn't find __le__, use "__eq__() or __lt__()".
>> The same for ">=", of course.
>
> Big -1 for this. Inequalities (orderings) are much more subtle than
> equalities. See e.g. sets.

But in those cases all comparison methods could be defined, so Python will not do any automatic one. Or you say this is too implicit?

(btw, good reference to the sets)

-- 
Facundo

Blog: http://www.taniquetil.com.ar/plog/
PyAr: http://www.python.org/ar/

From jason.orendorff at gmail.com Tue Mar 15 20:14:28 2011
From: jason.orendorff at gmail.com (Jason Orendorff)
Date: Tue, 15 Mar 2011 14:14:28 -0500
Subject: [Python-ideas] Automatic comparisons by default
In-Reply-To: References: Message-ID: 
On Tue, Mar 15, 2011 at 1:47 PM, Guido van Rossum wrote:
>> 2. On "<=", if Python doesn't find __le__, use "__eq__() or __lt__()".
>> The same for ">=", of course.
>
> Big -1 for this. Inequalities (orderings) are much more subtle than
> equalities. See e.g. sets.

But "<=" means "< or ==" even for sets. Got another example?

Orderings can be subtle, but it's uncontroversial to say that "≤" always means "< or =" even for partial orders. (I guess a mathematician might say instead that "<" means "≤ but not =". They are weird that way.)
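Jason's claim is easy to verify against Guido's own example of a partial order, sets (an illustrative check, not from the original thread):

```python
a, b, c = {1}, {1, 2}, {2, 3}

# "<=" coincides with "< or ==" for every pair, even though sets are
# only partially ordered.
for x in (a, b, c):
    for y in (a, b, c):
        assert (x <= y) == ((x < y) or (x == y))

# The partial-order part: b and c are incomparable.
print(b <= c, c <= b)  # False False
```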
-j From masklinn at masklinn.net Tue Mar 15 20:17:05 2011 From: masklinn at masklinn.net (Masklinn) Date: Tue, 15 Mar 2011 20:17:05 +0100 Subject: [Python-ideas] Automatic comparisons by default In-Reply-To: References: Message-ID: <37204544-82FA-4938-A52A-615596DAD951@masklinn.net> On 2011-03-15, at 19:47 , Guido van Rossum wrote: >> 2. On "<=", if Python doesn't find __le__, use "__eq__() or __lt__()". >> The same for ">=", of course. > > Big -1 for this. Inequalities (orderings) are much more subtle than > equalities. See e.g. sets. Still, there should only be need for two operators to provide total ordering, and having to reimplement every single operator correctly is probably trickier than implementing just two of them. Haskel's ``Ord``, for instance, only requires (minimal complete definition) that either ``<=`` or ``compare`` (__le__ and __cmp__, respectively) be implemented, and Ord derives from Eq so Eq also needs to be defined, requiring either ``==`` (__eq__) or ``/=`` (__ne__) (that you have to define either ``<=`` or ``compare`` at least and not, say, ``>`` comes from the way the ``Ord`` typeclass is defined: all operators call ``compare`` and ``compare`` calls ``==`` and ``<=``, so the circularity is broken either by redefining ``compare`` or by redefining the ``<=`` operator it uses). Technically, through Python's dynamism, it should be possible to ask for any of the 4 ordering operator and any of the two (in)equality ones and then do the right thing in all situations. From raymond.hettinger at gmail.com Tue Mar 15 20:18:07 2011 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Tue, 15 Mar 2011 12:18:07 -0700 Subject: [Python-ideas] Automatic comparisons by default In-Reply-To: References: Message-ID: On Mar 15, 2011, at 11:03 AM, Facundo Batista wrote: > Two very related proposals: > > 1. On "!=", if Python doesn't find __ne__, use "not __eq__()". > > 2. On "<=", if Python doesn't find __le__, use "__eq__() or __lt__()". 
> The same for ">=", of course. It's a little more complicated than "if Python doesn't find ...". In Python 3, object() already has __le,__, __gt__, __ge__, and __gt__, so those methods always get found. You can use an identity check to see if those methods have been overridden, but I think the only truly correct way to tell if one of the rich comparison methods is defined is to call it and see whether it returns NotImplemented. Raymond From raymond.hettinger at gmail.com Tue Mar 15 20:26:33 2011 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Tue, 15 Mar 2011 12:26:33 -0700 Subject: [Python-ideas] Automatic comparisons by default In-Reply-To: <37204544-82FA-4938-A52A-615596DAD951@masklinn.net> References: <37204544-82FA-4938-A52A-615596DAD951@masklinn.net> Message-ID: <8AAB6690-7EFE-441C-8D2B-0B958E1F74C4@gmail.com> On Mar 15, 2011, at 12:17 PM, Masklinn wrote: > Still, there should only be need for two operators to provide total ordering, and having to reimplement every single operator correctly is probably trickier than implementing just two of them. I think we gave this up when rich comparisons were accepted into the language. The whole idea was that operators can be overridden individually. In addition, they are not simple True/False predicates anymore; instead, they can return NotImplemented which can trigger other calls, or they can return other objects such as vectors. In other words, the added flexibility means that all bets are off in terms of simple implied relationships. For some purposes, this was an empowering net-win. For other purposes, it is a little inconvenient (making you specify all six of the methods instead of just one). 
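As an aside, the stdlib already softens the specify-all-six inconvenience for the common totally-ordered case: functools.total_ordering (new in 2.7 and 3.2, contemporary with this thread) derives the missing ordering methods from __eq__ plus one ordering method. A minimal sketch (the Version class is made up for illustration):

```python
from functools import total_ordering

@total_ordering
class Version:
    def __init__(self, major, minor):
        self.key = (major, minor)
    def __eq__(self, other):
        return self.key == other.key
    def __lt__(self, other):
        return self.key < other.key

# Only __eq__ and __lt__ were written; the decorator supplies the rest.
print(Version(1, 2) <= Version(1, 3))  # True
print(Version(2, 0) >= Version(1, 9))  # True
print(Version(1, 0) != Version(1, 1))  # True (Python 3 derives != from ==)
```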
Raymond From facundobatista at gmail.com Tue Mar 15 20:27:23 2011 From: facundobatista at gmail.com (Facundo Batista) Date: Tue, 15 Mar 2011 15:27:23 -0400 Subject: [Python-ideas] Automatic comparisons by default In-Reply-To: References: Message-ID: On Tue, Mar 15, 2011 at 3:18 PM, Raymond Hettinger wrote: > It's a little more complicated than "if Python doesn't find ...". > In Python 3, object() already has __le,__, __gt__, __ge__, and __gt__, > so those methods always get found. I'm seeing that it also defines __eq__ and __ne__, but in this case we could make __ne__ to return NotImplemented, and automatically call "not __eq__()" (that for object() is the opposite). > You can use an identity check to see if those methods have been > overridden, but I think the only truly correct way to tell if one of the > rich comparison methods is defined is to call it and see whether > it returns NotImplemented. Not following... object.__lt__ raises TypeError, does not return NotImplemented. We could make object() to return NotImplemented for __le__ and __ge__, and in such case it would call "__eq__() or __lt__()" (returning True if the object is the same, or raising TypeError to prevent ordering the unorderable object) -- .? ? Facundo Blog: http://www.taniquetil.com.ar/plog/ PyAr: http://www.python.org/ar/ From guido at python.org Tue Mar 15 22:33:37 2011 From: guido at python.org (Guido van Rossum) Date: Tue, 15 Mar 2011 14:33:37 -0700 Subject: [Python-ideas] Automatic comparisons by default In-Reply-To: References: Message-ID: On Tue, Mar 15, 2011 at 12:27 PM, Facundo Batista wrote: > On Tue, Mar 15, 2011 at 3:18 PM, Raymond Hettinger > wrote: > >> It's a little more complicated than "if Python doesn't find ...". >> In Python 3, object() already has __le,__, __gt__, __ge__, and __gt__, >> so those methods always get found. 
> > I'm seeing that it also defines __eq__ and __ne__, but in this case we > could make __ne__ to return NotImplemented, and automatically call > "not __eq__()" (that for object() is the opposite). > > >> You can use an identity check to see if those methods have been >> overridden, but I think the only truly correct way to tell if one of the >> rich comparison methods is defined is to call it and see whether >> it returns NotImplemented. > > Not following... object.__lt__ raises TypeError, does not return NotImplemented. > > We could make object() to return NotImplemented for __le__ and __ge__, > and in such case it would call "__eq__() or __lt__()" (returning True > if the object is the same, or raising TypeError to prevent ordering > the unorderable object) By now it should be clear that this is a very complicated area. The subtleties to consider include but are probably not limited to: - what to do with object, which defines all six operations already - relationship between < and >=: Facundo said in his first message that "cmp" could be replaced by just <= and ==, but I do not believe that always works (e.g. sets) - relationship between <, <=, ==: is < defined in terms of <= and ==, or is <= defined in terms of < and ==? - what to do if the operands have different classes, e.g. what does a <= b mean if a defines __lt__ and b defines __ge__ (not to mention when one class is a subclass of the other) - backwards compatibility If someone still has any doubts let them try to implement this proposal in CPython (3.3 branch please) and see how many tests fail. (Or for that matter, try to implement it in PyPy -- although that doesn't do Python 3 yet, so it's even more complicated.) 
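One of the cross-class subtleties here can be made concrete: when the left operand's comparison returns NotImplemented, Python tries the reflected method on the right operand, so a <= b can end up being answered by b.__ge__. An illustrative sketch (the classes are invented for the example):

```python
class Left:
    def __le__(self, other):
        # Declining to answer hands control to the other operand.
        return NotImplemented

class Right:
    def __ge__(self, other):
        return "Right.__ge__ answered"

# Left.__le__ returns NotImplemented, so Python calls Right.__ge__:
print(Left() <= Right())  # Right.__ge__ answered
```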
-- --Guido van Rossum (python.org/~guido) From facundobatista at gmail.com Tue Mar 15 23:34:06 2011 From: facundobatista at gmail.com (Facundo Batista) Date: Tue, 15 Mar 2011 18:34:06 -0400 Subject: [Python-ideas] Automatic comparisons by default In-Reply-To: References: Message-ID: On Tue, Mar 15, 2011 at 5:33 PM, Guido van Rossum wrote: > By now it should be clear that this is a very complicated area. The > subtleties to consider include but are probably not limited to: Yes, it's very complicated, :| I'll pursue only the idea of automatic __ne__ > - what to do with object, which defines all six operations already > > - relationship between < and >=: Facundo said in his first message > that "cmp" could be replaced by just <= and ==, but I do not believe > that always works (e.g. sets) But I was wrong... AFAIK now, you need __lt__ for ordering, and __eq__ / __ne__ for equality comparison, and the other methods for richer comparison. Thanks! -- .? ? Facundo Blog: http://www.taniquetil.com.ar/plog/ PyAr: http://www.python.org/ar/ From greg.ewing at canterbury.ac.nz Wed Mar 16 01:26:11 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Wed, 16 Mar 2011 13:26:11 +1300 Subject: [Python-ideas] Automatic comparisons by default In-Reply-To: References: Message-ID: <4D8003A3.9080409@canterbury.ac.nz> Raymond Hettinger wrote: > It's a little more complicated than "if Python doesn't find ...". > In Python 3, object() already has __le,__, __gt__, __ge__, and __gt__, > so those methods always get found. This whole mess makes me think it was a mistake to throw out __cmp__ in its entirety. The relationship between __cmp__ and the other methods was tricky under the old scheme because there were fallbacks in both directions. But I think it should be possible to come up with a new scheme that's well-behaved based on the following ideas: 1) __cmp__ returns one of four possible results: LessThan, EqualTo, GreaterThan or NotEqual (the latter for unordered types). 
2) There are fallbacks from the individual methods to __cmp__, but *not* in the other direction.

-- 
Greg

From andrew.svetlov at gmail.com Wed Mar 16 01:41:39 2011
From: andrew.svetlov at gmail.com (Andrew Svetlov)
Date: Wed, 16 Mar 2011 02:41:39 +0200
Subject: [Python-ideas] [Python-Dev] User conversions in custom string.Formatter
In-Reply-To: <4D800372.8000100@trueblade.com>
References: <4D800372.8000100@trueblade.com>
Message-ID: 
Well, let's discuss in python-ideas.

I mean my custom formatter like:

class Formatter(string.Formatter):
    def convert_field(self, value, conversion):
        if 'q' == conversion:
            return qname(value)
        elif 't' == conversion:
            return format_datetime(value)
        else:
            return super(Formatter, self).convert_field(value, conversion)

I can use it like:

s = "{0!q}, {1!t}".format(a, b)

!q and !t are my extensions to formatter conversions. It's good enough but I prefer to see

s = "{0!qname}, {1!time}".format(a, b)

or something like that. It cannot break any existing rule (you still can use "{0!qname:<20}" for example) but you can use more readable names for converters.

The only changed file is Objects/stringlib/string_format.h as far as I can see (I don't count documentation and unit tests).

On Wed, Mar 16, 2011 at 2:25 AM, Eric Smith wrote:
> On 03/15/2011 08:07 PM, Andrew Svetlov wrote:
>>
>> As PEP 3101 http://www.python.org/dev/peps/pep-3101/ says (and current
>> Python does) user can specify conversions like "{0!s}".
>> In custom formatters (derived from string.Formatter) he can override
>> convert_field method to add custom conversions.
>>
>> I experimented with that last month and found it very convenient.
>>>
>>> From my perspective custom conversions are very close to 'filters'
>>
>> from html template engines (jinja2, mako, django etc).
>> While I like to see custom conversions simple and easy I don't want
>> to bring 'cascading' and 'parametrization' to standard formatting.
>> >> But why don't relax spec a bit and allow to use any word (with >> 'identifier' semantix, I mean str.isidentifier()) as conversion name? >> Proposed update doesn't make any backward incompatibility (both python >> and public C API are not changed), it's clean and obvious. >> And it's very funny to use words instead single characters for custom >> user-specific conversions. >> >> What do you think? > > This should be in python-ideas, but I don't know if Andrew is subscribed, so > I'll reply here until it gets out of control. > > Andrew: Could you give a code sample of what you'd like to see? > > Eric. > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/andrew.svetlov%40gmail.com > From eric at trueblade.com Wed Mar 16 01:48:52 2011 From: eric at trueblade.com (Eric Smith) Date: Tue, 15 Mar 2011 20:48:52 -0400 Subject: [Python-ideas] [Python-Dev] User conversions in custom string.Formatter In-Reply-To: References: <4D800372.8000100@trueblade.com> Message-ID: <4D8008F4.2050402@trueblade.com> On 3/15/2011 8:41 PM, Andrew Svetlov wrote: > Well, let's discuss in python-ideas. > > I mean my custom formatter like: > > class Formatter(string.Formatter): > def convert_field(self, value, conversion): > if 'q' == conversion: > return qname(value) > elif 't' == conversion: > return format_datetime(value) > else: > return super(Formatter, self).convert_field(value, conversion) > > I can use it like: > > s = "{0!q}, {1!t}".format(a, b) That doesn't work. You need to do: s = Formatter().format("{0!q}, {1!t}", a, b) There's no way to extend the built in str.format to include user defined conversions. > !q and !t are my extensions to formatter conversions. It's good enough > but I prefer to see > > s = "{0!qname}, {1!time}".format(a, b) > > or something like that. 
It cannot break any existing rule (you still
> can use "{0!qname:<20}" for example) but you can
> use more readable names for converters.
>
> The only changed file is Objects/stringlib/string_format.h as far as I
> can see (I don't count documentation and unit tests).

Are you looking for those specific conversions, or some general purpose extension mechanism? I don't think a general purpose mechanism is worth the hassle, and I don't think your specific conversions are worth adding (not that I understand what they'd do, maybe you can convince everyone they're worthwhile).

Note that you can get some of what you want by specifying a wrapper class that overrides __repr__ and doing:

s = "{0!r}, {1!r}".format(QnameWrapper(a), DatetimeFormatWrapper(b))

From benjamin at python.org Wed Mar 16 02:00:12 2011
From: benjamin at python.org (Benjamin Peterson)
Date: Wed, 16 Mar 2011 01:00:12 +0000 (UTC)
Subject: [Python-ideas] Automatic comparisons by default
References: Message-ID: 
Guido van Rossum writes:
>
> On Tue, Mar 15, 2011 at 11:03 AM, Facundo Batista
> wrote:
> > Two very related proposals:
> >
> > 1. On "!=", if Python doesn't find __ne__, use "not __eq__()".

Isn't this implemented in Python 3?

From raymond.hettinger at gmail.com Wed Mar 16 02:12:18 2011
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Tue, 15 Mar 2011 18:12:18 -0700
Subject: [Python-ideas] Automatic comparisons by default
In-Reply-To: References: Message-ID: <00BE2A60-83D9-49F4-B106-F2A30759B11F@gmail.com>
>>
>>> Two very related proposals:
>>>
>>> 1. On "!=", if Python doesn't find __ne__, use "not __eq__()".
>
> Isn't this implemented in Python 3?

Yes, it is :-)

>>> class A:
...     def __eq__(self, other):
...         print('hello')
...         return True
...
>>> A() == A()
hello
True
>>> A() != A()
hello
False

It seems that very few core developers really know what is currently in Python 3. That is probably a by-product of being stuck with Python 2 at work.
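A script-sized restatement of the interactive session, for anyone who wants to check the Python 3 behaviour themselves (illustrative, not from the original message):

```python
# Python 3 derives != from a user-defined __eq__: the default
# object.__ne__ negates __eq__'s result unless __ne__ is overridden.
class A:
    def __eq__(self, other):
        return True

assert A() == A()
assert not (A() != A())      # != falls back to "not __eq__()"

class B:
    def __eq__(self, other):
        return True
    def __ne__(self, other): # an explicit __ne__ still wins
        return True

assert B() == B()
assert B() != B()
```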
Raymond

From solipsis at pitrou.net Wed Mar 16 02:18:09 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 16 Mar 2011 02:18:09 +0100
Subject: [Python-ideas] Automatic comparisons by default
References: <00BE2A60-83D9-49F4-B106-F2A30759B11F@gmail.com>
Message-ID: <20110316021809.45144798@pitrou.net>
On Tue, 15 Mar 2011 18:12:18 -0700
Raymond Hettinger wrote:
>
> It seems that very few core developers really know what is currently in Python 3. That is probably a by-product of being stuck with Python 2 at work.

I don't know about other core devs, but I'm equally ignorant about the specificities of __special__ methods in Python 2 and Python 3. I like to avoid having to deal with them :)

Regards

Antoine.

From facundobatista at gmail.com Wed Mar 16 02:21:22 2011
From: facundobatista at gmail.com (Facundo Batista)
Date: Tue, 15 Mar 2011 21:21:22 -0400
Subject: [Python-ideas] Automatic comparisons by default
In-Reply-To: <00BE2A60-83D9-49F4-B106-F2A30759B11F@gmail.com>
References: <00BE2A60-83D9-49F4-B106-F2A30759B11F@gmail.com>
Message-ID: 
On Tue, Mar 15, 2011 at 9:12 PM, Raymond Hettinger wrote:

> It seems that very few core developers really know what is currently in Python 3. That is probably a by-product of being stuck with Python 2 at work.

Yeah... actually, I tried it in Py3 after Benjamin's question, and then lost (inverted?) *15 minutes* understanding how that was coded in object.c, :|

So, this thread is officially closed, :)

Thank you all!!

Rusted-at-C--ly yours,

-- 
Facundo Blog: http://www.taniquetil.com.ar/plog/ PyAr: http://www.python.org/ar/ From andrew.svetlov at gmail.com Wed Mar 16 02:48:26 2011 From: andrew.svetlov at gmail.com (Andrew Svetlov) Date: Wed, 16 Mar 2011 03:48:26 +0200 Subject: [Python-ideas] [Python-Dev] User conversions in custom string.Formatter In-Reply-To: <4D801333.7060504@trueblade.com> References: <4D800372.8000100@trueblade.com> <4D8008F4.2050402@trueblade.com> <4D801333.7060504@trueblade.com> Message-ID: I can, but .parse() is thin wrapper around _string.formatter_parser, which is implemented in C (Objects/stringlib/string_format.h) and is opaque structure from python side. To override .parse() I should to reimplement the most part of this file. It's much easier to patch 'parse_field' and 'MarkupIterator_next' functions than reimplement all existing functionality. From eric at trueblade.com Wed Mar 16 02:32:35 2011 From: eric at trueblade.com (Eric Smith) Date: Tue, 15 Mar 2011 21:32:35 -0400 Subject: [Python-ideas] [Python-Dev] User conversions in custom string.Formatter In-Reply-To: References: <4D800372.8000100@trueblade.com> <4D8008F4.2050402@trueblade.com> Message-ID: <4D801333.7060504@trueblade.com> On 3/15/2011 9:07 PM, Andrew Svetlov wrote: > Ooh. Sorry. Of course I used > s = Formatter().format("{0!q}, {1!t}", a, b) > > I don't want to put those specific conversions into standard - that > doesn't make sense as you said. > MyFormatter().format(...) was intended to extend standard formatting > mechanic and it works pretty well for me. > > But I can use only single-character as converter names. > PEP 3101 doesn't limit that explicitly, it only says: > - there are !s and !r (later added !a) converters > - a custom Formatter class can define additional conversion flags > > You cannot specify multi-char converter name because > string:Formatter.parse (and maybe string:Formatter.get_field) > cannot parse that form. 
You can get whatever behavior you want by overriding the .parse() method in addition to convert_field.

From andrew.svetlov at gmail.com Wed Mar 16 02:51:10 2011
From: andrew.svetlov at gmail.com (Andrew Svetlov)
Date: Wed, 16 Mar 2011 03:51:10 +0200
Subject: [Python-ideas] [Python-Dev] User conversions in custom string.Formatter
In-Reply-To: References: <4D800372.8000100@trueblade.com> <4D8008F4.2050402@trueblade.com> <4D801333.7060504@trueblade.com>
Message-ID: 
On Wed, Mar 16, 2011 at 3:48 AM, Andrew Svetlov wrote:
> I can, but .parse() is thin wrapper around _string.formatter_parser,
> which is implemented in C
> (Objects/stringlib/string_format.h) and is opaque structure from python side.
> To override .parse() I should to reimplement the most part of this file.
>
> It's much easier to patch 'parse_field' and 'MarkupIterator_next'
> functions than reimplement all existing functionality.
>
Also please note I can easily override string.Formatter.format_field if needed (while generally it's not required, __format__ works very well) but it's very hard to override .parse.

From larry at hastings.org Wed Mar 16 05:49:38 2011
From: larry at hastings.org (Larry Hastings)
Date: Wed, 16 Mar 2011 00:49:38 -0400
Subject: [Python-ideas] Assignment decorators (Re: The Descriptor Protocol...)
In-Reply-To: <4D757EB9.8080308@hastings.org>
References: <4D6E0847.5060304@gmail.com> <1C41A0E1-08EB-44AA-B03E-01C66508AF19@gmail.com> <4D6FFDF9.3060507@canterbury.ac.nz> <4D757EB9.8080308@hastings.org>
Message-ID: <4D804162.60903@hastings.org>
In case you'd like to play with assignment decorators using what's possible with current syntax, I present the "assign" decorator:

def assign(fn):
    return fn(fn.__name__)

class C(object):
    @assign
    def x(name):
        return "<<" + name + ">>"

c = C()
print(c.x)

This prints "<<x>>" in Python 3.
A function was called at the definition time of "x"; it knew the name of the variable, and it was able to compute the value of the class attribute, and you only had to specify the variable name once. I'm not suggesting this be used for production code--it's awful magic. But this could be used to prototype use cases for assignment decorators. /larry/ From greg.ewing at canterbury.ac.nz Wed Mar 16 10:28:33 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Wed, 16 Mar 2011 22:28:33 +1300 Subject: [Python-ideas] A user story concerning things knowing their own names Message-ID: <4D8082C1.30400@canterbury.ac.nz> I just experienced an obscure bug resulting from copying and pasting an overridable_property and forgetting to change the passed-in name: content_size = overridable_property('size', "Size of the content area.") which should have been content_size = overridable_property('content_size', "Size of the content area.") This was quite difficult to track down, because the result was to effectively make it an alias of *another* property I have called 'size'. They happen to be near enough to the same thing that the error went unnoticed for quite some time. If I could write my overridable_property declarations without having to repeat the name, this kind of thing would not be able to happen. -- Greg From greg.ewing at canterbury.ac.nz Wed Mar 16 10:41:16 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Wed, 16 Mar 2011 22:41:16 +1300 Subject: [Python-ideas] Automatic comparisons by default In-Reply-To: <8AAB6690-7EFE-441C-8D2B-0B958E1F74C4@gmail.com> References: <37204544-82FA-4938-A52A-615596DAD951@masklinn.net> <8AAB6690-7EFE-441C-8D2B-0B958E1F74C4@gmail.com> Message-ID: <4D8085BC.3010107@canterbury.ac.nz> Raymond Hettinger wrote: > I think we gave this up when rich comparisons were accepted into the language. I don't understand that argument. An object that wants to can always override all six methods and do whatever it wants. 
There's no reason that a sensible system of defaults needs to interfere with that. -- Greg From steve at pearwood.info Wed Mar 16 14:37:35 2011 From: steve at pearwood.info (Steven D'Aprano) Date: Thu, 17 Mar 2011 00:37:35 +1100 Subject: [Python-ideas] Automatic comparisons by default In-Reply-To: <4D8003A3.9080409@canterbury.ac.nz> References: <4D8003A3.9080409@canterbury.ac.nz> Message-ID: <4D80BD1F.6050700@pearwood.info> Greg Ewing wrote: > Raymond Hettinger wrote: > >> It's a little more complicated than "if Python doesn't find ...". >> In Python 3, object() already has __lt__, __le__, __gt__, and __ge__, >> so those methods always get found. > > This whole mess makes me think it was a mistake to throw > out __cmp__ in its entirety. > > The relationship between __cmp__ and the other methods > was tricky under the old scheme because there were fallbacks > in both directions. But I think it should be possible to > come up with a new scheme that's well-behaved based on > the following ideas: > > 1) __cmp__ returns one of four possible results: LessThan, > EqualTo, GreaterThan or NotEqual (the latter for unordered > types). Wouldn't that last one be better called Unordered rather than NotEqual? More explicit and less likely to fool people into assuming total ordering ("NotEqual, that implies one of LessThan or GreaterThan, right?"). And shouldn't there be provision for __cmp__ to return NotImplemented, as rich comparison methods currently do, so as to allow the other operand to have a say?
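The gap-filling behavior debated in this subthread is close to what the stdlib's functools.total_ordering class decorator already provides: given __eq__ and one ordering method, it derives the remaining rich comparisons, and it cooperates with NotImplemented so the other operand gets a say. A minimal sketch:

```python
from functools import total_ordering

@total_ordering
class Version:
    """Orders by a single integer; total_ordering derives <=, >, >= from __eq__ and __lt__."""
    def __init__(self, n):
        self.n = n
    def __eq__(self, other):
        if not isinstance(other, Version):
            return NotImplemented  # let the other operand have a say
        return self.n == other.n
    def __lt__(self, other):
        if not isinstance(other, Version):
            return NotImplemented
        return self.n < other.n

print(Version(1) <= Version(2), Version(3) >= Version(3))  # True True
```

Note this only covers total orderings; it doesn't address the "unordered but comparable for equality" case (like sets) that the four-valued __cmp__ proposal is trying to capture.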
-- Steven From anikom15 at gmail.com Wed Mar 16 14:52:17 2011 From: anikom15 at gmail.com (Westley Martínez) Date: Wed, 16 Mar 2011 06:52:17 -0700 Subject: [Python-ideas] A user story concerning things knowing their own names In-Reply-To: <4D8082C1.30400@canterbury.ac.nz> References: <4D8082C1.30400@canterbury.ac.nz> Message-ID: <1300283537.2509.0.camel@localhost.localdomain> On Wed, 2011-03-16 at 22:28 +1300, Greg Ewing wrote: > I just experienced an obscure bug resulting from copying > and pasting an overridable_property and forgetting to > change the passed-in name: > > content_size = overridable_property('size', > "Size of the content area.") > > which should have been > > content_size = overridable_property('content_size', > "Size of the content area.") > > This was quite difficult to track down, because the result > was to effectively make it an alias of *another* property > I have called 'size'. They happen to be near enough to the same > thing that the error went unnoticed for quite some time. > > If I could write my overridable_property declarations > without having to repeat the name, this kind of thing would > not be able to happen. > If it went unnoticed, what's the matter? From ncoghlan at gmail.com Wed Mar 16 14:58:44 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 16 Mar 2011 09:58:44 -0400 Subject: [Python-ideas] A user story concerning things knowing their own names In-Reply-To: <1300283537.2509.0.camel@localhost.localdomain> References: <4D8082C1.30400@canterbury.ac.nz> <1300283537.2509.0.camel@localhost.localdomain> Message-ID: On Wed, Mar 16, 2011 at 9:52 AM, Westley Martínez wrote: > If it went unnoticed, what's the matter? Unnoticed just means it was a silent failure rather than a noisy one. It doesn't mean the bug wasn't resulting in erroneous output. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com |
Brisbane, Australia From larry at hastings.org Wed Mar 16 16:10:21 2011 From: larry at hastings.org (Larry Hastings) Date: Wed, 16 Mar 2011 11:10:21 -0400 Subject: [Python-ideas] A user story concerning things knowing their own names In-Reply-To: <4D8082C1.30400@canterbury.ac.nz> References: <4D8082C1.30400@canterbury.ac.nz> Message-ID: <4D80D2DD.7050509@hastings.org> On 03/16/2011 05:28 AM, Greg Ewing wrote: > If I could write my overridable_property declarations > without having to repeat the name, this kind of thing would > not be able to happen. As I suggested in my email on the Assignment Decorators thread this morning, you could achieve this in current Python, no extension needed:

def assign(fn):
    return fn(fn.__name__)

@assign
def content_size(name):
    return overridable_property(name, "Size of the content area.")

How bad do you want it? ;-) /larry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From fuzzyman at gmail.com Wed Mar 16 17:11:29 2011 From: fuzzyman at gmail.com (Michael Foord) Date: Wed, 16 Mar 2011 12:11:29 -0400 Subject: [Python-ideas] A user story concerning things knowing their own names In-Reply-To: <4D80D2DD.7050509@hastings.org> References: <4D8082C1.30400@canterbury.ac.nz> <4D80D2DD.7050509@hastings.org> Message-ID: On 16 March 2011 11:10, Larry Hastings wrote: > > On 03/16/2011 05:28 AM, Greg Ewing wrote: > > If I could write my overridable_property declarations > without having to repeat the name, this kind of thing would > not be able to happen. > > > As I suggested in my email on the Assignment Decorators thread this > morning, you could achieve this in current Python, no extension needed: > > def assign(fn): > return fn(fn.__name__) > > @assign > def content_size(name): > return overridable_property(name, "Size of the content area.") > > > And building on this slightly you could do the following for namedtuple: >>> from collections import namedtuple >>> import inspect >>> def make_namedtuple(fn): ... 
args = ' '.join(inspect.getargspec(fn).args) ... return namedtuple(fn.__name__, args) ... >>> @make_namedtuple ... def Point(x, y): pass ... >>> p = Point(1, 2) >>> p Point(x=1, y=2) All the best, Michael > How bad do you want it? ;-) > > > /larry/ > > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > http://mail.python.org/mailman/listinfo/python-ideas > > -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From ianb at colorstudy.com Wed Mar 16 18:13:14 2011 From: ianb at colorstudy.com (Ian Bicking) Date: Wed, 16 Mar 2011 12:13:14 -0500 Subject: [Python-ideas] A user story concerning things knowing their own names In-Reply-To: <4D8082C1.30400@canterbury.ac.nz> References: <4D8082C1.30400@canterbury.ac.nz> Message-ID: I'll note this general problem is also present in any of the declarative ORMs, which use silly hacks to tell descriptors their name. Like you have: class Table(ORM): name = StringColumn() Another case where I've noticed a problem is any kind of descriptor that needs its own storage; the name of the property gives a possible stable namespace for the value, but without the name you either have to pass in a name or the storage area becomes volatile. For instance, a read-only descriptor: http://svn.colorstudy.com/home/ianb/recipes/setonce.py You can solve this in, e.g., the ORM class by doing things when a class is created -- but it requires very specific cooperation between the class and the descriptor. Everyone really does it different ways. 
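The "doing things when a class is created" cooperation described above can be sketched with a metaclass; the names here (NamedColumn, TableMeta) are hypothetical illustrations, not any particular ORM's API:

```python
class NamedColumn:
    """A descriptor that is told its attribute name at class-creation time."""
    def __init__(self):
        self.name = None  # filled in by the metaclass below
    def __get__(self, obj, owner=None):
        if obj is None:
            return self
        return obj.__dict__.get(self.name)
    def __set__(self, obj, value):
        obj.__dict__[self.name] = value

class TableMeta(type):
    def __new__(mcls, clsname, bases, ns):
        cls = super().__new__(mcls, clsname, bases, ns)
        # Tell each descriptor the attribute name it was bound to.
        for attr, value in ns.items():
            if isinstance(value, NamedColumn):
                value.name = attr
        return cls

class Table(metaclass=TableMeta):
    name = NamedColumn()

t = Table()
t.name = "users"
print(t.name)  # users
```

Python 3.6 later standardized exactly this pattern as the descriptor __set_name__ hook (PEP 487), which removes the need for a custom metaclass.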
One could imagine an extension of the descriptor protocol, where on class creation you called something like attr.__addtoclass__(cls, name) (for all attributes of the class that define that method) -- which if you just want the name you'd simply save that name in your object and return self. On Wed, Mar 16, 2011 at 4:28 AM, Greg Ewing wrote: > I just experienced an obscure bug resulting from copying > and pasting an overridable_property and forgetting to > change the passed-in name: > > content_size = overridable_property('size', > "Size of the content area.") > > which should have been > > content_size = overridable_property('content_size', > "Size of the content area.") > > This was quite difficult to track down, because the result > was to effectively make it an alias of *another* property > I have called 'size'. They happen to be near enough to the same > thing that the error went unnoticed for quite some time. > > If I could write my overridable_property declarations > without having to repeat the name, this kind of thing would > not be able to happen. > > -- > Greg > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > http://mail.python.org/mailman/listinfo/python-ideas > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Wed Mar 16 19:47:45 2011 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 16 Mar 2011 13:47:45 -0500 Subject: [Python-ideas] A user story concerning things knowing their own names In-Reply-To: References: <4D8082C1.30400@canterbury.ac.nz> Message-ID: On 3/16/11 12:13 PM, Ian Bicking wrote: > I'll note this general problem is also present in any of the declarative ORMs, > which use silly hacks to tell descriptors their name. 
Like you have: > > class Table(ORM): > name = StringColumn() > > Another case where I've noticed a problem is any kind of descriptor that needs > its own storage; the name of the property gives a possible stable namespace for > the value, but without the name you either have to pass in a name or the storage > area becomes volatile. For instance, a read-only descriptor: > http://svn.colorstudy.com/home/ianb/recipes/setonce.py > > You can solve this in, e.g., the ORM class by doing things when a class is > created -- but it requires very specific cooperation between the class and the > descriptor. Everyone really does it different ways. > > One could imagine an extension of the descriptor protocol, where on class > creation you called something like attr.__addtoclass__(cls, name) (for all > attributes of the class that define that method) -- which if you just want the > name you'd simply save that name in your object and return self. If we were to extend the descriptor protocol, I think it would be better to extend it by providing __get_ex__(self, instance, owner, name), __set_ex__(self, instance, name, value), etc. methods that would be called in preference over the current __get__() and __set__() methods. This allows descriptor objects to be reused. We do this reasonably often with the Traits package (which overrides __getattribute__() to essentially implement this __get_ex__()/__set_ex__() protocol by different names). 
For example, we have a complicated trait for specifying colors: ColorTrait = Trait("black", Tuple, List, Str, color_table) It would be nice to simply reuse this object everywhere: class Figure(HasTraits): background = ColorTrait foreground = ColorTrait The alternative is to only provide factories such that you always get a new descriptor: ColorTrait = lambda: Trait("black", Tuple, List, Str, color_table) class Figure(HasTraits): background = ColorTrait() foreground = ColorTrait() However, this makes these new descriptors inconsistent with current descriptors, which can be shared between classes. It also prevents you from passing around descriptor objects as first-class citizens. For example, I might want to ask the ColorTrait for its constituents in order to construct a new one that has a different default value. I can't ask that of a factory function. This objection holds for any proposal that requires the descriptor object itself to hold onto its name, no matter how it acquires that name. Personally, I don't want descriptors that know their own name; I want descriptors that can be told what name to use for each operation. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From tjreedy at udel.edu Wed Mar 16 21:49:29 2011 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 16 Mar 2011 16:49:29 -0400 Subject: [Python-ideas] A user story concerning things knowing their own names In-Reply-To: References: <4D8082C1.30400@canterbury.ac.nz> <4D80D2DD.7050509@hastings.org> Message-ID: On 3/16/2011 12:11 PM, Michael Foord wrote: > > On 16 March 2011 11:10, Larry Hastings As I suggested in my email on the Assignment Decorators thread this > morning, you could achieve this in current Python, no extension needed: > > def assign(fn): > return fn(fn.__name__) > > @assign > def content_size(name): > return overridable_property(name, "Size of the content area.") > And building on this sightly you could do the following for namedtuple: > >>> from collections import namedtuple > >>> import inspect > >>> def make_namedtuple(fn): > ... args = ' '.join(inspect.getargspec(fn).args) > ... return namedtuple(fn.__name__, args) > ... > >>> @make_namedtuple > ... def Point(x, y): pass > ... > >>> p = Point(1, 2) > >>> p > Point(x=1, y=2) If make_namedtuple were added to collections, then one could import *that* instead of 'namedtuple' itself. -- Terry Jan Reedy From ianb at colorstudy.com Wed Mar 16 22:06:29 2011 From: ianb at colorstudy.com (Ian Bicking) Date: Wed, 16 Mar 2011 16:06:29 -0500 Subject: [Python-ideas] A user story concerning things knowing their own names In-Reply-To: References: <4D8082C1.30400@canterbury.ac.nz> Message-ID: On Wed, Mar 16, 2011 at 1:47 PM, Robert Kern wrote: > On 3/16/11 12:13 PM, Ian Bicking wrote: > >> I'll note this general problem is also present in any of the declarative >> ORMs, >> which use silly hacks to tell descriptors their name. 
Like you have: >> >> class Table(ORM): >> name = StringColumn() >> >> Another case where I've noticed a problem is any kind of descriptor that >> needs >> its own storage; the name of the property gives a possible stable >> namespace for >> the value, but without the name you either have to pass in a name or the >> storage >> area becomes volatile. For instance, a read-only descriptor: >> http://svn.colorstudy.com/home/ianb/recipes/setonce.py >> >> You can solve this in, e.g., the ORM class by doing things when a class is >> created -- but it requires very specific cooperation between the class and >> the >> descriptor. Everyone really does it different ways. >> >> One could imagine an extension of the descriptor protocol, where on class >> creation you called something like attr.__addtoclass__(cls, name) (for all >> attributes of the class that define that method) -- which if you just want >> the >> name you'd simply save that name in your object and return self. >> > > If we were to extend the descriptor protocol, I think it would be better to > extend it by providing __get_ex__(self, instance, owner, name), > __set_ex__(self, instance, name, value), etc. methods that would be called > in preference over the current __get__() and __set__() methods. This allows > descriptor objects to be reused. > > We do this reasonably often with the Traits package (which overrides > __getattribute__() to essentially implement this __get_ex__()/__set_ex__() > protocol by different names). For example, we have a complicated trait for > specifying colors: > > ColorTrait = Trait("black", Tuple, List, Str, color_table) > > It would be nice to simply reuse this object everywhere: > > class Figure(HasTraits): > background = ColorTrait > foreground = ColorTrait > As I was thinking of __addtoclass__ it would address this, though at class instantiation time instead of attribute access time. 
Specifically it would be like there was a fixup stage that would look like: def fixup(cls): for class_instance in cls.__mro__: for name, value in class_instance.__dict__.items(): method = getattr(value, '__addtoclass__', None) if method is not None: new_value = method(cls, name) if new_value is not value: setattr(cls, name, new_value) If ColorTrait returns a new instance when __addtoclass__ is called, then it can, and all of its instances will be class-specific (and not shared with subclasses). Ian -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug.hellmann at gmail.com Wed Mar 16 22:35:41 2011 From: doug.hellmann at gmail.com (Doug Hellmann) Date: Wed, 16 Mar 2011 17:35:41 -0400 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. In-Reply-To: References: Message-ID: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> On Mar 15, 2011, at 3:11 PM, Tarek Ziad? wrote: > Hey, > > As I told Doug during Pycon, I think it would be a good idea to link > his PyMOTW pages to our modules documentation in docs.python.org so > people have more examples etc. I am willing to do the work to add the links, if there is a consensus that it's acceptable. I've copied Georg, since he (more or less) owns the documentation, and I don't know if he follows the ideas list. 
Doug From masklinn at masklinn.net Wed Mar 16 22:37:45 2011 From: masklinn at masklinn.net (Masklinn) Date: Wed, 16 Mar 2011 22:37:45 +0100 Subject: [Python-ideas] A user story concerning things knowing their own names In-Reply-To: References: <4D8082C1.30400@canterbury.ac.nz> <4D80D2DD.7050509@hastings.org> Message-ID: On 2011-03-16, at 21:49 , Terry Reedy wrote: > On 3/16/2011 12:11 PM, Michael Foord wrote: >> >> On 16 March 2011 11:10, Larry Hastings > As I suggested in my email on the Assignment Decorators thread this >> morning, you could achieve this in current Python, no extension needed: >> >> def assign(fn): >> return fn(fn.__name__) >> >> @assign >> def content_size(name): >> return overridable_property(name, "Size of the content area.") > >> And building on this sightly you could do the following for namedtuple: >> >>> from collections import namedtuple >> >>> import inspect >> >>> def make_namedtuple(fn): >> ... args = ' '.join(inspect.getargspec(fn).args) >> ... return namedtuple(fn.__name__, args) >> ... >> >>> @make_namedtuple >> ... def Point(x, y): pass >> ... >> >>> p = Point(1, 2) >> >>> p >> Point(x=1, y=2) > > If make_namedtuple were added to collections, then one could import *that* instead of 'namedtuple' itself. Since `namedtuple` is a function, wouldn't it be possible to add this feature to the existing namedtuple, so that the number of names doing just about the same thing in slightly different ways doesn't explode? (it's just a different way of providing the exact same arguments after all) From doug.hellmann at gmail.com Wed Mar 16 22:40:45 2011 From: doug.hellmann at gmail.com (Doug Hellmann) Date: Wed, 16 Mar 2011 17:40:45 -0400 Subject: [Python-ideas] Automatic comparisons by default In-Reply-To: References: Message-ID: <914103E7-6317-4EA2-9B90-E9C3021FB6E9@gmail.com> On Mar 15, 2011, at 2:47 PM, Guido van Rossum wrote: > On Tue, Mar 15, 2011 at 11:03 AM, Facundo Batista > wrote: >> Two very related proposals: >> >> 1. 
On "!=", if Python doesn't find __ne__, use "not __eq__()". > > +1 on this one. I cannot count how often I have written a base class > for this sole purpose. And I cannot think of any cases where it would > be the wrong thing, *except* those damn IEEE NaNs (which we can > special-case). > >> 2. On "<=", if Python doesn't find __le__, use "__eq__() or __lt__()". >> The same for ">=", of course. > > Big -1 for this. Inequalities (orderings) are much more subtle than > equalities. See e.g. sets. > > I'd be okay with offering a standard base class to supply this (#2) > behavior though. The functools.total_ordering class decorator fills in the gaps, right? Doug From masklinn at masklinn.net Wed Mar 16 22:48:19 2011 From: masklinn at masklinn.net (Masklinn) Date: Wed, 16 Mar 2011 22:48:19 +0100 Subject: [Python-ideas] Automatic comparisons by default In-Reply-To: <914103E7-6317-4EA2-9B90-E9C3021FB6E9@gmail.com> References: <914103E7-6317-4EA2-9B90-E9C3021FB6E9@gmail.com> Message-ID: <1E3B7053-8A4D-419B-A806-5450DFA031AB@masklinn.net> On 2011-03-16, at 22:40 , Doug Hellmann wrote: > On Mar 15, 2011, at 2:47 PM, Guido van Rossum wrote: >> On Tue, Mar 15, 2011 at 11:03 AM, Facundo Batista >> wrote: >>> Two very related proposals: >>> >>> 1. On "!=", if Python doesn't find __ne__, use "not __eq__()". >> >> +1 on this one. I cannot count how often I have written a base class >> for this sole purpose. And I cannot think of any cases where it would >> be the wrong thing, *except* those damn IEEE NaNs (which we can >> special-case). >> >>> 2. On "<=", if Python doesn't find __le__, use "__eq__() or __lt__()". >>> The same for ">=", of course. >> >> Big -1 for this. Inequalities (orderings) are much more subtle than >> equalities. See e.g. sets. >> >> I'd be okay with offering a standard base class to supply this (#2) >> behavior though. > > The functools.total_ordering class decorator fills in the gaps, right? What is it doing in functools? 
I thought functools was for higher-order functions (so functions manipulating functions), shouldn't something like a class decorator be in a classtools/typetools module? (also, since I apparently completely missed this, what was the rationale of making it a class decorator rather than, say, a mixin?) From raymond.hettinger at gmail.com Wed Mar 16 23:04:52 2011 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Wed, 16 Mar 2011 15:04:52 -0700 Subject: [Python-ideas] A user story concerning things knowing their own names In-Reply-To: References: <4D8082C1.30400@canterbury.ac.nz> <4D80D2DD.7050509@hastings.org> Message-ID: On Mar 16, 2011, at 1:49 PM, Terry Reedy wrote: > If make_namedtuple were added to collections, then one could import *that* instead of 'namedtuple' itself. Post a recipe somewhere. My bet is that it will have a near zero adoption rate. -1 on all of these random ideas to solve what is basically a non-problem. The name of a named tuple, or a class, or a function, or a variable typically is used many times, not just in the definition. If you make up a new factory function that uses inspect magic to guess the user's intended target name, you will have introduced unnecessary complexity and will likely introduce unexpected behaviors and bugs. Raymond From greg.ewing at canterbury.ac.nz Wed Mar 16 23:53:32 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 17 Mar 2011 11:53:32 +1300 Subject: [Python-ideas] Automatic comparisons by default In-Reply-To: <4D80BD1F.6050700@pearwood.info> References: <4D8003A3.9080409@canterbury.ac.nz> <4D80BD1F.6050700@pearwood.info> Message-ID: <4D813F6C.7020604@canterbury.ac.nz> Steven D'Aprano wrote: > Greg Ewing wrote: > >> 1) __cmp__ returns one of four possible results: LessThan, >> EqualTo, GreaterThan or NotEqual (the latter for unordered >> types). > > Wouldn't that last one be better called Unordered rather than NotEqual? No, I don't think so. 
To me, "unordered" is an adjective that applies to a class of objects, not to individual objects. If you ask whether two particular objects of an unordered class are equal, the possible answers are "yes, they're equal" or "no, they're not equal". And either way, they're still "unordered". > And shouldn't there be provision for __cmp__ to return NotImplemented, Yes, as always. -- Greg From greg.ewing at canterbury.ac.nz Thu Mar 17 00:03:59 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 17 Mar 2011 12:03:59 +1300 Subject: [Python-ideas] A user story concerning things knowing their own names In-Reply-To: <1300283537.2509.0.camel@localhost.localdomain> References: <4D8082C1.30400@canterbury.ac.nz> <1300283537.2509.0.camel@localhost.localdomain> Message-ID: <4D8141DF.4050909@canterbury.ac.nz> Westley Martínez wrote: > If it went unnoticed, what's the matter? It only went unnoticed because until recently I hadn't run any code that depended critically on it being right. A bug is still a bug even if you haven't found it yet! When it did become apparent that something was wrong, it still took quite a long time to track down the cause. I was looking for a bug in the get_content_size() method, whereas it was actually calling a different method altogether. Having attribute accesses go astray like that is not something you expect. -- Greg From greg.ewing at canterbury.ac.nz Thu Mar 17 00:18:56 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 17 Mar 2011 12:18:56 +1300 Subject: [Python-ideas] A user story concerning things knowing their own names In-Reply-To: <4D80D2DD.7050509@hastings.org> References: <4D8082C1.30400@canterbury.ac.nz> <4D80D2DD.7050509@hastings.org> Message-ID: <4D814560.2000101@canterbury.ac.nz> Larry Hastings wrote: > def assign(fn): > return fn(fn.__name__) > > @assign > def content_size(name): > return overridable_property(name, "Size of the content area.") > > How bad do you want it? 
;-) Not quite badly enough to replace all my existing overridable_property declarations with something that ugly. :-) But I appreciate the suggestion, and I'll keep it in mind next time I'm designing anything similar. -- Greg From greg.ewing at canterbury.ac.nz Thu Mar 17 01:02:05 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 17 Mar 2011 13:02:05 +1300 Subject: [Python-ideas] A user story concerning things knowing their own names In-Reply-To: References: <4D8082C1.30400@canterbury.ac.nz> Message-ID: <4D814F7D.90703@canterbury.ac.nz> Robert Kern wrote: > Personally, I don't want descriptors that know their own name; I > want descriptors that can be told what name to use for each operation. This is a somewhat different use case from mine. For overridable_property, I *do* want the name to be known when the descriptor is created, so that it can precompute some things based on it. -- Greg From doug.hellmann at gmail.com Thu Mar 17 01:03:19 2011 From: doug.hellmann at gmail.com (Doug Hellmann) Date: Wed, 16 Mar 2011 20:03:19 -0400 Subject: [Python-ideas] Automatic comparisons by default In-Reply-To: <1E3B7053-8A4D-419B-A806-5450DFA031AB@masklinn.net> References: <914103E7-6317-4EA2-9B90-E9C3021FB6E9@gmail.com> <1E3B7053-8A4D-419B-A806-5450DFA031AB@masklinn.net> Message-ID: On Mar 16, 2011, at 5:48 PM, Masklinn wrote: > On 2011-03-16, at 22:40 , Doug Hellmann wrote: >> On Mar 15, 2011, at 2:47 PM, Guido van Rossum wrote: >>> On Tue, Mar 15, 2011 at 11:03 AM, Facundo Batista >>> wrote: >>>> Two very related proposals: >>>> >>>> 1. On "!=", if Python doesn't find __ne__, use "not __eq__()". >>> >>> +1 on this one. I cannot count how often I have written a base class >>> for this sole purpose. And I cannot think of any cases where it would >>> be the wrong thing, *except* those damn IEEE NaNs (which we can >>> special-case). >>> >>>> 2. On "<=", if Python doesn't find __le__, use "__eq__() or __lt__()". >>>> The same for ">=", of course. 
>>> >>> Big -1 for this. Inequalities (orderings) are much more subtle than >>> equalities. See e.g. sets. >>> >>> I'd be okay with offering a standard base class to supply this (#2) >>> behavior though. >> >> The functools.total_ordering class decorator fills in the gaps, right? > What is it doing in functools? I thought functools was for higher-order functions (so functions manipulating functions), shouldn't something like a class decorator be in a classtools/typetools module? > > (also, since I apparently completely missed this, what was the rationale of making it a class decorator rather than, say, a mixin?) I couldn't really say. I just found it, I didn't put it there. :-) Doug From fuzzyman at gmail.com Thu Mar 17 02:15:35 2011 From: fuzzyman at gmail.com (Michael Foord) Date: Wed, 16 Mar 2011 21:15:35 -0400 Subject: [Python-ideas] A user story concerning things knowing their own names In-Reply-To: <4D814560.2000101@canterbury.ac.nz> References: <4D8082C1.30400@canterbury.ac.nz> <4D80D2DD.7050509@hastings.org> <4D814560.2000101@canterbury.ac.nz> Message-ID: On 16 March 2011 19:18, Greg Ewing wrote: > Larry Hastings wrote: > > def assign(fn): >> return fn(fn.__name__) >> >> @assign >> def content_size(name): >> return overridable_property(name, "Size of the content area.") >> How bad do you want it? ;-) >> > > Not quite badly enough to replace all my existing > overridable_property declarations with something that > ugly. :-) But I appreciate the suggestion, and I'll > keep it in mind next time I'm designing anything > similar. You should be able to replace it with something like: @do_something def content_size(): return "Size of the content area." Where do_something calls the function to get the string argument and takes the name from the function.__name__ (and uses them to create and return the overridable_property). 
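A sketch of the helper described above (do_something is the placeholder name from the message; since overridable_property isn't shown here, a plain property stands in for it):

```python
def do_something(fn):
    """Hypothetical helper: take the attribute name from fn.__name__ and
    the docstring from fn()'s return value, then build the property."""
    name, doc = fn.__name__, fn()
    # Stand-in for overridable_property(name, doc): store the value in the
    # instance dict under '_' + name.
    def getter(self):
        return getattr(self, '_' + name)
    def setter(self, value):
        setattr(self, '_' + name, value)
    return property(getter, setter, doc=doc)

class Widget:
    @do_something
    def content_size():
        return "Size of the content area."

w = Widget()
w.content_size = (640, 480)
print(w.content_size)               # (640, 480)
print(Widget.content_size.__doc__)  # Size of the content area.
```

Note the decorated function deliberately takes no arguments: it exists only to carry the name and produce the docstring, so the name is written exactly once.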
Michael > > -- > Greg > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > http://mail.python.org/mailman/listinfo/python-ideas > -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjreedy at udel.edu Thu Mar 17 06:22:15 2011 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 17 Mar 2011 01:22:15 -0400 Subject: [Python-ideas] A user story concerning things knowing their own names In-Reply-To: References: <4D8082C1.30400@canterbury.ac.nz> <4D80D2DD.7050509@hastings.org> Message-ID: On 3/16/2011 4:49 PM, Terry Reedy wrote: > On 3/16/2011 12:11 PM, Michael Foord wrote: >> >> On 16 March 2011 11:10, Larry Hastings >> > As I suggested in my email on the Assignment Decorators thread this >> morning, you could achieve this in current Python, no extension needed: >> >> def assign(fn): >> return fn(fn.__name__) >> >> @assign >> def content_size(name): >> return overridable_property(name, "Size of the content area.") > >> And building on this slightly you could do the following for namedtuple: >> >>> from collections import namedtuple >> >>> import inspect >> >>> def make_namedtuple(fn): >> ... args = ' '.join(inspect.getargspec(fn).args) >> ... return namedtuple(fn.__name__, args) >> ... >> >>> @make_namedtuple >> ... def Point(x, y): pass >> ... >> >>> p = Point(1, 2) >> >>> p >> Point(x=1, y=2) > > If make_namedtuple were added to collections, then one could import > *that* instead of 'namedtuple' itself. I am not sure how serious I was when I wrote that, but after further thought, I retract it as a serious suggestion, if indeed it was ;-). 
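A note on the quoted recipe: inspect.getargspec was deprecated for years and finally removed in Python 3.11, so on modern Pythons the same trick is written with inspect.signature:

```python
import inspect
from collections import namedtuple

def make_namedtuple(fn):
    """Build a namedtuple class from a function's name and parameter names."""
    fields = list(inspect.signature(fn).parameters)
    return namedtuple(fn.__name__, fields)

@make_namedtuple
def Point(x, y):
    pass

p = Point(1, 2)
print(p)  # Point(x=1, y=2)
```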
I think both decorators are 'cute' as examples of what one can do with decorators, and possible cookbook recipes, but not something we really need in the stdlib. I mostly said something because they do not require new syntax and I do not like any of the new syntax proposals. There are many aspects of the rationale for decorators that do not apply to the assignment case. 1. Explicit function wrapping requires the name in triplicate rather than duplicate. Moreover, some interface wrappers for functions need fairly long, multi-component names. I am pretty sure such need is much less for named tuples and properties. 2. The triplicates are *not* on the same line but may be separated by an arbitrary number of lines. This not only impedes checking that all three are the same, but may also impede understanding of the function code. That sometimes really requires knowing how or what the function will be wrapped as. This is greatly aided by bringing the wrapping function back up to (above) the def line. For instance, @classmethod and @staticmethod explain the non-standard signature with 'cls' or '' instead of 'self'. For another example, twisted.internet.defer.inlineCallbacks is a decorator that wraps a generator function as a twisted.internet.defer.Deferred object. Knowing that a particular generator function implements a series of data/error callbacks for twisted greatly helps in understanding it. The idea is 'twisted' enough as it is ;-).
-- Terry Jan Reedy From mikegraham at gmail.com Thu Mar 17 14:48:23 2011 From: mikegraham at gmail.com (Mike Graham) Date: Thu, 17 Mar 2011 09:48:23 -0400 Subject: [Python-ideas] Automatic comparisons by default In-Reply-To: <1E3B7053-8A4D-419B-A806-5450DFA031AB@masklinn.net> References: <914103E7-6317-4EA2-9B90-E9C3021FB6E9@gmail.com> <1E3B7053-8A4D-419B-A806-5450DFA031AB@masklinn.net> Message-ID: On Wed, Mar 16, 2011 at 5:48 PM, Masklinn wrote: > (also, since I apparently completely missed this, what was the rationale of making it a class decorator rather than, say, a mixin?) Perhaps the better question is "Why do we ever do mixins through inheritance?" From ericsnowcurrently at gmail.com Thu Mar 17 16:49:04 2011 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Thu, 17 Mar 2011 09:49:04 -0600 Subject: [Python-ideas] exec_closure (was Re: Assignment decorators) Message-ID: During sprints I spent some time getting cozy with the python internals and the result is an exec_closure function I've put in an extension module. Essentially, it exposes the PyEval_EvalCodeEx function, so you can do execs with closures and arbitrary code objects, like Nick suggested. He more or less mentored me through this, my first foray into the Underlying C code, so thanks Nick! In the next week I intend on cleaning it up, putting it on PyPI, and writing a bit about it. One example I hope to make is how to simulate a def-from using a decorator. My hope is to add new, meaningful capability to python, so we'll see how it goes. -eric On Thu, Mar 10, 2011 at 7:31 PM, Nick Coghlan wrote: > On Thu, Mar 10, 2011 at 2:02 PM, Guido van Rossum > wrote: > > On Thu, Mar 10, 2011 at 11:41 AM, Paul Moore > wrote: > >> On 10 March 2011 12:49, Nick Coghlan wrote: > >>> I actually agree it is a major weakness of the syntax. You can play > >>> games with "from" to rearrange the line. 
For example: > >>> > >>> def name from builder(param_spec): > >>> code_block > >>> > >>> as sugar for: > >>> > >>> name = builder("name", param_spec_obj, code_obj) > >> > >> Yes, I like that better... > > > > I'd like it better if it came with a default builder implementation > > that would create regular functions, so that > > > > def name(): > > > > > > was equivalent to > > > > def name from (): > > > > Yeah, I was thinking that having builder equivalents for functions and > generators would be an interesting way to go. Builder objects may need > to be passed some extra parameters to make that feasible, though > (specifically, a reference to globals(), as well as the closure > fulfilment details). > > One interesting side effect of that is the ability to have a > never-yields generator without having to insert a dummy never-executed > yield statement anywhere. > > > But I don't see a reasonable way to do that. > > > > Also I think it's confusing to have both > > > > @some_magic > > def name(): ... > > > > and > > > > def name from some_magic( > > > with different semantics. > > Don't forget: > > @more_magic > def name from some_magic(params): > code > > I'm *far* from convinced this is a good idea, but it at least meets > the hurdle of making something new possible. > > If exec() was enhanced to allow correct execution of closures and code > blocks expecting arguments, you could experiment with this using a > decorator and extracting the various pieces from a function object > after it had already been created. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > http://mail.python.org/mailman/listinfo/python-ideas > -------------- next part -------------- An HTML attachment was scrubbed...
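[Editor's sketch: Nick's closing suggestion, experimenting with a decorator that hands a builder the pieces of an already-created function, can be tried today. The builder protocol below (name plus the function standing in for the code block) is an assumption for illustration, not the proposed semantics of `def name from builder(...)`:]

```python
def def_from(builder, *args, **kwargs):
    # Approximates "def name from builder(...)": the builder receives
    # the would-be name plus the function object standing in for the
    # compiled code block.
    def decorate(fn):
        return builder(fn.__name__, fn, *args, **kwargs)
    return decorate

# A toy builder: records the bound name and wraps the code block.
def shouting(name, fn):
    def wrapper(*a, **kw):
        return '{}: {}'.format(name.upper(), fn(*a, **kw))
    return wrapper

@def_from(shouting)
def greet(who):
    return 'hello ' + who

assert greet('world') == 'GREET: hello world'
```

Unlike the proposed syntax, the decorator form still compiles a real function first; the builder only gets to rework the result afterwards.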
URL: From orsenthil at gmail.com Fri Mar 18 00:19:05 2011 From: orsenthil at gmail.com (Senthil Kumaran) Date: Fri, 18 Mar 2011 07:19:05 +0800 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. In-Reply-To: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> Message-ID: <20110317231905.GF3778@kevin> On Wed, Mar 16, 2011 at 05:35:41PM -0400, Doug Hellmann wrote: > > As I told Doug during Pycon, I think it would be a good idea to > > link his PyMOTW pages to our modules documentation in > > docs.python.org so people have more examples etc. > PyMOTW is a really helpful resource. > I am willing to do the work to add the links, if there is a consensus that it's acceptable. How about adding some examples rather than linking to the blog on each and every module page? I think this may serve to present the content in one place. > I've copied Georg, since he (more or less) owns the documentation, > and I don't know if he follows the ideas list. He does. -- Senthil From tjreedy at udel.edu Fri Mar 18 00:51:54 2011 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 17 Mar 2011 19:51:54 -0400 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. In-Reply-To: <20110317231905.GF3778@kevin> References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> Message-ID: On 3/17/2011 7:19 PM, Senthil Kumaran wrote: > On Wed, Mar 16, 2011 at 05:35:41PM -0400, Doug Hellmann wrote: > >>> As I told Doug during Pycon, I think it would be a good idea to >>> link his PyMOTW pages to our modules documentation in >>> docs.python.org so people have more examples etc. Various people have written various docs showing Python by example. I do not think any one should be singled out in the docs. On the other hand, the wiki could have a PythonByExample page (or pages) with links to various resources.
If someone were ambitious, there could even be a page for each builtin class and each stdlib module, each with multiple links and examples. -- Terry Jan Reedy From steve at pearwood.info Fri Mar 18 02:47:50 2011 From: steve at pearwood.info (Steven D'Aprano) Date: Fri, 18 Mar 2011 12:47:50 +1100 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. In-Reply-To: References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> Message-ID: <4D82B9C6.9030709@pearwood.info> Terry Reedy wrote: > On 3/17/2011 7:19 PM, Senthil Kumaran wrote: >> On Wed, Mar 16, 2011 at 05:35:41PM -0400, Doug Hellmann wrote: >> >>>> As I told Doug during Pycon, I think it would be a good idea to >>>> link his PyMOTW pages to our modules documentation in >>>> docs.python.org so people have more examples etc. > > Various people have written various docs showing Python by example. I do > not think any one should be singled out in the docs. On the other hand, > the wiki could have a PythonByExample page (or pages) with links to > various resources. What he said. With all respect to Doug, do we really want to bless his website more than any of the other Python blogs, tutorials, etc. out on the Internet? I wouldn't mind having a prominent "External resources" page in the Python docs, if it is actively maintained and doesn't turn into a bunch of dead links in 12 months time. But I have grave doubts about linking to an external site all through the module documentation, no matter how useful it is. Who controls the external content? -- Steven From doug.hellmann at gmail.com Fri Mar 18 03:17:45 2011 From: doug.hellmann at gmail.com (Doug Hellmann) Date: Thu, 17 Mar 2011 22:17:45 -0400 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. 
In-Reply-To: References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> Message-ID: <2F2574A3-78BC-4B60-AF23-FD7601649961@gmail.com> On Mar 17, 2011, at 7:51 PM, Terry Reedy wrote: > On 3/17/2011 7:19 PM, Senthil Kumaran wrote: >> On Wed, Mar 16, 2011 at 05:35:41PM -0400, Doug Hellmann wrote: >> >>>> As I told Doug during Pycon, I think it would be a good idea to >>>> link his PyMOTW pages to our modules documentation in >>>> docs.python.org so people have more examples etc. > > Various people have written various docs showing Python by example. I do not think any one should be singled out in the docs. I can certainly appreciate that position. > On the other hand, the wiki could have a PythonByExample page (or pages) with links to various resources. > > If someone were ambitious, there could even be a page for each builtin class and each stdlib module, each with multiple links and examples. > > -- > Terry Jan Reedy > > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > http://mail.python.org/mailman/listinfo/python-ideas From guido at python.org Fri Mar 18 05:18:10 2011 From: guido at python.org (Guido van Rossum) Date: Thu, 17 Mar 2011 21:18:10 -0700 Subject: [Python-ideas] A user story concerning things knowing their own names In-Reply-To: <4D8082C1.30400@canterbury.ac.nz> References: <4D8082C1.30400@canterbury.ac.nz> Message-ID: On Wed, Mar 16, 2011 at 2:28 AM, Greg Ewing wrote: > I just experienced an obscure bug resulting from copying > and pasting an overridable_property and forgetting to > change the passed-in name: > > content_size = overridable_property('size', >     "Size of the content area.") > > which should have been > > content_size = overridable_property('content_size', >     "Size of the content area.") > > This was quite difficult to track down, because the result > was to effectively make it an alias of *another* property > I have called 'size'.
They happen to be near enough to the same > thing that the error went unnoticed for quite some time. > > If I could write my overridable_property declarations > without having to repeat the name, this kind of thing would > not be able to happen. I appreciate the user story. I also appreciate what Ian said: that this is often solved using a metaclass (in fact it is now a very common pattern) but that everybody does it somewhat differently. And I appreciate that Greg's use case is not solved by a metaclass (even if we turned the pattern into a metaclass or class decorator in the stdlib, which might be a good idea regardless). At the same time, it seems that there aren't a lot of specific examples besides namedtuple (which seems to cause lots of emotions and is I think best left alone) and Greg's overridable_property. So, unless we can come up with a really nice way (either syntactical or perhaps through a magic builtin) to give functions like overridable_property() access to the LHS name, and find more use cases, I don't see this happening. I really don't like "assignment decorators" (which aren't the same thing at all as class or function decorators, no matter how much I squint) nor most other solutions (e.g. ":=" -- too subtle, and might well mean something else). But I'm not precluding that someone will come up with a better solution. In the mean time, as long as it's just one use case I still like spelling it using a function decorator: @overridable_property def content_size(): "Size of the content area." The overridable_property can then access the __name__ and __doc__ attributes of the function passed into it, assert that it has no arguments using the inspect module, and return an appropriate instance of the OverridableProperty class. Voilà, decorated assignment. :-) PS. Greg: is your current overridable_property part of some open source code that you've published yet? Searches for it mostly seem to turn up recent discussions here...
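[Editor's sketch: Guido's decorator spelling made concrete. The OverridableProperty semantics here are an assumption for illustration (it delegates to an overridable _get_<name> method on the instance; Greg's real class may well differ):]

```python
import inspect

class OverridableProperty:
    # Assumed semantics: the property delegates to an overridable
    # _get_<name> method on the instance.
    def __init__(self, name, doc):
        self.name = name
        self.__doc__ = doc
    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return getattr(obj, '_get_' + self.name)()

def overridable_property(fn):
    # Take the name and docstring from the decorated function, and
    # assert it declares no arguments, as Guido describes.
    assert not inspect.signature(fn).parameters
    return OverridableProperty(fn.__name__, fn.__doc__)

class Content:
    @overridable_property
    def content_size():
        "Size of the content area."
    def _get_content_size(self):
        return (640, 480)

assert Content().content_size == (640, 480)
assert Content.content_size.__doc__ == "Size of the content area."
```

The attribute name is written exactly once, so the copy-and-paste bug Greg describes cannot occur.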
-- --Guido van Rossum (python.org/~guido) From ziade.tarek at gmail.com Fri Mar 18 06:50:51 2011 From: ziade.tarek at gmail.com (=?ISO-8859-1?Q?Tarek_Ziad=E9?=) Date: Thu, 17 Mar 2011 22:50:51 -0700 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. In-Reply-To: <4D82B9C6.9030709@pearwood.info> References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> <4D82B9C6.9030709@pearwood.info> Message-ID: I don't know about any other place where there's such an exhaustive documentation of the stdlib. Every module we have has more examples in Doug's work than in the stdlib doc itself or elsewhere. I think this doc is the best one we have and not pointing to it is too bad imho. On 17 March 2011 at 18:48, "Steven D'Aprano" wrote: > Terry Reedy wrote: >> On 3/17/2011 7:19 PM, Senthil Kumaran wrote: >>> On Wed, Mar 16, 2011 at 05:35:41PM -0400, Doug Hellmann wrote: >>> >>>>> As I told Doug during Pycon, I think it would be a good idea to >>>>> link his PyMOTW pages to our modules documentation in >>>>> docs.python.org so people have more examples etc. >> >> Various people have written various docs showing Python by example. I do >> not think any one should be singled out in the docs. On the other hand, >> the wiki could have a PythonByExample page (or pages) with links to >> various resources. > > What he said. > > With all respect to Doug, do we really want to bless his website more > than any of the other Python blogs, tutorials, etc. out on the Internet? > > I wouldn't mind having a prominent "External resources" page in the > Python docs, if it is actively maintained and doesn't turn into a bunch > of dead links in 12 months time. But I have grave doubts about linking > to an external site all through the module documentation, no matter how > useful it is. Who controls the external content?
> > > -- > Steven > > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > http://mail.python.org/mailman/listinfo/python-ideas -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjreedy at udel.edu Fri Mar 18 09:06:10 2011 From: tjreedy at udel.edu (Terry Reedy) Date: Fri, 18 Mar 2011 04:06:10 -0400 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. In-Reply-To: References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> <4D82B9C6.9030709@pearwood.info> Message-ID: On 3/18/2011 1:50 AM, Tarek Ziadé wrote: > I don't know about any other place where there's such an exhaustive > documentation of the stdlib. It may well be the broadest collection on the net. Indeed, it is so broad that he is having it published as a commercial book (Amazon is taking preorders). But that does not make each exposition the best there is for each and every module. I have elsewhere seen some pretty in-depth articles on particular modules. In any case, as I believe Doug acknowledged, it would be inappropriate to promote one particular book in the manuals. What this discussion has so far ignored is that there is no such thing as 'the stdlib'. There are multiple Python versions and releases, and we have this thing called Python 3, which is a bit but significantly different from Python 2. Though the web pages do not say much that I found, the examples are for (mostly unspecified) Python 2. The book promo blurb specifically says 2.7, so I presume he tested and updated as necessary. (Probably not too much was needed since 2.x code is mostly forward compatible up to 2.7). I have no idea if he added new material for new features added late in Py2.
For instance, an up-to-date discussion of difflib should include an example showing the need for the SequenceMatcher autojunk parameter added in 2.7.1 to fix the bug independently discovered and reported by multiple people. In any case, Python 3.x manuals should have examples that run with 3.x and not reference Python 2 code. > Every module we have has more examples in Doug's work than in the > stdlib doc itself or elsewhere. There are a lot of modules to check;-). > I think this doc is the best one we have and not pointing to it is too > bad imho. Do it on the wiki, as I suggested. But do specify that it is Python 2 code. > On 17 March 2011 at 18:48, "Steven D'Aprano" > > wrote: > > Terry Reedy wrote: > >> On 3/17/2011 7:19 PM, Senthil Kumaran wrote: > >>> On Wed, Mar 16, 2011 at 05:35:41PM -0400, Doug Hellmann wrote: > >>> > >>>>> As I told Doug during Pycon, I think it would be a good idea to > >>>>> link his PyMOTW pages to our modules documentation in > >>>>> docs.python.org so people have more > examples etc. > >> > >> Various people have written various docs showing Python by example. > I do > >> not think any one should be singled out in the docs. On the other hand, > >> the wiki could have a PythonByExample page (or pages) with links to > >> various resources. > > > > What he said. > > > > With all respect to Doug, do we really want to bless his website more > > than any of the other Python blogs, tutorials, etc. out on the Internet? > > > > I wouldn't mind having a prominent "External resources" page in the > > Python docs, if it is actively maintained and doesn't turn into a bunch > > of dead links in 12 months time. Or obsolete links to code that is not updated, as is most often the case. One of the reasons for doc patches is to keep up with code changes. > > But I have grave doubts about linking > > to an external site all through the module documentation, no matter how > > useful it is. Who controls the external content? The external author.
-- Terry Jan Reedy From orsenthil at gmail.com Fri Mar 18 09:20:14 2011 From: orsenthil at gmail.com (Senthil Kumaran) Date: Fri, 18 Mar 2011 16:20:14 +0800 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. In-Reply-To: References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> <4D82B9C6.9030709@pearwood.info> Message-ID: <20110318082014.GA2827@kevin> On Thu, Mar 17, 2011 at 10:50:51PM -0700, Tarek Ziadé wrote: > I think this doc is the best one we have and not pointing to it is too bad > imho. Why not adopt some of the work (examples) and make it part of the docs instead of just pointing to the link? I know it involves work, but it has benefits too. I also like Terry Reedy's idea that the wiki might be a good place, and make the examples versioned. BTW, I found effbot's examples helpful too. -- Senthil From doug.hellmann at gmail.com Fri Mar 18 13:29:35 2011 From: doug.hellmann at gmail.com (Doug Hellmann) Date: Fri, 18 Mar 2011 08:29:35 -0400 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. In-Reply-To: References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> <4D82B9C6.9030709@pearwood.info> Message-ID: On Mar 18, 2011, at 4:06 AM, Terry Reedy wrote: > On 3/18/2011 1:50 AM, Tarek Ziadé wrote: >> I don't know about any other place where there's such an exhaustive >> documentation of the stdlib. > > It may well be the broadest collection on the net. Indeed, it is so broad that he is having it published as a commercial book (Amazon is taking preorders). But that does not make each exposition the best there is for each and every module. I have elsewhere seen some pretty in-depth articles on particular modules. In any case, as I believe Doug acknowledged, it would be inappropriate to promote one particular book in the manuals. > > What this discussion has so far ignored is that there is no such thing as 'the stdlib'.
There are multiple Python versions and releases, and we have this thing called Python 3, which is a bit but significantly different from Python 2. Though the web pages do not say much that I found, the examples are for (mostly unspecified) Python 2. The book promo blurb specifically says 2.7, so I presume he tested and updated as necessary. (Probably not too much was needed since 2.x code is mostly forward compatible up to 2.7). It isn't mentioned on every page, but the about page for the project does talk about the version of code and modules supported (2.7). I will make that information more explicit on each page when I start porting the examples to Python 3. > I have no idea if he added new material for new features added late in Py2. For instance, an up-to-date discussion of difflib should include an example showing the need for the SequenceMatcher autojunk parameter added in 2.7.1 to fix the bug independently discovered and reported by multiple people. Thanks, Terry, I'll make a note of that. I did try to refresh the content over the last year, but I'm sure I missed some subtle pieces or changes that went in after I finished the refresh. I think the difflib article is one of the first I wrote, so that would make it somewhere between 3 and 4 years old. A lot has changed in that time! > In any case, Python 3.x manuals should have examples that run with 3.x and not reference Python 2 code. I understood Tarek's proposal to refer to the 2.7 docs only. > >> Every module we have has more examples in Doug's work than in the >> stdlib doc itself or elsewhere. > > There are a lot of modules to check;-). > >> I think this doc is the best one we have and not pointing to it is too >> bad imho. > > Do it on the wiki, as I suggested. But do specify that it is Python 2 code. I think there is sufficient well-reasoned opposition to the idea that we should drop it. I appreciate Tarek's encouragement, but I also see the other perspective and don't want any ill-will.
Google being what it is, people don't seem to have a hard time finding the examples where they are, so I am content to leave well enough alone. Thanks, Doug From jimjjewett at gmail.com Fri Mar 18 13:59:54 2011 From: jimjjewett at gmail.com (Jim Jewett) Date: Fri, 18 Mar 2011 08:59:54 -0400 Subject: [Python-ideas] mixins as decorators vs inheritance [was: Automatic comparisons by default] Message-ID: On Thu, Mar 17, 2011 at 9:48 AM, Mike Graham wrote: > On Wed, Mar 16, 2011 at 5:48 PM, Masklinn wrote: >> (also, since I apparently completely missed this, >> what was the rationale of making it a class >> decorator rather than, say, a mixin?) > Perhaps the better question is "Why do we ever > do mixins through inheritance?" That is a good question, and I was tempted to switch, until I realized that there are some good answers... (a) So we can do an isinstance check (b) So we can more easily override parts of the mixin -jJ From mikegraham at gmail.com Fri Mar 18 14:58:38 2011 From: mikegraham at gmail.com (Mike Graham) Date: Fri, 18 Mar 2011 09:58:38 -0400 Subject: [Python-ideas] mixins as decorators vs inheritance [was: Automatic comparisons by default] In-Reply-To: References: Message-ID: On Fri, Mar 18, 2011 at 8:59 AM, Jim Jewett wrote: > On Thu, Mar 17, 2011 at 9:48 AM, Mike Graham wrote: >> On Wed, Mar 16, 2011 at 5:48 PM, Masklinn wrote: >>> (also, since I apparently completely missed this, >>> what was the rationale of making it a class >>> decorator rather than, say, a mixin?) > >> Perhaps the better question is "Why do we ever >> do mixins through inheritance?" > > That is a good question, and I was tempted to switch, > until I realized that there are some good answers... > > (a) So we can do an isinstance check > (b) So we can more easily override parts of the mixin > > -jJ (a) is a misfeature. isinstance checks are bad: they make your code less flexible -- in Python the actual type of an object isn't something we treat as semantic data.
This is *especially* the case with mixins, which are just a convenient thing for code reuse, not some meaningful type. To take the initial example, if functools.total_ordering was implemented with the inheritance style, checking isinstance(foo, TotallyOrdered) would make zero sense, as supporting-all-ordering-operations isn't a meaningful abstract category. It would also exclude all those classes that already exist and implement all six operators.* (b) is more compelling. We would have to write code to force only mixing in the methods we want manually and allowing overriding (probably automatically) if we wanted to use decorator-based-mixins. We could write this code just once as a mixin library (it's actually quite trivial). I actually see this as better for overriding than inheritance-based-mixins in some ways. For example, if I wanted to write my own foo overriding SomeMixin.foo, I almost certainly wouldn't call SomeMixin.foo: I don't want it. But that's totally against the rules! I show for myself that I'm not really doing inheritance, and I might be breaking super for someone else down the line. Mike *Of course, here I am ignoring the possibility that TotallyOrdered** is implemented as an abstract base class and all the stdlib is registered with it appropriately. This is because I don't think that would be the right thing to do -- TotallyOrdered isn't a meaningful abstract type. More fundamentally, I also worry that Python ABCs' pairing of interface and default implementation breaks down in some common cases. **My only major reservation in opposing a TotallyOrdered class is that I absolutely love the name.
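[Editor's sketch: the "quite trivial" decorator-based mixin library Mike alludes to might look like the following (all names hypothetical). The decorator copies methods in only where the class has not already defined its own, so explicit overrides win without any super() gymnastics:]

```python
def mixin(*sources):
    """Class decorator: copy attributes from mixin sources, but never
    clobber anything the class (or its bases) already provides."""
    def decorate(cls):
        for source in sources:
            for name, value in vars(source).items():
                if name.startswith('__'):
                    continue  # skip dunder machinery (__dict__, __doc__, ...)
                if not hasattr(cls, name):
                    setattr(cls, name, value)
        return cls
    return decorate

class GreetMixin:
    def greet(self):
        return 'hello, ' + self.name
    def shout(self):
        return self.greet().upper()

@mixin(GreetMixin)
class Person:
    def __init__(self, name):
        self.name = name
    def greet(self):            # the explicit method wins over the mixin's
        return 'hi, ' + self.name

p = Person('Bob')
assert p.greet() == 'hi, Bob'
assert p.shout() == 'HI, BOB'   # copied-in shout calls the override
```

Note that, unlike inheritance, `Person` is not an instance-of-GreetMixin afterwards, which is exactly point (a) in the trade-off being discussed.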
From guido at python.org Fri Mar 18 15:49:09 2011 From: guido at python.org (Guido van Rossum) Date: Fri, 18 Mar 2011 07:49:09 -0700 Subject: [Python-ideas] A user story concerning things knowing their own names In-Reply-To: References: <4D8082C1.30400@canterbury.ac.nz> Message-ID: [+python-ideas] On Thu, Mar 17, 2011 at 10:12 PM, Ian Bicking wrote: > Did you have any opinion on __addtoclass__? It seemed to me to address most > of the use cases (even Greg's as far as I understood it). [Reminder: Ian's proposal is to call attr.__addtoclass__(cls, name) for all attributes of the class that have the method, at class construction time.] You know, somehow I missed that when skimming. :-( It nicely encapsulates the main reason for having a metaclass in many of these cases. There's another pattern where all class attributes that have a certain property are also collected in a per-class datastructure, and at that point we're back to the custom metaclass; but standardizing on __addtoclass__ with the proposed signature would be a first step. We could either have a standard metaclass that does this, or we could just make it part of 'type' (the default metaclass) and be done with it. -- --Guido van Rossum (python.org/~guido) From jimjjewett at gmail.com Fri Mar 18 16:25:57 2011 From: jimjjewett at gmail.com (Jim Jewett) Date: Fri, 18 Mar 2011 11:25:57 -0400 Subject: [Python-ideas] mixins as decorators vs inheritance [was: Automatic comparisons by default] In-Reply-To: References: Message-ID: On Fri, Mar 18, 2011 at 9:58 AM, Mike Graham wrote: > On Fri, Mar 18, 2011 at 8:59 AM, Jim Jewett wrote: >> On Thu, Mar 17, 2011 at 9:48 AM, Mike Graham wrote: >>> ... "Why do we ever do mixins through inheritance?" >> (a) So we can do an isinstance check > (a) is a misfeature. isinstance checks are bad: ... > supporting-all-ordering-operations isn't a meaningful > abstract category.
It would also exclude all those > classes that already exist and implement all six > operators.* Are there some limits to __instancecheck__ and __subclasscheck__ that I just haven't run into yet? Why not write them to say "yes" if all six operations are available? -jJ From jackdied at gmail.com Fri Mar 18 16:39:45 2011 From: jackdied at gmail.com (Jack Diederich) Date: Fri, 18 Mar 2011 11:39:45 -0400 Subject: [Python-ideas] mixins as decorators vs inheritance [was: Automatic comparisons by default] In-Reply-To: References: Message-ID: On Fri, Mar 18, 2011 at 8:59 AM, Jim Jewett wrote: > On Thu, Mar 17, 2011 at 9:48 AM, Mike Graham wrote: >> On Wed, Mar 16, 2011 at 5:48 PM, Masklinn wrote: >>> (also, since I apparently completely missed this, >>> what was the rationale of making it a class >>> decorator rather than, say, a mixin?) > >> Perhaps the better question is "Why do we ever >> do mixins through inheritance?" > > That is a good question, and I was tempted to switch, > until I realized that there are some good answers... > > (a) So we can do an isinstance check > (b) So we can more easily override parts of the mixin total_ordering won't override methods you've explicitly written. And IMO mixins were always a work-around and not a feature. Class decorators are a cleaner way of doing a similar thing. -Jack From guido at python.org Fri Mar 18 17:24:02 2011 From: guido at python.org (Guido van Rossum) Date: Fri, 18 Mar 2011 09:24:02 -0700 Subject: [Python-ideas] mixins as decorators vs inheritance [was: Automatic comparisons by default] In-Reply-To: References: Message-ID: On Fri, Mar 18, 2011 at 8:39 AM, Jack Diederich wrote: > On Fri, Mar 18, 2011 at 8:59 AM, Jim Jewett wrote: >> On Thu, Mar 17, 2011 at 9:48 AM, Mike Graham wrote: >>> On Wed, Mar 16, 2011 at 5:48 PM, Masklinn wrote: >>>> (also, since I apparently completely missed this, >>>> what was the rationale of making it a class >>>> decorator rather than, say, a mixin?)
>> >>> Perhaps the better question is "Why do we ever >>> do mixins through inheritance?" >> >> That is a good question, and I was tempted to switch, >> until I realized that there are some good answers... >> >> (a) So we can do an isinstance check >> (b) So we can more easily override parts of the mixin > > total_ordering won't override methods you've explicitly written. And > IMO mixins were always a work-around and not a feature. Class > decorators are a cleaner way of doing a similar thing. There are some cases where class decorators are better than mixins, but I don't think we should start rejecting the mixin class pattern outright. The two patterns work differently and have different strengths. Inheritance may be overrated, but it isn't dead! (Also, ABCs give new life to certain isinstance checks. isinstance isn't dead yet either. :-) -- --Guido van Rossum (python.org/~guido) From masklinn at masklinn.net Fri Mar 18 17:29:59 2011 From: masklinn at masklinn.net (Masklinn) Date: Fri, 18 Mar 2011 17:29:59 +0100 Subject: [Python-ideas] mixins as decorators vs inheritance [was: Automatic comparisons by default] In-Reply-To: References: Message-ID: <100C9475-88ED-4419-8F89-64EF9D35C8C4@masklinn.net> On 2011-03-18, at 13:59 , Jim Jewett wrote: > On Thu, Mar 17, 2011 at 9:48 AM, Mike Graham wrote: >> On Wed, Mar 16, 2011 at 5:48 PM, Masklinn wrote: >>> (also, since I apparently completely missed this, >>> what was the rationale of making it a class >>> decorator rather than, say, a mixin?) > >> Perhaps the better question is "Why do we ever >> do mixins through inheritance?" > > That is a good question, and I was tempted to switch, > until I realized that there are some good answers...
> > (a) So we can do an isinstance check > (b) So we can more easily override parts of the mixin (c) Class decorators are harder to get into and implement (an inherited mixin is mostly an inheritable class, a class decorator skirts much closer to metatypes play) (d) Class decorators are a recent feature and manually decorating a class was never something widely used (as opposed to decorating functions) (e) plain old inertia From masklinn at masklinn.net Fri Mar 18 17:32:28 2011 From: masklinn at masklinn.net (Masklinn) Date: Fri, 18 Mar 2011 17:32:28 +0100 Subject: [Python-ideas] mixins as decorators vs inheritance [was: Automatic comparisons by default] In-Reply-To: References: Message-ID: <81D84B45-38E0-4DBD-97A5-DDA78BFC1EE3@masklinn.net> On 2011-03-18, at 14:58 , Mike Graham wrote: > > *Of course, here I am ignoring the possibility that TotallyOrdered** > is implemented as an abstract base class and all the stdlib is > registered with it appropriately. This is because I don't think that > would be the right thing to do -- TotallyOrdered isn't a meaningful > abstract type. Why wouldn't TotallyOrdered be a meaningful abstract type? Especially when compared to Sized or Hashable? From robert.kern at gmail.com Fri Mar 18 17:44:18 2011 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 18 Mar 2011 11:44:18 -0500 Subject: [Python-ideas] A user story concerning things knowing their own names In-Reply-To: References: <4D8082C1.30400@canterbury.ac.nz> Message-ID: On 3/17/11 11:18 PM, Guido van Rossum wrote: > At the same time, it seems that there aren't a lot of specific > examples besides namedtuple (which seems to cause lots of emotions and > is I think best left alone) and Greg's overridable_property. So, > unless we can come up with a really nice way (either syntactical or > perhaps through a magic builtin) to give functions like > overridable_property() access to the LHS name, and find more use > cases, I don't see this happening.
A sizable portion of Traits needs the name information.

http://pypi.python.org/pypi/Traits

As I explained elsewhere, we currently implement this by overriding __getattribute__ to implement a descriptor-like protocol that passes along the name to the trait object rather than storing it on the trait object itself, but we could re-engineer Traits to make do with Ian's __addtoclass__ proposal. Being able to use plain descriptors would allow us to rewrite Traits to avoid our current C-implemented base class, which would let us interoperate with other frameworks better. I can list all of the specific features of Traits that make use of this information if you like.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From raymond.hettinger at gmail.com Fri Mar 18 18:04:09 2011
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Fri, 18 Mar 2011 10:04:09 -0700
Subject: [Python-ideas] mixins as decorators vs inheritance [was: Automatic comparisons by default]
In-Reply-To:
References:
Message-ID:

On Mar 18, 2011, at 6:58 AM, Mike Graham wrote:
>
> (a) is a misfeature. isinstance checks are bad: they make your code
> less flexible -- in Python the actual type of an object isn't something
> we treat as semantic data. This is *especially* the case with mixins,
> which are just a convenient thing for code reuse, not some meaningful
> type.

I believe that is outdated advice. Since Guido introduced abstract base classes, the trend (and grain) of Python is to use isinstance() to check for a given interface (i.e. distinguishing a Sequence from a Mapping) and to use ABCs as mixins (i.e. MutableMapping has replaced UserDict.DictMixin).

Raymond

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From masklinn at masklinn.net Fri Mar 18 18:21:39 2011
From: masklinn at masklinn.net (Masklinn)
Date: Fri, 18 Mar 2011 18:21:39 +0100
Subject: [Python-ideas] mixins as decorators vs inheritance [was: Automatic comparisons by default]
In-Reply-To:
References:
Message-ID: <86B384D3-970F-4C1C-BE51-B7A2D4F8D54D@masklinn.net>

On 18 mars 2011, at 18:04, Raymond Hettinger wrote:
> On Mar 18, 2011, at 6:58 AM, Mike Graham wrote:
>>
>> (a) is a misfeature. isinstance checks are bad: they make your code
>> less flexible -- in Python the actual type of an object isn't something
>> we treat as semantic data. This is *especially* the case with mixins,
>> which are just a convenient thing for code reuse, not some meaningful
>> type.
>
> I believe that is outdated advice. Since Guido introduced
> abstract base classes, the trend (and grain) of Python is to
> use isinstance() to check for a given interface (i.e. distinguishing
> a Sequence from a Mapping)

I'm not sure "interface" is the best word here as it probably has Java-inherited links to explicit, static specification, whereas a (good) abc instance check would, I believe, check the shape of the testee instead.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dsdale24 at gmail.com Fri Mar 18 18:29:52 2011
From: dsdale24 at gmail.com (Darren Dale)
Date: Fri, 18 Mar 2011 13:29:52 -0400
Subject: [Python-ideas] Would it be possible to define abstract read/write properties with decorators?
In-Reply-To:
References:
Message-ID:

On Sun, Mar 13, 2011 at 12:49 PM, Darren Dale wrote:
> On Sun, Mar 13, 2011 at 11:18 AM, Darren Dale wrote:
> [...]
>> It seems like it should be possible for Python to support the
>> decorator syntax for declaring abstract read/write properties. The
>> most elegant approach might be the following, if it could be
>> supported:
>>
>> class Foo(metaclass=ABCMeta):
>>     # Note the use of @property rather than @abstractproperty:
>>     @property
>>     @abstractmethod
>>     def bar(self):
>>         return 1
>>     @bar.setter
>>     @abstractmethod
>>     def bar(self, val):
>>         pass
>>
>> I thought that would work with Python-3.2, but Foo is instantiable
>> even though there are abstractmethods. If Python's property could be
>> tweaked to recognize those abstract methods and raise the usual
>> TypeError, then we could subclass the abstract base class Foo in the
>> usual way:
>
> Here is a working example!:

The modifications to "property" to better support abstract base classes using the decorator syntax and @abstractmethod (rather than @abstractproperty) are even simpler than I originally thought:

class Property(property):

    def __init__(self, *args, **kwargs):
        super(Property, self).__init__(*args, **kwargs)
        for f in (self.fget, self.fset, self.fdel):
            if getattr(f, '__isabstractmethod__', False):
                self.__isabstractmethod__ = True
                break

>
> class C(metaclass=abc.ABCMeta):
>     @Property
>     @abc.abstractmethod
>     def x(self):
>         return 1
>     @x.setter
>     @abc.abstractmethod
>     def x(self, val):
>         pass
>
> try:
>     c=C()
> except TypeError as e:
>     print(e)
>
> class D(C):
>     @C.x.getter
>     def x(self):
>         return 2
>
> try:
>     d=D()
> except TypeError as e:
>     print(e)
>
> class E(D):
>     @D.x.setter
>     def x(self, val):
>         pass
>
> print(E())
>

running this example yields:

Can't instantiate abstract class C with abstract methods x
Can't instantiate abstract class D with abstract methods x
<__main__.E object at 0x212ee10>

Wouldn't it be possible to include this in python-3.3?
Darren

From guido at python.org Fri Mar 18 19:33:38 2011
From: guido at python.org (Guido van Rossum)
Date: Fri, 18 Mar 2011 11:33:38 -0700
Subject: [Python-ideas] mixins as decorators vs inheritance [was: Automatic comparisons by default]
In-Reply-To: <86B384D3-970F-4C1C-BE51-B7A2D4F8D54D@masklinn.net>
References: <86B384D3-970F-4C1C-BE51-B7A2D4F8D54D@masklinn.net>
Message-ID:

On Fri, Mar 18, 2011 at 10:21 AM, Masklinn wrote:
> On 18 mars 2011, at 18:04, Raymond Hettinger
> wrote:
> On Mar 18, 2011, at 6:58 AM, Mike Graham wrote:
> (a) is a misfeature. isinstance checks are bad: they make your code
> less flexible -- in Python the actual type of an object isn't something
> we treat as semantic data. This is *especially* the case with mixins,
> which are just a convenient thing for code reuse, not some meaningful
> type.
> I believe that is outdated advice. Since Guido introduced
> abstract base classes, the trend (and grain) of Python is to
> use isinstance() to check for a given interface (i.e. distinguishing
> a Sequence from a Mapping)
> I'm not sure "interface" is the best word here as it probably has
> Java-inherited links to explicit, static specification, whereas a (good) abc
> instance check would, I believe, check the shape of the testee instead.

That's debatable, and not the way the ABC debate for Python ended up. And Java doesn't have the monopoly on the term interface.

--
--Guido van Rossum (python.org/~guido)

From guido at python.org Fri Mar 18 19:36:01 2011
From: guido at python.org (Guido van Rossum)
Date: Fri, 18 Mar 2011 11:36:01 -0700
Subject: [Python-ideas] Would it be possible to define abstract read/write properties with decorators?
In-Reply-To:
References:
Message-ID:

On Fri, Mar 18, 2011 at 10:29 AM, Darren Dale wrote:
> On Sun, Mar 13, 2011 at 12:49 PM, Darren Dale wrote:
>> On Sun, Mar 13, 2011 at 11:18 AM, Darren Dale wrote:
>> [...]
>>> It seems like it should be possible for Python to support the
>>> decorator syntax for declaring abstract read/write properties. The
>>> most elegant approach might be the following, if it could be
>>> supported:
>>>
>>> class Foo(metaclass=ABCMeta):
>>>     # Note the use of @property rather than @abstractproperty:
>>>     @property
>>>     @abstractmethod
>>>     def bar(self):
>>>         return 1
>>>     @bar.setter
>>>     @abstractmethod
>>>     def bar(self, val):
>>>         pass
>>>
>>> I thought that would work with Python-3.2, but Foo is instantiable
>>> even though there are abstractmethods. If Python's property could be
>>> tweaked to recognize those abstract methods and raise the usual
>>> TypeError, then we could subclass the abstract base class Foo in the
>>> usual way:
>>
>> Here is a working example!:
>
> The modifications to "property" to better support abstract base
> classes using the decorator syntax and @abstractmethod (rather than
> @abstractproperty) are even simpler than I originally thought:
>
> class Property(property):
>
>     def __init__(self, *args, **kwargs):
>         super(Property, self).__init__(*args, **kwargs)
>         for f in (self.fget, self.fset, self.fdel):
>             if getattr(f, '__isabstractmethod__', False):
>                 self.__isabstractmethod__ = True
>                 break
>
>> class C(metaclass=abc.ABCMeta):
>>     @Property
>>     @abc.abstractmethod
>>     def x(self):
>>         return 1
>>     @x.setter
>>     @abc.abstractmethod
>>     def x(self, val):
>>         pass
>>
>> try:
>>     c=C()
>> except TypeError as e:
>>     print(e)
>>
>> class D(C):
>>     @C.x.getter
>>     def x(self):
>>         return 2
>>
>> try:
>>     d=D()
>> except TypeError as e:
>>     print(e)
>>
>> class E(D):
>>     @D.x.setter
>>     def x(self, val):
>>         pass
>>
>> print(E())
>
> running this example yields:
>
> Can't instantiate abstract class C with abstract methods x
> Can't instantiate abstract class D with abstract methods x
> <__main__.E object at 0x212ee10>
>
> Wouldn't it be possible to include this in python-3.3?

Sounds good to me.

--
--Guido van Rossum (python.org/~guido)

From mikegraham at gmail.com Fri Mar 18 20:35:48 2011
From: mikegraham at gmail.com (Mike Graham)
Date: Fri, 18 Mar 2011 15:35:48 -0400
Subject: [Python-ideas] mixins as decorators vs inheritance [was: Automatic comparisons by default]
In-Reply-To:
References:
Message-ID:

On Fri, Mar 18, 2011 at 1:04 PM, Raymond Hettinger wrote:
> On Mar 18, 2011, at 6:58 AM, Mike Graham wrote:
>>
>> (a) is a misfeature. isinstance checks are bad: they make your code
>> less flexible -- in Python the actual type of an object isn't something
>> we treat as semantic data. This is *especially* the case with mixins,
>> which are just a convenient thing for code reuse, not some meaningful
>> type.
>
> I believe that is outdated advice. Since Guido introduced
> abstract base classes, the trend (and grain) of Python is to
> use isinstance() to check for a given interface (i.e. distinguishing
> a Sequence from a Mapping) and to use ABCs as mixins
> (i.e. MutableMapping has replaced UserDict.DictMixin).

Having an object and not knowing whether it's a sequence or a mapping isn't a good situation to be in and can virtually always be avoided for the best; having a somewhat saner, more flexible way to check doesn't really change this.

I've yet to really see the added value in ABCs (other than the stdlib ABCs having useful mixin methods). We were assured when ABCs were introduced that duck typing wasn't dead and that you don't have to use them, so that's exactly what I've done.
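The Sequence-vs-Mapping distinction Raymond refers to is easy to illustrate with the collections ABCs (a sketch for illustration, not code from the thread; the module is spelled collections.abc in Python 3.3+, plain collections at the time of this discussion):

```python
from collections.abc import Mapping, Sequence

def pairs(obj):
    # Dispatch on the interface, not on concrete types like dict or
    # list, so user-defined and registered types work too.
    if isinstance(obj, Mapping):
        return sorted(obj.items())
    if isinstance(obj, Sequence):
        return list(enumerate(obj))
    raise TypeError("expected a mapping or a sequence")

print(pairs({'b': 2, 'a': 1}))  # [('a', 1), ('b', 2)]
print(pairs(('x', 'y')))        # [(0, 'x'), (1, 'y')]
```

The same module shows the mixin side of Raymond's point: subclassing MutableMapping and defining just __getitem__, __setitem__, __delitem__, __iter__ and __len__ yields the rest of the dict-like API for free.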
I recognize that a lot of people smarter and more important to Python than I am decided ABCs were worth having, but I've yet to see the utility in doing typechecks myself and have a hard time imagining updating my advice to encourage using isinstance.

From mikegraham at gmail.com Fri Mar 18 20:38:21 2011
From: mikegraham at gmail.com (Mike Graham)
Date: Fri, 18 Mar 2011 15:38:21 -0400
Subject: [Python-ideas] mixins as decorators vs inheritance [was: Automatic comparisons by default]
In-Reply-To: <81D84B45-38E0-4DBD-97A5-DDA78BFC1EE3@masklinn.net>
References: <81D84B45-38E0-4DBD-97A5-DDA78BFC1EE3@masklinn.net>
Message-ID:

On Fri, Mar 18, 2011 at 12:32 PM, Masklinn wrote:
> On 2011-03-18, at 14:58 , Mike Graham wrote:
>>
>> *Of course, here I am ignoring the possibility that TotallyOrdered**
>> is implemented as an abstract base class and all the stdlib is
>> registered with it appropriately. This is because I don't think that
>> would be the right thing to do -- TotallyOrdered isn't a meaningful
>> abstract type.
>
> Why wouldn't TotallyOrdered be a meaningful abstract type? Especially when compared to Sized or Hashable?

You're right.

From ianb at colorstudy.com Fri Mar 18 21:48:25 2011
From: ianb at colorstudy.com (Ian Bicking)
Date: Fri, 18 Mar 2011 15:48:25 -0500
Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc.
In-Reply-To: <4D82B9C6.9030709@pearwood.info>
References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> <4D82B9C6.9030709@pearwood.info>
Message-ID:

On Thu, Mar 17, 2011 at 8:47 PM, Steven D'Aprano wrote:
> Terry Reedy wrote:
>
>> On 3/17/2011 7:19 PM, Senthil Kumaran wrote:
>>
>>> On Wed, Mar 16, 2011 at 05:35:41PM -0400, Doug Hellmann wrote:
>>>
>>>>> As I told Doug during Pycon, I think it would be a good idea to
>>>>> link his PyMOTW pages to our modules documentation in
>>>>> docs.python.org so people have more examples etc.
>>>>> >>>> >> Various people have written various docs showing Python by example. I do >> not think any one should be singled out in the docs. On the other hand, the >> wiki could have a PythonByExample page (or pages) with links to various >> resources. >> > > What he said. > > With all respect to Doug, do we really want to bless his website more than > any of the other Python blogs, tutorials, etc. out on the Internet? > Bah humbug. If we could link stdlib docs to every good quality piece of coverage for that module then that would be great. It's not like someone else has been denied, or that we're giving Doug exclusive linking rights or something. It just happens he has written the most comprehensive and maintained set of docs, and so it would be bureaucratically rather easy to get a bunch more helpful links in the docs that will help people learn Python better. Frankly it doesn't matter if it's "blessed" as that doesn't incur any real benefit. > I wouldn't mind having a prominent "External resources" page in the Python > docs, if it is actively maintained and doesn't turn into a bunch of dead > links in 12 months time. But I have grave doubts about linking to an > external site all through the module documentation, no matter how useful it > is. Who controls the external content? > Adding any content, including links, incurs extra maintenance for that content. Links are a little harder than other pieces, and they should be added only with some consideration of the quality of the content. Again, conveniently, PyMOTW is a big list of quality documents, and AFAICT there is widespread approval of the content. Appropriate linking to some other documents might also be quite helpful; adding PyMOTW makes it more likely that will happen, but worrying about all the links we *aren't* adding doesn't move anything forward at all. 
Some tooling to manage the links would be nice, but doesn't seem like a particularly big barrier -- a standard link checker would find dead links (including existing external links) and there are tools to mirror content so that if it's considered valuable and it really does disappear, we can consider mirroring it (Wikipedia seems to do something roughly like this with web-addressable citations).

Ian

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From greg.ewing at canterbury.ac.nz Fri Mar 18 22:10:25 2011
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Sat, 19 Mar 2011 10:10:25 +1300
Subject: [Python-ideas] A user story concerning things knowing their own names
In-Reply-To:
References: <4D8082C1.30400@canterbury.ac.nz>
Message-ID: <4D83CA41.6080309@canterbury.ac.nz>

Guido van Rossum wrote:
> PS. Greg: is your current overridable_property part of some open
> source code that you've published yet?

Yes, I'm using it in two projects at the moment:

PyGUI: http://www.cosc.canterbury.ac.nz/greg.ewing/python_gui/
Albow: http://www.cosc.canterbury.ac.nz/greg.ewing/python/Albow/

> Searches for it mostly seem to turn up recent discussions here...

You probably won't find it with a direct search because it's more of an internal detail of those libraries rather than a separately-advertised feature.

--
Greg

From guido at python.org Fri Mar 18 23:29:33 2011
From: guido at python.org (Guido van Rossum)
Date: Fri, 18 Mar 2011 15:29:33 -0700
Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc.
In-Reply-To:
References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> <4D82B9C6.9030709@pearwood.info>
Message-ID:

On Fri, Mar 18, 2011 at 1:48 PM, Ian Bicking wrote:
> On Thu, Mar 17, 2011 at 8:47 PM, Steven D'Aprano
> wrote:
>> With all respect to Doug, do we really want to bless his website more than
>> any of the other Python blogs, tutorials, etc. out on the Internet?
>
> Bah humbug.
> If we could link stdlib docs to every good quality piece of
> coverage for that module then that would be great. It's not like someone
> else has been denied, or that we're giving Doug exclusive linking rights or
> something. It just happens he has written the most comprehensive and
> maintained set of docs, and so it would be bureaucratically rather easy to
> get a bunch more helpful links in the docs that will help people learn
> Python better. Frankly it doesn't matter if it's "blessed" as that doesn't
> incur any real benefit.

Good call!

--
--Guido van Rossum (python.org/~guido)

From orsenthil at gmail.com Sat Mar 19 00:24:13 2011
From: orsenthil at gmail.com (Senthil Kumaran)
Date: Sat, 19 Mar 2011 07:24:13 +0800
Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc.
In-Reply-To:
References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> <4D82B9C6.9030709@pearwood.info>
Message-ID: <20110318232413.GA2596@kevin>

On Fri, Mar 18, 2011 at 03:48:25PM -0500, Ian Bicking wrote:
>
> Some tooling to manage the links would be nice, but doesn't seem like a
> particularly big barrier -- a standard link checker would find dead links
> (including existing external links) and there are tools to mirror content so

Sphinx has it already. It does that when you do 'make linkcheck'.
-- Senthil From greg.ewing at canterbury.ac.nz Sat Mar 19 02:37:32 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sat, 19 Mar 2011 14:37:32 +1300 Subject: [Python-ideas] A user story concerning things knowing their own names In-Reply-To: References: <4D8082C1.30400@canterbury.ac.nz> Message-ID: <4D8408DC.4020103@canterbury.ac.nz> Guido van Rossum wrote: > There's another pattern where all class attributes that have a certain > property are also collected in a per-class datastructure, I think __addtoclass__ could cover those as well, if you can arrange for the relevant objects to inherit from a class having an appropriate __addtoclass__ implementation. If that's not convenient, another approach would be to wrap them in something whose __addtoclass__ does the right thing and then unwraps itself. -- Greg From greg.ewing at canterbury.ac.nz Sat Mar 19 02:42:13 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sat, 19 Mar 2011 14:42:13 +1300 Subject: [Python-ideas] A user story concerning things knowing their own names In-Reply-To: References: <4D8082C1.30400@canterbury.ac.nz> Message-ID: <4D8409F5.8030600@canterbury.ac.nz> Guido van Rossum wrote: > We could either have a standard metaclass that does this, or we could > just make it part of 'type' (the default metaclass) and be done with > it. If it's a metaclass, standard or not, we're back to the same difficulties of mixing different metaclasses together. -- Greg From steve at pearwood.info Sat Mar 19 02:48:48 2011 From: steve at pearwood.info (Steven D'Aprano) Date: Sat, 19 Mar 2011 12:48:48 +1100 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. 
In-Reply-To: References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> <4D82B9C6.9030709@pearwood.info> Message-ID: <4D840B80.3040806@pearwood.info> Ian Bicking wrote: > On Thu, Mar 17, 2011 at 8:47 PM, Steven D'Aprano wrote: > >> Terry Reedy wrote: >> >>> On 3/17/2011 7:19 PM, Senthil Kumaran wrote: >>> >>>> On Wed, Mar 16, 2011 at 05:35:41PM -0400, Doug Hellmann wrote: >>>> >>>> As I told Doug during Pycon, I think it would be a good idea to >>>>>> link his PyMOTW pages to our modules documentation in >>>>>> docs.python.org so people have more examples etc. >>>>>> >>> Various people have written various docs showing Python by example. I do >>> not think any one should be singled out in the docs. On the other hand, the >>> wiki could have a PythonByExample page (or pages) with links to various >>> resources. >>> >> What he said. >> >> With all respect to Doug, do we really want to bless his website more than >> any of the other Python blogs, tutorials, etc. out on the Internet? >> > > Bah humbug. If we could link stdlib docs to every good quality piece of > coverage for that module then that would be great. It's not like someone > else has been denied, or that we're giving Doug exclusive linking rights or In that case, I nominate Michael Foord's documentation of urllib2 for linking as well: http://www.voidspace.org.uk/python/articles/urllib2.shtml I am serious, by the way, I think Michael's urllib2 docs are excellent. But are we sure we want to go down this path? It's neither practical nor desirable to fill the Python docs with links to every good quality external source, so in a very real sense, yes, others will be denied. If not now, at some point we're going to say "I'm sorry, your web site is really excellent, but we're not going to link to you." But that's not my main objection. We keep the bar high for inclusion in the standard library, and it shouldn't offend anyone. 
I'm sure people will cope with the disappointment of having their excellent tutorial or blog rejected for inclusion.

I think that the real risk comes from the implications of linking to an external page from the docs. If you think that there are no such implications, then you will probably think that there is no downside to such links. I hope to persuade you that there are, and that they need to be considered before making this decision.

Giving a list of "useful external resources" is very different from linking to Doug's site repeatedly throughout the module docs. The Python docs are not some blog, where external links are posted for fun or for giving credit, nor are they Wikipedia, where the external links are used as authority. The Python docs are the authority. If they link to an external page, it confers some level of authority and officialness to that external page. Sometimes we do so explicitly, e.g. we link to Fredrik Lundh's ElementTree pages:

http://docs.python.org/library/xml.etree.elementtree.html

other times it is implied, e.g. the Decimal docs say "See also" and then link to a pair of carefully selected (semi-)official sources.

But the proposal goes even further: it would link to Doug's site from nearly every page in the modules documentation. By linking to an external site in such an intimate fashion, I believe we would be giving a significant level of official standing to an external site that we don't control. We would be saying not just that the site is a useful site, but that it's such a great site, and a *trusted* site, that we link to it all throughout the official documentation. That says a lot, and we shouldn't be so blasé about saying it.

> something. It just happens he has written the most comprehensive and
Frankly it doesn't matter if it's "blessed" as that doesn't > incur any real benefit. I think you mean "cost". But there is a real cost as well as benefit: the cost comes as risk. I think you have misunderstood my point about who controls the external content. Dead links are the least risk, and the only one that can be managed automatically. We would be linking to pages that aren't controlled by us. We have no real control over whether the pages remain updated, or what content goes into those pages, or whether they get filled with advertising, or whatever. These are real risks -- even if you trust Doug implicitly, what happens if he gets hit by a bus and somebody else takes over his website? Elementtree excepted, when we take on a module or package and distribute it as an official part of the standard library, we expect the author to hand over official control to the PSF (even if practical control remains in the author's hands). Perhaps we should consider something similar for documentation. -- Steven From jimjjewett at gmail.com Sat Mar 19 02:58:21 2011 From: jimjjewett at gmail.com (Jim Jewett) Date: Fri, 18 Mar 2011 21:58:21 -0400 Subject: [Python-ideas] A user story concerning things knowing their own names In-Reply-To: <4D8408DC.4020103@canterbury.ac.nz> References: <4D8082C1.30400@canterbury.ac.nz> <4D8408DC.4020103@canterbury.ac.nz> Message-ID: On Fri, Mar 18, 2011 at 9:37 PM, Greg Ewing wrote: > Guido van Rossum wrote: >> There's another pattern where all class attributes that have a certain >> property are also collected in a per-class datastructure, > I think __addtoclass__ could cover those as well, > if you can arrange for the relevant objects to inherit > from a class having an appropriate __addtoclass__ > implementation. How do you put an attribute (such as __addtoclass__ ) on a name? 
Or are you proposing that the actual pattern be something more like: x=SpecialObj() And that normal initiation be handled either later, or as part of the SpecialObj initiation x=SpecialObj()=5 or x=SpecialObj(); x=5 or x=SpecialObj(value=5) Doing this for every name seems likely to be wasteful. Doing it only for certain initial values seems too magical. Doing it only for certain attributes -- there still needs to be a way to mark them, and I suppose we're back to either a decorator or a special assignment operator. @decorated_implies_an_object x=5 x:=5 What have I missed here? -jJ From ianb at colorstudy.com Sat Mar 19 03:10:00 2011 From: ianb at colorstudy.com (Ian Bicking) Date: Fri, 18 Mar 2011 21:10:00 -0500 Subject: [Python-ideas] A user story concerning things knowing their own names In-Reply-To: References: <4D8082C1.30400@canterbury.ac.nz> <4D8408DC.4020103@canterbury.ac.nz> Message-ID: On Fri, Mar 18, 2011 at 8:58 PM, Jim Jewett wrote: > On Fri, Mar 18, 2011 at 9:37 PM, Greg Ewing > wrote: > > Guido van Rossum wrote: > > >> There's another pattern where all class attributes that have a certain > >> property are also collected in a per-class datastructure, > > > I think __addtoclass__ could cover those as well, > > if you can arrange for the relevant objects to inherit > > from a class having an appropriate __addtoclass__ > > implementation. > > How do you put an attribute (such as __addtoclass__ ) on a name? Or > are you proposing that the actual pattern be something more like: > > x=SpecialObj() > > And that normal initiation be handled either later, or as part of the > SpecialObj initiation > > x=SpecialObj()=5 > or > x=SpecialObj(); x=5 > or > x=SpecialObj(value=5) > What we're describing only applies to class variables; a top-level variable wouldn't be affected. Imagine for instance a column class (ORMish) that wants to know its name: class Column(object): def __init__(self, **kw): ... 
def __addtoclass__(self, name, cls): self.name = name return self Now if you do: class MyTable: username = Column() Then MyTable.username.name == 'username' If you wanted to be able to reuse values, like Greg wants, you could do: class Column(object): def __init__(self, kw): self.kw = kw ... def copy(self): return self.__class__(self.kw) def __addtoclass__(self, name, cls): new_obj = self.copy() new_obj.name = name return new_obj Or you could use several different classes (e.g., BoundColumn), or... well, there's many ways to skin a cat. Like descriptors, only objects that implement this new method would participate. It's not really like a decorator, it's much more like descriptors -- decorators like classmethod just happen to be descriptor factories. Ian -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg.ewing at canterbury.ac.nz Sat Mar 19 03:15:43 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sat, 19 Mar 2011 15:15:43 +1300 Subject: [Python-ideas] A user story concerning things knowing their own names In-Reply-To: References: <4D8082C1.30400@canterbury.ac.nz> <4D8408DC.4020103@canterbury.ac.nz> Message-ID: <4D8411CF.9060209@canterbury.ac.nz> Ian Bicking wrote: > If you wanted to be able to reuse values, like Greg wants, Actually I don't want to reuse values, that was someone else. For my use case it's fine to create a new descriptor for each use. -- Greg From orsenthil at gmail.com Sat Mar 19 03:17:28 2011 From: orsenthil at gmail.com (Senthil Kumaran) Date: Sat, 19 Mar 2011 10:17:28 +0800 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. 
In-Reply-To: <4D840B80.3040806@pearwood.info>
References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> <4D82B9C6.9030709@pearwood.info> <4D840B80.3040806@pearwood.info>
Message-ID: <20110319021728.GE2596@kevin>

On Sat, Mar 19, 2011 at 12:48:48PM +1100, Steven D'Aprano wrote:
> In that case, I nominate Michael Foord's documentation of urllib2
> for linking as well:
>
> http://www.voidspace.org.uk/python/articles/urllib2.shtml
>
> I am serious, by the way, I think Michael's urllib2 docs are excellent.

Already done! :)

http://docs.python.org/howto/urllib2.html
http://docs.python.org/dev/howto/urllib2.html

But there is a difference. Note that it was adopted as a howto under the Python docs itself, and it undergoes the same maintenance as any other document. We had to fix some examples in there a couple of times.

I can understand the case which Steven is putting forth, and I tend to agree with it. Linking an external resource consistently at various places (like a footnote in every module doc) is a deviation from our current practise, and there is a chance of it backfiring in unforeseen scenarios later. (A possible case: the external resource undergoes a lag in maintenance while the docs get updated more frequently.) There is a case in hand too: the Python 3 documentation vs PyMOTW, which is available only for Python 2.

As such, I am fine with the way things currently are. Stdlib documentation is for reference; I search for PyMOTW when I want examples from Doug's documents, and I use both of them in tandem. Needless to say, once a particular module becomes more familiar, I tend to look for reference more than the example.

--
Senthil

From tjreedy at udel.edu Sat Mar 19 04:06:21 2011
From: tjreedy at udel.edu (Terry Reedy)
Date: Fri, 18 Mar 2011 23:06:21 -0400
Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc.
In-Reply-To:
References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> <4D82B9C6.9030709@pearwood.info>
Message-ID:

On 3/18/2011 4:48 PM, Ian Bicking wrote:
> Frankly it doesn't
> matter if it's "blessed" as that doesn't incur any real benefit.

Driving traffic to a site with ads, and especially one whose content is also available as a paid book, is a real benefit. Google's fortune is based on that benefit.

I actually think that would be fine on wiki pages where any Python writer can add links to their stuff. I already suggested that there could be one per module. There could also be some for statements and builtins. The book I am working on will discuss function def statements with emphasis on while and for loops. I would love to have links to it in the while, for, and def sections of the reference, but I presently would not expect that -- only some mention in the wiki.

--
Terry Jan Reedy

From python at mrabarnett.plus.com Sat Mar 19 04:25:57 2011
From: python at mrabarnett.plus.com (MRAB)
Date: Sat, 19 Mar 2011 03:25:57 +0000
Subject: [Python-ideas] Adding function checks to regex
Message-ID: <4D842245.7040707@mrabarnett.plus.com>

Some of those who are relatively new to regexes sometimes ask how to write a regex which checks that a number is in a range or is a valid date. Although this may be possible, it certainly isn't easy. From what I've read, Perl has a way of including code in a regex, but I don't think that's a good idea. However, it occurs to me that there may be a case for being able to call a supplied function to perform such checking.

Borrowing some syntax from Perl, it could look like this:

def range_check(m):
    return 1 <= int(m.group()) <= 10

numbers = regex.findall(r"\b\d+\b(*CALL)", text, call=range_check)

The regex module would match as normal until the "(*CALL)", at which point it would call the function.
If the function returns True, the matching continues (and succeeds); if the function returns False, the matching backtracks (and fails). The function would be passed a match object. An extension, again borrowing the syntax from Perl, could include a tag like this: numbers = regex.findall(r"\b\d+\b(*CALL:RANGE)", text, call=range_check) The tag would be passed to the function so that it could support multiple checks. Alternatively, a tag could always be passed; if no tag is provided then None would be passed instead. There's also the additional possibility of providing a dict of functions instead and using the tag to select the function which should be called. I'd be interested in your opinions. From tjreedy at udel.edu Sat Mar 19 04:28:13 2011 From: tjreedy at udel.edu (Terry Reedy) Date: Fri, 18 Mar 2011 23:28:13 -0400 Subject: [Python-ideas] Add links in manual to test_modules. Message-ID: I am already making use of the source links added to the 3.2 manuals. I propose that we also add links to test-modules where appropriate, both in the language and library manuals. 1. It would help developers. 2. It could help users. Each test module is a set of examples of how to use the corresponding aspect of Python both correctly and incorrectly, with many corner cases that might not be clear from the manual. Each test module should exercise every feature of the stdlib module or other area covered. In addition, unlike external sets of examples, the test suite is updated and expected to run with the corresponding version. As language and implementation tests are better separated, people could better see which behavior is which. 3. Exposing the test suite would expose deficiencies and encourage improvements. -- Terry Jan Reedy From orsenthil at gmail.com Sat Mar 19 05:16:59 2011 From: orsenthil at gmail.com (Senthil Kumaran) Date: Sat, 19 Mar 2011 12:16:59 +0800 Subject: [Python-ideas] Add links in manual to test_modules. 
In-Reply-To: References: Message-ID: <20110319041659.GF2596@kevin> On Fri, Mar 18, 2011 at 11:28:13PM -0400, Terry Reedy wrote: > I propose that we also add links to test-modules where appropriate, > both in the language and library manuals. I hope this is not motivated by the other discussion we were having about linking example snippets. I find the links to source modules helpful, but I am not sure how helpful it would be to link the tests, as it risks adding to the confusion, given the xUnit-style 'unittests' we have in our test modules. When looking for usage assistance, people look for complete scripts rather than test scripts. Links to source modules help provide more accurate information, as code often explains things better than words. So, +0 from me. -- Senthil From ncoghlan at gmail.com Sat Mar 19 05:37:25 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 19 Mar 2011 14:37:25 +1000 Subject: [Python-ideas] Add links in manual to test_modules. In-Reply-To: <20110319041659.GF2596@kevin> References: <20110319041659.GF2596@kevin> Message-ID: Ick, no. We do all sorts of dodgy stuff in our test suite to stress implementations, probe obscure corner cases, double up on checks based on where and when bugs happened to be reported. Large parts of it are written to make the tests easier to write, not because they reflect any kind of idiomatic code, or good ways of doing things in a real application. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From steve at pearwood.info Sat Mar 19 07:45:00 2011 From: steve at pearwood.info (Steven D'Aprano) Date: Sat, 19 Mar 2011 17:45:00 +1100 Subject: [Python-ideas] Add links in manual to test_modules. In-Reply-To: References: <20110319041659.GF2596@kevin> Message-ID: <4D8450EC.10209@pearwood.info> Nick Coghlan wrote: > Ick, no. 
> > We do all sorts of dodgy stuff in our test suite to stress > implementations, probe obscure corner cases, double up on checks based > on where and when bugs happened to be reported. Large parts of it are > written to make the tests easier to write, not because they reflect > any kind of idiomatic code, or good ways of doing things in a real > application. But surely a test suite counts as a real application? It's likely to be bigger than the "actual" application or library, it still needs to be maintained, and is more likely to have bugs (on account of there being no test suite for the tests). Speaking for myself, I find code reuse and design of my test suites to be one of the harder parts of writing code. Perhaps I'd learn something from the Python tests, even if only "everyone has trouble writing good unit-tests" *wink* As I see it, the main benefit of Terry's suggestion is that it may encourage developers to write new tests for the standard library, or to refactor the existing tests. +0.5 from me. -- Steven From cmjohnson.mailinglist at gmail.com Sat Mar 19 09:41:32 2011 From: cmjohnson.mailinglist at gmail.com (Carl M. Johnson) Date: Fri, 18 Mar 2011 22:41:32 -1000 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. In-Reply-To: References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> <4D82B9C6.9030709@pearwood.info> Message-ID: On Fri, Mar 18, 2011 at 5:06 PM, Terry Reedy wrote: > I actually think that would be fine on wiki pages where any python writer > can add links to their stuff. I already suggested that there could be one > for module. There could also be some for statements and builtins. It might be nice if at the bottom of the official docs there were a link to a wiki version of the docs, and in the wiki version there were external links to things like MOTW. 
Of course, the wiki would eventually have links to resources that were OK but not great, but I don't think that would be a problem as long as each page of the wiki has a link to its equivalent in the official docs. The idea would be if I wanted to know about module X, first I look in the official docs. If I find that my question wasn't answered, or I just want more, I could switch over to the wiki to see if anything different is over there and then check out the list of external links. I'm not sure if it would be maintainable or not, but would it be that bad to try as an experiment? If after six months it doesn't seem to have worked, take the link out of the official docs and shut the wiki down. No harm in trying something out, right? -- Carl Johnson From __peter__ at web.de Sat Mar 19 12:33:58 2011 From: __peter__ at web.de (Peter Otten) Date: Sat, 19 Mar 2011 12:33:58 +0100 Subject: [Python-ideas] Adding function checks to regex References: <4D842245.7040707@mrabarnett.plus.com> Message-ID: MRAB wrote: > Some of those who are relative new to regexes sometimes ask how to write > a regex which checks that a number is in a range or is a valid date. > Although this may be possible, it certainly isn't easy. > > From what I've read, Perl has a way of including code in a regex, but I > don't think that's a good idea > > However, it occurs to me that there may be a case for being able to call > a supplied function to perform such checking. > > Borrowing some syntax from Perl, it could look like this: > > def range_check(m): > return 1 <= int(m.group()) <= 10 > > numbers = regex.findall(r"\b\d+\b(*CALL)", text, call=range_check) > > The regex module would match as normal until the "(*CALL)", at which > point it would call the function. If the function returns True, the > matching continues (and succeeds); if the function returns False, the > matching backtracks (and fails). 
I would approach that with numbers = (int(m.group()) for m in re.finditer(r"\b\d+\b", text)) numbers = [n for n in numbers if 1 <= n <= 10] here. This is of similar complexity, but has the advantage that you can use the building blocks throughout your python scripts. Could you give an example where the benefits of the proposed syntax stand out more? > The function would be passed a match object. > > An extension, again borrowing the syntax from Perl, could include a tag > like this: > > numbers = regex.findall(r"\b\d+\b(*CALL:RANGE)", text, > call=range_check) > > The tag would be passed to the function so that it could support > multiple checks. [brainstorm mode] Could the same be achieved without new regex syntax? I'm thinking of reusing named groups: re.findall(r"\b(?P<number>\d+)\b", text, number=lambda s: 1 <= int(s) <= 10) Peter From solipsis at pitrou.net Sat Mar 19 14:01:30 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 19 Mar 2011 14:01:30 +0100 Subject: [Python-ideas] Add links in manual to test_modules. References: <20110319041659.GF2596@kevin> <4D8450EC.10209@pearwood.info> Message-ID: <20110319140130.4cc63394@pitrou.net> On Sat, 19 Mar 2011 17:45:00 +1100 Steven D'Aprano wrote: > Nick Coghlan wrote: > > Ick, no. > > > > We do all sorts of dodgy stuff in our test suite to stress > > implementations, probe obscure corner cases, double up on checks based > > on where and when bugs happened to be reported. Large parts of it are > > written to make the tests easier to write, not because they reflect > > any kind of idiomatic code, or good ways of doing things in a real > > application. > > But surely a test suite counts as a real application? It's likely to be > bigger than the "actual" application or library, it still needs to be > maintained, and is more likely to have bugs (on account of there being > no test suite for the tests). 
I agree with Nick: the test suite does not give "examples" of how to use the APIs - and often it will actually invoke semi-private APIs in order to ease testing. Claiming the test suite has any educational value would be a disservice to our users. -1 from me. Regards Antoine. From mwm at mired.org Sat Mar 19 15:26:25 2011 From: mwm at mired.org (Mike Meyer) Date: Sat, 19 Mar 2011 10:26:25 -0400 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. In-Reply-To: <4D840B80.3040806@pearwood.info> References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> <4D82B9C6.9030709@pearwood.info> <4D840B80.3040806@pearwood.info> Message-ID: <20110319102625.062dbd27@bhuda.mired.org> On Sat, 19 Mar 2011 12:48:48 +1100 Steven D'Aprano wrote: > But that's not my main objection. We keep the bar high for inclusion in > the standard library, and it shouldn't offend anyone. Has treating the documentation like the library been considered? That is, instead of linking to Doug's examples, assuming Doug consented, incorporate them directly into the doc, and give Doug a commit bit so he can maintain them. That would also solve the versioning issues, at least to the level of "which version is the example for." Done right, the examples should be automatically testable. http://www.mired.org/consulting.html Independent Software developer/SCM consultant, email for more information. O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From doug.hellmann at gmail.com Sat Mar 19 15:54:32 2011 From: doug.hellmann at gmail.com (Doug Hellmann) Date: Sat, 19 Mar 2011 10:54:32 -0400 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. 
In-Reply-To: <20110319102625.062dbd27@bhuda.mired.org> References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> <4D82B9C6.9030709@pearwood.info> <4D840B80.3040806@pearwood.info> <20110319102625.062dbd27@bhuda.mired.org> Message-ID: On Mar 19, 2011, at 10:26 AM, Mike Meyer wrote: > On Sat, 19 Mar 2011 12:48:48 +1100 > Steven D'Aprano wrote: >> But that's not my main objection. We keep the bar high for inclusion in >> the standard library, and it shouldn't offend anyone. > > Has treating the documentation like the library been considered? That > is, instead of linking to Doug's examples, assuming Doug consented, > incorporate them directly into the doc, and give Doug a commit bit so > he can maintain them. That would also solve the versioning issues, at > least to the level of "which version is the example for." Done right, > the examples should be automatically testable. Some of the examples were copied into the standard library docs as part of the GHOP contest a few years ago. I would prefer not to incorporate everything directly, for now, because it would complicate the licensing for publication. I appreciate everyone's willingness to discuss Tarek's proposal and consider alternatives, but I think it is probably best to leave things as they are until there is an official policy on outbound links from the standard library docs. I agree that having a large number of links to other sites would be a maintenance burden, so I completely understand the reluctance to take that on as a project. Drawing a parallel from the way the library code is handled, it seems best to treat PyMOTW as a separately maintained resource and let users find it separately. Now, if there was an index equivalent to PyPI for documentation resources, *that* would complete the analogy. 
:-) Doug From dsdale24 at gmail.com Sat Mar 19 16:29:23 2011 From: dsdale24 at gmail.com (Darren Dale) Date: Sat, 19 Mar 2011 11:29:23 -0400 Subject: [Python-ideas] Would it be possible to define abstract read/write properties with decorators? In-Reply-To: References: Message-ID: On Fri, Mar 18, 2011 at 2:36 PM, Guido van Rossum wrote: > On Fri, Mar 18, 2011 at 10:29 AM, Darren Dale wrote: >> On Sun, Mar 13, 2011 at 12:49 PM, Darren Dale wrote: >>> On Sun, Mar 13, 2011 at 11:18 AM, Darren Dale wrote: >>> [...] >>>> It seems like it should be possible for Python to support the >>>> decorator syntax for declaring abstract read/write properties. The >>>> most elegant approach might be the following, if it could be >>>> supported: >>>> >>>> class Foo(metaclass=ABCMeta): >>>>     # Note the use of @property rather than @abstractproperty: >>>>     @property >>>>     @abstractmethod >>>>     def bar(self): >>>>         return 1 >>>>     @bar.setter >>>>     @abstractmethod >>>>     def bar(self, val): >>>>         pass [...] >> The modifications to "property" to better support abstract base >> classes using the decorator syntax and @abstractmethod (rather than >> @abstractproperty) are even simpler than I originally thought: >> >> class Property(property): >> >>     def __init__(self, *args, **kwargs): >>         super(Property, self).__init__(*args, **kwargs) >>         for f in (self.fget, self.fset, self.fdel): >>             if getattr(f, '__isabstractmethod__', False): >>                 self.__isabstractmethod__ = True >>                 break >> >>> >>> class C(metaclass=abc.ABCMeta): >>>     @Property >>>     @abc.abstractmethod >>>     def x(self): >>>         return 1 >>>     @x.setter >>>     @abc.abstractmethod >>>     def x(self, val): >>>         pass >>> >>> try: >>>     c=C() >>> except TypeError as e: >>>     print(e) >>> >>> class D(C): >>>     @C.x.getter >>>     def x(self): >>>         return 2 >>> >>> try: >>>     d=D() >>> except TypeError as e: >>>     print(e) >>> >>> class E(D): >>>     @D.x.setter >>>     def x(self, val): >>>         pass >>> >>> print(E()) >>> >> >> running this example yields: >> >> Can't instantiate abstract class C with abstract methods x >> Can't instantiate abstract class D with abstract methods x >> <__main__.E object at 0x212ee10> >> >> Wouldn't it be possible to include this in python-3.3? > > Sounds good to me. I took a stab at this, but unfortunately I have not been able to perform a complete build of python from the mercurial checkout on either ubuntu 11.04 or OS X 10.6.6, for reasons that appear unrelated to the changes below (undefined setlocale symbols on OS X, Could not find platform dependent libraries segfault on ubuntu). I'm an experienced python programmer, but not an experienced python hacker. Would anyone care to comment on (or test) the changes?: diff -r e34b09c69dd3 Objects/descrobject.c --- a/Objects/descrobject.c Sat Mar 12 22:31:06 2011 -0500 +++ b/Objects/descrobject.c Sat Mar 19 11:22:14 2011 -0400 @@ -1117,6 +1121,7 @@ PyObject *prop_set; PyObject *prop_del; PyObject *prop_doc; + PyObject *prop_isabstract; int getter_doc; } propertyobject; @@ -1128,6 +1133,8 @@ {"fset", T_OBJECT, offsetof(propertyobject, prop_set), READONLY}, {"fdel", T_OBJECT, offsetof(propertyobject, prop_del), READONLY}, {"__doc__", T_OBJECT, offsetof(propertyobject, prop_doc), READONLY}, + {"__isabstractmethod__", T_OBJECT, + offsetof(propertyobject, prop_isabstract), READONLY}, {0} }; @@ -1180,6 +1187,7 @@ Py_XDECREF(gs->prop_set); Py_XDECREF(gs->prop_del); Py_XDECREF(gs->prop_doc); + Py_XDECREF(gs->prop_isabstract); self->ob_type->tp_free(self); } @@ -1213,7 +1221,7 @@ PyErr_SetString(PyExc_AttributeError, value == NULL ?
"can't delete attribute" : - "can't set attribute"); + "can't set attribute"); return -1; } if (value == NULL) @@ -1263,6 +1271,21 @@ return new; } +static void +property_identify_abstract_method(PyObject *self, PyObject *method) +{ + /* Set self.__isabstractmethod__ if method is abstract */ + if (method != NULL){ + PyObject *is_abstract = PyObject_GetAttrString(method, + "__isabstractmethod__"); + if (PyObject_IsTrue(is_abstract) > 0){ + Py_INCREF(Py_True); + PyObject_SetAttrString(self, "__isabstractmethod__", Py_True); + } + Py_DECREF(is_abstract); + } +} + static int property_init(PyObject *self, PyObject *args, PyObject *kwds) { @@ -1285,11 +1308,13 @@ Py_XINCREF(set); Py_XINCREF(del); Py_XINCREF(doc); + Py_INCREF(Py_False); prop->prop_get = get; prop->prop_set = set; prop->prop_del = del; prop->prop_doc = doc; + prop->prop_isabstract = Py_False; prop->getter_doc = 0; /* if no docstring given and the getter has one, use that one */ @@ -1320,6 +1345,11 @@ } } + /* set __isabstractmethod__ if fget, fset, or fdel are abstract methods */ + property_identify_abstract_method(self, get); + property_identify_abstract_method(self, set); + property_identify_abstract_method(self, del); + return 0; } From python at mrabarnett.plus.com Sat Mar 19 17:19:30 2011 From: python at mrabarnett.plus.com (MRAB) Date: Sat, 19 Mar 2011 16:19:30 +0000 Subject: [Python-ideas] Adding function checks to regex In-Reply-To: References: <4D842245.7040707@mrabarnett.plus.com> Message-ID: <4D84D792.7040008@mrabarnett.plus.com> On 19/03/2011 11:33, Peter Otten wrote: > MRAB wrote: > >> Some of those who are relative new to regexes sometimes ask how to write >> a regex which checks that a number is in a range or is a valid date. >> Although this may be possible, it certainly isn't easy. 
>> >> From what I've read, Perl has a way of including code in a regex, but I >> don't think that's a good idea >> >> However, it occurs to me that there may be a case for being able to call >> a supplied function to perform such checking. >> >> Borrowing some syntax from Perl, it could look like this: >> >> def range_check(m): >> return 1<= int(m.group())<= 10 >> >> numbers = regex.findall(r"\b\d+\b(*CALL)", text, call=range_check) >> >> The regex module would match as normal until the "(*CALL)", at which >> point it would call the function. If the function returns True, the >> matching continues (and succeeds); if the function returns False, the >> matching backtracks (and fails). > > I would approach that with > > numbers = (int(m.group()) for m in re.finditer(r"\b\d+\b")) > numbers = [n for n in numbers if 1<= n<= 10] > > here. This is of similar complexity, but has the advantage that you can use > the building blocks throughout your python scripts. Could you give an > example where the benefits of the proposed syntax stand out more? > There may be a use case in config files where you define rules (for example, Apache ) or web forms where you have validation, but a regex is too limited. This would enable you to add 'richer' checking. There could be a predefined set of checks, such as whether a date is valid. >> The function would be passed a match object. >> >> An extension, again borrowing the syntax from Perl, could include a tag >> like this: >> >> numbers = regex.findall(r"\b\d+\b(*CALL:RANGE)", text, >> call=range_check) >> >> The tag would be passed to the function so that it could support >> multiple checks. > > [brainstorm mode] > Could the same be achieved without new regex syntax? I'm thinking of reusing > named groups: > > re.findall(r"\b(?P<number>\d+)\b", text, > number=lambda s: 1<= int(s)<= 10) > I'm not sure about that. 
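[Editor's note: neither MRAB's (*CALL) verb nor Peter's validator keyword argument exists in the stdlib re module or the third-party regex module; they are proposals. For readers who want the effect today, the tag-dispatch idea can be approximated by filtering on named groups after each match. A rough sketch, where findall_checked is an invented helper rather than any library API:]

```python
import re

def findall_checked(pattern, text, **checks):
    """Return the text of each match whose named groups all pass their checks.

    checks maps a group name to a predicate called with that group's text,
    mirroring Peter's number=lambda s: ... brainstorm.
    """
    results = []
    for m in re.finditer(pattern, text):
        groups = m.groupdict()
        ok = all(check(groups[name])
                 for name, check in checks.items()
                 if groups.get(name) is not None)  # skip groups that didn't participate
        if ok:
            results.append(m.group())
    return results

text = "ids: 3 17 204 9"
# Keep only the numbers in the range 1-10, like MRAB's range_check example.
print(findall_checked(r"\b(?P<number>\d+)\b", text,
                      number=lambda s: 1 <= int(s) <= 10))
# ['3', '9']
```

This stays within the documented re API, but it can only accept or reject a finished match; MRAB's in-pattern check could additionally trigger backtracking into other alternatives of the pattern, which post-filtering cannot emulate.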
From guido at python.org Sat Mar 19 17:22:50 2011 From: guido at python.org (Guido van Rossum) Date: Sat, 19 Mar 2011 09:22:50 -0700 Subject: [Python-ideas] Add links in manual to test_modules. In-Reply-To: <4D8450EC.10209@pearwood.info> References: <20110319041659.GF2596@kevin> <4D8450EC.10209@pearwood.info> Message-ID: On Fri, Mar 18, 2011 at 11:45 PM, Steven D'Aprano wrote: > Nick Coghlan wrote: >> >> Ick, no. >> >> We do all sorts of dodgy stuff in our test suite to stress >> implementations, probe obscure corner cases, double up on checks based >> on where and when bugs happened to be reported. Large parts of it are >> written to make the tests easier to write, not because they reflect >> any kind of idiomatic code, or good ways of doing things in a real >> application. > > But surely a test suite counts as a real application? It's likely to be > bigger than the "actual" application or library, it still needs to be > maintained, and is more likely to have bugs (on account of there being no > test suite for the tests). > > Speaking for myself, I find code reuse and design of my test suites to be > one of the harder parts of writing code. Perhaps I'd learn something from > the Python tests, even if only "everyone has trouble writing good > unit-tests" *wink* > > As I see it, the main benefit of Terry's suggestion is that it may encourage > developers to write new tests for the standard library, or to refactor the > existing tests. +0.5 from me. I'm with Nick. Tests (at least the ones we have for the standard library) are rarely any good as example code for the modules being tested. They may be great if you want to learn to write tests or if you want to contribute to the stdlib, but they are easy enough to find. Linking them from the docs is sending people to a body of code that most people should never peruse. 
The one exception is that the tests can show language/library lawyers how something is supposed to behave in more detail than docs, without having to actually read the source. But again that's pretty advanced and the people interested in that stuff know where to go. -- --Guido van Rossum (python.org/~guido) From guido at python.org Sat Mar 19 17:24:53 2011 From: guido at python.org (Guido van Rossum) Date: Sat, 19 Mar 2011 09:24:53 -0700 Subject: [Python-ideas] Would it be possible to define abstract read/write properties with decorators? In-Reply-To: References: Message-ID: Thanks much for your contribution! In order to get it reviewed and submitted, can you please create a bug for this issue (mention the python-ideas thread), upload your patch there, and perhaps ping python-dev? --Guido On Sat, Mar 19, 2011 at 8:29 AM, Darren Dale wrote: > [...] > I took a stab at this, but unfortunately I have not been able to > perform a complete build of python from the mercurial checkout on > either ubuntu 11.04 or OS X 10.6.6, for reasons that appear unrelated > to the changes below (undefined setlocale symbols on OS X, Could not > find platform dependent libraries segfault on ubuntu). > I'm an experienced python programmer, but not an experienced python > hacker. Would anyone care to comment on (or test) the changes?: > [...] -- --Guido van Rossum (python.org/~guido) From ironfroggy at gmail.com Sat Mar 19 17:35:34 2011 From: ironfroggy at gmail.com (Calvin Spealman) Date: Sat, 19 Mar 2011 12:35:34 -0400 Subject: [Python-ideas] Adding function checks to regex In-Reply-To: <4D84D792.7040008@mrabarnett.plus.com> References: <4D842245.7040707@mrabarnett.plus.com> <4D84D792.7040008@mrabarnett.plus.com> Message-ID: I am -1 on the whole idea. However, for the sake of argument, I'll say that if it were done I would not bind the callbacks at match time. Instead, they would be part of the compiled regex objects. r = re.compile(r"foo:(?C<check_bounds>\d+)", check_bounds=lambda d: 1 <= int(d) <= 100) and then r could be used like any other regex, and you don't need to know about the callbacks when actually using it, just to build it. On Sat, Mar 19, 2011 at 12:19 PM, MRAB wrote: > On 19/03/2011 11:33, Peter Otten wrote: > >> MRAB wrote: >> >> Some of those who are relative new to regexes sometimes ask how to write >>> a regex which checks that a number is in a range or is a valid date. >>> Although this may be possible, it certainly isn't easy. 
>>> >>> Borrowing some syntax from Perl, it could look like this: >>> >>> def range_check(m): >>> return 1<= int(m.group())<= 10 >>> >>> numbers = regex.findall(r"\b\d+\b(*CALL)", text, call=range_check) >>> >>> The regex module would match as normal until the "(*CALL)", at which >>> point it would call the function. If the function returns True, the >>> matching continues (and succeeds); if the function returns False, the >>> matching backtracks (and fails). >>> >> >> I would approach that with >> >> numbers = (int(m.group()) for m in re.finditer(r"\b\d+\b")) >> numbers = [n for n in numbers if 1<= n<= 10] >> >> here. This is of similar complexity, but has the advantage that you can >> use >> the building blocks throughout your python scripts. Could you give an >> example where the benefits of the proposed syntax stand out more? >> >> There may be a use case in config files where you define rules (for > example, Apache ) or web forms where you have validation, > but a regex is too limited. This would enable you to add 'richer' > checking. There could be a predefined set of checks, such as whether a > date is valid. > > > The function would be passed a match object. >>> >>> An extension, again borrowing the syntax from Perl, could include a tag >>> like this: >>> >>> numbers = regex.findall(r"\b\d+\b(*CALL:RANGE)", text, >>> call=range_check) >>> >>> The tag would be passed to the function so that it could support >>> multiple checks. >>> >> >> [brainstorm mode] >> Could the same be achieved without new regex syntax? I'm thinking of >> reusing >> named groups: >> >> re.findall(r"\b(?P\d+)\b", text, >> number=lambda s: 1<= int(s)<= 10) >> >> I'm not sure about that. > > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > http://mail.python.org/mailman/listinfo/python-ideas > -- Read my blog! I depend on your acceptance of my opinion! I am interesting! 
http://techblog.ironfroggy.com/ Follow me if you're into that sort of thing: http://www.twitter.com/ironfroggy From jnoller at gmail.com Sat Mar 19 18:27:06 2011 From: jnoller at gmail.com (Jesse Noller) Date: Sat, 19 Mar 2011 13:27:06 -0400 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. In-Reply-To: References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> <4D82B9C6.9030709@pearwood.info> Message-ID: On Fri, Mar 18, 2011 at 6:29 PM, Guido van Rossum wrote: > On Fri, Mar 18, 2011 at 1:48 PM, Ian Bicking wrote: >> On Thu, Mar 17, 2011 at 8:47 PM, Steven D'Aprano >> wrote: >>> With all respect to Doug, do we really want to bless his website more than >>> any of the other Python blogs, tutorials, etc. out on the Internet? >> >> Bah humbug. If we could link stdlib docs to every good quality piece of >> coverage for that module then that would be great. It's not like someone >> else has been denied, or that we're giving Doug exclusive linking rights or >> something. It just happens he has written the most comprehensive and >> maintained set of docs, and so it would be bureaucratically rather easy to >> get a bunch more helpful links in the docs that will help people learn >> Python better. Frankly it doesn't matter if it's "blessed" as that doesn't 
The fact is, he's written better docs on many things, and we're doing the community a disservice by not actively exposing them as supplements to the existing documentation. Why is it so hard to simply do the right thing here? jesse From g.brandl at gmx.net Sat Mar 19 19:02:00 2011 From: g.brandl at gmx.net (Georg Brandl) Date: Sat, 19 Mar 2011 19:02:00 +0100 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. In-Reply-To: References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> <4D82B9C6.9030709@pearwood.info> Message-ID: On 19.03.2011 18:27, Jesse Noller wrote: > On Fri, Mar 18, 2011 at 6:29 PM, Guido van Rossum wrote: >> On Fri, Mar 18, 2011 at 1:48 PM, Ian Bicking wrote: >>> On Thu, Mar 17, 2011 at 8:47 PM, Steven D'Aprano >>> wrote: >>>> With all respect to Doug, do we really want to bless his website more than >>>> any of the other Python blogs, tutorials, etc. out on the Internet? >>> >>> Bah humbug. If we could link stdlib docs to every good quality piece of >>> coverage for that module then that would be great. It's not like someone >>> else has been denied, or that we're giving Doug exclusive linking rights or >>> something. It just happens he has written the most comprehensive and >>> maintained set of docs, and so it would be bureaucratically rather easy to >>> get a bunch more helpful links in the docs that will help people learn >>> Python better. Frankly it doesn't matter if it's "blessed" as that doesn't >>> incur any real benefit. >> >> Good call! >> > > +1000 > > Doug's docs are indispensable. The number of times I have had people at > work and elsewhere come to me and ask "why aren't the PyMOTW in, or > linked to from the stdlib docs" is astounding. People consider them > *better* resources than the stdlib docs right now. We shouldn't be > afraid to link to real, valuable resources that enhance people's > ability to learn the language and the standard library. What Jesse said.
Georg From raymond.hettinger at gmail.com Sat Mar 19 19:10:10 2011 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Sat, 19 Mar 2011 11:10:10 -0700 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. In-Reply-To: References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> <4D82B9C6.9030709@pearwood.info> Message-ID: <4666A9E2-A7A6-454D-BD9E-D4265747C6D5@gmail.com> On Mar 19, 2011, at 1:41 AM, Carl M. Johnson wrote: > On Fri, Mar 18, 2011 at 5:06 PM, Terry Reedy wrote: > >> I actually think that would be fine on wiki pages where any Python writer >> can add links to their stuff. I already suggested that there could be one >> per module. There could also be some for statements and builtins. > > It might be nice if at the bottom of the official docs there were a link to > a wiki version of the docs and in the wiki version there were external > links to things like MOTW. +1 I'm a big fan of Doug's work and would like to see some way to get there. Also, I think a wiki is a good way for people to post examples and start conversations around something they find confusing or post recipes for good ways to use a given feature. Raymond From ianb at colorstudy.com Sat Mar 19 20:19:16 2011 From: ianb at colorstudy.com (Ian Bicking) Date: Sat, 19 Mar 2011 14:19:16 -0500 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc.
In-Reply-To: <4D840B80.3040806@pearwood.info> References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> <4D82B9C6.9030709@pearwood.info> <4D840B80.3040806@pearwood.info> Message-ID: On Fri, Mar 18, 2011 at 8:48 PM, Steven D'Aprano wrote: > Ian Bicking wrote: > >> On Thu, Mar 17, 2011 at 8:47 PM, Steven D'Aprano > >wrote: >> >> Terry Reedy wrote: >>> >>> On 3/17/2011 7:19 PM, Senthil Kumaran wrote: >>>> >>>> On Wed, Mar 16, 2011 at 05:35:41PM -0400, Doug Hellmann wrote: >>>>> >>>>> As I told Doug during Pycon, I think it would be a good idea to >>>>> >>>>>> link his PyMOTW pages to our modules documentation in >>>>>>> docs.python.org so people have more examples etc. >>>>>>> >>>>>>> Various people have written various docs showing Python by example. >>>> I do >>>> not think any one should be singled out in the docs. On the other hand, >>>> the >>>> wiki could have a PythonByExample page (or pages) with links to various >>>> resources. >>>> >>>> What he said. >>> >>> With all respect to Doug, do we really want to bless his website more >>> than >>> any of the other Python blogs, tutorials, etc. out on the Internet? >>> >>> >> Bah humbug. If we could link stdlib docs to every good quality piece of >> coverage for that module then that would be great. It's not like someone >> else has been denied, or that we're giving Doug exclusive linking rights >> or >> > > In that case, I nominate Michael Foord's documentation of urllib2 for > linking as well: > > http://www.voidspace.org.uk/python/articles/urllib2.shtml > > I am serious, by the way, I think Michael's urllib2 docs are excellent. > Certainly, it also seems very appropriate. > But are we sure we want to go down this path? It's neither practical nor > desirable to fill the Python docs with links to every good quality external > source, so in a very real sense, yes, others will be denied. 
If not now, at > some point we're going to say "I'm sorry, your web site is really excellent, > but we're not going to link to you." > > But that's not my main objection. We keep the bar high for inclusion in the > standard library, and it shouldn't offend anyone. I'm sure people will cope > with the disappointment of having their excellent tutorial or blog rejected > for inclusion. > I guess I'd summarize your point here that you feel that collective ownership and maintaining of additional docs will lead to better quality than external and single-author documentation. I think with docs in particular this is difficult because the bureaucratic nature of collective ownership makes it hard to actually do the work to improve documentation, as it's hard to maintain a sense of voice and I think it's hard to maintain the willpower to write documentation in particular in the face of collectivist stop-energy. And unlike the standard library itself, documentation does not suffer nearly as much from size -- the stdlib itself is conservatively maintained, but documentation could be much more freely maintained, and external links are one way of accomplishing that. > I think that the real risk comes from the implications of linking to an > external page from the docs. If you think that there are no such > implications, then you will probably think that there is no downside to such > links. I hope to persuade you that there are, and that they need to be > considered before making this decision. > > Giving a list of "useful external resources" is a very different from > linking to Doug's site repeatedly throughout the module docs. The Python > docs are not some blog, where external links are posted for fun or for > giving credit, nor is it Wikipedia where the external links are used as > authority. The Python docs are the authority. If they link to an external > page, it confers some level of authority and officialness to that external > page. Sometimes we do so explicitly, e.g. 
we link to Fredrik Lundh's > Elementtree pages: > > http://docs.python.org/library/xml.etree.elementtree.html > > other times it is implied, e.g. the Decimal docs say "See also" and then > link to a pair of carefully selected (semi-)official sources. > I don't think there is any significant authority incurred here, because the docs themselves are not authoritative. Generally speaking if many people use something in the standard library that is not documented, it turns into a kind of precedent anyway -- so if Doug or anyone else documents something non-public in the standard library, it means that interface is going to have to be supported or at least thoughtfully deprecated. That's true regardless of linking. > But the proposal goes even further: it would link to Doug's site from > nearly every page in the modules documentation. By linking to an external > site in such an intimate fashion, I believe we would be giving a significant > level of official standing to an external site that we don't control. We > would be saying not just that the site is a useful site, but that it's such > a great site, and a *trusted* site, that we link to it all throughout the > official documentation. That says a lot, and we shouldn't be so blasé about > saying it. I... don't really get this. I feel fine with this condition with respect to Doug's docs, but I don't see it as necessary anyway; there are good documents written by relatively unknown authors on specific topics. I don't seriously fear those docs will be changed underneath our feet. They probably *won't* be updated, and I have definitely noticed old Developer's Works articles that are distractingly out of date. OTOH, linking to good things *helps*, because when you Google for something it's hard to separate the old from the new. It adds us to the curation process. > something.
It just happens he has written the most comprehensive and >> maintained set of docs, and so it would be bureaucratically rather easy to >> get a bunch more helpful links in the docs that will help people learn >> Python better. Frankly it doesn't matter if it's "blessed" as that >> doesn't >> incur any real benefit. >> > > I think you mean "cost". > I mean benefit to the person we're linking to. People are writing these things to help other people learn Python, they are acts of generosity, and any other benefits they might incur to the author are just a nice coincidence. > But there is a real cost as well as benefit: the cost comes as risk. I > think you have misunderstood my point about who controls the external > content. Dead links are the least risk, and the only one that can be managed > automatically. We would be linking to pages that aren't controlled by us. We > have no real control over whether the pages remain updated, or what content > goes into those pages, or whether they get filled with advertising, or > whatever. These are real risks -- even if you trust Doug implicitly, what > happens if he gets hit by a bus and somebody else takes over his website? > Easy: we change the links! Even if we have to remove them entirely and lose the content, in the meantime it will have done good. If it seems like a concern, maybe we can talk about licensing -- e.g., a nice CC license (I don't see a problem with non-commercial), with a gentleman's agreement that we not clone the content unless the author explicitly lets go or becomes unresponsive. But such licensing is a detail we can consider later IMHO, it doesn't have any concrete effect now and we could look at it later if we start seeing a lot of external documentation and use of that documentation. There are some content management concerns, which would probably be good to think through. 
It would be nice if links were tagged with what Python versions they describe because it is a realistic concern that things won't be updated (and already there's many things that aren't updated for Python 3, or will be forked documents in which case you'd want both links in with different tags). It would be even nicer if we had a little microformat for pages to self-describe versions. It would be nice if there was a somewhat transparent link submission process, along with content guidelines; even if they are fuzzy guidelines like "a new link should add significant benefit over the content of the original documentation and any existing links" or "we strongly prefer documentation that has seen community acknowledgement and use". Mostly expectation management for submitters. External documents that themselves refer to other documents can be problematic, especially something like "a review of web frameworks" -- which might be nice to link to from wsgiref, but in a practical sense probably much too hard to maintain unless someone really wrote it to be timeless (which would be possible with the proper PyPI category links, wiki links, etc). I don't feel like just linking to wiki.python.org pages is a good idea, because all the problems with external links are also problems with wiki pages and generally wiki pages seem to end up worse -- more out of date, less well edited, more dead links, etc. In theory they could be maintained better, but it's failed enough in practice to put me off (barring some serious rethinking of the wiki itself; like maybe a more Stack Overflow approach might be successful where a freeform editable page is not). Also I don't think we'd see much traffic from the docs through to useful external resources if we have an interstitial wiki page. And if people don't click through then we'll have neutered the point of the original proposal. Ian -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bruce at leapyear.org Sat Mar 19 21:50:48 2011 From: bruce at leapyear.org (Bruce Leban) Date: Sat, 19 Mar 2011 13:50:48 -0700 Subject: [Python-ideas] Adding function checks to regex In-Reply-To: References: <4D842245.7040707@mrabarnett.plus.com> <4D84D792.7040008@mrabarnett.plus.com> Message-ID: On Sat, Mar 19, 2011 at 4:33 AM, Peter Otten <__peter__ at web.de> wrote: > Could the same be achieved without new regex syntax? I'm thinking of > reusing > named groups: > > re.findall(r"\b(?P<number>\d+)\b", text, > number=lambda s: 1 <= int(s) <= 10) I like this alternative. (1) the function can simply operate on a string rather than a regex object. (2) it makes the function optional, enabling verification and testing of the regex to be separated from testing the function. (3) it would make it easier to port code that uses this to other languages and perhaps make it more likely to be adopted by other languages. On Sat, Mar 19, 2011 at 9:35 AM, Calvin Spealman wrote: > I am -1 on the whole idea. > > However, for the sake of argument, I'll say that if it was done I would not > bind the callbacks at match time. > > Instead, they would be part of the compiled regex objects. > > r = re.compile(r"foo:(?C<check_bounds>\d+)", check_bounds=lambda d: 1 <= > int(d) <= 100) > > and then r could be used like any other regex, and you don't need to know > about the callbacks when actually using it, just to build it. > I'd want to understand likely use cases before deciding on early/late binding of the callbacks. And I'm not sure the expressive power of this is worth the effort. --- Bruce New Puzzazz newsletter: http://j.mp/puzzazz-news-2011-03 Make your web app more secure: http://j.mp/gruyere-security -------------- next part -------------- An HTML attachment was scrubbed...
URL: From solipsis at pitrou.net Sat Mar 19 22:07:33 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 19 Mar 2011 22:07:33 +0100 Subject: [Python-ideas] Adding function checks to regex References: <4D842245.7040707@mrabarnett.plus.com> Message-ID: <20110319220733.1091afc2@pitrou.net> On Sat, 19 Mar 2011 03:25:57 +0000 MRAB wrote: > > However, it occurs to me that there may be a case for being able to call > a supplied function to perform such checking. What would be such a case? Adding more complications to the regex syntax and semantics is something most of us would frown upon, IMHO. *Especially* if it involves mixing in arbitrary Python callbacks referenced by name in the regex... Regards Antoine. From raymond.hettinger at gmail.com Sat Mar 19 22:26:16 2011 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Sat, 19 Mar 2011 14:26:16 -0700 Subject: [Python-ideas] Adding function checks to regex In-Reply-To: <20110319220733.1091afc2@pitrou.net> References: <4D842245.7040707@mrabarnett.plus.com> <20110319220733.1091afc2@pitrou.net> Message-ID: On Mar 19, 2011, at 2:07 PM, Antoine Pitrou wrote: > On Sat, 19 Mar 2011 03:25:57 +0000 > MRAB wrote: >> >> However, it occurs to me that there may be a case for being able to call >> a supplied function to perform such checking. > > What would be such a case? > Adding more complications to the regex syntax and semantics is > something most of us would frown upon, IMHO. *Especially* if it involves > mixing in arbitrary Python callbacks referenced by name in the regex... I concur with Antoine. The re module and syntax already present a learning challenge. Adding more features (especially non-standard ones) would make the situation worse, even if it simplified some particular use case.
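For what it's worth, the per-group checking being debated in this thread can already be approximated today without the syntax changes Antoine and Raymond object to. A hedged sketch: the helper name `compile_checked` is my invention, and it post-filters whole matches rather than backtracking inside the pattern, so it is not fully equivalent to the (*CALL) proposal:

```python
import re

def compile_checked(pattern, **checks):
    """Compile `pattern` and bind per-named-group check functions to it.

    Emulates binding the callbacks at compile time, as Calvin suggested,
    but with stock `re`: matches whose named groups fail a check are
    simply dropped afterwards instead of being backtracked into.
    """
    compiled = re.compile(pattern)

    def finditer(text):
        for m in compiled.finditer(text):
            if all(check(m.group(name)) for name, check in checks.items()):
                yield m

    return finditer

# Bind the range check once, at "compile" time.
find_small = compile_checked(r"\b(?P<number>\d+)\b",
                             number=lambda s: 1 <= int(s) <= 10)

print([m.group() for m in find_small("3 17 8 42 9")])  # ['3', '8', '9']
```

Because the checks are attached when the pattern is built, callers of `find_small` never need to know about them, which is the part of Calvin's idea this preserves.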
Raymond From masklinn at masklinn.net Sat Mar 19 22:32:14 2011 From: masklinn at masklinn.net (Masklinn) Date: Sat, 19 Mar 2011 22:32:14 +0100 Subject: [Python-ideas] Adding function checks to regex In-Reply-To: References: <4D842245.7040707@mrabarnett.plus.com> <4D84D792.7040008@mrabarnett.plus.com> Message-ID: <36AE6D88-6108-4D61-ACC9-B6D0B8CEBD25@masklinn.net> On 2011-03-19, at 17:35, Calvin Spealman wrote: > I am -1 on the whole idea. > > However, for the sake of argument, I'll say that if it was done I would not > bind the callbacks at match time. > > Instead, they would be part of the compiled regex objects. > > r = re.compile(r"foo:(?C<check_bounds>\d+)", check_bounds=lambda d: 1 <= > int(d) <= 100) > > and then r could be used like any other regex, and you don't need to know > about the callbacks when actually using it, just to build it. That's equivalent to Peter's code though: you're compiling a regex object, he was using one of the module-level functions. That's similar to the way `flags` are used: they're provided to the module-level function (including compile), but not to the SRE_Pattern methods. From tjreedy at udel.edu Sat Mar 19 22:38:30 2011 From: tjreedy at udel.edu (Terry Reedy) Date: Sat, 19 Mar 2011 17:38:30 -0400 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. In-Reply-To: References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> <4D82B9C6.9030709@pearwood.info> <4D840B80.3040806@pearwood.info> <20110319102625.062dbd27@bhuda.mired.org> Message-ID: On 3/19/2011 10:54 AM, Doug Hellmann wrote: > > Drawing a parallel from the way the library code is handled, it seems > best to treat PyMOTW as a separately maintained resource and let > users find it separately. Now, if there was an index equivalent to > PyPI for documentation resources, *that* would complete the analogy. > :-) A Python Documentation Index (PyDI) is a nice idea.
Authors could register, select keywords, and add links (and possibly upload). Users could search and rate as with packages. -- Terry Jan Reedy From ncoghlan at gmail.com Sat Mar 19 22:39:40 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 20 Mar 2011 07:39:40 +1000 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. In-Reply-To: References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> <4D82B9C6.9030709@pearwood.info> Message-ID: On Sun, Mar 20, 2011 at 3:27 AM, Jesse Noller wrote: > I don't agree with the hand waving around broken links, the fact the > Doug wrote a book, endorsements, etc. The fact is, he's written better > docs on many things, and we're doing the community a disservice by not > actively exposing them as supplements to the existing documentation. > > Why is it so hard to simply do the right thing here? Because it's a new idea and a level of integration-without-incorporation that hasn't been considered before. The PSF reps on here (along with everyone else) wouldn't be doing a good job as stewards of the language if valid concerns were glossed over without being given due consideration. We may decide that given the specifics of the situation then including direct links from the 2.7 documentation to the current 2.x specific versions is an appropriate outcome. If Doug no longer wanted to maintain them separately in the future, then they could be incorporated at that point rather than having to do it up front (ideally under a contributor agreement, since relying on the existing CC license would mean that commercial entities like IDE vendors would need to drop it when redistributing the Python documentation set). The key point that I think distinguishes PyMoTW from most other documentation resources is that even though it is also being published as a book, the whole thing is available online under a Creative Commons Attribution-NonCommercial-ShareAlike license. 
Linking to a public resource like that is a *very* different prospect to linking to something that isn't independently redistributable. The source code for it all is also posted on github, so even if Doug were to abandon the project and drop off the face of the planet tomorrow, the repo could be cloned by a new maintainer. PyMOTW also covers each module individually, and links back to the relevant section of the official Python documentation*, something which other online resources may not do. For the reasons given above, I'm also +1 on including PyMOTW links in a "See Also" block at the bottom of each module's documentation. There are specifics at play here (as noted above) that distinguish this from linking to arbitrary resources on the web. One advantage to doing this systematically is that it can be done as part of the documentation generation process, rather than needing to be maintained individually in the source code of the documentation for each module. Cheers, Nick. * I thought http://docs.python.org/2.x had been added as an alias to allow stable external links to 2.x specific documentation, but neither that nor 3.x appear to be currently working. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From jnoller at gmail.com Sat Mar 19 22:52:34 2011 From: jnoller at gmail.com (Jesse Noller) Date: Sat, 19 Mar 2011 17:52:34 -0400 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. In-Reply-To: References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> <4D82B9C6.9030709@pearwood.info> <4D840B80.3040806@pearwood.info> <20110319102625.062dbd27@bhuda.mired.org> Message-ID: On Sat, Mar 19, 2011 at 5:38 PM, Terry Reedy wrote: > On 3/19/2011 10:54 AM, Doug Hellmann wrote: >> > >> Drawing a parallel from the way the library code is handled, it seems >> best to treat PyMOTW as a separately maintained resource and let >> users find it separately.
Now, if there was an index equivalent to >> PyPI for documentation resources, *that* would complete the analogy. >> :-) > > A Python Documentation Index (PyDI) is a nice idea. Authors could register, > select keywords, and add links (and possibly upload). > Users could search and rate as with packages. > > -- > Terry Jan Reedy Might be worth suggesting to the folks at http://readthedocs.org/ whom the PSF just helped sponsor: http://readthedocs.org/ jesse From jnoller at gmail.com Sat Mar 19 22:58:24 2011 From: jnoller at gmail.com (Jesse Noller) Date: Sat, 19 Mar 2011 17:58:24 -0400 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. In-Reply-To: References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> <4D82B9C6.9030709@pearwood.info> Message-ID: On Sat, Mar 19, 2011 at 5:39 PM, Nick Coghlan wrote: > On Sun, Mar 20, 2011 at 3:27 AM, Jesse Noller wrote: >> I don't agree with the hand waving around broken links, the fact the >> Doug wrote a book, endorsements, etc. The fact is, he's written better >> docs on many things, and we're doing the community a disservice by not >> actively exposing them as supplements to the existing documentation. >> >> Why is it so hard to simply do the right thing here? > > Because it's a new idea and a level of > integration-without-incorporation that hasn't been considered before. > The PSF reps on here (along with everyone else) wouldn't be doing a > good job as stewards of the language if valid concerns were glossed > over without being given due consideration. While I understand all of those things: I think that we've systematically become obsessed with the worst-possible case scenarios, "what ifs", and so on. This obsession with getting something perfect that addresses all possible use cases means that normally nothing happens, or something which performs a pale shadow of the original proposal. 
It's one thing to be conservative about, say, language changes - semantics changes have impacts that will far exceed the lifetime of any of the developers discussing the issue. It's something else when we're debating the inclusion of Really Good Resources that accentuate our own. I just worry we're hand-wringing over the perfect solution - and not to throw Voltaire too many bones - The perfect is the enemy of the good. jesse From alexander.belopolsky at gmail.com Sat Mar 19 23:19:25 2011 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Sat, 19 Mar 2011 18:19:25 -0400 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. In-Reply-To: References: Message-ID: On Tue, Mar 15, 2011 at 3:11 PM, Tarek Ziadé wrote: > > As I told Doug during Pycon, I think it would be a good idea to link > his PyMOTW pages to our modules documentation in docs.python.org so > people have more examples etc. > I did not know about PyMOTW until now, so I visited a page on the module that I am well familiar with, the datetime module. On a cursory review, I don't think PyMOTW adds much to an already rather extensive docs.python.org documentation. One section, "Combining Dates and Times" struck me as not very clear. It starts with an example: print 'Now :', datetime.datetime.now() print 'Today :', datetime.datetime.today() .. $ python datetime_datetime.py Now : 2008-03-15 22:58:14.770074 Today : 2008-03-15 22:58:14.779804 .. Why would someone interested in combining dates and times want to know about two subtly different functions that return the current time in a datetime object? The surrounding text does not explain the difference between datetime.now() and datetime.today(). Overall I am -1 on linking the PyMOTW datetime page from the datetime documentation.
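The now()/today() distinction Alexander points at is real but narrow: the two differ only in that now() accepts an optional tzinfo argument (and, per the docs, may supply more precision where the platform allows). A short illustration, using the `timezone` class added in Python 3.2, which postdates this thread:

```python
from datetime import datetime, timezone

# With no arguments, both return the current local time as a *naive*
# datetime, which is why the two lines of PyMOTW output look almost
# identical (they differ only by the microseconds between the calls).
assert datetime.now().tzinfo is None
assert datetime.today().tzinfo is None

# The practical difference: now() takes an optional tzinfo argument
# and can return an aware datetime; today() never does.
aware = datetime.now(timezone.utc)
assert aware.tzinfo is timezone.utc
```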
I am sure there are instances when a PyMOTW is a valuable addition to the "official" module documentation, but I think a decision to link it should be made on a case by case basis by someone who would review both the official and PyMOTW pages and decide that a link to PyMOTW adds to the quality of documentation for a given module. From ncoghlan at gmail.com Sat Mar 19 23:29:22 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 20 Mar 2011 08:29:22 +1000 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. In-Reply-To: References: Message-ID: On Sun, Mar 20, 2011 at 8:19 AM, Alexander Belopolsky wrote: > I am sure there are instances when a PyMOTW is a valuable addition to > the "official" module documentation, but I think a decision to link it > should be made on a case by case basis by someone who would review > both the official and PyMOTW pages and decide that a link to PyMOTW > adds to the quality of documentation for a given module. I think this is another case where "perfect is the enemy of good", though. Perfect: a broader "Python Documentation Index" that collects and curates links to documentation of various Python topics, including the standard library modules Perfect: a case-by-case comparison of the stdlib docs and the PyMOTW docs for each module by a module expert, deciding whether or not to link to the PyMOTW version in a "See Also" link Good: just link them all as part of the module documentation generation process. Some people may understand the stdlib docs better, some may understand PyMOTW better, but providing ready access to both is unlikely to actively *confuse* anyone that wasn't already lost. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com |
Brisbane, Australia From raymond.hettinger at gmail.com Sat Mar 19 23:47:02 2011 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Sat, 19 Mar 2011 15:47:02 -0700 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. In-Reply-To: References: Message-ID: On Mar 19, 2011, at 3:29 PM, Nick Coghlan wrote: > > Good: just link them all as part of the module documentation > generation process. Some people may understand the stdlib docs better, > some may understand PyMOTW better, but providing ready access to both > is unlikely to actively *confuse* anyone that wasn't already lost. It is interesting how this thread continues to press ahead even after Doug himself has said "it seems best to treat PyMOTW as a separately maintained resource and let users find it separately". That seems unequivocal to me. Raymond P.S. I really like Doug's collected articles and find them to be a pleasure to read; however, the articles have an introductory survey quality to them and do not purport to be up-to-date or to be complete (many methods, flags, arguments, classes, are omitted), so I'm not sure how well they would serve as primary documentation. My experience with Whatsnew in 3.1 and 3.2 showed that Python is changing at a rate that is very difficult to keep up with (for example, it entailed reading and researching several thousand lines of Misc/NEWS entries), so keeping the articles up-to-date would not be a trivial task. From ncoghlan at gmail.com Sun Mar 20 00:50:20 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 20 Mar 2011 09:50:20 +1000 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. In-Reply-To: References: Message-ID: On Sun, Mar 20, 2011 at 8:47 AM, Raymond Hettinger wrote: > P.S.
I really like Doug's collected articles and find them > to be a pleasure to read; however, the articles have > an introductory survey quality to them and do not purport > to be up-to-date or to be complete (many methods, flags, > arguments, classes, are omitted), so I'm not sure how well > they would serve as primary documentation. It's precisely their survey quality that makes them a potentially useful supplement to the existing documentation. Jacob Kaplan-Moss gave an excellent talk at PyCon regarding the multiple levels at which documentation needs to work. Most of the module documentation in the Library Reference dives right into API level reference details, which can be impenetrable for users trying to get a feel for an unfamiliar module. Not all of our documentation is like that (plenty of people will have heard me talking up the new logging tutorials Vinay added for 3.2), but quite a lot of it is. Systematically linking to PyMOTW would be about providing new users with a resource that may help them come to grips with a module when using it for the first time, helping to fill the gaps where our own documentation fails to cover this aspect. All respect to Doug, while his opinion as the PyMOTW author is certainly relevant, this question is about whether or not such links would make the Library Reference documentation better for newcomers, so the final decision certainly isn't his (if the final call belongs to anyone other than Guido, it would be Georg). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ianb at colorstudy.com Sun Mar 20 00:58:50 2011 From: ianb at colorstudy.com (Ian Bicking) Date: Sat, 19 Mar 2011 18:58:50 -0500 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc.
In-Reply-To: References: Message-ID: On Sat, Mar 19, 2011 at 5:47 PM, Raymond Hettinger < raymond.hettinger at gmail.com> wrote: > > On Mar 19, 2011, at 3:29 PM, Nick Coghlan wrote: > > > > Good: just link them all as part of the module documentation > > generation process. Some people may understand the stdlib docs better, > > some may understand PyMOTW better, but providing ready access to both > > is unlikely to actively *confuse* anyone that wasn't already lost. > > It is interesting how this thread continues to press ahead > even after Doug himself has said "it seems best to treat > PyMOTW as a separately maintained resource and let users > find it separately". > > The seems unequivocal to me. > Doug's in the awkward position that we're all kind of talking about him ;) If I was in his place I know I'd feel weird about it and kind of want to avoid the discussion; which would be fine, and it's fine if Doug feels like avoiding this discussion. If he's really bothered by it, or feels that inclusion of links would be a problem, okay, but I read his statement as meaning he didn't feel confident championing, or maybe even advocating, the inclusion of links himself. But people aren't advocating for those links on Doug's behalf either, the advocacy is for a perceived benefit to document readers. Ian -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at pearwood.info Sun Mar 20 01:32:40 2011 From: steve at pearwood.info (Steven D'Aprano) Date: Sun, 20 Mar 2011 11:32:40 +1100 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. 
In-Reply-To: References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> <4D82B9C6.9030709@pearwood.info> <4D840B80.3040806@pearwood.info> Message-ID: <4D854B28.7050306@pearwood.info> Ian Bicking wrote: > I guess I'd summarize your point here that you feel that collective > ownership and maintaining of additional docs will lead to better quality > than external and single-author documentation. Not really -- I'm more concerned by the risk and loss of control by partial reliance on external docs. I'm sure the risk is manageable, but it's easy to get caught up in the enthusiasm for a change and not make any provision for managing that risk until after something has gone wrong. [...] >> But there is a real cost as well as benefit: the cost comes as risk. I >> think you have misunderstood my point about who controls the external >> content. Dead links are the least risk, and the only one that can be managed >> automatically. We would be linking to pages that aren't controlled by us. We >> have no real control over whether the pages remain updated, or what content >> goes into those pages, or whether they get filled with advertising, or >> whatever. These are real risks -- even if you trust Doug implicitly, what >> happens if he gets hit by a bus and somebody else takes over his website? >> > > Easy: we change the links! Even if we have to remove them entirely and lose > the content, in the meantime it will have done good. Of course we can change the links, but there will always be a lag between some hypothetical negative change occurring and the links being removed. First we have to notice the change, then we have to reach agreement that it is bad enough to remove the links (which won't necessarily be clear), and only then remove the links. In the meantime, what message are we sending? 
> If it seems like a > concern, maybe we can talk about licensing -- e.g., a nice CC license (I > don't see a problem with non-commercial), with a gentleman's agreement that > we not clone the content unless the author explicitly lets go or becomes > unresponsive. But such licensing is a detail we can consider later IMHO, it > doesn't have any concrete effect now and we could look at it later if we > start seeing a lot of external documentation and use of that documentation. "If" it seems like a concern? We're having this debate because there *is* a concern. I don't know that licensing is actually an issue. If we're just linking to an external site, do we care what the license of that content is? Perhaps we do -- Terry has already raised the issue that he's writing a book too, and would love (but doesn't expect) to have his content linked to the Python docs. Good quality links like that are worth real money. Nothing will poison a community faster than the idea that the organization is playing favourites, that some people are getting their commercial content advertised by python.org while others are excluded. That's a serious can of worms, and I don't think we should just gloss over this risk for the short-term benefit of gaining some nice documentation. In the meantime, I note that on this page: http://www.python.org/doc/ there is a section "Additional Documentation", which includes Richard Gruet's Python Cheat Sheet. I don't see any reason why we shouldn't link to Doug's site from there. If anyone is willing to champion the idea of more extensive linking, then I think it deserves a PEP. -- Steven From jackdied at gmail.com Sun Mar 20 02:30:22 2011 From: jackdied at gmail.com (Jack Diederich) Date: Sat, 19 Mar 2011 21:30:22 -0400 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. 
In-Reply-To: <4D854B28.7050306@pearwood.info> References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> <4D82B9C6.9030709@pearwood.info> <4D840B80.3040806@pearwood.info> <4D854B28.7050306@pearwood.info> Message-ID: On Sun, Mar 20, 2011 at 11:30 AM, Jack Diederich wrote: > Would someone please just commit a doc fix to include the links? No > one who is against it seems to care enough that they would revert > them. While the philosophy of "rough consensus and running code" does hold in general, there's no need to be quite *that* high-handed about it.
The general objections to pervasive linking to an external resource are sound (perceived endorsement, risk of link rot, risk of outdated content). However, I (along with many others) am of the opinion that those concerns are of minimal significance in this case due to the open nature of the licensing on PyMOTW, and especially given the potential gain in allowing *new* Python users to more easily come to grips with Python modules, without needing to ask questions of the internet in general (and search engines in particular). The fact that the author and current maintainer of PyMOTW is an active member of the Python community in other ways certainly helps, but it isn't critical to the question of whether or not such links would improve the documentation. That said, with the BDFL and the current docs maintainer both in favour of the proposal, discussion should really be moving more towards "How?" rather than "Whether or not?". Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
From alexander.belopolsky at gmail.com Sun Mar 20 04:52:56 2011 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Sat, 19 Mar 2011 23:52:56 -0400 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. In-Reply-To: References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> <4D82B9C6.9030709@pearwood.info> <4D840B80.3040806@pearwood.info> <4D854B28.7050306@pearwood.info> Message-ID: On Sat, Mar 19, 2011 at 10:53 PM, Nick Coghlan wrote: > .. > That said, with the BDFL and the current docs maintainer both in > favour of the proposal, discussion should really be moving more > towards "How?" rather than "Whether or not?". How? Instead of "pervasive linking", add a link to the "See also" sections of the modules for which PyMOTW, in the opinion of the editor (meaning the committer who adds the link), is a valuable addition to the official documentation.
Note that this is the same approach as we currently take with respect to source code links. We don't add them pervasively, but only in cases when the source code is deemed useful to the target audience.
From ncoghlan at gmail.com Sun Mar 20 07:33:13 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 20 Mar 2011 16:33:13 +1000 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. In-Reply-To: References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> <4D82B9C6.9030709@pearwood.info> <4D840B80.3040806@pearwood.info> <4D854B28.7050306@pearwood.info> Message-ID: On Sun, Mar 20, 2011 at 1:52 PM, Alexander Belopolsky wrote: > How? Instead of "pervasive linking" add a link to the "See also" > sections of the modules for which PyMOTW in the opinion of the editor > (meaning the committer who adds the link) is a valuable addition to the > official documentation. Note that this is the same approach as we > currently take with respect to source code links. We don't add them > pervasively, but only in cases when the source code is deemed useful > to the target audience. There's a huge difference between linking to source code (which may or may not be edifying when it comes to respecting current Python idioms) by default, and linking to well-written documentation. Having a mechanism to say "please don't automatically link to PyMOTW for this module, I really, really don't like it and think it will actively harm anyone attempting to understand how best to use this API" may make some sense, but can anyone cite a case where PyMOTW is actually *that* wrong? To those who see these links as a bad idea, who are you afraid it will hurt? We have an immediate target audience that we think it will help: new Python users that will read the docs and follow the links there. From those opposing it I've heard objections along the lines of... 1.
Why endorse Doug's take on the standard library over that provided by a multitude of other authors (such as Dive into Python, Python Essential Reference, etc.)? PyMOTW is unique (as far as I am aware) as it is structured the same way as the Library Reference (i.e. with per-module documentation), reasonably comprehensive (although it does leave out some of the more obscure modules, such as binhex) and released under a permissive CC license that allows independent redistribution (albeit not by commercial entities). That's a hugely valuable resource that has been made available for free, and we'd be doing our users a service by linking to it more prominently than just providing a single link in an irregularly maintained and unpublicised list of additional resources [1] that isn't even available through the official documentation on docs.python.org. 2. What about link rot and obsolete material? PyMOTW has an active maintainer in Doug Hellmann, who posts the source as a GitHub project (under the aforementioned permissive license). If these become a problem, either the links can then be dropped from new versions of the documentation, or else the project can be forked by a new maintainer. I would personally hope that if Doug tired of maintaining the project at some point in the future, he would be willing to turn it over to PSF stewardship under the same licensing terms as the rest of the documentation, but that possible scenario isn't an argument against adding the external links for the benefit of users *now*. While I've tried to resist making any argument based on the specifics of *who* Doug is rather than what he's written, people should be aware that we aren't talking about a random person who happened to post some good free information on the internet here: we're talking about the current Communications Officer for the PSF [2].
I can understand his reasons for wanting to maintain personal editorial control over PyMOTW, so linking rather than embedding is the next best option when it comes to serving users' interests. (Note that embedding under the existing CC license would not only be disrespectful of Doug's wishes, it would also cause a real redistribution licensing problem for the many commercial entities that pass the Python documentation along to their users, including Linux vendors, IDE vendors, etc.) Cheers, Nick. [1] http://www.python.org/doc/ (bottom of the page) [2] http://www.python.org/psf/members/ -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
From tjreedy at udel.edu Sun Mar 20 09:16:31 2011 From: tjreedy at udel.edu (Terry Reedy) Date: Sun, 20 Mar 2011 04:16:31 -0400 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. In-Reply-To: References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> <4D82B9C6.9030709@pearwood.info> Message-ID: On 3/19/2011 5:39 PM, Nick Coghlan wrote: > On Sun, Mar 20, 2011 at 3:27 AM, Jesse Noller >> Why is it so hard to simply do the right thing here? First one must decide what is the right thing. Do you agree that adding 2.x examples to 3.x doc is the wrong thing? The initial proposal did not mention the Python version that Doug's work applies to, and indeed http://www.doughellmann.com/PyMOTW/about.html (still) does not either. The only hint on that page is the example showing that 'import PyMOTW' works on 2.6. I had to go to Amazon to discover and report that the book is aimed at (and presumably tested with) 2.7. The initial proposal also did not specify the version of the docs to be augmented. Usually, these days, 'add x to the docs' means to add to the current 3.x development docs and maybe backport. I do not see '2.7' in the subject line above. I believe I was the first to raise these issues.
> Because it's a new idea and a level of > integration-without-incorporation that hasn't been considered before. > The PSF reps on here (along with everyone else) wouldn't be doing a > good job as stewards of the language if valid concerns were glossed > over without being given due consideration. +1 -- Terry Jan Reedy From tjreedy at udel.edu Sun Mar 20 10:23:45 2011 From: tjreedy at udel.edu (Terry Reedy) Date: Sun, 20 Mar 2011 05:23:45 -0400 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. In-Reply-To: References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> <4D82B9C6.9030709@pearwood.info> <4D840B80.3040806@pearwood.info> <4D854B28.7050306@pearwood.info> Message-ID: On 3/20/2011 2:33 AM, Nick Coghlan wrote: > To those who see these links as a bad idea, who are you afraid it will > hurt? We have an immediate target audience that we think it will help: > new Python users that will read the docs and follow the links there. I believe that sending new Python 3 users to Python 2 examples can confuse them and burden them with obsolete (for them) concerns and in that sense hurt them. I have the impression from python-list that a substantial proportion of new users now begin with Python 3, and I expect that proportion to grow substantially in the next year or two. Others start with 2.6, whose docs are frozen, for complete external library access. ... [snip] > 2. What about link rot and obsolete material? > > PyMOTW has an active maintainer .. who has recently updated them to 2.7 and *might* do a 3.x version in the future. But they are currently obsolete relative to 3.2. > these become a problem, either the links can then be dropped from new > versions of the documentation, Since there will be no new 2.x versions I find this confusing. 
-- Terry Jan Reedy
From ncoghlan at gmail.com Sun Mar 20 10:51:21 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 20 Mar 2011 19:51:21 +1000 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. In-Reply-To: References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> <4D82B9C6.9030709@pearwood.info> <4D840B80.3040806@pearwood.info> <4D854B28.7050306@pearwood.info> Message-ID: On Sun, Mar 20, 2011 at 7:23 PM, Terry Reedy wrote: >> these become a problem, either the links can then be dropped from new >> versions of the documentation, > > Since there will be no new 2.x versions I find this confusing. Everything I have said in this thread is based on the (to me, obvious) presumption that the current PyMOTW would be linked only from the 2.7 documentation (and there *will* be plenty of maintenance releases that will pick up that change). Once a Python 3 version of PyMOTW is available (which shouldn't take *too* long, given the executable nature of the examples), then similar links could be added to the 3.x documentation. Another idea that occurred to me this evening to help mitigate any concerns regarding stale links to an external site in the bundled documentation (e.g. source builds, CHM files) is to pipe the PyMOTW references through a redirector on python.org. Those links could then remain stable even if the PyMOTW files are moved to a new domain at some point in the future. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
From dsdale24 at gmail.com Sun Mar 20 15:06:12 2011 From: dsdale24 at gmail.com (Darren Dale) Date: Sun, 20 Mar 2011 10:06:12 -0400 Subject: [Python-ideas] Would it be possible to define abstract read/write properties with decorators? In-Reply-To: References: Message-ID: On Sat, Mar 19, 2011 at 12:24 PM, Guido van Rossum wrote: > Thanks much for your contribution!
In order to get it reviewed and > submitted, can you please create a bug for this issue (mention the > python-ideas thread), upload your patch there, and perhaps ping > python-dev? I did so, at http://bugs.python.org/issue11610 . The first reviewer directed me to discussion concerning the implementation of abstract classmethods at http://bugs.python.org/issue5867 . In that case, you objected to extending the implementation of classmethod to allow assigning to an __isabstractmethod__ attribute, which would have allowed the same syntax I suggested, combining @abstractmethod and @classmethod. The patch I made (which doesn't work yet) also attempts to extend the implementation of a builtin. Do you still object to this approach? There are two somewhat related issues: * api: The first reviewer objects to using a single decorator for methods (regular, class, and static) but a combination of two decorators for properties. * implementation: The current abc.abstractproperty has some issues beyond not supporting the decorator syntax, which are due to the fact that properties are composite objects, and it is the methods which compose the property that should imbue "abstractness". - To provide an implementation for an abstract property, one currently has to completely respecify a concrete property and rebind it. If an ABC defines an abstract read/write property and a subclass mistakenly redefines it as a read-only property, the ABC mechanisms will not catch the error. - I can imagine cases where an abstract base class may define a simple concrete getter but an abstract setter. This is not possible with the current abstractproperty. - An abstractproperty cannot be made concrete through the use of the decorators: class D(MyABC): @MyABC.my_abstract_property.setter def my_abstract_property(self): ... because @MyABC.my_abstract_property.setter returns another instance of abstractproperty. I think the general approach I suggested resolves all of these issues. 
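For readers following along, the decorator-stacking usage Darren is aiming at can be sketched as below. This is only an illustration (the class and attribute names are hypothetical, not taken from the patch); it shows the stacked `@property`/`@abstractmethod` spelling that later Python versions (3.3+) came to support once `property` gained `__isabstractmethod__` handling, rather than the exact implementation under discussion.

```python
from abc import ABC, abstractmethod

class Component(ABC):
    # Hypothetical example class: an abstract read/write property.
    # Both the getter and the setter are marked abstract, so the
    # resulting property object reports itself as abstract.
    @property
    @abstractmethod
    def voltage(self):
        """Abstract getter."""

    @voltage.setter
    @abstractmethod
    def voltage(self, value):
        """Abstract setter."""

class Resistor(Component):
    # A concrete subclass rebinds the whole property. Note that ABCMeta
    # only checks that the *name* is overridden by a non-abstract object,
    # so a mistaken read-only override would still slip through -- one of
    # the limitations Darren describes.
    def __init__(self, voltage=0.0):
        self._voltage = voltage

    @property
    def voltage(self):
        return self._voltage

    @voltage.setter
    def voltage(self, value):
        self._voltage = value
```

With this sketch, instantiating `Component` itself (or a subclass that never overrides `voltage`) raises `TypeError`, while `Resistor` behaves as an ordinary read/write property.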
If you have reservations about extending builtins, an alternative might be to improve the definition of abstractproperty so it supports the features and addresses the issues we have been discussing, and has decorators implemented such that once all of the abstract methods have been replaced with concrete ones, they return an instance of the built-in property rather than abc.abstractproperty. Does this sound like it could be an acceptable alternative? Darren From guido at python.org Sun Mar 20 16:10:46 2011 From: guido at python.org (Guido van Rossum) Date: Sun, 20 Mar 2011 08:10:46 -0700 Subject: [Python-ideas] A user story concerning things knowing their own names In-Reply-To: <4D8411CF.9060209@canterbury.ac.nz> References: <4D8082C1.30400@canterbury.ac.nz> <4D8408DC.4020103@canterbury.ac.nz> <4D8411CF.9060209@canterbury.ac.nz> Message-ID: On Fri, Mar 18, 2011 at 7:15 PM, Greg Ewing wrote: > Actually I don't want to reuse values, that was someone > else. For my use case it's fine to create a new descriptor > for each use. So, apologies if this has been brought up or rejected before, wouldn't a class decorator work for you? That is totally capable of calling x.__addtoclass__() (or whatever you want to call it -- it's now between the decorator and overridable_property) for each class attribute (really: for each value in the class __dict__) that has it, and doesn't seem to have the problems with combining unrelated metaclasses that you brought up: unrelated class decorators combine just fine (especially ones like this that mutate the class but still return the original class object). -- --Guido van Rossum (python.org/~guido) From alexander.belopolsky at gmail.com Sun Mar 20 16:42:40 2011 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Sun, 20 Mar 2011 11:42:40 -0400 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. 
In-Reply-To: References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> <4D82B9C6.9030709@pearwood.info> <4D840B80.3040806@pearwood.info> <4D854B28.7050306@pearwood.info> Message-ID: On Sun, Mar 20, 2011 at 2:33 AM, Nick Coghlan wrote: > .. but can anyone cite a case where PyMOTW is > actually *that* wrong? Didn't I do it in my first reply to this thread? """ .. I visited a page on the module that I am well familiar with, the datetime module. On a cursory review, I don't think PyMOTW adds much to an already rather extensive docs.python.org documentation. One section, "Combining Dates and Times", struck me as not very clear. It starts with an example: print 'Now :', datetime.datetime.now() print 'Today :', datetime.datetime.today() .. $ python datetime_datetime.py Now : 2008-03-15 22:58:14.770074 Today : 2008-03-15 22:58:14.779804 .. Why would someone interested in combining dates and times want to know about two subtly different functions that return the current time in a datetime object? The surrounding text does not explain the difference between datetime.now() and datetime.today(). Overall I am -1 on linking the PyMOTW datetime page from the datetime documentation. """ Note that my "-1" is limited to linking from the datetime module documentation. Having not read any other PyMOTW pages, I have no basis to form an opinion on whether links to those other pages will improve reader experience. The datetime module may be unique because it actually suffers from too much documentation rather than the lack of it. The official datetime documentation and its PyMOTW page are not complementary. They cover the same material in different styles. Some users are better off reading just PyMOTW, others may prefer the official doc. I don't really see much incremental value from reading both. Note that it looks like the PyMOTW author intended his pages to be self-contained rather than a complement to the official doc.
There is no link from the PyMOTW datetime page to http://docs.python.org/library/datetime.html. PS: The PyMOTW datetime page is at http://blog.doughellmann.com/2008/03/pymotw-datetime.html. PPS: I don't think the PyMOTW datetime page is current for 2.7. This page seems to be 3 years old and 2.7 has seen a few additions to the datetime module.
From alexander.belopolsky at gmail.com Sun Mar 20 16:52:05 2011 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Sun, 20 Mar 2011 11:52:05 -0400 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. In-Reply-To: References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> <4D82B9C6.9030709@pearwood.info> <4D840B80.3040806@pearwood.info> <4D854B28.7050306@pearwood.info> Message-ID: On Sun, Mar 20, 2011 at 5:51 AM, Nick Coghlan wrote: .. > Once a Python 3 version of PyMOTW is available (which shouldn't take > *too* long, given the executable nature of the examples), then similar > links could be added to the 3.x documentation. Fixing Python 2.x examples to run under 3.x is unlikely to produce quality documentation. In many instances Python 3.x provides better idioms than those available in 2.x.
From alexander.belopolsky at gmail.com Sun Mar 20 16:53:54 2011 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Sun, 20 Mar 2011 11:53:54 -0400 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. In-Reply-To: References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> <4D82B9C6.9030709@pearwood.info> <4D840B80.3040806@pearwood.info> <4D854B28.7050306@pearwood.info> Message-ID: On Sun, Mar 20, 2011 at 11:42 AM, Alexander Belopolsky wrote: .. > PPS: I don't think the PyMOTW datetime page is current for 2.7. This > page seems to be 3 years old and 2.7 has seen a few additions to the > datetime module. > Please strike this comment.
I did not realize that the module names in the headers of PyMOTW pages are hyperlinked to the official docs. From doug.hellmann at gmail.com Sun Mar 20 16:58:15 2011 From: doug.hellmann at gmail.com (Doug Hellmann) Date: Sun, 20 Mar 2011 11:58:15 -0400 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. In-Reply-To: References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> <4D82B9C6.9030709@pearwood.info> <4D840B80.3040806@pearwood.info> <4D854B28.7050306@pearwood.info> Message-ID: <7B7DFF08-4CC9-44F3-A250-753EE19459F4@gmail.com> On Mar 20, 2011, at 11:42 AM, Alexander Belopolsky wrote: > On Sun, Mar 20, 2011 at 2:33 AM, Nick Coghlan wrote: >> .. but can anyone cite a case where PyMOTW is >> actually *that* wrong? > > Didn't I do it in my first reply to this thread? > > """ > .. I visited a page on the > module that I am well familiar with, the datetime module. On a > cursory review, I don't think PyMOTW adds much to an already rather > extensive docs.python.org documentation. One section, "Combining > Dates and Times" struck me as not very clear. It starts with an > example: > > print 'Now :', datetime.datetime.now() > print 'Today :', datetime.datetime.today() > .. > > $ python datetime_datetime.py > Now : 2008-03-15 22:58:14.770074 > Today : 2008-03-15 22:58:14.779804 > .. > > Why would someone interested in combining dates and time would like to > know two subtly different functions that return current time in a > datetime object? The surrounding text does not explain the difference > between datetime.now() and datetime.today(). > > Overall I am -1 on linking PyMOTW datetime page from datetime documentation. > """ > > Note that my "-1" is limited to linking from the datetime module > documentation. Having not read any other PyMOTW pages, I have no > basis to form an opinion on whether links to those other pages will > improve reader experience. 
> > The datetime module may be unique because it actually suffers from too > much documentation rather than the lack of it. The official datetime > documentation and its PyMOTW page are not complimentary. They cover > the same material in different styles. Some users are better off > reading just PyMOTW, others may prefer the official doc. I don't > really see much incremental value from reading both. > > Note that it looks like PyMOTW author intended his pages to be > self-contained rather than a compliment to the official doc. There is > no link from PyMOTW datetime page to > http://docs.python.org/library/datetime.html. > > PS: The PyMOTW datetime page is at > http://blog.doughellmann.com/2008/03/pymotw-datetime.html. > > PPS: I don't think the PyMOTW datetime page is current for 2.7. This > page seems to be 3 years old and 2.7 have seen a few additions to the > datetime module. The canonical version of that page is http://www.doughellmann.com/PyMOTW/datetime/. You will find a link to the stdlib docs, as well as some other useful date-related docs and tools, in the "See also" section at the bottom of that page. If you are going to review the content page by page, please look at the versions of the articles under http://www.doughellmann.com/PyMOTW/ Doug From alexander.belopolsky at gmail.com Sun Mar 20 17:07:49 2011 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Sun, 20 Mar 2011 12:07:49 -0400 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. In-Reply-To: <7B7DFF08-4CC9-44F3-A250-753EE19459F4@gmail.com> References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> <4D82B9C6.9030709@pearwood.info> <4D840B80.3040806@pearwood.info> <4D854B28.7050306@pearwood.info> <7B7DFF08-4CC9-44F3-A250-753EE19459F4@gmail.com> Message-ID: On Sun, Mar 20, 2011 at 11:58 AM, Doug Hellmann wrote: >.. > > The canonical version of that page is http://www.doughellmann.com/PyMOTW/datetime/. 
> > You will find a link to the stdlib docs, as well as some other useful date-related docs and tools, in the "See also" section at the bottom of that page. > > If you are going to review the content page by page, please look at the versions of the articles under http://www.doughellmann.com/PyMOTW/ Thanks, I've realized that I missed the back-link already, but now I also see that the "canonical version" is more up to date. Let me review that page more thoroughly.
From alexander.belopolsky at gmail.com Sun Mar 20 19:18:42 2011 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Sun, 20 Mar 2011 14:18:42 -0400 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. In-Reply-To: References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> <4D82B9C6.9030709@pearwood.info> <4D840B80.3040806@pearwood.info> <4D854B28.7050306@pearwood.info> <7B7DFF08-4CC9-44F3-A250-753EE19459F4@gmail.com> Message-ID: On Sun, Mar 20, 2011 at 12:07 PM, Alexander Belopolsky wrote: .. > Let me review that page more thoroughly. I'll focus on criticism even though overall I find the PyMOTW:datetime page a good introduction to the datetime module. I would recommend this page to beginners, but probably *before* the official docs, so a "see also" link is probably not the right way to link PyMOTW pages. Maybe a link to PyMOTW should be added to the main docs.python.org page. Now, what I don't like about PyMOTW:datetime. 1. Order of presentation: time, date, timedelta, datetime, tzinfo. For an introductory article, I would start with date, then do datetime and timedelta. The time objects are not that useful, and aware time objects are rather exotic since the time() method of datetime strips tzinfo. The first example exposes the reader to the tzinfo attribute of time objects without any explanation. 2.
In the first paragraph of the "Times" section: "the default of 0 is unlikely to be what you want" - this is somewhat confusing given that the following example is using the default value for microseconds and a 01:02:03 time, which is equally unlikely to be what someone wants in a typical application. It looks like the "unlikely" comment was about time() with no arguments and a default of 0 for all components. It would be better to use a "likely" example, preferably in the PM range so that the reader is gently introduced to the 24-hour system. For example, 5pm as a typical office closing hour would be good, or say 5:45 pm as a realistic train departure time. Any of these would be better than a purely artificial time(1, 2, 3). 3. The min/max/resolution example. I don't think I ever used these attributes of the time class. Resolution is occasionally useful, but mostly for timedelta. Similarly, min and max of the other types reflect some non-obvious design choice, but for time, it is just the 24 hours in a day range. 4. The error passing a float to the microsecond argument example. Why are microseconds singled out? A similar error would result from passing a float for hours, minutes or seconds. An example of how to generate and catch a TypeError is not very helpful in a document about the datetime module. I would much rather see an example of how to convert fractional hours, minutes or seconds to datetime objects using the timedelta constructor. (This is another case where dealing with the time type is awkward because it does not support arithmetic. Another reason not to pick it as the first type to cover.) 5. The now() and today() example that I criticized in my earlier post is still present in the "canonical" version. The text preceding that example says: "there are several convenient class methods to make creating datetime instances from other common values." However, this does not match the examples which immediately follow.
I would expect to see datetime.fromtimestamp() in that section, but for some reason it is covered in the "Dates" section instead. 6. In the revised "Time Zones" section, the author toned down his criticism of the approach taken by module developers. In the 2008 version, he called the tzinfo situation "ironic." Still, this section does not provide any useful examples. At the very least, it should give an example of passing tzinfo to datetime.now() to obtain the current time in an aware datetime object. For 3.2, such an example could use the new timezone type, but for 2.x it would be appropriate to provide an example using either pytz or a sample tzinfo implementation. The datetime module is a difficult area to cover. As I said before, it is likely that the situation with the other modules is different. If maintainers of other modules think that their documentation will benefit from a PyMOTW link, I have no objection to that. I am still -1 on adding a PyMOTW:datetime link to the datetime module reference manual. I hope Doug will find my review of his datetime page helpful. I think PyMOTW will similarly benefit from other core developers' reviews as they consider linking PyMOTW to their documentation. However, blindly linking all pages simply because some people find Doug's work overall a better guide to the stdlib than the official pages will only confuse readers. The reference manual and PyMOTW are two different works targeting different audiences. PyMOTW is more like a tutorial, trying to concisely introduce the main features of each module without a claim to be comprehensive. In the reference manual, on the other hand, we try to be complete in feature coverage and economical in illustrative examples. It is reasonable to expect users to read PyMOTW articles in their entirety, while reading entire sections of the reference manual would mean a very boring weekend. This observation makes it hard to find a good place for a PyMOTW link.
Should it go in the beginning or the end of the reference manual page? In either case, it is unlikely to be noticed by a typical user who goes directly to the class or method documentation through some kind of search. From ncoghlan at gmail.com Sun Mar 20 22:11:05 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 21 Mar 2011 07:11:05 +1000 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. In-Reply-To: References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> <4D82B9C6.9030709@pearwood.info> <4D840B80.3040806@pearwood.info> <4D854B28.7050306@pearwood.info> <7B7DFF08-4CC9-44F3-A250-753EE19459F4@gmail.com> Message-ID: On Mon, Mar 21, 2011 at 4:18 AM, Alexander Belopolsky wrote: > The reference manual and PyMOTW are two different works > targeting different audiences. PyMOTW is more like a tutorial, trying > to concisely introduce main features of each module without a claim to > be comprehensive. In the reference manual on the other hand we try > to be complete in feature coverage and economical in illustrative > examples. You just summarised *exactly* why a bunch of us want to include it in the official documentation (by reference, anyway): so people can read PyMOTW as an introduction, and use the official docs as a reference. A See Also at the bottom of individual module pages, plus an "External Resources" link on the front page of the docs (as you suggested) would cover that nicely (especially with a pydotorg redirector in place to guarantee link stability). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From anikom15 at gmail.com Sun Mar 20 23:22:41 2011 From: anikom15 at gmail.com (Westley Martínez) Date: Sun, 20 Mar 2011 15:22:41 -0700 Subject: [Python-ideas] Add links in manual to test_modules.
In-Reply-To: References: <20110319041659.GF2596@kevin> <4D8450EC.10209@pearwood.info> Message-ID: <1300659761.2535.1.camel@localhost.localdomain> On Sat, 2011-03-19 at 09:22 -0700, Guido van Rossum wrote: > On Fri, Mar 18, 2011 at 11:45 PM, Steven D'Aprano wrote: > > Nick Coghlan wrote: > >> > >> Ick, no. > >> > >> We do all sorts of dodgy stuff in our test suite to stress > >> implementations, probe obscure corner cases, double up on checks based > >> on where and when bugs happened to be reported. Large parts of it are > >> written to make the tests easier to write, not because they reflect > >> any kind of idiomatic code, or good ways of doing things in a real > >> application. > > > > But surely a test suite counts as a real application? It's likely to be > > bigger than the "actual" application or library, it still needs to be > > maintained, and is more likely to have bugs (on account of there being no > > test suite for the tests). > > > > Speaking for myself, I find code reuse and design of my test suites to be > > one of the harder parts of writing code. Perhaps I'd learn something from > > the Python tests, even if only "everyone has trouble writing good > > unit-tests" *wink* > > > > As I see it, the main benefit of Terry's suggestion is that it may encourage > > developers to write new tests for the standard library, or to refactor the > > existing tests. +0.5 from me. > > I'm with Nick. Tests (at least the ones we have for the standard > library) are rarely any good as example code for the modules being > tested. They may be great if you want to learn to write tests or if > you want to contribute to the stdlib, but they are easy enough to > find. Linking them from the docs is sending people to a body of code > that most people should never peruse. > > The one exception is that the tests can show language/library lawyers > how something is supposed to behave in more detail than docs, without > having to actually read the source. 
But again that's pretty advanced > and the people interested in that stuff know where to go. > I agree with Nick and Guido. Tests are essentially source code, but instead of providing functionality they exercise functionality, and like any other source code they may not be written ideally. From jnoller at gmail.com Sun Mar 20 23:19:42 2011 From: jnoller at gmail.com (Jesse Noller) Date: Sun, 20 Mar 2011 18:19:42 -0400 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. In-Reply-To: References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> <4D82B9C6.9030709@pearwood.info> <4D840B80.3040806@pearwood.info> <4D854B28.7050306@pearwood.info> <7B7DFF08-4CC9-44F3-A250-753EE19459F4@gmail.com> Message-ID: On Sun, Mar 20, 2011 at 5:11 PM, Nick Coghlan wrote: > On Mon, Mar 21, 2011 at 4:18 AM, Alexander Belopolsky > wrote: >> The reference manual and PyMOTW are two different works >> targeting different audiences. PyMOTW is more like a tutorial, trying >> to concisely introduce main features of each module without a claim to >> be comprehensive. In the reference manual on the other hand we try >> to be complete in feature coverage and economical in illustrative >> examples. > > You just summarised *exactly* why a bunch of us want to include it in > the official documentation (by reference, anyway): so people can read > PyMOTW as an introduction, and use the official docs as a reference. > > A See Also at the bottom of individual module pages, plus an "External > Resources" link on the front page of the docs (as you suggested) would > cover that nicely (especially with a pydotorg redirector in place to > guarantee link stability). > > Cheers, > Nick. What Nick said. You summarized why we want this done in the first place - the narrative/tutorial style works really, really well for a lot of people, non-programmers and beginners. They're not API docs, and they're not meant to be. For the record:
When I'm dealing with datetime, or logging, or other "bigger" modules - I tend to go to Doug's site first to see if I can find a quick bit before I go through the official docs. jesse From jnoller at gmail.com Mon Mar 21 00:05:20 2011 From: jnoller at gmail.com (Jesse Noller) Date: Sun, 20 Mar 2011 19:05:20 -0400 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. In-Reply-To: References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> <4D82B9C6.9030709@pearwood.info> <4D840B80.3040806@pearwood.info> <4D854B28.7050306@pearwood.info> <7B7DFF08-4CC9-44F3-A250-753EE19459F4@gmail.com> Message-ID: On Sun, Mar 20, 2011 at 2:18 PM, Alexander Belopolsky wrote: > On Sun, Mar 20, 2011 at 12:07 PM, Alexander Belopolsky > wrote: > .. >> Let me review that page more thoroughly. > > I'll focus on criticism even though, overall, I find the > PyMOTW:datetime page a good introduction to the datetime module. I > would recommend this page to beginners, but probably *before* the > official docs, so a "see also" link is probably not the right way to > link PyMOTW pages. Maybe a link to PyMOTW should be added to the > main docs.python.org page. > > Now, what I don't like about PyMOTW:datetime. > > 1. Order of presentation: time, date, timedelta, datetime, tzinfo. > For an introductory article, I would start with date, then do datetime > and timedelta. The time objects are not that useful and aware time > objects are rather exotic since the time method of datetime strips tzinfo. > The first example exposes the reader to the tzinfo attribute of time > objects without any explanation. > > 2. In the first paragraph of the "Times" section: "the default of 0 is > unlikely to be what you want" - this is somewhat confusing given that > the following example is using the default value for microseconds and > 01:02:03 time which is equally unlikely to be what someone wants in a > typical application.
It looks like the "unlikely" comment was about > time() with no arguments and default of 0 for all components. It > would be better to use a "likely" example, preferably in the PM range > so that the reader is gently introduced to the 24-hour system. For > example, 5pm as a typical office closing hour would be good, or say > 5:45 pm as a realistic train departure time. Any of these would be > better than a purely artificial time(1, 2, 3). > > 3. The min/max/resolution example. I don't think I ever used these > attributes of the time class. Resolution is occasionally useful, but > mostly for timedelta. Similarly, min and max of the other types > reflect some non-obvious design choice, but for time, it is just the > 24 hours in a day range. > > 4. The example of passing a float to the microsecond argument. Why are > microseconds singled out? A similar error would result from passing a > float for hours, minutes or seconds. An example of how to generate > and catch a TypeError is not very helpful in a document about the > datetime module. I would much rather see an example of how to convert > fractional hours, minutes or seconds to datetime objects using the > timedelta constructor. (This is another case where dealing with the time > type is awkward because it does not support arithmetic. Another > reason not to pick it as the first type to cover.) > > 5. The now() and today() example that I criticized in my earlier post > is still present in the "canonical" version. The text preceding that > example says: "there are several convenient class methods to make > creating datetime instances from other common values." However, this > does not match the examples which immediately follow. I would expect > to see datetime.fromtimestamp() in that section, but for some reason > it is covered in the "Dates" section instead. > > 6. In the revised "Time Zones" section, the author toned down his > criticism of the approach taken by module developers.
In the 2008 > version, he called the tzinfo situation "ironic." Still, this section > does not provide any useful examples. At the very least, it should > give an example of passing tzinfo to datetime.now() to obtain the current > time in an aware datetime object. For 3.2, such an example could use the > new timezone type, but for 2.x it would be appropriate to provide an > example using either pytz or a sample tzinfo implementation. > > The datetime module is a difficult area to cover. As I said before, > it is likely that the situation with the other modules is different. > If maintainers of other modules think that their documentation will > benefit from a PyMOTW link, I have no objection to that. I am still -1 > on adding a PyMOTW:datetime link to the datetime module reference > manual. > > I hope Doug will find my review of his datetime page helpful. I think > PyMOTW will similarly benefit from other core developers' reviews as > they consider linking PyMOTW to their documentation. However, blindly > linking all pages simply because some people find Doug's work overall > a better guide to the stdlib than the official pages will only confuse > readers. The reference manual and PyMOTW are two different works > targeting different audiences. PyMOTW is more like a tutorial, trying > to concisely introduce main features of each module without a claim to > be comprehensive. In the reference manual, on the other hand, we try > to be complete in feature coverage and economical in illustrative > examples. > > It is reasonable to expect users to read PyMOTW articles in their > entirety, while reading entire sections of the reference manual would > mean a very boring weekend. This observation makes it hard to find a > good place for a PyMOTW link. Should it go in the beginning or the end of > the reference manual page? In either case, it is unlikely to be > noticed by a typical user who goes directly to the class or method > documentation through some kind of search.
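For reference, the aware-datetime example suggested in point 6 above could be sketched as follows under Python 3.2+ (a minimal illustration using the then-new timezone type; the fixed-offset zone is purely hypothetical, not something from the reviewed page):

```python
from datetime import datetime, timedelta, timezone

# Passing tzinfo to datetime.now() yields an aware datetime
# object instead of a naive one.
now_utc = datetime.now(timezone.utc)
print(now_utc.tzinfo)  # UTC

# The same works with a fixed-offset zone (illustrative offset):
est = timezone(timedelta(hours=-5), "EST")
now_est = datetime.now(est)
print(now_est.utcoffset() == timedelta(hours=-5))  # True
```

On 2.x, where the timezone class does not exist, the equivalent would use pytz or a hand-written tzinfo subclass, as the review suggests.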
Let me flip this around, Alexander: If you completely rewrite the datetime module's documentation - since I disagree with your non-narrative, non-example-driven documentation and the order in which you present various things, I *might* consider linking to it, even if *it might* be a useful resource to introductory developers. I thought adding things to the docs (one of the reasons I asked we give Doug commit rights) might be a simple, easy or lower-argument task. I see now I was wrong - almost perversely so. jesse From bkjones at gmail.com Mon Mar 21 02:18:06 2011 From: bkjones at gmail.com (Brian Jones) Date: Sun, 20 Mar 2011 21:18:06 -0400 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. In-Reply-To: References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> <4D82B9C6.9030709@pearwood.info> <4D840B80.3040806@pearwood.info> <4D854B28.7050306@pearwood.info> <7B7DFF08-4CC9-44F3-A250-753EE19459F4@gmail.com> Message-ID: On Sun, Mar 20, 2011 at 2:18 PM, Alexander Belopolsky < alexander.belopolsky at gmail.com> wrote: > On Sun, Mar 20, 2011 at 12:07 PM, Alexander Belopolsky > wrote: > .. > > Let me review that page more thoroughly. > > I'll focus on criticism even though, overall, I find the > PyMOTW:datetime page a good introduction to the datetime module. I > would recommend this page to beginners, but probably *before* the > official docs, so a "see also" link is probably not the right way to > link PyMOTW pages. Maybe a link to PyMOTW should be added to the > main docs.python.org page. > > Now, what I don't like about PyMOTW:datetime. > > 1. Order of presentation: time, date, timedelta, datetime, tzinfo. > For an introductory article, I would start with date, then do datetime > and timedelta. The time objects are not that useful and aware time > objects are rather exotic since the time method of datetime strips tzinfo.
> The first example exposes the reader to the tzinfo attribute of time > objects without any explanation. > This is all just so ridiculous. If we're so sensitive to some (btw, arbitrary and subjective in this case) ordering, then why is the official stdlib documentation *not* in this order? Why does the datetime module documentation not cover the actual class for which it is named until after things like timedelta? Why does the document start out by covering constants, follow with available types, then shove in a few completely random miscellaneous statements, and dive into the timedelta object? Is this *really* the best way to introduce the datetime module to someone new to the language? How does the standard documentation hold up to the criteria you're using to critique the PyMOTW documentation? Why is there so much energy put forth to squash a good idea and almost none toward doing the actual work to improve what's already there? If the standard documentation was really that awesome, this discussion may never have come up. Let's think about the bar we've set in the standard library docs, and consider how PyMOTW contributes to raising that bar. The standard lib documentation is *not* all that awesome, *especially* for newcomers, in large part because large swaths of it are probably autogenerated direct from the source code. It therefore lacks almost any notion of human tone or personality, is completely unopinionated, dry, and in plenty of cases fails to document a given module completely. Where it does *document* completely, it doesn't necessarily provide examples illustrating the complete set of functionality provided by the module. Neither does PyMOTW, but the point is that they're complementary resources. PyMOTW provides a "feel" for a module. The stdlib docs are really designed to be a reference. If they *weren't* designed to strictly be a reference, then they need more work than I thought. Anyway, this critique is completely orthogonal to the issue at hand. 
What I think we're after here is: 1. Adding value over and above the standard library documentation 2. Targeting an audience (newbies or new-to-Python) that is just about completely ignored by the standard library documentation. Do the PyMOTW documents, on the whole, add value over and above the standard library documentation? I have yet to hear any compelling argument that they don't. Doug's existing 2.7-based work should be linked in the Python 2.7 documentation. This is not to say that this should happen to the exclusion of any other existing or future material by other authors. We already link to other external material, so it's not like there's no precedent for this. There's no precedent for the pervasiveness of the links being proposed, but that's because there's no precedent for a single work that so closely mirrors and deliberately targets full coverage of every module in the standard library. Does the proposal represent perfection? Absolutely not. Perfection would involve an overhaul of the existing documentation, which in plenty of places sucks no matter what your experience with Python or programming in general. But even if that were to occur, I think Doug's work would still add value. I move to stop playing Monday morning quarterback with Doug's work. There's no work that couldn't be better based on anyone's subjective idea of what "better" means. I've been in the position in the past of editing Doug's work. I'm currently reviewing Doug's book. I've been a long-time reader and user of both the standard library docs and PyMOTW. I'm working on a book for O'Reilly *right now* that will compete *directly* with Doug's book. I have the capability and, some might speculate, a motive to rip Doug's work apart. Doing so, in my opinion, has no merit, hinders efforts to improve the overall experience of those new to the language, and attempts to hide a perfectly worthy collection of work from those in need. 
Further, when these people come to stackoverflow and say that they have a question not answered by the standard docs, we're all going to point them to PyMOTW anyway. Doug's work is solid. Not perfect -- solid. It approaches topics in a reasonable way, provides a bit more humanistic tone and personality, covers the topics accurately, and does so for the entire standard library. If I weren't knee-deep in a book project of my own, I'd be happy to make this task my first commit ever to the Python documentation tree. If this thread continues until July, I just might be able to do that ;-) brian > 2. In the first paragraph of the "Times" section: "the default of 0 is > unlikely to be what you want" - this is somewhat confusing given that > the following example is using the default value for microseconds and > 01:02:03 time which is equally unlikely to be what someone wants in a > typical application. It looks like the "unlikely" comment was about > time() with no arguments and default of 0 for all components. It > would be better to use a "likely" example, preferably in the PM range > so that the reader is gently introduced to the 24-hour system. For > example, 5pm as a typical office closing hour would be good, or say > 5:45 pm as a realistic train departure time. Any of these would be > better than a purely artificial time(1, 2, 3). > 3. The min/max/resolution example. I don't think I ever used these > attributes of the time class. Resolution is occasionally useful, but > mostly for timedelta. Similarly, min and max of the other types > reflect some non-obvious design choice, but for time, it is just the > 24 hours in a day range. > > 4. The example of passing a float to the microsecond argument. Why are > microseconds singled out? A similar error would result from passing a > float for hours, minutes or seconds. An example of how to generate > and catch a TypeError is not very helpful in a document about the > datetime module.
I would much rather see an example of how to convert > fractional hours, minutes or seconds to datetime objects using the > timedelta constructor. (This is another case where dealing with the time > type is awkward because it does not support arithmetic. Another > reason not to pick it as the first type to cover.) > > 5. The now() and today() example that I criticized in my earlier post > is still present in the "canonical" version. The text preceding that > example says: "there are several convenient class methods to make > creating datetime instances from other common values." However, this > does not match the examples which immediately follow. I would expect > to see datetime.fromtimestamp() in that section, but for some reason > it is covered in the "Dates" section instead. > > 6. In the revised "Time Zones" section, the author toned down his > criticism of the approach taken by module developers. In the 2008 > version, he called the tzinfo situation "ironic." Still, this section > does not provide any useful examples. At the very least, it should > give an example of passing tzinfo to datetime.now() to obtain the current > time in an aware datetime object. For 3.2, such an example could use the > new timezone type, but for 2.x it would be appropriate to provide an > example using either pytz or a sample tzinfo implementation. > > The datetime module is a difficult area to cover. As I said before, > it is likely that the situation with the other modules is different. > If maintainers of other modules think that their documentation will > benefit from a PyMOTW link, I have no objection to that. I am still -1 > on adding a PyMOTW:datetime link to the datetime module reference > manual. > > I hope Doug will find my review of his datetime page helpful. I think > PyMOTW will similarly benefit from other core developers' reviews as > they consider linking PyMOTW to their documentation.
However, blindly > linking all pages simply because some people find Doug's work overall > a better guide to the stdlib than the official pages will only confuse > readers. The reference manual and PyMOTW are two different works > targeting different audiences. PyMOTW is more like a tutorial, trying > to concisely introduce main features of each module without a claim to > be comprehensive. In the reference manual, on the other hand, we try > to be complete in feature coverage and economical in illustrative > examples. > > It is reasonable to expect users to read PyMOTW articles in their > entirety, while reading entire sections of the reference manual would > mean a very boring weekend. This observation makes it hard to find a > good place for a PyMOTW link. Should it go in the beginning or the end of > the reference manual page? In either case, it is unlikely to be > noticed by a typical user who goes directly to the class or method > documentation through some kind of search. > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > http://mail.python.org/mailman/listinfo/python-ideas > -- Brian K. Jones My Blog http://www.protocolostomy.com Follow me http://twitter.com/bkjones -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimjjewett at gmail.com Mon Mar 21 03:18:28 2011 From: jimjjewett at gmail.com (Jim Jewett) Date: Sun, 20 Mar 2011 22:18:28 -0400 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc.
In-Reply-To: References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> <4D82B9C6.9030709@pearwood.info> <4D840B80.3040806@pearwood.info> <4D854B28.7050306@pearwood.info> <7B7DFF08-4CC9-44F3-A250-753EE19459F4@gmail.com> Message-ID: On Sun, Mar 20, 2011 at 9:18 PM, Brian Jones wrote: > On Sun, Mar 20, 2011 at 2:18 PM, Alexander Belopolsky > wrote: >> I'll focus on criticism even though, overall, I find the >> PyMOTW:datetime page a good introduction to the datetime module. I >> would recommend this page to beginners, but probably *before* the >> official docs, so a "see also" link is probably not the right way to >> link PyMOTW pages. Maybe a link to PyMOTW should be added to the >> main docs.python.org page. >> Now, what I don't like about PyMOTW:datetime. >> 1. Order of presentation: time, date, timedelta, datetime, tzinfo. >> For an introductory article, I would start with date, then do datetime >> and timedelta. The time objects are not that useful > This is all just so ridiculous. > If we're so sensitive to some (btw, arbitrary and subjective in this case) > ordering, then why is the official stdlib documentation *not* in this order? The order he recommended makes sense for a tutorial introduction. Another order -- such as putting constants first -- makes sense for an API reference. He does say it would make sense to refer beginners to PyMOTW *before* the official documentation; a see-also at the *end* of the official module documentation won't do that. (Adding it to a list of alternative references at the beginning of the not-per-module documentation *might*.) -jJ From guido at python.org Mon Mar 21 03:18:51 2011 From: guido at python.org (Guido van Rossum) Date: Sun, 20 Mar 2011 19:18:51 -0700 Subject: [Python-ideas] Would it be possible to define abstract read/write properties with decorators?
In-Reply-To: References: Message-ID: It looks like you have moved on to a different strategy; let me comment on the code review instead. On Sun, Mar 20, 2011 at 7:06 AM, Darren Dale wrote: > On Sat, Mar 19, 2011 at 12:24 PM, Guido van Rossum wrote: >> Thanks much for your contribution! In order to get it reviewed and >> submitted, can you please create a bug for this issue (mention the >> python-ideas thread), upload your patch there, and perhaps ping >> python-dev? > > I did so, at http://bugs.python.org/issue11610 . The first reviewer > directed me to discussion concerning the implementation of abstract > classmethods at http://bugs.python.org/issue5867 . In that case, you > objected to extending the implementation of classmethod to allow > assigning to an __isabstractmethod__ attribute, which would have > allowed the same syntax I suggested, combining @abstractmethod and > @classmethod. The patch I made (which doesn't work yet) also attempts > to extend the implementation of a builtin. Do you still object to this > approach? There are two somewhat related issues: > > * api: The first reviewer objects to using a single decorator for > methods (regular, class, and static) but a combination of two > decorators for properties. > * implementation: The current abc.abstractproperty has some issues > beyond not supporting the decorator syntax, which are due to the fact > that properties are composite objects, and it is the methods which > compose the property that should imbue "abstractness". > - To provide an implementation for an abstract property, one > currently has to completely respecify a concrete property and rebind > it. If an ABC defines an abstract read/write property and a subclass > mistakenly redefines it as a read-only property, the ABC mechanisms > will not catch the error. > - I can imagine cases where an abstract base class may define a > simple concrete getter but an abstract setter. This is not possible > with the current abstractproperty.
> - An abstractproperty cannot be made concrete through the use of the > decorators: > > class D(MyABC): > @MyABC.my_abstract_property.setter > def my_abstract_property(self): > ... > > because @MyABC.my_abstract_property.setter returns another instance of > abstractproperty. > > I think the general approach I suggested resolves all of these issues. > If you have reservations about extending builtins, an alternative > might be to improve the definition of abstractproperty so it supports > the features and addresses the issues we have been discussing, and has > decorators implemented such that once all of the abstract methods have > been replaced with concrete ones, they return an instance of the > built-in property rather than abc.abstractproperty. Does this sound > like it could be an acceptable alternative? > > Darren > -- --Guido van Rossum (python.org/~guido) From alexander.belopolsky at gmail.com Mon Mar 21 04:11:42 2011 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Sun, 20 Mar 2011 23:11:42 -0400 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. In-Reply-To: References: <32D24669-A45D-4C63-9722-B1C3AE8C1F55@gmail.com> <20110317231905.GF3778@kevin> <4D82B9C6.9030709@pearwood.info> <4D840B80.3040806@pearwood.info> <4D854B28.7050306@pearwood.info> <7B7DFF08-4CC9-44F3-A250-753EE19459F4@gmail.com> Message-ID: On Sun, Mar 20, 2011 at 9:18 PM, Brian Jones wrote: .. > This is all just so ridiculous. > If we're so sensitive to some (btw, arbitrary and subjective in this case) > ordering, then why is the official stdlib documentation *not* in this order? One possible explanation is that it was not written by me. :-) Seriously, official datetime documentation can be improved. Order of presentation is one area of improvement. Note that the order in "Available Types" is different from the order of per-type sections.
It would certainly make sense to use the same logical order for both. However, for reference documentation that is not designed to be read sequentially, the order of presentation is not as important as in a module overview or a tutorial. > Why does the datetime module documentation not cover the actual class for > which it is named until after things like timedelta? I don't know. In the summary section the timedelta is listed after datetime. On the other hand, covering timedelta first is likely to reduce the number of back-references, because timedelta arithmetic is self-contained while datetime arithmetic properties cannot be described without introducing timedelta. > Why does the document > start out by covering constants, follow with available types, then shove in > a few completely random miscellaneous statements, and dive into the > timedelta object? Constants followed by types is a fairly standard order in stdlib documentation. I think we mostly follow the order in which things are defined in code, which in turn is usually organized so that things are defined before they are referenced. What "random miscellaneous statements" do you refer to? Documentation patches are always welcome. > Is this *really* the best way to introduce the datetime > module to someone new to the language? No. A reference manual is *not* the best way to introduce anything to someone new to the language. We do try to make the reference manual novice-friendly as long as it does not conflict with completeness or accuracy. > How does the standard documentation > hold up to the criteria you're using to critique the PyMOTW documentation? I would not use the same criteria for the two works. They serve different purposes. > Why is there so much energy put forth to squash a good idea and almost none > toward doing the actual work to improve what's already there?
Why do you think adding a link to official module documentation pointing to a document that has not been reviewed by module maintainers is a good idea? .. > Do the PyMOTW documents, on the whole, add value over and above the standard > library documentation? I have yet to hear any compelling argument that they > don't. I don't know about "on the whole." I have specific issues with the datetime article. Other modules' maintainers may or may not have issues with the specific PyMOTW articles. Why does this need to be all or nothing? Note that we don't cross-reference official tutorial sections from Language Reference. PyMOTW looks like the missing Library Tutorial. I don't object to featuring it as such on the main documentation page. From brian.curtin at gmail.com Mon Mar 21 04:13:22 2011 From: brian.curtin at gmail.com (Brian Curtin) Date: Mon, 21 Mar 2011 03:13:22 +0000 Subject: [Python-ideas] Linking Doug's stdlib documentation to our main modules doc. In-Reply-To: References: Message-ID: On Tue, Mar 15, 2011 at 19:11, Tarek Ziadé wrote: > Hey, > > As I told Doug during Pycon, I think it would be a good idea to link > his PyMOTW pages to our modules documentation in docs.python.org so > people have more examples etc. > > Cheers > Tarek +1 -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg.ewing at canterbury.ac.nz Mon Mar 21 06:10:43 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Mon, 21 Mar 2011 18:10:43 +1300 Subject: [Python-ideas] A user story concerning things knowing their own names In-Reply-To: References: <4D8082C1.30400@canterbury.ac.nz> <4D8408DC.4020103@canterbury.ac.nz> <4D8411CF.9060209@canterbury.ac.nz> Message-ID: <4D86DDD3.1040709@canterbury.ac.nz> Guido van Rossum wrote: > So, apologies if this has been brought up or rejected before, wouldn't > a class decorator work for you? It would work, although it would be a bit less than satisfying, because the property wouldn't be fully self-contained. 
Some of the plumbing would still be showing, albeit less obtrusively. -- Greg From ianb at colorstudy.com Mon Mar 21 06:18:48 2011 From: ianb at colorstudy.com (Ian Bicking) Date: Mon, 21 Mar 2011 00:18:48 -0500 Subject: [Python-ideas] A user story concerning things knowing their own names In-Reply-To: <4D86DDD3.1040709@canterbury.ac.nz> References: <4D8082C1.30400@canterbury.ac.nz> <4D8408DC.4020103@canterbury.ac.nz> <4D8411CF.9060209@canterbury.ac.nz> <4D86DDD3.1040709@canterbury.ac.nz> Message-ID: On Mon, Mar 21, 2011 at 12:10 AM, Greg Ewing wrote: > Guido van Rossum wrote: > > So, apologies if this has been brought up or rejected before, wouldn't >> a class decorator work for you? >> > > It would work, although it would be a bit less than satisfying, > because the property wouldn't be fully self-contained. Some of > the plumbing would still be showing, albeit less obtrusively. > If you forget the decorator (easy to do) the errors could be lots of ugly " object has no attribute 'name'" -- and you could make the error slightly better, but not much because the PropertyThatMustKnowName doesn't get a chance to validate itself (since you didn't use the decorator and it can't really know that). Ian -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Mon Mar 21 18:04:19 2011 From: guido at python.org (Guido van Rossum) Date: Mon, 21 Mar 2011 10:04:19 -0700 Subject: [Python-ideas] A user story concerning things knowing their own names In-Reply-To: References: <4D8082C1.30400@canterbury.ac.nz> <4D8408DC.4020103@canterbury.ac.nz> <4D8411CF.9060209@canterbury.ac.nz> <4D86DDD3.1040709@canterbury.ac.nz> Message-ID: On Sun, Mar 20, 2011 at 10:18 PM, Ian Bicking wrote: > On Mon, Mar 21, 2011 at 12:10 AM, Greg Ewing > wrote: >> >> Guido van Rossum wrote: >> >>> So, apologies if this has been brought up or rejected before, wouldn't >>> a class decorator work for you? 
>> >> It would work, although it would be a bit less than satisfying, >> because the property wouldn't be fully self-contained. Some of >> the plumbing would still be showing, albeit less obtrusively. > > If you forget the decorator (easy to do) the errors could be lots of ugly > " object has no attribute 'name'" -- > and you could make the error slightly better, but not much because the > PropertyThatMustKnowName doesn't get a chance to validate itself (since you > didn't use the decorator and it can't really know that). It would be easy enough to record the filename and line where the constructor was called, and report those in the error message. All in all it does sound like it could be an improvement over having to pass the name in redundantly, and it has the advantage that it works today. -- --Guido van Rossum (python.org/~guido) From g.rodola at gmail.com Tue Mar 22 10:16:08 2011 From: g.rodola at gmail.com (=?ISO-8859-1?Q?Giampaolo_Rodol=E0?=) Date: Tue, 22 Mar 2011 10:16:08 +0100 Subject: [Python-ideas] Function multiple arguments assignment Message-ID: It's likely this has already been proposed in the past, I don't know, anyway... This occurred to me while using the subprocess module yesterday. I had to do something like this: subprocess.Popen(["exe"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) Would it be appropriate to permit something similar to multiple variable assignment such as: subprocess.Popen(["exe"], stdin=stdout=stderr=subprocess.PIPE) ...? Regards --- Giampaolo http://code.google.com/p/pyftpdlib/ http://code.google.com/psutil From ironfroggy at gmail.com Tue Mar 22 10:31:49 2011 From: ironfroggy at gmail.com (Calvin Spealman) Date: Tue, 22 Mar 2011 05:31:49 -0400 Subject: [Python-ideas] Function multiple arguments assignment In-Reply-To: References: Message-ID: +0 While this makes sense when I think about it, it doesn't make sense when I feel about it. On Tue, Mar 22, 2011 at 5:16 AM, Giampaolo Rodolà 
wrote: > It's likely this has already been proposed in the past, I don't know, anyway... > This occurred to me while using the subprocess module yesterday. > I had to do something like this: > > subprocess.Popen(["exe"], stdin=subprocess.PIPE, > stdout=subprocess.PIPE, stderr=subprocess.PIPE) > > Would it be appropriate to permit something similar to multiple > variable assignment such as: > > subprocess.Popen(["exe"], stdin=stdout=stderr=subprocess.PIPE) > > ...? > > > Regards > > > --- Giampaolo > http://code.google.com/p/pyftpdlib/ > http://code.google.com/psutil > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > http://mail.python.org/mailman/listinfo/python-ideas > -- Read my blog! I depend on your acceptance of my opinion! I am interesting! http://techblog.ironfroggy.com/ Follow me if you're into that sort of thing: http://www.twitter.com/ironfroggy -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Tue Mar 22 11:01:25 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 22 Mar 2011 20:01:25 +1000 Subject: [Python-ideas] Function multiple arguments assignment In-Reply-To: References: Message-ID: On Tue, Mar 22, 2011 at 7:16 PM, Giampaolo Rodolà wrote: > It's likely this has already been proposed in the past, I don't know, anyway... > This occurred to me while using the subprocess module yesterday. > I had to do something like this: > > subprocess.Popen(["exe"], stdin=subprocess.PIPE, > stdout=subprocess.PIPE, stderr=subprocess.PIPE) > > Would it be appropriate to permit something similar to multiple > variable assignment such as: > > subprocess.Popen(["exe"], stdin=stdout=stderr=subprocess.PIPE) Hitting a gnat with a hammer. Something that may make more sense is a PopenPipe that changes the default for stdin/out/err to be distinct pipes rather than inherited from the parent (but still able to be set explicitly). 
It is certainly annoying that creating a fully piped subprocess is so verbose. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From p.f.moore at gmail.com Tue Mar 22 15:57:25 2011 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 22 Mar 2011 14:57:25 +0000 Subject: [Python-ideas] Function multiple arguments assignment In-Reply-To: References: Message-ID: On 22 March 2011 10:01, Nick Coghlan wrote: >> >> subprocess.Popen(["exe"], stdin=stdout=stderr=subprocess.PIPE) > > Hitting a gnat with a hammer. > > Something that may make more sense is a PopenPipe that changes the > default for stdin/out/err to be distinct pipes rather than inherited > from the parent (but still able to be set explicitly). It is certainly > annoying that creating a fully piped subprocess is so verbose. Yes. This always struck me as a (minor, but annoying) usability issue with subprocess rather than a difficulty crying out for a new language feature :-) Note that part of the problem is all those "subprocess." prefixes. Using from subprocess import * is so tempting here... :-) Paul. From solipsis at pitrou.net Tue Mar 22 16:11:33 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 22 Mar 2011 16:11:33 +0100 Subject: [Python-ideas] Function multiple arguments assignment References: Message-ID: <20110322161133.3e27cf62@pitrou.net> On Tue, 22 Mar 2011 20:01:25 +1000 Nick Coghlan wrote: > On Tue, Mar 22, 2011 at 7:16 PM, Giampaolo Rodolà wrote: > > It's likely this has already been proposed in the past, I don't know, anyway... > > This occurred to me while using the subprocess module yesterday. > > I had to do something like this: > > > > subprocess.Popen(["exe"], stdin=subprocess.PIPE, > > stdout=subprocess.PIPE, stderr=subprocess.PIPE) > > > > Would it be appropriate to permit something similar to multiple > > variable assignment such as: > > > > subprocess.Popen(["exe"], stdin=stdout=stderr=subprocess.PIPE) > > Hitting a gnat with a hammer. 
> > Something that may make more sense is a PopenPipe that changes the > default for stdin/out/err to be distinct pipes rather than inherited > from the parent (but still able to be set explicitly). It is certainly > annoying that creating a fully piped subprocess is so verbose. It's much less verbose if you use "from subprocess import PIPE" (which is unlikely to conflict with anything else in your module namespace, I think). Regards Antoine. From ncoghlan at gmail.com Tue Mar 22 21:33:13 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 23 Mar 2011 06:33:13 +1000 Subject: [Python-ideas] Function multiple arguments assignment In-Reply-To: References: Message-ID: On Wed, Mar 23, 2011 at 12:57 AM, Paul Moore wrote: > Note that part of the problem is all those "subprocess." prefixes. > Using from subprocess import * is so tempting here... :-) As Antoine noted, selective direct imports definitely reduce the verbosity, as do things like simply abbreviating the module name. Using the convenience helpers (like subprocess.call()) when applicable also helps. There's probably room for another helper or two, though - e.g. I think there's a patch on the tracker somewhere that makes it easy to create threaded background readers to keep the stdout and stderr pipes from filling up and blocking the child process. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From greg.ewing at canterbury.ac.nz Tue Mar 22 23:47:30 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Wed, 23 Mar 2011 11:47:30 +1300 Subject: [Python-ideas] Function multiple arguments assignment In-Reply-To: References: Message-ID: <4D892702.5040509@canterbury.ac.nz> Paul Moore wrote: > On 22 March 2011 10:01, Nick Coghlan wrote: > >>>subprocess.Popen(["exe"], stdin=stdout=stderr=subprocess.PIPE) Perhaps subprocess.Popen could have a 'default' argument specifying what to do for unspecified file descriptors. 
Then the above could be written subprocess.Popen(["exe"], default = subprocess.PIPE) -- Greg From debatem1 at gmail.com Wed Mar 23 00:35:18 2011 From: debatem1 at gmail.com (geremy condra) Date: Tue, 22 Mar 2011 16:35:18 -0700 Subject: [Python-ideas] Function multiple arguments assignment In-Reply-To: <4D892702.5040509@canterbury.ac.nz> References: <4D892702.5040509@canterbury.ac.nz> Message-ID: On Tue, Mar 22, 2011 at 3:47 PM, Greg Ewing wrote: > Paul Moore wrote: >> >> On 22 March 2011 10:01, Nick Coghlan wrote: >> >>>> subprocess.Popen(["exe"], stdin=stdout=stderr=subprocess.PIPE) > > Perhaps subprocess.Popen could have a 'default' argument > specifying what to do for unspecified file descriptors. > Then the above could be written > > ? subprocess.Popen(["exe"], default = subprocess.PIPE) Not to be too pointed about it, but IMO the last thing subprocess.Popen needs is more keyword arguments. I realize that it needs to be many things to many people, but its signature already requires >50 lines of documentation and a similar volume of example text to explain. To me, Nick Coghlan's more-helper-functions approach seems more sensible here. Geremy Condra From jsbueno at python.org.br Wed Mar 23 01:25:13 2011 From: jsbueno at python.org.br (Joao S. O. Bueno) Date: Tue, 22 Mar 2011 21:25:13 -0300 Subject: [Python-ideas] Function multiple arguments assignment In-Reply-To: References: <4D892702.5040509@canterbury.ac.nz> Message-ID: On Tue, Mar 22, 2011 at 8:35 PM, geremy condra wrote: > On Tue, Mar 22, 2011 at 3:47 PM, Greg Ewing wrote: >> Paul Moore wrote: >>> >>> On 22 March 2011 10:01, Nick Coghlan wrote: >>> >>>>> subprocess.Popen(["exe"], stdin=stdout=stderr=subprocess.PIPE) >> >> Perhaps subprocess.Popen could have a 'default' argument >> specifying what to do for unspecified file descriptors. >> Then the above could be written >> >> ? 
subprocess.Popen(["exe"], default = subprocess.PIPE) > > Not to be too pointed about it, but IMO the last thing > subprocess.Popen needs is more keyword arguments. I realize that it > needs to be many things to many people, but its signature already > requires >50 lines of documentation and a similar volume of example > text to explain. To me, Nick Coghlan's more-helper-functions approach > seems more sensible here. In other news, last week I needed to "detach" a subprocess, and there is no high-level way to do it. There is very little documentation on this, one mostly has to follow C documentation using "fork" and arcane system calls (os.setsid and os._exit) to get a process to be daemonized. Maybe subprocess could include a call to take care of all this mess - (or is it indeed only me who needs to detach processes?) I followed the recipe from [1] and could not get it simpler [1] - http://stackoverflow.com/questions/972362/spawning-process-from-python > > Geremy Condra > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > http://mail.python.org/mailman/listinfo/python-ideas > From ben+python at benfinney.id.au Wed Mar 23 04:05:55 2011 From: ben+python at benfinney.id.au (Ben Finney) Date: Wed, 23 Mar 2011 14:05:55 +1100 Subject: [Python-ideas] Detaching (daemonising) a process (was: Function multiple arguments assignment) References: <4D892702.5040509@canterbury.ac.nz> Message-ID: <87oc52359o.fsf_-_@benfinney.id.au> "Joao S. O. Bueno" writes: > In other news, last week I needed to "detach" a subprocess, and there > is no high-level way to do it. There is very little documentation on > this, one mostly has to follow C documentation using "fork" and arcane > system calls (os.setsid and os._exit) to get a process to be > daemonized. PEP 3143, and its reference implementation 'python-daemon', are my intended fix for this. I need to re-think how it uses lock files, but it is already useful for many people. 
The plan is to eventually get the 'python-daemon' implementation into the standard library, so there's then One Obvious Way To Do It. Please let me know how well it works for you. -- \ "Computer perspective on Moore's Law: Human effort becomes | `\ twice as expensive roughly every two years." -anonymous | _o__) | Ben Finney From dirkjan at ochtman.nl Thu Mar 24 12:43:41 2011 From: dirkjan at ochtman.nl (Dirkjan Ochtman) Date: Thu, 24 Mar 2011 12:43:41 +0100 Subject: [Python-ideas] [Python-Dev] Dict access with double-dot (syntactic sugar) In-Reply-To: References: Message-ID: On Thu, Mar 24, 2011 at 12:40, Jameson Quinn wrote: > "class attrdict" is a perennial dead-end for intermediate pythonistas who > want to save 3 characters/5 keystrokes for item access. Other languages such > as javascript allow "somedict.foo" to mean the same as "somedict['foo']", so > why not python? Well, there are a number of reasons why not, beginning with > all the magic method names in python. This should go on python-ideas. Cheers, Dirkjan From jameson.quinn at gmail.com Thu Mar 24 13:23:20 2011 From: jameson.quinn at gmail.com (Jameson Quinn) Date: Thu, 24 Mar 2011 06:23:20 -0600 Subject: [Python-ideas] Fwd: Dict access with double-dot (syntactic sugar) In-Reply-To: References: Message-ID: "class attrdict" is a perennial dead-end for intermediate pythonistas who want to save 3 characters/5 keystrokes for item access. Other languages such as javascript allow "somedict.foo" to mean the same as "somedict['foo']", they think, so why not python? Well, there are a number of reasons why not, beginning with the possible conflicts with keywords or any of the magic method names in python. But saving keystrokes is still a reasonable goal. So what about a compromise? Allow "somedict..foo", with two dots, to take that place. It still saves 2 relatively-hard-to-type characters. 
So there would be no shortcut for "somedict['$#!%']". But for any identifier-legal foo, the interpreter would just read ..foo as ['foo']. I would not be surprised if I'm not the first person to suggest this. If so, and there's already well-known reasons why this is a bad idea, I apologize. But if the only reason not to is "we never did it that way before" or "it would be too addictive, and so people would never want to use older python versions" or "headache for tools like pylint", I think we should do it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From phd at phdru.name Thu Mar 24 13:36:10 2011 From: phd at phdru.name (Oleg Broytman) Date: Thu, 24 Mar 2011 15:36:10 +0300 Subject: [Python-ideas] getitem/getattr access In-Reply-To: References: Message-ID: <20110324123610.GA30847@iskra.aviel.ru> On Thu, Mar 24, 2011 at 06:23:20AM -0600, Jameson Quinn wrote: > "class attrdict" is a perennial dead-end for intermediate pythonistas who > want to save 3 characters/5 keystrokes for item access. Other languages such > as javascript allow "somedict.foo" to mean the same as "somedict['foo']", > they think, so why not python? See class DictRecord at http://ppa.cvs.sourceforge.net/viewvc/ppa/qps/qUtils.py d = DictRecord(test="test") print d.test Oleg. -- Oleg Broytman http://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. From mal at egenix.com Thu Mar 24 14:44:25 2011 From: mal at egenix.com (M.-A. Lemburg) Date: Thu, 24 Mar 2011 14:44:25 +0100 Subject: [Python-ideas] Fwd: Dict access with double-dot (syntactic sugar) In-Reply-To: References: Message-ID: <4D8B4AB9.1020402@egenix.com> Jameson Quinn wrote: > "class attrdict" is a perennial dead-end for intermediate pythonistas who > want to save 3 characters/5 keystrokes for item access. Other languages such > as javascript allow "somedict.foo" to mean the same as "somedict['foo']", > they think, so why not python? 
Well, there are a number of reasons why not, > beginning with the possible conflicts with keywords or any of the magic > method names in python. You can have all that in Python as well - you only need to create a dictionary type that maps attribute access to dictionary access. Wrapping existing dictionaries like that is also easily possible. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Mar 24 2011) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try our new mxODBC.Connect Python Database Interface for free ! :::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ From anikom15 at gmail.com Thu Mar 24 14:59:19 2011 From: anikom15 at gmail.com (Westley =?ISO-8859-1?Q?Mart=EDnez?=) Date: Thu, 24 Mar 2011 06:59:19 -0700 Subject: [Python-ideas] Fwd: Dict access with double-dot (syntactic sugar) In-Reply-To: References: Message-ID: <1300975159.19094.1.camel@localhost.localdomain> On Thu, 2011-03-24 at 06:23 -0600, Jameson Quinn wrote: > "class attrdict" is a perennial dead-end for intermediate pythonistas > who want to save 3 characters/5 keystrokes for item access. Other > languages such as javascript allow "somedict.foo" to mean the same as > "somedict['foo']", they think, so why not python? Well, there are a > number of reasons why not, beginning with the possible conflicts with > keywords or any of the magic method names in python. > > > But saving keystrokes is still a reasonable goal. > > > So what about a compromise? Allow "somedict..foo", with two dots, to > take that place. It still saves 2 relatively-hard-to-type characters. 
> > The "foo" part would of course have to obey attribute/identifier > naming rules. So there would be no shortcut for "somedict['$#!%']". > But for any identifier-legal foo, the interpreter would just > read ..foo as ['foo']. > > I would not be surprised if I'm not the first person to suggest this. > If so, and there's already well-known reasons why this is a bad idea, > I apologize. But if the only reason not to is "we never did it that > way before" or "it would be too addictive, and so people would never > want to use older python versions" or "headache for tools like > pylint", I think we should do it. > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > http://mail.python.org/mailman/listinfo/python-ideas For one, it looks far too similar to object.attr. From josiah.carlson at gmail.com Thu Mar 24 17:10:45 2011 From: josiah.carlson at gmail.com (Josiah Carlson) Date: Thu, 24 Mar 2011 09:10:45 -0700 Subject: [Python-ideas] [Python-Dev] Dict access with double-dot (syntactic sugar) In-Reply-To: References: Message-ID: Pardon me for the run-by comment. Your proposed double-dot syntax foo..bar (that is something whose __*item__ methods you want to invoke) is visually indistinguishable from a mistyped goo..baz (whose __*item__ methods you do not want to invoke). The only way a 3rd party could know what you meant is if they subsequently scanned all nearby code to see if that particular access pattern was repeated, or if they knew that you really wanted to access the contents of the dictionary and not the attributes of an object. It also seems to me that the use of a dictionary instead of a container for attributes (via AttrDict) is also a mistake (I've written these myself, but only because of their convenience, not because I like them conceptually). Consistency for the sake of sanity is something that I strive for, especially when writing code that others are to read. 
If you feel the need to have an AttrDict class, might I suggest a very simple wrapper that ensures that you aren't mixing access patterns that would confuse. >>> class AttrDict(object): ... def __init__(self, dict): ... self.__dict__ = dict ... >>> b = {} >>> c = AttrDict(b) >>> c.a Traceback (most recent call last): File "", line 1, in AttributeError: 'AttrDict' object has no attribute 'a' >>> b['a'] = 1 >>> c.a 1 No need to implement *any* magic method beyond __init__. Of course you can't access it like a dictionary, but I thought that was what you were trying to avoid. The claimed reduction in keystrokes is false economy. While you may save a few keystrokes (the square brackets and quote marks) for some accesses, "syntax should not look like grit on my monitor", and an extra period is the most grit-like of any syntax I've ever seen. And finally, for the sake of consistency, if foo..bar is allowed, why not allow for foo..1? Lists are also used with container[index]? Oh, because of the ambiguity. Did we mean foo[1], or did we mean foo['1']. Therein lies the rub, as while previously the behavior was consistent for all container types (foo[bar] does the same thing, for all possible bar), now accessing a string key in a container gets special syntax via foo..string . That doesn't feel right to me. For all of these reasons, I'm -1 . Regards, - Josiah On Thu, Mar 24, 2011 at 8:20 AM, Jameson Quinn wrote: > 2011/3/24 Brian Curtin >> >> On Thu, Mar 24, 2011 at 06:40, Jameson Quinn >> wrote: >>> >>> "class attrdict" is a perennial dead-end for intermediate pythonistas who >>> want to save 3 characters/5 keystrokes for item access. Other languages such >>> as javascript allow "somedict.foo" to mean the same as "somedict['foo']", so >>> why not python? Well, there are a number of reasons why not, beginning with >>> all the magic method names in python. >>> But saving keystrokes is still a reasonable goal. 
>> >> Code is read far more often than it is written, so readability tends to >> count more than most other metrics. >>> >>> So what about a compromise? Allow "somedict..foo", with two dots, to take >>> that place. It still saves 2 characters (often 4 keystrokes; and I find even >>> ', "[", or "]" harder to type than "."). >> >> I don't see the benefit, but maybe it'll save a few bytes in file size. >> Anyone reviewing your code now has to think "does this need one or two >> dots?" >> Anyways, why not just do something like this: >> class AttrDict(dict): >>     def __getattr__(self, attr): >>         return super(AttrDict, self).__getitem__(attr) >> >>> d = AttrDict() >> >>> d["a"] = 1 >> >>> d.a >> 1 > There are a few reasons not to do it your way. For one, you could easily > forget about one of the built-in dict methods (e.g. d.get != d["get"]). For > another, if you look on the web, you'll find at least 15 different recipes > for that thing you just made, several of which have more-or-less subtle > errors waiting to get you. Furthermore, the whole point is to have this > available for built-in dicts. Say you get a dict as json - you can either > subclass your own json decoder, with all the pitfalls, or you can explicitly > pass the decoded dict to AttrDict, causing an extra object to be created and > obfuscating your code. And finally, who wants to copy that AttrDict code for > the 137th time? > As for the question of "one or two dots", it's exactly the same question you > face now with "dot or bracket", so I don't see the problem. > It's not merely a matter of saving keystrokes. To me, it would be actually > easier to read code in this style. When I'm doing things like accessing my > json data, that is essentially attribute access; why should my syntax > colorer color it the same as my UI strings? 
> In sum: > -Saves keystrokes > -saves bugs from miscooked recipes > -faster and less memory than any such recipe > -more-readable code > -very low-risk for old code > Jameson > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/josiah.carlson%40gmail.com > > From santoso.wijaya at gmail.com Thu Mar 24 22:03:54 2011 From: santoso.wijaya at gmail.com (Santoso Wijaya) Date: Thu, 24 Mar 2011 14:03:54 -0700 Subject: [Python-ideas] [Python-Dev] Dict access with double-dot (syntactic sugar) In-Reply-To: References: Message-ID: I just want to chip in that, as far as syntactic sugar go, `somedict:foo` looks better than `somedict..foo`. 2c... ~/santa On Thu, Mar 24, 2011 at 4:40 AM, Jameson Quinn wrote: > "class attrdict" is a perennial dead-end for intermediate pythonistas who > want to save 3 characters/5 keystrokes for item access. Other languages such > as javascript allow "somedict.foo" to mean the same as "somedict['foo']", so > why not python? Well, there are a number of reasons why not, beginning with > all the magic method names in python. > > But saving keystrokes is still a reasonable goal. > > So what about a compromise? Allow "somedict..foo", with two dots, to take > that place. It still saves 2 characters (often 4 keystrokes; and I find even > ', "[", or "]" harder to type than "."). > > The "foo" part would of course have to obey attribute/identifier naming > rules. So there would be no shortcut for "somedict['$#!%']". But for any > identifier-legal foo, the interpreter would just read ..foo as ['foo']. > > I would not be surprised if I'm not the first person to suggest this. If > so, and there's already well-known reasons why this is a bad idea, I > apologize. 
But if the only reason not to is "we never did it that way > before" or "it would be too addictive, and so people would never want to use > older python versions" or "headache for tools like pylint", I think we > should do it. > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/santoso.wijaya%40gmail.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From masklinn at masklinn.net Thu Mar 24 22:45:50 2011 From: masklinn at masklinn.net (Masklinn) Date: Thu, 24 Mar 2011 22:45:50 +0100 Subject: [Python-ideas] [Python-Dev] Dict access with double-dot (syntactic sugar) In-Reply-To: References: Message-ID: <283A47FD-B34C-499A-9F0B-0783BCA95CC3@masklinn.net> On 2011-03-24, at 22:03 , Santoso Wijaya wrote: > I just want to chip in that, as far as syntactic sugar go, `somedict:foo` > looks better than `somedict..foo`. > > 2c... > > ~/santa On the other hand, the colon is generally used for definitions (in Python, with the defined on the left side and the definition on the right one) not for accesses. From anikom15 at gmail.com Fri Mar 25 02:20:19 2011 From: anikom15 at gmail.com (Westley =?ISO-8859-1?Q?Mart=EDnez?=) Date: Thu, 24 Mar 2011 18:20:19 -0700 Subject: [Python-ideas] [Python-Dev] Dict access with double-dot (syntactic sugar) In-Reply-To: References: Message-ID: <1301016019.20652.1.camel@localhost.localdomain> On Thu, 2011-03-24 at 14:03 -0700, Santoso Wijaya wrote: > I just want to chip in that, as far as syntactic sugar go, > `somedict:foo` looks better than `somedict..foo`. > > 2c... > > > > ~/santa > > > On Thu, Mar 24, 2011 at 4:40 AM, Jameson Quinn > wrote: > "class attrdict" is a perennial dead-end for intermediate > pythonistas who want to save 3 characters/5 keystrokes for > item access. 
Other languages such as javascript allow > "somedict.foo" to mean the same as "somedict['foo']", so why > not python? Well, there are a number of reasons why not, > beginning with all the magic method names in python. > > > But saving keystrokes is still a reasonable goal. > > > So what about a compromise? Allow "somedict..foo", with two > dots, to take that place. It still saves 2 characters (often 4 > keystrokes; and I find even ', "[", or "]" harder to type than > "."). > > > The "foo" part would of course have to obey > attribute/identifier naming rules. So there would be no > shortcut for "somedict['$#!%']". But for any identifier-legal > foo, the interpreter would just read ..foo as ['foo']. > > > I would not be surprised if I'm not the first person to > suggest this. If so, and there's already well-known reasons > why this is a bad idea, I apologize. But if the only reason > not to is "we never did it that way before" or "it would be > too addictive, and so people would never want to use older > python versions" or "headache for tools like pylint", I think > we should do it. > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/santoso.wijaya%40gmail.com > > > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > http://mail.python.org/mailman/listinfo/python-ideas That may be worse. A colon suggests relation, can be confused for dictionary attribution assignment, and can be confused with block declaration. From zuo at chopin.edu.pl Fri Mar 25 14:30:28 2011 From: zuo at chopin.edu.pl (Jan Kaliszewski) Date: Fri, 25 Mar 2011 14:30:28 +0100 Subject: [Python-ideas] Make-statement [Re: Different interface for namedtuple?] 
In-Reply-To: <4D72C4AC.7040302@canterbury.ac.nz> References: <265ECDA0-F91F-488F-821E-596EB84C4951@gmail.com> <4D72C4AC.7040302@canterbury.ac.nz> Message-ID: <20110325133028.GA2329@chopin.edu.pl> Hello, Greg Ewing dixit (2011-03-06, 12:18): > For Python, I postulated an "instance" statement that would > be used something like this: > > instance Wardrobe(Thing): > > name = "wardrobe" > description = "A nice mahogany double-door wardrobe." > > def take(self): > print "The wardrobe is too heavy to pick up." Why not use a class decorator? E.g.: def instance(*args, **kwargs): return (lambda cls: cls(*args, **kwargs)) And then simply: @instance(...some init args...) class Wardrobe(Thing): ... Cheers. *j From zuo at chopin.edu.pl Fri Mar 25 15:06:37 2011 From: zuo at chopin.edu.pl (Jan Kaliszewski) Date: Fri, 25 Mar 2011 15:06:37 +0100 Subject: [Python-ideas] namedtuple() subclasses again Message-ID: <20110325140637.GB2329@chopin.edu.pl> Hello, More thoughts (and use cases) about the namedtuple DRY matter and (what is more important) subclassing namedtuple() classes: Quite often I need to create my own named tuple class(es) with some additional methods or modifications of existing ones (most often: a custom version of __repr__()). Now I must write something like: _MyNamedTupleBase = namedtuple('MyNamedTuple', ('one_field', 'another_field', 'and_another_one')) class MyNamedTuple(_MyNamedTupleBase): def __repr__(self): "My sophisticated __repr__()" # and e.g. some new methods... ...or: class MyNamedTuple(namedtuple('MyNamedTuple', ('one_field', 'another_field', 'and_another_one'))): def __repr__(self): "My sophisticated __repr__()" # and e.g. some new methods... It would be very nice to be able to do it like this: class MyNamedTuple(namedtuple.abc): _fields = ( 'one_field', 'another_field', 'and_another_one', ) def __repr__(self): "My sophisticated __repr__()" # and e.g. some new methods...
...and especially: class MyAbstractNamedTuple(namedtuple.abc): def __repr__(self): "My sophisticated __repr__()" # and e.g. some new methods... class MyNamedTupleA(MyAbstractNamedTuple): _fields = 'a b c' class MyNamedTupleB(MyAbstractNamedTuple): _fields = ( 'one_field', 'another_field', 'and_another_one', ) (Please note that _fields is a part of the public API of named tuples). Implementation would be easy to explain (and to do; actually I have an outline in my head). The type (metaclass) of namedtuple.abc would be a subclass of abc.ABCMeta and would also act (by the ABC registration mechanism) as an abstract base for all named tuples and struct sequences like sys.version_info. So all these expressions would be True: >>> isinstance(namedtuple('Foo', 'x y z'), namedtuple.abc) >>> isinstance(sys.version_info, namedtuple.abc) And obviously: >>> isinstance(MyNamedTuple, namedtuple.abc) # referring to the above examples What do you think about the idea? Regards. *j From zuo at chopin.edu.pl Fri Mar 25 15:25:17 2011 From: zuo at chopin.edu.pl (Jan Kaliszewski) Date: Fri, 25 Mar 2011 15:25:17 +0100 Subject: [Python-ideas] namedtuple() subclasses again In-Reply-To: <20110325140637.GB2329@chopin.edu.pl> References: <20110325140637.GB2329@chopin.edu.pl> Message-ID: <20110325142517.GA4164@chopin.edu.pl> Sorry, mistake. Jan Kaliszewski dixit (2011-03-25, 15:06): > >>> isinstance(namedtuple('Foo', 'x y z'), namedtuple.abc) > >>> isinstance(sys.version_info, namedtuple.abc) > And obviously: > >>> isinstance(MyNamedTuple, namedtuple.abc) # referring to the above examples I meant: >>> issubclass(namedtuple('Foo', 'x y z'), namedtuple.abc) >>> issubclass(type(sys.version_info), namedtuple.abc) >>> issubclass(MyNamedTuple, namedtuple.abc) ...and ipso facto: >>> isinstance(namedtuple('Foo', 'x y z')(1, 2, 3), namedtuple.abc) >>> isinstance(sys.version_info, namedtuple.abc) >>> isinstance(MyNamedTuple(1, 2, 3), namedtuple.abc) Cheers.
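[As a footnote to the proposal above: a rough sketch of how an ABCMeta-based namedtuple.abc could behave. The names _NamedTupleABCMeta, NamedTupleABC, MyAbstract and MyA are invented for illustration only; this is not the dpaste draft, and it skips the registration of already-existing named tuples and struct sequences.]

```python
from abc import ABCMeta
from collections import namedtuple

class _NamedTupleABCMeta(ABCMeta):
    """Metaclass sketch: a subclass that defines _fields gets a real
    namedtuple class spliced into its bases; one without stays abstract."""
    def __new__(mcls, name, bases, ns):
        fields = ns.pop('_fields', None)
        if fields is not None:
            # Build the concrete tuple base from the field spec; put it
            # last so methods from abstract ancestors win in the MRO.
            nt_base = namedtuple(name, fields)
            bases = tuple(b for b in bases if b is not object) + (nt_base,)
        return super().__new__(mcls, name, bases, ns)

class NamedTupleABC(metaclass=_NamedTupleABCMeta):
    """Stand-in for the proposed namedtuple.abc."""

# Abstract intermediate class: no _fields, just shared behaviour.
class MyAbstract(NamedTupleABC):
    def __repr__(self):
        return 'custom'

# Concrete named tuple: defining _fields triggers namedtuple generation.
class MyA(MyAbstract):
    _fields = 'a b c'

p = MyA(1, 2, 3)
assert p == (1, 2, 3) and p.a == 1
assert repr(p) == 'custom'            # method from the abstract ancestor wins
assert MyA._fields == ('a', 'b', 'c')  # _fields stays part of the public API
```

Note the ordering choice: the generated namedtuple base is appended after the declared bases, so a custom __repr__() from an abstract ancestor shadows the generated one, which matches the semantics the proposal asks for.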
*j From josiah.carlson at gmail.com Fri Mar 25 18:25:18 2011 From: josiah.carlson at gmail.com (Josiah Carlson) Date: Fri, 25 Mar 2011 10:25:18 -0700 Subject: [Python-ideas] [Python-Dev] Dict access with double-dot (syntactic sugar) In-Reply-To: References: Message-ID: On Fri, Mar 25, 2011 at 9:00 AM, Jameson Quinn wrote: > I realized that python already has a way to access the string-based members > of a dict without using quotes: > def expect_a_chair(chair, **kw): > print "Thanks. That chair is %s." % chair > if kw: > for key, val in kw.iteritems(): > print "I wasn't expecting the (%s) %s!" % (val, key) > d = json.loads('{"chair":"comfy","inquisition":"Spanish"}') > expect_a_chair(**d) > try: > expect_a_chair({}) > except TypeError: > print "No chair." > The ** operator does this. Notice that nowhere in that python code (not > counting the json) do I have to put "chair" in quotes. > Honestly, this solves my current use case. I can use functions like > expect_a_chair for everything I need right now. > But perhaps, if there were a quote-free way to access string-based dict > items, it should be based on this. The problem is, this ** operator is a > unary operator, and the non-unary ** is already taken. > So, I don't have a perfect name for my proposed quoteless synonym for > '["..."]'. My best option is '*.'. Note that this could also be used for > unpacking, and with defaults: > d*.(x,y,z) #=== (d["x"], d["y"], d['z']) > e=d*.(e=None) #like 'e=d.get("e", None)'. > Does this sound worth-it to anyone? Gut reaction: Dear gods, how and why does * have a . operator, and why is there a function being called on it?!? Huh; x,y,z aren't defined in that scope, that's gotta return a NameError for sure. Or wait, is that some new way to specify a vector for multiplication? When did Python get a vector type and literal? If punctuation is the answer, Perl is the question.
Since this is Python, and punctuation is decidedly not the answer, I'm going to go a little further than before: -sys.maxint Regards, - Josiah From jameson.quinn at gmail.com Fri Mar 25 18:25:11 2011 From: jameson.quinn at gmail.com (Jameson Quinn) Date: Fri, 25 Mar 2011 11:25:11 -0600 Subject: [Python-ideas] Dict access with double-dot (syntactic sugar) In-Reply-To: References: Message-ID: I realized that python already has a way to access the string-based members of a dict without using quotes: def expect_a_chair(chair, **kw): print "Thanks. That chair is %s." % chair if kw: for key, val in kw.iteritems(): print "I wasn't expecting the (%s) %s!" % (val, key) d = json.loads('{"chair":"comfy","inquisition":"Spanish"}') expect_a_chair(**d) try: expect_a_chair({}) except TypeError: print "No chair." The ** "operator" does this. Notice that nowhere in that python code (not counting the json) do I have to put "chair" in quotes. Honestly, this solves my current use case. I can use functions like expect_a_chair for everything I need right now. So I'm reasonably satisfied. But perhaps, if there were a quote-free way to access string-based dict items, it should be based on this. The problem is, this ** "operator" is unary, and the non-unary ** is already taken for exponentials. So, I don't have a perfect name for my proposed quoteless synonym for '["attrname"]'. My best option is '*.attrname'. Note that this could also be used for unpacking, and with defaults: d*.a #=== d["a"] d*.(x,y,z) #=== (d["x"], d["y"], d['z']) e=d*.(e=None) #like 'e=d.get("e", None)'. Does this sound worth-it to anyone? Jameson (ps. I mistakenly sent this reply to python-dev earlier; sorry, this is the right place for it. Minor edits in this version.) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From p.f.moore at gmail.com Fri Mar 25 18:31:04 2011 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 25 Mar 2011 17:31:04 +0000 Subject: [Python-ideas] Dict access with double-dot (syntactic sugar) In-Reply-To: References: Message-ID: On 25 March 2011 17:25, Jameson Quinn wrote: > Does this sound worth-it to anyone? I think you're flogging a dead horse here. No, it doesn't sound remotely worth it to me. -1000 Paul. From jameson.quinn at gmail.com Fri Mar 25 18:47:01 2011 From: jameson.quinn at gmail.com (Jameson Quinn) Date: Fri, 25 Mar 2011 11:47:01 -0600 Subject: [Python-ideas] [Python-Dev] Dict access with double-dot (syntactic sugar) In-Reply-To: References: Message-ID: 2011/3/25 Josiah Carlson > On Fri, Mar 25, 2011 at 9:00 AM, Jameson Quinn > wrote: > > I realized that python already has a way to access the string-based > members > > of a dict without using quotes: > > def expect_a_chair(chair, **kw): > > print "Thanks. That chair is %s." % chair > > if kw: > > for key, val in kw.iteritems(): > > print "I wasn't expecting the (%s) %s!" % (val, key) > > d = json.loads('{"chair":"comfy","inquisition":"Spanish"}') > > expect_a_chair(**d) > > try: > > expect_a_chair({}) > > except TypeError: > > print "No chair." > > The ** operator does this. Notice that nowhere in that python code (not > > counting the json) do I have to put "chair" in quotes. > > Honestly, this solves my current use case. I can use functions like > > expect_a_chair for everything I need right now. > > But perhaps, if there were a quote-free way to access string-based dict > > items, it should be based on this. The problem is, this ** operator is a > > unary operator, and the non-unary ** is already taken. > > So, I don't have a perfect name for my proposed quoteless synonym for > > '["..."]'. My best option is '*.'. Note that this could also be used for > > unpacking, and with defaults: > > d*.(x,y,z) #=== (d["x"], d["y"], d['z']) > > e=d*.(e=None) #like 'e=d.get("e", None)'. 
> > Does this sound worth-it to anyone? > > Gut reaction: > Dear gods, how and why does * have a . operator, and why is there a > function being called on it?!? > Huh; x,y,z aren't defined in that scope, that's gotta return a > NameError for sure. > Or wait, is that some new way to specify a vector for multiplication? > When did Python get a vector type and literal? > > If punctuation is the answer, Perl is the question. Since this is > Python, and punctuation is decidedly not the answer, I'm going to go a > little further than before: -sys.maxint > > Regards, > - Josiah > OK. As I said, I think that the existing functionality as I showed in expect_a_chair solves my dislike for using quotes for something which logically is an attribute name. (It would be even better if json had a way to distinguish object-like data, with well-defined parameters, from more free-form data, but that is out of scope here.) Jameson ps. A true masochist could do something like my unpacking '*.' by a horrendous abuse of the lambda syntax: (lambda x,y,z,e=None, **kw:oh_please_god_no(locals()))(**d) I leave the definition of oh_please_god_no as an exercise for the reader. :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ethan at stoneleaf.us Fri Mar 25 19:21:20 2011 From: ethan at stoneleaf.us (Ethan Furman) Date: Fri, 25 Mar 2011 11:21:20 -0700 Subject: [Python-ideas] [Python-Dev] Dict access with double-dot (syntactic sugar) In-Reply-To: References: Message-ID: <4D8CDD20.8030201@stoneleaf.us> Josiah Carlson wrote: > If punctuation is the answer, Perl is the question. 
Since this is > Python, and punctuation is decidedly not the answer, I'm going to go a > little further than before: -sys.maxint +1 QOTW From zuo at chopin.edu.pl Sun Mar 27 04:30:41 2011 From: zuo at chopin.edu.pl (Jan Kaliszewski) Date: Sun, 27 Mar 2011 04:30:41 +0200 Subject: [Python-ideas] Two small functional-style-related improvements Message-ID: <20110327023041.GB2510@chopin.edu.pl> Hello. IMHO it'd be nice... 1. ...to add: * operator.is_none -- equivalent to (lambda x: x is None) * operator.is_not_none -- equivalent to (lambda x: x is not None) ...making 'is None'/'is not None' tests with any(), all(), filter(), itertools.takewhile/dropwhile() more convenient and readable (possibly also optimised for speed). 2. ...to add: * operator.anti_caller (or e.g. functools.negator?) -- equivalent to: def anti_caller(func): def call_and_negate(*args, **kwargs): return not func(*args, **kwargs) return call_and_negate What do you think? *j From python at mrabarnett.plus.com Sun Mar 27 04:44:16 2011 From: python at mrabarnett.plus.com (MRAB) Date: Sun, 27 Mar 2011 03:44:16 +0100 Subject: [Python-ideas] Two small functional-style-related improvements In-Reply-To: <20110327023041.GB2510@chopin.edu.pl> References: <20110327023041.GB2510@chopin.edu.pl> Message-ID: <4D8EA480.1060302@mrabarnett.plus.com> On 27/03/2011 03:30, Jan Kaliszewski wrote: > Hello. > > IMHO it'd be nice... > > 1. ...to add: > > * operator.is_none -- equivalent to (lambda x: x is None) > * operator.is_not_none -- equivalent to (lambda x: x is not None) > > ...making 'is None'/'is not None' tests with any(), all(), > filter(), itertools.takewhile/dropwhile() more convenient and readable > (possibly also optimised for speed). > > 2. ...to add: > > * operator.anti_caller (or e.g. functools.negator?) -- equivalent to: > > def anti_caller(func): > def call_and_negate(*args, **kwargs): > return not func(*args, **kwargs) > return call_and_negate > > What do you think?
> *j > I think that suggestion 2 is a strange one! :-) From ben+python at benfinney.id.au Sun Mar 27 05:14:42 2011 From: ben+python at benfinney.id.au (Ben Finney) Date: Sun, 27 Mar 2011 14:14:42 +1100 Subject: [Python-ideas] Two small functional-style-related improvements References: <20110327023041.GB2510@chopin.edu.pl> Message-ID: <877hbltftp.fsf@benfinney.id.au> Jan Kaliszewski writes: > IMHO it'd be nice... > > 1. ...to add: > > * operator.is_none -- equivalent to (lambda x: x is None) > * operator.is_not_none -- equivalent to (lambda x: x is not None) Why so specific? What's wrong with 'operator.is_(x, None)' and 'operator.is_not(x, None)'? Those both work today. > 2. ...to add: > > * operator.anti_caller (or e.g. functools.negator?) -- equivalent to: > > def anti_caller(func): > def call_and_negate(*args, **kwargs): > return not func(*args, **kwargs) > return call_and_negate This one seems overkill for the standard library. -- \ "When cryptography is outlawed, bayl bhgynjf jvyy unir | `\ cevinpl." --Anonymous | _o__) | Ben Finney From steve at pearwood.info Sun Mar 27 05:41:44 2011 From: steve at pearwood.info (Steven D'Aprano) Date: Sun, 27 Mar 2011 14:41:44 +1100 Subject: [Python-ideas] Two small functional-style-related improvements In-Reply-To: <20110327023041.GB2510@chopin.edu.pl> References: <20110327023041.GB2510@chopin.edu.pl> Message-ID: <4D8EB1F8.2090004@pearwood.info> Jan Kaliszewski wrote: > Hello. > > IMHO it'd be nice... > > 1. ...to add: > > * operator.is_none -- equivalent to (lambda x: x is None) > * operator.is_not_none -- equivalent to (lambda x: x is not None) > > ...making 'is None'/'is not None' tests with any(), all(), > filter(), itertools.takewhile/dropwhile() more convenient and readable > (possibly also optimised for speed). How is import operator any(map(operator.is_none, iterable)) more convenient and readable than: any(x is None for x in iterable) ? I'm asking this as someone who likes map and other functional tools.
Now that we have generator expressions and list comprehensions in the language, wrapping trivial expressions in a function is far less common. Likewise, how could a function call that includes 'x is None' be faster than 'x is None' alone? The overhead of calling the function would have to be negative! We can see this with the existing operator.is_ function: [steve at sylar ~]$ python3 -m timeit -r 15 -s "from operator import is_" "is_(42, None)" 1000000 loops, best of 15: 0.224 usec per loop [steve at sylar ~]$ python3 -m timeit -r 15 "42 is None" 1000000 loops, best of 15: 0.117 usec per loop For comparison purposes: [steve at sylar ~]$ python3 -m timeit -r 15 -s "def f(x): x is None" "f(42)" 1000000 loops, best of 15: 0.378 usec per loop and just for completeness: [steve at sylar ~]$ python3 -m timeit -r 15 -s "from operator import is_; from functools import partial; f = partial(is_, None)" "f(42)" 1000000 loops, best of 15: 0.301 usec per loop The possible time saving compared to a pure-Python function is very small, and there's rarely a need to use a function when you can just use an expression. > 2. ...to add: > > * operator.anti_caller (or e.g. functools.negator?) -- equivalent to: > > def anti_caller(func): > def call_and_negate(*args, **kwargs): > return not func(*args, **kwargs) > return call_and_negate This seems to be a decorator, so I don't believe it belongs in the operator module. Either way, I don't see the point to it. Why would you use this @functools.negator def spam(x): return something instead of just this? def spam(x): return not something What is your use-case for this function? 
-- Steven From zuo at chopin.edu.pl Sun Mar 27 13:53:26 2011 From: zuo at chopin.edu.pl (Jan Kaliszewski) Date: Sun, 27 Mar 2011 13:53:26 +0200 Subject: [Python-ideas] Two small functional-style-related improvements In-Reply-To: <877hbltftp.fsf@benfinney.id.au> References: <20110327023041.GB2510@chopin.edu.pl> <877hbltftp.fsf@benfinney.id.au> Message-ID: <20110327115326.GA2197@chopin.edu.pl> Ben Finney dixit (2011-03-27, 14:14): > Jan Kaliszewski writes: > > > IMHO it'd be nice... > > > > 1. ...to add: > > > > * operator.is_none -- equivalent to (lambda x: x is None) > > * operator.is_not_none -- equivalent to (lambda x: x is not None) > > Why so specific? None is quite specific (and widely used in different contexts) and is None/is not None tests are very common. > What's wrong with 'operator.is_(x, None)' and > 'operator.is_not(x, None)'? Those both work today. But they cannot be used directly with all(), any(), filter(), takewhile(), dropwhile() etc. without an ugly lambda or playing with partial(). Which one of the following do you prefer? * filter((lambda x: x is None), iterable) * filter(functools.partial(operator.is_, None), iterable) * filter(None, (x is None for x in iterable)) * filter(operator.is_none, iterable) > > 2. ...to add: > > > > * operator.anti_caller (or e.g. functools.negator?) -- equivalent to: > > > > def anti_caller(func): > > def call_and_negate(*args, **kwargs): > > return not func(*args, **kwargs) > > return call_and_negate > > This one seems overkill for the standard library. But itertools.filterfalse() was added -- which is a counterpart of filter(). Why not also cover all(), any(), takewhile(), dropwhile() etc. with only one additional function? Regards.
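[For concreteness, the helpers being debated in this thread are only a few lines of user code. A sketch follows; is_none, is_not_none and negated are the proposed names from the thread, not anything that exists in operator or functools.]

```python
from itertools import dropwhile, filterfalse

def is_none(x):
    """The proposed operator.is_none."""
    return x is None

def is_not_none(x):
    """The proposed operator.is_not_none."""
    return x is not None

def negated(func):
    """The proposed negator: invert a predicate's boolean result."""
    def call_and_negate(*args, **kwargs):
        return not func(*args, **kwargs)
    return call_and_negate

data = [0, None, 2, None, 4]
assert list(filter(is_not_none, data)) == [0, 2, 4]
# For filter() alone, itertools.filterfalse() already plays this role:
assert list(filterfalse(is_none, data)) == list(filter(negated(is_none), data))
# ...but takewhile()/dropwhile() have no "false" variants, hence the request:
assert list(dropwhile(negated(str.isalnum), ['--', '!', 'abc', 'def'])) == ['abc', 'def']
```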
*j From phd at phdru.name Sun Mar 27 14:02:56 2011 From: phd at phdru.name (Oleg Broytman) Date: Sun, 27 Mar 2011 16:02:56 +0400 Subject: [Python-ideas] Two small functional-style-related improvements In-Reply-To: <20110327115326.GA2197@chopin.edu.pl> References: <20110327023041.GB2510@chopin.edu.pl> <877hbltftp.fsf@benfinney.id.au> <20110327115326.GA2197@chopin.edu.pl> Message-ID: <20110327120255.GA31149@iskra.aviel.ru> On Sun, Mar 27, 2011 at 01:53:26PM +0200, Jan Kaliszewski wrote: > Which one of the following do you prefer? > > * filter((lambda x: x is None), iterable) > * filter(functools.partial(operator.is_, None), iterable) > * filter(None, (x is None for x in iterable)) > * filter(operator.is_none, iterable) I prefer [x for x in iterable if x is not None] Oleg. -- Oleg Broytman http://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. From ben+python at benfinney.id.au Sun Mar 27 14:46:13 2011 From: ben+python at benfinney.id.au (Ben Finney) Date: Sun, 27 Mar 2011 23:46:13 +1100 Subject: [Python-ideas] Two small functional-style-related improvements References: <20110327023041.GB2510@chopin.edu.pl> <877hbltftp.fsf@benfinney.id.au> <20110327115326.GA2197@chopin.edu.pl> Message-ID: <877hbkspd6.fsf@benfinney.id.au> Jan Kaliszewski writes: > Which one of the following do you prefer? > > * filter((lambda x: x is None), iterable) > * filter(functools.partial(operator.is_, None), iterable) > * filter(None, (x is None for x in iterable)) > * filter(operator.is_none, iterable) (x for x in iterable if x is None) -- \ "The Way to see by Faith is to shut the Eye of Reason."
| `\ --Benjamin Franklin | _o__) | Ben Finney From zuo at chopin.edu.pl Sun Mar 27 15:05:36 2011 From: zuo at chopin.edu.pl (Jan Kaliszewski) Date: Sun, 27 Mar 2011 15:05:36 +0200 Subject: [Python-ideas] Two small functional-style-related improvements In-Reply-To: <4D8EB1F8.2090004@pearwood.info> References: <20110327023041.GB2510@chopin.edu.pl> <4D8EB1F8.2090004@pearwood.info> Message-ID: <20110327130536.GB2197@chopin.edu.pl> Steven D'Aprano dixit (2011-03-27, 14:41): > Jan Kaliszewski wrote: [...] > >* operator.is_none -- equivalent to (lambda x: x is None) > >* operator.is_not_none -- equivalent to (lambda x: x is not None) > > > >...making 'is None'/'is not None' tests with any(), all(), > >filter(), itertools.takewhile/dropwhile() more convenient and readable > >(possibly also optimised for speed). > > How is > > import operator > any(map(operator.is_none, iterable)) > > more convenient and readable than: > > any(x is None for x in iterable) Indeed it isn't -- but I'd prefer e.g.: filter(is_not_none, iterable) ...rather than: filter(None, (x is not None for x in iterable)) (to say nothing of more complex cases, where an additional inner for-loop makes the expression much less readable). And --what is maybe more important-- such functions as itertools.dropwhile, itertools.takewhile, itertools.groupby (together with .sort/sorted) do not have their generator expression counterparts (at least reasonably simple ones). And then, if we wish to stay in the functional style, we must use ugly lambdas/partial for now. > >2. ...to add: > > > >* operator.anti_caller (or e.g. functools.negator?) -- equivalent to: > > > >def anti_caller(func): > > def call_and_negate(*args, **kwargs): > > return not func(*args, **kwargs) > > return call_and_negate > > This seems to be a decorator, so I don't believe it belongs in the > operator module. Yes, maybe rather: functools.negated > Either way, I don't see the point to it. [...] > What is your use-case for this function?
I'd like to be able to do: assert any(dropwhile(negated(str.isalnum), takewhile(negated(str.isspace), my_names))) ...instead of: assert any(dropwhile(lambda name: not name.isalnum(), takewhile(lambda name: not name.isspace(), my_names))) Regards. *j From zuo at chopin.edu.pl Sun Mar 27 15:11:49 2011 From: zuo at chopin.edu.pl (Jan Kaliszewski) Date: Sun, 27 Mar 2011 15:11:49 +0200 Subject: [Python-ideas] Two small functional-style-related improvements In-Reply-To: <877hbkspd6.fsf@benfinney.id.au> References: <20110327023041.GB2510@chopin.edu.pl> <877hbltftp.fsf@benfinney.id.au> <20110327115326.GA2197@chopin.edu.pl> <877hbkspd6.fsf@benfinney.id.au> Message-ID: <20110327131149.GA3164@chopin.edu.pl> Ben Finney dixit (2011-03-27, 23:46): > Jan Kaliszewski writes: > > > Which one of the following do you prefer? > > > > * filter((lambda x: x is None), iterable) > > * filter(functools.partial(operator.is_, None), iterable) > > * filter(None, (x is None for x in iterable)) > > * filter(operator.is_none, iterable) > > (x for x in iterable if x is None) Yeah, you're right. Too little sleep... But anyway, I'd prefer filter(is_none, iterable) :) *j From ben+python at benfinney.id.au Sun Mar 27 15:16:17 2011 From: ben+python at benfinney.id.au (Ben Finney) Date: Mon, 28 Mar 2011 00:16:17 +1100 Subject: [Python-ideas] Two small functional-style-related improvements References: <20110327023041.GB2510@chopin.edu.pl> <877hbltftp.fsf@benfinney.id.au> <20110327115326.GA2197@chopin.edu.pl> <877hbkspd6.fsf@benfinney.id.au> <20110327131149.GA3164@chopin.edu.pl> Message-ID: <871v1ssnz2.fsf@benfinney.id.au> Jan Kaliszewski writes: > But anyway, I'd prefer filter(is_none, iterable) > :) Feel free to write such a function for your code, then. I don't see how it's important enough to be in the Python standard library. -- \ "Ridicule is the only weapon which can be used against | `\ unintelligible propositions."
--Thomas Jefferson, 1816-07-30 | _o__) | Ben Finney From p.f.moore at gmail.com Sun Mar 27 16:38:58 2011 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 27 Mar 2011 15:38:58 +0100 Subject: [Python-ideas] Two small functional-style-related improvements In-Reply-To: <20110327130536.GB2197@chopin.edu.pl> References: <20110327023041.GB2510@chopin.edu.pl> <4D8EB1F8.2090004@pearwood.info> <20110327130536.GB2197@chopin.edu.pl> Message-ID: On 27 March 2011 14:05, Jan Kaliszewski wrote: > I'd like to be able to do: > > assert any(dropwhile(negated(str.isalnum), > takewhile(negated(str.isspace), my_names))) > > ...instead of: > > assert any(dropwhile(lambda name: not name.isalnum(), > takewhile(lambda name: not name.isspace(), > my_names))) Do you honestly find *either* of those readable? I can't even work out what they do well enough to try to come up with an alternative version using generator expressions... If I needed an assertion like that, I'd write a function with a name that explains what's going on, which does the whole of that dropwhile-takewhile routine (probably using multiple lines, with comments) and then use that as assert(any(whatever_this_is(my_names))). -1 on putting anything in the stdlib which encourages obfuscated code like that. (Note: If that style suits you, then writing your own functions in your codebase to support it is fine, I just don't see it as something the stdlib should be doing). Paul.
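[Paul's named-helper suggestion can be illustrated concretely. A sketch; the helper's name, and the reading of what Jan's assert is meant to check, are assumptions.]

```python
from itertools import dropwhile, takewhile

def leading_nonspace_alnum_names(names):
    """Take names until the first all-whitespace one,
    then drop any leading names that are not alphanumeric."""
    up_to_space = takewhile(lambda name: not name.isspace(), names)
    return dropwhile(lambda name: not name.isalnum(), up_to_space)

my_names = ['--', 'spam', 'eggs', '   ', 'ignored']
# The opaque one-liner becomes a self-describing call:
assert any(leading_nonspace_alnum_names(my_names))
assert list(leading_nonspace_alnum_names(my_names)) == ['spam', 'eggs']
```

Once the routine is abstracted and named like this, the lambda-vs-negated question disappears from the call site entirely, which is the point Paul and Terry make.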
From zuo at chopin.edu.pl Sun Mar 27 16:41:31 2011 From: zuo at chopin.edu.pl (Jan Kaliszewski) Date: Sun, 27 Mar 2011 16:41:31 +0200 Subject: [Python-ideas] Two small functional-style-related improvements In-Reply-To: <871v1ssnz2.fsf@benfinney.id.au> References: <20110327023041.GB2510@chopin.edu.pl> <877hbltftp.fsf@benfinney.id.au> <20110327115326.GA2197@chopin.edu.pl> <877hbkspd6.fsf@benfinney.id.au> <20110327131149.GA3164@chopin.edu.pl> <871v1ssnz2.fsf@benfinney.id.au> Message-ID: <20110327144131.GA3724@chopin.edu.pl> Ben Finney dixit (2011-03-28, 00:16): > Jan Kaliszewski writes: > > > But anyway, I'd prefer filter(is_none, iterable) > > :) > > Feel free to write such a function for your code, then. I don't see how > it's important enough to be in the Python standard library. But what about dropwhile, takewhile and other predicate-based functions that do not have simple gen-exp/comprehension counterparts? Regards. *j From masklinn at masklinn.net Sun Mar 27 16:43:18 2011 From: masklinn at masklinn.net (Masklinn) Date: Sun, 27 Mar 2011 16:43:18 +0200 Subject: [Python-ideas] Two small functional-style-related improvements In-Reply-To: <877hbltftp.fsf@benfinney.id.au> References: <20110327023041.GB2510@chopin.edu.pl> <877hbltftp.fsf@benfinney.id.au> Message-ID: <8E5D9D41-CF02-446F-B837-9299B12F7A48@masklinn.net> On 2011-03-27, at 05:14 , Ben Finney wrote: > Jan Kaliszewski writes: > >> IMHO it'd be nice... >> >> 1. ...to add: >> >> * operator.is_none -- equivalent to (lambda x: x is None) >> * operator.is_not_none -- equivalent to (lambda x: x is not None) > > Why so specific? What's wrong with 'operator.is_(x, None)' and > 'operator.is_not(x, None)'? Those both work today.
It would be nice if the binary pseudo-operators in ``operator`` supported right sections though: what would be really nifty here would be to write predicate_user(operator.is_(None), collection) and have it behave as predicate_user(lambda value: operator.is_(value, None), collection) (note: order is important, I think the most common use case for non-commutative operator sections is to fix the second operand). This would not only obviate the purported need for an ``is_none`` operator method, it would make operator far more interesting for all higher-order tasks. From zuo at chopin.edu.pl Sun Mar 27 17:46:40 2011 From: zuo at chopin.edu.pl (Jan Kaliszewski) Date: Sun, 27 Mar 2011 17:46:40 +0200 Subject: [Python-ideas] namedtuple.abc -- draft implementation (was: namedtuple() subclasses again) In-Reply-To: <20110325140637.GB2329@chopin.edu.pl> References: <20110325140637.GB2329@chopin.edu.pl> Message-ID: <20110327154640.GB3724@chopin.edu.pl> Here is a draft implementation: http://dpaste.org/T9w6/ Please note that the namedtuple API is not touched, except adding an 'abc' attribute (being the abstract base class in question). Regards. *j From solipsis at pitrou.net Sun Mar 27 17:59:55 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 27 Mar 2011 17:59:55 +0200 Subject: [Python-ideas] namedtuple() subclasses again References: <20110325140637.GB2329@chopin.edu.pl> Message-ID: <20110327175955.0674655e@pitrou.net> On Fri, 25 Mar 2011 15:06:37 +0100 Jan Kaliszewski wrote: > > ...and especially: > > class MyAbstractNamedTuple(namedtuple.abc): > def __repr__(self): > "My sophisticated __repr__()" > # and e.g. some new methods... > > class MyNamedTupleA(MyAbstractNamedTuple): > _fields = 'a b c' > > class MyNamedTupleB(MyAbstractNamedTuple): > _fields = ( > 'one_field', > 'another_field', > 'and_another_one', > ) Can't you use multiple inheritance instead? >>> class Base(tuple): ... def _method(self): return 5 ... __slots__ = () ...
>>> class C(Base, namedtuple('Point', 'x y')): ... __slots__ = () ... >>> c = C(x=1, y=2) >>> c C(x=1, y=2) >>> c._method() 5 >>> c.__dict__ Traceback (most recent call last): File "", line 1, in AttributeError: 'C' object has no attribute '__dict__' >>> a, b = c >>> a 1 >>> b 2 Antoine. From raymond.hettinger at gmail.com Sun Mar 27 19:13:20 2011 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Sun, 27 Mar 2011 10:13:20 -0700 Subject: [Python-ideas] namedtuple.abc -- draft implementation (was: namedtuple() subclasses again) In-Reply-To: <20110327154640.GB3724@chopin.edu.pl> References: <20110325140637.GB2329@chopin.edu.pl> <20110327154640.GB3724@chopin.edu.pl> Message-ID: <8B2AE4D3-6FC1-47DF-BB95-D6896F63C04C@gmail.com> On Mar 27, 2011, at 8:46 AM, Jan Kaliszewski wrote: > Here is a draft implementation: > > http://dpaste.org/T9w6/ > > Please note that namedtuple API is not touched, except adding 'abc' > attribute (being the abstract base class in question). There is an open tracker item for a namedtuple.abc. Please attach your patch there (issue7796). Raymond From tjreedy at udel.edu Sun Mar 27 21:09:39 2011 From: tjreedy at udel.edu (Terry Reedy) Date: Sun, 27 Mar 2011 15:09:39 -0400 Subject: [Python-ideas] Two small functional-style-related improvements In-Reply-To: <20110327115326.GA2197@chopin.edu.pl> References: <20110327023041.GB2510@chopin.edu.pl> <877hbltftp.fsf@benfinney.id.au> <20110327115326.GA2197@chopin.edu.pl> Message-ID: On 3/27/2011 7:53 AM, Jan Kaliszewski wrote: > Which one of the following do you prefer? > > * filter((lambda x: x is None), iterable) > * filter(functools.partial(operator.is_, None), iterable)
This can, of course, always be done at the point of use: for ob in iterable: if ob is not None: do_something(ob) If you want to separate the conditional from the action and hide the conditional, then write a trivial, specific, filter generator: def dropNone(it): for ob in it: if ob is not None: yield ob for ob in dropNone(iterable): do_something(ob) For repeated use, I prefer dropNone to any filter version, including your proposal. > * filter(None, (x is None for x in iterable)) This is not the same thing as the above as it instead produces an iterable of Trues, one for each None in iterable. -- Terry Jan Reedy From tjreedy at udel.edu Sun Mar 27 21:12:53 2011 From: tjreedy at udel.edu (Terry Reedy) Date: Sun, 27 Mar 2011 15:12:53 -0400 Subject: [Python-ideas] Two small functional-style-related improvements In-Reply-To: References: <20110327023041.GB2510@chopin.edu.pl> <4D8EB1F8.2090004@pearwood.info> <20110327130536.GB2197@chopin.edu.pl> Message-ID: On 3/27/2011 10:38 AM, Paul Moore wrote: > On 27 March 2011 14:05, Jan Kaliszewski wrote: >> I'd would like to be able to do: >> >> assert any(dropwhile(negated(str.isalnum), >> takewhile(negated(str.isspace), my_names))) >> >> ...instead of: >> >> assert any(dropwhile(lambda name: not name.isalnum(), >> takewhile(lambda name: not name.isspace(), >> my_names))) > > Do you honestly find *either* of those readable? I can't even work out > what they do well enough to try to come up with an alternative version > using generator expressions... Me neither, not without more motivation than list reading. > If I needed an assertion like that, I'd > write a function with a name that explains what's going on, which does > the whole of that dropwhile-takewhile routine (probably using multiple > lines, with comments) and then use that as > assert(any(whatever_this_is(my_names))). Ditto. And once the function is abstracted and named, the detailed implementation does not matter so much. 
> -1 on putting anything in the stdlib which encourages obfuscated code
> like that. (Note: If that style suits you, then writing your own
> functions in your codebase to support it is fine, I just don't see it
> as something the stdlib should be doing).

-- 
Terry Jan Reedy

From zuo at chopin.edu.pl Sun Mar 27 22:53:16 2011
From: zuo at chopin.edu.pl (Jan Kaliszewski)
Date: Sun, 27 Mar 2011 22:53:16 +0200
Subject: [Python-ideas] namedtuple() subclasses again
In-Reply-To: <20110327175955.0674655e@pitrou.net>
References: <20110325140637.GB2329@chopin.edu.pl> <20110327175955.0674655e@pitrou.net>
Message-ID: <20110327205316.GC3724@chopin.edu.pl>

Antoine Pitrou dixit (2011-03-27, 17:59):
> Can't you use multiple inheritance instead?
>
> >>> class Base(tuple):
> ...     def _method(self): return 5
> ...     __slots__ = ()
> ...
> >>> class C(Base, namedtuple('Point', 'x y')):
> ...     __slots__ = ()
> ...
> >>> c = C(x=1, y=2)
> >>> c
> C(x=1, y=2)
> >>> c._method()
> 5
> >>> c.__dict__
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
> AttributeError: 'C' object has no attribute '__dict__'
> >>> a, b = c
> >>> a
> 1
> >>> b
> 2

You're right. But my idea was to make it simple and clean from the user point of view (without all those __slots__ etc.).

Another approach could be a decorator transforming a given class into a namedtuple with the methods defined in that class:

@namedtuple.from_class
class MyRecord:  # or e.g. class MyRecord(MyMixinWithSomeMethods):
    fields = 'username password'
    def __str__(self):
        return '{0.__class__}({0.username}, ...)'.format(self)

Regards.
*j

From jkbbwr at gmail.com Sun Mar 27 23:33:52 2011
From: jkbbwr at gmail.com (Jakob Bowyer)
Date: Sun, 27 Mar 2011 22:33:52 +0100
Subject: [Python-ideas] Extending error handling on with statements.
Message-ID: 

I personally love using with statements when handling file like objects. This is all well and good until an exception is thrown from the with statement.
This is OK if you expect the exception, because you can use try and except, but personally I feel that another clause on with would feel more 'pythonic': you could fail the with statement with an exception, jump to the clause, then jump back to the with statement, trying the code from the clause. E.g., rather than:

try:
    with open('nofile.txt','r') as inp:
        #nofile.txt does not exist and throws an exception
except IOError:
    with open('another.txt','r') as inp:
        #carry on where you left off...

you could simply have:

with open('nofile.txt','r') as inp:
    #exception here
else:
    #give a new file to the with statement here and/or run some panic code
    #where your program does something to fix the situation.

It could be a foolish idea as I am only an intermediate user, but I thought it might be worth voicing nonetheless, as you don't learn from staying silent.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zuo at chopin.edu.pl Sun Mar 27 23:34:10 2011
From: zuo at chopin.edu.pl (Jan Kaliszewski)
Date: Sun, 27 Mar 2011 23:34:10 +0200
Subject: [Python-ideas] namedtuple.abc -- shortened implementation (was: namedtuple() subclasses again)
In-Reply-To: <20110325140637.GB2329@chopin.edu.pl>
References: <20110325140637.GB2329@chopin.edu.pl>
Message-ID: <20110327213410.GD3724@chopin.edu.pl>

Here is another --shortened and possibly better-- draft implementation:

http://dpaste.org/2aiQ/

Regards.
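[Editorial note: the from_class decorator Jan sketches in the namedtuple thread above can be prototyped with today's tools; a rough sketch, where the decorator name and the details are illustrative guesses, not a proposed stdlib API:]

```python
from collections import namedtuple

def namedtuple_from_class(cls):
    # Hypothetical decorator in the spirit of the @namedtuple.from_class
    # idea: build a namedtuple from cls.fields, then graft the class's
    # own methods onto a subclass of it.
    nt = namedtuple(cls.__name__, cls.fields)
    ns = {name: value for name, value in vars(cls).items()
          if name not in ('__dict__', '__weakref__', 'fields')}
    ns['__slots__'] = ()  # keep instances dict-free, like namedtuple itself
    return type(cls.__name__, (nt,), ns)

@namedtuple_from_class
class MyRecord:
    fields = 'username password'
    def __str__(self):
        return '{0.__class__.__name__}({0.username}, ...)'.format(self)

r = MyRecord('guido', 'secret')
# r is a real tuple subclass with the extra method:
# r.username == 'guido', str(r) == 'MyRecord(guido, ...)', no __dict__
```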
*j

From raymond.hettinger at gmail.com Sun Mar 27 23:40:55 2011
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Sun, 27 Mar 2011 14:40:55 -0700
Subject: [Python-ideas] namedtuple() subclasses again
In-Reply-To: <20110327205316.GC3724@chopin.edu.pl>
References: <20110325140637.GB2329@chopin.edu.pl> <20110327175955.0674655e@pitrou.net> <20110327205316.GC3724@chopin.edu.pl>
Message-ID: <4F555176-7CF0-4EBF-8C3A-8A5C2B05894B@gmail.com>

On Mar 27, 2011, at 1:53 PM, Jan Kaliszewski wrote:
> Another approach could be a decorator transforming a given class into
> a namedtuple with the methods defined in that class:
>
> @namedtuple.from_class
> class MyRecord:  # or e.g. class MyRecord(MyMixinWithSomeMethods):
>     fields = 'username password'
>     def __str__(self):
>         return '{0.__class__}({0.username}, ...)'.format(self)

For the record (pun intended), I'm opposed to changing the API for namedtuples. It is a mature, successful API that stands to benefit very little from making a second way to do it.

Experimentation is great and it would be nice to have alternative recipes posted in the ASPN Cookbook or some other place, but I believe the standard library is the wrong place to fiat in a second way to create them. If a new recipe gains traction, we can link to it from the docs.

Python development is currently suffering from excess enthusiasm with advanced code manipulations occurring upon instantiation -- metaclasses, decorators, and context managers are fun to play with, but no fun to debug or trace through when something goes wrong.

Raymond

From steve at pearwood.info Mon Mar 28 00:40:17 2011
From: steve at pearwood.info (Steven D'Aprano)
Date: Mon, 28 Mar 2011 09:40:17 +1100
Subject: [Python-ideas] Extending error handling on with statements.
In-Reply-To: 
References: 
Message-ID: <4D8FBCD1.1030601@pearwood.info>

Jakob Bowyer wrote:
> I personally love using with statements when handling file like objects.
> This is all well and good until an exception is thrown from the with
> statement. This is ok if you expect the exception because you can use try

You should always expect an exception when doing file I/O.

> and except but personally I feel that another condition to with would feel
> more 'pythonic' this means that you could fail the with statement with an
> exception jump to the clause, then jump back to the with statement trying
> the code in the clause e.g. rather than
>
> try:
>     with open('nofile.txt','r') as inp:
>         #nofile.txt does not exist and throws an exception
> except IOError:
>     with open('another.txt','r') as inp:
>         #carry on where you left off...
>
> You could simply have
>
> with open('nofile.txt','r') as inp:
>     #exception here
> else:
>     #give a new file to the with statement here and/or run some panic code
>     where your program does something to fix the situation.

You say "jump back to the with statement", and "give a new file to the with statement". It sounds like you are thinking of turning with into a looping construct, e.g. this BASIC-like pseudo-code:

10 somefile = 'nofile.txt'
20 with open(somefile, 'r') as inp:
...
70 else:  # Try again with a new file.
80     somefile = 'a different file.txt'
90     goto 20

The problem with this is obvious: if the *second* file also fails to open, you will loop forever as the handler jumps to the `else` clause, sets the same name, and returns to try the with statement again. This will be an annoying source of errors.

I don't know if I like this idea: I can see that it can be useful to repeat a block if an error occurs, but I think that it needs to be more obvious that you are looping.

You also seem to be assuming that the only error that will be caught will be "file not found" type errors. The beauty of a try-except block is that you can have different handlers depending on the error:

somefile = 42  # Oops!
try:
    with open(somefile, 'r') as inp:
        ...
except TypeError:
    handle_filename_not_a_string()

Your suggested `else` clause loses all information about what sort of error occurs, as well as where:

outfile = 'output.txt'
with open(outfile, 'w') as out:
    out.write(42)  # Oops!
else:
    # Try another file.
    outfile = 'another file.txt'

Lastly, your suggested syntax would be confusing. In try blocks, the `else` clause runs when there is no error. In with blocks, it would run when there is an error. That's not helpful: things that look similar should be similar.

Of course, you can fix this problem by changing the with statement to use `except` clauses:

with open(fname) as f:
    ...
except TypeError:
    ...
except IOError as e:
    ...
else:
    # no error
    ...

but this adds much complexity to the with statement, and except for the magic goto, you can already do that at the cost of one line and one indent level:

try:
    with open(fname) as f:
        ...
except TypeError:
    ...
except IOError as e:
    ...
else:
    # no error
    ...

Saving one indent level and a line doesn't seem important enough for new syntax, especially new syntax which essentially duplicates functionality that already exists.

-- 
Steven

From zuo at chopin.edu.pl Mon Mar 28 01:18:54 2011
From: zuo at chopin.edu.pl (Jan Kaliszewski)
Date: Mon, 28 Mar 2011 01:18:54 +0200
Subject: [Python-ideas] Extending error handling on with statements.
In-Reply-To: <4D8FBCD1.1030601@pearwood.info>
References: <4D8FBCD1.1030601@pearwood.info>
Message-ID: <20110327231854.GA7208@chopin.edu.pl>

I don't like the idea of that magic goto.

But:

with ...:
    ...
except ...:
    ...
...

as a shortcut for:

try:
    with ...:
        ...
except ...:
    ...
...

IMHO seems to be worth considering.

Regards.
*j

From mwm at mired.org Mon Mar 28 04:54:58 2011
From: mwm at mired.org (Mike Meyer)
Date: Sun, 27 Mar 2011 22:54:58 -0400
Subject: [Python-ideas] Extending error handling on with statements.
In-Reply-To: <20110327231854.GA7208@chopin.edu.pl>
References: <4D8FBCD1.1030601@pearwood.info> <20110327231854.GA7208@chopin.edu.pl>
Message-ID: <20110327225458.52d72eaa@bhuda.mired.org>

On Mon, 28 Mar 2011 01:18:54 +0200 Jan Kaliszewski wrote:
> I don't like the idea of that magic goto.
>
> But:
>
> with ...:
>     ...
> except ...:
>     ...
> ...
>
> as a shortcut for:
>
> try:
>     with ...:
>         ...
> except ...:
>     ...
> ...
>
> IMHO seems to be worth considering.

I've played around with this kind of thing in the past, and always eventually decided there wasn't a nice way to handle all the various desirable uses. The with statement has two bits of code, and wraps a try/finally around one of them. You might legitimately want to handle exceptions in either bit of code differently. Which means there are three things you might reasonably want an except clause on a with statement to do:

1) Wrap the entire statement (what Jan proposed).
2) Wrap the block contained by the with.
3) Be part of the try/finally implied by the with.

On the other hand, the concept of "magic goto" does generate another idea (and hence another post).

http://www.mired.org/consulting.html
Independent Software developer/SCM consultant, email for more information.
O< ascii ribbon campaign - stop html mail - www.asciiribbon.org

From ncoghlan at gmail.com Mon Mar 28 05:09:20 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 28 Mar 2011 13:09:20 +1000
Subject: [Python-ideas] Extending error handling on with statements.
In-Reply-To: 
References: 
Message-ID: 

On Mon, Mar 28, 2011 at 7:33 AM, Jakob Bowyer wrote:
> I personally love using with statements when handling file like objects.
> This is all well and good until an exception is thrown from the with
> statement.
> This is ok if you expect the exception because you can use try
> and except but personally I feel that another condition to with would feel
> more 'pythonic' this means that you could fail the with statement with an
> exception jump to the clause, then jump back to the with statement trying
> the code in the clause e.g. rather than
> try:
>     with open('nofile.txt','r') as inp:
>         #nofile.txt does not exist and throws an exception
> except IOError:
>     with open('another.txt','r') as inp:
>         #carry on where you left off...
> You could simply have
> with open('nofile.txt','r') as inp:
>     #exception here
> else:
>     #give a new file to the with statement here and/or run some panic code

Don't fight the language, just write a new CM that does what you want:

with open_any('r', 'nofile.txt', 'another.txt') as inp:
    # If we get here, one of the files was opened
    # We can use inp.name to find out which one

(And open_any() is pretty easy to write as a generator with an initial loop containing a try/except block, an else clause on the loop that throws an exception, and then a subsequent with statement that yields the open file.)

You *really* need to be careful when wrapping try blocks around with statements, as they're almost always too broad (typically, you only want to cover the CM creation, not the entire body of the with statement).

Cheers,
Nick.

-- 
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From mwm at mired.org Mon Mar 28 05:15:25 2011
From: mwm at mired.org (Mike Meyer)
Date: Sun, 27 Mar 2011 23:15:25 -0400
Subject: [Python-ideas] Retry clause (was: Extending error handling on with statements.)
In-Reply-To: 
References: 
Message-ID: <20110327231525.25df089c@bhuda.mired.org>

On Sun, 27 Mar 2011 22:33:52 +0100 Jakob Bowyer wrote:
> I personally love using with statements when handling file like objects.
> This is all well and good until an exception is thrown from the with
> statement.
> This is ok if you expect the exception because you can use try
> and except but personally I feel that another condition to with would feel
> more 'pythonic' this means that you could fail the with statement with an
> exception jump to the clause, then jump back to the with statement trying
> the code in the clause e.g. rather than

The idea of exception handlers "jumping back" is actually good enough to have been implemented in one language (Eiffel), but sufficiently different from what "except" does that I think it calls for new syntax.

How about a "retry" clause for try statements? I think it runs into the same problems as an "except" clause when it comes to adding it to the with clause, so let's skip that for now.

retry ...: as part of a try clause would work just like an except clause: if the exception was one of those listed after retry, then you'd enter the block following the retry, otherwise you skip it. If the retry block raises an exception or hits "return" or "yield", it behaves just like an except block. If the retry block executes its last statement, it then branches back to the first statement of the "try" block.

This would let you write something like:

i = 0
try:
    with open("tmpname.%d" % i, 'r') as inp:
        ....
retry IOError:
    if IOError.errno != ENOENT:
        raise
    i += 1
    if i > 100:
        raise

to search for a file.

http://www.mired.org/consulting.html
Independent Software developer/SCM consultant, email for more information.
O< ascii ribbon campaign - stop html mail - www.asciiribbon.org

From greg.ewing at canterbury.ac.nz Mon Mar 28 07:11:30 2011
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Mon, 28 Mar 2011 18:11:30 +1300
Subject: [Python-ideas] Extending error handling on with statements.
In-Reply-To: 
References: 
Message-ID: <4D901882.1080104@canterbury.ac.nz>

Jakob Bowyer wrote:
> try:
>     with open('nofile.txt','r') as inp:
>         #nofile.txt does not exist and throws an exception
> except IOError:
>     with open('another.txt','r') as inp:
>         #carry on where you left off...

You could write this as

try:
    inp = open('nofile.txt','r')
except IOError:
    inp = open('another.txt','r')
with inp:
    ...

-- 
Greg

From yoavglazner at gmail.com Mon Mar 28 07:22:57 2011
From: yoavglazner at gmail.com (yoav glazner)
Date: Mon, 28 Mar 2011 07:22:57 +0200
Subject: [Python-ideas] Extending error handling on with statements.
In-Reply-To: 
References: 
Message-ID: 

On Mon, Mar 28, 2011 at 5:09 AM, Nick Coghlan wrote:
> On Mon, Mar 28, 2011 at 7:33 AM, Jakob Bowyer wrote:
> > I personally love using with statements when handling file like objects.
> > This is all well and good until an exception is thrown from the with
> > statement. This is ok if you expect the exception because you can use try
> > and except but personally I feel that another condition to with would feel
> > more 'pythonic' this means that you could fail the with statement with an
> > exception jump to the clause, then jump back to the with statement trying
> > the code in the clause e.g. rather than
> > try:
> >     with open('nofile.txt','r') as inp:
> >         #nofile.txt does not exist and throws an exception
> > except IOError:
> >     with open('another.txt','r') as inp:
> >         #carry on where you left off...
> > You could simply have
> > with open('nofile.txt','r') as inp:
> >     #exception here
> > else:
> >     #give a new file to the with statement here and/or run some panic code
>
> Don't fight the language, just write a new CM that does what you want:
>
> with open_any('r', 'nofile.txt', 'another.txt') as inp:
>     # If we get here, one of the files was opened
>     # We can use inp.name to find out which one
>
> (And open_any() is pretty easy to write as a generator with an initial
> loop containing a try/except block, an else clause on the loop that
> throws an exception, and then a subsequent with statement that yields
> the open file)
>
> You *really* need to be careful when wrapping try blocks around with
> statements, as they're almost always too broad (typically, you only
> want to cover the CM creation, not the entire body of the with
> statement).

I think it can be nice to write:

try with open('idonthavethisfile.py'):
    ..
except Exception as expectedException:
    ..

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From andreengels at gmail.com Mon Mar 28 07:33:47 2011
From: andreengels at gmail.com (Andre Engels)
Date: Mon, 28 Mar 2011 07:33:47 +0200
Subject: [Python-ideas] Retry clause (was: Extending error handling on with statements.)
In-Reply-To: <20110327231525.25df089c@bhuda.mired.org>
References: <20110327231525.25df089c@bhuda.mired.org>
Message-ID: 

On Mon, Mar 28, 2011 at 5:15 AM, Mike Meyer wrote:
> On Sun, 27 Mar 2011 22:33:52 +0100 Jakob Bowyer wrote:
>> I personally love using with statements when handling file like objects.
>> This is all well and good until an exception is thrown from the with
>> statement.
>> This is ok if you expect the exception because you can use try
>> and except but personally I feel that another condition to with would feel
>> more 'pythonic' this means that you could fail the with statement with an
>> exception jump to the clause, then jump back to the with statement trying
>> the code in the clause e.g. rather than
>
> The idea of exception handlers "jumping back" is actually good enough
> to have been implemented in one language (Eiffel), but sufficiently
> different from what "except" does that I think it calls for new
> syntax.
>
> How about a "retry" clause for try statements? I think it runs into
> the same problems as an "except" clause when it comes to adding it to
> the with clause, so let's skip that for now.
>
> retry ...: as part of a try clause would work just like an except
> clause: if the exception was one of those listed after retry, then
> you'd enter the block following the retry, otherwise you skip it. If
> the retry block raises an exception or hits "return" or "yield", it
> behaves just like an except block. If the retry block executes its
> last statement, it then branches back to the first statement of the
> "try" block.
>
> This would let you write something like:
>
> i = 0
> try:
>     with open("tmpname.%d" % i, 'r') as inp:
>         ....
> retry IOError:
>     if IOError.errno != ENOENT:
>         raise
>     i += 1
>     if i > 100:
>         raise
>
> to search for a file.

I think 'retry' would be clearer if it is used as a command on its own, like return, break or continue, but then only within an except block. Your code above could then go:

...
except IOError:
    if IOError.errno != ENOENT:
        raise
    i += 1
    if i <= 100:
        retry
    else:
        raise

-- 
André Engels, andreengels at gmail.com

From ncoghlan at gmail.com Mon Mar 28 08:06:38 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 28 Mar 2011 16:06:38 +1000
Subject: [Python-ideas] Retry clause (was: Extending error handling on with statements.)
In-Reply-To: <20110327231525.25df089c@bhuda.mired.org>
References: <20110327231525.25df089c@bhuda.mired.org>
Message-ID: 

On Mon, Mar 28, 2011 at 1:15 PM, Mike Meyer wrote:
> On Sun, 27 Mar 2011 22:33:52 +0100 Jakob Bowyer wrote:
>> I personally love using with statements when handling file like objects.
>> This is all well and good until an exception is thrown from the with
>> statement. This is ok if you expect the exception because you can use try
>> and except but personally I feel that another condition to with would feel
>> more 'pythonic' this means that you could fail the with statement with an
>> exception jump to the clause, then jump back to the with statement trying
>> the code in the clause e.g. rather than
>
> The idea of exception handlers "jumping back" is actually good enough
> to have been implemented in one language (Eiffel), but sufficiently
> different from what "except" does that I think it calls for new
> syntax.

If you want a loop, write a loop.

for fname in possible_fnames:
    try:
        f = open(fname)
    except IOError:
        continue
    break
else:
    raise RuntimeError("Could not open any of {}".format(possible_fnames))

with f:
    # Do stuff

Turning the above into a custom "open_any" context manager is trivial.

Cheers,
Nick.

-- 
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From ddasilva at umd.edu Mon Mar 28 19:11:44 2011
From: ddasilva at umd.edu (Daniel da Silva)
Date: Mon, 28 Mar 2011 13:11:44 -0400
Subject: [Python-ideas] namedtuple() subclasses again
In-Reply-To: <4F555176-7CF0-4EBF-8C3A-8A5C2B05894B@gmail.com>
References: <20110325140637.GB2329@chopin.edu.pl> <20110327175955.0674655e@pitrou.net> <20110327205316.GC3724@chopin.edu.pl> <4F555176-7CF0-4EBF-8C3A-8A5C2B05894B@gmail.com>
Message-ID: 

Is there a use case other than adding __repr__? The most popular way to use namedtuples is just as a shorthand for defining a special type of simple class.
But if you're going to be adding methods, you're breaking out of the simple situation they are used for, and you might as well just free yourself and make it a class.

Daniel

On Sun, Mar 27, 2011 at 5:40 PM, Raymond Hettinger <raymond.hettinger at gmail.com> wrote:
>
> On Mar 27, 2011, at 1:53 PM, Jan Kaliszewski wrote:
> > Another approach could be a decorator transforming a given class into
> > a namedtuple with the methods defined in that class:
> >
> > @namedtuple.from_class
> > class MyRecord:  # or e.g. class MyRecord(MyMixinWithSomeMethods):
> >     fields = 'username password'
> >     def __str__(self):
> >         return '{0.__class__}({0.username}, ...)'.format(self)
>
> For the record (pun intended), I'm opposed to changing the API for
> namedtuples.
>
> It is a mature, successful API that stands to benefit very little from
> making a second way to do it.
>
> Experimentation is great and it would be nice to have alternative recipes
> posted in the ASPN Cookbook or some other place, but I believe the standard
> library is the wrong place to fiat in a second way to create them. If a new
> recipe gains traction, we can link to it from the docs.
>
> Python development is currently suffering from excess enthusiasm with
> advanced code manipulations occurring upon instantiation -- metaclasses,
> decorators, and context managers are fun to play with, but no fun to debug
> or trace through when something goes wrong.
>
> Raymond
>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> http://mail.python.org/mailman/listinfo/python-ideas
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From g.brandl at gmx.net Mon Mar 28 19:18:44 2011
From: g.brandl at gmx.net (Georg Brandl)
Date: Mon, 28 Mar 2011 19:18:44 +0200
Subject: [Python-ideas] namedtuple() subclasses again
In-Reply-To: <4F555176-7CF0-4EBF-8C3A-8A5C2B05894B@gmail.com>
References: <20110325140637.GB2329@chopin.edu.pl> <20110327175955.0674655e@pitrou.net> <20110327205316.GC3724@chopin.edu.pl> <4F555176-7CF0-4EBF-8C3A-8A5C2B05894B@gmail.com>
Message-ID: 

On 27.03.2011 23:40, Raymond Hettinger wrote:
> Python development is currently suffering from excess enthusiasm with
> advanced code manipulations occurring upon instantiation -- metaclasses,
> decorators, and context managers are fun to play with, but no fun to debug or
> trace through when something goes wrong.

Not sure how context managers would fit in that category though.

Georg

From ddasilva at umd.edu Mon Mar 28 19:40:23 2011
From: ddasilva at umd.edu (Daniel da Silva)
Date: Mon, 28 Mar 2011 13:40:23 -0400
Subject: [Python-ideas] Python package file type
In-Reply-To: 
References: 
Message-ID: 

This seems to only focus on systems that *have* GUIs. Aren't they only a fraction of the systems that end up needing to install Python packages regularly?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ddasilva at umd.edu Mon Mar 28 19:46:07 2011
From: ddasilva at umd.edu (Daniel da Silva)
Date: Mon, 28 Mar 2011 13:46:07 -0400
Subject: [Python-ideas] Adding function checks to regex
In-Reply-To: 
References: <4D842245.7040707@mrabarnett.plus.com>
Message-ID: 

> I would approach that with
>
> numbers = (int(m.group()) for m in re.finditer(r"\b\d+\b", text))
> numbers = [n for n in numbers if 1 <= n <= 10]

To follow up on this: he has pointed out an existing way of doing something that fully covers the goal of your addition. The current way is straightforward, elegant, and self-describing, I believe.
I think if we have an obvious way to do it, we usually want to be consistent with our normal attempt of having one obvious way to do it.

If his way wasn't obvious, you may not be Dutch.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From python at mrabarnett.plus.com Mon Mar 28 21:34:15 2011
From: python at mrabarnett.plus.com (MRAB)
Date: Mon, 28 Mar 2011 20:34:15 +0100
Subject: [Python-ideas] Adding function checks to regex
In-Reply-To: 
References: <4D842245.7040707@mrabarnett.plus.com>
Message-ID: <4D90E2B7.3000005@mrabarnett.plus.com>

On 28/03/2011 18:46, Daniel da Silva wrote:
> I would approach that with
>
> numbers = (int(m.group()) for m in re.finditer(r"\b\d+\b", text))
> numbers = [n for n in numbers if 1 <= n <= 10]
>
> To follow up on this: he has pointed out an existing way of doing
> something that fully covers the goal of your addition. The current way
> is straightforward, elegant, and self-describing, I believe.
> I think if we have an obvious way to do it, we usually want to
> be consistent with our normal attempt of having one obvious way to do it.
>
> If his way wasn't obvious, you may not be Dutch.

I was thinking about 2 possible uses:

1. Where you would have a regex in a configuration or setup file, or
validation for a field, but with extra checks which are tricky or
impossible in a regex, eg date ranges.

2. Where you want to perform a check during the matching, much in the
way that you would use a lookahead or lookbehind.

So far no-one has been able to come up with a convincing real world use case. Still, it's better to make a bad suggestion than not to make a good one.
:-)

From ncoghlan at gmail.com Tue Mar 29 00:32:24 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 29 Mar 2011 08:32:24 +1000
Subject: [Python-ideas] namedtuple() subclasses again
In-Reply-To: 
References: <20110325140637.GB2329@chopin.edu.pl> <20110327175955.0674655e@pitrou.net> <20110327205316.GC3724@chopin.edu.pl> <4F555176-7CF0-4EBF-8C3A-8A5C2B05894B@gmail.com>
Message-ID: 

On Tue, Mar 29, 2011 at 3:18 AM, Georg Brandl wrote:
> On 27.03.2011 23:40, Raymond Hettinger wrote:
>
>> Python development is currently suffering from excess enthusiasm with
>> advanced code manipulations occurring upon instantiation -- metaclasses,
>> decorators, and context managers are fun to play with, but no fun to debug or
>> trace through when something goes wrong.
>
> Not sure how context managers would fit in that category though.

Badly written __exit__ methods can definitely make debugging failures interesting (although I believe 3.x exception chaining helps a lot with avoiding that).

Cheers,
Nick.

-- 
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From taleinat at gmail.com Tue Mar 29 00:58:40 2011
From: taleinat at gmail.com (Tal Einat)
Date: Tue, 29 Mar 2011 00:58:40 +0200
Subject: [Python-ideas] Adding function checks to regex
In-Reply-To: <4D90E2B7.3000005@mrabarnett.plus.com>
References: <4D842245.7040707@mrabarnett.plus.com> <4D90E2B7.3000005@mrabarnett.plus.com>
Message-ID: 

On Mon, Mar 28, 2011 at 9:34 PM, MRAB wrote:
> On 28/03/2011 18:46, Daniel da Silva wrote:
>> I would approach that with
>>
>> numbers = (int(m.group()) for m in re.finditer(r"\b\d+\b", text))
>> numbers = [n for n in numbers if 1 <= n <= 10]
>>
>> To follow up on this: he has pointed out an existing way of doing
>> something that fully covers the goal of your addition. The current way
>> is straightforward, elegant, and self-describing, I believe.
>> I think if we have an obvious way to do it, we usually want to
>> be consistent with our normal attempt of having one obvious way to do it.
>>
>> If his way wasn't obvious, you may not be Dutch.
>
> I was thinking about 2 possible uses:
>
> 1. Where you would have a regex in a configuration or setup file, or
> validation for a field, but with extra checks which are tricky or
> impossible in a regex, eg date ranges.
>
> 2. Where you want to perform a check during the matching, much in the
> way that you would use a lookahead or lookbehind.
>
> So far no-one has been able to come up with a convincing real world use
> case. Still, it's better to make a bad suggestion than not to make a
> good one. :-)

A regex-with-filter can be useful, but I don't think any changes to the stdlib are necessary; a simple 3rd party module (or even just a cookbook recipe) would suffice.

I've used something very similar (I rolled my own, a regexp wrapper with a filter function). I was scraping blogs for links to user profiles on various social sites. I wanted to hand-code how user-profile URLs looked for some major sites, and ended up using regexps. I needed the filtering to deal with various edge-cases. (There are better solutions but I needed something quick!)

Writing the wrapper class and mimicking the re object API was easy. I like the idea of allowing separate filters for different named groups. (This would work especially well with the new regex module, which allows more than 99 groups and has better support for named groups.)

If there's interest I could clean up my code and publish it somewhere.

- Tal Einat
-------------- next part --------------
An HTML attachment was scrubbed...
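[Editorial note: a wrapper along the lines Tal describes is easy to sketch; this is a guess at its shape — Tal's actual code is not shown in the thread, and all names here are illustrative:]

```python
import re

class FilteredPattern:
    """A compiled pattern plus per-named-group check functions, in the
    spirit of the regex-with-filter idea discussed above (hypothetical)."""
    def __init__(self, pattern, **checks):
        self._pat = re.compile(pattern)
        self._checks = checks  # group name -> predicate on the group text

    def _ok(self, m):
        return all(check(m.group(name))
                   for name, check in self._checks.items())

    def finditer(self, text):
        # Mimic re's finditer(), yielding only matches that pass the checks.
        return (m for m in self._pat.finditer(text) if self._ok(m))

# Example: date-like tokens, with a range check a regex can't express well.
dates = FilteredPattern(r'(?P<month>\d{2})/(?P<day>\d{2})',
                        month=lambda s: 1 <= int(s) <= 12,
                        day=lambda s: 1 <= int(s) <= 31)
hits = [m.group() for m in dates.finditer('13/10 05/31 02/45 12/01')]
# hits == ['05/31', '12/01']
```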
URL: 

From ericsnowcurrently at gmail.com Tue Mar 29 19:26:34 2011
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Tue, 29 Mar 2011 11:26:34 -0600
Subject: [Python-ideas] def-from
Message-ID: 

During the discussion about assignment decorators Nick brought up the idea of adding a def-from syntax:

def <name> from <builder>(<args>):
    ...

which would effectively call

<builder>(<name>, <args>, <code>)

This is much like how meta-classes work, though this would be with functions (sort of meta-functions). Ultimately, they would amount to the same thing. The current "def" statement would just have an implicit builder.

During the sprints I explored the idea of this with Nick by making a "build_from" decorator, and an exec_closure builtin to provide the full capability needed to emulate the def-from syntax with a code object. It turned out the exec_closure didn't buy much. However, one thing that became apparent in discussing this with Nick is that just passing the code object of the decorated function (or of the def body) as <code> doesn't buy much. To really get much bang out of this you would need to pass the AST of the body. With the AST you could manipulate it as needed before compiling. (Nick's idea) Suddenly class definitions are just a special case of def-from.

If you really wanted to get crazy you could pass the raw string as <code> (can't blame this one on Nick). With the raw string you could put just about anything in there, like a DSL or another programming language. Then parse it however you like, and use that result to compile something else or build some data set or call some external library or whatever you care to do with raw data. It would not be restricted to parsable Python.

In the normal "def" context Python is hardwired to turn it into a function code object, and the implicit builder to generate the function object thereon. I am reticent to suggest adding undue complexity to the language or adversely affect readability (potentially) if it doesn't offer plenty of increased expressive power.
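[Editorial note: the "build_from" decorator experiment Eric mentions is not shown in the thread; one minimal way such an emulation could look — all names and the builder signature here are guesses, not the actual sprint code:]

```python
def build_from(builder):
    # Decorator stand-in for the proposed `def <name> from <builder>(...)`
    # syntax: hand the function's name and compiled body to the builder,
    # and bind whatever the builder returns to the name.
    def decorate(func):
        return builder(func.__name__, func.__code__)
    return decorate

# A toy builder: ignore the body entirely and just keep the name.
def name_only(name, code):
    return name

@build_from(name_only)
def example():
    pass

# `example` is now whatever the builder returned -- here, the string 'example'.
```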
So, crazy raw string thing aside, how about the def-from syntax, particularly with the AST passed? Nick already indicated to me that we probably should get comfy with metaclass __prepare__ before we get any more metaprogramming, and he's probably right. I was thinking about implementing def-from as an exercise in syntax hacking, regardless. Any thoughts? Are there better syntax hacking exercises (like Raymond's "def x.y(...", "x.y = ...", or "x.(name)")? Could def-from have a place in the future? -eric -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Tue Mar 29 19:40:41 2011 From: guido at python.org (Guido van Rossum) Date: Tue, 29 Mar 2011 10:40:41 -0700 Subject: [Python-ideas] def-from In-Reply-To: References: Message-ID: On Tue, Mar 29, 2011 at 10:26 AM, Eric Snow wrote: > During the discussion about assignment decorators Nick brought up the idea > of adding a def-from syntax: > def <name> from <builder>(<args>): > ... > which would effectively call > <builder>(<name>, <args>, <code>) > This is much like how meta-classes work, though this would be with functions > (sort of meta-functions). Ultimately, they would amount to the same thing. > The current "def" statement would just have an implicit builder.
> During the sprints I explored the idea of this with Nick by making a > "build_from" decorator, and an exec_closure builtin to provide the full > capability needed to emulate the def-from syntax with a code object. It > turned out the exec_closure didn't buy much. However, one thing that became > apparent in discussing this with Nick is that just passing the code object > of the decorated function (or of the def body) as <code> doesn't buy much. > To really get much bang out of this you would need to pass the AST of the > body. With the AST you could manipulate it as needed before compiling. > (Nick's idea) Suddenly class definitions are just a special case of > def-from. I'm glad you went down this particular rabbit hole in so much detail. You have proved beyond a doubt that the idea is not compatible with how Python currently compiles code, since it would mean that you couldn't save the generated bytecode to a .pyc file just by parsing and compiling the source code -- either you'd have to have the runtime environment available to generate the bytecode, or you'd have to put off generating the bytecode until much later. (I knew this all along, but had a hard time explaining it to the proponents of things like this, or the "make" statement, etc. -- many people have quite a misguided idea about how dynamic Python really is, and this doesn't stop them from proposing changes that only make sense in the alternate reality they believe they live in.) > If you really wanted to get crazy you could pass the raw string as <code> > (can't blame this one on Nick). With the raw string you could put just > about anything in there, like a DSL or another programming language. Then > parse it however you like, and use that result to compile something else or > build some data set or call some external library or whatever you care to do > with raw data.
It would not be restricted to parsable Python. In the normal > "def" context Python is hardwired to turn it into a function code object, > and the implicit builder to generate the function object thereon. Yeah, this is a nice reduction to the absurd of the original idea; the contradiction you've arrived at proves that the original idea cannot work. > I am reticent to suggest adding undue complexity to the language or adversely > affect readability (potentially) if it doesn't offer plenty of increased > expressive power. So, crazy raw string thing aside, how about the def-from > syntax, particularly with the AST passed? Nick already indicated to me that > we probably should get comfy with metaclass __prepare__ before we get any > more metaprogramming, and he's probably right. I was thinking about > implementing def-from as an exercise in syntax hacking, regardless. Any > thoughts? Are there better syntax hacking exercises (like Raymond's "def > x.y(...", "x.y = ...", or "x.(name)")? Could def-from have a place in the > future? To me, the idea has always been dead. I'm glad you've provided the supporting documentation of its demise. -- --Guido van Rossum (python.org/~guido) From ericsnowcurrently at gmail.com Tue Mar 29 19:47:50 2011 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Tue, 29 Mar 2011 11:47:50 -0600 Subject: [Python-ideas] {Python-ideas] C-API exposure Message-ID: As I have been toying around with a few things, I have noticed that the C-API provides a lot more functionality than is exposed in Python. Much of the functionality can be reproduced one way or another. However, I was wondering if it would be feasible (and tractable) to expose every bit of the C-API in Python. If it happened, then everything in there could be written in pure Python relative to all the other exposed pieces. This would allow easier prototyping of new language features.
It would not be practical from a performance standpoint for most stuff, but it would help people understand how Python works underneath. As well, exposing all the pieces would provide a way to test the C-API completely from pure Python. While I see several good things, I also see the size of the task. Just exposing the C-API would be a feat. On top of that, emulating the innards of each piece in pure Python using the other exposed pieces would be a big job. Would it be worth it? Would it expose things we actually don't want exposed? I think it would be really cool, but half the time that is a good warning sign. -eric -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Tue Mar 29 19:53:10 2011 From: guido at python.org (Guido van Rossum) Date: Tue, 29 Mar 2011 10:53:10 -0700 Subject: [Python-ideas] {Python-ideas] C-API exposure In-Reply-To: References: Message-ID: On Tue, Mar 29, 2011 at 10:47 AM, Eric Snow wrote: > As I have been toying around with a few things, I have noticed that the > C-API provides a lot more functionality than is exposed in Python. Much of > the functionality can be reproduced one way or another. However, I was > wondering if it would be feasible (and tractable) to expose every bit of the > C-API in Python. > If it happened, then everything in there could be written in pure Python > relative to all the other exposed pieces. This would allow easier > prototyping of new language features. It would not be practical from a > performance standpoint for most stuff, but it would help people understand > how Python works underneath. As well, exposing all the pieces would provide > a way to test the C-API completely from pure Python. > While I see several good things, I also see the size of the task. Just > exposing the C-API would be a feat. On top of that, emulating the innards > of each piece in pure Python using the other exposed pieces would be a big > job. Would it be worth it?
Would it expose things we actually don't want > exposed? > I think it would be really cool, but half the time that is a good warning > sign. Well, would it really be pure Python? You should carefully consider how portable that "pure Python" code you propose to write would be to alternate Python implementations like Jython, IronPython or PyPy. It also sounds like you're about to independently discover Cython. Finally, can you be specific? Do you have some examples of C-APIs that could be exposed? What would be gained? -- --Guido van Rossum (python.org/~guido) From ericsnowcurrently at gmail.com Tue Mar 29 19:59:40 2011 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Tue, 29 Mar 2011 11:59:40 -0600 Subject: [Python-ideas] descriptors outside of classes Message-ID: Here's another outlandish idea. How about if descriptors could be used outside of classes? I.e. any global or local variable could be assigned a descriptor object and the descriptor protocol would be respected for that variable. This would be a pretty messy change, and I have no illusions that the idea will go anywhere. However, would there be room for this in Python? The advantage is that it would allow for greater flexibility in hacking things up. The downside is that it hides what's really going on. Descriptors on classes are already less-than-obvious if you aren't familiar with how variables are handled on objects. I would expect that descriptors in the global namespace would be even more so. -eric -------------- next part -------------- An HTML attachment was scrubbed... URL: From ericsnowcurrently at gmail.com Tue Mar 29 20:30:01 2011 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Tue, 29 Mar 2011 12:30:01 -0600 Subject: [Python-ideas] {Python-ideas] C-API exposure In-Reply-To: References: Message-ID: Certainly neither the new builtins nor the "pure" Python extrapolations would be portable.
And I wasn't suggesting that they be exposed in the builtins module, but rather in their own module. By nature they would be implementation specific. However, they would be as insightful as poking around the C-API is (which I found to be very), but in Python. As far as Cython goes, I am not terribly familiar with it. However, I think it is a sort of opposite. Cython seems to push Python down into C. The C-API builtins would push the C into Python (that doesn't sound good). Regardless, I think doing this would take too much work to be worth it. But I did want to get the idea out there. I started thinking about this when I was messing around with exec_closure. While it has proven superfluous, working on it exposed me to all the pieces in the C-API that do not have counterparts in Python. Things like cell objects. There are things in there that you can emulate, but not in an explicit way (like PyEval_EvalCodeEx). It seems like as time has gone by, more of the internals have been exposed, like the AST module, the types module, metaclasses, the dis module, and others. Certainly these are not run-of-the-mill modules, and neither would this be. Those others have come about as needs have presented. I expect that will continue to be the case. The idea here was to skip to the chase and just expose the whole API. One of my key questions is, what are the dangers in doing so? Security? Risk of fostering hacks? More people relying on implementation specific details? Enabling code that is incongruous with the Python vision? These are questions to which I am trying to find answers as I dive into the python-dev world. I appreciate the feedback by the way! -eric On Tue, Mar 29, 2011 at 11:53 AM, Guido van Rossum wrote: > On Tue, Mar 29, 2011 at 10:47 AM, Eric Snow > wrote: > > As I have been toying around with a few things, I have noticed that the > > C-API provides a lot more functionality than is exposed in Python.
Much > of > > the functionality can be reproduced one way or another. However, I was > > wondering if it would be feasible (and tractable) to expose every bit of > the > > C-API in python. > > > If it happened, then everything in there could be written in pure python > > relative to all the other exposed pieces. This would allow easier > > prototyping of new language features. It would not be practical from a > > performance standpoint for most stuff, but it would help people > understand > > how python works underneath. As well, exposing all the pieces would > provide > > a way to test the C-API completely from pure python. > > > While I see several good things, I also see the size of the task. Just > > exposing the C-API would be a feat. On top of that, emulating the > innards > > of each piece in pure python using the other exposed pieces would be a > big > > job. Would it be worth it? Would it expose things we actually don't > want > > exposed? > > > I think it would be really cool, but half the time that is a good warning > > sign. > > Well, would it really be pure Python? You should carefully consider > how portable that "pure Python" code you propose to write would be to > alternate Python implementations like Jython, IronPython or PyPy. > > It also sounds like you're about to independently discover Cython. > > Finally, can you be specific? Do you have some examples of C-APIs that > could be exposed? What would be gained? > > -- > --Guido van Rossum (python.org/~guido) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Tue Mar 29 20:47:28 2011 From: guido at python.org (Guido van Rossum) Date: Tue, 29 Mar 2011 11:47:28 -0700 Subject: [Python-ideas] {Python-ideas] C-API exposure In-Reply-To: References: Message-ID: On Tue, Mar 29, 2011 at 11:30 AM, Eric Snow wrote: > Certainly neither the new builtins nor the "pure" Python extrapolations > would be portable. 
And I wasn't suggesting that they be exposed in the > builtins module, but rather in their own module. By nature they would be > implementation specific. However, they would be as insightful as poking > around the C-API is (which I found to be very), but in Python. > As far as Cython goes, I am not terribly familiar with it. However, I think > it is a sort of opposite. Cython seems to push Python down into C. The > C-API builtins would push the C into Python (that doesn't sound good). > Regardless, I think doing this would take too much work to be worth it. But > I did want to get the idea out there. I started thinking about this when I > was messing around with exec_closure. While it has proven superfluous, > working on it exposed me to all the pieces in the C-API that do not have > counterparts in Python. Things like cell objects. There are things in > there that you can emulate, but not in an explicit way (like > PyEval_EvalCodeEx). > It seems like as time has gone by, more of the internals have been exposed, > like the AST module, the types module, metaclasses, the dis module, and > others. Certainly these are not run-of-the-mill modules, and neither would > this be. Those others have come about as needs have presented. I expect > that will continue to be the case. The idea here was to skip to the chase > and just expose the whole API. Well people already do this using ctypes... > One of my key questions is, what are the dangers in doing so? Security? > Risk of fostering hacks? More people relying on implementation specific > details? Enabling code that is incongruous with the Python vision? These > are questions to which I am trying to find answers as I dive into the > python-dev world. I appreciate the feedback by the way! Any or all of the above, probably, depending on the specific API you're considering... You seem to have ignored my suggestion to think about how this would work in other Python interpreters.
Also many of the C APIs have subtle reference count behavior -- you don't want to have to worry about refcounting bugs *in your Python code*. -- --Guido van Rossum (python.org/~guido) From ericsnowcurrently at gmail.com Tue Mar 29 20:58:54 2011 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Tue, 29 Mar 2011 12:58:54 -0600 Subject: [Python-ideas] def-from In-Reply-To: References: Message-ID: I appreciate your candor. You are right that the bytecode would have to live elsewhere, probably memory. Or you would have to handle the .pyc generation dynamically. namedtuples do dynamic code generation and execution, but not on the scale that def-from would cause. Unless I misunderstand (not unlikely), it seems like your objection is that the code for the builder in a def-from would not necessarily be around yet to build the bytecode for the .pyc file. That is one of the parts of CPython that I simply haven't gotten to yet, but I have a hunch you know what you are talking about. :) Forgive me if I am wildly off, but in that case it would require a builder to be in a C module, or for it to tie into the existing mechanism CPython uses to build .pyc files and to use them. The former is what the implicit function builder would do. And that is not the only complexity it would add. This would be just one more means of meta-programming that people would have to wrap their heads around (if they wanted to use it). It would also add more complexity to the C-API. In light of all this, the benefits would have to be substantial, and it is not clear to me that they are, which in reality means that it isn't worth getting into Python for now. Sometimes features have a way of coming in later when the benefits make it worth it, but I am not going to hold my breath on this one. However, the seeming flexibility of the idea is alluring. I suppose that's why it keeps coming up.
-eric On Tue, Mar 29, 2011 at 11:40 AM, Guido van Rossum wrote: > On Tue, Mar 29, 2011 at 10:26 AM, Eric Snow > wrote: > > During the discussion about assignment decorators Nick brought up the > idea > > of adding a def-from syntax: > > > def from (): > > ... > > > which would effectively call > > > (, , ) > > > This is much like how meta-classes work, though this would be with > functions > > (sort of meta-functions). Ultimately, they would amount to the same > thing. > > The current "def" statement would just have an implicit builder. > > > During the sprints I explored the idea of this with Nick by making a > > "build_from" decorator, and an exec_closure builtin to provide the full > > capability needed to emulate the def-from syntax with a code object. It > > turned out the exec_closure didn't buy much. However, one thing that > became > > apparent in discussing this with Nick is that just passing the code > object > > of the decorated function (or of the def body) as doesn't buy > much. > > To really get much bang out of this you would need to pass the AST of > the > > body. With the AST you could manipulate it as needed before compiling. > > (Nick's idea) Suddenly class definitions are just a special case of > > def-from. > > I'm glad you went down this particular rabbit hole in so much detail. > You have proved beyond a doubt that the idea is not compatible with > how Python currently compiles code, since it would mean that you > couldn't save the generated bytecode to a .pyc file just by parsing > and compiling the source code -- either you'd have to have the runtime > environment available to generate the bytecode, or you'd have to put > off generating the bytecode until much later. > > (I knew this all along, but had a hard time explaining it to the > proponents of things like this, or the "make" statement, etc. 
-- many > people have quite a misguided idea about how dynamic Python really is, > and this doesn't stop them from proposing changes that only make sense > in the alternate reality they believe they live in.) > > > If you really wanted to get crazy you could pass the raw string as > > (can't blame this one on Nick). With the raw string you could put just > > about anything in there, like a DSL or another programming language. > Then > > parse it however you like, and use that result to compile something else > or > > build some data set or call some external library or whatever you care to > do > > with raw data. It would not be restricted to parsable Python In the > normal > > "def" context Python is hardwired to turn it into a function code object, > > and the implicit builder to generate the function object thereon. > > Yeah, this is a nice reduction to the absurd of the original idea; the > contradiction you've arrived at proves that the original idea cannot > work. > > > I am reticent to suggest adding undue complexity to the languge or > adversely > > affect readability (potentially) if it doesn't offer plenty of increased > > expressive power. So, crazy raw string thing aside, how about the > def-from > > syntax, particularly with the AST passed? Nick already indicated to me > that > > we probably should get comfy with metaclass __prepare__ before we get any > > more metaprogramming, and he's probably right. I was thinking about > > implementing def-from as an exercise in syntax hacking, regardless. Any > > thoughts? Are there better syntax hacking exercises (like Raymond's "def > > x.y(..." or "x.y = ..." or "x.(name)"? Could def-from have a place in > the > > future? > > To me, the idea has always been dead. I'm glad you've provided the > supporting documentation of its demise. > > -- > --Guido van Rossum (python.org/~guido) > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From python at mrabarnett.plus.com Tue Mar 29 21:01:50 2011 From: python at mrabarnett.plus.com (MRAB) Date: Tue, 29 Mar 2011 20:01:50 +0100 Subject: [Python-ideas] def-from In-Reply-To: References: Message-ID: <4D922C9E.9060006@mrabarnett.plus.com> On 29/03/2011 18:40, Guido van Rossum wrote: [snip] > To me, the idea has always been dead. I'm glad you've provided the > supporting documentation of its demise. > It's not dead, it's just pining. :-) From ericsnowcurrently at gmail.com Tue Mar 29 21:15:41 2011 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Tue, 29 Mar 2011 13:15:41 -0600 Subject: [Python-ideas] {Python-ideas] C-API exposure In-Reply-To: References: Message-ID: On Tue, Mar 29, 2011 at 12:47 PM, Guido van Rossum wrote: > On Tue, Mar 29, 2011 at 11:30 AM, Eric Snow > wrote: > > Certainly neither the new builtins nor the "pure" Python extrapolations > > would be portable. And I wasn't suggesting that they be exposed in the > > builtins module, but rather in their own module. By nature they would be > > implementation specific. However, they would be as insightful as poking > > around the C-API is (which I found to be very), but in Python. > > As far as Cython goes, I am not terribly familiar with it. However, I > think > > it is a sort of opposite. Cython seems to push Python down into C. The > > C-API builtins would push the C into Python (that doesn't sound good). > > Regardless, I think doing this would take too much work to be worth it. > But > > I did want to get the idea out there. I starting thinking about this > when I > > was messing around with exec_closure. While it has proven superfluous, > > working on it exposed me to all the pieces in the C-API that do not have > > counterparts in Python. Things like cell objects. There are things in > > there that you can emulate, but not in an explicit way (like > > PyEval_EvalCodeEx). 
> > It seems like as time has gone by, more of the internals have been > exposed, > > like the AST module, the types module, metaclasses, the dis module, and > > others. Certainly these are not run-of-the-mill modules, and neither > would > > this be. Those others have come about as needs have presented. I expect > > that will continue to be the case. The idea here was to skip to the > chase > > and just expose the whole API. > > Well people already do this using ctypes... > > Haven't used them myself yet. So you can use them to expose all of the C-API? Cool! > > One of my key questions is, what are the dangers in doing so? Security? > > Risk of fostering hacks? More people relying on implementation specific > > details? Enabling code that is incongrous with the Python vision? These > > are questions to which I am trying to find answers as I dive into the > > python-dev world. I appreciate the feedback by the way! > > Any or all of the above, probably, depending on the specific API > you're considering... > > I figured as much. :( Unless I have missed a doc somewhere, it seems like it is a process of time to get a feel for what belongs in Python and what doesn't. > You seem to have ignored my suggestion to think about how this would > work in other Python interpreters. > > I really am not sure. It seems like there are already several modules in the stdlib that are implementation specific, like dis. This would fall in that category. But maybe we are trying to lock or eliminate that category? > Also many of the C APIs have subtle reference count behavior -- you > don't want to have to worry about refcounting bugs *in your Python > code*. > > That is one that I definitely hadn't thought about. I am totally with you on that. Chalk that up to one more obstacle, though I suppose you would have to deal with it with a ctypes approach as well. 
-- > --Guido van Rossum (python.org/~guido) > I don't want this discussion to be an abuse of people's time to the benefit of my understanding, but I am finding these threads to be very insightful. So, thanks! -eric -------------- next part -------------- An HTML attachment was scrubbed... URL: From ericsnowcurrently at gmail.com Tue Mar 29 21:17:18 2011 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Tue, 29 Mar 2011 13:17:18 -0600 Subject: [Python-ideas] def-from In-Reply-To: <4D922C9E.9060006@mrabarnett.plus.com> References: <4D922C9E.9060006@mrabarnett.plus.com> Message-ID: It's only a flesh wound. On Tue, Mar 29, 2011 at 1:01 PM, MRAB wrote: > On 29/03/2011 18:40, Guido van Rossum wrote: > [snip] > > To me, the idea has always been dead. I'm glad you've provided the >> supporting documentation of its demise. >> >> It's not dead, it's just pining. :-) > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > http://mail.python.org/mailman/listinfo/python-ideas > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ericsnowcurrently at gmail.com Tue Mar 29 21:23:02 2011 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Tue, 29 Mar 2011 13:23:02 -0600 Subject: [Python-ideas] def-from In-Reply-To: References: Message-ID: Another thing I had not considered is the effect this would have on the other implementations. I have no clue as to how difficult all this would be to accomplish in the DLR, or the JVM, or PyPy, or others. I have no doubt that they could find a way, but for all I know it would put an undue burden on them. Is that a criterion for Python feature consideration, the impact on other implementations of adding the feature? -eric On Tue, Mar 29, 2011 at 12:58 PM, Eric Snow wrote: > I appreciate your candor. You are right that the bytecode would have to > live elsewhere, probably memory. Or you would have to handle the .pyc > generation dynamically.
namedtuples do dynamic code generation and > execution, but not on the same scale as def-from would cause. > > Unless I misunderstand (not unlikely), it seems like your objection is that > the code for the builder in a def-from would not necessarily be around yet > to build the bytecode for the .pyc file. That is one of the parts of > CPython that I simply haven't gotten to yet, but I have a hunch you know > what you are talking about. :) Forgive me if I am wildly off, but in that > case it would require a builder to be in a C module, or for it to tie into > the existing mechanism CPython uses to build .pyc files and to use them. > The former is what the implicit function builder would do. > > And that is not the only complexity it would add. This would be just one > more means of meta-programming that people would have to wrap their heads > around (if they wanted to use it). It would also add more complexity to the > C-API. > > In light of all this, the benefits would have to be substantial, which is > not clear to me that they are, which in reality means that it isn't worth > getting into Python for now. Sometimes features have a way of coming in > later when the benefits make it worth it, but I am not going to hold my > breath on this one. However, the seeming flexibility of the idea is > alluring. I suppose that's why it keeps coming up. > > -eric > > > On Tue, Mar 29, 2011 at 11:40 AM, Guido van Rossum wrote: > >> On Tue, Mar 29, 2011 at 10:26 AM, Eric Snow >> wrote: >> > During the discussion about assignment decorators Nick brought up the >> idea >> > of adding a def-from syntax: >> >> > def from (): >> > ... >> >> > which would effectively call >> >> > (, , ) >> >> > This is much like how meta-classes work, though this would be with >> functions >> > (sort of meta-functions). Ultimately, they would amount to the same >> thing. >> > The current "def" statement would just have an implicit builder. 
>> >> > During the sprints I explored the idea of this with Nick by making a >> > "build_from" decorator, and an exec_closure builtin to provide the full >> > capability needed to emulate the def-from syntax with a code object. It >> > turned out the exec_closure didn't buy much. However, one thing that >> became >> > apparent in discussing this with Nick is that just passing the code >> object >> > of the decorated function (or of the def body) as doesn't buy >> much. >> > To really get much bang out of this you would need to pass the AST of >> the >> > body. With the AST you could manipulate it as needed before compiling. >> > (Nick's idea) Suddenly class definitions are just a special case of >> > def-from. >> >> I'm glad you went down this particular rabbit hole in so much detail. >> You have proved beyond a doubt that the idea is not compatible with >> how Python currently compiles code, since it would mean that you >> couldn't save the generated bytecode to a .pyc file just by parsing >> and compiling the source code -- either you'd have to have the runtime >> environment available to generate the bytecode, or you'd have to put >> off generating the bytecode until much later. >> >> (I knew this all along, but had a hard time explaining it to the >> proponents of things like this, or the "make" statement, etc. -- many >> people have quite a misguided idea about how dynamic Python really is, >> and this doesn't stop them from proposing changes that only make sense >> in the alternate reality they believe they live in.) >> >> > If you really wanted to get crazy you could pass the raw string as >> >> > (can't blame this one on Nick). With the raw string you could put just >> > about anything in there, like a DSL or another programming language. >> Then >> > parse it however you like, and use that result to compile something else >> or >> > build some data set or call some external library or whatever you care >> to do >> > with raw data. 
It would not be restricted to parsable Python. In the >> normal >> > "def" context Python is hardwired to turn it into a function code >> object, >> > and the implicit builder to generate the function object thereon. >> >> Yeah, this is a nice reduction to the absurd of the original idea; the >> contradiction you've arrived at proves that the original idea cannot >> work. >> >> > I am reticent to suggest adding undue complexity to the language or >> adversely >> > affect readability (potentially) if it doesn't offer plenty of increased >> > expressive power. So, crazy raw string thing aside, how about the >> def-from >> > syntax, particularly with the AST passed? Nick already indicated to me >> that >> > we probably should get comfy with metaclass __prepare__ before we get >> any >> > more metaprogramming, and he's probably right. I was thinking about >> > implementing def-from as an exercise in syntax hacking, regardless. Any >> > thoughts? Are there better syntax hacking exercises (like Raymond's >> "def >> > x.y(..." or "x.y = ..." or "x.(name)")? Could def-from have a place in >> the >> > future? >> >> To me, the idea has always been dead. I'm glad you've provided the >> supporting documentation of its demise. >> >> -- >> --Guido van Rossum (python.org/~guido) >> > >

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From sturla at molden.no Tue Mar 29 21:32:16 2011 From: sturla at molden.no (Sturla Molden) Date: Tue, 29 Mar 2011 21:32:16 +0200 Subject: [Python-ideas] {Python-ideas] C-API exposure In-Reply-To: References: Message-ID: <4D9233C0.30708@molden.no>

Den 29.03.2011 21:15, skrev Eric Snow: > > Well people already do this using ctypes... > > Haven't used them myself yet. So you can use them to expose all of > the C-API? Cool! >

Yes, ctypes.pythonapi exposes the Python C API to Python. You can use it for evil code like this hack to prevent thread switches (doesn't work with Python 3, as the implementation has changed).
Just make sure you don't call extension code that releases the GIL, or all bets are off ;-)

Sturla

from contextlib import contextmanager
import ctypes
_Py_Ticker = ctypes.c_int.in_dll(ctypes.pythonapi, "_Py_Ticker")

@contextmanager
def atomic():
    tmp = _Py_Ticker.value
    _Py_Ticker.value = 0x7fffffff
    yield
    _Py_Ticker.value = tmp - 1

Now we can do

with atomic():
    # whatever
    pass

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From guido at python.org Tue Mar 29 22:29:33 2011 From: guido at python.org (Guido van Rossum) Date: Tue, 29 Mar 2011 13:29:33 -0700 Subject: [Python-ideas] def-from In-Reply-To: References: Message-ID:

On Tue, Mar 29, 2011 at 11:58 AM, Eric Snow wrote: > Forgive me if I am wildly off, but in that > case it would require a builder to be in a C module, or for it to tie into > the existing mechanism CPython uses to build .pyc files and to use them.

It's worse than that. The compiler that generates the .pyc file cannot have knowledge of the environment in which the code will be executed -- and that includes things like importing modules.

On Tue, Mar 29, 2011 at 12:23 PM, Eric Snow wrote: > Another thing I had not considered is the effect this would have on the > other implementations. I have no clue as to how difficult all this would be > to accomplish in the DLR, or the JVM, or pypy, or others. I have no doubt > that they could find a way, but for all I know it would put an undue burden > on them. Is that a criterion for Python feature consideration, the impact on > other implementations of adding the feature?

For a feature that changes the language syntax, most certainly. For a feature that adds something to the stdlib, yes, unless you are offering functionality that would simply make no sense in another implementation. (E.g. 'dis', which you mentioned before, gets a pass because it refers to the bytecode, which is a CPython-exclusive feature. But it is pretty much only used for interactive debugging.)
Note that things weren't always like this. But now they are. We are striving to increase compatibility between the different Python implementations so as to reduce the pain of users switching implementations. (E.g. it would be a shame if your code would run twice as fast on PyPy but you can't port it because you happen to use one little CPython-only feature.)

-- --Guido van Rossum (python.org/~guido)

From debatem1 at gmail.com Tue Mar 29 22:53:44 2011 From: debatem1 at gmail.com (geremy condra) Date: Tue, 29 Mar 2011 13:53:44 -0700 Subject: [Python-ideas] {Python-ideas] C-API exposure In-Reply-To: <4D9233C0.30708@molden.no> References: <4D9233C0.30708@molden.no> Message-ID:

On Tue, Mar 29, 2011 at 12:32 PM, Sturla Molden wrote: > Den 29.03.2011 21:15, skrev Eric Snow: >> >> Well people already do this using ctypes... >> > Haven't used them myself yet. So you can use them to expose all of the > C-API? Cool! > > > Yes, ctypes.pythonapi exposes the Python C API to Python. > > You can use it for evil code like this hack to prevent thread switch > (doesn't work with Python 3, as the implementation has changed). Just make > sure you don't call extension code that releases the GIL, or all bets are > off ;-) > > Sturla
>
> from contextlib import contextmanager
> import ctypes
> _Py_Ticker = ctypes.c_int.in_dll(ctypes.pythonapi, "_Py_Ticker")
>
> @contextmanager
> def atomic():
>     tmp = _Py_Ticker.value
>     _Py_Ticker.value = 0x7fffffff
>     yield
>     _Py_Ticker.value = tmp - 1
>
> Now we can do
>
> with atomic():
>     # whatever
>     pass

Huh. That's terrifying. Thanks for the example.

Geremy Condra

From greg.ewing at canterbury.ac.nz Wed Mar 30 00:16:33 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Wed, 30 Mar 2011 11:16:33 +1300 Subject: [Python-ideas] descriptors outside of classes In-Reply-To: References: Message-ID: <4D925A41.7070604@canterbury.ac.nz>

Eric Snow wrote: > Here's another outlandish idea.
How about if descriptors could be used > outside of classes. I.e. any global or local variable could be assigned > a descriptor object and the descriptor protocol would be respected for > that variable. There's a major problem with that: if *every* variable behaves that way, then how do you pass around and manipulate descriptor objects themselves? -- Greg From ncoghlan at gmail.com Wed Mar 30 06:00:47 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 30 Mar 2011 14:00:47 +1000 Subject: [Python-ideas] {Python-ideas] C-API exposure In-Reply-To: <4D9233C0.30708@molden.no> References: <4D9233C0.30708@molden.no> Message-ID: On Wed, Mar 30, 2011 at 5:32 AM, Sturla Molden wrote: > from contextlib import contextmanager > import ctypes > _Py_Ticker = ctypes.c_int.in_dll(ctypes.pythonapi,"_Py_Ticker") > > @contextmanager > def atomic(): > ??? tmp = _Py_Ticker.value > ??? _Py_Ticker.value = 0x7fffffff > ??? yield > ??? _Py_Ticker.value = tmp - 1 Yikes, at least stick a try-finally in there! If you must practice evil, practice safe evil ;) Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From ncoghlan at gmail.com Wed Mar 30 06:07:51 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 30 Mar 2011 14:07:51 +1000 Subject: [Python-ideas] descriptors outside of classes In-Reply-To: References: Message-ID: On Wed, Mar 30, 2011 at 3:59 AM, Eric Snow wrote: > Here's another outlandish idea. ?How about if descriptors could be used > outside of classes. ?I.e. any global or local variable could be assigned a > descriptor object and the descriptor protocol would be respected for that > variable. ?This would be a pretty messy change, and I have no illusions that > the idea will go anywhere. ?However, would there be room for this in python? Not really, because globals() both promises to return a normal dictionary and to respect changes to the module globals made via that dictionary. 
All bets are off with locals(), but the globals() aspect already spikes the idea, as it does many other ideas to do with speeding or otherwise enhancing namespace lookups.

Cheers, Nick.

-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From luoyonggang at gmail.com Wed Mar 30 08:07:17 2011 From: luoyonggang at gmail.com (=?UTF-8?B?572X5YuH5YiaKFlvbmdnYW5nIEx1bykg?=) Date: Wed, 30 Mar 2011 14:07:17 +0800 Subject: [Python-ideas] How to Set Environment Variables Message-ID:

http://code.activestate.com/recipes/159462-how-to-set-environment-variables/

Writes environment variables using a batch file wrapper. Overcomes an operating system limitation.

setvar.bat
----------
@echo off
python setvarp.py %1 %2 %3 %4 %5 %6 %7 %8 %9
settmp
del settmp.bat

setvarp.py
----------
import sys, time, math
key = sys.argv[1]
value = eval(' '.join(sys.argv[2:]))
command = 'set %s=%s\n' % (key, value)
open('settmp.bat', 'w').write(command)

sample command line session
---------------------------
C>setvar ts time.ctime()
C>setvar pi 22.0 / 7.0
C>setvar pyver sys.version
C>set

TS=Sun Oct 27 18:12:23 2002
PI=3.14285714286
PYVER=2.3a0 (#29, Oct 22 2002, 01:41:41) [MSC 32 bit (Intel)]

Environment variables can be read with os.environ. They can be written (for sub-shells only) using os.putenv(key, value). However, there is no direct way to modify the global environment that the python script is running in. The indirect method shown above writes a set command to a temporary batch file which is executed in the enclosing environment by another batch file used to launch the python script. In the example above, arbitrary expressions can be evaluated and the result assigned to an environment variable. For security, the eval() function can be replaced with str(). Usually, writing to an environment variable should be avoided in favor of sharing values through a pipe or a common data file.
However, when it can't be avoided, the above technique is an effective, though hackish, work-around.

From the page http://docs.python.org/library/subprocess.html#subprocess.Popen, we know the constructor for Popen is

class subprocess.Popen(args, bufsize=0, executable=None, stdin=None, stdout=None, stderr=None, preexec_fn=None, close_fds=False, shell=False, cwd=None, env=None, universal_newlines=False, startupinfo=None, creationflags=0)

We know that calling Popen won't affect the parent environment, but in many situations we want to modify the current environment variables from an outer Bash script or DOS batch script. So I propose adding an extra parameter, update_parent_env (defaulting to False), to implement such a function.

-- Yours sincerely, Yonggang Luo

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From jkbbwr at gmail.com Wed Mar 30 12:42:53 2011 From: jkbbwr at gmail.com (Jakob Bowyer) Date: Wed, 30 Mar 2011 11:42:53 +0100 Subject: [Python-ideas] Serialization of custom classes. Message-ID:

""" Currently json dumping an object is sketchy because of the need to serialise. E.g.
""" import json class Something(object): def __init__(self, arg1): self.arg1 = arg1 def __str__(self): return str(self.arg1) def __repr__(self): return str(self.arg1) def double(self): return self.arg1 * 2 test = Something(42) #json.dumps(test) """This ofc raises a TypeError as shown here ------------------------------------------------------------ Traceback (most recent call last): File "", line 1, in File "C:\Python26\Lib\site-packages\spyderlib\widgets\externalshell\startup.py", line 122, in runfile execfile(filename, glbs) File "C:\infobarb\test.py", line 22, in json.dumps(test) File "C:\Python26\lib\json\__init__.py", line 230, in dumps return _default_encoder.encode(obj) File "C:\Python26\lib\json\encoder.py", line 367, in encode chunks = list(self.iterencode(o)) File "C:\Python26\lib\json\encoder.py", line 317, in _iterencode for chunk in self._iterencode_default(o, markers): File "C:\Python26\lib\json\encoder.py", line 323, in _iterencode_default newobj = self.default(o) File "C:\Python26\lib\json\encoder.py", line 344, in default raise TypeError(repr(o) + " is not JSON serializable") TypeError: 42 is not JSON serializable My suggested fix and an idea used by several others could be to add a __json__ method to objects, this method would be tried by json.dumps() before it tries to serialise the argument. It should return all of the instance variables in a json format and any other infomation considered to be correct for the dump. This could be extended to provide a __json__ and a __jsons__ format where the latter returns a string format of the __json__ serialzation an example of which (crudely constructed ofc) is below. But for now I consider __json__ to return a string serialized for json dumping. 
""" import json def customjsondumps(obj): try: return json.dumps(obj.__json__()) except AttributeError: raise AttributeError('Object has no __json__ method.') """There are ofc several ideas for the __json__ method e.g.""" def __json__(self): """This method returns a serialized string for json.dumps""" return self.__dict__ """Or returning some fancy constructed dict, list or other serialised form for dumping into json.""" class Something(object): def __init__(self, arg1): self.arg1 = arg1 def __str__(self): return str(self.arg1) def __repr__(self): return str(self.arg1) def double(self): return self.arg1 * 2 def __json__(self): ret = self.__dict__.copy() for key, item in ret.iteritems(): ret[key] = item.__repr__() return ret test = Something(41) From masklinn at masklinn.net Wed Mar 30 13:05:49 2011 From: masklinn at masklinn.net (Masklinn) Date: Wed, 30 Mar 2011 13:05:49 +0200 Subject: [Python-ideas] Serialization of custom classes. In-Reply-To: References: Message-ID: <4EDAC7F4-5EB5-43FD-947E-BE46AA6C994D@masklinn.net> On 2011-03-30, at 12:42 , Jakob Bowyer wrote: > > My suggested fix and an idea used by several others could be to add a __json__ > method to objects, this method would be tried by json.dumps() before it tries to > serialise the argument. It should return all of the instance variables in a json > format and any other infomation considered to be correct for the dump. > This could be extended to provide a __json__ and a __jsons__ format where the > latter returns a string format of the __json__ serialzation an example of which > (crudely constructed ofc) is below. But for now I consider __json__ to return a > string serialized for json dumping. The documented method for serializing non-literal types (such as custom types) is to simply provide subclasses of ``JSONEncoder`` overriding the method ``default``. Why not just do that? Hell, you could even implement your serialization scheme through this, no need for "customjsondumps". 
From masklinn at masklinn.net Wed Mar 30 13:17:48 2011 From: masklinn at masklinn.net (Masklinn) Date: Wed, 30 Mar 2011 13:17:48 +0200 Subject: [Python-ideas] Serialization of custom classes. In-Reply-To: References: <4EDAC7F4-5EB5-43FD-947E-BE46AA6C994D@masklinn.net> Message-ID: <3D62D370-6536-441E-9508-B65E4887823E@masklinn.net>

On 2011-03-30, at 13:07 , Jakob Bowyer wrote: > Ignore customjsondumps for now. I'm more getting at __json__ being a > class method?

As I said, there is a blessed extension mechanism in subclassing JSONEncoder, and as long as all JSONEncoder subclasses correctly call super() they should be composable. And if you decide that your objects will all implement __json__ you can just throw in a generic JSONEncoder for that. What significant gain would an additional __json__ hook provide over it?

PS: the simplejson mailing list may be a better suggestion for that kind of thing; it's probably where the tip of the development happens.

From sturla at molden.no Wed Mar 30 16:36:48 2011 From: sturla at molden.no (Sturla Molden) Date: Wed, 30 Mar 2011 16:36:48 +0200 Subject: [Python-ideas] {Python-ideas] C-API exposure In-Reply-To: References: <4D9233C0.30708@molden.no> Message-ID: <4D934000.80607@molden.no>

Den 30.03.2011 06:00, skrev Nick Coghlan: > Yikes, at least stick a try-finally in there! > If you must practice evil, practice safe evil ;)

I wasn't suggesting that one should actually do this. It was just to show that the C API is exposed to Python. Well, _Py_Ticker is not even in the C API, but it's not declared static so we can do bad things with it from outside. The point is still that ctypes.pythonapi is the DLL containing the CPython interpreter.

Sturla

From ericsnowcurrently at gmail.com Wed Mar 30 18:07:20 2011 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Wed, 30 Mar 2011 10:07:20 -0600 Subject: [Python-ideas] descriptors outside of classes In-Reply-To: References: Message-ID:

Yeah, I figured as much.
I am sure there is some convoluted way to make it work, but it would not nearly be worth it for what we would get out of it. -eric On Tue, Mar 29, 2011 at 10:07 PM, Nick Coghlan wrote: > On Wed, Mar 30, 2011 at 3:59 AM, Eric Snow > wrote: > > Here's another outlandish idea. How about if descriptors could be used > > outside of classes. I.e. any global or local variable could be assigned > a > > descriptor object and the descriptor protocol would be respected for that > > variable. This would be a pretty messy change, and I have no illusions > that > > the idea will go anywhere. However, would there be room for this in > python? > > Not really, because globals() both promises to return a normal > dictionary and to respect changes to the module globals made via that > dictionary. > > All bets are off with locals(), but the globals() aspect already > spikes the idea, as it does many other ideas to do with speeding or > otherwise enhancing namespace lookups. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fuzzyman at gmail.com Wed Mar 30 19:42:40 2011 From: fuzzyman at gmail.com (Michael Foord) Date: Wed, 30 Mar 2011 18:42:40 +0100 Subject: [Python-ideas] descriptors outside of classes In-Reply-To: References: Message-ID: On 30 March 2011 17:07, Eric Snow wrote: > Yeah, I figured as much. I am sure there is some convoluted way to make it > work, but it would not nearly be worth it for what we would get out of it. Well, it's still a *nice idea* even if it's impractical. For example to allow for deprecation warnings on module variables Twisted creates a subclass of modules (I believe), so that accessing the variable raises the appropriate warning. In the standard library we are unable to apply deprecation warnings to module variables because we don't have a mechanism like this. 
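The module-subclass approach Michael mentions can be sketched as follows — a toy version, not Twisted's actual code. (Python 3.7 later added a cleaner standard hook for exactly this: module-level __getattr__, PEP 562.)

```python
import sys
import types
import warnings

class DeprecatingModule(types.ModuleType):
    """A module whose attribute access can warn about deprecated names."""
    # Hypothetical deprecated name -> its current value.
    _deprecated = {'OLD_TIMEOUT': 30}

    def __getattr__(self, name):
        # Only reached when normal attribute lookup fails.
        if name in self._deprecated:
            warnings.warn('%s is deprecated' % name, DeprecationWarning,
                          stacklevel=2)
            return self._deprecated[name]
        raise AttributeError(name)

# Install the subclass instance under a (made-up) module name.
mod = DeprecatingModule('mymod')
sys.modules['mymod'] = mod

import mymod
print(mymod.OLD_TIMEOUT)  # 30 (plus a DeprecationWarning)
```

Because `mymod.OLD_TIMEOUT` goes through the subclass, the warning fires on access rather than at import time, which is what deprecating a module variable requires.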
All the best, Michael > > -eric > > > On Tue, Mar 29, 2011 at 10:07 PM, Nick Coghlan wrote: > >> On Wed, Mar 30, 2011 at 3:59 AM, Eric Snow >> wrote: >> > Here's another outlandish idea. How about if descriptors could be used >> > outside of classes. I.e. any global or local variable could be assigned >> a >> > descriptor object and the descriptor protocol would be respected for >> that >> > variable. This would be a pretty messy change, and I have no illusions >> that >> > the idea will go anywhere. However, would there be room for this in >> python? >> >> Not really, because globals() both promises to return a normal >> dictionary and to respect changes to the module globals made via that >> dictionary. >> >> All bets are off with locals(), but the globals() aspect already >> spikes the idea, as it does many other ideas to do with speeding or >> otherwise enhancing namespace lookups. >> >> Cheers, >> Nick. >> >> -- >> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia >> > > > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > http://mail.python.org/mailman/listinfo/python-ideas > > -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From wickedgrey at gmail.com Wed Mar 30 20:37:32 2011 From: wickedgrey at gmail.com (Eli Stevens (Gmail)) Date: Wed, 30 Mar 2011 11:37:32 -0700 Subject: [Python-ideas] Adding a half-float (16-bit) type to PEP 3118 (and possibly the struct module?) Message-ID: Hello, Numpy 1.6.0 adds support for a half-float (16-bit) data type, but cannot currently export a buffer interface to the data, since the closest type that PEP 3118 supports is an unsigned short ('H'). 
This makes working with the data from outside numpy (for example, from Cython) difficult, since even if numpy were to expose a buffer interface to the data, it's unclear that the data needs special treatment to interpret correctly (numpy does this with bit shifting functions to convert it to a float32, but it has access to the array dtype which isn't available through the buffer interface, per my understanding). What would be required to get a float16 data type added to PEP 3118 (either implicitly via inclusion of the struct module, or explicitly in the PEP itself)? I'm not currently a contributor to python, numpy or cython, but am prepared to provide patches. Some of my exploratory work for numpy and cython (which is my driving use case) is below. Numpy seems to use the 'e' format character, so I stuck with that. Thanks, Eli http://en.wikipedia.org/wiki/Half_precision_floating-point_format https://github.com/wickedgrey/cython https://github.com/wickedgrey/numpy From alexander.belopolsky at gmail.com Wed Mar 30 20:54:14 2011 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Wed, 30 Mar 2011 14:54:14 -0400 Subject: [Python-ideas] Adding a half-float (16-bit) type to PEP 3118 (and possibly the struct module?) In-Reply-To: References: Message-ID: On Wed, Mar 30, 2011 at 2:37 PM, Eli Stevens (Gmail) wrote: .. > What would be required to get a float16 data type added to PEP 3118 > (either implicitly via inclusion of the struct module, or explicitly > in the PEP itself)? I would like to see a patch adding float16 to struct and ctypes modules together with the buffer support. Adding features to PEP 3118 that cannot be exercised by the standard library is not a good idea. (Case in point: support for multi-dimensional arrays.) 
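As it happens, this is how things eventually played out: the 'e' format code for IEEE 754 binary16 was later added to the struct module (in Python 3.6), so the format can be exercised from the standard library. A quick sketch on a recent Python:

```python
import struct

# Pack a Python float into 2 bytes of IEEE 754 binary16 and back.
packed = struct.pack('<e', 1.5)
print(packed)                          # b'\x00>'  (0x3E00, little-endian)
print(struct.unpack('<e', packed)[0])  # 1.5

# Half precision has an 11-bit significand, so most values round:
print(struct.unpack('<e', struct.pack('<e', 0.1))[0])  # 0.0999755859375
```

Unpacking widens to a regular Python float, so the 'e' code is purely a storage/interchange format — exactly the "expand to (or compress from) a normal, usable, Python float" behavior discussed later in this thread.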
From dickinsm at gmail.com Wed Mar 30 20:54:54 2011 From: dickinsm at gmail.com (Mark Dickinson) Date: Wed, 30 Mar 2011 19:54:54 +0100 Subject: [Python-ideas] Adding a half-float (16-bit) type to PEP 3118 (and possibly the struct module?) In-Reply-To: References: Message-ID: On Wed, Mar 30, 2011 at 7:37 PM, Eli Stevens (Gmail) wrote: > What would be required to get a float16 data type added to PEP 3118 > (either implicitly via inclusion of the struct module, or explicitly > in the PEP itself)? Hmm. A partial list of requirements: (1) An open bugs.python.org issue. (2) Someone to provide patches (it sounds like you're up for this). (3) Someone else willing to review those patches (this is the hard part). (4) General agreement in the b.p.o. issue that this is a worthwhile feature to include; a disagreement here would punt the issue back into python-dev or python-ideas territory for wider discussion. It probably doesn't make sense to try to update the PEP itself: just propose the addition to the struct module in an issue. Work on the struct part of PEP 3118 is somewhat stalled at the moment; I had assigned some of those issues to myself, but unassigned them after finding I didn't really have proper time to think about them. If you could help out with some of the other open PEP 3118 issues, that might go a long way towards persuading someone to review your changes. For myself, I have mixed feelings on the proposed addition: while I can see how the half-precision floats would be useful in NumPy, it's not so clear that they'd be useful to Python itself. It feels a little bit odd to have NumPy driving Python additions that may not be of that much interest to non-NumPy users. Mark From dickinsm at gmail.com Wed Mar 30 21:02:15 2011 From: dickinsm at gmail.com (Mark Dickinson) Date: Wed, 30 Mar 2011 20:02:15 +0100 Subject: [Python-ideas] Adding a half-float (16-bit) type to PEP 3118 (and possibly the struct module?) 
In-Reply-To: References: Message-ID: On Wed, Mar 30, 2011 at 7:54 PM, Alexander Belopolsky wrote: > I would like to see a patch adding float16 to struct and ctypes > modules together with the buffer support. I'm not sure how much sense this makes for ctypes, given that float16 isn't a datatype supported by most C implementations. Mark From robert.kern at gmail.com Wed Mar 30 21:53:53 2011 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 30 Mar 2011 14:53:53 -0500 Subject: [Python-ideas] Adding a half-float (16-bit) type to PEP 3118 (and possibly the struct module?) In-Reply-To: References: Message-ID: On 3/30/11 1:54 PM, Mark Dickinson wrote: > For myself, I have mixed feelings on the proposed addition: while I > can see how the half-precision floats would be useful in NumPy, it's > not so clear that they'd be useful to Python itself. It feels a > little bit odd to have NumPy driving Python additions that may not be > of that much interest to non-NumPy users. Like Ellipsis, multidimensional extended slicing, complex numbers, and non-bool rich comparisons? :-) I think the major point in its favor is that PEP 3118 defines a protocol for third party libraries to communicate, the most notable of which really was numpy. Python itself needs only a subset of that, which was mostly already capably handled by the old buffer protocol. Still, it's worth defining the standard to allow third parties to communicate the full spectrum of things they want to tell each other. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From dickinsm at gmail.com Wed Mar 30 22:05:25 2011 From: dickinsm at gmail.com (Mark Dickinson) Date: Wed, 30 Mar 2011 21:05:25 +0100 Subject: [Python-ideas] Adding a half-float (16-bit) type to PEP 3118 (and possibly the struct module?) 
In-Reply-To: References: Message-ID: On Wed, Mar 30, 2011 at 8:53 PM, Robert Kern wrote: > > Like Ellipsis, multidimensional extended slicing, complex numbers, and > non-bool rich comparisons? :-) Indeed! (BTW, I didn't know that Python's complex numbers were NumPy influenced: thanks for that.) > capably handled by the old buffer protocol. Still, it's worth defining the > standard to allow third parties to communicate the full spectrum of things > they want to tell each other. Yes, that makes sense. It's not very clear to me what the scope of the Python additions would be. [OT]: How is NumPy's float16 type implemented? Is it clever enough to do correct rounding for all basic arithmetic operations, or does it suffer from the double-rounding problems that you'd get from (convert operands to float64; do op in float64; round back to float16)? Mark From tjreedy at udel.edu Wed Mar 30 22:24:28 2011 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 30 Mar 2011 16:24:28 -0400 Subject: [Python-ideas] Adding a half-float (16-bit) type to PEP 3118 (and possibly the struct module?) In-Reply-To: References: Message-ID: On 3/30/2011 2:54 PM, Mark Dickinson wrote: > On Wed, Mar 30, 2011 at 7:37 PM, Eli Stevens (Gmail) > wrote: >> What would be required to get a float16 data type added to PEP 3118 To start, email the two authors. >> (either implicitly via inclusion of the struct module, or explicitly >> in the PEP itself)? > > Hmm. A partial list of requirements: > > (1) An open bugs.python.org issue. If this were added to the PEP, it would be included in http://bugs.python.org/issue3132 > (2) Someone to provide patches (it sounds like you're up for this). Or do a review of Meador Inge's latest (last January) patch version. > (3) Someone else willing to review those patches (this is the hard part). > (4) General agreement in the b.p.o. 
issue that this is a worthwhile > feature to include; a disagreement here would punt the issue back > into python-dev or python-ideas territory for wider discussion. > > It probably doesn't make sense to try to update the PEP itself: See above. > just propose the addition to the struct module in an issue. > > Work on the struct part of PEP 3118 is somewhat stalled at the moment; > I had assigned some of those issues to myself, but unassigned them > after finding I didn't really have proper time to think about them. > If you could help out with some of the other open PEP 3118 issues, > that might go a long way towards persuading someone to review your > changes. I think that a patch to _struct.py should include all the 3118 additions, and not just this one. Searching All Test 'pep 3118' on the tracker returns 7 open issues. > For myself, I have mixed feelings on the proposed addition: while I > can see how the half-precision floats would be useful in NumPy, it's > not so clear that they'd be useful to Python itself. It feels a > little bit odd to have NumPy driving Python additions that may not be > of that much interest to non-NumPy users. I am pretty sure both extended slices and Ellipsis were first added for Numpy's ancestor Numerical Python. In any case, the intent of the pep seems to be that struct be expanded to match NumPy. "Additions to the struct string-syntax The struct string-syntax is missing some characters to fully implement data-format descriptions already available elsewhere (in ctypes and NumPy for example)." Some of the additions (such as pointers) already seem less useful than float16, which I presume struct would just expand to (or compress from) a normal, usable, Python float. 
-- Terry Jan Reedy

From raymond.hettinger at gmail.com Wed Mar 30 22:26:15 2011 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Wed, 30 Mar 2011 13:26:15 -0700 Subject: [Python-ideas] descriptors outside of classes In-Reply-To: References: Message-ID:

On Mar 29, 2011, at 10:59 AM, Eric Snow wrote: > Here's another outlandish idea. How about if descriptors could be used outside of classes. I.e. any global or local variable could be assigned a descriptor object and the descriptor protocol would be respected for that variable. This would be a pretty messy change, and I have no illusions that the idea will go anywhere. However, would there be room for this in python?

FWIW, you can already do this with locals (am not saying you should do it, am just saying that you can do it). Remember, the essential mechanism for descriptors is in the lookup function, not in the descriptor itself. For example, property() objects are descriptors only because they define one of the descriptor protocol methods (__get__, et al). Whether it gets invoked solely depends on how you look it up. If you use regular dictionary lookup, a.__class__.__dict__['x'], then the property object is retrieved but no special action occurs. If you use dotted lookup, a.x, then the property's __get__ method is called. This is because the lookup function, object.__getattribute__(), has code to detect and invoke descriptors. An ultra-simplified version of the lookup function's pseudo-code looks like this:

value = kls.__dict__[key]
if hasattr(value, '__get__'):
    return call_the_getter(kls, key)
else:
    return the value

Knowing this, it is possible to emulate that behavior with a dictionary whose lookup function, __getitem__(), can detect and invoke some sort of descriptor protocol. Since eval/exec can use arbitrary mappings for locals, you can use your custom dictionary while executing arbitrary python code.
Essentially, you're executing python code in an environment where the lookup function for locals has been trained to handle your custom descriptor protocol.

Raymond

----- simple example -----

class MyDict:
    def __init__(self, mapping):
        self.mapping = mapping
    def __getitem__(self, key):
        value = self.mapping[key]
        if hasattr(value, '__get__'):
            print('Invoking descriptor on', key)
            return value.__get__(key)
        print('Getting', key)
        return value
    def __setitem__(self, key, value):
        self.mapping[key] = value

class Property:
    def __init__(self, getter):
        self.getter = getter
    def __get__(self, key):
        return self.getter(key)

if __name__ == '__main__':
    md = MyDict({})
    md['x'] = 10
    md['_y'] = 20
    md['y'] = Property(lambda key: md['_'+key])
    print(eval('x+y+1', {}, md))
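A minimal, testable illustration of the hook Raymond's example relies on: eval() accepts an arbitrary (non-dict-exact) mapping for its locals argument, and name lookups in the evaluated expression then go through that mapping's __getitem__:

```python
# Every name the expression loads is routed through Recording.__getitem__,
# which is exactly where a custom "descriptor protocol" could be invoked.
looked_up = []

class Recording(dict):
    def __getitem__(self, key):
        looked_up.append(key)            # observe each name lookup
        return dict.__getitem__(self, key)

ns = Recording(x=1, y=2)
result = eval('x + y', {}, ns)
print(result, looked_up)  # 3 ['x', 'y']
```

Note the asymmetry Nick points out elsewhere in the thread: the globals argument must be a real dict and is accessed directly, so this interception trick only works through the locals mapping.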
I think you can just treat the section of the PEP defining the format codes as informational, much like the DB-API only a little more rigorous. Adding support for it to the struct module is a good bonus. As a digression, it would be great if the format codes were defined in an extensible fashion, such that two agreeing third parties could talk to each other using their own format codes without having to modify the PEP. It already contains a little bit of this with the 't' code. If you could add a distinguishing name as well (besides the ':name:' syntax, which is reserved for adding names to fields, not types), then numpy and Cython could simply agree that '16t{half}', for example, meant a half-float without having to wait for the PEP to be modified. > [OT]: How is NumPy's float16 type implemented? Is it clever enough to > do correct rounding for all basic arithmetic operations, or does it > suffer from the double-rounding problems that you'd get from (convert > operands to float64; do op in float64; round back to float16)? We do the latter, I'm afraid. Except with float32 instead of float64. https://github.com/numpy/numpy/blob/master/numpy/core/src/umath/loops.c.src#L1443 https://github.com/numpy/numpy/blob/master/numpy/core/src/npymath/halffloat.c -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From raymond.hettinger at gmail.com Wed Mar 30 22:44:26 2011 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Wed, 30 Mar 2011 13:44:26 -0700 Subject: [Python-ideas] {Python-ideas] C-API exposure In-Reply-To: References: Message-ID: On Mar 29, 2011, at 12:15 PM, Eric Snow wrote: > I don't want this discussion to be an abuse of people's time to the benefit of my understanding, but I am finding these threads to be very insightful. So, thanks! 
The discussion has made for an interesting read, so I don't think it has been a waste of time. The python-ideas mailing list is a reasonable place for flights of fancy and random musings :-) That being said, python-ideas would be a little more sane (and less disconcerting) if the musings came in the form of "here's my wild idea, let's play with it" rather than "i don't fully understand the language we've got but am going to propose changing it anyway." If someone proposes to demolish 20 years worth of language success, it's harder to respond with an open-mind and a playful out-of-the-box outlook. Raymond From greg.ewing at canterbury.ac.nz Wed Mar 30 22:51:22 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 31 Mar 2011 09:51:22 +1300 Subject: [Python-ideas] descriptors outside of classes In-Reply-To: References: Message-ID: <4D9397CA.4040706@canterbury.ac.nz> Eric Snow wrote: > I am sure there is some convoluted way to make > it work, but it would not nearly be worth it for what we would get out > of it. What would be useful from time to time is a more straightforward way of getting a module that's based on a subclass of the built-in module class. While that's currently possible, it requires some not-entirely-obvious hackery. -- Greg From ericsnowcurrently at gmail.com Wed Mar 30 22:59:24 2011 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Wed, 30 Mar 2011 14:59:24 -0600 Subject: [Python-ideas] descriptors outside of classes In-Reply-To: <4D9397CA.4040706@canterbury.ac.nz> References: <4D9397CA.4040706@canterbury.ac.nz> Message-ID: I was just thinking along those same lines. Sounds like twisted already does it. Does it amount to using a custom __import__? -eric On Wed, Mar 30, 2011 at 2:51 PM, Greg Ewing wrote: > Eric Snow wrote: > >> I am sure there is some convoluted way to make it work, but it would not >> nearly be worth it for what we would get out of it. 
>> > > What would be useful from time to time is a more straightforward > way of getting a module that's based on a subclass of the > built-in module class. While that's currently possible, it > requires some not-entirely-obvious hackery. > > -- > Greg > > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > http://mail.python.org/mailman/listinfo/python-ideas > -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg.ewing at canterbury.ac.nz Wed Mar 30 23:03:07 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 31 Mar 2011 10:03:07 +1300 Subject: [Python-ideas] Adding a half-float (16-bit) type to PEP 3118 (and possibly the struct module?) In-Reply-To: References: Message-ID: <4D939A8B.2010706@canterbury.ac.nz> Robert Kern wrote: > Still, it's > worth defining the standard to allow third parties to communicate the > full spectrum of things they want to tell each other. But that's impossible -- there's no way the buffer protocol can explicitly cover all possible data types that any third party application might need to deal with. There needs to be some common ground, and the buffer protocol currently defines that as the set of standard C data types. -- Greg From greg.ewing at canterbury.ac.nz Wed Mar 30 23:13:00 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 31 Mar 2011 10:13:00 +1300 Subject: [Python-ideas] descriptors outside of classes In-Reply-To: References: <4D9397CA.4040706@canterbury.ac.nz> Message-ID: <4D939CDC.8050706@canterbury.ac.nz> Eric Snow wrote: > I was just thinking along those same lines. Sounds like twisted already > does it. Does it amount to using a custom __import__? I don't know what Twisted does, but I was thinking of an attribute called __moduleclass__ that works a bit like the old __metaclass__ attribute. Then you could do

    class __moduleclass__:
        ... descriptor definitions go here ...
-- Greg From ericsnowcurrently at gmail.com Wed Mar 30 23:18:52 2011 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Wed, 30 Mar 2011 15:18:52 -0600 Subject: [Python-ideas] descriptors outside of classes In-Reply-To: References: Message-ID: The same could be applied to the globals if module subclassing were practical. Then you could just use descriptors on that subclass. I expect that custom import functionality could provide this right now. Naturally, this would affect that promise Nick was talking about regarding globals, which could be confusing. But only in the same way that descriptors can be for classes already. Even if messing with the module class's __dict__ were legal, adding descriptors there would probably not be effective since all modules would get those attributes. However, with module subclasses that would be more practical. Of course, the application of all this would be to let a module control what happens when another module tries to use the first module's namespace. But that is what descriptors are all about. -eric On Wed, Mar 30, 2011 at 2:26 PM, Raymond Hettinger < raymond.hettinger at gmail.com> wrote: > > On Mar 29, 2011, at 10:59 AM, Eric Snow wrote: > > > Here's another outlandish idea. How about if descriptors could be used > outside of classes. I.e. any global or local variable could be assigned a > descriptor object and the descriptor protocol would be respected for that > variable. This would be a pretty messy change, and I have no illusions that > the idea will go anywhere. However, would there be room for this in python? > > FWIW, you can already do this with locals (am not saying you should do it, > am just saying that you can do it). > > Remember, the essential mechanism for descriptors is in the lookup > function, not in the descriptor itself. For example, property() objects are > descriptors only because they define one of the descriptor protocol methods > (__get__, et al).
Whether it gets invoked solely depends on how you look it > up. If you use regular dictionary lookup, a.__class__.__dict__['x'], then > the property object is retrieved but no special action occurs. If you use > dotted lookup, a.x, then the property's __get__ method is called. This is > because the lookup function, object.__getattribute__(), has code to detect > and invoke descriptors. > > A ultra-simplified version of the lookup functions's psuedo-code looks like > this: > > value = kls.__dict__[key] > if hasattr(value, '__get__'): > return call_the_getter(kls ,key) > else: > return the value > > Knowing this, it is possible to emulate that behavior with a dictionary > whose lookup function, __getitem__(), can detect and invoke some sort of > descriptor protocol. > > Since eval/exec can use arbitrary mappings for locals, you can use your > custom dictionary while executing arbitrary python code. Essentially, > you're executing python code in an environment where the lookup function for > locals has been trained to handle your custom descriptor protocol. > > > Raymond > > > ----- simple example ----- > > class MyDict: > def __init__(self, mapping): > self.mapping = mapping > def __getitem__(self, key): > value = self.mapping[key] > if hasattr(value, '__get__'): > print('Invoking descriptor on', key) > return value.__get__(key) > print('Getting', key) > return value > def __setitem__(self, key, value): > self.mapping[key] = value > > class Property: > def __init__(self, getter): > self.getter = getter > def __get__(self, key): > return self.getter(key) > > if __name__ == '__main__': > md = MyDict({}) > md['x'] = 10 > md['_y'] = 20 > md['y'] = Property(lambda key: md['_'+key]) > print(eval('x+y+1', {}, md)) > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
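The quoted trick can be verified end to end. This condensed version (Python 3, relying on eval accepting an arbitrary mapping for locals) drops the prints and checks the final value:

```python
class MyDict:
    """Mapping whose __getitem__ honors a home-grown descriptor protocol."""
    def __init__(self, mapping):
        self.mapping = mapping
    def __getitem__(self, key):
        value = self.mapping[key]
        if hasattr(value, '__get__'):
            return value.__get__(key)   # invoke the "descriptor"
        return value
    def __setitem__(self, key, value):
        self.mapping[key] = value

class Property:
    def __init__(self, getter):
        self.getter = getter
    def __get__(self, key):
        return self.getter(key)

md = MyDict({})
md['x'] = 10
md['_y'] = 20
md['y'] = Property(lambda key: md['_' + key])  # 'y' is computed on lookup

# Name lookup inside eval goes through md.__getitem__, so reading 'y'
# triggers the Property and fetches md['_y'].
result = eval('x + y + 1', {}, md)
print(result)  # 31
```

Note that this works because top-level name lookup in eval'd code consults the locals mapping first, and CPython falls through to globals only when the mapping raises KeyError.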
URL: From ericsnowcurrently at gmail.com Wed Mar 30 23:27:08 2011 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Wed, 30 Mar 2011 15:27:08 -0600 Subject: [Python-ideas] descriptors outside of classes In-Reply-To: <4D939CDC.8050706@canterbury.ac.nz> References: <4D9397CA.4040706@canterbury.ac.nz> <4D939CDC.8050706@canterbury.ac.nz> Message-ID: I suppose that is more specific than the __import__ builtin. Classes have __build_class__. Functions don't have an equivalent in the global builtins. For imports you have to go through __import__. So a __module_class__ would dictate which class the import machinery should use. By default it would be types.ModuleType. Makes sense. -eric p.s. a __build_function__ would be a meaningful addition, particularly if the def-from syntax were feasible. I'm just saying... :) On Wed, Mar 30, 2011 at 3:13 PM, Greg Ewing wrote: > Eric Snow wrote: > >> I was just thinking along those same lines. Sounds like twisted already >> does it. Does it amount to using a custom __import__? >> > > I don't know what Twisted does, but I was thinking of > an attribute called __moduleclass__ that works a bit > like the old __metaclass__ attribute. > > Then you could do > > class __moduleclass__: > > ... descriptor definitions go here ... > > > -- > Greg > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > http://mail.python.org/mailman/listinfo/python-ideas > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Wed Mar 30 23:32:32 2011 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 30 Mar 2011 16:32:32 -0500 Subject: [Python-ideas] Adding a half-float (16-bit) type to PEP 3118 (and possibly the struct module?)
In-Reply-To: <4D939A8B.2010706@canterbury.ac.nz> References: <4D939A8B.2010706@canterbury.ac.nz> Message-ID: On 3/30/11 4:03 PM, Greg Ewing wrote: > Robert Kern wrote: >> Still, it's worth defining the standard to allow third parties to communicate >> the full spectrum of things they want to tell each other. > > But that's impossible -- there's no way the buffer protocol > can explicitly cover all possible data types that any third > party application might need to deal with. > > There needs to be some common ground, and the buffer > protocol currently defines that as the set of standard > C data types. And several more. I think that it would be reasonable to add more when two libraries come with a solid use case, like communicating the half-floats that are standard in OpenCL and other GPU languages. What do you think of my idea for adding extensibility to the format syntax, which should allow two libraries to communicate new types without having to modify the PEP every time? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From donspauldingii at gmail.com Wed Mar 30 23:40:06 2011 From: donspauldingii at gmail.com (Don Spaulding) Date: Wed, 30 Mar 2011 16:40:06 -0500 Subject: [Python-ideas] descriptors outside of classes In-Reply-To: References: Message-ID: On Wed, Mar 30, 2011 at 3:26 PM, Raymond Hettinger < raymond.hettinger at gmail.com> wrote: > > > Since eval/exec can use arbitrary mappings for locals, you can use your > custom dictionary while executing arbitrary python code. Essentially, > you're executing python code in an environment where the lookup function for > locals has been trained to handle your custom descriptor protocol. > > > I just signed up for python-ideas a month or two ago, how many awesome(ly dangerous) hacks like this have I missed over the years? 
About-to-eval-away-the-next-six-months-ly yours, Don -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Wed Mar 30 23:48:57 2011 From: guido at python.org (Guido van Rossum) Date: Wed, 30 Mar 2011 14:48:57 -0700 Subject: [Python-ideas] descriptors outside of classes In-Reply-To: References: Message-ID: On Wed, Mar 30, 2011 at 2:40 PM, Don Spaulding wrote: > I just signed up for python-ideas a month or two ago, how many awesome(ly > dangerous) hacks like this have I missed over the years? We need a python-hacks list. :) -- --Guido van Rossum (python.org/~guido) From guido at python.org Wed Mar 30 23:53:00 2011 From: guido at python.org (Guido van Rossum) Date: Wed, 30 Mar 2011 14:53:00 -0700 Subject: [Python-ideas] Adding a half-float (16-bit) type to PEP 3118 (and possibly the struct module?) In-Reply-To: References: Message-ID: On Wed, Mar 30, 2011 at 1:05 PM, Mark Dickinson wrote: > (BTW, I didn't know that Python's complex numbers were NumPy > influenced: thanks for that.) You have Jim Hugunin to thank for that. I can still recall the exact location at the third Python conference (http://www.python.org/workshops/1995-12/) where Jim cornered and convinced me to add complex numbers (I don't recall which other features were part of the deal). Of course we have Jim to thank for NumPy, Jython, and IronPython as well. :-) -- --Guido van Rossum (python.org/~guido) From raymond.hettinger at gmail.com Thu Mar 31 00:03:06 2011 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Wed, 30 Mar 2011 15:03:06 -0700 Subject: [Python-ideas] Adding a half-float (16-bit) type to PEP 3118 (and possibly the struct module?) In-Reply-To: References: Message-ID: On Mar 30, 2011, at 11:37 AM, Eli Stevens (Gmail) wrote: > > Numpy 1.6.0 adds support for a half-float (16-bit) data type, but > cannot currently export a buffer interface to the data, since the > closest type that PEP 3118 supports is an unsigned short ('H').
This > makes working with the data from outside numpy (for example, from > Cython) difficult, since even if numpy were to expose a buffer > interface to the data, it's unclear that the data needs special > treatment to interpret correctly (numpy does this with bit shifting > functions to convert it to a float32, but it has access to the array > dtype which isn't available through the buffer interface, per my > understanding). > > What would be required to get a float16 data type added to PEP 3118 > (either implicitly via inclusion of the struct module, or explicitly > in the PEP itself)? +1 I would support adding float16 to the struct module. It's a well defined format so we might as well provide an accessor. Just open a feature request for it. Any issues surrounding its use (i.e. double-rounding) are no different that the usual float/double conversion issues. Raymond From fuzzyman at gmail.com Thu Mar 31 00:04:06 2011 From: fuzzyman at gmail.com (Michael Foord) Date: Wed, 30 Mar 2011 23:04:06 +0100 Subject: [Python-ideas] descriptors outside of classes In-Reply-To: <4D939CDC.8050706@canterbury.ac.nz> References: <4D9397CA.4040706@canterbury.ac.nz> <4D939CDC.8050706@canterbury.ac.nz> Message-ID: On 30 March 2011 22:13, Greg Ewing wrote: > Eric Snow wrote: > >> I was just thinking along those same lines. Sounds like twisted already >> does it. Does it amount to using a custom __import__? >> > > I don't know what Twisted does, I'm pretty sure it creates a module subclass that forwards all attribute access to the real module and inserts itself into sys.modules in place of the "real" module. Pretty evil really. :-) I may be mistaken about this, it is based off my memory of a previous discussion. Michael > but I was thinking of > an attribute called __moduleclass__ that works a bit > like the old __metaclass__ attribute. > > Then you could do > > class __moduleclass__: > > ... descriptor definitions go here ... 
> > > -- > Greg > > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > http://mail.python.org/mailman/listinfo/python-ideas > -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Thu Mar 31 01:00:37 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 31 Mar 2011 09:00:37 +1000 Subject: [Python-ideas] descriptors outside of classes In-Reply-To: References: <4D9397CA.4040706@canterbury.ac.nz> <4D939CDC.8050706@canterbury.ac.nz> Message-ID: On Thu, Mar 31, 2011 at 8:04 AM, Michael Foord wrote: >>> I was just thinking along those same lines. Sounds like twisted already >>> does it. Does it amount to using a custom __import__? >> >> I don't know what Twisted does, > > I'm pretty sure it creates a module subclass that forwards all attribute > access to the real module and inserts itself into sys.modules in place of > the "real" module. Pretty evil really. :-) > I may be mistaken about this, it is based off my memory of a previous > discussion. That's certainly the trick people use to implement lazy import handlers in the absence of a proper implementation of post-import hooks (ala PEP 369). (I'll point out that the object inserted into sys.modules doesn't need to be, and often isn't, an instance of the module type) The module code itself doesn't even need to know that the intervening class exists. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From wickedgrey at gmail.com Thu Mar 31 01:34:29 2011 From: wickedgrey at gmail.com (Eli Stevens (Gmail)) Date: Wed, 30 Mar 2011 16:34:29 -0700 Subject: [Python-ideas] Adding a half-float (16-bit) type to PEP 3118 (and possibly the struct module?)
In-Reply-To: References: Message-ID: On Wed, Mar 30, 2011 at 3:03 PM, Raymond Hettinger wrote: > I would support adding float16 to the struct module. > It's a well defined format so we might as well provide an accessor. > Just open a feature request for it. This seems like a simple solution, however: On Wed, Mar 30, 2011 at 1:24 PM, Terry Reedy wrote: > If this were added to the PEP, it would be included in > http://bugs.python.org/issue3132 I'm still working through the issue/patch, but it seems to be concerned with how to handle long (long?) doubles cleanly on various platforms with varying levels of support for it (at least, that's the impression I got; I'm still a little unclear about what exactly was deficient prior to the patch). That seems like it would be a separate issue to me; can you explain in more detail how they're related? Is it just that the new work should be based on the source post-patch? Also, am I correct in my understanding that any code changes to _struct.c, etc. would not show up in a production release before 3.3? I'm based out of a strictly 2.7 shop, so if I'm going to need to develop patches, I'll have to make sure I have some place to test things (for our purposes, we just need a spec that numpy and cython can standardize on, but if a patch to the struct module is what it's going to take to make that happen, I'll give it a shot :). Eli From raymond.hettinger at gmail.com Thu Mar 31 02:06:16 2011 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Wed, 30 Mar 2011 17:06:16 -0700 Subject: [Python-ideas] Adding a half-float (16-bit) type to PEP 3118 (and possibly the struct module?) In-Reply-To: References: Message-ID: On Mar 30, 2011, at 4:34 PM, Eli Stevens (Gmail) wrote: > On Wed, Mar 30, 2011 at 3:03 PM, Raymond Hettinger > wrote: >> I would support adding float16 to the struct module. >> It's a well defined format so we might as well provide an accessor. >> Just open a feature request for it.
> > This seems like a simple solution, however: > > On Wed, Mar 30, 2011 at 1:24 PM, Terry Reedy wrote: >> If this were added to the PEP, it would be included in >> http://bugs.python.org/issue3132 I think the struct module addition for float16 could be handled separately and much more easily (since a half would fit in a double, but a long long double won't). > > Also, am I correct in my understanding that any code changes to > _struct.c, etc. would not show up in a production release before 3.3? Yes, that's right. If you need something for today, it's not hard to write pure python code using struct to read in an int16 and then do the bit manipulations to pick apart the sign, exponent, and mantissa to create the float value. > I'm based out of a strictly 2.7 shop, so if I'm going to need to > develop patches, I'll have to make sure I have some place to test > things (for our purposes, we just need a spec that numpy and cython > can standardize on, but if a patch to the struct module is what it's > going to take to make that happen, I'll give it a shot :). I don't follow what your issue is. Can you check out a copy of the current Hg repository and build your patch against the default branch? Raymond From wickedgrey at gmail.com Thu Mar 31 02:32:13 2011 From: wickedgrey at gmail.com (Eli Stevens (Gmail)) Date: Wed, 30 Mar 2011 17:32:13 -0700 Subject: [Python-ideas] Adding a half-float (16-bit) type to PEP 3118 (and possibly the struct module?) In-Reply-To: References: Message-ID: On Wed, Mar 30, 2011 at 5:06 PM, Raymond Hettinger wrote: > I think the struct module addition for float16 could be handled separately and much more easily (since a half would fit in a double, but a long long double won't). Okay, if no other objections get raised, I'll open a new issue for it (probably sometime tomorrow).
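Raymond's pure-Python fallback — unpack the int16, then split off sign, exponent, and mantissa — might look like the sketch below. The helper name and layout are my own; it covers normals, subnormals, infinities, and NaN for the decode direction only, and runs on both 2.7 and 3.x:

```python
import struct

def half_to_float(data):
    """Decode two little-endian bytes of IEEE 754-2008 binary16 to a float."""
    (bits,) = struct.unpack('<H', data)     # read the raw 16 bits as an int
    sign = -1.0 if bits >> 15 else 1.0
    exponent = (bits >> 10) & 0x1F          # 5-bit exponent, bias 15
    mantissa = bits & 0x3FF                 # 10-bit fraction
    if exponent == 0:                       # zero and subnormals
        return sign * mantissa * 2.0 ** -24
    if exponent == 0x1F:                    # infinities and NaN
        return sign * float('inf') if mantissa == 0 else float('nan')
    return sign * (1 + mantissa / 1024.0) * 2.0 ** (exponent - 15)

print(half_to_float(b'\x00\x3c'))  # 0x3C00 -> 1.0
print(half_to_float(b'\x00\xc0'))  # 0xC000 -> -2.0
```

The encode direction is the fiddlier half (rounding and overflow handling), which is part of why pushing the format into the struct module is attractive.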
> If you need something for today, it's not hard to write pure python code using struct to read in an int16 and then do the bit manipulations to pick apart the sign, exponent, and mantissa to create the float value. My particular use case focuses on getting float16 data from numpy (which handles the bit fiddling already) to be exposed in cython (which doesn't ATM, and can't know to do so without a specific float16 data format type). Step one of that is to update the spec to include a float16 type, either by changing PEP 3118, or adding it to the struct module (which is referenced by the PEP). Once that happens, I think there's a valid case to be made for numpy to export the float16 via the buffer interface, and a decent shot at getting some special case code added to cython. I don't need any CPython code changes for my use case, I don't think. > I don't follow what your issue is? ?Can you check-out a copy of the current Hg repository and build your patch against the default branch? Sorry, I'm juggling three different threads on this topic (python-ideas, cython-users, numpy-discussion), and am doing a poor job of keeping the contexts sorted out. :) Yes, I will try and compile/test CPython and build a patch for _struct.c from the current repo. Thanks! Eli From greg.ewing at canterbury.ac.nz Thu Mar 31 03:52:33 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 31 Mar 2011 14:52:33 +1300 Subject: [Python-ideas] descriptors outside of classes In-Reply-To: References: Message-ID: <4D93DE61.9020506@canterbury.ac.nz> Eric Snow wrote: > Of course, the application of all this would be to let a module control > what happens when another module tries to use the first module's > namespace. BTW, if anyone's wondering about use cases for this, I have one in PyGUI where the top-level module auto-imports names from submodules the first time you refer to them. 
This avoids the overhead of loading modules that an application doesn't use, without requiring the user to memorise which names come from which submodules. It also gives me the flexibility to move things around between submodules if I want. This is currently done using a custom module subclass with a __getattr__ method. Although I'm thinking about using a different strategy, because the current one confuses the heck out of py2app and py2exe. :-( -- Greg From greg.ewing at canterbury.ac.nz Thu Mar 31 04:07:38 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 31 Mar 2011 15:07:38 +1300 Subject: [Python-ideas] descriptors outside of classes In-Reply-To: References: <4D9397CA.4040706@canterbury.ac.nz> <4D939CDC.8050706@canterbury.ac.nz> Message-ID: <4D93E1EA.4090308@canterbury.ac.nz> Eric Snow wrote: > For imports you have to go through __import__. So a __module_class__ > would dictate which class for import to use. By default it would be > types.ModuleType. Makes sense. There's one tricky point, though -- you need a module object before you can execute the module's code, and it's the module's code that creates the __moduleclass__ entry. What should probably happen is that a standard module object gets created initially, and then after executing the module body, replace the module's __class__ with the __moduleclass__, if any. You can't currently do this in Python code, because it won't let you change the __class__ of a builtin module object. So either that restriction would have to be lifted, or the machinery implementing this would have to be written in C. An alternative (which is what PyGUI currently does) is to create a new module object of the specified class, copy the __dict__ of the original module into it, and then replace the entry in sys.modules. This would be second-best, though, because it would mean that if the module imported itself, it would end up with the old module object instead of the new one. 
The same thing would also happen to any other modules that imported the first module while it was still loading. -- Greg From greg.ewing at canterbury.ac.nz Thu Mar 31 04:18:26 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 31 Mar 2011 15:18:26 +1300 Subject: [Python-ideas] descriptors outside of classes In-Reply-To: References: Message-ID: <4D93E472.2040003@canterbury.ac.nz> Guido van Rossum wrote: > We need a python-hacks list. :) Very tangentially related, here's my latest piece of dubious hackery, written the other day to work around the fact that multiple inheritance seems to be broken in conjunction with gobject introspection. It plays fast and loose with the method resolution order, but it got me out of a tight corner.

    def mix_in(*src_classes):
        # Workaround for do_xxx method overrides not working properly
        # with multiple inheritance.
        #
        # Usage:
        #
        #     class MyClass(Gtk.SomeBaseClass):
        #         mix_in(Class1, Class2, ...)
        #
        import sys
        frame = sys._getframe(1)
        dst_dict = frame.f_locals
        for src_class in src_classes:
            for name, value in src_class.__dict__.iteritems():
                if name not in dst_dict:
                    dst_dict[name] = value

-- Greg From greg.ewing at canterbury.ac.nz Thu Mar 31 04:32:34 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 31 Mar 2011 15:32:34 +1300 Subject: [Python-ideas] Adding a half-float (16-bit) type to PEP 3118 (and possibly the struct module?) In-Reply-To: References: Message-ID: <4D93E7C2.2000009@canterbury.ac.nz> Just out of curiosity, is the layout of numpy's float16 based on any existing standard, or is it something purely invented by numpy? If it's a standard format, that would lend more weight to the idea of supporting it in the buffer interface. -- Greg From wickedgrey at gmail.com Thu Mar 31 05:39:16 2011 From: wickedgrey at gmail.com (Eli Stevens (Gmail)) Date: Wed, 30 Mar 2011 20:39:16 -0700 Subject: [Python-ideas] Adding a half-float (16-bit) type to PEP 3118 (and possibly the struct module?)
In-Reply-To: <4D93E7C2.2000009@canterbury.ac.nz> References: <4D93E7C2.2000009@canterbury.ac.nz> Message-ID: On Wed, Mar 30, 2011 at 7:32 PM, Greg Ewing wrote: > Just out of curiosity, is the layout of numpy's float16 > based on any existing standard, or is it something purely > invented by numpy? > > If it's a standard format, that would lend more weight > to the idea of supporting it in the buffer interface. Per my understanding (I haven't gone and cross-checked the code with the spec, however), it's based on IEEE 754-2008: http://en.wikipedia.org/wiki/Half_precision_floating-point_format Eli From palla74 at gmail.com Thu Mar 31 14:39:54 2011 From: palla74 at gmail.com (Palla) Date: Thu, 31 Mar 2011 14:39:54 +0200 Subject: [Python-ideas] EuroPython 2011: call for paper is ending - Please spread the word Message-ID: Hi all, I'm Francesco and I am writing on behalf of "Python Italia APS", a non-profit association promoting the EuroPython conference. (www.europython.eu) The EuroPython Call for Presentations ends April 6th. I'd like to ask you to forward this mail to anyone that you feel may be interested. We're looking for proposals on every aspect of Python: programming from novice to advanced levels, applications and frameworks, or how you have been involved in introducing Python into your organisation. **First-time speakers are especially welcome**; EuroPython is a community conference and we are eager to hear about your experience. If you have friends or colleagues who have something valuable to contribute, twist their arms to tell us about it! Presenting at EuroPython ------------------------ We will accept a broad range of presentations, from reports on academic and commercial projects to tutorials and case studies. As long as the presentation is interesting and potentially useful to the Python community, it will be considered for inclusion in the programme. Can you show the conference-goers something new and useful?
Can you show attendees how to: use a module? Explore a Python language feature? Package an application? If so, consider submitting a talk. Talks and hands-on trainings ---------------------------- There are two different kinds of presentations that you can give as a speaker at EuroPython: * **Regular talk**. These are standard "talk with slides", allocated in slots of 45, 60 or 90 minutes, depending on your preference and scheduling constraints. A Q&A session is held at the end of the talk. * **Hands-on training**. These are advanced training sessions for a smaller audience (10-20 people), to dive into the subject with all details. These sessions are 4 hours long, and the audience will be strongly encouraged to bring a laptop to experiment. They should be prepared with fewer slides and more source code. If possible, trainers will also give a short "teaser talk" of 30 minutes the day before the training, to tease delegates into attending the training. In the talk submission form, we assume that you intend to give a regular talk on the subject, but you will be asked if you are available for also doing a hands-on training on the very same subject. Speakers who give a hands-on training are rewarded with **free entrance** to EuroPython to compensate for the longer preparation required, and might also be eligible for a speaking fee (which we cannot confirm at the moment).
Topics and goals ---------------- Specific topics for EuroPython presentations include, but are not limited to: - Core Python - Other implementations: Jython, IronPython, PyPy, and Stackless - Python libraries and extensions - Python 3.x migration - Databases - Documentation - GUI Programming - Game Programming - Network Programming - Open Source Python projects - Packaging Issues - Programming Tools - Project Best Practices - Embedding and Extending - Science and Math - Web-based Systems Presentation goals are usually some of the following: - Introduce audience to a new topic they are unaware of - Introduce audience to new developments on a well-known topic - Show audience real-world usage scenarios for a specific topic (case study) - Dig into advanced and relatively-unknown details on a topic - Compare different options in the market on a topic Community-based talk voting --------------------------- This year, for the first time in EuroPython history, the talk voting process is fully public. Every participant gains the right to vote for talks submitted during the Call For Papers, as soon as they commit to their presence at the conference by buying a ticket. See all the details on the talk voting[1] page. Contacts -------- For any further questions, feel free to contact the organizers at info at pycon.it. Thank you! [1]: http://ep2011.europython.eu/talk-voting -- ->PALLA -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjreedy at udel.edu Thu Mar 31 17:34:29 2011 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 31 Mar 2011 11:34:29 -0400 Subject: [Python-ideas] Adding a half-float (16-bit) type to PEP 3118 (and possibly the struct module?) In-Reply-To: <4D93E7C2.2000009@canterbury.ac.nz> References: <4D93E7C2.2000009@canterbury.ac.nz> Message-ID: On 3/30/2011 10:32 PM, Greg Ewing wrote: > Just out of curiosity, is the layout of numpy's float16 > based on any existing standard, or is it something purely > invented by numpy?
> > If it's a standard format, that would lend more weight > to the idea of supporting it in the buffer interface. I understood Robert Kern's statement "I think that it would be reasonable to add more when two libraries come with a solid use case, like communicating the half-floats that are standard in OpenCL and other GPU languages." to mean that numpy adopted it from OpenCL, etc. If so, I think Python should definitely add it. -- Terry Jan Reedy From alexander.belopolsky at gmail.com Thu Mar 31 17:52:16 2011 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Thu, 31 Mar 2011 11:52:16 -0400 Subject: [Python-ideas] Adding a half-float (16-bit) type to PEP 3118 (and possibly the struct module?) In-Reply-To: References: Message-ID: On Wed, Mar 30, 2011 at 3:02 PM, Mark Dickinson wrote: > On Wed, Mar 30, 2011 at 7:54 PM, Alexander Belopolsky > wrote: >> I would like to see a patch adding float16 to struct and ctypes >> modules together with the buffer support. > > I'm not sure how much sense this makes for ctypes, given that float16 > isn't a datatype supported by most C implementations. "On ARM targets, GCC supports half-precision (16-bit) floating point via the __fp16 type." http://gcc.gnu.org/onlinedocs/gcc/Half_002dPrecision.html However, before ctypes can support this, half-float support would need to be added to libffi through platform-specific assembly hackery. So I withdraw my suggestion that ctypes support should be a prerequisite for float16 support in the buffer protocol, but I still would like to see it in struct. BTW, what letter code is proposed for half-floats? The only unassigned letter in the word "half" is 'a'. Maybe it is time to extend the struct and buffer format specification to include field bit-width?
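[Editor's note: as context for the struct discussion above, the IEEE 754-2008 binary16 layout (1 sign bit, 5 exponent bits, 10 fraction bits) is compact enough to decode by hand. The sketch below shows roughly the computation that struct support for the format would have to perform; the function name is purely illustrative.]

```python
def half_to_float(h):
    """Decode a 16-bit IEEE 754-2008 binary16 bit pattern to a Python float."""
    sign = -1.0 if (h >> 15) & 1 else 1.0
    exp = (h >> 10) & 0x1F        # 5-bit biased exponent (bias 15)
    frac = h & 0x3FF              # 10-bit fraction
    if exp == 0x1F:               # all-ones exponent: infinities and NaNs
        return sign * float('inf') if frac == 0 else float('nan')
    if exp == 0:                  # zero exponent: signed zero and subnormals
        return sign * frac * 2.0 ** -24
    return sign * (1.0 + frac / 1024.0) * 2.0 ** (exp - 15)

# Spot checks against the layout described on the Wikipedia page cited above:
# 0x3C00 -> 1.0, 0xC000 -> -2.0, 0x7BFF -> 65504.0 (largest finite half)
```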
From robert.kern at gmail.com Thu Mar 31 18:36:30 2011 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 31 Mar 2011 11:36:30 -0500 Subject: [Python-ideas] Adding a half-float (16-bit) type to PEP 3118 (and possibly the struct module?) In-Reply-To: References: Message-ID: On 3/31/11 10:52 AM, Alexander Belopolsky wrote: > BTW, what letter code is proposed for half-floats? The only > unassigned letter in the word "half" is 'a'. Maybe it is time to > extend struct and buffer format specification to include field > bit-width? The proposed letter code is 'e', as used in numpy. I'm not sure of the logic that went behind the choice, except perhaps that 'e' is near 'd' and 'f'. It's not too late to change, though. I don't know of any other group that has decided on any kind of letter code for half-floats. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From alexander.belopolsky at gmail.com Thu Mar 31 19:58:35 2011 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Thu, 31 Mar 2011 13:58:35 -0400 Subject: [Python-ideas] Adding a half-float (16-bit) type to PEP 3118 (and possibly the struct module?) In-Reply-To: References: Message-ID: On Thu, Mar 31, 2011 at 12:36 PM, Robert Kern wrote: .. > The proposed letter code is 'e', as used in numpy. I'm not sure of the logic > that went behind the choice, except perhaps that 'e' is near 'd' and 'f'. So it is 'e' for half, 'f' for single and 'd' for double. Given that in the English alphabet the order is d, e, f, I find this choice rather unintuitive. > It's not too late to change, though. I don't know of any other group that > has decided on any kind of letter code for half-floats. > There is a language, Q, that uses "e" for single-precision floats. They call C-float "real" and C-double "float". See .
Codes "e" for binary32 and "f" for binary64 make some sense alphabetically, but would suggest "d" for binary16, which would neither work for Python nor for Q because "d" is double in Python and date in Q. Note that IEEE 754-2008 also defines a binary128, quadruple precision format. If we keep assigning single-letter codes to datatypes, struct/buffer format will soon resemble strftime with every letter of the English alphabet having some (often non-obvious) meaning. (If we have to choose a single-letter code, I would vote for 'a' for hAlf and 'u' for qUad.) I would rather see some syntax that would allow multi-character type specifications. For example, {binary16} for half-floats and {binary128} for quadruple precision floats. This syntax may also allow supporting a 3rd-party type registry and private extensions. From robert.kern at gmail.com Thu Mar 31 20:43:17 2011 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 31 Mar 2011 13:43:17 -0500 Subject: [Python-ideas] Adding a half-float (16-bit) type to PEP 3118 (and possibly the struct module?) In-Reply-To: References: Message-ID: On 3/31/11 12:58 PM, Alexander Belopolsky wrote: > On Thu, Mar 31, 2011 at 12:36 PM, Robert Kern wrote: > .. >> The proposed letter code is 'e', as used in numpy. I'm not sure of the logic >> that went behind the choice, except perhaps that 'e' is near 'd' and 'f'. > > So it is 'e' for half, 'f' for single and 'd' for double. Given that > in the English alphabet the order is d, e, f, I find this choice rather > unintuitive. > >> It's not too late to change, though. I don't know of any other group that >> has decided on any kind of letter code for half-floats. >> > > There is a language, Q, that uses "e" for single-precision floats. > They call C-float "real" and C-double "float". See > .
Codes "e" for binary32 and "f" > for binary64 make some sense alphabetically, but would suggest "d" for > binary16, which would neither work for Python nor for Q because "d" is > double in Python and date in Q. > > Note that IEEE 754-2008 also defines a binary128, quadruple precision > format. If we keep assigning single-letter codes to datatypes, > struct/buffer format will soon resemble strftime with every letter of > the English alphabet having some (often non-obvious) meaning. Oh, we're well down that path. :-) > (If we have > to choose a single-letter code, I would vote for 'a' for hAlf and 'u' > for qUad.) 'u' is already reserved in PEP 3118, and 'a' is already used in numpy, though not in the PEP 3118 interface implementation. > I would rather see some syntax that would allow multi-character type > specifications. For example, {binary16} for half-floats and > {binary128} for quadruple precision floats. This syntax may also allow > supporting a 3rd-party type registry and private extensions. PEP 3118 does define a parametric 't' type: 16t would be a 16-bit field with undefined internal format. Elsewhere in the thread I suggested an extension to this: add a freeform name to the type, allowing 3rd parties to agree on new types without needing changes to PEP 3118 or more single-letter codes. E.g. 16t{halffloat} -> IEEE 754-2008 half-float 128t{quadfloat} -> IEEE 754-2008 quad-float 96t{80bitfloat} -> 80-bit extended precision float stored in 96 bits etc. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From greg.ewing at canterbury.ac.nz Thu Mar 31 23:17:27 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 01 Apr 2011 10:17:27 +1300 Subject: [Python-ideas] Adding a half-float (16-bit) type to PEP 3118 (and possibly the struct module?)
In-Reply-To: References: Message-ID: <4D94EF67.6060409@canterbury.ac.nz> Alexander Belopolsky wrote: > The only unassigned letter in the word "half" is 'a'. 'alf float! I like it. -- Greg From cool-rr at cool-rr.com Thu Mar 31 23:28:09 2011 From: cool-rr at cool-rr.com (cool-RR) Date: Thu, 31 Mar 2011 23:28:09 +0200 Subject: [Python-ideas] Adding a `Counter.elements_length` method Message-ID: Hello folks, I suggest a `Counter.elements_length` method should be added that would give the same answer as `sum(counter.itervalues())`. What do you think? Ram. -------------- next part -------------- An HTML attachment was scrubbed... URL: From raymond.hettinger at gmail.com Thu Mar 31 23:37:04 2011 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Thu, 31 Mar 2011 14:37:04 -0700 Subject: [Python-ideas] Adding a `Counter.elements_length` method In-Reply-To: References: Message-ID: On Mar 31, 2011, at 2:28 PM, cool-RR wrote: > I suggest a `Counter.elements_length` method should be added that would give the same answer as `sum(counter.itervalues())`. What do you think? Please open a feature request on the bug tracker and assign it to me. For the time being, I'm reluctant to further fatten the API, but this isn't an unreasonable request and there is precedent with Smalltalk's Bag API. On the other hand, sum(c.values()) is somewhat trivial. Raymond From cool-rr at cool-rr.com Thu Mar 31 23:42:31 2011 From: cool-rr at cool-rr.com (cool-RR) Date: Thu, 31 Mar 2011 23:42:31 +0200 Subject: [Python-ideas] Adding a `Counter.elements_length` method In-Reply-To: References: Message-ID: On Thu, Mar 31, 2011 at 11:37 PM, Raymond Hettinger < raymond.hettinger at gmail.com> wrote: > > On Mar 31, 2011, at 2:28 PM, cool-RR wrote: > > > I suggest a `Counter.elements_length` method should be added that would > give the same answer as `sum(counter.itervalues())`. What do you think? > > Please open a feature request on the bug tracker and assign it to me.
> For the time being, I'm reluctant to further fatten the API, but this > isn't an unreasonable request and there is precedent with > Smalltalk's Bag API. On the other hand, sum(c.values()) is > somewhat trivial. > > > Raymond > Done: http://bugs.python.org/issue11733 I'm unable to assign it to you. (I guess it takes some kind of credentials to do that?) I put you on the nosy list. Ram. -------------- next part -------------- An HTML attachment was scrubbed... URL:
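[Editor's note: Raymond's observation that `sum(c.values())` already does the job can be demonstrated directly with the stdlib Counter; `elements_length` itself is only a proposal in this thread and does not exist in the stdlib.]

```python
from collections import Counter

c = Counter('abracadabra')

# What the proposed c.elements_length() would return: the total number
# of elements counted, with multiplicity.
total = sum(c.values())   # 5 a's + 2 b's + 2 r's + 1 c + 1 d = 11

# The same number, obtained by expanding the counter element by element.
assert total == len(list(c.elements()))
```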