From fijall at gmail.com Fri Jun 1 11:16:36 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 1 Jun 2012 11:16:36 +0200 Subject: [Python-Dev] a new type for sys.implementation In-Reply-To: References: <4FC7473F.5020802@hotpy.org> Message-ID: On Thu, May 31, 2012 at 6:56 PM, Benjamin Peterson wrote: > 2012/5/31 Nick Coghlan : > > On Thu, May 31, 2012 at 8:26 PM, Mark Shannon wrote: > >> Eric Snow wrote: > >>> > >>> The implementation for sys.implementation is going to use a new (but > >>> "private") type[1]. It's basically equivalent to the following: > >> > >> > >> Does this really need to be written in C rather than Python? > > > > Yes, because we want to use it in the sys module. As you get lower > > down in the interpreter stack, implementing things in Python actually > > starts getting painful because of bootstrapping issues (e.g. that's > > why both _structseq and collections.namedtuple exist). > > sys.implementation could be added by site or some other startup file. > > Yes, why not do that instead of a new thing in C? I don't care about PyPy actually (since we kind of have to implement sys.implementation in python/RPython anyway, since it'll be different); my concern is rather that more code in C usually means more trouble. Another question (might be off-topic here). What we do in PyPy to avoid bootstrapping issues (since we have quite a bit implemented in Python, rather than RPython) is to "freeze" the bytecode at compile time (or make time) and put it in the executable. This avoids all sorts of filesystem access issues, but might be slightly too complicated. Cheers, fijal
From ncoghlan at gmail.com Fri Jun 1 13:30:43 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 1 Jun 2012 21:30:43 +1000 Subject: [Python-Dev] a new type for sys.implementation In-Reply-To: References: <4FC7473F.5020802@hotpy.org> Message-ID: On Fri, Jun 1, 2012 at 7:16 PM, Maciej Fijalkowski wrote: >> sys.implementation could be added by site or some other startup file. >> > > Yes, why not do that instead of a new thing in C? I don't care about PyPy > actually (since we kind of have to implement sys.implementation in > python/RPython anyway, since it'll be different) The idea is that sys.implementation is the way some interpreter internal details are exposed to the Python layer, thus it needs to be handled in the implementation language, and explicitly *not* in Python (if it's in Python, then the implementation has to come up with some *other* API for accessing those internals from Python code, thus missing a large part of the point of the exercise). > Another question (might be off-topic here). What we do in PyPy to avoid > bootstrapping issues (since we have quite a bit implemented in Python, > rather than RPython) is to "freeze" the bytecode at compile time (or make > time) and put it in the executable. This avoids all sorts of filesystem > access issues, but might be slightly too complicated. Yeah, we're already doing that for importlib._bootstrap. It's a PITA (especially when changing the compiler), and certainly not easier than just writing some C code for a component that's explicitly defined as being implementation specific. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
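[For readers unfamiliar with the "freezing" technique Maciej and Nick mention above, a rough sketch of the idea. This is an illustration only, not the actual PyPy or importlib build code; the file name bootstrap.py and the module name are made up.]

    import marshal

    # Build time: compile the module's source to a code object and
    # serialize it. The resulting bytes would be emitted as a C byte
    # array and linked into the executable.
    with open('bootstrap.py') as f:
        code = compile(f.read(), '<frozen bootstrap>', 'exec')
    frozen = marshal.dumps(code)

    # Interpreter startup: rebuild and execute the code object with
    # no filesystem access at all.
    namespace = {'__name__': '_bootstrap'}
    exec(marshal.loads(frozen), namespace)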
From mark at hotpy.org Fri Jun 1 13:49:16 2012 From: mark at hotpy.org (Mark Shannon) Date: Fri, 01 Jun 2012 12:49:16 +0100 Subject: [Python-Dev] a new type for sys.implementation In-Reply-To: References: <4FC7473F.5020802@hotpy.org> Message-ID: <4FC8AC3C.6060005@hotpy.org> Nick Coghlan wrote: > On Fri, Jun 1, 2012 at 7:16 PM, Maciej Fijalkowski wrote: >>> sys.implementation could be added by site or some other startup file. >>> >> Yes, why not do that instead of a new thing in C? I don't care about PyPy >> actually (since we kind of have to implement sys.implementation in >> python/RPython anyway, since it'll be different) > > The idea is that sys.implementation is the way some interpreter > internal details are exposed to the Python layer, thus it needs to > be handled in the implementation language, and explicitly *not* in Python > (if it's in Python, then the implementation has to come up with some > *other* API for accessing those internals from Python code, thus > missing a large part of the point of the exercise). Why? What is wrong with something like the following (for CPython)?

class SysImplementation:
    "Define __repr__(), etc. here"
    ...

sys.implementation = SysImplementation()
sys.implementation.name = 'cpython'
sys.implementation.version = (3, 3, 0, 'alpha', 4)
sys.implementation.hexversion = 0x30300a4
sys.implementation.cache_tag = 'cpython-33'

Also, should the build/machine info be removed from sys.version and moved to sys.implementation? Cheers, Mark.
From ncoghlan at gmail.com Fri Jun 1 14:07:01 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 1 Jun 2012 22:07:01 +1000 Subject: [Python-Dev] a new type for sys.implementation In-Reply-To: <4FC8AC3C.6060005@hotpy.org> References: <4FC7473F.5020802@hotpy.org> <4FC8AC3C.6060005@hotpy.org> Message-ID: On Fri, Jun 1, 2012 at 9:49 PM, Mark Shannon wrote: > What is wrong with something like the following (for CPython)? > > class SysImplementation: > "Define __repr__(), etc. here" > ... > > sys.implementation = SysImplementation() > sys.implementation.name = 'cpython' > sys.implementation.version = (3, 3, 0, 'alpha', 4) > sys.implementation.hexversion = 0x30300a4 > sys.implementation.cache_tag = 'cpython-33' Because now you're double keying data in a completely unnecessary fashion. The sys module initialisation code already has access to the info needed to fill out sys.implementation correctly; moving that code somewhere else purely for the sake of getting to write it in Python instead of C would be foolish. Some things are best written in Python, some make sense to write in the underlying implementation language. This is one of the latter because it's all about implementation details. > Also, should the build/machine info be removed from sys.version > and moved to sys.implementation? No, as the contents of sys.version are already defined as implementation dependent. It remains as the easy-to-print version, while sys.implementation provides a programmatic interface. There may be other CPython-specific fields currently in sys.version that it makes sense to also include in sys.implementation, but: 1. That's *as well as*, not *instead of* 2. It's something that can be looked at *after* the initial implementation of the PEP has been checked in (and should only be done with a concrete use case, such as eliminating sys.version introspection in other parts of the stdlib or in third party code) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
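[For context, this is what consuming code would see under the PEP 421 draft, whichever language the attributes are filled in with. The exact values and reprs below are illustrative, echoing the CPython 3.3 alpha values Mark used above.]

    >>> import sys
    >>> sys.implementation.name
    'cpython'
    >>> sys.implementation.version
    (3, 3, 0, 'alpha', 4)
    >>> sys.implementation.cache_tag
    'cpython-33'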
From mark at hotpy.org Fri Jun 1 15:17:09 2012 From: mark at hotpy.org (Mark Shannon) Date: Fri, 01 Jun 2012 14:17:09 +0100 Subject: [Python-Dev] a new type for sys.implementation In-Reply-To: References: <4FC7473F.5020802@hotpy.org> <4FC8AC3C.6060005@hotpy.org> Message-ID: <4FC8C0D5.7000406@hotpy.org> Nick Coghlan wrote: > On Fri, Jun 1, 2012 at 9:49 PM, Mark Shannon wrote: >> What is wrong with something like the following (for CPython)? >> >> class SysImplementation: >> "Define __repr__(), etc. here" >> ... >> >> sys.implementation = SysImplementation() >> sys.implementation.name = 'cpython' >> sys.implementation.version = (3, 3, 0, 'alpha', 4) >> sys.implementation.hexversion = 0x30300a4 >> sys.implementation.cache_tag = 'cpython-33' > > Because now you're double keying data in a completely unnecessary > fashion. The sys module initialisation code already has access to the > info needed to fill out sys.implementation correctly; moving that code > somewhere else purely for the sake of getting to write it in Python > instead of C would be foolish. Some things are best written in Python, > some make sense to write in the underlying implementation language. > This is one of the latter because it's all about implementation > details. Previously you said that "it needs to be handled in the implementation language, and explicitly *not* in Python". I asked why that was. Now you seem to be suggesting that Python code would break the DRY rule, but the C code would not. If the C code can avoid duplication, then so can the Python code, as follows:

class SysImplementation:
    "Define __repr__(), etc. here"
    ...

import imp
tag = imp.get_tag()

sys.implementation = SysImplementation()
sys.implementation.name = tag[:tag.index('-')]
sys.implementation.version = sys.version_info
sys.implementation.hexversion = sys.hexversion
sys.implementation.cache_tag = tag

Cheers, Mark.
From martin at v.loewis.de Fri Jun 1 15:22:57 2012 From: martin at v.loewis.de (martin at v.loewis.de) Date: Fri, 01 Jun 2012 15:22:57 +0200 Subject: [Python-Dev] PEP 11 change: Windows Support Lifecycle Message-ID: <20120601152257.Horde.RrIAMtjz9kRPyMIxaeYjH9A@webmail.df.eu> I have just codified our current policy on supporting Windows releases, namely that we only support a Windows version until Microsoft ends its extended support period. As a consequence, Windows XP will be supported until 08/04/2014, and Windows 7 until 14/01/2020 (unless Microsoft extends that date further). I have also added wording on Visual Studio support which may still require consensus. My proposed policy is this: 1. There is only one VS version supported for any feature release. Because of the different branches, multiple versions may be in use. 2. The version that we use for a new feature release must still have mainstream support (meaning it can still be purchased regularly). 3. We should strive to keep the number of VS versions used simultaneously small. VS 2008 has mainstream support until 09/04/2013, so we could still have used it for 3.3; however, mainstream support ends within the likely lifetime of 3.3, so switching to VS 2010 was better. VS 2010 will have mainstream support until 14/07/2015, so we can likely use it for 3.4 as well, and only reconsider for 3.5 (at which point XP support will not be an issue anymore). VS 2012 is out for 3.4 as it doesn't support XP. Regards, Martin
From ncoghlan at gmail.com Fri Jun 1 15:49:49 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 1 Jun 2012 23:49:49 +1000 Subject: [Python-Dev] a new type for sys.implementation In-Reply-To: <4FC8C0D5.7000406@hotpy.org> References: <4FC7473F.5020802@hotpy.org> <4FC8AC3C.6060005@hotpy.org> <4FC8C0D5.7000406@hotpy.org> Message-ID: On Fri, Jun 1, 2012 at 11:17 PM, Mark Shannon wrote: > import imp > tag = imp.get_tag() > > sys.implementation = SysImplementation() > sys.implementation.name = tag[:tag.index('-')] > sys.implementation.version = sys.version_info > sys.implementation.hexversion = sys.hexversion This is wrong. sys.version_info is the language version, while sys.implementation.version is the implementation version. They happen to be the same for CPython because it's the reference interpreter, but splitting the definition like this allows (for example) a 0.1 release of a new implementation to target Python 3.3 and clearly indicate the difference between the two version numbers. As the PEP's rationale section says: "The status quo for implementation-specific information gives us that information in a more fragile, harder to maintain way. It is spread out over different modules or inferred from other information, as we see with platform.python_implementation(). This PEP is the main alternative to that approach. It consolidates the implementation-specific information into a single namespace and makes explicit that which was implicit." The idea of the PEP is to provide a standard channel from the implementation-specific parts of the interpreter (i.e. written in the implementation language) through to the implementation-independent code in the standard library (i.e. written in Python). It's intended to *replace* the legacy APIs in the long run, not rely on them. While we're unlikely to bother actually deprecating legacy APIs like imp.get_tag() (as it isn't worth the hassle), PEP 421 means we can at least avoid *adding* to them. To achieve the aims of the PEP without double-keying data it *has* to be written in C. The long term goal here is that all the code in the standard library should be implementation independent - PyPy, Jython, IronPython, et al should be able to grab it and just run it. That means the implementation specific stuff needs to migrate into the C code and get exposed through standard APIs. PEP 421 is one step along that road. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
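[The distinction Nick draws here can be made concrete with a sketch for a hypothetical alternative implementation at its own 0.1 release while targeting the 3.3 language. Both values below are invented for illustration.]

    sys.version_info            # (3, 3, 0, 'final', 0)  - language version
    sys.implementation.version  # (0, 1, 0, 'alpha', 0)  - interpreter version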
From brian at python.org Fri Jun 1 16:21:57 2012 From: brian at python.org (Brian Curtin) Date: Fri, 1 Jun 2012 09:21:57 -0500 Subject: [Python-Dev] PEP 11 change: Windows Support Lifecycle In-Reply-To: <20120601152257.Horde.RrIAMtjz9kRPyMIxaeYjH9A@webmail.df.eu> References: <20120601152257.Horde.RrIAMtjz9kRPyMIxaeYjH9A@webmail.df.eu> Message-ID: On Fri, Jun 1, 2012 at 8:22 AM, wrote: > I have just codified our current policy on supporting > Windows releases, namely that we only support a Windows > version until Microsoft ends its extended support period. > As a consequence, Windows XP will be supported until > 08/04/2014, and Windows 7 until 14/01/2020 (unless Microsoft > extends that date further). > > I have also added wording on Visual Studio support which may > still require consensus. My proposed policy is this: > > 1. There is only one VS version supported for any feature release. > Because of the different branches, multiple versions may be > in use. > 2. The version that we use for a new feature release must still > have mainstream support (meaning it can still be purchased > regularly). > 3. We should strive to keep the number of VS versions used > simultaneously small. > > VS 2008 has mainstream support until 09/04/2013, so we could still have > used it for 3.3; however, mainstream support ends within the > likely lifetime of 3.3, so switching to VS 2010 was better. VS 2010 > will have mainstream support until 14/07/2015, so we can likely > use it for 3.4 as well, and only reconsider for 3.5 (at which point XP > support will not be an issue anymore). VS 2012 is out for 3.4 as it > doesn't support XP. This all sounds good to me. I think the rough timeline of our future releases lines up nicely, e.g., the VS version available around Python 3.5 won't support XP and neither will we.
From barry at python.org Fri Jun 1 05:08:57 2012 From: barry at python.org (Barry Warsaw) Date: Thu, 31 May 2012 23:08:57 -0400 Subject: [Python-Dev] a new type for sys.implementation In-Reply-To: References: <4FC7473F.5020802@hotpy.org> <4FC8AC3C.6060005@hotpy.org> <4FC8C0D5.7000406@hotpy.org> Message-ID: <20120531230857.2409960c@resist.wooz.org> On Jun 01, 2012, at 11:49 PM, Nick Coghlan wrote: >The long term goal here is that all the code in the standard library >should be implementation independent - PyPy, Jython, IronPython, et al >should be able to grab it and just run it. That means the >implementation specific stuff needs to migrate into the C code and get >exposed through standard APIs. PEP 421 is one step along that road. Exactly. Or to put it another way, if you implemented sys.implementation in some stdlib Python module, you wouldn't be able to share that module between the various Python implementations. I think the stdlib should strive for *more* commonality across Python implementations, not less. Yes, you could conditionalize your way around that, but why do it when writing the code in the interpreter implementation language is easy enough? Plus, who wants to maintain the ugly mass of if-statements that would probably require? Eric's C code is easily auditable by anyone who knows the C API well enough, and it should be pretty trivial to write the equivalent in Java, RPython, or C#. Cheers, -Barry
From mark at hotpy.org Fri Jun 1 16:22:26 2012 From: mark at hotpy.org (Mark Shannon) Date: Fri, 01 Jun 2012 15:22:26 +0100 Subject: [Python-Dev] a new type for sys.implementation In-Reply-To: References: <4FC7473F.5020802@hotpy.org> <4FC8AC3C.6060005@hotpy.org> <4FC8C0D5.7000406@hotpy.org> Message-ID: <4FC8D022.9070206@hotpy.org> Nick Coghlan wrote: > On Fri, Jun 1, 2012 at 11:17 PM, Mark Shannon wrote: >> import imp >> tag = imp.get_tag() >> >> sys.implementation = SysImplementation() >> sys.implementation.name = tag[:tag.index('-')] >> sys.implementation.version = sys.version_info >> sys.implementation.hexversion = sys.hexversion > > This is wrong. sys.version_info is the language version, while > sys.implementation.version is the implementation version. They happen > to be the same for CPython because it's the reference interpreter, but I thought this list was for CPython, not other implementations ;) > splitting the definition like this allows (for example) a 0.1 release > of a new implementation to target Python 3.3 and clearly indicate the > difference between the two version numbers. > > As the PEP's rationale section says: "The status quo for > implementation-specific information gives us that information in a > more fragile, harder to maintain way.
It is spread out over different > modules or inferred from other information, as we see with > platform.python_implementation(). > > This PEP is the main alternative to that approach. It consolidates the > implementation-specific information into a single namespace and makes > explicit that which was implicit." > > The idea of the PEP is to provide a standard channel from the > implementation-specific parts of the interpreter (i.e. written in the > implementation language) through to the implementation-independent > code in the standard library (i.e. written in Python). It's intended > to *replace* the legacy APIs in the long run, not rely on them. I'm not arguing with the PEP, just discussing how to implement it. > > While we're unlikely to bother actually deprecating legacy APIs like > imp.get_tag() (as it isn't worth the hassle), PEP 421 means we can at > least avoid *adding* to them. To achieve the aims of the PEP without > double-keying data it *has* to be written in C. Could you justify that last sentence? What is special about C that means the information does not have to be duplicated there, yet it would have to be in Python? I've already provided two implementations. The second derives all the information it needs from other sources, thus conforming to DRY. If the use of imp bothers you (I just picked imp.get_tag() because it has the relevant info), would:

sys.implementation.cache_tag = (sys.implementation.name + '-' +
                                str(sys.version_info[0]) +
                                str(sys.version_info[1]))

be acceptable? > > The long term goal here is that all the code in the standard library > should be implementation independent - PyPy, Jython, IronPython, et al > should be able to grab it and just run it. That means the > implementation specific stuff needs to migrate into the C code and get > exposed through standard APIs. PEP 421 is one step along that road. I don't see how that is relevant. sys.implementation can never be part of the shared stdlib. That does not mean it has to be implemented in C. Cheers, Mark
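[For what it's worth, Mark's expression, as corrected above, does line up with what imp.get_tag() reports; an interpreter session shown for an illustrative CPython 3.3 build.]

    >>> import imp, sys
    >>> imp.get_tag()
    'cpython-33'
    >>> 'cpython' + '-' + str(sys.version_info[0]) + str(sys.version_info[1])
    'cpython-33'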
From ncoghlan at gmail.com Fri Jun 1 17:01:08 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 2 Jun 2012 01:01:08 +1000 Subject: [Python-Dev] a new type for sys.implementation In-Reply-To: <4FC8D022.9070206@hotpy.org> References: <4FC7473F.5020802@hotpy.org> <4FC8AC3C.6060005@hotpy.org> <4FC8C0D5.7000406@hotpy.org> <4FC8D022.9070206@hotpy.org> Message-ID: You have the burden of proof the wrong way around. sys is a builtin module. C is the default language, absent a compelling reason to use Python instead. The code is simple enough that there is no such reason, thus the implementation will be in C. -- Sent from my phone, thus the relative brevity :) On Jun 2, 2012 12:30 AM, "Mark Shannon" wrote: > Nick Coghlan wrote: > >> On Fri, Jun 1, 2012 at 11:17 PM, Mark Shannon wrote: >> >>> import imp >>> tag = imp.get_tag() >>> >>> sys.implementation = SysImplementation() >>> sys.implementation.name = tag[:tag.index('-')] >>> sys.implementation.version = sys.version_info >>> sys.implementation.hexversion = sys.hexversion >>> >> >> This is wrong. sys.version_info is the language version, while >> sys.implementation.version is the implementation version. They happen >> to be the same for CPython because it's the reference interpreter, but >> > > I thought this list was for CPython, not other implementations ;) > > splitting the definition like this allows (for example) a 0.1 release >> of a new implementation to target Python 3.3 and clearly indicate the >> difference between the two version numbers. >> >> As the PEP's rationale section says: "The status quo for >> implementation-specific information gives us that information in a >> more fragile, harder to maintain way. It is spread out over different >> modules or inferred from other information, as we see with >> platform.python_implementation(). >> >> This PEP is the main alternative to that approach. It consolidates the >> implementation-specific information into a single namespace and makes >> explicit that which was implicit." >> >> The idea of the PEP is to provide a standard channel from the >> implementation-specific parts of the interpreter (i.e. written in the >> implementation language) through to the implementation-independent >> code in the standard library (i.e. written in Python). It's intended >> to *replace* the legacy APIs in the long run, not rely on them. >> > > I'm not arguing with the PEP, just discussing how to implement it. > > >> While we're unlikely to bother actually deprecating legacy APIs like >> imp.get_tag() (as it isn't worth the hassle), PEP 421 means we can at >> least avoid *adding* to them. To achieve the aims of the PEP without >> double-keying data it *has* to be written in C. >> > > Could you justify that last sentence? What is special about C that means > the information does not have to be duplicated there, yet it would have to be in Python? > I've already provided two implementations. The second derives all the > information it needs from other sources, thus conforming to DRY. > If the use of imp bothers you (I just picked imp.get_tag() because it has the relevant info), would: > > sys.implementation.cache_tag = (sys.implementation.name + '-' + > str(sys.version_info[0]) + str(sys.version_info[1])) > > be acceptable? > > >> The long term goal here is that all the code in the standard library >> should be implementation independent - PyPy, Jython, IronPython, et al >> should be able to grab it and just run it. That means the >> implementation specific stuff needs to migrate into the C code and get >> exposed through standard APIs. PEP 421 is one step along that road. >> > > I don't see how that is relevant. sys.implementation can never be part of > the shared stdlib. That does not mean it has to be implemented in C. > > Cheers, > Mark
From barry at python.org Fri Jun 1 05:57:56 2012 From: barry at python.org (Barry Warsaw) Date: Thu, 31 May 2012 23:57:56 -0400 Subject: [Python-Dev] a new type for sys.implementation In-Reply-To: <4FC8D022.9070206@hotpy.org> References: <4FC7473F.5020802@hotpy.org> <4FC8AC3C.6060005@hotpy.org> <4FC8C0D5.7000406@hotpy.org> <4FC8D022.9070206@hotpy.org> Message-ID: <20120531235756.7bc41953@resist.wooz.org> On Jun 01, 2012, at 03:22 PM, Mark Shannon wrote: >I thought this list was for CPython, not other implementations ;) This list serves a dual purpose.
Its primary purpose is to discuss development of Python-the-language. It's also where discussions about CPython-the-implementation occur, but that's because CPython is the current reference implementation of the language. While python-dev is not the primary forum for discussing implementation details of alternative implementations, I hope that those are not off-limits for this list, and should be especially welcome for issues that pertain to Python-the-language. Remember too that PEPs drive language changes, PEPs (generally) apply to all implementations of the language, and python-dev is where PEPs get discussed. Cheers, -Barry From status at bugs.python.org Fri Jun 1 18:06:50 2012 From: status at bugs.python.org (Python tracker) Date: Fri, 1 Jun 2012 18:06:50 +0200 (CEST) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20120601160650.291E81CBBB@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2012-05-25 - 2012-06-01) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. Issues counts and deltas: open 3450 (+10) closed 23308 (+54) total 26758 (+64) Open issues with patches: 1448 Issues opened (44) ================== #11820: idle3 shell os.system swallows shell command output http://bugs.python.org/issue11820 reopened by ned.deily #12370: Use of super overwrites use of __class__ in class namespace http://bugs.python.org/issue12370 reopened by ncoghlan #12510: IDLE get_the_calltip mishandles raw strings http://bugs.python.org/issue12510 reopened by terry.reedy #12932: dircmp does not allow non-shallow comparisons http://bugs.python.org/issue12932 reopened by terry.reedy #14917: Make os.symlink on Win32 detect if target is directory http://bugs.python.org/issue14917 opened by larry #14918: Incorrect explanation of TypeError exception http://bugs.python.org/issue14918 opened by tsou #14919: what disables one from adding self to the "nosy" list http://bugs.python.org/issue14919 opened by tshepang #14922: mailbox.Maildir.get_message() may fail when Maildir dirname is http://bugs.python.org/issue14922 opened by poliveira #14923: Even faster UTF-8 decoding http://bugs.python.org/issue14923 opened by storchaka #14926: random.seed docstring needs edit of "*a *is" http://bugs.python.org/issue14926 opened by smichr #14927: add "Do not supply int argument" to random.shuffle docstring http://bugs.python.org/issue14927 opened by smichr #14928: Fix importlib bootstrapping issues http://bugs.python.org/issue14928 opened by pitrou #14929: IDLE crashes on *Edit / Find in files ...* command http://bugs.python.org/issue14929 opened by fgracia #14933: Misleading documentation about weakrefs http://bugs.python.org/issue14933 opened by pitrou #14934: generator objects can clear their weakrefs before being resurr http://bugs.python.org/issue14934 opened by pitrou #14935: PEP 384 Refactoring applied to _csv module http://bugs.python.org/issue14935 opened by Robin.Schreiber #14936: PEP 3121, 384 refactoring applied to curses_panel module http://bugs.python.org/issue14936 opened by Robin.Schreiber #14937: IDLE's deficiency in the completion of file names (Python 32, http://bugs.python.org/issue14937 opened by fgracia #14940: Usage documentation for pysetup http://bugs.python.org/issue14940 opened by ncoghlan #14944: Setup & Usage documentation for pydoc, idle & 2to3 http://bugs.python.org/issue14944 opened by ncoghlan #14945: Setup & Usage documentation for selected stdlib modules 
http://bugs.python.org/issue14945 opened by ncoghlan #14946: packaging???s pysetup script should be named differently than http://bugs.python.org/issue14946 opened by eric.araujo #14949: Documentation should state clearly the differences between "py http://bugs.python.org/issue14949 opened by alexis #14950: Make packaging.install API less confusing and more extensible http://bugs.python.org/issue14950 opened by alexis #14953: Reimplement subset of multiprocessing.sharedctypes using memor http://bugs.python.org/issue14953 opened by sbt #14954: weakref doc clarification http://bugs.python.org/issue14954 opened by stoneleaf #14955: hmac.secure_compare() is not time-independent for unicode stri http://bugs.python.org/issue14955 opened by Jon.Oberheide #14956: custom PYTHONPATH may break apps embedding Python http://bugs.python.org/issue14956 opened by jankratochvil #14957: Improve docs for str.splitlines http://bugs.python.org/issue14957 opened by ncoghlan #14959: ttk.Scrollbar in Notebook widget freezes http://bugs.python.org/issue14959 opened by davidjamesbeck #14963: Use an iterative implementation for contextlib.ExitStack.__exi http://bugs.python.org/issue14963 opened by ncoghlan #14964: distutils2.utils.resolve_name cleanup http://bugs.python.org/issue14964 opened by Ronny.Pfannschmidt #14965: super() and property inheritance behavior http://bugs.python.org/issue14965 opened by ???.??? #14966: Fully document subprocess.CalledProcessError http://bugs.python.org/issue14966 opened by ncoghlan #14967: distutils2.utils.resolve_name cannot be implemented to give co http://bugs.python.org/issue14967 opened by Ronny.Pfannschmidt #14968: Section "Inplace Operators" of :mod:`operator` should be a sub http://bugs.python.org/issue14968 opened by larsmans #14971: (unittest) loadTestsFromName does not work on method with a de http://bugs.python.org/issue14971 opened by alex.75 #14973: restore python2 unicode literals in "ur" strings http://bugs.python.org/issue14973 opened by rurpy2 #14974: rename packaging.pypi to packaging.index http://bugs.python.org/issue14974 opened by alexis #14975: import unicodedata, DLL load failed on Python 2.7.3 http://bugs.python.org/issue14975 opened by yfdyh000 #14976: Queue.PriorityQueue() is not interrupt safe http://bugs.python.org/issue14976 opened by JohanAR #14977: mailcap does not respect precedence in the presence of wildcar http://bugs.python.org/issue14977 opened by manu-beffara #14978: distutils Extension fails to be created with unicode package n http://bugs.python.org/issue14978 opened by guyomes #14979: pdb: Link to source http://bugs.python.org/issue14979 opened by techtonik Most recent 15 issues with no replies (15) ========================================== #14979: pdb: Link to source http://bugs.python.org/issue14979 #14978: distutils Extension fails to be created with unicode package n http://bugs.python.org/issue14978 #14966: Fully document subprocess.CalledProcessError http://bugs.python.org/issue14966 #14957: Improve docs for str.splitlines http://bugs.python.org/issue14957 #14954: weakref doc clarification http://bugs.python.org/issue14954 #14953: Reimplement subset of multiprocessing.sharedctypes using memor http://bugs.python.org/issue14953 #14934: generator objects can clear their weakrefs before being resurr http://bugs.python.org/issue14934 #14933: Misleading documentation about weakrefs http://bugs.python.org/issue14933 #14926: random.seed docstring needs edit of "*a *is" http://bugs.python.org/issue14926 #14922: mailbox.Maildir.get_message() 
may fail when Maildir dirname is http://bugs.python.org/issue14922 #14918: Incorrect explanation of TypeError exception http://bugs.python.org/issue14918 #14917: Make os.symlink on Win32 detect if target is directory http://bugs.python.org/issue14917 #14916: PyRun_InteractiveLoop fails to run interactively when using a http://bugs.python.org/issue14916 #14914: pysetup installed distribute despite dry run option being spec http://bugs.python.org/issue14914 #14913: tokenize the source to manage Pdb breakpoints http://bugs.python.org/issue14913 Most recent 15 issues waiting for review (15) ============================================= #14978: distutils Extension fails to be created with unicode package n http://bugs.python.org/issue14978 #14968: Section "Inplace Operators" of :mod:`operator` should be a sub http://bugs.python.org/issue14968 #14964: distutils2.utils.resolve_name cleanup http://bugs.python.org/issue14964 #14963: Use an iterative implementation for contextlib.ExitStack.__exi http://bugs.python.org/issue14963 #14955: hmac.secure_compare() is not time-independent for unicode stri http://bugs.python.org/issue14955 #14954: weakref doc clarification http://bugs.python.org/issue14954 #14953: Reimplement subset of multiprocessing.sharedctypes using memor http://bugs.python.org/issue14953 #14937: IDLE's deficiency in the completion of file names (Python 32, http://bugs.python.org/issue14937 #14936: PEP 3121, 384 refactoring applied to curses_panel module http://bugs.python.org/issue14936 #14935: PEP 384 Refactoring applied to _csv module http://bugs.python.org/issue14935 #14929: IDLE crashes on *Edit / Find in files ...* command http://bugs.python.org/issue14929 #14923: Even faster UTF-8 decoding http://bugs.python.org/issue14923 #14913: tokenize the source to manage Pdb breakpoints http://bugs.python.org/issue14913 #14910: argparse: disable abbreviation http://bugs.python.org/issue14910 #14900: cProfile does not take its result headers as sort arguments http://bugs.python.org/issue14900 Top 10 most discussed issues (10) ================================= #14814: Implement PEP 3144 (the ipaddress module) http://bugs.python.org/issue14814 13 msgs #6721: Locks in python standard library should be sanitized on fork http://bugs.python.org/issue6721 12 msgs #14956: custom PYTHONPATH may break apps embedding Python http://bugs.python.org/issue14956 12 msgs #14673: add sys.implementation http://bugs.python.org/issue14673 11 msgs #13475: Add '-p'/'--path0' command line option to override sys.path[0] http://bugs.python.org/issue13475 10 msgs #14901: Python Windows FAQ is Very Outdated http://bugs.python.org/issue14901 10 msgs #3177: Add shutil.open http://bugs.python.org/issue3177 9 msgs #14852: json and ElementTree parsers misbehave on streams containing m http://bugs.python.org/issue14852 8 msgs #12510: IDLE get_the_calltip mishandles raw strings http://bugs.python.org/issue12510 7 msgs #14923: Even faster UTF-8 decoding http://bugs.python.org/issue14923 7 msgs Issues closed (55) ================== #8323: buffer objects are picklable but result is not unpicklable http://bugs.python.org/issue8323 closed by sbt #8739: Update to smtpd.py to RFC 5321 http://bugs.python.org/issue8739 closed by r.david.murray #9041: raised exception is misleading http://bugs.python.org/issue9041 closed by meador.inge #9244: multiprocessing.pool: Worker crashes if result can't be encode http://bugs.python.org/issue9244 closed by sbt #9864: email.utils.{parsedate,parsedate_tz} should have better return 
http://bugs.python.org/issue9864 closed by r.david.murray #10839: email module should not allow some header field repetitions http://bugs.python.org/issue10839 closed by r.david.murray #11050: email.utils.getaddresses behavior contradicts RFC2822 http://bugs.python.org/issue11050 closed by r.david.murray #11685: possible SQL injection into db APIs via table names... sqlite3 http://bugs.python.org/issue11685 closed by petri.lehtinen #11785: email subpackages documentation problems http://bugs.python.org/issue11785 closed by r.david.murray #12091: multiprocessing: simplify ApplyResult and MapResult with threa http://bugs.python.org/issue12091 closed by sbt #12338: multiprocessing.util._eintr_retry doen't recalculate timeouts http://bugs.python.org/issue12338 closed by sbt #12515: email modifies the message structure when the parsed email is http://bugs.python.org/issue12515 closed by r.david.murray #12586: Provisional new email API: new policy implementing custom head http://bugs.python.org/issue12586 closed by r.david.murray #13751: multiprocessing.pool hangs if any worker raises an Exception w http://bugs.python.org/issue13751 closed by sbt #14007: xml.etree.ElementTree - XMLParser and TreeBuilder's doctype() http://bugs.python.org/issue14007 closed by eli.bendersky #14128: _elementtree should expose types and factory functions consist http://bugs.python.org/issue14128 closed by eli.bendersky #14548: garbage collection just after multiprocessing's fork causes ex http://bugs.python.org/issue14548 closed by sbt #14690: Use monotonic time for sched, trace and subprocess modules http://bugs.python.org/issue14690 closed by python-dev #14703: Update PEP metaprocesses to describe PEP czar role http://bugs.python.org/issue14703 closed by ncoghlan #14731: Enhance Policy framework in preparation for adding email6 poli http://bugs.python.org/issue14731 closed by r.david.murray #14744: Use _PyUnicodeWriter API in str.format() internals http://bugs.python.org/issue14744 closed by haypo #14775: Dict untracking can result in quadratic dict build-up http://bugs.python.org/issue14775 closed by pitrou #14796: Calendar module test coverage improved http://bugs.python.org/issue14796 closed by r.david.murray #14818: C implementation of ElementTree: Some functions should support http://bugs.python.org/issue14818 closed by eli.bendersky #14835: plistlib: output empty elements correctly http://bugs.python.org/issue14835 closed by hynek #14857: Direct access to lexically scoped __class__ is broken in 3.3 http://bugs.python.org/issue14857 closed by python-dev #14876: IDLE highlighting theme does not preview with user-selected fo http://bugs.python.org/issue14876 closed by terry.reedy #14881: multiprocessing.dummy craches when self._parent._children does http://bugs.python.org/issue14881 closed by sbt #14909: Fix incorrect use of *Realloc() and *Resize() http://bugs.python.org/issue14909 closed by kristjan.jonsson #14920: help(urllib.parse) fails when LANG=C http://bugs.python.org/issue14920 closed by orsenthil #14921: New trove classifier for simple printers of nested lists http://bugs.python.org/issue14921 closed by dholth #14924: re.finditer() oddity http://bugs.python.org/issue14924 closed by ezio.melotti #14925: email package does not register defect when blank line between http://bugs.python.org/issue14925 closed by r.david.murray #14930: Make memoryview weakrefable http://bugs.python.org/issue14930 closed by sbt #14931: Compattible http://bugs.python.org/issue14931 closed by r.david.murray #14932: Python 
3.3.0a3 build fails on MacOS 10.7 with XCode 4.3.2 http://bugs.python.org/issue14932 closed by hynek #14938: 'import my_pkg.__init__' creates duplicate modules http://bugs.python.org/issue14938 closed by brett.cannon #14939: Usage documentation for pyvenv http://bugs.python.org/issue14939 closed by vinay.sajip #14941: "Using Python on Windows" end user docs are out of date http://bugs.python.org/issue14941 closed by brian.curtin #14942: add PyType_New() http://bugs.python.org/issue14942 closed by ncoghlan #14943: winreg OpenKey doesn't work as documented http://bugs.python.org/issue14943 closed by brian.curtin #14947: Missing cross reference in types.new_class documentation http://bugs.python.org/issue14947 closed by python-dev #14948: setup.cfg - rename home_page to homepage http://bugs.python.org/issue14948 closed by eric.araujo #14951: Daikon/KVasir report: Invalid read of size 4 http://bugs.python.org/issue14951 closed by cassou #14952: Cannot run regrtest with amd64/debug on windows http://bugs.python.org/issue14952 closed by kristjan.jonsson #14958: IDLE 3 and PEP414 - highlighting unicode literals http://bugs.python.org/issue14958 closed by ned.deily #14960: about the slowly HTTPServer http://bugs.python.org/issue14960 closed by neologix #14961: map() and filter() methods for iterators http://bugs.python.org/issue14961 closed by ncoghlan #14962: When changing IDLE configuration all text in shell window lose http://bugs.python.org/issue14962 closed by ned.deily #14969: Exception context is not suppressed correctly in contextlib.Ex http://bugs.python.org/issue14969 closed by python-dev #14970: -v command line option is broken http://bugs.python.org/issue14970 closed by brett.cannon #14972: listcomp with nested classes http://bugs.python.org/issue14972 closed by flox #14980: OSX: ranlib: file: libpython2.7.a(pymath.o) has no symbols http://bugs.python.org/issue14980 closed by ronaldoussoren #10997: Duplicate entries in IDLE "Recent Files" menu item on OS X http://bugs.python.org/issue10997 closed by ned.deily #1672568: silent error in email.message.Message.get_payload http://bugs.python.org/issue1672568 closed by r.david.murray From ericsnowcurrently at gmail.com Fri Jun 1 18:11:31 2012 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Fri, 1 Jun 2012 10:11:31 -0600 Subject: [Python-Dev] a new type for sys.implementation In-Reply-To: References: <4FC7473F.5020802@hotpy.org> <4FC8AC3C.6060005@hotpy.org> Message-ID: On Fri, Jun 1, 2012 at 6:07 AM, Nick Coghlan wrote: > There may be other CPython-specific fields currently in sys.version > that it makes sense to also include in sys.implementation, but: > 1. That's *as well as*, not *instead of* > 2. It's something that can be looked at *after* the initial > implementation of the PEP has been checked in (and should only be done > with a concrete use case, such as eliminating sys.version > introspection in other parts of the stdlib or in third party code) Precisely. 
The PEP addresses this point directly: http://www.python.org/dev/peps/pep-0421/#adding-new-required-attributes -eric
From ericsnowcurrently at gmail.com Fri Jun 1 18:17:10 2012 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Fri, 1 Jun 2012 10:17:10 -0600 Subject: [Python-Dev] a new type for sys.implementation In-Reply-To: <4FC8C0D5.7000406@hotpy.org> References: <4FC7473F.5020802@hotpy.org> <4FC8AC3C.6060005@hotpy.org> <4FC8C0D5.7000406@hotpy.org> Message-ID: On Fri, Jun 1, 2012 at 7:17 AM, Mark Shannon wrote: > Previously you said that "it needs to be handled in the implementation > language, and explicitly *not* in Python". > I asked why that was. > > Now you seem to be suggesting that Python code would break the DRY rule, > but the C code would not. If the C code can avoid duplication, then so > can the Python code, as follows: > > class SysImplementation: > > "Define __repr__(), etc. here" > ... > > import imp > tag = imp.get_tag() > > sys.implementation = SysImplementation() > sys.implementation.name = tag[:tag.index('-')] > sys.implementation.version = sys.version_info > sys.implementation.hexversion = sys.hexversion > sys.implementation.cache_tag = tag This was actually the big motivator for PEP 421. Once PEP 421 is final, imp.get_tag() will get its value from sys.implementation, rather than the other way around. The point is to pull the implementation-specific values into one place (as much as is reasonable). -eric
From ericsnowcurrently at gmail.com Fri Jun 1 18:28:22 2012 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Fri, 1 Jun 2012 10:28:22 -0600 Subject: [Python-Dev] a new type for sys.implementation In-Reply-To: <20120531230857.2409960c@resist.wooz.org> References: <4FC7473F.5020802@hotpy.org> <4FC8AC3C.6060005@hotpy.org> <4FC8C0D5.7000406@hotpy.org> <20120531230857.2409960c@resist.wooz.org> Message-ID: On Thu, May 31, 2012 at 9:08 PM, Barry Warsaw wrote: > On Jun 01, 2012, at 11:49 PM, Nick Coghlan wrote: > >>The long term goal here is that all the code in the standard library >>should be implementation independent - PyPy, Jython, IronPython, et al >>should be able to grab it and just run it. That means the >>implementation specific stuff needs to migrate into the C code and get >>exposed through standard APIs. PEP 421 is one step along that road. > > Exactly. Or to put it another way, if you implemented sys.implementation in > some stdlib Python module, you wouldn't be able to share that module between > the various Python implementations. I think the stdlib should strive for > *more* commonality across Python implementations, not less. Yes, you could > conditionalize your way around that, but why do it when writing the code in > the interpreter implementation language is easy enough? Plus, who wants to > maintain the ugly mass of if-statements that would probably require? Not only that, but any new/experimental/etc. implementation would either have to be blessed in that module by Python committers (a la the platform module*) or would have to use a fork of the standard library. * I don't mean to put down the platform module, which has served us well and will continue to do so. Rather, I'm just pointing out that a small part of it demonstrates a limitation in the stdlib relative to alternate implementations. > > Eric's C code is easily auditable by anyone who knows the C API well enough, > and it should be pretty trivial to write the equivalent in Java, > RPython, or C#. And I'm by no means a C veteran. :) -eric
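[Eric's point above about inverting the dependency amounts to something like this sketch: a hypothetical pure-Python equivalent of what imp.get_tag() would do once PEP 421 is final. The real function stays in C.]

    import sys

    def get_tag():
        # After PEP 421, the tag is derived from sys.implementation,
        # not the other way around.
        return sys.implementation.cache_tag   # e.g. 'cpython-33'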
From alon at horev.net Fri Jun 1 18:59:16 2012 From: alon at horev.net (Alon Horev) Date: Fri, 1 Jun 2012 19:59:16 +0300 Subject: [Python-Dev] setprofile and settrace inconsistency In-Reply-To: References: Message-ID: Hi, When setting a trace function with settrace, the trace function, when called with a new scope, can return another trace function or None, indicating that the inner scope should not be traced. I used settrace for some time, but calling the trace function for every line of code is a performance killer. So I moved on to setprofile, which calls a trace function on every function entry/exit. Now here's the problem: the return value from the trace function is ignored (intentionally), denying the possibility to skip tracing of 'hot' or 'not interesting' code. I would like to propose two alternatives: 1. setprofile will not ignore the return value and will mimic settrace's behavior. 2. setprofile is just a wrapper around settrace that limits its functionality; let's make settrace more flexible so that setprofile becomes redundant. Here's how: settrace will receive an argument called 'events', and the trace function will fire only on events contained in that list. For example: setprofile = partial(settrace, events=['call', 'return']) I personally prefer the second. Some context to this issue: I'm building a Python tracer - a logger that records each and every function call. In order for it to run in production systems, the overhead should be minimal. I would like to allow the user to say which functions/modules/classes to trace or skip. For example: the user will skip all math/CPU-intensive operations. Another example: the user will want to trace his Django app code but not the Django framework. Your thoughts? Thanks, Alon Horev
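[Alon's second option can be approximated with today's API; a minimal sketch. The module-name filter and the 'django.' prefix are made-up examples, and note that the global trace function is still invoked on every 'call' event, so this does not remove all of the overhead Alon describes.]

    import sys

    def make_tracer(events, skip_prefixes=('django.',)):
        def tracer(frame, event, arg):
            module = frame.f_globals.get('__name__', '')
            if module.startswith(skip_prefixes):
                return None            # no further events from this scope
            if event in events:
                print(event, module, frame.f_code.co_name)
            return tracer              # keep tracing inside this scope
        return tracer

    # Roughly what setprofile would do if it honoured return values:
    sys.settrace(make_tracer(events=('call', 'return')))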
From brett at python.org Fri Jun 1 19:27:34 2012 From: brett at python.org (Brett Cannon) Date: Fri, 1 Jun 2012 13:27:34 -0400 Subject: [Python-Dev] what is happening with the regex module going into Python 3.3? Message-ID: About the only thing I can think of from the language summit that we discussed doing for Python 3.3 that has not come about is accepting the regex module and getting it into the stdlib. Is this still being worked towards?
From brett at python.org Fri Jun 1 19:33:49 2012 From: brett at python.org (Brett Cannon) Date: Fri, 1 Jun 2012 13:33:49 -0400 Subject: [Python-Dev] whither PEP 407 and 413 (release cycle PEPs)? Message-ID: Are these dead in the water or are we going to try to change our release cycle? I'm just asking since 3.3 final is due out in about 3 months, and deciding on this (along with shifting things if we do make a change) could end up taking that long. I suspect that if we don't do this for 3.3 we are probably never going to do it for the Python 3 series as a whole.
From tjreedy at udel.edu Fri Jun 1 20:15:32 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Fri, 01 Jun 2012 14:15:32 -0400 Subject: [Python-Dev] setprofile and settrace inconsistency In-Reply-To: References: Message-ID: On 6/1/2012 11:22 AM, Alon Horev wrote: > Your thoughts? Your post on python-ideas is the right place for this and discussion should be concentrated there. -- Terry Jan Reedy
From tjreedy at udel.edu Fri Jun 1 20:57:46 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Fri, 01 Jun 2012 14:57:46 -0400 Subject: [Python-Dev] what is happening with the regex module going into Python 3.3? In-Reply-To: References: Message-ID: On 6/1/2012 1:27 PM, Brett Cannon wrote: > About the only thing I can think of from the language summit that we > discussed doing for Python 3.3 that has not come about is accepting the > regex module and getting it into the stdlib. Is this still being worked > towards? Since there is no PEP to define the details* of such an addition, I assume that no particular core developer focused on this. There have been a lot of other additions to take people's attention. Also, I do not remember seeing anything from Matthew Barnett about his views on the proposal. * Some details: Replacement of or addition to re. Relation to the continued external project and 'ownership' of the code. Relation, if any, to the Unicode Regular Expression Technical Standard, and its levels of conformance. http://unicode.org/reports/tr18/ The addition of ipaddress was not a drop-in process. There have been some docstring changes and clarifications, some code changes and cleanups, and removal of stuff only needed for 2.x. I suspect that regex would get at least as much tuning once seriously looked at. -- Terry Jan Reedy
From brian at python.org Fri Jun 1 21:14:36 2012 From: brian at python.org (Brian Curtin) Date: Fri, 1 Jun 2012 14:14:36 -0500 Subject: [Python-Dev] what is happening with the regex module going into Python 3.3?
In-Reply-To: References: Message-ID: On Fri, Jun 1, 2012 at 1:57 PM, Terry Reedy wrote: > On 6/1/2012 1:27 PM, Brett Cannon wrote: >> >> About the only thing I can think of from the language summit that we >> discussed doing for Python 3.3 that has not come about is accepting the >> regex module and getting it into the stdlib. Is this still being worked >> towards? >> >> Since there is no PEP to define the details* of such an addition, I assume >> that no particular core developer focused on this. There have been a lot of >> other additions to take people's attention. Also, I do not remember seeing >> anything from Matthew Barnett about his views on the proposal. >> >> * Some details: >> >> Replacement of or addition to re. At the language summit it was proposed that this regex project would enter as re, and the current re would move to sre. Everyone seemed to agree. > Relation to the continued external project and 'ownership' of the code. As with anything else, no more external.
From larry at hastings.org Fri Jun 1 23:23:18 2012 From: larry at hastings.org (Larry Hastings) Date: Fri, 01 Jun 2012 14:23:18 -0700 Subject: [Python-Dev] Python Language Summit, Florence, July 2012 In-Reply-To: <4FC60D44.7050602@hastings.org> References: <4FC60D44.7050602@hastings.org> Message-ID: <4FC932C6.6030101@hastings.org> On 05/30/2012 05:06 AM, Larry Hastings wrote: > Like Python? Like Italy? Like meetings? Then I've got a treat for you! > > I'll be chairing a Python Language Summit this July in historic > Florence, Italy. It'll be on July 1st (the day before EuroPython > starts) at the Grand Hotel Mediterraneo conference center. I have an update and a clarification. First, the clarification: the Language Summit is really intended for core developers. If you're not a core developer, then the meeting isn't for you. (Don't worry, you're not missing anything. It's about as interesting as doing your taxes.) Second, the update: enough people said it was too early for them to attend that we've rescheduled. It's now Saturday July 7th, the first day of the sprints. As before, please email me if you can attend (and you're a core developer!) so I can get a rough headcount. Ciao, //arry/
From ncoghlan at gmail.com Sat Jun 2 02:23:50 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 2 Jun 2012 10:23:50 +1000 Subject: [Python-Dev] whither PEP 407 and 413 (release cycle PEPs)? In-Reply-To: References: Message-ID: My preference is that we plan and prepare during the 3.4 cycle, with a view to making a change for 3.5. I'd also like the first 3.4 alpha to be released in parallel with 3.3.1. Both PEPs should be updated with concrete transition and communication plans before any other action can seriously be considered. We have the potential to seriously upset a *lot* of people if we handle such a change badly (particularly when so many are still annoyed about Python 3). Cheers, Nick. -- Sent from my phone, thus the relative brevity :)
From ncoghlan at gmail.com Sat Jun 2 02:37:06 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 2 Jun 2012 10:37:06 +1000 Subject: [Python-Dev] what is happening with the regex module going into Python 3.3? In-Reply-To: References: Message-ID: ipaddress really made it in because I personally ran into the limitations of not having IP address support in the stdlib.
I ended up doing quite a bit of prompting to ensure the process of cleaning up the API to modern stdlib standards didn't stall (even now, generating a module reference from the docstrings is still a pending task). With regex, the pain isn't there, since re already covers such a large subset of what regex provides. My perspective is that it's now too late to make a change that big for 3.3, but the in-principle approval holds for anyone that wants to work with MRAB and get the idea written up as a PEP for 3.4. Cheers, Nick. -- Sent from my phone, thus the relative brevity :)
From breamoreboy at yahoo.co.uk Sat Jun 2 02:37:18 2012 From: breamoreboy at yahoo.co.uk (Mark Lawrence) Date: Sat, 02 Jun 2012 01:37:18 +0100 Subject: [Python-Dev] what is happening with the regex module going into Python 3.3? In-Reply-To: References: Message-ID: On 01/06/2012 18:27, Brett Cannon wrote: > About the only thing I can think of from the language summit that we > discussed doing for Python 3.3 that has not come about is accepting the > regex module and getting it into the stdlib. Is this still being worked > towards? > Umpteen versions of regex have been available on PyPI for years. Umpteen bugs against the original re module have been fixed. If regex can't now go into the standard library, what on earth can? -- Cheers. Mark Lawrence.
From brian at python.org Sat Jun 2 02:40:14 2012 From: brian at python.org (Brian Curtin) Date: Fri, 1 Jun 2012 19:40:14 -0500 Subject: [Python-Dev] what is happening with the regex module going into Python 3.3? In-Reply-To: References: Message-ID: On Fri, Jun 1, 2012 at 7:37 PM, Mark Lawrence wrote: > Umpteen versions of regex have been available on PyPI for years. Umpteen > bugs against the original re module have been fixed. If regex can't now go > into the standard library, what on earth can? Reviewing a 4000 line patch is probably the main roadblock here.
From ncoghlan at gmail.com Sat Jun 2 02:42:13 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 2 Jun 2012 10:42:13 +1000 Subject: [Python-Dev] [Python-checkins] cpython (3.2): #14957: clarify splitlines docs. In-Reply-To: References: Message-ID: On Jun 2, 2012 6:21 AM, "r.david.murray" wrote: > > http://hg.python.org/cpython/rev/24572015e24f > changeset: 77288:24572015e24f > branch: 3.2 > parent: 77285:bf6305bce3af > user: R David Murray > date: Fri Jun 01 16:19:36 2012 -0400 > summary: > #14957: clarify splitlines docs. > > Initial patch by Michael Driscoll, I added the example. > > files: > Doc/library/stdtypes.rst | 8 +++++++- > 1 files changed, 7 insertions(+), 1 deletions(-) > > > diff --git a/Doc/library/stdtypes.rst b/Doc/library/stdtypes.rst > --- a/Doc/library/stdtypes.rst > +++ b/Doc/library/stdtypes.rst > @@ -1329,7 +1329,13 @@ > > Return a list of the lines in the string, breaking at line boundaries. Line > breaks are not included in the resulting list unless *keepends* is given and > - true. > + true. This method uses the universal newlines approach to splitting lines. > + Unlike :meth:`~str.split`, if the string ends with line boundary characters > + the returned list does ``not`` have an empty last element. > + > + For example, ``'ab c\n\nde fg\rkl\r\n'.splitlines()`` returns > + ``['ab c', '', 'de fg', 'kl']``, while the same call with ``splinelines(True)`` > + returns ``['ab c\n', '\n, 'de fg\r', 'kl\r\n']``. s/splinelines/splitlines/ Maybe also show what split() would do for that string? > > > .. method:: str.startswith(prefix[, start[, end]]) > > -- > Repository URL: http://hg.python.org/cpython
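[For reference, an interpreter session showing the behaviour the patch describes, with the method name corrected as Nick suggests, plus the split() comparison he asks about.]

    >>> 'ab c\n\nde fg\rkl\r\n'.splitlines()
    ['ab c', '', 'de fg', 'kl']
    >>> 'ab c\n\nde fg\rkl\r\n'.splitlines(True)
    ['ab c\n', '\n', 'de fg\r', 'kl\r\n']
    >>> 'ab c\n\nde fg\rkl\r\n'.split('\n')   # split keeps the trailing empty string
    ['ab c', '', 'de fg\rkl\r', '']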
method:: str.startswith(prefix[, start[, end]]) > > -- > Repository URL: http://hg.python.org/cpython > > _______________________________________________ > Python-checkins mailing list > Python-checkins at python.org > http://mail.python.org/mailman/listinfo/python-checkins > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdmurray at bitdance.com Sat Jun 2 05:24:34 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Fri, 01 Jun 2012 23:24:34 -0400 Subject: [Python-Dev] [Python-checkins] cpython (3.2): #14957: clarify splitlines docs. In-Reply-To: References: Message-ID: <20120602032434.D6B3A250072@webabinitio.net> On Sat, 02 Jun 2012 10:42:13 +1000, Nick Coghlan wrote: > > + For example, ``'ab c\n\nde fg\rkl\r\n'.splitlines()`` returns > > + ``['ab c', '', 'de fg', 'kl']``, while the same call with > ``splinelines(True)`` > > + returns ``['ab c\n', '\n, 'de fg\r', 'kl\r\n']``. > > s/splinelines/splitlines/ Oops. > Maybe also show what split() would do for that string? I'd rather not, since the split examples are just above it in the docs. --David From ncoghlan at gmail.com Sat Jun 2 07:14:02 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 2 Jun 2012 15:14:02 +1000 Subject: [Python-Dev] what is happening with the regex module going into Python 3.3? In-Reply-To: References: Message-ID: On Sat, Jun 2, 2012 at 10:37 AM, Mark Lawrence wrote: > On 01/06/2012 18:27, Brett Cannon wrote: >> >> About the only thing I can think of from the language summit that we >> discussed doing for Python 3.3 that has not come about is accepting the >> regex module and getting it into the stdlib. Is this still being worked >> towards? >> > > Umpteen versions of regex have been available on pypi for years. Umpteen > bugs against the original re module have been fixed. ?If regex can't now go > into the standard library, what on earth can? That's why it's approved *in principle* already. However, it's not a simple matter of dropping something into the standard library and calling it done, especially an extension module as complex as regex. Even integrating a simple pure Python module like ipaddr took substantial effort: 1. The API had to be reviewed to see if it was suitable for someone that was *not* familiar with the problem domain, but was instead learning about it from the standard library documentation. This isn't a big concern for regex, since it is replacing the existing re module, but this is the main reason ipaddr became ipaddress before PEP 3144 was approved (The ipaddr API plays fast and loose with network terminology in a way that someone that already *knows* that terminology can easily grasp, but would have been incredibly confusing to someone that is discovering those terms for the first time). 2. The code had to actually be added to the standard library (not a big effort for PEP 3144 - saving ipaddress.py into Lib/ and test_ipaddress.py into Lib/test/ pretty much covered it) 3. Redundant 2.x cruft needed to be weeded out (ongoing) 4. The howto guide needed to be incorporated into the documentation (and rewritten to be more suitable for genuine beginners) 5. An API module reference still needs to be incorporated into the standard library reference The effort to integrate regex is going to be substantially higher, since it's a significantly more complicated module: 1. A new, non-trivial C extension needs to be incorporated into both the autotools and Windows build processes 2. 
Due to PEP 393, there's a major change to the string implementation in 3.3. Does regex still build against that? Even if it builds, it should probably be ported to the new API for performance reasons. 3. Does regex build cleanly on all platforms supported by CPython? If not, do we need to keep the existing re module around as a fallback mechanism? 4. How do we merge the test suites? Do we keep the existing test suite, add the regex test suite, then filter for duplication afterwards? 5. What, precisely, *are* the backwards incompatibilities between regex and re? Does the standard library trigger any of them? Does the test suite? 6. How will the PyPI backport be maintained in the future? The amount of backwards compatibility cruft in standard library code should be minimised, but that potentially makes backports more difficult. ipaddress is in the 3.3 standard library because Peter Moody cared enough about the concept to initially submit it for inclusion, and because I volunteered to drive the review and integration process forward and to be the final arbiter of what counted as "good enough" for inclusion. That hasn't happened yet for regex - either nobody has cared enough to write a PEP for it, or the bystander effect has kicked in and everyone that cares is assuming *someone else* will take up the burden of being the PEP champion. So that's the first step: someone needs to take http://bugs.python.org/issue2636 and turn it into a PEP (searching the python-dev and python-ideas archives for references to previous discussions of the topic would also be good, along with summarising the open Unicode related re bugs reported by Tom Christensen where the answer is currently "use regex from PyPI instead of the standard library's re module" [1]). [1] http://bugs.python.org/issue?%40search_text=&ignore=file%3Acontent&title=&%40columns=title&id=&%40columns=id&stage=&creation=&creator=tchrist&activity=&%40columns=activity&%40sort=activity&actor=&nosy=&type=&components=&versions=&dependencies=&assignee=&keywords=&priority=&%40group=priority&status=1&%40columns=status&resolution=&nosy_count=&message_count=&%40pagesize=50&%40startwith=0&%40queryname=&%40old-queryname=&%40action=search Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From ncoghlan at gmail.com Sat Jun 2 07:16:34 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 2 Jun 2012 15:16:34 +1000 Subject: [Python-Dev] [Python-checkins] cpython (3.2): #14957: clarify splitlines docs. In-Reply-To: <20120602032434.D6B3A250072@webabinitio.net> References: <20120602032434.D6B3A250072@webabinitio.net> Message-ID: On Sat, Jun 2, 2012 at 1:24 PM, R. David Murray wrote: >> Maybe also show what split() would do for that string? > > I'd rather not, since the split examples are just above it in > the docs. Fair point - one of the downsides of reviewing a diff out of context :) Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From tismer at stackless.com Sat Jun 2 18:59:38 2012 From: tismer at stackless.com (Christian Tismer) Date: Sat, 02 Jun 2012 18:59:38 +0200 Subject: [Python-Dev] PEP 405 (built-in virtualenv) status In-Reply-To: <4F626278.7030701@oddbird.net> References: <4F626278.7030701@oddbird.net> Message-ID: <4FCA467A.8060008@stackless.com> On 15.03.12 22:43, Carl Meyer wrote: > A brief status update on PEP 405 (built-in virtualenv) and the open issues: > > 1. As mentioned in the updated version of the language summit notes, > Nick Coghlan has agreed to pronounce on the PEP. 
> 2. Ned Deily discovered at the PyCon sprints that the current reference implementation does not work with an OS X framework build of Python. We're still working to discover the reason for that and determine possible fixes.
>
> 3. If anyone knows of a pair of packages in which both need to build compiled extensions, and the compilation of the second depends on header files from the first, that would be helpful to me in testing the other open issue (installation of header files). (I thought numpy and scipy might fit this bill, but I'm currently not able to install numpy at all under Python 3 using pysetup, easy_install, or pip.)

Hi Carl,

I appreciate this effort very well, as we are heavily using virtualenv in a project. One urgent question: will this feature be backported to Python 2.7? We still need 2.7 for certain reasons (PyPy is not ready for 3.x).

cheers - chris

--
Christian Tismer :^) tismerysoft GmbH : Have a break! Take a ride on Python's
Karl-Liebknecht-Str. 121 : *Starship* http://starship.python.net/
14482 Potsdam : PGP key -> http://pgp.uni-mainz.de
work +49 173 24 18 776 mobile +49 173 24 18 776 fax n.a.
PGP 0x57F3BF04 9064 F4E1 D754 C2FF 1619 305B C09C 5A3B 57F3 BF04
whom do you want to sponsor today? http://www.stackless.com/

From tismer at stackless.com Sat Jun 2 19:33:20 2012
From: tismer at stackless.com (Christian Tismer)
Date: Sat, 02 Jun 2012 19:33:20 +0200
Subject: [Python-Dev] PEP 405 (built-in virtualenv) status
In-Reply-To: References: <4F626278.7030701@oddbird.net> <4F626712.3030906@gmail.com> <4F62692E.8040203@oddbird.net> <4F678690.3000600@oddbird.net>
Message-ID: <4FCA4E60.8050204@stackless.com>

On 21.03.12 14:35, Kristján Valur Jónsson wrote:
>> -----Original Message-----
>> From: Carl Meyer [mailto:carl at oddbird.net]
>> Sent: 19. mars 2012 19:19
>> To: Kristján Valur Jónsson
>> Cc: Python-Dev (python-dev at python.org)
>> Subject: Re: [Python-Dev] PEP 405 (built-in virtualenv) status
>>
>> Hello Kristján,
>> I think there's one important (albeit odd and magical) bit of Python's current behavior that you are missing in your blog post. All of the initial sys.path directories are constructed relative to sys.prefix and sys.exec_prefix, and those values in turn are determined (if PYTHONHOME is not set) by walking up the filesystem tree from the location of the Python binary, looking for the existence of a file at the relative path "lib/pythonX.X/os.py" (or "Lib/os.py" on Windows). Python takes the existence of this file to mean that it's found the standard library, and sets sys.prefix accordingly. Thus, you can achieve reliable full isolation from any installed Python, with no need for environment variables, simply by placing a file (it can even be empty) at that relative location from the location of your Python binary. You will still get some default paths added on sys.path, but they will all be relative to your Python binary and thus presumably under your control; nothing from any other location will be on sys.path. I doubt you will find this solution satisfyingly elegant, but you might nonetheless find it practically useful.
>
> Right. Thanks for explaining this. Although, it would appear that Python also has a mechanism for detecting that it is being run from a build environment and ignores PYTHONHOME in that case too.
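(The landmark search Carl describes above is easy to picture in Python. The following is an illustration only -- the real logic lives in C, in Modules/getpath.c and PC/getpathp.c, and differs in detail -- but it shows the shape of the walk:)

    import os
    import sys

    def guess_prefix(executable):
        # Walk up from the directory containing the binary, looking for the
        # stdlib landmark "lib/pythonX.Y/os.py" ("Lib/os.py" on Windows).
        landmark = os.path.join(
            "lib", "python%d.%d" % sys.version_info[:2], "os.py")
        path = os.path.dirname(os.path.abspath(executable))
        while True:
            if os.path.isfile(os.path.join(path, landmark)):
                return path  # this directory becomes sys.prefix
            parent = os.path.dirname(path)
            if parent == path:
                return None  # hit the filesystem root; use the built-in default
            path = parent

    # Usage sketch: guess_prefix(sys.executable)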
>> Beyond that possible tweak, while I certainly wouldn't oppose any effort to clean up / document / make-optional Python's startup sys.path-setting behavior, I think it's mostly orthogonal to PEP 405, and I don't think it would be helpful to expand the scope of PEP 405 to include that effort.
>
> Well, it sounds as if this PEP can definitely be used as the basis for work to completely customize the startup behaviour. In my case, it would be desirable to be able to completely ignore any PYTHONHOME environment variable (and any others). I'd also like to be able to manually set up the sys.path.
>
> Perhaps if we can set things up that one key (ignore_env) will cause the environment variables to be ignored, and then, an empty home key will set the sys.path to point to the directory of the .cfg file. Presumably, this would then cause a site.py found at that place to be executed, and one could code whatever extra logic one wants into that file. Possibly a "site" key in the .cfg file would achieve the same goal, allowing the user to call this setup file whatever he wants.
>
> With something like this in place, the built-in behaviour of python.exe to realize that it is running from a "build" environment and in that case ignore PYTHONPATH and set a special sys.path could all be removed from being hardcoded, and instead be coded into some buildsite.py in the cpython root folder.

As an old windows guy, I very much agree with Kristjan. The venv approach is great. Windows is just a quite weird situation to handle in some cases, and a super-simple way to get rid of *any* built-in behavior concerning setup would be great.

The idea of moving path setup stuff into the python.exe stub makes very much sense to me. This would make pythonxx.dll a really useful library to be shared. Kristjan can then provide his own custom python.exe and be assured the python dll will not try to lurk into something unforeseen. I think this would also be a security aspect: the dll can be considered really safe for sandboxing if it does not even have the ability to change the python behavior by built-in magic.

Besides that, I agree with Ethan that explicit is better than implicit, again. I am missing even more explicitness: Python has IMHO too much behavior like this: 'by default, look into xxx, but if a yyy exists, behave differently'. I don't like this, because the absence of a simple file changes the whole system behavior. I would do it the other way round: as soon as you introduce the venv.cfg file, enforce its existence completely! If that file is not there, then python exits with an error message. This way you can safely ensure its existence, and the file can be made read-only and so on. A non-existent file is just a bad thing and is hard to make read-only ;-)

So please let's abandon the old 'if exists ...' pattern, at least this one time. By the explicit cfg file, the file can clearly say if there is a virtual env or not. Together with removing magic from the .dll, the situation at least for windows would greatly improve.

ciao - chris

--
Christian Tismer :^) tismerysoft GmbH : Have a break! Take a ride on Python's
Karl-Liebknecht-Str. 121 : *Starship* http://starship.python.net/
14482 Potsdam : PGP key -> http://pgp.uni-mainz.de
work +49 173 24 18 776 mobile +49 173 24 18 776 fax n.a.
PGP 0x57F3BF04 9064 F4E1 D754 C2FF 1619 305B C09C 5A3B 57F3 BF04
whom do you want to sponsor today? http://www.stackless.com/

From benjamin at python.org Sun Jun 3 06:01:25 2012
From: benjamin at python.org (Benjamin Peterson)
Date: Sat, 2 Jun 2012 21:01:25 -0700
Subject: [Python-Dev] [Python-checkins] Daily reference leaks (d9b7399d9e45): sum=462
In-Reply-To: References: Message-ID:

2012/6/2 :
> results for d9b7399d9e45 on branch "default"
> --------------------------------------------
>
> test_smtplib leaked [154, 154, 154] references, sum=462

Can other people reproduce this one? I can't.

--
Regards,
Benjamin

From eliben at gmail.com Sun Jun 3 06:28:10 2012
From: eliben at gmail.com (Eli Bendersky)
Date: Sun, 3 Jun 2012 06:28:10 +0200
Subject: [Python-Dev] [Python-checkins] Daily reference leaks (d9b7399d9e45): sum=462
In-Reply-To: References: Message-ID:

On Sun, Jun 3, 2012 at 6:01 AM, Benjamin Peterson wrote:
> 2012/6/2 :
>> results for d9b7399d9e45 on branch "default"
>> --------------------------------------------
>>
>> test_smtplib leaked [154, 154, 154] references, sum=462
>
> Can other people reproduce this one? I can't.

I can't either:

$ ./python -m test.regrtest -R : test_smtplib
[1/1] test_smtplib
beginning 9 repetitions
123456789
.........
1 test OK.
[172101 refs]

(Ubuntu 10.04, x64 2.6.32-41-generic)

From martin at v.loewis.de Sun Jun 3 12:51:29 2012
From: martin at v.loewis.de ("Martin v. Löwis")
Date: Sun, 03 Jun 2012 12:51:29 +0200
Subject: [Python-Dev] what is happening with the regex module going into Python 3.3?
In-Reply-To: References: Message-ID: <4FCB41B1.70907@v.loewis.de>

On 02.06.2012 02:37, Mark Lawrence wrote:
> On 01/06/2012 18:27, Brett Cannon wrote:
>> About the only thing I can think of from the language summit that we discussed doing for Python 3.3 that has not come about is accepting the regex module and getting it into the stdlib. Is this still being worked towards?
>
> Umpteen versions of regex have been available on pypi for years. Umpteen bugs against the original re module have been fixed. If regex can't now go into the standard library, what on earth can?

Something that isn't that big, so that a maintainer can really read all of it. I really wish the bug fixes had been made to SRE, instead of rewriting it all. So I'm -0 on this regex module.

If this isn't added to 3.3, I'll start encouraging people to contribute changes to SRE for 3.4, and just ignore the regex module.

Regards,
Martin

From martin at v.loewis.de Sun Jun 3 13:22:31 2012
From: martin at v.loewis.de ("Martin v. Löwis")
Date: Sun, 03 Jun 2012 13:22:31 +0200
Subject: [Python-Dev] whither PEP 407 and 413 (release cycle PEPs)?
In-Reply-To: References: Message-ID: <4FCB48F7.60102@v.loewis.de>

On 01.06.2012 19:33, Brett Cannon wrote:
> Are these dead in the water or are we going to try to change our release cycle? I'm just asking since 3.3 final is due out in about 3 months and deciding on this along with shifting things if we do make a change could end up taking that long and I suspect if we don't do this for 3.3 we are probably never going to do it for Python 3 series as a whole.

I'm -1 on both PEPs.

For PEP 407, I fail to see what problem it solves. The PEP is short on rationale, so let me guess what the motivation for the PEP is:

- Some people (Barry in particular) are in favor of timed releases. I don't know what the actual motivation for timed releases is, but ISTM that PEP 407 is an attempt to make Python generate timed releases. I'm -1 on that because of the additional effort for release managers.
In particular, a strict schedule will limit vacation time, and require the release team to coordinate their vacation plans. With two alphas, one beta, and one rc, plus LTS bugfix releases, there may well be one release of some Python version every month.

- Some contributors are worried about getting their contributions "out", and some core committers are worried that we get fewer contributions because of that.

While I well recall the feeling of getting changes "out", the real concerns only exist for the very first contribution:

* Those gurus on python-dev are certainly working on a fix for this very important issue already, how could they not have noticed? My work will be futile, and they'll fix it the day before I submit the patch.
* Now that the patch is uploaded, can somebody *please* review it? How hard can it be to look over 20 lines of code?
* Now that they committed it, when can I start telling my friends about it? The next release takes ages, and waiting is not fun.

While these concerns are all real, it's really a matter of contributor education to deal with them. The longer people contribute to open source (or participate in any kind of software development), the more they learn that this is just how things work. The PEP really only addresses the third concern, whereas I think that the second is much more relevant.

As for us not getting enough contributions: can we please worry about that when we have all patches processed that already have been contributed?

I also think that the PEP will have a negative effect on Python users: incompatible changes will spread faster (people will think that it's ok to break stuff since it was announced for three releases, when it wasn't actually announced in the last LTS). Users will feel the urgency of updating, and at the same time uneasiness about doing so as it may break stuff. People *already* get behind by two or three releases (in the 2.x series); getting behind 10 releases will just make them feel sad.

For PEP 413, much the same concerns apply. In addition, I think it's too complicated, both for users and for the actual technical implementation.

Regards,
Martin

From fijall at gmail.com Sun Jun 3 13:49:43 2012
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Sun, 3 Jun 2012 13:49:43 +0200
Subject: [Python-Dev] JITted regex engine from pypy
Message-ID:

Hi

I was reading a bit about the regex module and I would like to present another solution for speeding up the re module for Python.

So, as a bit of background - pypy has a re-compatible module. It's also JITted and it's also exportable as a C library (that is, a library you can call from C with a C API, not a python extension module). I wonder if it would be worth putting some work into it to make it a library that CPython can use.

On the minus side, the JIT only works on x86 and x86_64; on the plus side, since it's 100% API compatible, it can be used as a _xxx speedup module relatively easily.

Do people have opinions?

Cheers,
fijal

From hs at ox.cx Sun Jun 3 14:47:30 2012
From: hs at ox.cx (Hynek Schlawack)
Date: Sun, 03 Jun 2012 14:47:30 +0200
Subject: [Python-Dev] whither PEP 407 and 413 (release cycle PEPs)?
In-Reply-To: <4FCB48F7.60102@v.loewis.de>
References: <4FCB48F7.60102@v.loewis.de>
Message-ID: <4FCB5CE2.2090503@ox.cx>

On 03.06.12 13:22, "Martin v. Löwis" wrote:
> - Some contributors are worried about getting their contributions "out", and some core committers are worried that we get fewer contributions because of that.
>
> While I well recall the feeling of getting changes "out", the real concerns only exist for the very first contribution:
> * Those gurus on python-dev are certainly working on a fix for this very important issue already, how could they not have noticed? My work will be futile, and they'll fix it the day before I submit the patch.
> * Now that the patch is uploaded, can somebody *please* review it? How hard can it be to look over 20 lines of code?
> * Now that they committed it, when can I start telling my friends about it? The next release takes ages, and waiting is not fun.
>
> While these concerns are all real, it's really a matter of contributor education to deal with them. The longer people contribute to open source (or participate in any kind of software development), the more they learn that this is just how things work. The PEP really only addresses the third concern, whereas I think that the second is much more relevant.

As a newish core developer I'd like to stress that Martin is 100% right here. Point three was never an issue to me -- the biggest satisfaction is seeing the actual commit with your own name and appearing in ACKS -- you _can_ already tell your friends/tweet/blog about it at this point. And people do.

OTOH point two is _very_ frustrating. The most colorful bikeshed is still much better than ignored patches. Personally, I gave up on CPython after my patches languished for weeks until Antoine revived the tickets three months later. I'm sure we've lost plenty of talent this way already, and _if_ we want to attract more talented contributors, _this_ is the issue to tackle. The release process has nothing to do with that.

I guess the PEPs (especially 413) are more about the bad rap the stdlib has been getting lately (e.g. ).

> As for us not getting enough contributions: can we please worry about that when we have all patches processed that already have been contributed?

Realistically, that means "never".

Cheers,
Hynek

From ironfroggy at gmail.com Sun Jun 3 15:06:55 2012
From: ironfroggy at gmail.com (Calvin Spealman)
Date: Sun, 3 Jun 2012 09:06:55 -0400
Subject: [Python-Dev] JITted regex engine from pypy
In-Reply-To: References: Message-ID:

On Sun, Jun 3, 2012 at 7:49 AM, Maciej Fijalkowski wrote:
> Hi
>
> I was reading a bit about the regex module and I would like to present another solution for speeding up the re module for Python.
>
> So, as a bit of background - pypy has a re-compatible module. It's also JITted and it's also exportable as a C library (that is, a library you can call from C with a C API, not a python extension module). I wonder if it would be worth putting some work into it to make it a library that CPython can use.
>
> On the minus side, the JIT only works on x86 and x86_64; on the plus side, since it's 100% API compatible, it can be used as a _xxx speedup module relatively easily.
>
> Do people have opinions?

A few questions and comments about such an idea, from someone who hasn't used PyPy yet and doesn't understand the setup involved.

1) Would PyPy be required to build this as a C-compatible library, such that CPython could use it as an extension module? That is, would it make PyPy a required part of building CPython?
2) Are there benchmarks comparing the performance of this implementation to the existing re module and the proposed regex module?

3) How would the maintenance work? Where would the module live "officially"? Does CPython fork it, or is it extracted from PyPy in a way it can be installed as an external dependency? How does CPython get changes upstream?

4) I may be remembering wrong, but I recall maintenance ease to be one of the justifications for the regex module. How would your proposal compare? Is a random developer looking to fix a bug in his way going to find this easier or more difficult to get his head around?

The idea is interesting.

> Cheers,
> fijal

--
Read my blog! I depend on your acceptance of my opinion! I am interesting!
http://techblog.ironfroggy.com/
Follow me if you're into that sort of thing: http://www.twitter.com/ironfroggy

From fijall at gmail.com Sun Jun 3 15:16:32 2012
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Sun, 3 Jun 2012 15:16:32 +0200
Subject: [Python-Dev] JITted regex engine from pypy
In-Reply-To: References: Message-ID:

On Sun, Jun 3, 2012 at 3:06 PM, Calvin Spealman wrote:
> On Sun, Jun 3, 2012 at 7:49 AM, Maciej Fijalkowski wrote:
>> Hi
>>
>> I was reading a bit about the regex module and I would like to present another solution for speeding up the re module for Python.
>>
>> So, as a bit of background - pypy has a re-compatible module. It's also JITted and it's also exportable as a C library (that is, a library you can call from C with a C API, not a python extension module). I wonder if it would be worth putting some work into it to make it a library that CPython can use.
>>
>> On the minus side, the JIT only works on x86 and x86_64; on the plus side, since it's 100% API compatible, it can be used as a _xxx speedup module relatively easily.
>>
>> Do people have opinions?
>
> A few questions and comments about such an idea, from someone who hasn't used PyPy yet and doesn't understand the setup involved.
>
> 1) Would PyPy be required to build this as a C-compatible library, such that CPython could use it as an extension module? That is, would it make PyPy a required part of building CPython?

It depends a bit how we organize stuff. PyPy (as the pypy repository checkout, not the pypy interpreter) would be required to build the necessary C files (and as such also for maintenance, since the C files are not hand-editable), but pypy would not be required to compile the C files.

> 2) Are there benchmarks comparing the performance of this implementation to the existing re module and the proposed regex module?

I don't think so. It really is reasonably fast in a lot of cases, and it can definitely be made faster in more cases. The main power comes from JITting - so you compile a piece of assembler per regex created. I doubt a C library can come close to this, approach-wise. Of course there will be cases and cases, but generally speaking the approach is superior. It would be cool if someone did the benchmarks to see how they look *right now*.

> 3) How would the maintenance work? Where would the module live "officially"? Does CPython fork it, or is it extracted from PyPy in a way it can be installed as an external dependency? How does CPython get changes upstream?
I would honestly hope it can be maintained as a part of pypy and then cpython would just use it. But those are just hopes.

> 4) I may be remembering wrong, but I recall maintenance ease to be one of the justifications for the regex module. How would your proposal compare? Is a random developer looking to fix a bug in his way going to find this easier or more difficult to get his head around?

I think it's relatively easy since it's python code after all, but what do I know. Someone has to have a look; it lives here: https://bitbucket.org/pypy/pypy/src/default/pypy/rlib/rsre

I would like people to form opinions themselves on whether it's more or less maintenance effort. On our side, we'll maintain this particular part of the code anyway (so it's also easier, because you leave it to others).

Cheers,
fijal

From ncoghlan at gmail.com Sun Jun 3 15:46:54 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 3 Jun 2012 23:46:54 +1000
Subject: [Python-Dev] JITted regex engine from pypy
In-Reply-To: References: Message-ID:

The embedded (in both senses of the term) use cases for CPython pretty much kill the idea, I'm afraid.

Those cases are also one of the biggest question marks over incorporating regex wholesale instead of incrementally updating the existing engine to achieve feature parity.

Publishing such a JIT compiled module via PyPI would be great, though.

Cheers,
Nick.

--
Sent from my phone, thus the relative brevity :)

From fijall at gmail.com Sun Jun 3 16:41:48 2012
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Sun, 3 Jun 2012 16:41:48 +0200
Subject: [Python-Dev] JITted regex engine from pypy
In-Reply-To: References: Message-ID:

On Sun, Jun 3, 2012 at 3:46 PM, Nick Coghlan wrote:
> The embedded (in both senses of the term) use cases for CPython pretty much kill the idea, I'm afraid.

As I said, it can (and should) definitely be optional.

> Those cases are also one of the biggest question marks over incorporating regex wholesale instead of incrementally updating the existing engine to achieve feature parity.
>
> Publishing such a JIT compiled module via PyPI would be great, though.
>
> Cheers,
> Nick.

From martin at v.loewis.de Sun Jun 3 17:21:50 2012
From: martin at v.loewis.de ("Martin v. Löwis")
Date: Sun, 03 Jun 2012 17:21:50 +0200
Subject: [Python-Dev] JITted regex engine from pypy
In-Reply-To: References: Message-ID: <4FCB810E.7050106@v.loewis.de>

> On the minus side, the JIT only works on x86 and x86_64; on the plus side, since it's 100% API compatible, it can be used as a _xxx speedup module relatively easily.
>
> Do people have opinions?

The main concern for re is not speed, but functionality. The Python re module needs to grow a number of features, and correct a number of bugs. So 100% compatible is actually not good enough. 95% compatible (with the features added and the bugs fixed) would be better.

OTOH, sharing the re code with PyPy would be a desirable goal, as would be writing the re code in Python (although SRE already implements significant parts in Python).

As a speedup module, it's uninteresting - we want to simplify maintenance, not complicate it. So this can only work if it replaces SRE.
Regards,
Martin

From fijall at gmail.com Sun Jun 3 17:31:22 2012
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Sun, 3 Jun 2012 17:31:22 +0200
Subject: [Python-Dev] JITted regex engine from pypy
In-Reply-To: <4FCB810E.7050106@v.loewis.de>
References: <4FCB810E.7050106@v.loewis.de>
Message-ID:

On Sun, Jun 3, 2012 at 5:21 PM, "Martin v. Löwis" wrote:
>> On the minus side, the JIT only works on x86 and x86_64; on the plus side, since it's 100% API compatible, it can be used as a _xxx speedup module relatively easily.
>>
>> Do people have opinions?
>
> The main concern for re is not speed, but functionality. The Python re module needs to grow a number of features, and correct a number of bugs. So 100% compatible is actually not good enough. 95% compatible (with the features added and the bugs fixed) would be better.
>
> OTOH, sharing the re code with PyPy would be a desirable goal, as would be writing the re code in Python (although SRE already implements significant parts in Python).

We did not reimplement those parts in RPython; they're still in python (so the sre engine does not accept regexes, but instead the lower-level description, etc. etc.)

> As a speedup module, it's uninteresting - we want to simplify maintenance, not complicate it. So this can only work if it replaces SRE.
>
> Regards,
> Martin

From martin at v.loewis.de Sun Jun 3 17:32:13 2012
From: martin at v.loewis.de ("Martin v. Löwis")
Date: Sun, 03 Jun 2012 17:32:13 +0200
Subject: [Python-Dev] Python 3.3 on Windows 2000
Message-ID: <4FCB837D.5010604@v.loewis.de>

It seems that by moving to VS 2010, we have killed Windows 2000 support, for the same reason VS 2012 would kill XP support: Windows 2000 apparently won't recognize the .exe files as executables anymore. I haven't actually tested this: can somebody please confirm?

A year ago, Brian put a statement into PEP 11 that 3.3 would support Windows 2000 still, but with a warning. Under my recent PEP change, Windows 2000 does not need to be supported anymore since 13.07.2010, when Microsoft's extended support expired.

So I propose to just remove the claim from the PEP that 3.3 would still be supported, and not go through any notification period. Objections?

As a consequence, we could then change some of the deferred-loading stuff for "new" (i.e. XP+) API into proper linking.

Regards,
Martin

From barry at python.org Sun Jun 3 18:03:19 2012
From: barry at python.org (Barry Warsaw)
Date: Sun, 3 Jun 2012 12:03:19 -0400
Subject: [Python-Dev] whither PEP 407 and 413 (release cycle PEPs)?
In-Reply-To: <4FCB48F7.60102@v.loewis.de>
References: <4FCB48F7.60102@v.loewis.de>
Message-ID: <20120603120319.6eddcafa@resist.wooz.org>

On Jun 03, 2012, at 01:22 PM, Martin v. Löwis wrote:
> - Some people (Barry in particular) are in favor of timed releases. I don't know what the actual motivation for timed releases is, but

Timed releases in general can provide much better predictability for others depending on those releases. E.g. folks working on things to go into Python can plan better how to make sure their stuff is ready in time, and downstreams can *much* better plan on which Python versions to include in their products and releases.

Having said that, unless there's widespread consensus among the Python developers for timed releases, then it's not going to work, whether within the context of those PEPs or not.
After the last round of mostly negative feedback, I don't personally have much motivation to push these through.

-Barry

From benjamin at python.org Sun Jun 3 18:14:47 2012
From: benjamin at python.org (Benjamin Peterson)
Date: Sun, 3 Jun 2012 09:14:47 -0700
Subject: [Python-Dev] JITted regex engine from pypy
In-Reply-To: References: Message-ID:

2012/6/3 Maciej Fijalkowski :
> Hi
>
> I was reading a bit about the regex module and I would like to present another solution for speeding up the re module for Python.

IMO, the most important feature of the regex module is that it fixes long-standing bugs and includes long-requested features, especially with respect to Unicode. That it's faster is only a windfall.

--
Regards,
Benjamin

From tjreedy at udel.edu Sun Jun 3 18:25:08 2012
From: tjreedy at udel.edu (Terry Reedy)
Date: Sun, 03 Jun 2012 12:25:08 -0400
Subject: [Python-Dev] whither PEP 407 and 413 (release cycle PEPs)?
In-Reply-To: <4FCB48F7.60102@v.loewis.de>
References: <4FCB48F7.60102@v.loewis.de>
Message-ID:

On 6/3/2012 7:22 AM, "Martin v. Löwis" wrote:
> On 01.06.2012 19:33, Brett Cannon wrote:
>> Are these dead in the water or are we going to try to change our release cycle? I'm just asking since 3.3 final is due out in about 3 months and deciding on this along with shifting things if we do make a change could end up taking that long and I suspect if we don't do this for 3.3 we are probably never going to do it for Python 3 series as a whole.
>
> I'm -1 on both PEPs.

I pretty much agree. There is certainly no consensus, and the possible benefit is not obviously substantially more than the cost.

> For PEP 407, I fail to see what problem it solves. The PEP is short on rationale, so let me guess what the motivation for the PEP is:
...
> While I well recall the feeling of getting changes "out", the real concerns only exist for the very first contribution:
...
> * Now that the patch is uploaded, can somebody *please* review it? How hard can it be to look over 20 lines of code?

Example: http://bugs.python.org/issue13598 -- the OP submitted a revised patch in response to review 4 months ago.

> As for us not getting enough contributions: can we please worry about that when we have all patches processed that already have been contributed?

I suspect that having too many unattended patches sitting on the tracker discourages one from writing and submitting more. I also suspect, for instance, that applying some of Roger Serwy's Idle patches has encouraged him to write more.

--
Terry Jan Reedy

From larry at hastings.org Sun Jun 3 18:18:02 2012
From: larry at hastings.org (Larry Hastings)
Date: Sun, 03 Jun 2012 09:18:02 -0700
Subject: [Python-Dev] Python 3.3 on Windows 2000
In-Reply-To: <4FCB837D.5010604@v.loewis.de>
References: <4FCB837D.5010604@v.loewis.de>
Message-ID: <4FCB8E3A.1000504@hastings.org>

On 06/03/2012 08:32 AM, "Martin v. Löwis" wrote:
> So I propose to just remove the claim from the PEP that 3.3 would still be supported, and not go through any notification period.

Did you mean

s/3.3/Windows 2000/

?

//arry/

From martin at v.loewis.de Sun Jun 3 21:31:22 2012
From: martin at v.loewis.de ("Martin v. Löwis")
Date: Sun, 03 Jun 2012 21:31:22 +0200
Subject: [Python-Dev] Python 3.3 on Windows 2000
In-Reply-To: <4FCB8E3A.1000504@hastings.org>
References: <4FCB837D.5010604@v.loewis.de> <4FCB8E3A.1000504@hastings.org>
Message-ID: <4FCBBB8A.6080908@v.loewis.de>

On 03.06.2012 18:18, Larry Hastings wrote:
> On 06/03/2012 08:32 AM, "Martin v. Löwis" wrote:
>> So I propose to just remove the claim from the PEP that 3.3 would still be supported, and not go through any notification period.
>
> Did you mean
>
> s/3.3/Windows 2000/
>
> ?

I meant "that 3.3 would support Windows 2000".

Regards,
Martin

From greg at krypto.org Sun Jun 3 22:25:27 2012
From: greg at krypto.org (Gregory P. Smith)
Date: Sun, 3 Jun 2012 13:25:27 -0700
Subject: [Python-Dev] what is happening with the regex module going into Python 3.3?
In-Reply-To: References: Message-ID:

On Fri, Jun 1, 2012 at 5:37 PM, Nick Coghlan wrote:
> ipaddress really made it in because I personally ran into the limitations of not having IP address support in the stdlib. I ended up doing quite a bit of prompting to ensure the process of cleaning up the API to modern stdlib standards didn't stall (even now, generating a module reference from the docstrings is still a pending task).
>
> With regex, the pain isn't there, since re already covers such a large subset of what regex provides.

That last statement basically suggests that something like regex would never be accepted until a CPython core developer was actually running into pain with the many flaws in the re module (especially when it comes to Unicode). I disagree with that.

Per the language summit, I think we need to just do it. Put it in as re and rename the existing re module to sre. We could pull the plug on it and leave it out if substantial as yet unknown problems that can't be fixed in time for release crop up during the beta 1 or 2 (release manager's decision).

> My perspective is that it's now too late to make a change that big for 3.3, but the in-principle approval holds for anyone that wants to work with MRAB and get the idea written up as a PEP for 3.4.

Nonsense, as long as it's in before 3.3 Beta 1 (scheduled for June 23rd according to PEP 398) it can go in.

I don't like to claim that a PEP for this one is *strictly* necessary, but Nick raises good questions to be answered and has good suggestions for what to write up in the PEP in his earlier response that I certainly would prefer to have gathered up and documented, so that is the route I suggest.

The issue seems to be primarily one of "who is volunteering to do it?"

-gps

From janssen at parc.com Sun Jun 3 22:51:22 2012
From: janssen at parc.com (Bill Janssen)
Date: Sun, 3 Jun 2012 13:51:22 PDT
Subject: [Python-Dev] JITted regex engine from pypy
In-Reply-To: References: <4FCB810E.7050106@v.loewis.de>
Message-ID: <89131.1338756682@parc.com>

Maciej Fijalkowski wrote:
> On Sun, Jun 3, 2012 at 5:21 PM, "Martin v. Löwis" wrote:
>>> On the minus side, the JIT only works on x86 and x86_64; on the plus side, since it's 100% API compatible, it can be used as a _xxx speedup module relatively easily.
>>>
>>> Do people have opinions?
>>
>> The main concern for re is not speed, but functionality. The Python re module needs to grow a number of features, and correct a number of bugs.
>> So 100% compatible is actually not good enough. 95% compatible (with the features added and the bugs fixed) would be better.

From my point of view, for textual data reduction, the MRAB regex now has substantial improvements which enable very different kinds of uses, like "named lists" and "fuzzy" matching, which I don't believe occur (together) in any other RE library. Along with features it shares with the existing CPython "re" library, such as the ability to handle very large RE's (which IronPython, for instance, is unable to handle, apparently due to its use of the standard .NET RE library). And it does so fairly efficiently.

Bill

>> OTOH, sharing the re code with PyPy would be a desirable goal, as would be writing the re code in Python (although SRE already implements significant parts in Python).
>
> We did not reimplement those parts in RPython; they're still in python (so the sre engine does not accept regexes, but instead the lower-level description, etc. etc.)
>
>> As a speedup module, it's uninteresting - we want to simplify maintenance, not complicate it. So this can only work if it replaces SRE.
>>
>> Regards,
>> Martin

From ncoghlan at gmail.com Sun Jun 3 23:02:52 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 4 Jun 2012 07:02:52 +1000
Subject: [Python-Dev] whither PEP 407 and 413 (release cycle PEPs)?
In-Reply-To: <4FCB48F7.60102@v.loewis.de>
References: <4FCB48F7.60102@v.loewis.de>
Message-ID:

On Sun, Jun 3, 2012 at 9:22 PM, "Martin v. Löwis" wrote:
> On 01.06.2012 19:33, Brett Cannon wrote:
>> Are these dead in the water or are we going to try to change our release cycle? I'm just asking since 3.3 final is due out in about 3 months and deciding on this along with shifting things if we do make a change could end up taking that long and I suspect if we don't do this for 3.3 we are probably never going to do it for Python 3 series as a whole.
>
> I'm -1 on both PEPs.

Unsurprisingly, I'm -1 on PEP 407. Perhaps surprisingly, I'm also -0 on my own PEP 413 (I wrote it to present what I consider a more tolerable alternative to an idea I really don't like).

I think marking both as Rejected would be an accurate reflection of python-dev's collective opinion.

> For PEP 413, much the same concerns apply. In addition, I think it's too complicated, both for users and for the actual technical implementation.

Yup (although I think PEP 407 would need to be *at least* as complicated in practice as PEP 413 in order to make the implementation manageable, but currently glosses over the technical details).

The one thing I actually *would* like to see change is for the cadence of *alpha* releases to be increased to match that of maintenance releases (that is, I'd like to see Python 3.4a1 released at the same time as Python 3.3.1: around 6 months after the release of 3.3.0). I think keeping the trunk closer to a "releasable" state will help encourage a more regular rate of contributions and provide earlier deadlines for big changes (e.g. it's significantly easier to say "we want to have the compiler changes in place for 3.4a1 in April" than it is to say "we want to have these changes in place by April, but that's just an arbitrary point in time, since the nearest release deadline will still be at least 12 months away". Scheduling things like sprints and bug days also becomes more focused, since they have a nearer term goal of getting things fixed for an alpha release that's only a few months away rather than one that's more than a year out).

It also lowers the bar for getting people to tinker with and provide feedback on new syntax like PEP 380 and core features like pyvenv and pysetup that behave differently when installed instead of being run from a source checkout. At the moment, the criterion for providing early feedback on new syntax is "interested in the feature, and can build CPython from source", while the criterion for installed features is "interested in the feature, can build CPython from source, and can install the result on a target system". Early alphas mean that the criterion for providing feedback becomes simply: "interested in the feature, and has access to a system that can tolerate having the alpha release installed".

These alpha releases can also feed into vendor schemes such as Red Hat's tech preview program: while the system Python would always be a released version, an alpha version may still be an adequate foundation for a tech preview.

As the other Python implementations catch up to the 3.x series, the alphas would also provide clear recommended synchronisation points that may make it easier for them to start targeting CPython release compatibility *before* we publish the final version.

As I see it, such an approach would achieve most of the benefits of a regular release cadence with basically *none* of the seriously disruptive effects created by the more ambitious schemes described in PEP 407 or 413. I also consider it an excellent test run: if we can't even produce alpha releases of the upcoming version every 6 months or so, how on earth could we ever consider trying to create *production* releases on that schedule?

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From ncoghlan at gmail.com Sun Jun 3 23:11:16 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 4 Jun 2012 07:11:16 +1000
Subject: [Python-Dev] whither PEP 407 and 413 (release cycle PEPs)?
In-Reply-To: References: <4FCB48F7.60102@v.loewis.de>
Message-ID:

On Mon, Jun 4, 2012 at 7:02 AM, Nick Coghlan wrote:
> I think marking both as Rejected would be an accurate reflection of python-dev's collective opinion.

Slight correction: I think it would accurately reflect python-dev's *divided* opinion, using the principle of "Status quo wins a stalemate". The costs for either scheme are high, the benefits are not proven, thus the default is to stick with the status quo.

Releasing alphas early, OTOH, doesn't require any real changes to our development process at all, aside from imposing a bit more discipline on trunk development in the first 12 months of the release cycle (I'm inclined to place that particular detail on the "benefit" side of the ledger, rather than the "cost" side).

The *total* number of releases from the release managers and installer builders shouldn't increase much, if at all - I'd suggest we just stick with Georg's practice of 4 alpha releases, and merely space them out over the course of the release cycle rather than clustering them together at the end.
If Larry doesn't want to try this for 3.4, then I'll most likely volunteer as 3.5 RM and try it out then.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From ncoghlan at gmail.com Sun Jun 3 23:38:49 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 4 Jun 2012 07:38:49 +1000
Subject: [Python-Dev] what is happening with the regex module going into Python 3.3?
In-Reply-To: References: Message-ID:

On Mon, Jun 4, 2012 at 6:25 AM, Gregory P. Smith wrote:
> On Fri, Jun 1, 2012 at 5:37 PM, Nick Coghlan wrote:
>> ipaddress really made it in because I personally ran into the limitations of not having IP address support in the stdlib. I ended up doing quite a bit of prompting to ensure the process of cleaning up the API to modern stdlib standards didn't stall (even now, generating a module reference from the docstrings is still a pending task).
>>
>> With regex, the pain isn't there, since re already covers such a large subset of what regex provides.
>
> That last statement basically suggests that something like regex would never be accepted until a CPython core developer was actually running into pain with the many flaws in the re module (especially when it comes to Unicode). I disagree with that.

No, that's not really what I meant. Driving integration of a module takes *time* and *effort*. The decision to commit that effort has to be driven by something, and personal annoyance is a great motivator. In the case of PEP 3144, I happened to be in a position to do something about a gap in the standard library after the omission was made glaringly obvious [1].

Getting this done was a combined effort from Peter (in getting the module API updated), myself and others (esp. Antoine) in reviewing the reference implementation's API and requesting changes and more recently Sandro Tosi has been doing most of the heavy lifting in getting the docs up to scratch.

> Per the language summit, I think we need to just do it. Put it in as re and rename the existing re module to sre.

No. We almost burned Jesse out dropping multiprocessing into 2.6 at the last minute, and many longstanding issues with that module are only being addressed now that Richard has the time to be involved again. SRE already suffers from a lack of maintenance, and we've had zero indication that regex will make that situation better (and several indications that it will actually make it worse. Matthew's silence on the topic is *not* encouraging, and nobody else has even volunteered to write a PEP, let alone agree to maintain the module).

> We could pull the plug on it and leave it out if substantial as yet unknown problems that can't be fixed in time for release crop up during the beta 1 or 2 (release manager's decision).

Unwinding changes to the build process is yet more work that may not be needed. We need to remember the purpose of the standard library: most of the time, it is *not* intended to be all things to all people. The status quo is that, if you're doing basic, primarily ASCII, regular expression processing, then "import re" will serve you just fine. If you're doing more than that, then you'll probably need to do "pip install regex" (or platform specific equivalent) and change your import to "import regex as re".

That's not *great* (as the number of open Unicode bugs against SRE can attest), but it's far from unworkable.
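For illustration, that fallback looks something like the sketch below (the usual conditional-import idiom, not code from either module; it only helps for code that sticks to the API subset the two engines share, so anything relying on regex-only extensions such as fuzzy matching or named lists can't fall back this way):

    try:
        import regex as re  # the third-party engine, from "pip install regex"
    except ImportError:
        import re  # the stdlib SRE-based engine

    # Only use the subset of the API that both engines support:
    pattern = re.compile(r"\w+", re.UNICODE)
    print(pattern.findall("a b c"))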
I consider it preferable to adding yet another big ball of C code to the stdlib in the absence of a PEP addressing the concerns already raised.

>> My perspective is that it's now too late to make a change that big for 3.3, but the in-principle approval holds for anyone that wants to work with MRAB and get the idea written up as a PEP for 3.4.
>
> Nonsense, as long as it's in before 3.3 Beta 1 (scheduled for June 23rd according to PEP 398) it can go in.
>
> I don't like to claim that a PEP for this one is strictly necessary

Why not? Requiring a PEP is the norm, not the exception. Even when there's agreement that something *should* be done, there are plenty of details to be thrashed out in turning in-principle agreement into a concrete plan of action.

> but Nick raises good questions to be answered and has good suggestions for what to write up in the PEP in his earlier response that I certainly would prefer to have gathered up and documented, so that is the route I suggest.
>
> The issue seems to be primarily one of "who is volunteering to do it?"

Correct, both in figuring out the integration details and in agreeing to maintain it in the future.

Remember, now is better than never, but never is often better than *right* now :)

Cheers,
Nick.

[1] http://git.fedorahosted.org/git/?p=pulpdist.git;a=blob;f=src/pulpdist/core/validation.py;h=ebccf354c5bbec376258681a345fb73129eeeb95;hb=736250d85b758a11e1d09f70ec3877d1c022aa9a#l77

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From merwok at netwok.org Sun Jun 3 23:56:14 2012
From: merwok at netwok.org (Éric Araujo)
Date: Sun, 03 Jun 2012 17:56:14 -0400
Subject: [Python-Dev] PEP 405 (built-in virtualenv) status
In-Reply-To: <4FCA467A.8060008@stackless.com>
References: <4F626278.7030701@oddbird.net> <4FCA467A.8060008@stackless.com>
Message-ID: <4FCBDD7E.9000700@netwok.org>

Hi,

On 02/06/2012 12:59, Christian Tismer wrote:
> One urgent question: will this feature be backported to Python 2.7?

Features are never backported to the stable versions. virtualenv still exists as a standalone project which is compatible with 2.7, though.

Regards

From greg at krypto.org Mon Jun 4 00:02:32 2012
From: greg at krypto.org (Gregory P. Smith)
Date: Sun, 3 Jun 2012 15:02:32 -0700
Subject: [Python-Dev] what is happening with the regex module going into Python 3.3?
In-Reply-To: References: Message-ID:

On Sun, Jun 3, 2012 at 2:38 PM, Nick Coghlan wrote:
> No, that's not really what I meant. Driving integration of a module takes *time* and *effort*.
> [...]
> Remember, now is better than never, but never is often better than *right* now :)

heh. indeed.
regardless, the module is available on pypi whether it goes in or not so we do at least have something to point people to when they need more than the existing undermaintained re (sre) module. There are also other options with different properties such as http://pypi.python.org/pypi/re2/. -gps -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Mon Jun 4 00:28:23 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 4 Jun 2012 08:28:23 +1000 Subject: [Python-Dev] PEP 405 (built-in virtualenv) status In-Reply-To: <4FCA4E60.8050204@stackless.com> References: <4F626278.7030701@oddbird.net> <4F626712.3030906@gmail.com> <4F62692E.8040203@oddbird.net> <4F678690.3000600@oddbird.net> <4FCA4E60.8050204@stackless.com> Message-ID: On Sun, Jun 3, 2012 at 3:33 AM, Christian Tismer wrote: > As an old windows guy, I very much agree with Kristjan. The venv > approach is great. Windows is just a quite weird situation to handle > in some cases, and a super-simple way to get rid of *any* built-in behavior > concerning setup would be great. > > The idea of moving path setup stuff into the python.exe stub > makes very much sense to me. This would make pythonxx.dll > a really useful library to be shared. It's mainly Py_Initialize() that triggers the magic. What may be worth exploring is a variant on that which allows embedding applications to explicitly pass in *everything* that would otherwise be guessed by inspecting the environment. (Some things can be forced to particular values already, but certainly not everything). > Python has IMHO too much behavior like this: > 'by default, look into xxx, but if a yyy exists, behave differently'. > I don't like this, because the absense of a simple file changes the whole > system behavior. > I would do it the other way round: > As soon as you introduce the venv.cfg file, enforce its existence > completely! If that file is not there, then python exits with an error > message. > This way you can safely ensure its existence, and the file can be made > read-only and so on. A non-existent file is just a bad thing and is hard to > make > read-only ;-) > So please let's abandon the old 'if exists ...' pattern, at least this one > time. > By the explicit cfg file, the file can clearly say if there is a virtual env > or not. Backwards compatibility constraints mean we simply can't do that. However, as noted above, it may make sense to provide more ways for embedding applications to selectively access the behaviour through the C API. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From tismer at stackless.com Mon Jun 4 00:50:15 2012 From: tismer at stackless.com (Christian Tismer) Date: Mon, 04 Jun 2012 00:50:15 +0200 Subject: [Python-Dev] PEP 405 (built-in virtualenv) status In-Reply-To: References: <4F626278.7030701@oddbird.net> <4F626712.3030906@gmail.com> <4F62692E.8040203@oddbird.net> <4F678690.3000600@oddbird.net> <4FCA4E60.8050204@stackless.com> Message-ID: <4FCBEA27.80701@stackless.com> On 04.06.12 00:28, Nick Coghlan wrote: > ... > Backwards compatibility constraints mean we simply can't do that. > However, as noted above, it may make sense to provide more ways for > embedding applications to selectively access the behaviour through the > C API. Why that??? I don't see this. If you have a new python version with a new file that has-to-be-there, what is then the problem? 
The new version carries the new file, so I don't see a compatibility issue, because this version does not want to be backward-compatible. It just introduces the new file constraint, and it produces what it needs. Am I somehow blinded, maybe? (yes, you all know that I am, so please be patient with me) -- Chris -- Christian Tismer :^) tismerysoft GmbH : Have a break! Take a ride on Python's Karl-Liebknecht-Str. 121 : *Starship* http://starship.python.net/ 14482 Potsdam : PGP key -> http://pgp.uni-mainz.de work +49 173 24 18 776 mobile +49 173 24 18 776 fax n.a. PGP 0x57F3BF04 9064 F4E1 D754 C2FF 1619 305B C09C 5A3B 57F3 BF04 whom do you want to sponsor today? http://www.stackless.com/ From martin at v.loewis.de Mon Jun 4 00:51:56 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 04 Jun 2012 00:51:56 +0200 Subject: [Python-Dev] what is happening with the regex module going into Python 3.3? In-Reply-To: References: Message-ID: <4FCBEA8C.7080207@v.loewis.de> > That last statement basically suggests that something like regex would > never be accepted until a CPython core developer was actually running > into pain with the many flaws in the re module (especially when it comes > to Unicode). I disagree with that. > > Per the language summit, I think we need to just do it. Put it in as re > and rename the existing re module to sre. There are really places where "we" just doesn't work, even in a community project. "We" will never commit anything to revision control. Individual committers commit. So if *you* want to commit it, go ahead - I think there is general approval for that. Take the praise when it works, and take the (likely) blame for when it fails in some significant way, and then work on fixing it. > The issue seems to be primarily one of "who is volunteering to do it?" I don't think anybody is, or will be for the coming years. I wish I had trust into MRAB to stay around and work on this for the next ten years (and I think the author of the regex module really needs to commit for that timespan, see SRE's history), but I don't. So whoever commits the change now is in charge, and will either have to work hard on fixing the problems, or will be responsible for breaking Python 3 in a serious way. That's why nobody volunteers. Regards, Martin From martin at v.loewis.de Mon Jun 4 00:55:00 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 04 Jun 2012 00:55:00 +0200 Subject: [Python-Dev] what is happening with the regex module going into Python 3.3? In-Reply-To: References: Message-ID: <4FCBEB44.6020305@v.loewis.de> > heh. indeed. regardless, the module is available on pypi whether it > goes in or not so we do at least have something to point people to when > they need more than the existing undermaintained re (sre) module. I completely disagree that SRE is unmaintained. It has about monthly commits to it, to fix reported bugs, by various people. It may be aged software, but that has the advantage that more people are familiar with the code base now than back in the days when /F was still maintaining it. Regards, Martin From ncoghlan at gmail.com Mon Jun 4 01:28:33 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 4 Jun 2012 09:28:33 +1000 Subject: [Python-Dev] what is happening with the regex module going into Python 3.3? In-Reply-To: <4FCBEB44.6020305@v.loewis.de> References: <4FCBEB44.6020305@v.loewis.de> Message-ID: I apologise, "unmaintained" is too strong a word. 
I mean "lacking an owner sufficiently confident in their authority and
expertise and with sufficient time and energy to add, or approve the
addition of, substantial new features which may require significant
refactoring of internal details".

Perhaps "unowned" would be a better word? Saying yes or no to major
feature requests isn't the same as fixing errors in existing features.
(Compare regular email package maintenance to RDM's recent updates)

--
Sent from my phone, thus the relative brevity :)
On Jun 4, 2012 8:55 AM, Martin v. Löwis wrote:

> heh. indeed. regardless, the module is available on pypi whether it
>> goes in or not so we do at least have something to point people to when
>> they need more than the existing undermaintained re (sre) module.
>>
>
> I completely disagree that SRE is unmaintained. It has about monthly
> commits to it, to fix reported bugs, by various people.
>
> It may be aged software, but that has the advantage that more people
> are familiar with the code base now than back in the days when /F
> was still maintaining it.
>
> Regards,
> Martin
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From steve at pearwood.info Mon Jun 4 02:46:18 2012
From: steve at pearwood.info (Steven D'Aprano)
Date: Mon, 04 Jun 2012 10:46:18 +1000
Subject: [Python-Dev] what is happening with the regex module going into Python 3.3?
In-Reply-To:
References:
Message-ID: <4FCC055A.7040105@pearwood.info>

Gregory P. Smith wrote:

> Per the language summit, I think we need to just do it. Put it in as re
> and rename the existing re module to sre.

I thought that the plan was to put the regex module in as regex,
leaving re unchanged for backwards compatibility, with any
backwards-incompatible renaming to be done some time in the future.

--
Steven

From ncoghlan at gmail.com Mon Jun 4 05:51:15 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 4 Jun 2012 13:51:15 +1000
Subject: [Python-Dev] whither PEP 407 and 413 (release cycle PEPs)?
In-Reply-To:
References: <4FCB48F7.60102@v.loewis.de>
Message-ID:

On Mon, Jun 4, 2012 at 7:11 AM, Nick Coghlan wrote:
> On Mon, Jun 4, 2012 at 7:02 AM, Nick Coghlan wrote:
>> I think marking both as Rejected would be an accurate reflection of
>> python-dev's collective opinion.
>
> Slight correction: I think it would accurately reflect python-dev's
> *divided* opinion, using the principle of "Status quo wins a
> stalemate". The costs for either scheme are high, the benefits are not
> proven, thus the default is to stick with the status quo.
>
> Releasing alphas early, OTOH, doesn't require any real changes to our
> development process at all, aside from imposing a bit more discipline
> on trunk development in the first 12 months of the release cycle (I'm
> inclined to place that particular detail on the "benefit" side of the
> ledger, rather than the "cost" side). The *total* number of releases
> from the release managers and installer builders shouldn't increase
> much, if at all - I'd suggest we just stick with Georg's practice of 4
> alpha releases, and merely space them out over the course of the
> release cycle rather than clustered together at the end.
>
> If Larry doesn't want to try this for 3.4, then I'll most likely
> volunteer as 3.5 RM and try it out then.

After an off-list discussion with Larry, I'm now planning to expand on
this concept in PEP form (superseding 413).
There's actually a little bit more to it than just releasing the alphas early - it's about harnessing the power of external deadlines to help counter innate tendencies towards procrastination :) Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From tjreedy at udel.edu Mon Jun 4 08:18:52 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Mon, 04 Jun 2012 02:18:52 -0400 Subject: [Python-Dev] whither PEP 407 and 413 (release cycle PEPs)? In-Reply-To: References: <4FCB48F7.60102@v.loewis.de> Message-ID: On 6/3/2012 5:02 PM, Nick Coghlan wrote: > The one thing I actually *would* like to see change is for the cadence > of *alpha* releases to be increased to match that of maintenance > releases (that is, I'd like to see Python 3.4a1 released at the same > time as Python 3.3.1: around 6 months after the release of 3.3.0). I > think keeping the trunk closer to a "releasable" state will help > encourage a more regular rate of contributions and provide earlier > deadlines for big changes (e.g. it's significantly easier to say "we > want to have the compiler changes in place for 3.4a1 in April" than it > is to say "we want to have these changes in place by April, but that's > just an arbitrary point in time, since the nearest release deadline > will still be at least 12 months away". Scheduling things like sprints > and bug days also becomes more focused, since they have a nearer term > goal of getting things fixed for an alpha release that's only a few > months away rather than one that's more than a year out). I like this idea. The main thing that makes alpha releases not 'production' releases is not having more bugs, because they generally do not, but instability of new features. So I think this might have many of the benefits of the non-accepted PEPs with much lower cost. -- Terry Jan Reedy From martin at v.loewis.de Mon Jun 4 09:36:20 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 04 Jun 2012 09:36:20 +0200 Subject: [Python-Dev] what is happening with the regex module going into Python 3.3? In-Reply-To: References: <4FCBEB44.6020305@v.loewis.de> Message-ID: <4FCC6574.3080100@v.loewis.de> On 04.06.2012 01:28, Nick Coghlan wrote: > I apologise, "unmaintained" is too strong a word. I mean "lacking an > owner sufficiently confident in their authority and expertise and with > sufficient time and energy to add,or approve the addition of, > substantial new features which may require significant refactoring of > internal details". > > Perhaps "unowned" would be a better word? Saying yes or no to major > feature requests isn't the same as fixing errors in existing features. > (Compare regular email package maintenance to RDM's recent updates) I see the same risk for regex. Maybe somebody steps forward and integrates the code, but I doubt that someone would then "own" the code in the sense you refer to, i.e. decide on major new features, or perform a significant internal refactoring. It would all be up to MRAB. Also, there is a chance that a maintainer for SRE may come back. Gustavo Niemeyer had that role for some time after /F left, and anybody sufficiently interested in a specific new feature might grow into that role as well. Regards, Martin From martin at v.loewis.de Mon Jun 4 12:27:57 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 04 Jun 2012 12:27:57 +0200 Subject: [Python-Dev] What should we do with cProfile? 
In-Reply-To:
References:
Message-ID: <4FCC8DAD.5090707@v.loewis.de>

> So I'm wondering whether we should abandon the change altogether or
> attempt it for the next release. Personally, I'm slightly leaning
> toward the former option since the two modules are actually fairly
> different underneath even though they are used similarly. And also,
> because it is getting late to make such backward incompatible changes.

I agree that this change is not worthwhile for Python 3. I suggest
closing the issue as "won't fix".

I'm not sure whether anybody uses the profile module at all, so
recycling the name might have been appropriate for Python 3.0. But now
that would be a backwards-incompatible change, and I agree it's
doubtful whether a backwards-compatible change can be achieved. Even if
profile could somehow stay compatible, nothing is gained if cProfile
also stays - but it would have to, for backwards compatibility reasons.

I predict that at some point, somebody contributes yet another
profiling tool which may well supersede both profile and cProfile. If
you are interested in profiling in general, I suggest that you could
rather work on such code, and release it to PyPI.

Regards,
Martin

From dirkjan at ochtman.nl Mon Jun 4 13:19:14 2012
From: dirkjan at ochtman.nl (Dirkjan Ochtman)
Date: Mon, 4 Jun 2012 13:19:14 +0200
Subject: [Python-Dev] Issue 2736: datetimes and Unix timestamps
Message-ID:

I recently opened issue14908. At work, I have to do a bunch of things
with dates, times and timezones, and sometimes Unix timestamps are
also involved (largely for easy compatibility with legacy APIs). I
find the relative obscurity when converting datetimes to timestamps
rather painful; IMO it should be possible to do everything I need
straight from the datetime module objects, instead of having to
involve the time or calendar modules.

Anyway, I was pointed to issue 2736, which seems to have set a lot of
discouraged core contributors (Victor, Antoine, David and Ka-Ping, to
name just a few) against Alexander (the datetime maintainer, AFAIK).
It seems like a fairly straightforward case of practicality over
purity: Alexander argues that there are "easy" one-liners to do things
like datetime.totimestamp(), but most other people seem not to find
them so easy. They've since been added to the documentation at least,
but I would like to see if there is consensus on python-dev that
adding a little more timestamp support to datetime objects would make
sense.

I hope this won't become another epic issue like the last time-related issue...
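For concreteness, these are the conversions in question; a sketch of the
one-liners the docs now suggest (both go through struct_time and
therefore drop microseconds):

    import calendar
    import time
    from datetime import datetime

    dt = datetime(2012, 6, 4, 13, 19, 14)

    # Naive datetime interpreted as local time -> POSIX timestamp:
    local_ts = time.mktime(dt.timetuple())

    # Naive datetime interpreted as UTC -> POSIX timestamp:
    utc_ts = calendar.timegm(dt.utctimetuple())

    print(local_ts, utc_ts)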
Cheers, Dirkjan From ncoghlan at gmail.com Mon Jun 4 14:11:04 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 4 Jun 2012 22:11:04 +1000 Subject: [Python-Dev] Issue 2736: datetimes and Unix timestamps In-Reply-To: References: Message-ID: On Mon, Jun 4, 2012 at 9:19 PM, Dirkjan Ochtman wrote: > Anyway, I was pointed to issue 2736, which seems to have got a lot of > discouraged core contributors (Victor, Antoine, David and Ka-Ping, to > name just a few) up against Alexander (the datetime maintainer, > AFAIK). It seems like a fairly straightforward case of practicality > over purity: Alexander argues that there are "easy" one-liners to do > things like datetime.totimestamp(), but most other people seem to not > find them so easy. They've since been added to the documentation at > least, but I would like to see if there is consensus on python-dev > that adding a little more timestamp support to datetime objects would > make sense. > > I hope this won't become another epic issue like the last time-related issue... My perspective is that if I'm dealing with strictly absolute time, I should only need one import: datetime If I'm dealing strictly with relative time, I should also only need one import: time I shouldn't need to import any other modules to convert betwen them and, since datetime is the higher level of the two, that's where the responsibility for handling any conversions should lie. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From dirkjan at ochtman.nl Mon Jun 4 14:18:42 2012 From: dirkjan at ochtman.nl (Dirkjan Ochtman) Date: Mon, 4 Jun 2012 14:18:42 +0200 Subject: [Python-Dev] Issue 2736: datetimes and Unix timestamps In-Reply-To: References: Message-ID: On Mon, Jun 4, 2012 at 2:11 PM, Nick Coghlan wrote: > My perspective is that if I'm dealing with strictly absolute time, I > should only need one import: datetime > > If I'm dealing strictly with relative time, I should also only need > one import: time Can you define "relative time" here? The term makes me think of things like timedelta. Personally, I would really like not having to think about the time module at all, except if I wanted to go low-level (e.g. get a Unix timestamp from scratch). Cheers, Dirkjan From ncoghlan at gmail.com Mon Jun 4 14:47:10 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 4 Jun 2012 22:47:10 +1000 Subject: [Python-Dev] Issue 2736: datetimes and Unix timestamps In-Reply-To: References: Message-ID: On Mon, Jun 4, 2012 at 10:18 PM, Dirkjan Ochtman wrote: > Can you define "relative time" here? The term makes me think of things > like timedelta. Timeouts, performance measurement, that kind of thing. Mostly timescales of less than an hour, and usually less than a minute. > Personally, I would really like not having to think about the time > module at all, except if I wanted to go low-level (e.g. get a Unix > timestamp from scratch). Yup, that's what I meant, too. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? 
Brisbane, Australia From carl at oddbird.net Mon Jun 4 16:11:44 2012 From: carl at oddbird.net (Carl Meyer) Date: Mon, 04 Jun 2012 08:11:44 -0600 Subject: [Python-Dev] PEP 405 (built-in virtualenv) status In-Reply-To: <4FCBDD7E.9000700@netwok.org> References: <4F626278.7030701@oddbird.net> <4FCA467A.8060008@stackless.com> <4FCBDD7E.9000700@netwok.org> Message-ID: <4FCCC220.6050203@oddbird.net> Hello Christian, On 06/03/2012 03:56 PM, ?ric Araujo wrote: > Le 02/06/2012 12:59, Christian Tismer a ?crit : >> One urgent question: will this feature be backported to Python 2.7? > > Features are never backported to the stable versions. virtualenv still > exists as a standalone project which is compatible with 2.7 though. To add to ?ric's answer: the key difference between virtualenv and pyvenv, allowing pyvenv environments to be much simpler, relies on a change to the interpreter itself. This won't be backported to 2.7, and can't be released as a standalone package. It would be possible to backport the Python API and command-line UI of pyvenv (which are different from virtualenv) as a PyPI package compatible with Python 2.7. Because it wouldn't have the interpreter change, it would have to still create environments that look like virtualenv environments (i.e. they'd have to have chunks of the stdlib symlinked in and a custom site.py). I suppose this could be useful if wanting to script creation of venvs across Python 2 and Python 3, but the utility seems limited enough that I have no plans to do this. Carl From brett at python.org Mon Jun 4 16:25:11 2012 From: brett at python.org (Brett Cannon) Date: Mon, 4 Jun 2012 10:25:11 -0400 Subject: [Python-Dev] [Python-checkins] cpython: Eric Snow's implementation of PEP 421. In-Reply-To: References: Message-ID: [Let's try this again since my last reply was rejected for being too large] On Mon, Jun 4, 2012 at 9:52 AM, barry.warsaw wrote: > http://hg.python.org/cpython/rev/9c445f4695c1 > changeset: 77339:9c445f4695c1 > parent: 77328:0808cb8c60fd > user: Barry Warsaw > date: Sun Jun 03 16:18:47 2012 -0400 > summary: > Eric Snow's implementation of PEP 421. > > Issue 14673: Add sys.implementation > > files: > Doc/library/sys.rst | 38 ++++ > Doc/library/types.rst | 24 ++ > Include/Python.h | 1 + > Include/namespaceobject.h | 17 + > Lib/test/test_sys.py | 18 ++ > Lib/test/test_types.py | 143 ++++++++++++++++- > Lib/types.py | 1 + > Makefile.pre.in | 2 + > Objects/namespaceobject.c | 225 ++++++++++++++++++++++++++ > Objects/object.c | 3 + > Python/sysmodule.c | 72 ++++++++- > 11 files changed, 541 insertions(+), 3 deletions(-) > > > diff --git a/Doc/library/sys.rst b/Doc/library/sys.rst > --- a/Doc/library/sys.rst > +++ b/Doc/library/sys.rst > @@ -616,6 +616,44 @@ > > Thus ``2.1.0a3`` is hexversion ``0x020100a3``. > > + > +.. data:: implementation > + > + An object containing the information about the implementation of the > + currently running Python interpreter. Its attributes are the those > "the those" -> "those" > + that all Python implementations must implement. Should you mention that VMs are allowed to add their own attributes that are not listed? > They are described > + below. > + > + *name* is the implementation's identifier, like ``'cpython'``. > Is this guaranteed to be lowercase, or does it simply happen to be lowercase in this instance? > + > + *version* is a named tuple, in the same format as > + :data:`sys.version_info`. It represents the version of the Python > + *implementation*. 
This has a distinct meaning from the specific > + version of the Python *language* to which the currently running > + interpreter conforms, which ``sys.version_info`` represents. For > + example, for PyPy 1.8 ``sys.implementation.version`` might be > + ``sys.version_info(1, 8, 0, 'final', 0)``, whereas ``sys.version_info`` > + would be ``sys.version_info(1, 8, 0, 'final', 0)``. I think you meant to say ``sys.version_info(2, 7, 2, 'final', 0)``. What's with the ~? -Brett -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Mon Jun 4 17:10:02 2012 From: barry at python.org (Barry Warsaw) Date: Mon, 4 Jun 2012 11:10:02 -0400 Subject: [Python-Dev] [Python-checkins] cpython: Eric Snow's implementation of PEP 421. In-Reply-To: References: Message-ID: <20120604111002.221904c2@resist.wooz.org> Thanks for the second set of eyes, Brett. On Jun 04, 2012, at 10:16 AM, Brett Cannon wrote: >> +.. data:: implementation >> + >> + An object containing the information about the implementation of the >> + currently running Python interpreter. Its attributes are the those >> > >"the those" -> "those" I actually rewrote this section a bit: An object containing information about the implementation of the currently running Python interpreter. The following attributes are required to exist in all Python implementations. >> + that all Python implementations must implement. > >Should you mention that VMs are allowed to add their own attributes that >are not listed? Here's how I rewrote it: :data:`sys.implementation` may contain additional attributes specific to the Python implementation. These non-standard attributes must start with an underscore, and are not described here. Regardless of its contents, :data:`sys.implementation` will not change during a run of the interpreter, nor between implementation versions. (It may change between Python language versions, however.) See `PEP 421` for more information. >> They are described >> + below. >> + >> + *name* is the implementation's identifier, like ``'cpython'``. > >Is this guaranteed to be lowercase, or does it simply happen to be >lowercase in this instance? Yes, PEP 421 guarantees them to be lower cased. *name* is the implementation's identifier, e.g. ``'cpython'``. The actual string is defined by the Python implementation, but it is guaranteed to be lower case. >I think you meant to say ``sys.version_info(2, 7, 2, 'final', 0)``. Fixed. >> + However, for a structured record type use >> :func:`~collections.namedtuple` >> > >What's with the ~? I'm not sure, but it seems to result in a cross-reference, and I see tildes used elsewhere, so I guess it's some reST/docutils magic. I left this one in there. Cheers, -Barry From fuzzyman at voidspace.org.uk Mon Jun 4 17:15:01 2012 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Mon, 4 Jun 2012 16:15:01 +0100 Subject: [Python-Dev] [Python-checkins] cpython: Eric Snow's implementation of PEP 421. In-Reply-To: <20120604111002.221904c2@resist.wooz.org> References: <20120604111002.221904c2@resist.wooz.org> Message-ID: <649B6D9A-E79B-4DC8-8F30-3AE0C9FFCC47@voidspace.org.uk> On 4 Jun 2012, at 16:10, Barry Warsaw wrote: > [snip...] >>> + However, for a structured record type use >>> :func:`~collections.namedtuple` >>> >> >> What's with the ~? > > I'm not sure, but it seems to result in a cross-reference, and I see tildes > used elsewhere, so I guess it's some reST/docutils magic. I left this one in > there. 
>

It means display the link with the text "namedtuple" instead of the
full "collections.namedtuple". You can't just omit the "collections"
as Sphinx needs to know where to find the link target.

Michael

> Cheers,
> -Barry
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk
>

--
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html

From brett at python.org Mon Jun 4 17:39:40 2012
From: brett at python.org (Brett Cannon)
Date: Mon, 4 Jun 2012 11:39:40 -0400
Subject: [Python-Dev] [Python-checkins] cpython: Eric Snow's implementation of PEP 421.
In-Reply-To: <20120604111002.221904c2@resist.wooz.org>
References: <20120604111002.221904c2@resist.wooz.org>
Message-ID:

On Mon, Jun 4, 2012 at 11:10 AM, Barry Warsaw wrote:

> Thanks for the second set of eyes, Brett.
>
> On Jun 04, 2012, at 10:16 AM, Brett Cannon wrote:
>
> >> +.. data:: implementation
> >> +
> >> +   An object containing the information about the implementation of the
> >> +   currently running Python interpreter.  Its attributes are the those
> >>
> >
> >"the those" -> "those"
>
> I actually rewrote this section a bit:
>
>     An object containing information about the implementation of the
>     currently running Python interpreter.  The following attributes are
>     required to exist in all Python implementations.
>
> >> +   that all Python implementations must implement.
> >
> >Should you mention that VMs are allowed to add their own attributes that
> >are not listed?
>
> Here's how I rewrote it:
>
>     :data:`sys.implementation` may contain additional attributes specific to
>     the Python implementation.  These non-standard attributes must start with
>     an underscore, and are not described here.  Regardless of its contents,
>     :data:`sys.implementation` will not change during a run of the
>     interpreter,
>     nor between implementation versions.  (It may change between Python
>     language versions, however.)  See `PEP 421` for more information.
>
> >> They are described
> >> +   below.
> >> +
> >> +   *name* is the implementation's identifier, like ``'cpython'``.
> >
> >Is this guaranteed to be lowercase, or does it simply happen to be
> >lowercase in this instance?
>
> Yes, PEP 421 guarantees them to be lower cased.
>
>     *name* is the implementation's identifier, e.g. ``'cpython'``.  The
>     actual
>     string is defined by the Python implementation, but it is guaranteed to
>     be
>     lower case.
>
>
OK, then I would add a test to make sure this happens, like
``self.assertEqual(sys.implementation.name, sys.implementation.name.lower())``
if you don't want to bother documenting it to make sure other VMs conform.

-Brett

> >I think you meant to say ``sys.version_info(2, 7, 2, 'final', 0)``.
>
> Fixed.
>
> >> +   However, for a structured record type use
> >> :func:`~collections.namedtuple`
> >>
> >
> >What's with the ~?
>
> I'm not sure, but it seems to result in a cross-reference, and I see tildes
> used elsewhere, so I guess it's some reST/docutils magic. I left this one
> in
> there.
>
> Cheers,
> -Barry
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From barry at python.org Mon Jun 4 17:45:29 2012
From: barry at python.org (Barry Warsaw)
Date: Mon, 4 Jun 2012 11:45:29 -0400
Subject: [Python-Dev] Issue 2736: datetimes and Unix timestamps
In-Reply-To:
References:
Message-ID: <20120604114529.69d7e0d1@resist.wooz.org>

On Jun 04, 2012, at 01:19 PM, Dirkjan Ochtman wrote:

>I recently opened issue14908. At work, I have to do a bunch of things
>with dates, times and timezones, and sometimes Unix timestamps are
>also involved (largely for easy compatibility with legacy APIs). I
>find the relative obscurity when converting datetimes to timestamps
>rather painful; IMO it should be possible to do everything I need
>straight from the datetime module objects, instead of having to
>involve the time or calendar modules.

I completely agree. I've long considered this a wart in the stdlib, but
never got off my butt to do anything about it. Thanks for filing this
Dirkjan.

JFDI-ly y'rs,
-Barry

From barry at python.org Mon Jun 4 17:51:02 2012
From: barry at python.org (Barry Warsaw)
Date: Mon, 4 Jun 2012 11:51:02 -0400
Subject: [Python-Dev] Issue 2736: datetimes and Unix timestamps
In-Reply-To:
References:
Message-ID: <20120604115102.7e9b770b@resist.wooz.org>

On Jun 04, 2012, at 02:18 PM, Dirkjan Ochtman wrote:

>Personally, I would really like not having to think about the time
>module at all, except if I wanted to go low-level (e.g. get a Unix
>timestamp from scratch).

+1

Oh and, practicality beats purity.

-Barry

From barry at python.org Mon Jun 4 18:03:08 2012
From: barry at python.org (Barry Warsaw)
Date: Mon, 4 Jun 2012 12:03:08 -0400
Subject: [Python-Dev] [Python-checkins] cpython: Eric Snow's implementation of PEP 421.
In-Reply-To:
References: <20120604111002.221904c2@resist.wooz.org>
Message-ID: <20120604120308.73c41732@resist.wooz.org>

On Jun 04, 2012, at 11:39 AM, Brett Cannon wrote:

>OK, then I would add a test to make sure this happens, like
>``self.assertEqual(sys.implementation.name, sys.implementation.name.lower())``
>if you don't want to bother documenting it to make sure other VMs conform.

Good idea. Done.

-Barry
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: not available
URL:

From guido at python.org Mon Jun 4 19:12:01 2012
From: guido at python.org (Guido van Rossum)
Date: Mon, 4 Jun 2012 10:12:01 -0700
Subject: [Python-Dev] Issue 2736: datetimes and Unix timestamps
In-Reply-To: <20120604115102.7e9b770b@resist.wooz.org>
References: <20120604115102.7e9b770b@resist.wooz.org>
Message-ID:

On Mon, Jun 4, 2012 at 8:51 AM, Barry Warsaw wrote:
> On Jun 04, 2012, at 02:18 PM, Dirkjan Ochtman wrote:
>
>>Personally, I would really like not having to think about the time
>>module at all, except if I wanted to go low-level (e.g. get a Unix
>>timestamp from scratch).
>
> +1
>
> Oh and, practicality beats purity.

:-)

A big +1 on making conversions between POSIX timestamps and datetime
(with or without timezone) easier.

FWIW, I see a lot of people around me struggling with datetime objects
who would be better off using POSIX timestamps. E.g. last week I heard
a colleague complain that he'd lost several hours trying to figure out
how to determine whether two datetimes were within 24h of each other,
getting confused by what was happening when the two were on different
sides of a DST transition (or worse, in the middle of one).
This falls under Nick's header "relative time", but the problem was
that he was trying to add this functionality to a framework that was
storing datetimes in a database, and they were previously used to
record when something had happened. He ended up having two versions of
some code -- one using timestamps, one using datetimes. Clearly that's
suboptimal.

--
--Guido van Rossum (python.org/~guido)

From solipsis at pitrou.net Mon Jun 4 19:36:52 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Mon, 04 Jun 2012 19:36:52 +0200
Subject: [Python-Dev] Daily reference leaks (d9b7399d9e45): sum=462
In-Reply-To:
References:
Message-ID:

Le 03/06/2012 06:01, Benjamin Peterson a écrit :
> 2012/6/2:
>> results for d9b7399d9e45 on branch "default"
>> --------------------------------------------
>>
>> test_smtplib leaked [154, 154, 154] references, sum=462
>
> Can other people reproduce this one? I can't.

$ ./python -m test -R 3:3 -uall test_smtpd test_smtplib
[1/2] test_smtpd
beginning 6 repetitions
123456
......
[2/2] test_smtplib
beginning 6 repetitions
123456
......
test_smtplib leaked [154, 154, 154] references, sum=462

From tismer at stackless.com Mon Jun 4 20:01:28 2012
From: tismer at stackless.com (Christian Tismer)
Date: Mon, 04 Jun 2012 20:01:28 +0200
Subject: [Python-Dev] PEP 405 (built-in virtualenv) status
In-Reply-To: <4FCCC220.6050203@oddbird.net>
References: <4F626278.7030701@oddbird.net> <4FCA467A.8060008@stackless.com> <4FCBDD7E.9000700@netwok.org> <4FCCC220.6050203@oddbird.net>
Message-ID: <4FCCF7F8.2030700@stackless.com>

On 6/4/12 4:11 PM, Carl Meyer wrote:
> Hello Christian,
>
> On 06/03/2012 03:56 PM, Éric Araujo wrote:
>> Le 02/06/2012 12:59, Christian Tismer a écrit :
>>> One urgent question: will this feature be backported to Python 2.7?
>> Features are never backported to the stable versions. virtualenv still
>> exists as a standalone project which is compatible with 2.7 though.
> To add to Éric's answer: the key difference between virtualenv and
> pyvenv, allowing pyvenv environments to be much simpler, relies on a
> change to the interpreter itself. This won't be backported to 2.7, and
> can't be released as a standalone package.
>
> It would be possible to backport the Python API and command-line UI of
> pyvenv (which are different from virtualenv) as a PyPI package
> compatible with Python 2.7. Because it wouldn't have the interpreter
> change, it would have to still create environments that look like
> virtualenv environments (i.e. they'd have to have chunks of the stdlib
> symlinked in and a custom site.py). I suppose this could be useful if
> wanting to script creation of venvs across Python 2 and Python 3, but
> the utility seems limited enough that I have no plans to do this.
>

Thank you, Carl. Sad, but I see.

I guess I could produce an extension as add-on that mutates python2.7's
behavior ;-)

kidding-ly y'rs - chris

--
Christian Tismer             :^)   tismerysoft GmbH     :     Have a break! Take a ride on Python's
Karl-Liebknecht-Str. 121     :    *Starship* http://starship.python.net/
14482 Potsdam                :     PGP key -> http://pgp.uni-mainz.de
work +49 173 24 18 776  mobile +49 173 24 18 776  fax n.a.
PGP 0x57F3BF04       9064 F4E1 D754 C2FF 1619  305B C09C 5A3B 57F3 BF04
      whom do you want to sponsor today?
http://www.stackless.com/ From alexander.belopolsky at gmail.com Mon Jun 4 20:30:25 2012 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Mon, 4 Jun 2012 14:30:25 -0400 Subject: [Python-Dev] Issue 2736: datetimes and Unix timestamps In-Reply-To: References: <20120604115102.7e9b770b@resist.wooz.org> Message-ID: On Mon, Jun 4, 2012 at 1:12 PM, Guido van Rossum wrote: > A big +1 on making conversions between POSIX timestamps and datetime > (with or without timezone) easier. I am all for achieving this goal, but I think the root of the problem is not the lack of mxDT's ticks() method. Note that someone who requires robust behavior across the DST change would still be puzzled about what to supply as arguments to the .ticks(offset=0.0,dst=-1) method. I think the key feature that is missing from datetime is the ability to obtain local time complete with timezone offset. See issue 9527. Once we have that, adding .ticks() that requires a timezone aware datetime instance will complete the puzzle. The problem with adding .ticks() to naive datetime instances is the inherent ambiguity in what such instances represent. mxDT resolves this by offering offset and dst arguments, but this is just moving the problem from one place to another without solving it. From guido at python.org Mon Jun 4 20:52:07 2012 From: guido at python.org (Guido van Rossum) Date: Mon, 4 Jun 2012 11:52:07 -0700 Subject: [Python-Dev] Issue 2736: datetimes and Unix timestamps In-Reply-To: References: <20120604115102.7e9b770b@resist.wooz.org> Message-ID: Agreed that having a robust tzinfo object representing "local time, whatever it is" would be a good feature too. This shouldn't have to depend on the Olson tz database; it should just consult the libc localtime function. --Guido On Mon, Jun 4, 2012 at 11:30 AM, Alexander Belopolsky wrote: > On Mon, Jun 4, 2012 at 1:12 PM, Guido van Rossum wrote: >> A big +1 on making conversions between POSIX timestamps and datetime >> (with or without timezone) easier. > > I am all for achieving this goal, but I think the root of the problem > is not the lack of mxDT's ticks() method. ?Note that someone who > requires robust behavior across the DST change would still be puzzled > about what to supply as arguments to the .ticks(offset=0.0,dst=-1) > method. > > I think the key feature that is missing from datetime is the ability > to obtain local time complete with timezone offset. ?See issue 9527. > Once we have that, adding ?.ticks() that requires a timezone aware > datetime instance will complete the puzzle. > > The problem with adding .ticks() to naive datetime instances is the > inherent ambiguity in what such instances represent. mxDT resolves > this by offering offset and dst arguments, but this is just moving the > problem from one place to another without solving it. -- --Guido van Rossum (python.org/~guido) From alexander.belopolsky at gmail.com Mon Jun 4 20:46:25 2012 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Mon, 4 Jun 2012 14:46:25 -0400 Subject: [Python-Dev] Issue 2736: datetimes and Unix timestamps In-Reply-To: References: <20120604115102.7e9b770b@resist.wooz.org> Message-ID: On Mon, Jun 4, 2012 at 1:12 PM, Guido van Rossum wrote: > ... I heard > a colleague complain that he'd lost several hours trying to figure out > how to determine whether two datetimes were within 24h of each other, > getting confused by what was happening when the two were on different > sides of a DST transition (or worse, in the middle of one). 
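(For the record, with timezone-aware datetimes the "at most 24 hours
apart" check is a plain subtraction; a sketch, using the fixed-offset
timezone class from 3.2 as a stand-in for a real local zone:)

    from datetime import datetime, timedelta, timezone

    tz = timezone(timedelta(hours=2))  # fixed offset; a real zone with
                                       # DST would compare the same way
    dt1 = datetime(2012, 6, 1, 9, 0, tzinfo=tz)
    dt2 = datetime(2012, 6, 2, 8, 30, tzinfo=tz)

    # Aware subtraction compares the underlying instants, so no manual
    # conversion to UTC is needed:
    print(abs(dt1 - dt2) <= timedelta(hours=24))  # True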
I don't think this is a problem that a general purpose module such as
datetime can resolve. Assuming that all instances are timezone aware,
either of the following tests may be appropriate for a given
application:

1) dt1 - dt2 == timedelta(1)

2) dt1.date() - dt2.date() == timedelta(1) and dt1.time() == dt2.time()

If your application deals with physical processes - (1) may be
appropriate, but if it deals with human schedules - (2) may be
appropriate.

The only right solution is to lobby your government to abandon the DST.

From guido at python.org Mon Jun 4 21:05:10 2012
From: guido at python.org (Guido van Rossum)
Date: Mon, 4 Jun 2012 12:05:10 -0700
Subject: [Python-Dev] Issue 2736: datetimes and Unix timestamps
In-Reply-To:
References: <20120604115102.7e9b770b@resist.wooz.org>
Message-ID:

On Mon, Jun 4, 2012 at 11:46 AM, Alexander Belopolsky wrote:
> On Mon, Jun 4, 2012 at 1:12 PM, Guido van Rossum wrote:
>> ... I heard
>> a colleague complain that he'd lost several hours trying to figure out
>> how to determine whether two datetimes were within 24h of each other,
>> getting confused by what was happening when the two were on different
>> sides of a DST transition (or worse, in the middle of one).
>
> I don't think this is a problem that a general purpose module such as
> datetime can resolve. Assuming that all instances are timezone aware,
> either of the following tests may be appropriate for a given
> application:
>
> 1) dt1 - dt2 == timedelta(1)
>
> 2) dt1.date() - dt2.date() == timedelta(1) and dt1.time() == dt2.time()
>
> If your application deals with physical processes - (1) may be
> appropriate, but if it deals with human schedules - (2) may be
> appropriate.

You seem to have misread -- I don't want to check if they are exactly
24 hours apart. I want to check if they are at most 24 hours apart.
The timezone can be assumed to be the same on dt1 and dt2.

A variant of (1) was what was needed -- the user had just confused
themselves into thinking they needed to convert to UTC first, and done
a poor job of that. This is a common situation.

> The only right solution is to lobby your government to abandon the DST.

That's not helping.

--
--Guido van Rossum (python.org/~guido)

From victor.stinner at gmail.com Mon Jun 4 21:27:17 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Mon, 4 Jun 2012 21:27:17 +0200
Subject: [Python-Dev] Issue 2736: datetimes and Unix timestamps
In-Reply-To:
References:
Message-ID:

> Anyway, I was pointed to issue 2736, which seems to have set a lot of
> discouraged core contributors (Victor, Antoine, David and Ka-Ping, to
> name just a few) against Alexander (the datetime maintainer, AFAIK).
> It seems like a fairly straightforward case of practicality over
> purity: Alexander argues that there are "easy" one-liners to do things
> like datetime.totimestamp(), but most other people seem not to find
> them so easy.

Does mktime(dt.timetuple()) handle tzinfo correctly? And how do you get
a UNIX timestamp in the UTC timezone? (dt.utctotimestamp())

I tried to implement datetime.totimestamp(), but I lost my mind in
timezones. It took me weeks to understand that the French timezone lost
two hours around 1940 because of World War II (to align the French and
German timezones)... France has no less than 12 timezones (the country,
not Metropolitan France) :-) There is also the question of daylight
saving time... Handling time is too complex for my brain :-)

So I'm +1 for a simple datetime.totimestamp() method. But I'm unable to
write or review it.
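(As a point of reference, the aware-datetime route sidesteps mktime()
entirely; a sketch, with a hypothetical helper name and a fixed-offset
zone standing in for Paris in June:)

    import calendar
    from datetime import datetime, timedelta, timezone

    def aware_to_timestamp(dt):
        """POSIX timestamp from an aware datetime (hypothetical helper)."""
        if dt.utcoffset() is None:
            raise ValueError("expected an aware datetime")
        # utctimetuple() already folds in utcoffset(), so there is no
        # DST guesswork here:
        return calendar.timegm(dt.utctimetuple()) + dt.microsecond / 1e6

    cest = timezone(timedelta(hours=2))
    dt = datetime(2012, 6, 4, 21, 27, 17, tzinfo=cest)
    print(aware_to_timestamp(dt))

    # Note: time.mktime(dt.timetuple()) ignores tzinfo entirely, because
    # timetuple() is built from the naive fields only.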
Victor -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdmurray at bitdance.com Mon Jun 4 21:57:56 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Mon, 04 Jun 2012 15:57:56 -0400 Subject: [Python-Dev] Daily reference leaks (d9b7399d9e45): sum=462 In-Reply-To: References: Message-ID: <20120604195757.0284925009E@webabinitio.net> On Mon, 04 Jun 2012 19:36:52 +0200, Antoine Pitrou wrote: > Le 03/06/2012 06:01, Benjamin Peterson a ??crit : > > 2012/6/2: > >> results for d9b7399d9e45 on branch "default" > >> -------------------------------------------- > >> > >> test_smtplib leaked [154, 154, 154] references, sum=462 > > > > Can other people reproduce this one? I can't. > > $ ./python -m test -R 3:3 -uall test_smtpd test_smtplib > [1/2] test_smtpd > beginning 6 repetitions > 123456 > ...... > [2/2] test_smtplib > beginning 6 repetitions > 123456 > ...... > test_smtplib leaked [154, 154, 154] references, sum=462 Gah. Looks like a copy and paste error by one of the many people (including me) who contributed to the last smtpd patch. Should be fixed by 079c1942eedf. --David From alexander.belopolsky at gmail.com Mon Jun 4 22:57:09 2012 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Mon, 4 Jun 2012 16:57:09 -0400 Subject: [Python-Dev] Issue 2736: datetimes and Unix timestamps In-Reply-To: References: <20120604115102.7e9b770b@resist.wooz.org> Message-ID: On Mon, Jun 4, 2012 at 3:05 PM, Guido van Rossum wrote: > You seem to have misread -- I don't want to check if they are exactly > 24 hours apart. I want to check if they are at most 24 hours apart. > The timezone can be assumed to be the same on dt1 and dt2. > > A variant of (1) was what was needed -- the user had just confused > themselves into thinking they needed to convert to UTC first, and done > a poor job of that. This is a common situation. It looks like if we had datetime.ticks() method, the user would simply use it improperly and never ask his or her question. Hardly the result that we want. From guido at python.org Mon Jun 4 23:25:34 2012 From: guido at python.org (Guido van Rossum) Date: Mon, 4 Jun 2012 14:25:34 -0700 Subject: [Python-Dev] Issue 2736: datetimes and Unix timestamps In-Reply-To: References: <20120604115102.7e9b770b@resist.wooz.org> Message-ID: On Mon, Jun 4, 2012 at 1:57 PM, Alexander Belopolsky wrote: > On Mon, Jun 4, 2012 at 3:05 PM, Guido van Rossum wrote: >> You seem to have misread -- I don't want to check if they are exactly >> 24 hours apart. I want to check if they are at most 24 hours apart. >> The timezone can be assumed to be the same on dt1 and dt2. >> >> A variant of (1) was what was needed -- the user had just confused >> themselves into thinking they needed to convert to UTC first, and done >> a poor job of that. This is a common situation. > > It looks like if we had datetime.ticks() method, the user would simply > use it improperly and never ask his or her question. ?Hardly the > result that we want. That's not my assessment of the situation. But I don't know what ticks() is supposed to do. I am assuming we would create totimestamp() and utctotimestamp() that mirror fromtimestamp() and utcfromtimestamp(). Since we trust the user with the latter we should trust them with the former. 
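(A sketch of what such a pair could look like for naive datetimes; the
names mirror the existing constructors, but nothing here is a settled
API:)

    import calendar
    import time
    from datetime import datetime

    def totimestamp(dt):
        """Inverse of datetime.fromtimestamp(): naive dt read as local time."""
        # mktime() consults the platform's DST rules, so times that are
        # ambiguous or missing around a DST transition stay problematic --
        # which is much of what this thread is about.
        return time.mktime(dt.timetuple()) + dt.microsecond / 1e6

    def utctotimestamp(dt):
        """Inverse of datetime.utcfromtimestamp(): naive dt read as UTC."""
        return calendar.timegm(dt.utctimetuple()) + dt.microsecond / 1e6

    now = datetime.utcnow()
    # Round-trips to within float rounding of a microsecond:
    print(datetime.utcfromtimestamp(utctotimestamp(now)) - now)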
-- --Guido van Rossum (python.org/~guido) From ncoghlan at gmail.com Tue Jun 5 00:15:25 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 5 Jun 2012 08:15:25 +1000 Subject: [Python-Dev] Language reference updated for metaclasses In-Reply-To: References: Message-ID: It's actually the pre-decoration class, since the cell is initialised before the class is passed to the first decorator. I agree it's a little weird, but I did try to describe it accurately in the new docs. -- Sent from my phone, thus the relative brevity :) On Jun 5, 2012 7:52 AM, "PJ Eby" wrote: > On Sun, May 20, 2012 at 4:38 AM, Nick Coghlan wrote: > >> When writing the docs for types.new_class(), I discovered that the >> description of the class creation process in the language reference >> was not only hard to follow, it was actually *incorrect* when it came >> to describing the algorithm for determining the correct metaclass. >> >> I rewrote the offending section of the language reference to both >> describe the correct algorithm, and hopefully also to be easier to >> read. Once people have had a chance to review the changes in the 3.3 >> docs, I'll backport the update to 3.2. >> >> Previous docs: >> http://docs.python.org/py3k/reference/datamodel.html#customizing-class-creation >> Updated docs: >> http://docs.python.org/dev/reference/datamodel.html#customizing-class-creation >> > > This is only sort-of-related, but while reviewing the above, the bit about > __class__ caught my eye and brought this question to mind: how do class > decorators interact with __class__? Specifically, what happens (or more to > the point, is *supposed* to happen and documented as such) if a class > decorator returns a different class object? > > PEP 3135 doesn't address this, AFAICT. It refers only to "the class", but > doesn't say whether this is the class-as-returned-by-decorator or original > defined class. (ISTM that it should be the decorated class, since > otherwise this would be different behavior compared to code that explicitly > named the class.) > > (Oh, and the rewrite looked good!) > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pje at telecommunity.com Mon Jun 4 23:52:13 2012 From: pje at telecommunity.com (PJ Eby) Date: Mon, 4 Jun 2012 17:52:13 -0400 Subject: [Python-Dev] Language reference updated for metaclasses In-Reply-To: References: Message-ID: On Sun, May 20, 2012 at 4:38 AM, Nick Coghlan wrote: > When writing the docs for types.new_class(), I discovered that the > description of the class creation process in the language reference > was not only hard to follow, it was actually *incorrect* when it came > to describing the algorithm for determining the correct metaclass. > > I rewrote the offending section of the language reference to both > describe the correct algorithm, and hopefully also to be easier to > read. Once people have had a chance to review the changes in the 3.3 > docs, I'll backport the update to 3.2. > > Previous docs: > http://docs.python.org/py3k/reference/datamodel.html#customizing-class-creation > Updated docs: > http://docs.python.org/dev/reference/datamodel.html#customizing-class-creation > This is only sort-of-related, but while reviewing the above, the bit about __class__ caught my eye and brought this question to mind: how do class decorators interact with __class__? Specifically, what happens (or more to the point, is *supposed* to happen and documented as such) if a class decorator returns a different class object? 
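(A quick illustration of the pre-decoration binding discussed below; a
sketch showing that the PEP 3135 cell keeps pointing at the original
class object:)

    def replace(cls):
        """Class decorator that returns a *different* class object."""
        ns = {k: v for k, v in vars(cls).items()
              if k not in ("__dict__", "__weakref__")}
        return type(cls.__name__, cls.__bases__, ns)

    @replace
    class Widget:
        def peek(self):
            return __class__  # the cell PEP 3135 creates for bare super()

    w = Widget()
    print(w.peek() is Widget)  # False: the cell holds the original,
                               # pre-decoration class, not the replacement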
PEP 3135 doesn't address this, AFAICT. It refers only to "the class", but doesn't say whether this is the class-as-returned-by-decorator or original defined class. (ISTM that it should be the decorated class, since otherwise this would be different behavior compared to code that explicitly named the class.) (Oh, and the rewrite looked good!) -------------- next part -------------- An HTML attachment was scrubbed... URL: From pje at telecommunity.com Tue Jun 5 00:58:06 2012 From: pje at telecommunity.com (PJ Eby) Date: Mon, 4 Jun 2012 18:58:06 -0400 Subject: [Python-Dev] Language reference updated for metaclasses In-Reply-To: References: Message-ID: On Mon, Jun 4, 2012 at 6:15 PM, Nick Coghlan wrote: > It's actually the pre-decoration class, since the cell is initialised > before the class is passed to the first decorator. I agree it's a little > weird, but I did try to describe it accurately in the new docs. > I see that now; it might be helpful to explicitly call that out. This is adding to my list of Python 3 metaclass gripes, though. In Python 2, I have in-the-body-of-a-class decorators implemented using metaclasses, that will no longer work because of PEP 3115... and if I switch to using class decorators instead, then they won't work because of PEP 3135. :-( Meanwhile, mixing metaclasses is more difficult than ever, due to __prepare__, and none of these flaws can be worked around officially, because __build_class__ is an "implementation detail". I *really* want to like Python 3, but am still hoping against hope for the restoration of hooks that __metaclass__ allowed, or some alternative mechanism that would serve the same use cases. Specifically, my main use case is method-level decorators and attribute descriptors that need to interact with a user-defined class, *without* requiring that user-defined class to either 1) redundantly decorate the class or 2) inherit from some specific base or inject a specific metaclass. I only use __metaclass__ in 2.x for this because it's the only way for code executed in a class body to gain access to the class at creation time. The reason for wanting this to be transparent is that 1) if you forget the redundant class-decorator, mixin, or metaclass, stuff will silently not work, and 2) mixing bases or metaclasses has much higher coupling to the library providing the decorators or descriptors, and greatly increases the likelihood of mixing metaclasses. And at the moment, the only workaround I can come up with that *doesn't* involve replacing __build_class__ is abusing the system trace hook; ISTM that replacing __build_class__ is the better of those two options. At this point, with the additions of types.new_class(), ISTM that every Python implementation will have to *have* a __build_class__ function or its equivalent; all that remains is the question of whether they allow *replacing* it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Tue Jun 5 01:18:07 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 5 Jun 2012 09:18:07 +1000 Subject: [Python-Dev] Possible rough edges in Python 3 metaclasses (was Re: Language reference updated for metaclasses) Message-ID: On Tue, Jun 5, 2012 at 8:58 AM, PJ Eby wrote: > On Mon, Jun 4, 2012 at 6:15 PM, Nick Coghlan wrote: >> >> It's actually the pre-decoration class, since the cell is initialised >> before the class is passed to the first decorator. I agree it's a little >> weird, but I did try to describe it accurately in the new docs. 
> > I see that now; it might be helpful to explicitly call that out. > > This is adding to my list of Python 3 metaclass gripes, though.? In Python > 2, I have in-the-body-of-a-class decorators implemented using metaclasses, > that will no longer work because of PEP 3115... I'm not quite following this one - do you mean they won't support __prepare__, won't play nicely with other metaclasses that implement __prepare__, or something else? >? and if I switch to using > class decorators instead, then they won't work because of PEP 3135.? :-( Yeah, 3135 has had some "interesting" consequences :P (e.g. class body level __class__ definitions are still broken at the moment if you also reference super: http://bugs.python.org/issue12370) > Meanwhile, mixing metaclasses is more difficult than ever, due to > __prepare__, and none of these flaws can be worked around officially, > because __build_class__ is an "implementation detail".? I *really* want to > like Python 3, but am still hoping against hope for the restoration of hooks > that __metaclass__ allowed, or some alternative mechanism that would serve > the same use cases. > > Specifically, my main use case is method-level decorators and attribute > descriptors that need to interact with a user-defined class, *without* > requiring that user-defined class to either 1) redundantly decorate the > class or 2) inherit from some specific base or inject a specific metaclass. > I only use __metaclass__ in 2.x for this because it's the only way for code > executed in a class body to gain access to the class at creation time. > > The reason for wanting this to be transparent is that 1) if you forget the > redundant class-decorator, mixin, or metaclass, stuff will silently not > work, Why would it silently not work? What's preventing you from having decorators that create wrapped functions that fail noisily when called, then providing a class decorator that unwraps those functions, fixes them up with the class references they need and stores the unwrapped and updated versions back on the class. You call it redundant, I call it explicit. > and 2) mixing bases or metaclasses has much higher coupling to the > library providing the decorators or descriptors, and greatly increases the > likelihood of mixing metaclasses. So don't do that, then. Be explicit. > And at the moment, the only workaround I can come up with that *doesn't* > involve replacing __build_class__ is abusing the system trace hook; ISTM > that replacing __build_class__ is the better of those two options. Stop trying to be implicit. Implicit magic sucks, don't add more (PEP 3135 is bad enough). > At this point, with the additions of types.new_class(), ISTM that every > Python implementation will have to *have* a __build_class__ function or its > equivalent; all that remains is the question of whether they allow > *replacing* it. types.new_class() is actually a pure Python reimplementation of the PEP 3115 algorithm. Why would it imply the need for a __build_class__ function? Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From naijaexboy at gmail.com Tue Jun 5 01:27:28 2012 From: naijaexboy at gmail.com (ayodele akingbulu) Date: Tue, 5 Jun 2012 00:27:28 +0100 Subject: [Python-Dev] issues with installing and importing rdflib and rdfextras in python and eclipse pydev In-Reply-To: References: Message-ID: Hi, I have issues with the installation of rdflib and rdfextras packages on windows for python 3.2.3. 
I cannot find anything in the documentation detailing how to install these packages and successfully import them in a Python program, let alone how to access them in Eclipse Indigo. Kindly help to look into this issue as it might be a bug. Regards, Ayo -------------- next part -------------- An HTML attachment was scrubbed... URL: From ethan at stoneleaf.us Tue Jun 5 01:43:21 2012 From: ethan at stoneleaf.us (Ethan Furman) Date: Mon, 04 Jun 2012 16:43:21 -0700 Subject: [Python-Dev] Possible rough edges in Python 3 metaclasses (was Re: Language reference updated for metaclasses) In-Reply-To: References: Message-ID: <4FCD4819.6030109@stoneleaf.us> Nick Coghlan wrote: > On Tue, Jun 5, 2012 at 8:58 AM, PJ Eby wrote: >> The reason for wanting this to be transparent is that 1) if you forget the >> redundant class-decorator, mixin, or metaclass, stuff will silently not >> work, > > Why would it silently not work? What's preventing you from having > decorators that create wrapped functions that fail noisily when > called, then providing a class decorator that unwraps those functions, > fixes them up with the class references they need and stores the > unwrapped and updated versions back on the class? > > You call it redundant, I call it explicit. > The first time you specify something, it's explicit; if you have to specify the same thing a second time, it's redundant; if this were a good thing, why do we say DRY so often? ~Ethan~ From python at mrabarnett.plus.com Tue Jun 5 02:00:44 2012 From: python at mrabarnett.plus.com (MRAB) Date: Tue, 05 Jun 2012 01:00:44 +0100 Subject: [Python-Dev] what is happening with the regex module going into Python 3.3? Message-ID: <4FCD4C2C.6040901@mrabarnett.plus.com> (I've been having trouble with my email recently, so I missed this thread amongst others.) I personally am no longer that bothered about whether the regex module makes it into the stdlib, but I will still be maintaining it on PyPI. If someone else wants to integrate it I would, of course, be willing to help out. As long as they basically have the same source code, any bugs or other problems in the integrated module would be shared by the separate module, and I would want to fix them, so any fix in the separate module could be replicated easily in the integrated module. It already works with Python 3.3 alpha, including PEP 393, BTW. From victor.stinner at gmail.com Tue Jun 5 02:05:26 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 5 Jun 2012 02:05:26 +0200 Subject: [Python-Dev] Issue #11022: locale.getpreferredencoding() must not set temporary LC_CTYPE Message-ID: Hi, I would like to know if it is too late (or not) to change the behaviour of open() for text files (TextIOWrapper). Currently, it calls locale.getpreferredencoding() to get the locale encoding by default. That is convenient and I like this behaviour... except that it temporarily changes the LC_CTYPE locale to get the user's preferred encoding instead of using the current encoding. Python 3 already uses the user's preferred encoding as the current encoding at startup. Temporarily changing the current encoding to the user's preferred encoding is useless and dangerous (it may cause issues in multithreaded applications). It is also surprising that setting the current locale using locale.setlocale() does not affect TextIOWrapper. The change is just to replace locale.getpreferredencoding() with locale.getpreferredencoding(False) in the io module. Can I change this behaviour (before the first beta) in Python 3.3?
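To make the difference concrete, here is a minimal sketch (a hypothetical POSIX session; the exact encoding names are platform dependent):

    import locale

    # getpreferredencoding() temporarily calls setlocale(LC_CTYPE, "")
    # to read the *environment's* preference; passing False instead
    # reports the currently set LC_CTYPE encoding without touching it.
    locale.setlocale(locale.LC_CTYPE, 'C')
    locale.getpreferredencoding(False)  # e.g. 'ANSI_X3.4-1968' (ASCII)
    locale.getpreferredencoding()       # e.g. 'UTF-8', from LANG/LC_CTYPE

With the change, a later locale.setlocale() call would therefore be honoured by open().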
See issue #11022 (and maybe also #6203) for the discussion of this change. -- Leaving LC_CTYPE unchanged (using the "C" locale, which is ASCII in most cases) at Python startup would be a major change in Python 3. I don't want to do that. You would see a lot of mojibake in your GUIs and get a lot of ugly surrogate characters in filenames because of PEP 393. Setting LC_CTYPE to the user's preferred encoding is just very convenient and helps Python to speak to the user through the console, to the filesystem, to pass arguments on the command line of a subprocess, etc. For example, you cannot pass non-ASCII characters to a subprocess - characters written by the user in your GUI - if your current LC_CTYPE locale is C (ASCII): you get a Unicode encode error. Victor From pje at telecommunity.com Tue Jun 5 02:10:33 2012 From: pje at telecommunity.com (PJ Eby) Date: Mon, 4 Jun 2012 20:10:33 -0400 Subject: [Python-Dev] Possible rough edges in Python 3 metaclasses (was Re: Language reference updated for metaclasses) In-Reply-To: References: Message-ID: On Mon, Jun 4, 2012 at 7:18 PM, Nick Coghlan wrote: > On Tue, Jun 5, 2012 at 8:58 AM, PJ Eby wrote: > > On Mon, Jun 4, 2012 at 6:15 PM, Nick Coghlan wrote: > >> > >> It's actually the pre-decoration class, since the cell is initialised > >> before the class is passed to the first decorator. I agree it's a little > >> weird, but I did try to describe it accurately in the new docs. > > > > I see that now; it might be helpful to explicitly call that out. > > > > This is adding to my list of Python 3 metaclass gripes, though. In > Python > > 2, I have in-the-body-of-a-class decorators implemented using > metaclasses, > > that will no longer work because of PEP 3115... > > I'm not quite following this one - do you mean they won't support > __prepare__, won't play nicely with other metaclasses that implement > __prepare__, or something else? > I mean that class-level __metaclass__ is no longer supported as of PEP 3115, so I can't use that as a way to non-invasively obtain the enclosing class at class creation time. (Unfortunately, I didn't realize until relatively recently that it wasn't supported any more; the PEP itself doesn't say the functionality will be removed. Otherwise, I'd have lobbied sooner for a better migration path.) > > Meanwhile, mixing metaclasses is more difficult than ever, due to > > __prepare__, and none of these flaws can be worked around officially, > > because __build_class__ is an "implementation detail". I *really* want > to > > like Python 3, but am still hoping against hope for the restoration of > hooks > > that __metaclass__ allowed, or some alternative mechanism that would > serve > > the same use cases. > > > > Specifically, my main use case is method-level decorators and attribute > > descriptors that need to interact with a user-defined class, *without* > > requiring that user-defined class to either 1) redundantly decorate the > > class or 2) inherit from some specific base or inject a specific > metaclass. > > I only use __metaclass__ in 2.x for this because it's the only way for > code > > executed in a class body to gain access to the class at creation time. > > > > The reason for wanting this to be transparent is that 1) if you forget > the > > redundant class-decorator, mixin, or metaclass, stuff will silently not > > work, > > Why would it silently not work?
What's preventing you from having > decorators that create wrapped functions that fail noisily when > called, then providing a class decorator that unwraps those functions, > fixes them up with the class references they need and stores the > unwrapped and updated versions back on the class. > If you are registering functions or attributes in some type of registry either at the class level or in some sort of global registry, then the function will never be called or the attribute accessed, so there is no opportunity to give an error message saying, "you should have done this". Something will simply fail to happen that should have happened. Essentially, the lack of this hook forces things to be done outside of class bodies that make more sense to be done *inside* class bodies. > and 2) mixing bases or metaclasses has much higher coupling to the > > library providing the decorators or descriptors, and greatly increases > the > > likelihood of mixing metaclasses. > > So don't do that, then. Be explicit. > Easy for you to say. However, I have *many* decorators and descriptors which follow this pattern, in various separately-distributed libraries. If *each* of these libraries now grows a redundant class decorator for Python 3, then any code which uses more than one in the same class will now have a big stack of decorator noise on top of it... and missing any one of them will cause silent failure. And of course, all other libraries which use these decorators and descriptors will *also* have to add this line-noise in order to port to Python 3... including the applications that depend on those libraries. And let's not forget the people who subclass or extend any of my decorators or descriptors -- they will need to tell *their* users to begin adding redundant class-level decorators, and so on. > > And at the moment, the only workaround I can come up with that *doesn't* > > involve replacing __build_class__ is abusing the system trace hook; ISTM > > that replacing __build_class__ is the better of those two options. > > Stop trying to be implicit. Implicit magic sucks, The fact that a method-level decorator or descriptor might need access to its containing class when the class is created does not in any way make it *intrinsically* "magical" or "implicit". The focus here is on the decorator or descriptor: the class access is just an implementation detail, and generally doesn't involve modifying the class itself. So, please let's not start FUDding here. None of the descriptors or decorators I'm describing are implicit or magical in the least. They are documented as doing what they do, and they use perfectly legal mechanisms of Python 2 to implement those behaviors. If tomorrow a new PEP were introduced that provided an alternate way of doing this, I'd gladly use it to migrate these features forward. Implying that my use cases "suck" does not help; I could just as easily say, "redundancy sucks", and we are no further along to a solution. It might be more helpful to propose some alternate mechanism to provide the same feature to be introduced in a future Python version, like a special __decorators__ key in a class dictionary that the default type would iterate over and call with the class. Or perhaps a proposal that the default type creation code simply iterate over the contents of the class namespace and call "ob.__used_in_class__(cls, attrname)" on each found object. 
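(As a rough sketch of that hypothetical hook - nothing like it exists today, and every name here is made up for illustration:

    SOME_REGISTRY = {}

    class registered:
        """Descriptor that needs to know which class it ends up in."""
        def __init__(self, func):
            self.func = func
        def __used_in_class__(self, cls, attrname):
            # the framework finally learns its owning class
            SOME_REGISTRY[cls, attrname] = self.func

    def make_class(name, bases, ns):
        cls = type(name, bases, ns)
        for attrname, ob in ns.items():
            hook = getattr(ob, '__used_in_class__', None)
            if hook is not None:
                hook(cls, attrname)
        return cls

where make_class() stands in for whatever the default type creation code would do.)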
Or something else altogether, any of which I would happily use in place of the old __metaclass__ hook, and which would certainly be less obscure. > At this point, with the additions of types.new_class(), ISTM that every > > Python implementation will have to *have* a __build_class__ function or > its > > equivalent; all that remains is the question of whether they allow > > *replacing* it. > > types.new_class() is actually a pure Python reimplementation of the > PEP 3115 algorithm. Why would it imply the need for a __build_class__ > function? > Because now every Python implementation will contain the code to do this *twice*: once inside its normal class creation machinery, and once inside of types.new_class(). (Well, unless they share code internally, of course.) IOW, saying that __build_class__ is an implementation detail of CPython doesn't quite wash: thanks to types.new_class() every Python implementation now *must* support programmatically creating a class in this way. The only detail that remains is whether it's allowed to replace the built-in equivalent to __build_class__, and whether it's documented as a standard feature ala __import__ (and its __metaclass__ predecessor). Note, though, that I've only focused on __build_class__ because it's low-hanging fruit: it could be used to work around the absence of __metaclass__ or a more-specific hook like the ones I mentioned above, *and* it's already implemented in Python 3.x. (That doesn't make it the best solution, of course, just a low-hanging one.) -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at pearwood.info Tue Jun 5 02:31:10 2012 From: steve at pearwood.info (Steven D'Aprano) Date: Tue, 05 Jun 2012 10:31:10 +1000 Subject: [Python-Dev] what is happening with the regex module going into Python 3.3? In-Reply-To: <4FCD4C2C.6040901@mrabarnett.plus.com> References: <4FCD4C2C.6040901@mrabarnett.plus.com> Message-ID: <4FCD534E.8020305@pearwood.info> MRAB wrote: > I personally am no longer that bothered about whether the regex module > makes it into stdlib, but I am still be maintaining it on PyPI. If > someone else wants to integrate it I would, of course, be willing to > help out. Are you withdrawing your offer to maintain regex in the stdlib? > As long as they basically have the same source code, any bugs or other > problems in the integrated module would be shared by the separate > module, and I would want to fix them, so any fix in the separate module > could be replicated easily in the integrated module. But changes to the stdlib (bug fixes or functional changes) are very likely to run at a slower pace to what third-party packages can afford. If you continue to develop regex outside of the stdlib, that could cause complications. -- Steven From brian at python.org Tue Jun 5 02:41:30 2012 From: brian at python.org (Brian Curtin) Date: Mon, 4 Jun 2012 19:41:30 -0500 Subject: [Python-Dev] what is happening with the regex module going into Python 3.3? In-Reply-To: <4FCD534E.8020305@pearwood.info> References: <4FCD4C2C.6040901@mrabarnett.plus.com> <4FCD534E.8020305@pearwood.info> Message-ID: On Mon, Jun 4, 2012 at 7:31 PM, Steven D'Aprano wrote: > But changes to the stdlib (bug fixes or functional changes) are very likely > to run at a slower pace to what third-party packages can afford. If you > continue to develop regex outside of the stdlib, that could cause > complications. Developing outside of the standard library isn't an option. 
You could always backport things to the external version like the unittest2 project, but standard library modules need to be grown and fixed first in the standard library. From alexander.belopolsky at gmail.com Tue Jun 5 02:51:29 2012 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Mon, 4 Jun 2012 20:51:29 -0400 Subject: [Python-Dev] Issue 2736: datetimes and Unix timestamps In-Reply-To: References: <20120604115102.7e9b770b@resist.wooz.org> Message-ID: On Mon, Jun 4, 2012 at 5:25 PM, Guido van Rossum wrote: .. > But I don't know what ticks() is supposed to do. """ .ticks(offset=0.0,dst=-1) Returns a float representing the instance's value in ticks (see above). The conversion routine assumes that the stored date/time value is given in local time. The given value for dst is used by the conversion (0 = DST off, 1 = DST on, -1 = unknown) and offset is subtracted from the resulting value. The method raises a RangeError exception if the object's value does not fit into the system's ticks range. Note: On some platforms the C lib's mktime() function that this method uses does not allow setting DST to an arbitrary value. The module checks for this and raises a SystemError in case setting DST to 0 or 1 does not result in valid results. """ mxDateTime Documentation, > I am assuming we > would create totimestamp() and utctotimestamp() that mirror > fromtimestamp() and utcfromtimestamp(). First, with the introduction of timezone.utc, the use of utc... methods should be discouraged. fromtimestamp(s, timezone.utc) is better than utcfromtimestamp(); now(timezone.utc) is better than utcnow(); and dt.astimezone(timezone.utc).timetuple() is better than dt.utctimetuple(). The advantage of the first two is that they produce aware datetime instances. The last example may appear more verbose, but in most applications, dt.astimezone(timezone.utc) is really what is needed. > Since we trust the user with > the latter we should trust them with the former. > This does not follow. When you deal with non-monotonic functions, defining the inverse functions requires considerably more care than defining the original. For example, defining sqrt() requires choosing the positive root. For arcsin(), you need to choose between infinitely many branches. I don't think many users would appreciate a math library where sqrt() and arcsin() take an optional branch= argument, but mxDT's ticks() features this very design with its dst= flag. Most users will ignore the optional dst= and live with the resulting bugs. We had several such bugs in the stdlib and they went unnoticed for years. From python at mrabarnett.plus.com Tue Jun 5 03:22:04 2012 From: python at mrabarnett.plus.com (MRAB) Date: Tue, 05 Jun 2012 02:22:04 +0100 Subject: [Python-Dev] what is happening with the regex module going into Python 3.3? In-Reply-To: <4FCD534E.8020305@pearwood.info> References: <4FCD4C2C.6040901@mrabarnett.plus.com> <4FCD534E.8020305@pearwood.info> Message-ID: <4FCD5F3C.7000105@mrabarnett.plus.com> On 05/06/2012 01:31, Steven D'Aprano wrote: > MRAB wrote: > >> I personally am no longer that bothered about whether the regex module >> makes it into the stdlib, but I will still be maintaining it on PyPI. If >> someone else wants to integrate it I would, of course, be willing to >> help out. > > Are you withdrawing your offer to maintain regex in the stdlib?
> > >> As long as they basically have the same source code, any bugs or other >> problems in the integrated module would be shared by the separate >> module, and I would want to fix them, so any fix in the separate module >> could be replicated easily in the integrated module. > > But changes to the stdlib (bug fixes or functional changes) are very likely to > run at a slower pace to what third-party packages can afford. If you continue > to develop regex outside of the stdlib, that could cause complications. > I'm not planning any further changes to regex. I think it already has enough features... That just leaves 1) bug fixes, which you'd also want fixed in the stdlib, and 2) functional changes requested for the stdlib, which you'd presumably also want in the third-party package for those using earlier versions of Python. From tjreedy at udel.edu Tue Jun 5 04:14:57 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Mon, 04 Jun 2012 22:14:57 -0400 Subject: [Python-Dev] issues with installing and importing rdflib and rdfextras in python and eclipse pydev In-Reply-To: References: Message-ID: <4FCD6BA1.1040407@udel.edu> On 6/4/2012 7:27 PM, ayodele akingbulu wrote: > I have issues with the installation of rdflib and rdfextras packages on > windows for python 3.2.3. I cannot find anywhere in the document > detailing how to install this packages and succesfully import them in a > python program. not to talk less of accessing it on eclipse indigo. pydev list is for the development of future Python and cpython, not for usage questions about current python and definitely not for questions about 3rd party programs. http://pypi.python.org/pypi/rdflib says "Support is available through the rdflib-dev group: http://groups.google.com/group/rdflib-dev " Inquire there. > Kindly help to look into this issue as it might be a bug. We have no responsibility for bugs in rdflib and its docs. -- Terry Jan Reedy From ncoghlan at gmail.com Tue Jun 5 04:25:34 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 5 Jun 2012 12:25:34 +1000 Subject: [Python-Dev] Possible rough edges in Python 3 metaclasses (was Re: Language reference updated for metaclasses) In-Reply-To: References: Message-ID: On Tue, Jun 5, 2012 at 10:10 AM, PJ Eby wrote: > On Mon, Jun 4, 2012 at 7:18 PM, Nick Coghlan wrote: >> >> On Tue, Jun 5, 2012 at 8:58 AM, PJ Eby wrote: >> > On Mon, Jun 4, 2012 at 6:15 PM, Nick Coghlan wrote: >> >> >> >> It's actually the pre-decoration class, since the cell is initialised >> >> before the class is passed to the first decorator. I agree it's a >> >> little >> >> weird, but I did try to describe it accurately in the new docs. >> > >> > I see that now; it might be helpful to explicitly call that out. >> > >> > This is adding to my list of Python 3 metaclass gripes, though.? In >> > Python >> > 2, I have in-the-body-of-a-class decorators implemented using >> > metaclasses, >> > that will no longer work because of PEP 3115... >> >> I'm not quite following this one - do you mean they won't support >> __prepare__, won't play nicely with other metaclasses that implement >> __prepare__, or something else? > > > I mean that class-level __metaclass__ is no longer supported as of PEP 3115, > so I can't use that as a way to non-invasively obtain the enclosing class at > class creation time. > > (Unfortunately, I didn't realize until relatively recently that it wasn't > supported any more; the PEP itself doesn't say the functionality will be > removed.? 
Otherwise, I'd have lobbied sooner for a better migration path.) As in the "def __metaclass__(name, bases, ns): return type(name, bases, ns)" functionality? You can still pass an ordinary callable as the metaclass parameter and it will behave the same as the old class level __metaclass__ definition. I personally wouldn't be averse to bringing back the old spelling for the case where __prepare__ isn't needed - you're right that it's a convenient way to do a custom callable that gets inherited by subclasses. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From tjreedy at udel.edu Tue Jun 5 04:40:22 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Mon, 04 Jun 2012 22:40:22 -0400 Subject: [Python-Dev] what is happening with the regex module going into Python 3.3? In-Reply-To: <4FCD5F3C.7000105@mrabarnett.plus.com> References: <4FCD4C2C.6040901@mrabarnett.plus.com> <4FCD534E.8020305@pearwood.info> <4FCD5F3C.7000105@mrabarnett.plus.com> Message-ID: On 6/4/2012 9:22 PM, MRAB wrote: > I'm not planning any further changes to regex. I think it already has > enough features... Do you have any idea where regex + Python stands in regard to Unicode TR18 support levels? http://unicode.org/reports/tr18/ While most of the Tailored Support Level 3 strikes me as out of scope for the stdlib, I can imagine getting requests for any other stuff not already included. -- Terry Jan Reedy From ericsnowcurrently at gmail.com Tue Jun 5 04:43:09 2012 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Mon, 4 Jun 2012 20:43:09 -0600 Subject: [Python-Dev] Possible rough edges in Python 3 metaclasses (was Re: Language reference updated for metaclasses) In-Reply-To: References: Message-ID: On Mon, Jun 4, 2012 at 6:10 PM, PJ Eby wrote: > I mean that class-level __metaclass__ is no longer supported as of PEP 3115, > so I can't use that as a way to non-invasively obtain the enclosing class at > class creation time. Depends on what you mean by non-invasive: * http://code.activestate.com/recipes/577813/ * http://code.activestate.com/recipes/577867/ -eric From eliben at gmail.com Tue Jun 5 05:24:23 2012 From: eliben at gmail.com (Eli Bendersky) Date: Tue, 5 Jun 2012 05:24:23 +0200 Subject: [Python-Dev] Language reference updated for metaclasses In-Reply-To: References: Message-ID: On Sun, May 20, 2012 at 10:38 AM, Nick Coghlan wrote: > When writing the docs for types.new_class(), I discovered that the > description of the class creation process in the language reference > was not only hard to follow, it was actually *incorrect* when it came > to describing the algorithm for determining the correct metaclass. > > I rewrote the offending section of the language reference to both > describe the correct algorithm, and hopefully also to be easier to > read. Once people have had a chance to review the changes in the 3.3 > docs, I'll backport the update to 3.2. > > Previous docs: http://docs.python.org/py3k/reference/datamodel.html#customizing-class-creation > Updated docs: http://docs.python.org/dev/reference/datamodel.html#customizing-class-creation > "if an explicit metaclass is given and it is not an instance of type(), then it is used directly as the metaclass" Could you elaborate on this point? Would it perhaps be clearer to say "if an explicit metaclass is given and it is not a class"? 
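For instance, here is a minimal example of the case that sentence covers - a plain function is not an instance of type, so it is simply called with the class name, bases and namespace, and its return value is used as-is:

    def report(name, bases, ns, **kwds):
        print('creating', name)
        return type(name, bases, ns)

    class C(metaclass=report):   # prints: creating C
        pass

    # no "most derived metaclass" calculation happens here;
    # type(C) is just type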
Eli From pje at telecommunity.com Tue Jun 5 05:36:41 2012 From: pje at telecommunity.com (PJ Eby) Date: Mon, 4 Jun 2012 23:36:41 -0400 Subject: [Python-Dev] Possible rough edges in Python 3 metaclasses (was Re: Language reference updated for metaclasses) In-Reply-To: References: Message-ID: On Mon, Jun 4, 2012 at 10:25 PM, Nick Coghlan wrote: > On Tue, Jun 5, 2012 at 10:10 AM, PJ Eby wrote: > > On Mon, Jun 4, 2012 at 7:18 PM, Nick Coghlan wrote: > >> > >> On Tue, Jun 5, 2012 at 8:58 AM, PJ Eby wrote: > >> > On Mon, Jun 4, 2012 at 6:15 PM, Nick Coghlan > wrote: > >> >> > >> >> It's actually the pre-decoration class, since the cell is initialised > >> >> before the class is passed to the first decorator. I agree it's a > >> >> little > >> >> weird, but I did try to describe it accurately in the new docs. > >> > > >> > I see that now; it might be helpful to explicitly call that out. > >> > > >> > This is adding to my list of Python 3 metaclass gripes, though. In > >> > Python > >> > 2, I have in-the-body-of-a-class decorators implemented using > >> > metaclasses, > >> > that will no longer work because of PEP 3115... > >> > >> I'm not quite following this one - do you mean they won't support > >> __prepare__, won't play nicely with other metaclasses that implement > >> __prepare__, or something else? > > > > > > I mean that class-level __metaclass__ is no longer supported as of PEP > 3115, > > so I can't use that as a way to non-invasively obtain the enclosing > class at > > class creation time. > > > > (Unfortunately, I didn't realize until relatively recently that it wasn't > > supported any more; the PEP itself doesn't say the functionality will be > > removed. Otherwise, I'd have lobbied sooner for a better migration > path.) > > As in the "def __metaclass__(name, bases, ns): return type(name, > bases, ns)" functionality? > The part where you can do that *dynamically in the class body* from a decorator or other code, yes. You can still pass an ordinary callable as the metaclass parameter and > it will behave the same as the old class level __metaclass__ > definition. > Which runs afoul of the requirement that users of the in-body decorator or descriptor not have to 1) make the extra declaration and 2) deal with the problem of needing multiple metaclasses. (Every framework that wants to do this sort of thing will end up having to have its own metaclass, and you won't be able to use them together without first creating a mixed metaclass.) > I personally wouldn't be averse to bringing back the old spelling for > the case where __prepare__ isn't needed - you're right that it's a > convenient way to do a custom callable that gets inherited by > subclasses. > That was a nice hack, but not one I'd lose any sleep over; the translation to PEP 3115 syntax is straightforward even if less elegant. It's the *dynamic* character that's missing in 3.x. In any case, I don't use the __metaclass__ hook to actually *change* the metaclass at all; it's simply the easiest way to get at the class as soon as its built in 2.x. If there were a way to register a function to be called as soon as an enclosing class is created, I would use that instead. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pje at telecommunity.com Tue Jun 5 06:03:17 2012 From: pje at telecommunity.com (PJ Eby) Date: Tue, 5 Jun 2012 00:03:17 -0400 Subject: [Python-Dev] Possible rough edges in Python 3 metaclasses (was Re: Language reference updated for metaclasses) In-Reply-To: References: Message-ID: On Mon, Jun 4, 2012 at 10:43 PM, Eric Snow wrote: > On Mon, Jun 4, 2012 at 6:10 PM, PJ Eby wrote: > > I mean that class-level __metaclass__ is no longer supported as of PEP > 3115, > > so I can't use that as a way to non-invasively obtain the enclosing > class at > > class creation time. > > Depends on what you mean by non-invasive: > Non-invasive meaning, "not requiring the user of the descriptor or decorator to make extra declarations-at-a-distance, especially ones that increase the likelihood of clashing machinery if multiple frameworks require the same functionality." ;-) That means class decorators, mixins, and explicit metaclasses don't work. The class decorator adds yak shaving, and the others lead to functional clashing. Currently, my choices for porting these libraries (and their dependents) to Python 3 are (in roughly descending order of preference): 1. Replace __builtins__.__build_class__ and hope PyPy et al follow CPython's lead, 2. Abuse sys.settrace() and __class__ to get a callback at the right moment (because settrace() *is* supported on other implementations in practice right now, at least for 2.x), or 3. Write a class decorator like "@py3_is_less_dynamic_than_py2", put it in a common library, and ask everybody and their dog to use it on any class whose body contains any decorator or descriptor from any of my libraries (or which someone else *derived* from any of my libraries, etc. ad nauseam), oh and good luck finding which ones all of them are, and yeah, if you miss it, stuff might not work. (Note, by the way, that this goes against the advice of not changing APIs while migrating to Python 3... and let's not forget all the documentation that has to be changed, each piece of which must explain and motivate this new and apparently-pointless decorator to the library's user.) I would prefer to have an option 4 or 5 (where there's a standard Python way to get a class creation callback from a class body), but #1 is honestly the most attractive at the moment, as I might only need to implement it in *one* library, with no changes to clients, including people who've built stuff based on that library or any of its clients, recursively. (#2 would also do it, but I was *really* hoping to get rid of that hack in Python 3.) Given these choices, I hope it's more understandable why I'd want to lobby for at least documenting __build_class__ and its replaceability, and perhaps encouraging other Python 3 implementations to offer the same feature. Given that implementing PEP 3115 and types.new_class() mean the same functionality has to be present, and given that class creation is generally not a performance-critical function, there's little reason for a sufficiently dynamic Python implementation (by which I basically mean Jython, IronPython, and PyPy) not to support it. (More-static implementations don't always even support metaclasses to begin with, so they're not going to lose anything important by not supporting a dynamic __build_class__.) -------------- next part -------------- An HTML attachment was scrubbed...
URL: From martin at v.loewis.de Tue Jun 5 07:52:26 2012 From: martin at v.loewis.de ("Martin v. Löwis") Date: Tue, 05 Jun 2012 07:52:26 +0200 Subject: [Python-Dev] Issue #11022: locale.getpreferredencoding() must not set temporary LC_CTYPE In-Reply-To: References: Message-ID: <4FCD9E9A.1090401@v.loewis.de> > Can I change this behaviour (before the first beta) in Python 3.3? Fine with me. That code predates 43e32b2b4004. I don't recall discussion to set the LC_CTYPE locale and not take it back, but apparently, this is what Python currently does, which means that another setlocale call is not necessary. So in theory, your change should have no effect, unless somebody has modified some environment variables. Regards, Martin From eliben at gmail.com Tue Jun 5 07:59:37 2012 From: eliben at gmail.com (Eli Bendersky) Date: Tue, 5 Jun 2012 07:59:37 +0300 Subject: [Python-Dev] [Python-checkins] cpython: Fix potential NameError in multiprocessing.Condition.wait() In-Reply-To: References: Message-ID: Can you add a testcase for this? On Mon, Jun 4, 2012 at 9:01 PM, richard.oudkerk wrote:
> http://hg.python.org/cpython/rev/3baeb5e13dd2
> changeset:   77348:3baeb5e13dd2
> user:        Richard Oudkerk
> date:        Mon Jun 04 18:59:07 2012 +0100
> summary:
>  Fix potential NameError in multiprocessing.Condition.wait()
>
> files:
>  Lib/multiprocessing/synchronize.py |  3 +--
>  1 files changed, 1 insertions(+), 2 deletions(-)
>
>
> diff --git a/Lib/multiprocessing/synchronize.py b/Lib/multiprocessing/synchronize.py
> --- a/Lib/multiprocessing/synchronize.py
> +++ b/Lib/multiprocessing/synchronize.py
> @@ -216,7 +216,7 @@
>
>         try:
>             # wait for notification or timeout
> -            ret = self._wait_semaphore.acquire(True, timeout)
> +            return self._wait_semaphore.acquire(True, timeout)
>         finally:
>             # indicate that this thread has woken
>             self._woken_count.release()
> @@ -224,7 +224,6 @@
>             # reacquire lock
>             for i in range(count):
>                 self._lock.acquire()
> -            return ret
>
>     def notify(self):
>         assert self._lock._semlock._is_mine(), 'lock is not owned'
>
> --
> Repository URL: http://hg.python.org/cpython
>
> _______________________________________________
> Python-checkins mailing list
> Python-checkins at python.org
> http://mail.python.org/mailman/listinfo/python-checkins
>
From ncoghlan at gmail.com Tue Jun 5 09:18:57 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 5 Jun 2012 17:18:57 +1000 Subject: [Python-Dev] Language reference updated for metaclasses In-Reply-To: References: Message-ID: On Tue, Jun 5, 2012 at 1:24 PM, Eli Bendersky wrote: > > "if an explicit metaclass is given and it is not an instance of > type(), then it is used directly as the metaclass" > > Could you elaborate on this point? Would it perhaps be clearer to say > "if an explicit metaclass is given and it is not a class"? Unfortunately, the term "a class" is slightly ambiguous. "cls is a class" can mean either "isinstance(cls, type)" or it can be shorthand for "cls is a user-defined class (i.e. not a builtin type)". I'll likely rephrase that section of the docs based on the current discussion with PJE, though. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com |
Brisbane, Australia From eliben at gmail.com Tue Jun 5 09:20:58 2012 From: eliben at gmail.com (Eli Bendersky) Date: Tue, 5 Jun 2012 10:20:58 +0300 Subject: [Python-Dev] Language reference updated for metaclasses In-Reply-To: References: Message-ID: On Tue, Jun 5, 2012 at 10:18 AM, Nick Coghlan wrote: > On Tue, Jun 5, 2012 at 1:24 PM, Eli Bendersky wrote: >> >> "if an explicit metaclass is given and it is not an instance of >> type(), then it is used directly as the metaclass" >> >> Could you elaborate on this point? Would it perhaps be clearer to say >> "if an explicit metaclass is given and it is not a class"? > > Unfortunately, the term "a class" is slightly ambiguous. "cls is a > class" can mean either "isinstance(cls, type)" or it can be shorthand > for "cls is a user-defined class (i.e. not a builtin type)". > Yes, confusing it is (http://eli.thegreenplace.net/2012/03/30/python-objects-types-classes-and-instances-a-glossary/) Still, instance of type()" is a bit too cryptic for mere mortals, IMHO. Eli From ncoghlan at gmail.com Tue Jun 5 09:53:05 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 5 Jun 2012 17:53:05 +1000 Subject: [Python-Dev] Possible rough edges in Python 3 metaclasses (was Re: Language reference updated for metaclasses) In-Reply-To: References: Message-ID: On Tue, Jun 5, 2012 at 2:03 PM, PJ Eby wrote: > On Mon, Jun 4, 2012 at 10:43 PM, Eric Snow > wrote: >> >> On Mon, Jun 4, 2012 at 6:10 PM, PJ Eby wrote: >> > I mean that class-level __metaclass__ is no longer supported as of PEP >> > 3115, >> > so I can't use that as a way to non-invasively obtain the enclosing >> > class at >> > class creation time. >> >> Depends on what you mean by non-invasive: > > > Non-invasive meaning, "not requiring the user of the descriptor or decorator > to make extra declarations-at-a-distance, especially ones that increase the > likelihood of clashing machinery if multiple frameworks require the same > functionality."? ;-) > > That means class decorators, mixins, and explicit metaclasses don't work. > The class decorator adds yak shaving, and the others lead to functional > clashing. > > Currently, my choices for porting these libraries (and their dependents) to > Python 3 are (in roughly descending order of preference): > > 1. Replace __builtins__.__build_class__ and hope PyPy et al follow CPython's > lead, Please don't try to coerce everyone else into supporting such an ugly hack by abusing an implementation detail. There's a reason types.new_class() uses a different signature. Deliberately attempting to present python-dev with a fait accompli instead of building consensus for officially adding a feature to the language is *not cool*. > I would prefer to have an option 4 or 5 (where there's a standard Python way > to get a class creation callback from a class body), but #1 is honestly the > most attractive at the moment, as I might only need to implement it in *one* > library, with no changes to clients, including people who've built stuff > based on that library or any of its clients, recursively.? (#2 would also do > it, but I was *really* hoping to get rid of that hack in Python 3.) Please be patient and let us work out a solution that has at least some level of consensus associated with it, rather than running off and doing your own thing. As I understand it, what you currently do is, from a running decorator, walk the stack with sys._getframes() and insert a "__metaclass__" value into the class namespace. 
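In rough outline, the 2.x version of that trick looks something like this (a simplified sketch; real code has to be more careful about the frame depth and about chaining with the metaclass implied by the bases):

    import sys

    def on_class_created(callback):
        # Python 2: assumes it is called directly from a class body;
        # defaulting to type is a simplification
        class_ns = sys._getframe(1).f_locals
        old = class_ns.get('__metaclass__', type)
        def __metaclass__(name, bases, ns):
            cls = old(name, bases, ns)
            callback(cls)
            return cls
        class_ns['__metaclass__'] = __metaclass__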
Now, one minor annoyance with current class decorators is that they're *not* inherited. This is sometimes what you want, but sometimes you would prefer to automatically decorate all subclasses as well. Currently, that means writing a custom metaclass to automatically apply the decorators. This has all the problems you have noted with composability. It seems, then, that a potentially clean solution may be found by adding a *dynamic* class decoration hook. As a quick sketch of such a scheme, add the following step to the class creation process (after the current class creation process, but before the execution of lexical decorators):

    for mro_cls in cls.mro():
        decorators = mro_cls.__dict__.get("__decorators__", ())
        for deco in reversed(decorators):
            cls = deco(cls)

Would such a dynamic class decoration hook meet your use case? Such a hook has use cases (specifically involving decorator inheritance) that *don't* require the use of sys._getframes(), so it is far more likely to achieve the necessary level of consensus. As a specific example, the unittest module could use it to provide test parameterisation without needing a custom metaclass. > Given these choices, I hope it's more understandable why I'd want to lobby > for at least documenting __build_class__ and its replaceability, and perhaps > encouraging other Python 3 implementations to offer the same feature. You've jumped to a hackish solution instead of taking a step back and trying to think of a potentially elegant addition to the language that is more composable and doesn't require modification of process-global state. You complain that metaclasses are hard to compose, and your "solution" is to monkeypatch a deliberately undocumented builtin? Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From steve at pearwood.info Tue Jun 5 09:58:45 2012 From: steve at pearwood.info (Steven D'Aprano) Date: Tue, 5 Jun 2012 17:58:45 +1000 Subject: [Python-Dev] Language reference updated for metaclasses In-Reply-To: References: Message-ID: <20120605075845.GB17873@ando> On Tue, Jun 05, 2012 at 10:20:58AM +0300, Eli Bendersky wrote: > Still, instance of type()" is a bit too cryptic for mere mortals, IMHO. I think that if somebody finds "instance of type" too cryptic, they won't have any chance at all to understand metaclasses. Personally, I think there is a lot confusing about metaclasses, but the idea that classes are instances (objects) is not one of them. -- Steven From mark at hotpy.org Tue Jun 5 10:34:37 2012 From: mark at hotpy.org (Mark Shannon) Date: Tue, 05 Jun 2012 09:34:37 +0100 Subject: [Python-Dev] Language reference updated for metaclasses In-Reply-To: <20120605075845.GB17873@ando> References: <20120605075845.GB17873@ando> Message-ID: <4FCDC49D.8020603@hotpy.org> Steven D'Aprano wrote: > On Tue, Jun 05, 2012 at 10:20:58AM +0300, Eli Bendersky wrote: > >> Still, instance of type()" is a bit too cryptic for mere mortals, IMHO. > > I think that if somebody finds "instance of type" too cryptic, they > won't have any chance at all to understand metaclasses. > > Personally, I think there is a lot confusing about metaclasses, but the > idea that classes are instances (objects) is not one of them. > One thing that *is* confusing is that the metaclass parameter in class creation is not the metaclass (class of the class), but the class factory.
For example:

    def silly(*args):
        print(*args)
        return int

    class C(metaclass=silly):
        def m(self): pass

    C () {'m': <function m at 0x...>, '__qualname__': 'C', '__module__': '__main__'}

    print(C)
    int

In this example the metaclass (i.e. the class of C) is type (C is int), even though the declared metaclass is 'silly'. I assume it is too late to change the name of the 'metaclass' keyword to 'factory', but we could use that terminology in the docs. Cheers, Mark From fuzzyman at voidspace.org.uk Tue Jun 5 10:55:04 2012 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Tue, 5 Jun 2012 09:55:04 +0100 Subject: [Python-Dev] Language reference updated for metaclasses In-Reply-To: <4FCDC49D.8020603@hotpy.org> References: <20120605075845.GB17873@ando> <4FCDC49D.8020603@hotpy.org> Message-ID: On 5 Jun 2012, at 09:34, Mark Shannon wrote: > Steven D'Aprano wrote: >> On Tue, Jun 05, 2012 at 10:20:58AM +0300, Eli Bendersky wrote: >>> Still, instance of type()" is a bit too cryptic for mere mortals, IMHO. >> I think that if somebody finds "instance of type" too cryptic, they won't have any chance at all to understand metaclasses. >> Personally, I think there is a lot confusing about metaclasses, but the idea that classes are instances (objects) is not one of them. > > One thing that *is* confusing is that the metaclass parameter in class creation is not the metaclass (class of the class), but the class factory. For example:
>
> def silly(*args):
>     print(*args)
>     return int
>
> class C(metaclass=silly):
>     def m(self): pass
>
> C () {'m': <function m at 0x...>, '__qualname__': 'C', '__module__': '__main__'}
>
> print(C)
> int
>
> In this example the metaclass (i.e. the class of C) is type (C is int), > even though the declared metaclass is 'silly'. > > I assume it is too late to change the name of the 'metaclass' keyword to > 'factory', but we could use that terminology in the docs. Well, the same was always true in Python 2 as well - __metaclass__ could be a function that was identically "silly". The real "metaclass" (type of the class) is whatever you use to construct the class. Michael > > Cheers, > Mark > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk > -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html From walter at livinglogic.de Tue Jun 5 10:36:42 2012 From: walter at livinglogic.de (Walter Dörwald) Date: Tue, 05 Jun 2012 10:36:42 +0200 Subject: [Python-Dev] Issue 2736: datetimes and Unix timestamps In-Reply-To: References: Message-ID: <4FCDC51A.2010703@livinglogic.de> On 04.06.12 13:19, Dirkjan Ochtman wrote: > I recently opened issue14908. At work, I have to do a bunch of things > with dates, times and timezones, and sometimes Unix timestamps are > also involved (largely for easy compatibility with legacy APIs). I > find the relative obscurity when converting datetimes to timestamps > rather painful; IMO it should be possible to do everything I need > straight from the datetime module objects, instead of having to > involve the time or calendar modules.
> Anyway, I was pointed to issue 2736, which seems to have got a lot of > discouraged core contributors (Victor, Antoine, David and Ka-Ping, to > name just a few) up against Alexander (the datetime maintainer, AFAIK). Also see: http://bugs.python.org/issue665194 (datetime-RFC2822 roundtripping) > It seems like a fairly straightforward case of practicality > over purity: Alexander argues that there are "easy" one-liners to do > things like datetime.totimestamp(), I don't want one-liners, I want one-callers! ;) > but most other people seem to not > find them so easy. They've since been added to the documentation at > least, but I would like to see if there is consensus on python-dev > that adding a little more timestamp support to datetime objects would > make sense. Servus, Walter From fuzzyman at voidspace.org.uk Tue Jun 5 11:11:50 2012 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Tue, 5 Jun 2012 10:11:50 +0100 Subject: [Python-Dev] Possible rough edges in Python 3 metaclasses (was Re: Language reference updated for metaclasses) In-Reply-To: References: Message-ID: <5954463C-4512-4F3B-9D62-61C54367ED59@voidspace.org.uk> On 5 Jun 2012, at 08:53, Nick Coghlan wrote: > [snip...] > > Now, one minor annoyance with current class decorators is that they're > *not* inherited. This is sometimes what you want, but sometimes you > would prefer to automatically decorate all subclasses as well. > Currently, that means writing a custom metaclass to automatically > apply the decorators. This has all the problems you have noted with > composability. > > It seems then, that a potentially clean solution may be found by > adding a *dynamic* class decoration hook. As a quick sketch of such a > scheme, add the following step to the class creation process (between > the current class creation process, but before the execution of > lexical decorators):
>
>     for mro_cls in cls.mro():
>         decorators = mro_cls.__dict__.get("__decorators__", ())
>         for deco in reversed(decorators):
>             cls = deco(cls)
>
> Would such a dynamic class decoration hook meet your use case? Such a > hook has use cases (specifically involving decorator inheritance) that > *don't* require the use of sys._getframes(), so is far more likely to > achieve the necessary level of consensus. > > As a specific example, the unittest module could use it to provide > test parameterisation without needing a custom metaclass. Heh, you're effectively restoring the old Python 2 metaclass machinery with a slightly-less-esoteric mechanism. That aside I really like it. Michael -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html From ncoghlan at gmail.com Tue Jun 5 12:20:42 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 5 Jun 2012 20:20:42 +1000 Subject: [Python-Dev] Language reference updated for metaclasses In-Reply-To: <4FCDC49D.8020603@hotpy.org> References: <20120605075845.GB17873@ando> <4FCDC49D.8020603@hotpy.org> Message-ID: On Tue, Jun 5, 2012 at 6:34 PM, Mark Shannon wrote: > In this example the metaclass (ie the class of C) is type (C is int), > even though the declared metaclass is 'silly'. > > I assume it is too late to change the name of the 'metaclass' keyword to > 'factory', but we could use that terminology in the docs. "factory" is also wrong (since a more derived metaclass from a base class may be used instead).
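A quick illustration:

    class Meta(type):
        pass

    class Base(metaclass=Meta):
        pass

    class C(Base, metaclass=type):  # explicitly requests type...
        pass

    print(type(C) is Meta)          # True: the more derived Meta is used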
"metaclass_hint" or "requested_metaclass" would be more accurate names - as in Python 2, the value provided in the class definition is only one input to the algorithm that determines the metaclass (which is now correctly described in the language reference), rather than a simple factory function or direct specification of __class__. That slightly blurry terminology isn't new in Python 3 though, it's been around for pretty much as long as Python has supported metaclasses. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From ncoghlan at gmail.com Tue Jun 5 12:25:19 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 5 Jun 2012 20:25:19 +1000 Subject: [Python-Dev] Possible rough edges in Python 3 metaclasses (was Re: Language reference updated for metaclasses) In-Reply-To: <5954463C-4512-4F3B-9D62-61C54367ED59@voidspace.org.uk> References: <5954463C-4512-4F3B-9D62-61C54367ED59@voidspace.org.uk> Message-ID: On Tue, Jun 5, 2012 at 7:11 PM, Michael Foord wrote: > > On 5 Jun 2012, at 08:53, Nick Coghlan wrote: > >> [snip...] >> >> Now, one minor annoyance with current class decorators is that they're >> *not* inherited. This is sometimes what you want, but sometimes you >> would prefer to automatically decorate all subclasses as well. >> Currently, that means writing a custom metaclass to automatically >> apply the decorators. This has all the problems you have noted with >> composability. >> >> It seems then, that a potentially clean solution may be found by >> adding a *dynamic* class decoration hook. As a quick sketch of such a >> scheme, add the following step to the class creation process (between >> the current class creation process, but before the execution of >> lexical decorators): >> >> ? ?for mro_cls in cls.mro(): >> ? ? ? ?decorators = mro_cls.__dict__.get("__decorators__", ()) >> ? ? ? ?for deco in reversed(decorators): >> ? ? ? ? ? ?cls = deco(cls) >> >> Would such a dynamic class decoration hook meet your use case? Such a >> hook has use cases (specifically involving decorator inheritance) that >> *don't* require the use of sys._getframes(), so is far more likely to >> achieve the necessary level of consensus. >> >> As a specific example, the unittest module could use it to provide >> test parameterisation without needing a custom metaclass. > > > Heh, you're effectively restoring the old Python 2 metaclass machinery with a slightly-less-esoteric mechanism. That aside I really like it. Yup, writing a PEP now - I'm mostly interested in the inheritance aspects, but providing PJE with a way to get the effect he wants without monkeypatching undocumented builtins at runtime and effectively forking CPython's runtime behaviour is a useful bonus. Metaclasses *do* have a problem with composition and lexical decorators don't play nicely with inheritance, but a dynamic decorator system modelled to some degree on the old __metaclass__ mechanics could fill the gap nicely. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From shibturn at gmail.com Tue Jun 5 13:00:47 2012 From: shibturn at gmail.com (shibturn) Date: Tue, 05 Jun 2012 12:00:47 +0100 Subject: [Python-Dev] [Python-checkins] cpython: Fix potential NameError in multiprocessing.Condition.wait() In-Reply-To: References: Message-ID: On 05/06/2012 6:59am, Eli Bendersky wrote: > Can you add a testcase for this? > ... OK. 
BTW, I should have written UnboundLocalError not NameError:

    >>> import multiprocessing as mp
    [81047 refs]
    >>> c = mp.Condition()
    [88148 refs]
    >>> with mp.Condition() as c: c.wait()
    ...
    Traceback (most recent call last):
      File "C:\Repos\cpython-dirty\lib\multiprocessing\synchronize.py", line 219, in wait
        ret = self._wait_semaphore.acquire(True, timeout)
    KeyboardInterrupt

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "C:\Repos\cpython-dirty\lib\multiprocessing\synchronize.py", line 227, in wait
        return ret
    UnboundLocalError: local variable 'ret' referenced before assignment
    [88218 refs]

From victor.stinner at gmail.com Tue Jun 5 13:55:20 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 5 Jun 2012 13:55:20 +0200 Subject: [Python-Dev] Issue #11022: locale.getpreferredencoding() must not set temporary LC_CTYPE In-Reply-To: <4FCD9E9A.1090401@v.loewis.de> References: <4FCD9E9A.1090401@v.loewis.de> Message-ID: > Fine with me. Ok, done with changeset 2587328c7c9c. > So in theory, your change should have no effect, unless somebody has > modified some environment variables. Changing TextIOWrapper to call locale.getpreferredencoding(False) instead of getpreferredencoding() has these two effects: 1) without the patch, setting the LC_ALL, LC_CTYPE or LANG environment variable changes the encoding used by TextIOWrapper. 2) with the patch, setting LC_CTYPE (with locale.setlocale) changes the encoding used by TextIOWrapper. IMO (2) is less surprising than (1). For example, it is the behaviour expected by the reporter of issue #11022. In practice, it should not change anything for most people. Victor From jsbueno at python.org.br Tue Jun 5 14:21:39 2012 From: jsbueno at python.org.br (Joao S. O. Bueno) Date: Tue, 5 Jun 2012 09:21:39 -0300 Subject: [Python-Dev] Possible rough edges in Python 3 metaclasses (was Re: Language reference updated for metaclasses) In-Reply-To: References: Message-ID: On 4 June 2012 21:10, PJ Eby wrote: >> > I only use __metaclass__ in 2.x for this because it's the only way for >> > code >> > executed in a class body to gain access to the class at creation time. >> > PJ, it may be just me, but what does this code do that can't be done in the metaclass' __new__ method? You might have to rewrite some method-decorators, so that they just mark a method at class body execution time, and then whatever the decorator used to do at that time would be done in the meta's __new__ - I have this working in some code (and in Python 2 already). js -><- From ncoghlan at gmail.com Tue Jun 5 14:24:05 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 5 Jun 2012 22:24:05 +1000 Subject: [Python-Dev] Possible rough edges in Python 3 metaclasses (was Re: Language reference updated for metaclasses) In-Reply-To: References: <5954463C-4512-4F3B-9D62-61C54367ED59@voidspace.org.uk> Message-ID: On Tue, Jun 5, 2012 at 8:25 PM, Nick Coghlan wrote: > On Tue, Jun 5, 2012 at 7:11 PM, Michael Foord wrote: >> >> On 5 Jun 2012, at 08:53, Nick Coghlan wrote: >> >>> [snip...] >>> >>> Now, one minor annoyance with current class decorators is that they're >>> *not* inherited. This is sometimes what you want, but sometimes you >>> would prefer to automatically decorate all subclasses as well. >>> Currently, that means writing a custom metaclass to automatically >>> apply the decorators. This has all the problems you have noted with >>> composability.
>>> >>> It seems then, that a potentially clean solution may be found by >>> adding a *dynamic* class decoration hook. As a quick sketch of such a >>> scheme, add the following step to the class creation process (between >>> the current class creation process, but before the execution of >>> lexical decorators): >>> >>> ? ?for mro_cls in cls.mro(): >>> ? ? ? ?decorators = mro_cls.__dict__.get("__decorators__", ()) >>> ? ? ? ?for deco in reversed(decorators): >>> ? ? ? ? ? ?cls = deco(cls) >>> >>> Would such a dynamic class decoration hook meet your use case? Such a >>> hook has use cases (specifically involving decorator inheritance) that >>> *don't* require the use of sys._getframes(), so is far more likely to >>> achieve the necessary level of consensus. >>> >>> As a specific example, the unittest module could use it to provide >>> test parameterisation without needing a custom metaclass. >> >> >> Heh, you're effectively restoring the old Python 2 metaclass machinery with a slightly-less-esoteric mechanism. That aside I really like it. > > Yup, writing a PEP now - I'm mostly interested in the inheritance > aspects, but providing PJE with a way to get the effect he wants > without monkeypatching undocumented builtins at runtime and > effectively forking CPython's runtime behaviour is a useful bonus. > > Metaclasses *do* have a problem with composition and lexical > decorators don't play nicely with inheritance, but a dynamic decorator > system modelled to some degree on the old __metaclass__ mechanics > could fill the gap nicely. PEP written and posted: http://www.python.org/dev/peps/pep-0422/ More toy examples here: https://bitbucket.org/ncoghlan/misc/src/default/pep422.py Yes, it means requiring the use of a specific metaclass in 3.2 (either directly or via inheritance), but monkeypatching an undocumented builtin is going to pathological lengths just to avoid requiring that people explicitly interoperate with your particular metaclass mechanisms. Regards, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From catch-all at masklinn.net Tue Jun 5 14:37:47 2012 From: catch-all at masklinn.net (Xavier Morel) Date: Tue, 5 Jun 2012 14:37:47 +0200 Subject: [Python-Dev] Possible rough edges in Python 3 metaclasses (was Re: Language reference updated for metaclasses) In-Reply-To: References: <5954463C-4512-4F3B-9D62-61C54367ED59@voidspace.org.uk> Message-ID: <7479E377-FFF5-4D55-B3B1-73AC0276ACDF@masklinn.net> On 5 juin 2012, at 14:24, Nick Coghlan wrote: > On Tue, Jun 5, 2012 at 8:25 PM, Nick Coghlan wrote: >> On Tue, Jun 5, 2012 at 7:11 PM, Michael Foord wrote: >>> >>> On 5 Jun 2012, at 08:53, Nick Coghlan wrote: >>> >>>> [snip...] >>>> >>>> Now, one minor annoyance with current class decorators is that they're >>>> *not* inherited. This is sometimes what you want, but sometimes you >>>> would prefer to automatically decorate all subclasses as well. >>>> Currently, that means writing a custom metaclass to automatically >>>> apply the decorators. This has all the problems you have noted with >>>> composability. >>>> >>>> It seems then, that a potentially clean solution may be found by >>>> adding a *dynamic* class decoration hook. 
>>>> As a quick sketch of such a scheme, add the following step to the
>>>> class creation process (after the current class creation process,
>>>> but before the execution of lexical decorators):
>>>>
>>>>     for mro_cls in cls.mro():
>>>>         decorators = mro_cls.__dict__.get("__decorators__", ())
>>>>         for deco in reversed(decorators):
>>>>             cls = deco(cls)
>>>>
>>>> Would such a dynamic class decoration hook meet your use case? Such a
>>>> hook has use cases (specifically involving decorator inheritance) that
>>>> *don't* require the use of sys._getframe(), so is far more likely to
>>>> achieve the necessary level of consensus.
>>>>
>>>> As a specific example, the unittest module could use it to provide
>>>> test parameterisation without needing a custom metaclass.
>>>
>>> Heh, you're effectively restoring the old Python 2 metaclass machinery
>>> with a slightly-less-esoteric mechanism. That aside I really like it.
>>
>> Yup, writing a PEP now - I'm mostly interested in the inheritance
>> aspects, but providing PJE with a way to get the effect he wants
>> without monkeypatching undocumented builtins at runtime and
>> effectively forking CPython's runtime behaviour is a useful bonus.
>>
>> Metaclasses *do* have a problem with composition and lexical
>> decorators don't play nicely with inheritance, but a dynamic decorator
>> system modelled to some degree on the old __metaclass__ mechanics
>> could fill the gap nicely.
>
> PEP written and posted: http://www.python.org/dev/peps/pep-0422/
> More toy examples here:
> https://bitbucket.org/ncoghlan/misc/src/default/pep422.py

Does it really make sense to meld decorators and inheritance so? With
this, class "decorators" become way more than mere decorators and start
straying quite far away from their function-based counterparts (which
are not "inherited" when overriding methods but are statically applied
instead).

From ncoghlan at gmail.com Tue Jun 5 15:17:54 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 5 Jun 2012 23:17:54 +1000
Subject: [Python-Dev] Possible rough edges in Python 3 metaclasses (was Re: Language reference updated for metaclasses)
In-Reply-To: <7479E377-FFF5-4D55-B3B1-73AC0276ACDF@masklinn.net>
References: <5954463C-4512-4F3B-9D62-61C54367ED59@voidspace.org.uk> <7479E377-FFF5-4D55-B3B1-73AC0276ACDF@masklinn.net>
Message-ID: 

On Tue, Jun 5, 2012 at 10:37 PM, Xavier Morel wrote:
> On 5 juin 2012, at 14:24, Nick Coghlan wrote:
>>> Metaclasses *do* have a problem with composition and lexical
>>> decorators don't play nicely with inheritance, but a dynamic decorator
>>> system modelled to some degree on the old __metaclass__ mechanics
>>> could fill the gap nicely.
>>
>> PEP written and posted: http://www.python.org/dev/peps/pep-0422/
>> More toy examples here:
>> https://bitbucket.org/ncoghlan/misc/src/default/pep422.py
>
> Does it really make sense to meld decorators and inheritance so? With this, class "decorators" become way more than mere decorators and start straying quite far away from their function-based counterparts (which are not "inherited" when overriding methods but are statically applied instead)

Lexical class decorators won't go away, and will still only apply to
the associated class definition. There are a couple of key points about
class decorators that make them a *lot* easier to reason about than
metaclasses:

1. They have a very simple expected API: "class in, class out"
2. As a result of 1, they can typically be pipelined easily: in the
absence of decorator abuse, you can take the output of one class
decorator and feed it to the input of the next one.

Metaclasses live at a different level of complexity altogether - in
order to make use of them, you need to have some understanding of how a
name, a sequence of bases and a namespace can be combined to create a
class object. Further, because it's a transformative API (name, bases,
ns -> class object), combining them requires a lot more care (generally
involving inheriting from type and making appropriate use of super()
calls).

However, even when all you really want is to decorate the class after
it has been created, defining a metaclass is currently still necessary
if you also want to be notified when new subclasses are defined. This
PEP proposes that those two characteristics be split: if all you want
is inheritance of decorators, then dynamic decorators are a much
simpler, more multiple-inheritance-friendly solution than using a
custom metaclass; leave full metaclass definitions for cases where you
really *do* want almost complete control over the class creation
process.

That said, actively discouraging PJE from his current scheme that
involves monkeypatching __build_class__ and publishing "Python 3 (with
monkeypatched undocumented builtins) compatible" packages on PyPI is
definitely a key motivation in putting this proposal together.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From guido at python.org Tue Jun 5 15:45:34 2012
From: guido at python.org (Guido van Rossum)
Date: Tue, 5 Jun 2012 06:45:34 -0700
Subject: [Python-Dev] Issue 2736: datetimes and Unix timestamps
In-Reply-To: References: <20120604115102.7e9b770b@resist.wooz.org>
Message-ID: 

On Mon, Jun 4, 2012 at 5:51 PM, Alexander Belopolsky wrote:
> On Mon, Jun 4, 2012 at 5:25 PM, Guido van Rossum wrote:
> ..
>> But I don't know what ticks() is supposed to do.
>
> """
> .ticks(offset=0.0, dst=-1)
>
> Returns a float representing the instance's value in ticks (see above).
>
> The conversion routine assumes that the stored date/time value is
> given in local time.
>
> The given value for dst is used by the conversion (0 = DST off, 1 =
> DST on, -1 = unknown) and offset is subtracted from the resulting
> value.
>
> The method raises a RangeError exception if the object's value does not
> fit into the system's ticks range.
>
> Note: On some platforms the C lib's mktime() function that this method
> uses does not allow setting DST to an arbitrary value. The module
> checks for this and raises a SystemError in case setting DST to 0 or 1
> does not result in valid results.
> """ mxDateTime Documentation,
>
> I agree that would be a step backward.

>> I am assuming we
>> would create totimestamp() and utctotimestamp() that mirror
>> fromtimestamp() and utcfromtimestamp().
>
> First, with the introduction of timezone.utc, the use of utc...
> methods should be discouraged.

Speak for yourself. TZ-less datetimes aren't going away and have
plenty of use in contexts where the tz is either universally known or
irrelevant.

> fromtimestamp(s, timezone.utc) is better than utcfromtimestamp();
> now(timezone.utc) is better than utcnow(); and
> dt.astimezone(timezone.utc).timetuple() is better than dt.utctimetuple()

No, they are not better. They are simply different.

> The advantage of the first two is that they produce aware datetime
> instances.

Whether that really is an advantage is up to the user and the API they
are working with.
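For what it's worth, the two spellings denote the same instant -- only
the tzinfo differs. A quick sketch (the timestamp value is arbitrary):

    >>> from datetime import datetime, timezone
    >>> s = 1338900000
    >>> datetime.utcfromtimestamp(s)  # naive, UTC implied
    datetime.datetime(2012, 6, 5, 12, 40)
    >>> datetime.fromtimestamp(s, timezone.utc)  # aware
    datetime.datetime(2012, 6, 5, 12, 40, tzinfo=datetime.timezone.utc)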
> The last example may appear more verbose, but in most
> applications, dt.astimezone(timezone.utc) is really what is needed.

So you think.

>> Since we trust the user with
>> the latter we should trust them with the former.
>
> This does not follow. When you deal with non-monotonic functions,
> defining the inverse functions requires considerably more care than
> defining the original. For example, defining sqrt() requires
> choosing the positive root. For arcsin(), you need to choose between
> infinitely many branches. I don't think many users would appreciate
> a math library where sqrt() and arcsin() take an optional branch=
> argument, but mxDT's ticks() features this very design with its dst=
> flag. Most users will ignore optional dst= and live with the
> resulting bugs. We had several such bugs in stdlib and they went
> unnoticed for years. That's why I don't recommend adding ticks().

But those math functions *do* exist and we need their equivalent in
the stdlib. Users will have a *need* to convert their datetime objects
(with or without tzinfo) to POSIX timestamps, and the one-liners from
the docs are too hard to find and remember. While I agree that not
every handy three lines of code need to become a function in the
standard library, the reverse also doesn't follow: it just being three
lines does not make it a sin to add it to the stdlib if those three
lines are particularly hard to come up with.

For datetimes with tzinfo, dt.totimestamp() should return (dt -
epoch).total_seconds() where epoch is
datetime.datetime.fromtimestamp(0, datetime.timezone.utc); for
datetimes without tzinfo, a similar calculation should be performed
assuming local time. The utctotimestamp() method should insist that dt
has no tzinfo and then do a similar calculation again assuming the
implied UTC timezone.

I think these definitions are both sufficiently unsurprising and
useful to add to the datetime module.

--
--Guido van Rossum (python.org/~guido)

From dirkjan at ochtman.nl Tue Jun 5 16:19:50 2012
From: dirkjan at ochtman.nl (Dirkjan Ochtman)
Date: Tue, 5 Jun 2012 16:19:50 +0200
Subject: [Python-Dev] Issue 2736: datetimes and Unix timestamps
In-Reply-To: References: <20120604115102.7e9b770b@resist.wooz.org>
Message-ID: 

On Tue, Jun 5, 2012 at 3:45 PM, Guido van Rossum wrote:
> For datetimes with tzinfo, dt.totimestamp() should return (dt -
> epoch).total_seconds() where epoch is
> datetime.datetime.fromtimestamp(0, datetime.timezone.utc); for
> datetimes without tzinfo, a similar calculation should be performed
> assuming local time. The utctotimestamp() method should insist that dt
> has no tzinfo and then do a similar calculation again assuming the
> implied UTC timezone.

It would be nice if utctotimestamp() also worked with datetimes that
have tzinfo set to UTC.

And while I don't think we really need it, if there are concerns that
some other epochs may be useful, we could add an optional epoch
argument.
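(For reference, the aware-datetime arithmetic everyone keeps spelling
out is just the following; the helper name and _EPOCH constant are
placeholders, not a proposed API:

    from datetime import datetime, timezone

    _EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

    def aware_to_timestamp(dt):
        # Works for any aware datetime; a naive one raises TypeError,
        # since mixed naive/aware subtraction is refused.
        return (dt - _EPOCH).total_seconds()

)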
Cheers,

Dirkjan

From guido at python.org Tue Jun 5 17:07:06 2012
From: guido at python.org (Guido van Rossum)
Date: Tue, 5 Jun 2012 08:07:06 -0700
Subject: [Python-Dev] Issue 2736: datetimes and Unix timestamps
In-Reply-To: References: <20120604115102.7e9b770b@resist.wooz.org>
Message-ID: 

On Tue, Jun 5, 2012 at 7:19 AM, Dirkjan Ochtman wrote:
> On Tue, Jun 5, 2012 at 3:45 PM, Guido van Rossum wrote:
>> For datetimes with tzinfo, dt.totimestamp() should return (dt -
>> epoch).total_seconds() where epoch is
>> datetime.datetime.fromtimestamp(0, datetime.timezone.utc); for
>> datetimes without tzinfo, a similar calculation should be performed
>> assuming local time. The utctotimestamp() method should insist that dt
>> has no tzinfo and then do a similar calculation again assuming the
>> implied UTC timezone.
>
> It would be nice if utctotimestamp() also worked with datetimes that
> have tzinfo set to UTC.

Hm, but utcfromtimestamp() never returns a datetime that has a tzinfo
(it doesn't take a tzinfo argument like fromtimestamp() does). But if
we want to do this, we should just say that utctotimestamp() with a
datetime that has a tzinfo honors the tzinfo, always.

In fact, we might go further and define totimestamp() to do the same
thing. Then the only difference between the two would be what timezone
they use if there is *no* tzinfo. That's fine with me.

> And while I don't think we really need it, if there are concerns that
> some other epochs may be useful, we could add an optional epoch
> argument.

No, that's hypergeneralization. People working with other epochs can
write the three lines of code -- or they can add or subtract a
constant that is the difference between *their* epoch and 1/1/1970.
Alternate epochs just aren't that common (and where they are, the
datetime module probably isn't what you want -- you're probably doing
calendrical calculations using some other calendar).

--
--Guido van Rossum (python.org/~guido)

From dirkjan at ochtman.nl Tue Jun 5 17:25:19 2012
From: dirkjan at ochtman.nl (Dirkjan Ochtman)
Date: Tue, 5 Jun 2012 17:25:19 +0200
Subject: [Python-Dev] Issue 2736: datetimes and Unix timestamps
In-Reply-To: References: <20120604115102.7e9b770b@resist.wooz.org>
Message-ID: 

On Tue, Jun 5, 2012 at 5:07 PM, Guido van Rossum wrote:
> In fact, we might go further and define totimestamp() to do the same
> thing. Then the only difference between the two would be what timezone
> they use if there is *no* tzinfo. That's fine with me.

Yeah, that sounds good.

Cheers,

Dirkjan

From pje at telecommunity.com Tue Jun 5 17:28:21 2012
From: pje at telecommunity.com (PJ Eby)
Date: Tue, 5 Jun 2012 11:28:21 -0400
Subject: [Python-Dev] Possible rough edges in Python 3 metaclasses (was Re: Language reference updated for metaclasses)
In-Reply-To: References: 
Message-ID: 

On Tue, Jun 5, 2012 at 3:53 AM, Nick Coghlan wrote:
> Please don't try to coerce everyone else into supporting such an ugly
> hack by abusing an implementation detail.

Whoa, whoa there. Again with the FUD. Sorry if I gave the impression
that I'm about to unleash the monkeypatching hordes tomorrow or
something. I'm unlikely to begin serious Python 3 porting of the
relevant libraries before 3.3 is released; the reason I'm talking about
this now is because there's currently Python-Dev discussion regarding
metaclasses, so it seemed like a good time to bring the subject up, to
see if there were any *good* solutions.
Just because my three existing options are, well, the only options if I
started porting today, doesn't mean I think they're the *only* options.
As I said, an option 4 or 5 would be fantastic, and your new PEP 422 is
wonderful in that regard. Thank you VERY much for putting that
together, I appreciate that very much. (I just wish you hadn't felt
forced or coerced to do it -- that was not my intention *at all*!)

Frankly, a big part of my leaning towards the __build_class__ option
was that it's the *least work by Python implementors* (at least in
terms of coding) to get my use case met. The reason I didn't write a
PEP myself is because I didn't want to load up Python with yet another
protocol, when my use case could be met by stuff that's already
implemented in CPython.

IOW, my motivation for saying, "hey, can't I just use this nice hook
here" was to avoid asking for a *new* feature, if there weren't enough
other people interested in a decoration protocol of this sort. That is,
I was trying to NOT make anybody do a bunch of work on my behalf.
(Clearly, I wasn't successful in the attempt, but I should at least get
credit for trying. ;-)

> Now, one minor annoyance with current class decorators is that they're
> *not* inherited. This is sometimes what you want, but sometimes you
> would prefer to automatically decorate all subclasses as well.
> Currently, that means writing a custom metaclass to automatically
> apply the decorators. This has all the problems you have noted with
> composability.
>
> It seems then, that a potentially clean solution may be found by
> adding a *dynamic* class decoration hook. As a quick sketch of such a
> scheme, add the following step to the class creation process (after
> the current class creation process, but before the execution of
> lexical decorators):
>
>     for mro_cls in cls.mro():
>         decorators = mro_cls.__dict__.get("__decorators__", ())
>         for deco in reversed(decorators):
>             cls = deco(cls)
>
> Would such a dynamic class decoration hook meet your use case? Such a
> hook has use cases (specifically involving decorator inheritance) that
> *don't* require the use of sys._getframe(), so is far more likely to
> achieve the necessary level of consensus.

Absolutely. The main challenge with it is that I would need stateful
decorators, so that they do nothing when called more than once; the
motivating use cases for me do not require actual decorator
inheritance.

But I do have *other* stuff I'm currently using metaclasses for in 2.x,
that could probably be eliminated with PEP 422. (For example, the #1
metaclass I use in 2.x is one that basically lets classes have
__class_new__ and __class_init__ methods: a rough equivalent to in-line
metaclasses or inheritable decorators!)

In general, implementing what's effectively inherited decoration is
what *most* metaclasses actually get used for, so PEP 422 is a big step
forward in that.

Sketching something to get a feel for the PEP...

    def inheritable(*decos):
        """Wrap a class with inheritable decorators"""
        def decorate(cls):
            cls.__decorators__ = list(decos) + list(cls.__dict__.get('__decorators__', ()))
            for deco in reversed(decos):
                cls = deco(cls)
            return cls
        return decorate

Hm. There are a few interesting consequences of the PEP as written.
In-body decorators affect the __class__ closure (and thus super()), but
out-of-body decorators don't. By me this is a good thing, but it is a
bit of complexity that needs mentioning. Likewise, the need for
inheritable decorators to be idempotent, in case two base classes list
the same decorator.
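A guard along these lines would probably do (rough sketch, untested;
the marker-attribute scheme is just one way to keep the state):

    def run_once(deco):
        """Make a dynamic decorator a no-op the second time it sees a class."""
        marker = '_ran_%s_%d' % (deco.__name__, id(deco))  # hypothetical naming
        def wrapper(cls):
            if marker in cls.__dict__:  # own dict only, so subclasses still get decorated
                return cls
            setattr(cls, marker, True)
            return deco(cls)
        return wrapper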
(For my own attribute/method use cases, I can just have them remove
themselves from the class's __decorators__ upon execution.)

> You complain that metaclasses are hard to compose, and your
> "solution" is to monkeypatch a deliberately undocumented builtin?

To be clear, what I specifically proposed (as I mentioned in an earlier
thread) was simply to patch __build_class__ in order to restore the
missing __metaclass__ hook. (Which, incidentally, would make ALL code
using __metaclass__ cross-version compatible between 2.x and 3.x: a
potentially valuable thing in and of itself!)

As for metaclasses being hard to compose, PEP 422 is definitely a step
in the right direction. (Automatic metaclass combining is about the
only thing that would improve it any further.)

From jsbueno at python.org.br Tue Jun 5 18:00:50 2012
From: jsbueno at python.org.br (Joao S. O. Bueno)
Date: Tue, 5 Jun 2012 13:00:50 -0300
Subject: [Python-Dev] Possible rough edges in Python 3 metaclasses (was Re: Language reference updated for metaclasses)
In-Reply-To: References: <5954463C-4512-4F3B-9D62-61C54367ED59@voidspace.org.uk>
Message-ID: 

On 5 June 2012 09:24, Nick Coghlan wrote:
>
> PEP written and posted: http://www.python.org/dev/peps/pep-0422/
> More toy examples here:
> https://bitbucket.org/ncoghlan/misc/src/default/pep422.py
>
> Yes, it means requiring the use of a specific metaclass in 3.2 (either
> directly or via inheritance), but monkeypatching an undocumented
> builtin is going to pathological lengths just to avoid requiring that
> people explicitly interoperate with your particular metaclass
> mechanisms.

When reading the PEP, I got the impression that having a "__decorate__"
method on "type", which would perform its thing, would not add magic
exceptions and would be more explicit and more flexible than having an
extra step called between metaclass execution and decorator
application. So, I think that settling for having the decorators
applied - as described in the PEP - in a __decorate__ method of the
metaclass would be nicer and cleaner.

js
-><-

> Regards,
> Nick.
>
> --
> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From python at mrabarnett.plus.com Tue Jun 5 18:46:32 2012
From: python at mrabarnett.plus.com (MRAB)
Date: Tue, 05 Jun 2012 17:46:32 +0100
Subject: [Python-Dev] what is happening with the regex module going into Python 3.3?
In-Reply-To: References: <4FCD4C2C.6040901@mrabarnett.plus.com> <4FCD534E.8020305@pearwood.info> <4FCD5F3C.7000105@mrabarnett.plus.com>
Message-ID: <4FCE37E8.10001@mrabarnett.plus.com>

On 05/06/2012 03:40, Terry Reedy wrote:
> On 6/4/2012 9:22 PM, MRAB wrote:
>
>> I'm not planning any further changes to regex. I think it already has
>> enough features...
>
> Do you have any idea where regex + Python stands in regard to Unicode
> TR18 support levels? http://unicode.org/reports/tr18/
> While most of the Tailored Support Level 3 strikes me as out of scope
> for the stdlib, I can imagine getting requests for any other stuff not
> already included.
>
It supports Basic Unicode Support (Level 1), plus default word
boundaries and \X (grapheme cluster).
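For example (assuming the regex module from PyPI is installed):

    import regex  # third-party module

    # \X matches one grapheme cluster, so a base character plus its
    # combining marks comes back as a single match:
    print(regex.findall(r'\X', 'e\u0301x') == ['e\u0301', 'x'])  # True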
From alexander.belopolsky at gmail.com Tue Jun 5 19:21:17 2012
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Tue, 5 Jun 2012 13:21:17 -0400
Subject: [Python-Dev] Issue 2736: datetimes and Unix timestamps
In-Reply-To: References: <20120604115102.7e9b770b@resist.wooz.org>
Message-ID: 

On Tue, Jun 5, 2012 at 9:45 AM, Guido van Rossum wrote:
> TZ-less datetimes aren't going away and have
> plenty of use in contexts where the tz is either universally known or
> irrelevant.

I agree, but in these contexts naive datetime objects almost always
represent local time in some "universally known or irrelevant"
timezone. I rarely see people use naive datetime objects to
represent UTC time and with timezone.utc added to datetime module
already the cost of supplying tzinfo to UTC datetime objects is low.

Based on tracker comments, I believe users ask for the following function:

    def timestamp(self, dst=-1):
        "Return POSIX timestamp as float"
        if self.tzinfo is None:
            return _time.mktime((self.year, self.month, self.day,
                                 self.hour, self.minute, self.second,
                                 -1, -1, dst)) + self.microsecond / 1e6
        else:
            return (self - _EPOCH).total_seconds()

You seem to advocate for

    def utctimestamp(self):
        return (self - _EPOCH).total_seconds()

in addition or instead of timestamp(). In mxDT, utctimestamp() is
called gmticks().

Is this an accurate summary?

From guido at python.org Tue Jun 5 19:41:40 2012
From: guido at python.org (Guido van Rossum)
Date: Tue, 5 Jun 2012 10:41:40 -0700
Subject: [Python-Dev] Issue 2736: datetimes and Unix timestamps
In-Reply-To: References: <20120604115102.7e9b770b@resist.wooz.org>
Message-ID: 

On Tue, Jun 5, 2012 at 10:21 AM, Alexander Belopolsky wrote:
> On Tue, Jun 5, 2012 at 9:45 AM, Guido van Rossum wrote:
>> TZ-less datetimes aren't going away and have
>> plenty of use in contexts where the tz is either universally known or
>> irrelevant.
>
> I agree, but in these contexts naive datetime objects almost always
> represent local time in some "universally known or irrelevant"
> timezone. I rarely see people use naive datetime objects to
> represent UTC time and with timezone.utc added to datetime module
> already the cost of supplying tzinfo to UTC datetime objects is low.

Maybe you need to get out more. :-) This is how datetime is
represented in App Engine's datastore:
https://developers.google.com/appengine/docs/python/datastore/typesandpropertyclasses#DateTimeProperty
(Note: These docs are unclear about whether a tzinfo attribute is
present. The code is clear that it isn't.)

> Based on tracker comments, I believe users ask for the following function:
>
>     def timestamp(self, dst=-1):
>         "Return POSIX timestamp as float"
>         if self.tzinfo is None:
>             return _time.mktime((self.year, self.month, self.day,
>                                  self.hour, self.minute, self.second,
>                                  -1, -1, dst)) + self.microsecond / 1e6
>         else:
>             return (self - _EPOCH).total_seconds()

What do they want to set the dst flag for?

> You seem to advocate for
>
>     def utctimestamp(self):
>         return (self - _EPOCH).total_seconds()

Not literally, because this would crash when self.tzinfo is None. I
think I am advocating for the former but without the dst flag.

> in addition or instead of timestamp(). In mxDT, utctimestamp() is
> called gmticks().
>
> Is this an accurate summary?
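(To spell out "the former but without the dst flag" -- the same sketch
with dst hardwired to -1 so the C library guesses; untested:

    def timestamp(self):
        "Return POSIX timestamp as float"
        if self.tzinfo is None:
            return _time.mktime((self.year, self.month, self.day,
                                 self.hour, self.minute, self.second,
                                 -1, -1, -1)) + self.microsecond / 1e6
        return (self - _EPOCH).total_seconds()

)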
--
--Guido van Rossum (python.org/~guido)

From tjreedy at udel.edu Tue Jun 5 18:42:38 2012
From: tjreedy at udel.edu (Terry Reedy)
Date: Tue, 05 Jun 2012 12:42:38 -0400
Subject: [Python-Dev] [Python-checkins] peps: Add PEP 422: Dynamic Class Decorators
In-Reply-To: References: 
Message-ID: <4FCE36FE.3080704@udel.edu>

On 6/5/2012 8:09 AM, nick.coghlan wrote:

> Add PEP 422: Dynamic Class Decorators

> +Iterating over decorator entries in reverse order
> +-------------------------------------------------
> +
> +This order was chosen to match the layout of lexical decorators when
> +converted to ordinary function calls. Just as the following are equivalent::
> +
> +    @deco2
> +    @deco1
> +    class C:
> +        pass
> +
> +    class C:
> +        pass
> +    C = deco2(deco1(C))
> +
> +So too will the following be roughly equivalent (aside from inheritance)::
> +
> +    class C:
> +        __decorators__ = [deco2, deco1]

I think you should just store the decorators in the correct order of use
    + __decorators__ = [deco1, deco2]
and avoid the nonsense (time-waste) of making an indirect copy via
list_iterator and reversing it each time the attribute is used. If the
list is constructed in reversed order, immediately reverse it.

> +
> +    class C:
> +        pass
> +    C = deco2(deco1(C))

Terry Jan Reedy

From alexander.belopolsky at gmail.com Tue Jun 5 20:14:50 2012
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Tue, 5 Jun 2012 14:14:50 -0400
Subject: [Python-Dev] Issue 2736: datetimes and Unix timestamps
In-Reply-To: References: <20120604115102.7e9b770b@resist.wooz.org>
Message-ID: 

On Tue, Jun 5, 2012 at 1:41 PM, Guido van Rossum wrote:
> What do they want to set the dst flag for?

To shift the responsibility to deal with the DST ambiguity to the
user. This is what POSIX mktime does, with varying degrees of success.

> I think I am advocating for the former but without the dst flag.

The cost of a dst flag is low and most users will ignore it anyways,
but by providing it we will at least acknowledge the issue. I don't
care much one way or another.

The remaining issue is the return type. Most of the use cases that
have been brought up cast the timestamp to int as soon as it is
computed. I recall a recent discussion about high-precision
timestamps, but don't recall the conclusion. I guess we should offer
timestamp() returning float and let those who care about range or
precision write their own solution.

From guido at python.org Tue Jun 5 20:19:53 2012
From: guido at python.org (Guido van Rossum)
Date: Tue, 5 Jun 2012 11:19:53 -0700
Subject: [Python-Dev] Issue 2736: datetimes and Unix timestamps
In-Reply-To: References: <20120604115102.7e9b770b@resist.wooz.org>
Message-ID: 

On Tue, Jun 5, 2012 at 11:14 AM, Alexander Belopolsky wrote:
> On Tue, Jun 5, 2012 at 1:41 PM, Guido van Rossum wrote:
>> What do they want to set the dst flag for?
>
> To shift the responsibility to deal with the DST ambiguity to the
> user. This is what POSIX mktime does, with varying degrees of success.
>
>> I think I am advocating for the former but without the dst flag.
>
> The cost of a dst flag is low and most users will ignore it anyways,
> but by providing it we will at least acknowledge the issue. I don't
> care much one way or another.

Me neither, TBH. Although if we ever get that "local time" tzinfo
object, we may regret it. So I propose to launch without it and see if
people object.
There simply isn't a way to roundtrip for times that fall in the
DST->std transition, and I doubt that many users will want to think
about it (they'd have to store an extra bit with all their datetime
objects -- it would be better to get them to use tzinfo objects
instead...).

> The remaining issue is the return type. Most of the use cases that
> have been brought up cast the timestamp to int as soon as it is
> computed. I recall a recent discussion about high-precision
> timestamps, but don't recall the conclusion. I guess we should offer
> timestamp() returning float and let those who care about range or
> precision write their own solution.

Python uses floats pretty much everywhere for POSIX timestamps. So
let it return a float.

--
--Guido van Rossum (python.org/~guido)

From alexander.belopolsky at gmail.com Tue Jun 5 20:21:45 2012
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Tue, 5 Jun 2012 14:21:45 -0400
Subject: [Python-Dev] Issue 2736: datetimes and Unix timestamps
In-Reply-To: References: <20120604115102.7e9b770b@resist.wooz.org>
Message-ID: 

On Tue, Jun 5, 2012 at 1:41 PM, Guido van Rossum wrote:
> Maybe you need to get out more. :-) This is how datetime is
> represented in App Engine's datastore:
> https://developers.google.com/appengine/docs/python/datastore/typesandpropertyclasses#DateTimeProperty
> (Note: These docs are unclear about whether a tzinfo attribute is
> present. The code is clear that it isn't.)

From the docs: "Some libraries use the TZ environment variable to
control the time zone applied to date-time values. App Engine sets
this environment variable to 'UTC'." This means that App Engine's
local timezone is UTC and strictly speaking this is not a
counterexample to what I said. Proposed mktime() based code will
still work in this case.

From guido at python.org Tue Jun 5 20:24:16 2012
From: guido at python.org (Guido van Rossum)
Date: Tue, 5 Jun 2012 11:24:16 -0700
Subject: [Python-Dev] Issue 2736: datetimes and Unix timestamps
In-Reply-To: References: <20120604115102.7e9b770b@resist.wooz.org>
Message-ID: 

On Tue, Jun 5, 2012 at 11:21 AM, Alexander Belopolsky wrote:
> On Tue, Jun 5, 2012 at 1:41 PM, Guido van Rossum wrote:
>> Maybe you need to get out more. :-) This is how datetime is
>> represented in App Engine's datastore:
>> https://developers.google.com/appengine/docs/python/datastore/typesandpropertyclasses#DateTimeProperty
>> (Note: These docs are unclear about whether a tzinfo attribute is
>> present. The code is clear that it isn't.)
>
> From the docs: "Some libraries use the TZ environment variable to
> control the time zone applied to date-time values. App Engine sets
> this environment variable to 'UTC'." This means that App Engine's
> local timezone is UTC and strictly speaking this is not a
> counterexample to what I said. Proposed mktime() based code will
> still work in this case.

Fair enough. Perhaps unrelated: do you think a tzinfo representing
"local time" can be built?
-- --Guido van Rossum (python.org/~guido) From pje at telecommunity.com Tue Jun 5 20:26:10 2012 From: pje at telecommunity.com (PJ Eby) Date: Tue, 5 Jun 2012 14:26:10 -0400 Subject: [Python-Dev] [Python-checkins] peps: Add PEP 422: Dynamic Class Decorators In-Reply-To: <4FCE36FE.3080704@udel.edu> References: <4FCE36FE.3080704@udel.edu> Message-ID: On Tue, Jun 5, 2012 at 12:42 PM, Terry Reedy wrote: > On 6/5/2012 8:09 AM, nick.coghlan wrote: > > Add PEP 422: Dynamic Class Decorators >> > > +Iterating over decorator entries in reverse order >> +-----------------------------**-------------------- >> + >> +This order was chosen to match the layout of lexical decorators when >> +converted to ordinary function calls. Just as the following are >> equivalent:: >> + >> + @deco2 >> + @deco1 >> + class C: >> + pass >> + >> + class C: >> + pass >> + C = deco2(deco1(C)) >> + >> +So too will the following be roughly equivalent (aside from >> inheritance):: >> + >> + class C: >> + __decorators__ = [deco2, deco1] >> > > I think you should just store the decorators in the correct order of use > + __decorators__ = [deco1, deco2] > and avoid the nonsense (time-waste) of making an indirect copy via > list_iterator and reversing it each time the attribute is used. > It's for symmetry and straightforward translation with stacked decorators, i.e. between: @deco1 @deco2 [declaration] and __decorators__ = [deco1, deco2] Doing it the other way now means a different order for people to remember; there should be One Obvious Order for decorators, and the one we have now is it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexander.belopolsky at gmail.com Tue Jun 5 20:29:59 2012 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Tue, 5 Jun 2012 14:29:59 -0400 Subject: [Python-Dev] TZ-aware local time Message-ID: Changing subject to reflect a change of topic. On Tue, Jun 5, 2012 at 2:19 PM, Guido van Rossum wrote: > .. Although if we ever get that "local time" tzinfo > object, we may regret it. So I propose to launch without it and see if > people object. There simply isn't a way to roundtrip for times that > fall in the DST->std transition, and I doubt that many users will want > to think about it (they'd have to store an extra bit with all their > datetime objects -- it would be better to get them to use tzinfo > objects instead...). I've also been arguing against "local time" tzinfo and my proposal was to create a function that would produce TZ-aware local time with tzinfo bound to a fixed-offset timezone. In New York that would mean EDT in the summer and EST in the winter. This is what Unix date utility does. See http://bugs.python.org/issue9527 . From solipsis at pitrou.net Tue Jun 5 20:26:55 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 05 Jun 2012 20:26:55 +0200 Subject: [Python-Dev] Issue 2736: datetimes and Unix timestamps In-Reply-To: References: <20120604115102.7e9b770b@resist.wooz.org> Message-ID: Le 05/06/2012 19:21, Alexander Belopolsky a ?crit : > with timezone.utc added to datetime module > already the cost of supplying tzinfo to UTC datetime objects is low. This is nice when your datetime objects are freshly created. It is not so nice when some of them already exist e.g. in a database (using an ORM layer). Mixing naive and aware datetimes is currently a catastrophe, since even basic operations such as equality comparison fail with a TypeError (it must be pretty much the only type in the stdlib with such poisonous behaviour). 
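Concretely (Python 3.2):

    >>> from datetime import datetime, timezone
    >>> datetime(2012, 6, 5) == datetime(2012, 6, 5, tzinfo=timezone.utc)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: can't compare offset-naive and offset-aware datetimes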
Regards Antoine. From alexandre.zani at gmail.com Tue Jun 5 20:37:42 2012 From: alexandre.zani at gmail.com (Alexandre Zani) Date: Tue, 5 Jun 2012 11:37:42 -0700 Subject: [Python-Dev] Issue 2736: datetimes and Unix timestamps In-Reply-To: References: <20120604115102.7e9b770b@resist.wooz.org> Message-ID: On Tue, Jun 5, 2012 at 11:26 AM, Antoine Pitrou wrote: > Le 05/06/2012 19:21, Alexander Belopolsky a ?crit : > >> with timezone.utc added to datetime module >> already the cost of supplying tzinfo to UTC datetime objects is low. > > > This is nice when your datetime objects are freshly created. It is not so > nice when some of them already exist e.g. in a database (using an ORM > layer). Mixing naive and aware datetimes is currently a catastrophe, since > even basic operations such as equality comparison fail with a TypeError (it > must be pretty much the only type in the stdlib with such poisonous > behaviour). Comparing aware and naive datetime objects doesn't make much sense but it's an easy mistake to make. I would say the TypeError is a sensible way to warn you while simply returning False could lead to much confusion. > > Regards > > Antoine. > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/alexandre.zani%40gmail.com From solipsis at pitrou.net Tue Jun 5 21:01:44 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 05 Jun 2012 21:01:44 +0200 Subject: [Python-Dev] Issue 2736: datetimes and Unix timestamps In-Reply-To: References: <20120604115102.7e9b770b@resist.wooz.org> Message-ID: Le 05/06/2012 20:37, Alexandre Zani a ?crit : >> >> This is nice when your datetime objects are freshly created. It is not so >> nice when some of them already exist e.g. in a database (using an ORM >> layer). Mixing naive and aware datetimes is currently a catastrophe, since >> even basic operations such as equality comparison fail with a TypeError (it >> must be pretty much the only type in the stdlib with such poisonous >> behaviour). > > Comparing aware and naive datetime objects doesn't make much sense but > it's an easy mistake to make. I would say the TypeError is a sensible > way to warn you while simply returning False could lead to much > confusion. You could say the same about equally "confusing" results, yet equality never raises TypeError (except between datetime instances): >>> () == [] False Raising an exception has very serious implications, such as making it impossible to put these objects in the same dictionary. Regards Antoine. From alexander.belopolsky at gmail.com Tue Jun 5 21:15:35 2012 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Tue, 5 Jun 2012 15:15:35 -0400 Subject: [Python-Dev] Issue 2736: datetimes and Unix timestamps In-Reply-To: References: <20120604115102.7e9b770b@resist.wooz.org> Message-ID: On Tue, Jun 5, 2012 at 3:01 PM, Antoine Pitrou wrote: > You could say the same about equally "confusing" results, yet equality never > raises TypeError (except between datetime instances): > >>>> () == [] > False And even closer to home, >>> date(2012,6,1) == datetime(2012,6,1) False I agree, equality comparison should not raise an exception. 
From guido at python.org Tue Jun 5 21:17:58 2012 From: guido at python.org (Guido van Rossum) Date: Tue, 5 Jun 2012 12:17:58 -0700 Subject: [Python-Dev] Issue 2736: datetimes and Unix timestamps In-Reply-To: References: <20120604115102.7e9b770b@resist.wooz.org> Message-ID: On Tue, Jun 5, 2012 at 12:15 PM, Alexander Belopolsky wrote: > On Tue, Jun 5, 2012 at 3:01 PM, Antoine Pitrou wrote: >> You could say the same about equally "confusing" results, yet equality never >> raises TypeError (except between datetime instances): >> >>>>> () == [] >> False > > And even closer to home, > >>>> date(2012,6,1) == datetime(2012,6,1) > False > > I agree, equality comparison should not raise an exception. Let's make it so. -- --Guido van Rossum (python.org/~guido) From tarek.sheasha at gmail.com Tue Jun 5 22:24:56 2012 From: tarek.sheasha at gmail.com (Tarek Sheasha) Date: Tue, 5 Jun 2012 22:24:56 +0200 Subject: [Python-Dev] Cross-compiling python and PyQt Message-ID: Hello, I have been working for a long time on cross-compiling python for android I have used projects like: http://code.google.com/p/android-python27/ I am stuck in a certain area, when I am cross-compiling python I would like to install SIP and PyQt4 on the cross-compiled python, I have tried all the possible ways I could think of but have had no success. So if you can help me by giving me some guidelines on how to install third-party software for cross-compiled python for android I would be very helpful. Thanks a lot -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexandre.zani at gmail.com Tue Jun 5 23:05:49 2012 From: alexandre.zani at gmail.com (Alexandre Zani) Date: Tue, 5 Jun 2012 14:05:49 -0700 Subject: [Python-Dev] Issue 2736: datetimes and Unix timestamps In-Reply-To: References: <20120604115102.7e9b770b@resist.wooz.org> Message-ID: Good point. On Tue, Jun 5, 2012 at 12:17 PM, Guido van Rossum wrote: > On Tue, Jun 5, 2012 at 12:15 PM, Alexander Belopolsky > wrote: >> On Tue, Jun 5, 2012 at 3:01 PM, Antoine Pitrou wrote: >>> You could say the same about equally "confusing" results, yet equality never >>> raises TypeError (except between datetime instances): >>> >>>>>> () == [] >>> False >> >> And even closer to home, >> >>>>> date(2012,6,1) == datetime(2012,6,1) >> False >> >> I agree, equality comparison should not raise an exception. > > Let's make it so. > > -- > --Guido van Rossum (python.org/~guido) > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/alexandre.zani%40gmail.com From ncoghlan at gmail.com Tue Jun 5 23:24:18 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 6 Jun 2012 07:24:18 +1000 Subject: [Python-Dev] Issue 2736: datetimes and Unix timestamps In-Reply-To: References: <20120604115102.7e9b770b@resist.wooz.org> Message-ID: Just to correct a misapprehension seen in this thread: there are still plenty of closed systems on the planet where every machine is set to UTC so that "local" time is UTC regardless of where the machine is physically located. You just won't hear about many of those systems if you're not working on or using them directly. Cheers, Nick. -- Sent from my phone, thus the relative brevity :) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From zuo at chopin.edu.pl Tue Jun 5 23:30:38 2012
From: zuo at chopin.edu.pl (Jan Kaliszewski)
Date: Tue, 5 Jun 2012 23:30:38 +0200
Subject: [Python-Dev] [Python-checkins] peps: Add PEP 422: Dynamic Class Decorators
In-Reply-To: <4FCE36FE.3080704@udel.edu>
References: <4FCE36FE.3080704@udel.edu>
Message-ID: <20120605213038.GA1797@chopin.edu.pl>

Terry Reedy dixit (2012-06-05, 12:42):
> On 6/5/2012 8:09 AM, nick.coghlan wrote:
>
> > Add PEP 422: Dynamic Class Decorators
[snip]
> >+So too will the following be roughly equivalent (aside from inheritance)::
> >+
> >+    class C:
> >+        __decorators__ = [deco2, deco1]
>
> I think you should just store the decorators in the correct order of use
>     + __decorators__ = [deco1, deco2]
> and avoid the nonsense (time-waste) of making an indirect copy via
> list_iterator and reversing it each time the attribute is used.

+1. For @-syntax the inverted order seems somewhat natural. But I feel
the list order should not mimic that...

***

Another idea: what about...

    @@dynamic_deco2
    @@dynamic_deco1
    class C:
        pass

...being an equivalent of:

    class C:
        __decorators__ = [dynamic_deco1, dynamic_deco2]

...as well as of:

    @@dynamic_deco2
    class C:
        __decorators__ = [dynamic_deco1]

?

Cheers.
*j

From tjreedy at udel.edu Tue Jun 5 23:31:15 2012
From: tjreedy at udel.edu (Terry Reedy)
Date: Tue, 05 Jun 2012 17:31:15 -0400
Subject: [Python-Dev] [Python-checkins] peps: Add PEP 422: Dynamic Class Decorators
In-Reply-To: References: <4FCE36FE.3080704@udel.edu>
Message-ID: 

On 6/5/2012 2:26 PM, PJ Eby wrote:
> On Tue, Jun 5, 2012 at 12:42 PM, Terry Reedy wrote:
>> I think you should just store the decorators in the correct order of use
>>     + __decorators__ = [deco1, deco2]
>> and avoid the nonsense (time-waste) of making an indirect copy via
>> list_iterator and reversing it each time the attribute is used.
>
> It's for symmetry and straightforward translation with stacked
> decorators, i.e. between:
>
>     @deco1
>     @deco2
>     [declaration]
>
> and __decorators__ = [deco1, deco2]
>
> Doing it the other way now means a different order for people to
> remember; there should be One Obvious Order for decorators, and the one
> we have now is it.

You and I have different ideas of 'obvious' in this context. But since
you will use this and I probably will not, let your idea rule.

--
Terry Jan Reedy

From tjreedy at udel.edu Tue Jun 5 23:35:16 2012
From: tjreedy at udel.edu (Terry Reedy)
Date: Tue, 05 Jun 2012 17:35:16 -0400
Subject: [Python-Dev] Cross-compiling python and PyQt
In-Reply-To: References: 
Message-ID: 

On 6/5/2012 4:24 PM, Tarek Sheasha wrote:
> Hello,
> I have been working for a long time on cross-compiling python for
> android. I have used projects like:
> http://code.google.com/p/android-python27/
>
> I am stuck in a certain area: when I am cross-compiling python I would
> like to install SIP and PyQt4 on the cross-compiled python. I have tried
> all the possible ways I could think of but have had no success. So if
> you can help me by giving me some guidelines on how to install
> third-party software for cross-compiled python for android I would be
> very grateful.

This is off-topic for pydev list (which is for development *of* Python
rather than development *with*). I suggest python-list (post in text
only, please) or other lists for better help.
-- Terry Jan Reedy From tjreedy at udel.edu Tue Jun 5 23:48:28 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Tue, 05 Jun 2012 17:48:28 -0400 Subject: [Python-Dev] Issue 2736: datetimes and Unix timestamps In-Reply-To: References: <20120604115102.7e9b770b@resist.wooz.org> Message-ID: On 6/5/2012 3:17 PM, Guido van Rossum wrote: > On Tue, Jun 5, 2012 at 12:15 PM, Alexander Belopolsky > wrote: >> On Tue, Jun 5, 2012 at 3:01 PM, Antoine Pitrou wrote: >>> You could say the same about equally "confusing" results, yet equality never >>> raises TypeError (except between datetime instances): >>> >>>>>> () == [] >>> False >> >> And even closer to home, >> >>>>> date(2012,6,1) == datetime(2012,6,1) >> False >> >> I agree, equality comparison should not raise an exception. > > Let's make it so. 3.3 enhancement or backported bugfix? The doc strongly suggests that rich comparisons should return *something* and by implication, not raise. In particular, return NotImplemented instead of raising a TypeError for mis-matched arguments. "A rich comparison method may return the singleton NotImplemented if it does not implement the operation for a given pair of arguments. By convention, False and True are returned for a successful comparison. However, these methods can return any value," -- Terry Jan Reedy From guido at python.org Wed Jun 6 00:07:21 2012 From: guido at python.org (Guido van Rossum) Date: Tue, 5 Jun 2012 15:07:21 -0700 Subject: [Python-Dev] TZ-aware local time In-Reply-To: References: Message-ID: On Tue, Jun 5, 2012 at 11:29 AM, Alexander Belopolsky wrote: > Changing subject to reflect a change of topic. > > On Tue, Jun 5, 2012 at 2:19 PM, Guido van Rossum wrote: >> .. Although if we ever get that "local time" tzinfo >> object, we may regret it. So I propose to launch without it and see if >> people object. There simply isn't a way to roundtrip for times that >> fall in the DST->std transition, and I doubt that many users will want >> to think about it (they'd have to store an extra bit with all their >> datetime objects -- it would be better to get them to use tzinfo >> objects instead...). > > I've also been arguing against "local time" tzinfo Why? I don't see your argumentation against such a tzinfo in the bug (but I may have missed it, it's a lot of comments). > and my proposal was > to create a function that would produce TZ-aware local time with > tzinfo bound to a fixed-offset timezone. ?In New York that would mean > EDT in the summer and EST in the winter. ? This is what Unix date > utility does. > > See http://bugs.python.org/issue9527 . I don't like that function. It returns two different timezone objects depending on whether DST is in effect. Also it adds a new method to the datetime class, which I think is unnecessary here. I understand returning two different timezone objects appear simpler, but it is also more limiting: there are things you can do with timezone objects that you can't do with these fixed-offset timezone objects, such as adding a timedelta that produces a result in a different DST state. E.g. adding 6 months to June 1st, 2012, PDT, would return Dec 1st, 2012, PDT, which is not local time here. Reading the requirements for a timezone implementation and the time.localtime() function, it would seem that a timezone object representing "local time" can certainly be constructed, as long as the time module uses or emulates the Unix/C behavior. On platforms where it doesn't, it is okay to guess based on the limited information it does have. 
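A bare-bones sketch of what I mean, along the lines of the example in
the datetime docs (untested, and it deliberately ignores the
ambiguous-hour problem discussed elsewhere in this thread):

    import time as _time
    from datetime import timedelta, tzinfo

    ZERO = timedelta(0)

    class LocalTimezone(tzinfo):
        # A tzinfo that defers to the C library's local time rules.

        def _isdst(self, dt):
            # Let mktime() normalize the naive wall time and report
            # whether DST was in effect for it.
            stamp = _time.mktime((dt.year, dt.month, dt.day,
                                  dt.hour, dt.minute, dt.second,
                                  -1, -1, -1))
            return _time.localtime(stamp).tm_isdst > 0

        def utcoffset(self, dt):
            # time.altzone is only meaningful when time.daylight is nonzero
            secs = -_time.altzone if self._isdst(dt) else -_time.timezone
            return timedelta(seconds=secs)

        def dst(self, dt):
            if self._isdst(dt):
                return timedelta(seconds=_time.timezone - _time.altzone)
            return ZERO

        def tzname(self, dt):
            return _time.tzname[self._isdst(dt)]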
But it looks like it should work on Unix/Linux, Windows, and Mac. The crux is that the localtime() function "knows" the rules for local DST in the past and future. It is true that it may not always be right (e.g. if politics change the rules for the future, or if the algorithm used is simplified, making it incorrect for times far in the past), but it would still be consistent with the time module's notion of local time, and we can consider this a bug in the OS (or libc) rather than in Python. If in the past I was ever opposed to that (as Raymond assumes in a comment in that issue) I have changed my mind, but I think it's more likely that we put off the creation of concrete timezone implementations until later and then got distracted -- after all we didn't even supply a UTC timezone, which surely is a lot simpler than the local timezone. -- --Guido van Rossum (python.org/~guido) From alexander.belopolsky at gmail.com Wed Jun 6 00:09:20 2012 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Tue, 5 Jun 2012 18:09:20 -0400 Subject: [Python-Dev] Issue 2736: datetimes and Unix timestamps In-Reply-To: References: <20120604115102.7e9b770b@resist.wooz.org> Message-ID: On Tue, Jun 5, 2012 at 5:48 PM, Terry Reedy wrote: > 3.3 enhancement or backported bugfix? Please move this discussion to the tracker: http://bugs.python.org/issue15006 From alexander.belopolsky at gmail.com Wed Jun 6 00:33:28 2012 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Tue, 5 Jun 2012 18:33:28 -0400 Subject: [Python-Dev] TZ-aware local time In-Reply-To: References: Message-ID: On Tue, Jun 5, 2012 at 6:07 PM, Guido van Rossum wrote: >> I've also been arguing against "local time" tzinfo > > Why? I don't see your argumentation against such a tzinfo in the bug See http://bugs.python.org/issue9063 . The problem is again the DST ambiguity. One day a year, datetime(y, m, d, 1, 30, tzinfo=Local) represents two different times and another day it represents no valid time. Many applications can ignore this problem but stdlib should not. The documentation example (fixed in issue 9063) addresses the ambiguity by defaulting to standard time, but it does this at a cost of having no way to spell "the other hour." From guido at python.org Wed Jun 6 00:49:41 2012 From: guido at python.org (Guido van Rossum) Date: Tue, 5 Jun 2012 15:49:41 -0700 Subject: [Python-Dev] TZ-aware local time In-Reply-To: References: Message-ID: On Tue, Jun 5, 2012 at 3:33 PM, Alexander Belopolsky wrote: > On Tue, Jun 5, 2012 at 6:07 PM, Guido van Rossum wrote: >>> I've also been arguing against "local time" tzinfo >> >> Why? I don't see your argumentation against such a tzinfo in the bug > > See http://bugs.python.org/issue9063 . > > The problem is again the DST ambiguity. ?One day a year, datetime(y, > m, d, 1, 30, tzinfo=Local) represents two different times and another > day it represents no valid time. ?Many applications can ignore this > problem but stdlib should not. This seems the blocking issue. We disagree on whether the stdlib should not offer this functionality because it cannot do so perfectly. > The documentation example (fixed in issue 9063) addresses the > ambiguity by defaulting to standard time, but it does this at a cost > of having no way to spell "the other hour." Again, this is a problem of DST and timezones, and not something you can ever "fix" (even if DST would be abandoned worldwide tomorrow, it would still remain a problem for historic times). 
We should just acknowledge the problem, implement the best we can do, document the limitation, and move on. Apps that care should just use some other way to represent local time. -- --Guido van Rossum (python.org/~guido) From pje at telecommunity.com Wed Jun 6 01:06:38 2012 From: pje at telecommunity.com (PJ Eby) Date: Tue, 5 Jun 2012 19:06:38 -0400 Subject: [Python-Dev] [Python-checkins] peps: Add PEP 422: Dynamic Class Decorators In-Reply-To: References: <4FCE36FE.3080704@udel.edu> Message-ID: On Tue, Jun 5, 2012 at 5:31 PM, Terry Reedy wrote: > On 6/5/2012 2:26 PM, PJ Eby wrote: > >> On Tue, Jun 5, 2012 at 12:42 PM, Terry Reedy > > wrote: >> > > I think you should just store the decorators in the correct order of >> use >> + __decorators__ = [deco1, deco2] >> and avoid the nonsense (time-waste) of making an indirect copy via >> list_iterator and reversing it each time the attribute is used. >> >> >> It's for symmetry and straightforward translation with stacked >> decorators, i.e. between: >> >> @deco1 >> @deco2 >> [declaration] >> >> and __decorators__ = [deco1, deco2] >> >> Doing it the other way now means a different order for people to >> remember; there should be One Obvious Order for decorators, and the one >> we have now is it. >> > > You and I have different ideas of 'obvious' in this context. To be clearer: I've written other APIs which take multiple decorators, or things like decorators that just happen to be a pipeline of functions to be applied, and every time the question of what order to put the API in, I always put them in this order because then in order to remember what the order was, I just have to think of decorators. This is easier than trying to remember which APIs use decorator order, and which ones use reverse decorator order. So, even though in itself there is no good reason for one order over the other, consistency wins because less thinking. At the least, if they're not going to be in decorator order, the member shouldn't be called "__decorators__". ;-) > But since you will use this and and me probably not, let your idea rule. For my motivating use case, I actually don't care about the order within a class very much. Nick's proposal will actually be the reverse of the application order used by my in-class decorators, but I can easily work around that. -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg.ewing at canterbury.ac.nz Wed Jun 6 01:13:33 2012 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Wed, 06 Jun 2012 11:13:33 +1200 Subject: [Python-Dev] [Python-checkins] peps: Add PEP 422: Dynamic Class Decorators In-Reply-To: References: <4FCE36FE.3080704@udel.edu> Message-ID: <4FCE929D.20802@canterbury.ac.nz> PJ Eby wrote: > At the least, if they're > not going to be in decorator order, the member shouldn't be called > "__decorators__". ;-) Obviously it should be called __srotaroced__. -- Greg From ncoghlan at gmail.com Wed Jun 6 01:38:39 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 6 Jun 2012 09:38:39 +1000 Subject: [Python-Dev] [Python-checkins] peps: Add PEP 422: Dynamic Class Decorators In-Reply-To: References: <4FCE36FE.3080704@udel.edu> Message-ID: On Wed, Jun 6, 2012 at 9:06 AM, PJ Eby wrote: > > > On Tue, Jun 5, 2012 at 5:31 PM, Terry Reedy wrote: >> >> On 6/5/2012 2:26 PM, PJ Eby wrote: >>> It's for symmetry and straightforward translation with stacked >>> decorators, i.e. 
>>> between:
>>>
>>>     @deco1
>>>     @deco2
>>>     [declaration]
>>>
>>> and __decorators__ = [deco1, deco2]
>>>
>>> Doing it the other way now means a different order for people to
>>> remember; there should be One Obvious Order for decorators, and the
>>> one we have now is it.
>>
>> You and I have different ideas of 'obvious' in this context.
>
> To be clearer: I've written other APIs which take multiple decorators,
> or things like decorators that just happen to be a pipeline of
> functions to be applied, and every time the question of what order to
> put the API in, I always put them in this order because then in order
> to remember what the order was, I just have to think of decorators.
> This is easier than trying to remember which APIs use decorator order,
> and which ones use reverse decorator order.
>
> So, even though in itself there is no good reason for one order over
> the other, consistency wins because less thinking. At the least, if
> they're not going to be in decorator order, the member shouldn't be
> called "__decorators__". ;-)

Yeah, I can actually make decent arguments in favour of either order,
but it was specifically "same order as lexical decorators" that tipped
the balance in favour of the approach I wrote up in the PEP.

It's also more consistent given how the base classes are walked. While
I'm not proposing to calculate it this way, you can look at the scheme
in the PEP as:

    # Walk the MRO to build a complete decorator list
    decorators = []
    for entry in cls.mro():
        decorators.extend(entry.__dict__.get("__decorators__", ()))
    # Apply the decorators in "Last In, First Out" order, just like
    # unwinding a chain of super() calls
    for deco in reversed(decorators):
        cls = deco(cls)

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From alexander.belopolsky at gmail.com Wed Jun 6 01:41:28 2012
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Tue, 5 Jun 2012 19:41:28 -0400
Subject: [Python-Dev] TZ-aware local time
In-Reply-To: References: 
Message-ID: 

On Tue, Jun 5, 2012 at 6:49 PM, Guido van Rossum wrote:
>> The problem is again the DST ambiguity. One day a year, datetime(y,
>> m, d, 1, 30, tzinfo=Local) represents two different times and another
>> day it represents no valid time. Many applications can ignore this
>> problem but stdlib should not.
>
> This seems the blocking issue. We disagree on whether the stdlib
> should not offer this functionality because it cannot do so perfectly.

I would put it differently. There are two features missing from datetime:

1. We cannot create TZ-aware local time datetime objects from a
timestamp. In other words, if I want to implement the UNIX date
utility:

    $ date
    Tue Jun 5 18:56:50 EDT 2012

I have to use the lower level time module.
This does not preclude implementing LocalTimezone, but we have to decide what to do if the user specifies an ambiguous or invalid naive time.  My preference would be to default to standard time for the ambiguous hour and raise a ValueError for the invalid hour.

>> The documentation example (fixed in issue 9063) addresses the
>> ambiguity by defaulting to standard time, but it does this at a cost
>> of having no way to spell "the other hour."
>
> Again, this is a problem of DST and timezones, and not something you
> can ever "fix" ...

POSIX has the "fix."  It is not perfect if the DST rules change historically, but it does let you round-trip between a timestamp and broken-down local time.  The fix is the DST flag.  For most values stored in struct tm, the tm_isdst flag can be divined from the time fields, but for the ambiguous hour this flag is necessary to read the actual time.  To use a mathematical analogy, tm_isdst specifies the branch of a multi-valued function.

I don't advocate adding an isdst flag to datetime.  This solution has its own limitations and, as I mentioned above, modern systems extend struct tm further, adding non-standard tm_zone and tm_offset fields.  What I suggest is that we skip the isdst kludge and provide a way to get the system-supplied tm_zone and tm_offset into an aware datetime object.  This will not solve the alarm clock problem, but it will solve the "what time is it now" and "what time was it when I created this file" problems.

From ncoghlan at gmail.com  Wed Jun  6 02:11:42 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 6 Jun 2012 10:11:42 +1000
Subject: [Python-Dev] Possible rough edges in Python 3 metaclasses (was Re: Language reference updated for metaclasses)
In-Reply-To: 
References: 
Message-ID: 

On Wed, Jun 6, 2012 at 1:28 AM, PJ Eby wrote:
> IOW, my motivation for saying, "hey, can't I just use this nice hook here"
> was to avoid asking for a *new* feature, if there weren't enough other
> people interested in a decoration protocol of this sort.
>
> That is, I was trying to NOT make anybody do a bunch of work on my behalf.
> (Clearly, I wasn't successful in the attempt, but I should at least get
> credit for trying.  ;-)

OK, I can accept that. I mainly just wanted to head off the idea of overloading __build_class__ *anyway*, even if you weren't successful in getting agreement on bringing back __metaclass__ support, or a sufficiently powerful replacement mechanism.

The idea of making __build_class__ itself public *has* been discussed, and the outcome of that discussion was the creation of types.new_class() as a way to programmatically define classes that respects the new __prepare__() hook.

> In general, implementing what's effectively inherited decoration is what
> *most* metaclasses actually get used for, so PEP 422 is a big step forward
> in that.

Yeah, that's what I realised (and will try to explain better in the next version of the PEP, based on my reply to Xavier).

> Sketching something to get a feel for the PEP...
>
> def inheritable(*decos):
>     """Wrap a class with inheritable decorators"""
>     def decorate(cls):
>         cls.__decorators__ = list(decos) + list(cls.__dict__.get('__decorators__', ()))
>         for deco in reversed(decos):
>             cls = deco(cls)
>         return cls
>     return decorate

Yep, that should work.

> Hm.  There are a few interesting consequences of the PEP as written.
> In-body decorators affect the __class__ closure (and thus super()), but
> out-of-body decorators don't.  By me this is a good thing, but it is a bit
> of complexity that needs mentioning.
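Before replying to that last point, here is a rough usage sketch of the quoted `inheritable` helper (the `registry` list and `register` decorator are invented purely for illustration, and the re-application to subclasses relies on the PEP 422 semantics, which don't exist yet):

    registry = []

    def register(cls):
        # Toy decorator: record every class it is applied to
        registry.append(cls)
        return cls

    @inheritable(register)
    class Base:
        pass

    assert registry == [Base]
    # Under PEP 422, "class Derived(Base): pass" would append Derived as
    # well, since Derived inherits Base.__decorators__ and the class
    # machinery would rerun the listed decorators for the subclass.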
It's also worth highlighting this as a limitation of the currently supported metaclass based approach. When a metaclass runs, __class__ hasn't been filled in yet, so any attempts to call methods that use it (including via zero-argument super()) will fail. That's why my _register example at [1] ended up relying on the default argument hack: using __class__ (which is what I tried first without thinking it through) actually fails, complaining that the cell hasn't been populated.

Under PEP 422, the default argument hack wouldn't be necessary, you could just write it as:

    def _register(cls):
        __class__._registry.append(cls)
        return cls

[1] https://bitbucket.org/ncoghlan/misc/src/default/pep422.py

> Likewise, the need for inheritable
> decorators to be idempotent, in case two base classes list the same
> decorator.

Yeah, I'll add a section on "well-behaved" dynamic decorators, which need to be aware that they may run multiple times on a single class. If they're not naturally idempotent, they may need additional code to be made so.

>  (For my own attribute/method use cases, I can just have them
> remove themselves from the class's __decorators__ upon execution.)

Indeed, although that's probably unique to the approach of programmatically modifying __decorators__. Normally, if you don't want a decorator to be inherited and automatically applied to subclasses you would just use an ordinary lexical decorator.

>> You complain that metaclasses are hard to compose, and your
>> "solution" is to monkeypatch a deliberately undocumented builtin?
>
> To be clear, what I specifically proposed (as I mentioned in an earlier
> thread) was simply to patch __build_class__ in order to restore the missing
> __metaclass__ hook.  (Which, incidentally, would make ALL code using
> __metaclass__ cross-version compatible between 2.x and 3.x: a potentially
> valuable thing in and of itself!)

I did briefly consider proposing that, but then I had the dynamic decorators idea, which I like a *lot* more (since it also simplifies other use cases that currently require a metaclass when that isn't really what you want).

> (Automatic metaclass combining is about the only thing
> that would improve it any further.)

Automatic metaclass derivation only works in practice if the metaclasses are all written to support cooperative multiple inheritance though, which is a fairly large "if" (albeit far more likely than in the general inheritance case, since the signatures of the methods involved in type construction are significantly more constrained than those involved in instantiating arbitrary objects).

Cheers,
Nick.

--
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ncoghlan at gmail.com  Wed Jun  6 02:14:44 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 6 Jun 2012 10:14:44 +1000
Subject: [Python-Dev] Possible rough edges in Python 3 metaclasses (was Re: Language reference updated for metaclasses)
In-Reply-To: 
References: <5954463C-4512-4F3B-9D62-61C54367ED59@voidspace.org.uk>
Message-ID: 
On Wed, Jun 6, 2012 at 2:00 AM, Joao S. O. Bueno wrote:
> On 5 June 2012 09:24, Nick Coghlan wrote:
>>
>> PEP written and posted: http://www.python.org/dev/peps/pep-0422/
>> More toy examples here:
>> https://bitbucket.org/ncoghlan/misc/src/default/pep422.py
>>
>> Yes, it means requiring the use of a specific metaclass in 3.2 (either
>> directly or via inheritance), but monkeypatching an undocumented
>> builtin is going to pathological lengths just to avoid requiring that
>> people explicitly interoperate with your particular metaclass
>> mechanisms.
>
> When reading the PEP, I got the impression that having a
> "__decorate__" method on "type", which would perform its thing, would
> not add magic exceptions, and would be more explicit and more flexible
> than having an extra step to be called between the metaclass execution
> and decorator applying.
>
> So, I think that settling for having the decorators applied - as
> described in the PEP - in a __decorate__ method of the metaclass would
> be nicer and cleaner.

On reflection, I'm actually inclined to agree. The next version of the PEP will propose the addition of type.__decorate__(). This new method will be invoked *after* the class is created and the __class__ cell is populated, but *before* lexical decorators are applied.

Cheers,
Nick.

--
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From barry at python.org  Wed Jun  6 02:18:54 2012
From: barry at python.org (Barry Warsaw)
Date: Tue, 5 Jun 2012 20:18:54 -0400
Subject: [Python-Dev] TZ-aware local time
In-Reply-To: 
References: 
Message-ID: <20120605201854.5e027298@limelight.wooz.org>

On Jun 05, 2012, at 07:41 PM, Alexander Belopolsky wrote:

>The second feature has its uses.  If I want to wake up at 7 AM every
>weekday, I don't want my alarm clock to ask me whether I mean standard or
>daylight saving time, but if I attempt to set it to 1:30 AM on the day
>when 1:30 AM happens twice, I don't want it to go off twice or divine
>which 1:30 AM I had in mind.  I think the stdlib should allow me to write
>a robust application that knows that some naive datetime objects
>correspond to two points in time and some correspond to none.

Really?  Why would naive datetimes know that?  I would expect that an aware datetime would have that information but not naive ones.

-Barry
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: not available
URL: 

From brett at python.org  Wed Jun  6 02:26:59 2012
From: brett at python.org (Brett Cannon)
Date: Tue, 5 Jun 2012 20:26:59 -0400
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
Message-ID: 

On behalf of Yury, Larry, Jiwon (wherever he ended up), and myself, here is an updated version of PEP 362 to address Guido's earlier comments. Credit for most of the update should go to Yury with Larry also helping out.

At this point I need a BDFAP and someone to do a code review: http://bugs.python.org/issue15008 .

-----------------------------------------------

PEP: 362
Title: Function Signature Object
Version: $Revision$
Last-Modified: $Date$
Author: Brett Cannon , Jiwon Seo ,
        Yury Selivanov , Larry Hastings <larry at hastings.org>
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 21-Aug-2006
Python-Version: 3.3
Post-History: 04-Jun-2012

Abstract
========

Python has always supported powerful introspection capabilities, including introspecting functions and methods.  (For the rest of this PEP, "function" refers to both functions and methods).
By examining a function object you can fully reconstruct the function's signature.  Unfortunately this information is stored in an inconvenient manner, and is spread across a half-dozen deeply nested attributes.

This PEP proposes a new representation for function signatures.  The new representation contains all necessary information about a function and its parameters, and makes introspection easy and straightforward.

However, this object does not replace the existing function metadata, which is used by Python itself to execute those functions.  The new metadata object is intended solely to make function introspection easier for Python programmers.


Signature Object
================

A Signature object represents the overall signature of a function.  It stores a `Parameter object`_ for each parameter accepted by the function, as well as information specific to the function itself.

A Signature object has the following public attributes and methods:

* name : str
    Name of the function.
* qualname : str
    Fully qualified name of the function.
* return_annotation : object
    The annotation for the return type of the function if specified.
    If the function has no annotation for its return type, this
    attribute is not set.
* parameters : OrderedDict
    An ordered mapping of parameters' names to the corresponding
    Parameter objects (keyword-only arguments are in the same order
    as listed in ``code.co_varnames``).
* bind(\*args, \*\*kwargs) -> BoundArguments
    Creates a mapping from positional and keyword arguments to
    parameters.

Once a Signature object is created for a particular function, it's cached in the ``__signature__`` attribute of that function.  Changes to the Signature object, or to any of its data members, do not affect the function itself.


Parameter Object
================

Python's expressive syntax means functions can accept many different kinds of parameters with many subtle semantic differences.  We propose a rich Parameter object designed to represent any possible function parameter.

The structure of the Parameter object is:

* name : str
    The name of the parameter as a string.
* default : object
    The default value for the parameter if specified.  If the
    parameter has no default value, this attribute is not set.
* annotation : object
    The annotation for the parameter if specified.  If the parameter
    has no annotation, this attribute is not set.
* is_keyword_only : bool
    True if the parameter is keyword-only, else False.
* is_args : bool
    True if the parameter accepts a variable number of arguments
    (``\*args``-like), else False.
* is_kwargs : bool
    True if the parameter accepts a variable number of keyword
    arguments (``\*\*kwargs``-like), else False.
* is_implemented : bool
    True if the parameter is implemented for use.  Some platforms
    implement functions but can't support specific parameters
    (e.g. "mode" for os.mkdir).  Passing in an unimplemented parameter
    may result in the parameter being ignored, or in
    NotImplementedError being raised.  It is intended that all
    conditions where ``is_implemented`` may be False be thoroughly
    documented.


BoundArguments Object
=====================

Result of a ``Signature.bind`` call.  Holds the mapping of arguments to the function's parameters.

Has the following public attributes:

* arguments : OrderedDict
    An ordered, mutable mapping of parameters' names to arguments'
    values.  Does not contain arguments' default values.
* args : tuple
    Tuple of positional arguments values.  Dynamically computed from
    the 'arguments' attribute.
* kwargs : dict
    Dict of keyword arguments values.  Dynamically computed from the
    'arguments' attribute.

The ``arguments`` attribute should be used in conjunction with ``Signature.parameters`` for any arguments processing purposes.  The ``args`` and ``kwargs`` properties should be used to invoke functions:

::

    def test(a, *, b):
        ...

    sig = signature(test)
    ba = sig.bind(10, b=20)
    test(*ba.args, **ba.kwargs)


Implementation
==============

An implementation for Python 3.3 can be found here: [#impl]_.  A Python issue was also created: [#issue]_.

The implementation adds a new function ``signature()`` to the ``inspect`` module.  ``signature()`` returns the value stored on the ``__signature__`` attribute if it exists, otherwise it creates the Signature object for the function and caches it in the function's ``__signature__``.  (For methods this is stored directly in the ``__func__`` function object, since that is what decorators work with.)


Examples
========

Function Signature Renderer
---------------------------

::

    def render_signature(signature):
        '''Renders function definition by its signature.

        Example:

            >>> def test(a:'foo', *, b:'bar', c=True, **kwargs:None) -> 'spam':
            ...     pass

            >>> render_signature(inspect.signature(test))
            test(a:'foo', *, b:'bar', c=True, **kwargs:None) -> 'spam'
        '''

        result = []
        render_kw_only_separator = True

        for param in signature.parameters.values():
            formatted = param.name

            # Add annotation and default value
            if hasattr(param, 'annotation'):
                formatted = '{}:{!r}'.format(formatted, param.annotation)
            if hasattr(param, 'default'):
                formatted = '{}={!r}'.format(formatted, param.default)

            # Handle *args and **kwargs -like parameters
            if param.is_args:
                formatted = '*' + formatted
            elif param.is_kwargs:
                formatted = '**' + formatted

            if param.is_args:
                # OK, we have an '*args'-like parameter, so we won't need
                # a '*' to separate keyword-only arguments
                render_kw_only_separator = False
            elif param.is_keyword_only and render_kw_only_separator:
                # We have a keyword-only parameter to render and we haven't
                # rendered an '*args'-like parameter before, so add a '*'
                # separator to the parameters list ("foo(arg1, *, arg2)" case)
                result.append('*')
                # This condition should be only triggered once, so
                # reset the flag
                render_kw_only_separator = False

            result.append(formatted)

        rendered = '{}({})'.format(signature.name, ', '.join(result))

        if hasattr(signature, 'return_annotation'):
            rendered += ' -> {!r}'.format(signature.return_annotation)

        return rendered

Annotation Checker
------------------

::

    import inspect
    import functools

    def checktypes(func):
        '''Decorator to verify arguments and return types

        Example:

            >>> @checktypes
            ... def test(a:int, b:str) -> int:
            ...     return int(a * b)

            >>> test(10, '1')
            1111111111

            >>> test(10, 1)
            Traceback (most recent call last):
              ...
            ValueError: test: wrong type of 'b' argument, 'str' expected, got 'int'
        '''

        sig = inspect.signature(func)

        types = {}
        for param in sig.parameters.values():
            # Iterate through function's parameters and build the list of
            # arguments types
            try:
                type_ = param.annotation
            except AttributeError:
                continue
            else:
                if not inspect.isclass(type_):
                    # Not a type, skip it
                    continue

                types[param.name] = type_

                # If the argument has a type specified, let's check that its
                # default value (if present) conforms with the type.
                try:
                    default = param.default
                except AttributeError:
                    continue
                else:
                    if not isinstance(default, type_):
                        raise ValueError("{func}: wrong type of a default value for {arg!r}". \
                                         format(func=sig.qualname, arg=param.name))

        def check_type(sig, arg_name, arg_type, arg_value):
            # Internal function that encapsulates arguments type checking
            if not isinstance(arg_value, arg_type):
                raise ValueError("{func}: wrong type of {arg!r} argument, " \
                                 "{exp!r} expected, got {got!r}". \
                                 format(func=sig.qualname, arg=arg_name,
                                        exp=arg_type.__name__,
                                        got=type(arg_value).__name__))

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # Let's bind the arguments
            ba = sig.bind(*args, **kwargs)
            for arg_name, arg in ba.arguments.items():
                # And iterate through the bound arguments
                try:
                    type_ = types[arg_name]
                except KeyError:
                    continue
                else:
                    # OK, we have a type for the argument, let's get the
                    # corresponding parameter description from the signature
                    # object
                    param = sig.parameters[arg_name]
                    if param.is_args:
                        # If this parameter is a variable-argument parameter,
                        # then we need to check each of its values
                        for value in arg:
                            check_type(sig, arg_name, type_, value)
                    elif param.is_kwargs:
                        # If this parameter is a variable-keyword-argument
                        # parameter:
                        for subname, value in arg.items():
                            check_type(sig, arg_name + ':' + subname, type_, value)
                    else:
                        # And, finally, if this parameter is a regular one:
                        check_type(sig, arg_name, type_, arg)

            result = func(*ba.args, **ba.kwargs)

            # The last bit - let's check that the result is correct
            try:
                return_type = sig.return_annotation
            except AttributeError:
                # Looks like we don't have any restriction on the return type
                pass
            else:
                if isinstance(return_type, type) and \
                        not isinstance(result, return_type):
                    raise ValueError('{func}: wrong return type, {exp} expected, got {got}'. \
                                     format(func=sig.qualname,
                                            exp=return_type.__name__,
                                            got=type(result).__name__))

            return result

        return wrapper


Open Issues
===========

When to construct the Signature object?
---------------------------------------

The Signature object can either be created in an eager or lazy fashion.  In the eager situation, the object can be created during creation of the function object.  In the lazy situation, one would pass a function object to a function and that would generate the Signature object and store it to ``__signature__`` if needed, and then return the value of ``__signature__``.

In the current implementation, signatures are created only on demand ("lazy").


Deprecate ``inspect.getfullargspec()`` and ``inspect.getcallargs()``?
---------------------------------------------------------------------

Since the Signature object replicates the use of ``getfullargspec()`` and ``getcallargs()`` from the ``inspect`` module it might make sense to begin deprecating them in 3.3.


References
==========

.. [#impl]
   pep362 branch (https://bitbucket.org/1st1/cpython/overview)

.. [#issue]
   issue 15008 (http://bugs.python.org/issue15008)


Copyright
=========

This document has been placed in the public domain.


..
   Local Variables:
   mode: indented-text
   indent-tabs-mode: nil
   sentence-end-double-space: t
   fill-column: 70
   coding: utf-8
   End:
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tseaver at palladion.com  Wed Jun  6 02:53:44 2012
From: tseaver at palladion.com (Tres Seaver)
Date: Tue, 05 Jun 2012 20:53:44 -0400
Subject: [Python-Dev] [Python-checkins] peps: Add PEP 422: Dynamic Class Decorators
In-Reply-To: 
References: <4FCE36FE.3080704@udel.edu>
Message-ID: 

On 06/05/2012 07:38 PM, Nick Coghlan wrote:
> On Wed, Jun 6, 2012 at 9:06 AM, PJ Eby wrote:
>> On Tue, Jun 5, 2012 at 5:31 PM, Terry Reedy wrote:
>>> On 6/5/2012 2:26 PM, PJ Eby wrote:
>>>> It's for symmetry and straightforward translation with stacked
>>>> decorators, i.e. between:
>>>>
>>>> @deco1
>>>> @deco2
>>>> [declaration]
>>>>
>>>> and __decorators__ = [deco1, deco2]
>>>>
>>>> Doing it the other way now means a different order for people
>>>> to remember; there should be One Obvious Order for decorators,
>>>> and the one we have now is it.
>>>
>>> You and I have different ideas of 'obvious' in this context.
>>
>> To be clearer: I've written other APIs which take multiple
>> decorators, or things like decorators that just happen to be a
>> pipeline of functions to be applied, and every time the question of
>> what order to put the API in comes up, I always put them in this
>> order, because then in order to remember what the order was, I just
>> have to think of decorators. This is easier than trying to remember
>> which APIs use decorator order, and which ones use reverse decorator
>> order.
>>
>> So, even though in itself there is no good reason for one order over
>> the other, consistency wins because less thinking. At the least, if
>> they're not going to be in decorator order, the member shouldn't be
>> called "__decorators__". ;-)
>
> Yeah, I can actually make decent arguments in favour of either order,
> but it was specifically "same order as lexical decorators" that
> tipped the balance in favour of the approach I wrote up in the PEP.
>
> It's also more consistent given how the base classes are walked.
> While I'm not proposing to calculate it this way, you can look at the
> scheme in the PEP as:
>
>     # Walk the MRO to build a complete decorator list
>     decorators = []
>     for entry in cls.mro():
>         decorators.extend(entry.__dict__.get("__decorators__", ()))
>     # Apply the decorators in "Last In, First Out" order, just like
>     # unwinding a chain of super() calls
>     for deco in reversed(decorators):
>         cls = deco(cls)

Or, to make it obvious we are treating 'decorators' as a stack::

    while decorators:
        cls = decorators.pop()(cls)

--
===================================================================
Tres Seaver          +1 540-429-0999          tseaver at palladion.com
Palladion Software   "Excellence by Design"    http://palladion.com

From ncoghlan at gmail.com  Wed Jun  6 03:16:56 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 6 Jun 2012 11:16:56 +1000
Subject: [Python-Dev] TZ-aware local time
In-Reply-To: <20120605201854.5e027298@limelight.wooz.org>
References: <20120605201854.5e027298@limelight.wooz.org>
Message-ID: 

On Wed, Jun 6, 2012 at 10:18 AM, Barry Warsaw wrote:
> On Jun 05, 2012, at 07:41 PM, Alexander Belopolsky wrote:
>
>>The second feature has its uses.  If I want to wake up at 7 AM every
>>weekday, I don't want my alarm clock to ask me whether I mean standard or
>>daylight saving time, but if I attempt to set it to 1:30 AM on the day
>>when 1:30 AM happens twice, I don't want it to go off twice or divine
>>which 1:30 AM I had in mind.  I think the stdlib should allow me to write
>>a robust application that knows that some naive datetime objects
>>correspond to two points in time and some correspond to none.
>
> Really?  Why would naive datetimes know that?  I would expect that an
> aware datetime would have that information but not naive ones.

Indeed. As I mentioned in my earlier email, we need to remember that there's a *system* level solution to many of these problems Alexander raises: configure machines to use UTC and completely ignore local time definitions.

Many military systems do this. They schedule everything in Zulu time anyway, and have the kind of control over their mission critical systems that means they can enforce the rule of configuring absolutely everything to run as UTC. Such systems often span multiple time zones, and sharing a consistent time base with other system components is considered far more important than the vagaries of local time definitions (coping with DST is the least of your worries in that context, since you would also need to consider situations like entire countries deciding to change time zones: http://www.bbc.co.uk/news/world-13334229).

You *cannot* do robust date processing with local times, because the meaning of "local time" changes. Clients, servers, peers, will all have different ideas about the local time, but they will all agree on UTC. For some platforms (especially supersonic aircraft, satellites and extraplanetary exploration vehicles) the very notion of "local time" is completely meaningless (the International Space Station takes 90 minutes to complete an orbit, so it takes less than 4 minutes to cross any given terrestrial "time zone"). And if everything is known to be in UTC, then it doesn't matter if you're using "naive" datetime objects, or "aware" ones that happen to have their timezone specifically set to UTC.

Local time should only be used for displaying dates and times to humans (since we care about little things like local sunrise and sunset, local business hours, etc) and for inter-system coordination where such details are relevant. It's analogous to the situation with Unicode text and encodings: the "true" time representation is UTC, just as the "true" text representation is Unicode. "tzinfo" objects are the date/time equivalent of Unicode encodings. Passing tzinfo objects around is the equivalent of using "tagged bytestrings" (i.e. binary data with an associated encoding) as your internal representation instead of a Unicode based text object. You really want to deal with timezone variations solely at your system boundaries and get rid of them ASAP when it comes to internal processing.

The datetime module should be designed to make this *as easy as possible*. Adding a "local time" tzinfo object (with the ambiguous hour favouring the non-DST time, and the missing hour triggering an error at construction time) would be a good step in the right direction: it allows local times to be clearly flagged, even though they're explicitly *not* appropriate for many kinds of processing and need to be converted to a more suitable format (such as a naive datetime object, or one with the timezone set to UTC) first. This is directly analogous to text in a specific encoding: it's useful for some purposes, but you'll need to decode it to Unicode for many other manipulations.

Personally, I'd like to see the datetime module make an explicit assumption that "all naive datetime objects are considered to be UTC", with the interactions between naive and aware objects updated accordingly.
(The difference between this and the implicit conversion hell that is the Python 2 Unicode model is that there shouldn't be a range mismatch between the canonical time representation and the timezone aware representations the way there is between the range of the full Unicode representation and what can be represented in an arbitrary text encoding).

Cheers,
Nick.

--
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From alexander.belopolsky at gmail.com  Wed Jun  6 03:17:46 2012
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Tue, 5 Jun 2012 21:17:46 -0400
Subject: [Python-Dev] TZ-aware local time
In-Reply-To: <20120605201854.5e027298@limelight.wooz.org>
References: 
 <20120605201854.5e027298@limelight.wooz.org>
Message-ID: 

On Tue, Jun 5, 2012 at 8:18 PM, Barry Warsaw wrote:
>> I think stdlib should allow me to write
>>a robust application that knows that some naive datetime objects
>>correspond to two points in time and some correspond to none.
>
> Really?  Why would naive datetimes know that?  I would expect that an
> aware datetime would have that information but not naive ones.

I did not say that it is the datetime objects that would have that information.  It is the application or the library that should embody such knowledge.  So if I try to specify a time as 2012-03-11 at 2:30 AM "New York time," the application should know that no such time exists and reject it as it would 2012-03-11 at 2:66 AM.

At the end of the day, users want an easy way to solve the equation datetime.fromtimestamp(x) = dt, but this equation can have 0, 1 or 2 solutions depending on the value of dt.  An ultimate solution would be to provide a function that returns a list of length 0, 1, or 2.  Short of that, we can assume standard time when there are two solutions and raise an exception when there are none.  This is the math.sqrt() approach.

From glyph at twistedmatrix.com  Wed Jun  6 03:31:20 2012
From: glyph at twistedmatrix.com (Glyph)
Date: Tue, 5 Jun 2012 18:31:20 -0700
Subject: [Python-Dev] TZ-aware local time
In-Reply-To: 
References: <20120605201854.5e027298@limelight.wooz.org>
Message-ID: <2E8E9264-9575-41A9-BEDE-1E5C5D4BD671@twistedmatrix.com>

On Jun 5, 2012, at 6:16 PM, Nick Coghlan wrote:

> Personally, I'd like to see the datetime module make an explicit
> assumption that "all naive datetime objects are considered to be UTC",
> with the interactions between naive and aware objects updated
> accordingly

I would absolutely love it if this were true.  In fact, I would go a step further and say that the whole concept of a "naive" datetime is simply a bug.  We don't have a "naive" unicode, for example, where it's text in some encoding but you decline to decide which one when you decode it, leaving that to the caller.

When we addressed this problem for ourselves at Divmod some time ago, naive=UTC is exactly what we did:

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From alexander.belopolsky at gmail.com  Wed Jun  6 03:39:07 2012
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Tue, 5 Jun 2012 21:39:07 -0400
Subject: [Python-Dev] TZ-aware local time
In-Reply-To: 
References: <20120605201854.5e027298@limelight.wooz.org>
Message-ID: 

On Tue, Jun 5, 2012 at 9:16 PM, Nick Coghlan wrote:
> ...  Local time should only be used for displaying
> dates and times to humans (since we care about little things like
> local sunrise and sunset, local business hours, etc) and for
> inter-system coordination where such details are relevant.
>

Displaying local time would be addressed by what I called the first feature: given the timestamp, find the local time and the TZ name/offset:

$ date
Tue Jun  5 21:28:21 EDT 2012

Most humans will ignore the TZ information, but the format above can represent any moment in time unambiguously.

A related but different problem is taking time input from humans.  Air traffic control systems may handle all times in UTC, but when I search for my flight, I enter local time.  There may be two flights, one at 1:30 AM EDT and the other at 1:30 AM EST.  In this case I need some way to figure out which one is mine, and I will look at the TZ part. (Hopefully the user interface will present a helpful explanation of what is going on.)

>> The datetime module should be designed to make this *as easy as
>> possible*. Adding a "local time" tzinfo object (with the ambiguous hour
>> favouring the non-DST time, and the missing hour triggering an error
>> at construction time) would be a good step in the right direction: it
>> allows local times to be clearly flagged, even though they're
>> explicitly *not* appropriate for many kinds of processing and need to
>> be converted to a more suitable format (such as a naive datetime
>> object, or one with the timezone set to UTC) first.

This is exactly my proposal, but it does not help when you need to get on the right 1:30 AM flight.

From ncoghlan at gmail.com  Wed Jun  6 03:45:23 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 6 Jun 2012 11:45:23 +1000
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To: 
References: 
Message-ID: 

On Wed, Jun 6, 2012 at 10:26 AM, Brett Cannon wrote:
> On behalf of Yury, Larry, Jiwon (wherever he ended up), and myself, here is
> an updated version of PEP 362 to address Guido's earlier comments. Credit
> for most of the update should go to Yury with Larry also helping out.
>
> At this point I need a BDFAP and someone to do a code
> review: http://bugs.python.org/issue15008 .

I suspect I have too many ideas for what I want the PEP to accomplish to make a good BDFL delegate for this one.

One thing I would like to see this PEP achieve is to improve function introspection for @functools.wraps decorated functions where the wrapper uses a (*args, **kwds) signature. To that end, I would prefer to see it added to the functools module rather than to the inspect module.

I would also like to see native support for "early binding" validation of call parameters. The idea will be for interfaces that support "lazy calls" (such as contextlib.ExitStack, unittest.TestCase.addCleanup) to detect parameter binding errors when the function is *added*, rather than when it is called. Such lazy call errors are notoriously hard to debug, since the error site (where the callback is invoked) often provides no indication as to where the actual error occurred (that is, when it was registered with the callback interface).

Specific proposals:

- the goals of the PEP be expanded to include error checking of parameter binding for delayed calls and improve introspection of function wrappers that accept arbitrary arguments, rather than the more nebulous "improve introspection support for functions".

- the main new API be added as "functools.signature" (any necessary
infrastructure from inspect would also migrate to functools as private implementations; affected inspect APIs would either be updated to use the signature API, or else to just republish the functools implementation)

- "functools.update_wrapper" be enhanced to set "wrapper.__signature__ = signature(wrapped)"

- At least contextlib and unittest be updated to use "functools.signature(f).bind(*args, **kwds)" for the relevant callback APIs

Cheers,
Nick.

--
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From greg.ewing at canterbury.ac.nz  Wed Jun  6 01:11:38 2012
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Wed, 06 Jun 2012 11:11:38 +1200
Subject: [Python-Dev] TZ-aware local time
In-Reply-To: 
References: 
Message-ID: <4FCE922A.2010003@canterbury.ac.nz>

Alexander Belopolsky wrote:
> The problem is again the DST ambiguity.  One day a year, datetime(y,
> m, d, 1, 30, tzinfo=Local) represents two different times and another
> day it represents no valid time.
>
> The documentation example (fixed in issue 9063) addresses the
> ambiguity by defaulting to standard time, but it does this at a cost
> of having no way to spell "the other hour."

What would be so bad about giving datetime objects a DST flag? Apps that don't care could ignore it and get results no worse than the status quo.

--
Greg

From greg.ewing at canterbury.ac.nz  Wed Jun  6 01:16:33 2012
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Wed, 06 Jun 2012 11:16:33 +1200
Subject: [Python-Dev] Issue 2736: datetimes and Unix timestamps
In-Reply-To: 
References: <20120604115102.7e9b770b@resist.wooz.org>
Message-ID: <4FCE9351.2050604@canterbury.ac.nz>

Terry Reedy wrote:
> "A rich comparison method may return the singleton NotImplemented if it
> does not implement the operation for a given pair of arguments. By
> convention, False and True are returned for a successful comparison.
> However, these methods can return any value,"

That's to give the other operand a chance to handle the operation. If they both return NotImplemented, then a TypeError is raised by the interpreter.

--
Greg

From rdmurray at bitdance.com  Wed Jun  6 03:53:22 2012
From: rdmurray at bitdance.com (R. David Murray)
Date: Tue, 05 Jun 2012 21:53:22 -0400
Subject: [Python-Dev] [Python-checkins] peps: Add PEP 422: Dynamic Class Decorators
In-Reply-To: 
References: <4FCE36FE.3080704@udel.edu>
Message-ID: <20120606015323.049322500AA@webabinitio.net>

On Wed, 06 Jun 2012 09:38:39 +1000, Nick Coghlan wrote:
> On Wed, Jun 6, 2012 at 9:06 AM, PJ Eby wrote:
> > On Tue, Jun 5, 2012 at 5:31 PM, Terry Reedy wrote:
> >> On 6/5/2012 2:26 PM, PJ Eby wrote:
> >>> It's for symmetry and straightforward translation with stacked
> >>> decorators, i.e. between:
> >>>
> >>> @deco1
> >>> @deco2
> >>> [declaration]
> >>>
> >>> and __decorators__ = [deco1, deco2]
> >>>
> >>> Doing it the other way now means a different order for people to
> >>> remember; there should be One Obvious Order for decorators, and the one
> >>> we have now is it.
> >>
> >> You and I have different ideas of 'obvious' in this context.
> >
> > To be clearer: I've written other APIs which take multiple decorators, or
> > things like decorators that just happen to be a pipeline of functions to be
> > applied, and every time the question of what order to put the API in comes
> > up, I always put them in this order, because then in order to remember what
> > the order was, I just have to think of decorators.  This is easier than
> > trying to remember which APIs use decorator order, and which ones use
> > reverse decorator order.
> >
> > So, even though in itself there is no good reason for one order over the
> > other, consistency wins because less thinking.  At the least, if they're
> > not going to be in decorator order, the member shouldn't be called
> > "__decorators__".  ;-)
>
> Yeah, I can actually make decent arguments in favour of either order,
> but it was specifically "same order as lexical decorators" that tipped
> the balance in favour of the approach I wrote up in the PEP.

I don't think about data structures lexically, though, I think of them programmatically.  So I'm with Terry here, I would expect them to be in the list in the order they are going to get applied.  I can see the other argument, though, and presumably other people's brains work differently and they'd be more confused by non-lexical ordering.

> It's also more consistent given how the base classes are walked. While
> I'm not proposing to calculate it this way, you can look at the scheme
> in the PEP as:
>
>     # Walk the MRO to build a complete decorator list
>     decorators = []
>     for entry in cls.mro():
>         decorators.extend(entry.__dict__.get("__decorators__", ()))
>     # Apply the decorators in "Last In, First Out" order, just like
>     # unwinding a chain of super() calls
>     for deco in reversed(decorators):
>         cls = deco(cls)

Assuming I got this right (no guarantees :), the following is actually easier for me to understand (I had to think to understand what "just like unwinding a chain of super() calls" meant):

    # Walk the MRO from the root, applying the decorators.
    for entry in reversed(cls.mro()):
        for deco in entry.__dict__.get("__decorators__", ()):
            cls = deco(cls)

--David

From alexander.belopolsky at gmail.com  Wed Jun  6 03:57:48 2012
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Tue, 5 Jun 2012 21:57:48 -0400
Subject: [Python-Dev] TZ-aware local time
In-Reply-To: <4FCE922A.2010003@canterbury.ac.nz>
References: <4FCE922A.2010003@canterbury.ac.nz>
Message-ID: 

On Tue, Jun 5, 2012 at 7:11 PM, Greg Ewing wrote:
> What would be so bad about giving datetime objects
> a DST flag? Apps that don't care could ignore it and
> get results no worse than the status quo.

This would neatly solve the round-trip problem, but will open a different can of worms: what happens to the dst flag when you add a timedelta?  If the dst flag is optional, should you be able to compare dst-aware and dst-naive instances?

From rdmurray at bitdance.com  Wed Jun  6 04:05:38 2012
From: rdmurray at bitdance.com (R. David Murray)
Date: Tue, 05 Jun 2012 22:05:38 -0400
Subject: [Python-Dev] TZ-aware local time
In-Reply-To: <2E8E9264-9575-41A9-BEDE-1E5C5D4BD671@twistedmatrix.com>
References: <20120605201854.5e027298@limelight.wooz.org>
 <2E8E9264-9575-41A9-BEDE-1E5C5D4BD671@twistedmatrix.com>
Message-ID: <20120606020539.50A742500AA@webabinitio.net>

On Tue, 05 Jun 2012 18:31:20 -0700, Glyph wrote:
>
> On Jun 5, 2012, at 6:16 PM, Nick Coghlan wrote:
>
> > Personally, I'd like to see the datetime module make an explicit
> > assumption that "all naive datetime objects are considered to be UTC",
> > with the interactions between naive and aware objects updated
> > accordingly
>
> I would absolutely love it if this were true. In fact, I would go a step
> further and say that the whole concept of a "naive" datetime is simply a
> bug. We don't have a "naive" unicode, for example, where it's text in
> some encoding but you decline to decide which one when you decode it,
> leaving that to the caller.
>
> When we addressed this problem for ourselves at Divmod some time ago,
> naive=UTC is exactly what we did:

Note that (after several conversations with Alexander, and based on the analogous decision in the email RFCs), the email package makes the same call.  A naive datetime is treated as a UTC time with no information about what timezone it originated from (coded as -0000 per RFC 5322), while an aware datetime will generate an accurate offset (+0000 for "really UTC", and as appropriate for others).

So in some sense I've already nudged the stdlib in this direction...unless I get overruled now that I've pointed it out :)

--David

From ncoghlan at gmail.com  Wed Jun  6 04:44:17 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 6 Jun 2012 12:44:17 +1000
Subject: [Python-Dev] TZ-aware local time
In-Reply-To: 
References: <4FCE922A.2010003@canterbury.ac.nz>
Message-ID: 

On Wed, Jun 6, 2012 at 11:57 AM, Alexander Belopolsky wrote:
> On Tue, Jun 5, 2012 at 7:11 PM, Greg Ewing wrote:
>> What would be so bad about giving datetime objects
>> a DST flag? Apps that don't care could ignore it and
>> get results no worse than the status quo.
>
> This would neatly solve the round-trip problem, but will open a
> different can of worms: what happens to the dst flag when you add a
> timedelta?  If the dst flag is optional, should you be able to compare
> dst-aware and dst-naive instances?

Yeah, I think it's cleaner to lean on tzinfo as much as possible. If we decide we need to support both "local, treating the overlapping hour as standard time" and "local, treating the overlapping hour as DST", then that could be represented as two different timezone objects and follow the normal rules for reconciling timezones rather than adding a new flag anywhere.

Cheers,
Nick.

--
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From yselivanov.ml at gmail.com  Wed Jun  6 04:51:30 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Tue, 5 Jun 2012 22:51:30 -0400
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To: 
References: 
Message-ID: 

Nick,

On 2012-06-05, at 9:45 PM, Nick Coghlan wrote:
> Specific proposals:
>
> - the goals of the PEP be expanded to include error checking of
> parameter binding for delayed calls and improve introspection of
> function wrappers that accept arbitrary arguments, rather than the
> more nebulous "improve introspection support for functions".

It's already supported, if I understand your request correctly. 'Signature.bind' already does all the necessary arguments validation (see the unit tests).  Hence, `unittest.TestCase.addCleanup` may be rewritten as:

    def addCleanup(self, function, *args, **kwargs):
        # Bind *args & **kwargs.  May raise a BindError in case
        # of incorrect callback arguments.
        bound_args = signature(function).bind(*args, **kwargs)
        self._cleanups.append((function, bound_args))

And later, in `doCleanups`:

    while self._cleanups:
        function, bound_args = self._cleanups.pop()
        part = lambda: function(*bound_args.args, **bound_args.kwargs)
        self._executeTestPart(part, outcome)
        ...

And the same for `contextlib.ExitStack`:

    def callback(self, callback, *args, **kwds):
        cb_bound_args = signature(callback).bind(*args, **kwds)
        def _exit_wrapper(exc_type, exc, tb):
            callback(*cb_bound_args.args, **cb_bound_args.kwargs)
        ...

> - the main new API be added as "functools.signature" (any necessary
> infrastructure from inspect would also migrate to functools as private
> implementations; affected inspect APIs would either be updated to use
> the signature API, or else to just republish the functools
> implementation)

Only 'ismethod' and 'isfunction' from the 'inspect' module are used. Those could be easily replaced with direct 'isinstance' calls, so we have no dependency on inspect's guts.

As for moving the Signature object to `functools`, we had this discussion with Brett, and here is what he suggested:

    Functools contains code that transforms what a function
    does while inspect is about introspection. These objects are
    all about introspection and not about transforming what a
    function does.

> - "functools.update_wrapper" be enhanced to set "wrapper.__signature__
> = signature(wrapped)"

Big +1 on this one.  If you give me a green light on this, I'll add this change along with the unit tests to the patch.

> - At least contextlib and unittest be updated to use
> "functools.signature(f).bind(*args, **kwds)" for the relevant callback
> APIs

+1.

Thank you,
- Yury

From guido at python.org  Wed Jun  6 04:59:48 2012
From: guido at python.org (Guido van Rossum)
Date: Tue, 5 Jun 2012 19:59:48 -0700
Subject: [Python-Dev] TZ-aware local time
In-Reply-To: 
References: <4FCE922A.2010003@canterbury.ac.nz>
Message-ID: 

On Tue, Jun 5, 2012 at 7:44 PM, Nick Coghlan wrote:
> On Wed, Jun 6, 2012 at 11:57 AM, Alexander Belopolsky wrote:
>> On Tue, Jun 5, 2012 at 7:11 PM, Greg Ewing wrote:
>>> What would be so bad about giving datetime objects
>>> a DST flag? Apps that don't care could ignore it and
>>> get results no worse than the status quo.
>>
>> This would neatly solve the round-trip problem, but will open a
>> different can of worms: what happens to the dst flag when you add a
>> timedelta?  If the dst flag is optional, should you be able to compare
>> dst-aware and dst-naive instances?
>
> Yeah, I think it's cleaner to lean on tzinfo as much as possible. If
> we decide we need to support both "local, treating the overlapping
> hour as standard time" and "local, treating the overlapping hour as
> DST", then that could be represented as two different timezone objects
> and follow the normal rules for reconciling timezones rather than
> adding a new flag anywhere.

Let's not add a DST flag to datetime objects.

--
--Guido van Rossum (python.org/~guido)

From guido at python.org  Wed Jun  6 05:09:18 2012
From: guido at python.org (Guido van Rossum)
Date: Tue, 5 Jun 2012 20:09:18 -0700
Subject: [Python-Dev] TZ-aware local time
In-Reply-To: 
References: <20120605201854.5e027298@limelight.wooz.org>
Message-ID: 

On Tue, Jun 5, 2012 at 6:39 PM, Alexander Belopolsky wrote:
> On Tue, Jun 5, 2012 at 9:16 PM, Nick Coghlan wrote:
>> ...  Local time should only be used for displaying
>> dates and times to humans (since we care about little things like
>> local sunrise and sunset, local business hours, etc) and for
>> inter-system coordination where such details are relevant.
>
> Displaying local time would be addressed by what I called the first
> feature: given the timestamp, find the local time and the TZ
> name/offset:
>
> $ date
> Tue Jun  5 21:28:21 EDT 2012
>
> Most humans will ignore the TZ information, but the format above can
> represent any moment in time unambiguously.
>
> A related but different problem is taking time input from humans.  Air
> traffic control systems may handle all times in UTC, but when I search
> for my flight, I enter local time.  There may be two flights, one at
> 1:30 AM EDT and the other at 1:30 AM EST.
> In this case I need some
> way to figure out which one is mine, and I will look at the TZ part.
> (Hopefully the user interface will present a helpful explanation of
> what is going on.)
>
>> The datetime module should be designed to make this *as easy as
>> possible*. Adding a "local time" tzinfo object (with the ambiguous hour
>> favouring the non-DST time, and the missing hour triggering an error
>> at construction time) would be a good step in the right direction: it
>> allows local times to be clearly flagged, even though they're
>> explicitly *not* appropriate for many kinds of processing and need to
>> be converted to a more suitable format (such as a naive datetime
>> object, or one with the timezone set to UTC) first.
>
> This is exactly my proposal, but it does not help when you need to get
> on the right 1:30 AM flight.

Trust me. Even if there was only one 1:30 AM flight, everybody including the crew would have trouble getting there on time. There's a reason why the DST change happens around 1 AM.

(Unrelated fun DST fact: there's a difference between the way Europe and the US coordinate DST changes across multiple timezones. In the US each timezone switches at 1 AM local time, making the difference between adjacent timezones vary for an hour. In Europe the linked timezones all switch simultaneously, keeping the zone differences in sync but making the local time of the switch vary.)

--
--Guido van Rossum (python.org/~guido)

From guido at python.org  Wed Jun  6 05:14:45 2012
From: guido at python.org (Guido van Rossum)
Date: Tue, 5 Jun 2012 20:14:45 -0700
Subject: [Python-Dev] TZ-aware local time
In-Reply-To: <2E8E9264-9575-41A9-BEDE-1E5C5D4BD671@twistedmatrix.com>
References: <20120605201854.5e027298@limelight.wooz.org>
 <2E8E9264-9575-41A9-BEDE-1E5C5D4BD671@twistedmatrix.com>
Message-ID: 

On Tue, Jun 5, 2012 at 6:31 PM, Glyph wrote:
>
> On Jun 5, 2012, at 6:16 PM, Nick Coghlan wrote:
>
> Personally, I'd like to see the datetime module make an explicit
> assumption that "all naive datetime objects are considered to be UTC",
> with the interactions between naive and aware objects updated
> accordingly
>
> I would absolutely love it if this were true.  In fact, I would go a step
> further and say that the whole concept of a "naive" datetime is simply a
> bug.  We don't have a "naive" unicode, for example, where it's text in some
> encoding but you decline to decide which one when you decode it, leaving
> that to the caller.
>
> When we addressed this problem for ourselves at Divmod some time ago,
> naive=UTC is exactly what we did:
>

You can try to enforce this, but users will ignore it, and happily represent local time as UTC. I've seen people do this with POSIX timestamps too -- use the UTC conversions between timestamps and time tuples, and yet use time tuples to represent local time (the timestamps are stored because integers are easier to store). And yes they get in horrible trouble around DST and they don't understand why. But they still do it.

I think it's better to give users the rope they want than to try and prevent them from hanging themselves, since otherwise they'll just use the power cords as ropes and electrocute themselves.
-- --Guido van Rossum (python.org/~guido) From python at mrabarnett.plus.com Wed Jun 6 05:29:45 2012 From: python at mrabarnett.plus.com (MRAB) Date: Wed, 06 Jun 2012 04:29:45 +0100 Subject: [Python-Dev] TZ-aware local time In-Reply-To: References: <4FCE922A.2010003@canterbury.ac.nz> Message-ID: <4FCECEA9.1040608@mrabarnett.plus.com> On 06/06/2012 02:57, Alexander Belopolsky wrote: > On Tue, Jun 5, 2012 at 7:11 PM, Greg Ewing wrote: >> What would be so bad about giving datetime objects >> a DST flag? Apps that don't care could ignore it and >> get results no worse than the status quo. > > This would neatly solve the round-trip problem, but will open a > different can of worms: what happens to the dst flag when you add a > timedelta? If dst flag is optional, should you be able to compare > dst-aware and dst-naive instances? > Here's my take on it: datetime objects would consist of the UTC time, time zone and DST. When comparing 2 datetime objects, it would compare the UTC times. The difference between 2 datetimes would be the difference between their UTC times. When converting to a string, you would specify whether you wanted local time (UTC + TZ + DST) or not. When converting from a string, it would assume UTC time unless the string gave the time zone and DST or you specified that it was local time. It would convert local time to UTC time and set the time zone and DST appropriately. After adding a timedelta to a datetime, it would normalise the DST according to the local rules. From ncoghlan at gmail.com Wed Jun 6 05:41:45 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 6 Jun 2012 13:41:45 +1000 Subject: [Python-Dev] TZ-aware local time In-Reply-To: References: <20120605201854.5e027298@limelight.wooz.org> <2E8E9264-9575-41A9-BEDE-1E5C5D4BD671@twistedmatrix.com> Message-ID: On Wed, Jun 6, 2012 at 1:14 PM, Guido van Rossum wrote: > You can try to enforce this, but users will ignore it, and happily > represent local time as UTC. I've seen people do this with POSIX > timestamps too -- use the UTC conversions between timestamps and time > tuples, and yet use time tuples to represent local time (the > timestamps are stored because integers are easier to store). And yes > they get in horrible trouble around DST and they don't understand why. > But they still do it. > > I think it's better to give users the rope they want than to try and > prevent them from hanging themselves, since otherwise they'll just use > the power cords as ropes and electrocute themselves. Agreed, I'm just asking that the particular brand of rope be the assumption that naive timezones are implicitly UTC and allowing transparent interoperation according to that assumption. If someone is just using them to represent local time, and only have to deal with local time in one location, then they'll still mostly be fine (setting aside DST problems). If naive times and tz-aware times can natively interoperate, then it provides a path towards making more of the stdlib tz-aware by default (such as returning objects with the timezone set to UTC). Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? 
Brisbane, Australia From pje at telecommunity.com Wed Jun 6 05:48:35 2012 From: pje at telecommunity.com (PJ Eby) Date: Tue, 5 Jun 2012 23:48:35 -0400 Subject: [Python-Dev] Possible rough edges in Python 3 metaclasses (was Re: Language reference updated for metaclasses) In-Reply-To: References: <5954463C-4512-4F3B-9D62-61C54367ED59@voidspace.org.uk> Message-ID: On Tue, Jun 5, 2012 at 8:14 PM, Nick Coghlan wrote: > On reflection, I'm actually inclined to agree. The next version of the > PEP will propose the addition of type.__decorate__(). This new method > will be invoked *after* the class is created and the __class__ cell is > populated, but *before* lexical decorators are applied. > Why not have type.__call__() do the invocation? That is, why does it need to become part of the external class-building protocol? (One advantage to putting it all in type.__call__() is that you can emulate the protocol in older Pythons more easily than if it's part of the external class creation dance.) -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Wed Jun 6 06:03:25 2012 From: guido at python.org (Guido van Rossum) Date: Tue, 5 Jun 2012 21:03:25 -0700 Subject: [Python-Dev] TZ-aware local time In-Reply-To: References: <20120605201854.5e027298@limelight.wooz.org> <2E8E9264-9575-41A9-BEDE-1E5C5D4BD671@twistedmatrix.com> Message-ID: On Tue, Jun 5, 2012 at 8:41 PM, Nick Coghlan wrote: > On Wed, Jun 6, 2012 at 1:14 PM, Guido van Rossum wrote: >> You can try to enforce this, but users will ignore it, and happily >> represent local time as UTC. I've seen people do this with POSIX >> timestamps too -- use the UTC conversions between timestamps and time >> tuples, and yet use time tuples to represent local time (the >> timestamps are stored because integers are easier to store). And yes >> they get in horrible trouble around DST and they don't understand why. >> But they still do it. >> >> I think it's better to give users the rope they want than to try and >> prevent them from hanging themselves, since otherwise they'll just use >> the power cords as ropes and electrocute themselves. > > Agreed, I'm just asking that the particular brand of rope be the > assumption that naive timezones are implicitly UTC and allowing > transparent interoperation according to that assumption. If someone is > just using them to represent local time, and only have to deal with > local time in one location, then they'll still mostly be fine (setting > aside DST problems). > > If naive times and tz-aware times can natively interoperate, then it > provides a path towards making more of the stdlib tz-aware by default > (such as returning objects with the timezone set to UTC). I don't see how that follows. Forbidding the interaction between naive and tz-aware datetime objects is a fundamental part of the design and I won't be comfortable with dropping it without a whole lot more discussion of the topic. OTOH adding the "Local" timezone object to the stdlib (despite its flaws) is a no-brainer for me, since it doesn't hurt those who don't use it. -- --Guido van Rossum (python.org/~guido) From ben+python at benfinney.id.au Wed Jun 6 08:10:28 2012 From: ben+python at benfinney.id.au (Ben Finney) Date: Wed, 06 Jun 2012 16:10:28 +1000 Subject: [Python-Dev] TZ-aware local time References: <4FCE922A.2010003@canterbury.ac.nz> <4FCECEA9.1040608@mrabarnett.plus.com> Message-ID: <87haup2cob.fsf@benfinney.id.au> MRAB writes: > datetime objects would consist of the UTC time, time zone and DST. 
"time zone" information always entails DST information, doesn't it? It isn't proper time zone information if it doesn't tell you about DST. That is, when you know the full time zone information, that includes when (if ever) DST kicks on or off.

Or have I been spoiled by the Olson database?

--
 \       "Truth is stranger than fiction, but it is because fiction is |
  `\    obliged to stick to possibilities, truth isn't." --Mark Twain, |
_o__)                                       _Following the Equator_ |
                                                           Ben Finney

From ncoghlan at gmail.com  Wed Jun 6 08:48:31 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 6 Jun 2012 16:48:31 +1000
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To:
References:
Message-ID:

On Wed, Jun 6, 2012 at 12:51 PM, Yury Selivanov wrote:
> As for moving Signature object to `functools`, we had this discussion with Brett, and here is what he suggested:
>
>     Functools contains code that transforms what a function
>     does while inspect is about introspection. These objects are
>     all about introspection and not about transforming what a
>     function does.

I actually disagree with that characterisation of the new feature, as:

1. __signature__ introduces the possibility for a function (or any object, really) to say "this is my *real* signature, even if the lower level details appear to be more permissive than that"
2. Signature.bind introduces the ability to split the "bind arguments to parameters" operation from the "call object" operation

Those two use cases are about function manipulation rather than pure introspection, making functools an appropriate home.

>> - "functools.update_wrapper" be enhanced to set "wrapper.__signature__
>> = signature(wrapped)"
>
> Big +1 on this one. If you give me a green light on this, I'll add this change along with the unit tests to the patch.

And this is actually the real reason I'm proposing functools as the home for the new feature. I think this change would be a great enhancement to functools.wraps, but I also think making functools depend on the inspect module would be a crazy thing to do :)

However, looking at the code, I think the split that makes sense is for a lower level functools.signature to *only* support real function objects (i.e. not even method objects).

At the inspect layer, inspect.signature could then support retrieving a signature for an arbitrary callable roughly as follows:

    def signature(f):
        try:
            # Real functions are handled directly by functools
            return functools.signature(f)
        except TypeError:
            pass
        # Not a function object, handle other kinds of callable
        if isclass(f):
            # Figure out a Signature based on f.__new__ and f.__init__
            # Complain if the signatures are contradictory
            # Account for the permissive behaviour of object.__new__
            # and object.__init__
            return class_signature
        if ismethod(f):
            f = f.__func__
        elif not isfunction(f):
            try:
                f = f.__call__
            except AttributeError:
                pass
        return signature(f)  # Use recursion for the initial implementation sketch

That code is almost certainly wrong, but it should be enough to give the general idea. The short version is:

1. Provide a functools.signature that expects ordinary function objects (or objects with a __signature__ attribute)
2. Provide an inspect.signature that also handles other kinds of callable

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
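To make the dispatch order in Nick's sketch concrete, here is a tiny runnable illustration of the same unwrapping chain using only today's inspect module (the helper and class names are illustrative, not part of any proposal; plain Python callables only):

    import inspect

    class Adder:
        def __call__(self, a, b=0):
            return a + b

    def unwrap_callable(obj):
        # Same chain as the sketch: methods -> __func__, other
        # non-functions -> __call__, until a plain function remains.
        while not inspect.isfunction(obj):
            if inspect.ismethod(obj):
                obj = obj.__func__
            elif hasattr(obj, '__call__'):
                obj = obj.__call__
            else:
                raise TypeError('not a supported callable')
        return obj

    print(inspect.getfullargspec(unwrap_callable(Adder())))
    # -> FullArgSpec(args=['self', 'a', 'b'], ..., defaults=(0,), ...)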
From ncoghlan at gmail.com  Wed Jun 6 08:54:29 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 6 Jun 2012 16:54:29 +1000
Subject: [Python-Dev] Possible rough edges in Python 3 metaclasses (was Re: Language reference updated for metaclasses)
In-Reply-To:
References: <5954463C-4512-4F3B-9D62-61C54367ED59@voidspace.org.uk>
Message-ID:

On Wed, Jun 6, 2012 at 1:48 PM, PJ Eby wrote:
> On Tue, Jun 5, 2012 at 8:14 PM, Nick Coghlan wrote:
>>
>> On reflection, I'm actually inclined to agree. The next version of the
>> PEP will propose the addition of type.__decorate__(). This new method
>> will be invoked *after* the class is created and the __class__ cell is
>> populated, but *before* lexical decorators are applied.
>
> Why not have type.__call__() do the invocation? That is, why does it need
> to become part of the external class-building protocol?
>
> (One advantage to putting it all in type.__call__() is that you can emulate
> the protocol in older Pythons more easily than if it's part of the external
> class creation dance.)

That's something else I need to write up in the PEP (I *had* thought about it, I just forgot to include the explanation for why it doesn't work). The problems are two-fold:

1. It doesn't play nicely with __class__ (since the cell doesn't get populated until after type.__call__ returns)
2. It doesn't play nicely with metaclass inheritance (when you call up to type.__call__ from a subclass __call__ implementation, the dynamic decorators will see a partially initialised class object)

That's really two aspects of the same underlying concern though: the idea is that decorators should only be handed a fully initialised class instance, which means they have to be invoked *after* the metaclass invocation has already returned.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From ncoghlan at gmail.com  Wed Jun 6 09:09:06 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 6 Jun 2012 17:09:06 +1000
Subject: [Python-Dev] Possible rough edges in Python 3 metaclasses (was Re: Language reference updated for metaclasses)
In-Reply-To:
References:
Message-ID:

On Wed, Jun 6, 2012 at 1:28 AM, PJ Eby wrote:
> To be clear, what I specifically proposed (as I mentioned in an earlier
> thread) was simply to patch __build_class__ in order to restore the missing
> __metaclass__ hook. (Which, incidentally, would make ALL code using
> __metaclass__ cross-version compatible between 2.x and 3.x: a potentially
> valuable thing in and of itself!)
>
> As for metaclasses being hard to compose, PEP 422 is definitely a step in
> the right direction. (Automatic metaclass combining is about the only thing
> that would improve it any further.)

Just as a warning, I doubt I'll be able to persuade enough people that this is a feature worth including in the short time left before 3.3 feature freeze. It may end up being necessary to publish metaclass and explicit decorator based variants (with their known limitations), with a view to gaining support for inclusion in 3.4.

Alternatively, if people can supply examples of "post-creation manipulation only" metaclasses that could be replaced with cleaner and more composable dynamic decorator based solutions, that could help make the PEP more compelling in the near term (perhaps compelling enough to make it into 3.3).

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
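One concrete example of the "post-creation manipulation only" pattern Nick is asking for: a class registry. Everything the metaclass does here happens after the class is created, so a decorator can do the same job and composes more cleanly (names are illustrative; the sketch is runnable as-is):

    registry = []

    class RegisterMeta(type):
        def __new__(mcls, name, bases, ns):
            cls = super().__new__(mcls, name, bases, ns)
            registry.append(cls)  # pure post-creation side effect
            return cls

    def register(cls):  # the decorator equivalent
        registry.append(cls)
        return cls

    class A(metaclass=RegisterMeta):
        pass

    @register
    class B:
        pass

    print(registry)  # [<class '__main__.A'>, <class '__main__.B'>]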
From ben+python at benfinney.id.au  Wed Jun 6 09:55:52 2012
From: ben+python at benfinney.id.au (Ben Finney)
Date: Wed, 06 Jun 2012 17:55:52 +1000
Subject: [Python-Dev] Open PEPs and large-scale changes for 3.3
References: <87havz8m6p.fsf@benfinney.id.au>
Message-ID: <87d35c3md3.fsf@benfinney.id.au>

Ben Finney writes:

> Georg Brandl writes:
>
> > list of possible features for 3.3 as specified by PEP 398:
> >
> > Candidate PEPs:
> [...]
> > * PEP 3143: Standard daemon process library
>
> Our porting work will not be done in time for Python 3.3. I will update
> this to target Python 3.4.

The PEP document currently says it targets "3.x". I'll leave it in that state until we're more confident that the current work will be on track for a particular Python release.

Do I need to do anything in particular to be explicit that PEP 3143 is not coming in Python 3.3?

--
 \        "Human reason is snatching everything to itself, leaving |
  `\     nothing for faith." --Bernard of Clairvaux, 1090-1153 CE |
_o__)                                                            |
                                                       Ben Finney

From ncoghlan at gmail.com  Wed Jun 6 11:31:25 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 6 Jun 2012 19:31:25 +1000
Subject: [Python-Dev] Possible rough edges in Python 3 metaclasses (was Re: Language reference updated for metaclasses)
In-Reply-To:
References:
Message-ID:

On Wed, Jun 6, 2012 at 5:09 PM, Nick Coghlan wrote:
> On Wed, Jun 6, 2012 at 1:28 AM, PJ Eby wrote:
>> To be clear, what I specifically proposed (as I mentioned in an earlier
>> thread) was simply to patch __build_class__ in order to restore the missing
>> __metaclass__ hook. (Which, incidentally, would make ALL code using
>> __metaclass__ cross-version compatible between 2.x and 3.x: a potentially
>> valuable thing in and of itself!)
>>
>> As for metaclasses being hard to compose, PEP 422 is definitely a step in
>> the right direction. (Automatic metaclass combining is about the only thing
>> that would improve it any further.)
>
> Just as a warning, I doubt I'll be able to persuade enough people that
> this is a feature worth including in the short time left before 3.3
> feature freeze. It may end up being necessary to publish metaclass
> and explicit decorator based variants (with their known limitations),
> with a view to gaining support for inclusion in 3.4.

Upgrading this warning to a fact: there's no way this topic can be given the consideration it deserves in the space of the next three weeks. I'll be changing the title of 422, spending more time discussing the problem (rather than leaping to a conclusion), and retargeting the PEP at 3.4.

If you do decide to play around with monkeypatching __build_class__, please make clear to any users that it's a temporary fix until something more robust and less implementation dependent can be devised for 3.4.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From mal at egenix.com  Wed Jun 6 11:37:36 2012
From: mal at egenix.com (M.-A. Lemburg)
Date: Wed, 06 Jun 2012 11:37:36 +0200
Subject: [Python-Dev] TZ-aware local time
In-Reply-To:
References: <20120605201854.5e027298@limelight.wooz.org>
Message-ID: <4FCF24E0.5010603@egenix.com>

Just to add my 2 cents to this discussion as someone who's worked with mxDateTime for almost 15 years.

I think we all agree that users of an application want to input date/time data using their local time (which may very well not be the timezone of the system running the application). On output they want to see their timezone as well, for obvious reasons.
Now timezones are by nature not strictly defined; they change very often in history and, what's worse, there's no way to predict the timezone details for the future. In many places around the world, the government defines the timezone data and they keep on changing the aspects every now and then, support daylight saving time, drop the support, remove timezones for their countries, add new ones, or simply shift to a different time zone.

The only timezone data that's more or less defined is historic timezone data, but even there, different sources can give different data.

What does this mean for the application?

An application doesn't care about the timezone of a point in date/time. It just wants a standard way to store the date/time and a reliable way to work with it. The most commonly used standard for this is the UTC standard and so it's become good practice to convert all date/time values in applications to UTC for storage, math and manipulation. Just like with Unicode, the conversion to local time of the user happens at the UI level.

Conversion from input data to UTC is easy, given the available C lib mechanisms (relying on the tz database). Conversion from UTC to local time is more difficult, but can also be done using the tz database.

The timezone information of the entered data or the user's locale is usually available either through the environment, a configuration file or a database storing the original data - both on the input and on the output side. There's no need to stick this information onto the basic data types, since the application will already know anyway.

For most use cases, this strategy works out really well.

There are some cases, though, where you do need to work with local time instead of UTC. One such case is the definition of relative date/time values, another related one, the definition of repeating date/time values. These are often defined by users in terms of their local time or relative to other timezones they intend to travel to, so in order to convert the definitions back to UTC you have to run part of the calculation in the resp. local time zone. Repeating date/time values also tend to take other data into account such as bank holidays, opening times, etc. There's no end to making this more and more complicated :-)

However, these things are not in the realm of a basic type anymore. They are application specific details. As a result, it's better to provide tools to implement all this, but not try to force design decisions onto the application writer (which will eventually get in the way).

BTW: That's the main reason why I have so far refused to add native timezone support to the mxDateTime data types and instead let the applications decide on what's the best way for their particular use case. mxDateTime does provide extra tools for timezone support, but doesn't get in the way. It has so far worked out really well.

--
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 06 2012)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________
2012-07-17: Python Meeting Duesseldorf ...                 41 days to go
2012-07-02: EuroPython 2012, Florence, Italy ...           26 days to go
2012-05-16: Released eGenix pyOpenSSL 0.13 ...    http://egenix.com/go29

::: Try our new mxODBC.Connect Python Database Interface for free !
:::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ From edreamleo at gmail.com Wed Jun 6 13:24:09 2012 From: edreamleo at gmail.com (Edward K. Ream) Date: Wed, 6 Jun 2012 06:24:09 -0500 Subject: [Python-Dev] Static type analysis Message-ID: Hello all, I'm wondering whether this is the appropriate place to discuss (global) static type analysis, a topic Guido mentioned around the 28 min mark in his PyCon 2012 keynote, http://pyvideo.org/video/956/keynote-guido-van-rossum This is a topic that has interested me for a long time, and it has important implications for Leo. Just now I have some free time to devote to it. Edward ------------------------------------------------------------------------------ Edward K. Ream email: edreamleo at gmail.com Leo: http://webpages.charter.net/edreamleo/front.html ------------------------------------------------------------------------------ From ijmorlan at uwaterloo.ca Wed Jun 6 15:28:33 2012 From: ijmorlan at uwaterloo.ca (Isaac Morland) Date: Wed, 6 Jun 2012 09:28:33 -0400 (EDT) Subject: [Python-Dev] Updated PEP 362 (Function Signature Object) In-Reply-To: References: Message-ID: On Wed, 6 Jun 2012, Nick Coghlan wrote: > 2. Signature.bind introduces the ability to split the "bind arguments > to parameters" operation from the "call object" operation Has anybody considered calling bind __call__? That is, the result of calling the signature of a procedure instead of the procedure itself is the locals() dictionary the procedure would start with (except presumably missing non-parameter local variables). Isaac Morland CSCF Web Guru DC 2554C, x36650 WWW Software Specialist From yselivanov.ml at gmail.com Wed Jun 6 16:04:50 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Wed, 6 Jun 2012 10:04:50 -0400 Subject: [Python-Dev] Updated PEP 362 (Function Signature Object) In-Reply-To: References: Message-ID: On 2012-06-06, at 9:28 AM, Isaac Morland wrote: > On Wed, 6 Jun 2012, Nick Coghlan wrote: > >> 2. Signature.bind introduces the ability to split the "bind arguments >> to parameters" operation from the "call object" operation > > Has anybody considered calling bind __call__? That is, the result of calling the signature of a procedure instead of the procedure itself is the locals() dictionary the procedure would start with (except presumably missing non-parameter local variables). I'd stick with more explicit 'bind' method. Compare (given the 'sig = signature(func)'): ba = sig(*args, **kwargs) to: ba = sig.bind(*args, **kwargs) The second case looks more clear to me. Thanks, - Yury From yselivanov.ml at gmail.com Wed Jun 6 16:20:32 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Wed, 6 Jun 2012 10:20:32 -0400 Subject: [Python-Dev] Updated PEP 362 (Function Signature Object) In-Reply-To: References: Message-ID: <56CEC722-1A72-4F0E-8E36-660DB7839F0D@gmail.com> On 2012-06-06, at 2:48 AM, Nick Coghlan wrote: > However, looking at the code, I think the split that makes sense is > for a lower level functools.signature to *only* support real function > objects (i.e. not even method objects). 
>
> At the inspect layer, inspect.signature could then support retrieving
> a signature for an arbitrary callable roughly as follows:
>
>     def signature(f):
>         try:
>             # Real functions are handled directly by functools
>             return functools.signature(f)
>         except TypeError:
>             pass
>         # Not a function object, handle other kinds of callable
>         if isclass(f):
>             # Figure out a Signature based on f.__new__ and f.__init__
>             # Complain if the signatures are contradictory
>             # Account for the permissive behaviour of object.__new__
>             # and object.__init__
>             return class_signature
>         if ismethod(f):
>             f = f.__func__
>         elif not isfunction(f):
>             try:
>                 f = f.__call__
>             except AttributeError:
>                 pass
>         return signature(f)  # Use recursion for the initial implementation sketch
>
> That code is almost certainly wrong, but it should be enough to give
> the general idea. The short version is:
>
> 1. Provide a functools.signature that expects ordinary function
> objects (or objects with a __signature__ attribute)
> 2. Provide an inspect.signature that also handles other kinds of callable

I like the idea of making 'signature' capable of introspecting any callable, be it a class, an object with __call__, a method, or a function. However, I don't think we should have two 'signature'-related mechanisms available in two separate modules. This will inevitably raise questions about which one to use, and which is used in some piece of code you're staring at ;)

I agree that we shouldn't make 'functools' depend on the 'inspect' module. Moreover, this is not even currently possible, as it creates an import loop that is hard to untie. But how about the following:

1. Separate the 'Signature' object from the 'inspect' module, and move it to a private '_signature.py' (that will depend only on 'collections.OrderedDict', 'itertools.chain' and 'types')
2. Publish it in the 'inspect' module
3. Make the 'signature' function work with any callable
4. Make the 'Signature' class accept only functions
5. Import '_signature' in 'functools', and use the 'Signature' class directly, as it will accept just plain functions.

Would this work?

- Yury

From barry at python.org  Wed Jun 6 16:25:05 2012
From: barry at python.org (Barry Warsaw)
Date: Wed, 6 Jun 2012 10:25:05 -0400
Subject: [Python-Dev] Open PEPs and large-scale changes for 3.3
In-Reply-To: <87d35c3md3.fsf@benfinney.id.au>
References: <87havz8m6p.fsf@benfinney.id.au> <87d35c3md3.fsf@benfinney.id.au>
Message-ID: <20120606102505.773b0a62@resist.wooz.org>

On Jun 06, 2012, at 05:55 PM, Ben Finney wrote:

>The PEP document currently says it targets "3.x". I'll leave it in that
>state until we're more confident that the current work will be on track
>for a particular Python release.
>
>Do I need to do anything in particular to be explicit that PEP 3143 is
>not coming in Python 3.3?

Nope, I think that's fine.
-Barry

From steve at pearwood.info  Wed Jun 6 17:38:13 2012
From: steve at pearwood.info (Steven D'Aprano)
Date: Thu, 07 Jun 2012 01:38:13 +1000
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To:
References:
Message-ID: <4FCF7965.5070102@pearwood.info>

Brett Cannon wrote:

> PEP: 362
> Title: Function Signature Object
> Version: $Revision$
> Last-Modified: $Date$
> Author: Brett Cannon , Jiwon Seo ,
> Yury Selivanov , Larry Hastings <larry at hastings.org>
> Status: Draft
> Type: Standards Track
> Content-Type: text/x-rst
> Created: 21-Aug-2006
> Python-Version: 3.3
> Post-History: 04-Jun-2012
>
>
> Abstract
> ========
>
> Python has always supported powerful introspection capabilities,
> including introspecting functions and methods. (For the rest of
> this PEP, "function" refers to both functions and methods). By
> examining a function object you can fully reconstruct the function's
> signature. Unfortunately this information is stored in an inconvenient
> manner, and is spread across a half-dozen deeply nested attributes.
>
> This PEP proposes a new representation for function signatures.
> The new representation contains all necessary information about a function
> and its parameters, and makes introspection easy and straightforward.

It's already easy and straightforward, thanks to the existing inspect.getfullargspec function. If this existing inspect function is lacking something, the PEP should explain what, and why the inspect function can't be fixed.

> However, this object does not replace the existing function
> metadata, which is used by Python itself to execute those
> functions. The new metadata object is intended solely to make
> function introspection easier for Python programmers.

What happens when the existing function metadata and the __signature__ object disagree?

Are there use-cases where we want them to disagree, or is disagreement always a sign that something is broken?

> Signature Object
> ================
>
> A Signature object represents the overall signature of a function.
> It stores a `Parameter object`_ for each parameter accepted by the
> function, as well as information specific to the function itself.

There's a considerable amount of data recorded, including a number of mappings (dicts?). This potentially increases the size of functions, and the overhead of creating them. Since most functions are never introspected, or only rarely introspected, it seems rather wasteful to record all this data "just in case", particularly since it's already recorded once in the function metadata and/or code object.

> A Signature object has the following public attributes and methods:
>
> * name : str
>     Name of the function.

Functions already record their name (twice!), and it is simple enough to query func.__name__. What reason is there for recording it a third time, in the Signature object?

Besides, I don't consider the name of the function part of the function's signature. Functions can have multiple names, or no name at all, and the calling signature remains the same.

Even if we limit the discussion to distinct functions (rather than a single function with multiple names), I consider spam(x, y, z) ham(x, y, z) and eggs(x, y, z) to have the same signature. Otherwise, it makes it difficult to talk about one function having the same signature as another function, unless they also have the same name. Which would be unfortunate.

> * qualname : str
>     Fully qualified name of the function.
What's the fully qualified name of the function, and why is it needed?

[...]
> The structure of the Parameter object is:

> * is_args : bool
>     True if the parameter accepts variable number of arguments
>     (``\*args``-like), else False.

"args" is just a common name for the parameter, not for the kind of parameter. *args (or *data, *whatever) is a varargs parameter, and so the attribute should be called "is_varargs".

> * is_kwargs : bool
>     True if the parameter accepts variable number of keyword
>     arguments (``\*\*kwargs``-like), else False.

Likewise for **kwargs (or **kw, etc.) I'm not sure if there is a common convention for keyword varargs, so I see two options:

is_varkwargs
is_kwvarargs

> * is_implemented : bool
>     True if the parameter is implemented for use. Some platforms
>     implement functions but can't support specific parameters
>     (e.g. "mode" for os.mkdir). Passing in an unimplemented
>     parameter may result in the parameter being ignored,
>     or in NotImplementedError being raised. It is intended that
>     all conditions where ``is_implemented`` may be False be
>     thoroughly documented.

What to do about parameters which are partly implemented? E.g. mode='spam' is implemented but mode='ham' is not.

Is there a use-case for is_implemented?

[...]
> Annotation Checker

> def check_type(sig, arg_name, arg_type, arg_value):
>     # Internal function that incapsulates arguments type checking

/s/incapsulates/encapsulates

> Open Issues
> ===========

inspect.getfullargspec is currently unable to introspect builtin functions and methods. Should builtins gain a __signature__ so they can be introspected?

> When to construct the Signature object?
> ---------------------------------------
>
> The Signature object can either be created in an eager or lazy
> fashion. In the eager situation, the object can be created during
> creation of the function object. In the lazy situation, one would
> pass a function object to a function and that would generate the
> Signature object and store it to ``__signature__`` if
> needed, and then return the value of ``__signature__``.
>
> In the current implementation, signatures are created only on demand
> ("lazy").

+1

> Deprecate ``inspect.getfullargspec()`` and ``inspect.getcallargs()``?
> ---------------------------------------------------------------------

-1

> Since the Signature object replicates the use of ``getfullargspec()``
> and ``getcallargs()`` from the ``inspect`` module it might make sense
> to begin deprecating them in 3.3.

I think it is way too soon to deprecate anything. I don't think we should even consider PendingDeprecation until at least 3.4.

Actually, I would go further: leave getfullargspec to extract the *actual* argument spec from the code object, and __signature__ to be the claimed argument spec. Earlier, you state:

"Changes to the Signature object, or to any of its data members,
do not affect the function itself."

which leaves the possibility that __signature__ may no longer match the actual argument spec, for some reason. If you remove getfullargspec, people will have to reinvent it to deal with such cases.

--
Steven
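For reference, the existing API Steven is defending: a short, runnable look at what inspect.getfullargspec already reports today (getfullargspec is real, current API; the spam() function is just an example):

    import inspect

    def spam(x, y=2, *args, z: int = 3, **kw) -> str:
        return 'spam'

    spec = inspect.getfullargspec(spam)
    print(spec.args)                 # ['x', 'y']
    print(spec.defaults)             # (2,)
    print(spec.varargs, spec.varkw)  # args kw
    print(spec.kwonlydefaults)       # {'z': 3}
    print(spec.annotations)          # {'z': <class 'int'>, 'return': <class 'str'>}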
From pje at telecommunity.com  Wed Jun 6 17:39:07 2012
From: pje at telecommunity.com (PJ Eby)
Date: Wed, 6 Jun 2012 11:39:07 -0400
Subject: [Python-Dev] Possible rough edges in Python 3 metaclasses (was Re: Language reference updated for metaclasses)
In-Reply-To:
References:
Message-ID:

On Wed, Jun 6, 2012 at 5:31 AM, Nick Coghlan wrote:
> On Wed, Jun 6, 2012 at 5:09 PM, Nick Coghlan wrote:
> > On Wed, Jun 6, 2012 at 1:28 AM, PJ Eby wrote:
> >> To be clear, what I specifically proposed (as I mentioned in an earlier
> >> thread) was simply to patch __build_class__ in order to restore the
> >> missing __metaclass__ hook. (Which, incidentally, would make ALL code
> >> using __metaclass__ cross-version compatible between 2.x and 3.x: a
> >> potentially valuable thing in and of itself!)
> >>
> >> As for metaclasses being hard to compose, PEP 422 is definitely a step
> >> in the right direction. (Automatic metaclass combining is about the
> >> only thing that would improve it any further.)
> >
> > Just as a warning, I doubt I'll be able to persuade enough people that
> > this is a feature worth including in the short time left before 3.3
> > feature freeze. It may end up being necessary to publish metaclass
> > and explicit decorator based variants (with their known limitations),
> > with a view to gaining support for inclusion in 3.4.
>
> Upgrading this warning to a fact: there's no way this topic can be
> given the consideration it deserves in the space of the next three
> weeks. I'll be changing the title of 422, spending more time discussing
> the problem (rather than leaping to a conclusion) and retargeting the
> PEP at 3.4.
>
> If you do decide to play around with monkeypatching __build_class__,
> please make clear to any users that it's a temporary fix until
> something more robust and less implementation dependent can be devised
> for 3.4.

Ideally, I would actually implement it as a backport of the PEP... in which case I suppose making it part of the class creation machinery (vs. embedding it in type.__call__ or some place like that) will make that process easier.

Again, as I said earlier, I'm talking about this now because there was related discussion now, not because I'm actively trying to port my libraries. At this point, I've only done a few "make this library usable from 3.x as-is" changes by user request, for some of my smaller libraries that were mostly there already (e.g. simplegeneric).

From mark at hotpy.org  Wed Jun 6 17:50:15 2012
From: mark at hotpy.org (Mark Shannon)
Date: Wed, 06 Jun 2012 16:50:15 +0100
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To: <4FCF7965.5070102@pearwood.info>
References: <4FCF7965.5070102@pearwood.info>
Message-ID: <4FCF7C37.4030403@hotpy.org>

Steven D'Aprano wrote:
> Brett Cannon wrote:
>
>> PEP: 362
>> Title: Function Signature Object
>> Version: $Revision$
>> Last-Modified: $Date$
>> Author: Brett Cannon , Jiwon Seo ,
>> Yury Selivanov , Larry Hastings <larry at hastings.org>
>> Status: Draft
>> Type: Standards Track
>> Content-Type: text/x-rst
>> Created: 21-Aug-2006
>> Python-Version: 3.3
>> Post-History: 04-Jun-2012
>>
>>
>> Abstract
>> ========
>>
>> Python has always supported powerful introspection capabilities,
>> including introspecting functions and methods. (For the rest of
>> this PEP, "function" refers to both functions and methods).
>> By examining a function object you can fully reconstruct the function's
>> signature. Unfortunately this information is stored in an inconvenient
>> manner, and is spread across a half-dozen deeply nested attributes.
>>
>> This PEP proposes a new representation for function signatures.
>> The new representation contains all necessary information about a
>> function
>> and its parameters, and makes introspection easy and straightforward.
>
> It's already easy and straightforward, thanks to the existing
> inspect.getfullargspec function. If this existing inspect function is
> lacking something, the PEP should explain what, and why the inspect
> function can't be fixed.
>
>> However, this object does not replace the existing function
>> metadata, which is used by Python itself to execute those
>> functions. The new metadata object is intended solely to make
>> function introspection easier for Python programmers.
>
> What happens when the existing function metadata and the __signature__
> object disagree?
>
> Are there use-cases where we want them to disagree, or is disagreement
> always a sign that something is broken?
>
>> Signature Object
>> ================
>>
>> A Signature object represents the overall signature of a function.
>> It stores a `Parameter object`_ for each parameter accepted by the
>> function, as well as information specific to the function itself.
>
> There's a considerable amount of data recorded, including a number of
> mappings (dicts?). This potentially increases the size of functions, and
> the overhead of creating them. Since most functions are never
> introspected, or only rarely introspected, it seems rather wasteful to
> record all this data "just in case", particularly since it's already
> recorded once in the function metadata and/or code object.

I agree with Steven. Don't forget that each list comprehension evaluation involves creating a temporary function object.

>> A Signature object has the following public attributes and methods:
>>
>> * name : str
>>     Name of the function.
>
> Functions already record their name (twice!), and it is simple enough to
> query func.__name__. What reason is there for recording it a third time,
> in the Signature object?
>
> Besides, I don't consider the name of the function part of the
> function's signature. Functions can have multiple names, or no name at
> all, and the calling signature remains the same.
>
> Even if we limit the discussion to distinct functions (rather than a
> single function with multiple names), I consider spam(x, y, z) ham(x, y,
> z) and eggs(x, y, z) to have the same signature. Otherwise, it makes it
> difficult to talk about one function having the same signature as
> another function, unless they also have the same name. Which would be
> unfortunate.
>
>> * qualname : str
>>     Fully qualified name of the function.
>
> What's the fully qualified name of the function, and why is it needed?
>
> [...]
>> The structure of the Parameter object is:
>
>> * is_args : bool
>>     True if the parameter accepts variable number of arguments
>>     (``\*args``-like), else False.
>
> "args" is just a common name for the parameter, not for the kind of
> parameter. *args (or *data, *whatever) is a varargs parameter, and so
> the attribute should be called "is_varargs".
>
>> * is_kwargs : bool
>>     True if the parameter accepts variable number of keyword
>>     arguments (``\*\*kwargs``-like), else False.
>
> Likewise for **kwargs (or **kw, etc.)
> I'm not sure if there is a common
> convention for keyword varargs, so I see two options:
>
> is_varkwargs
> is_kwvarargs
>
>> * is_implemented : bool
>>     True if the parameter is implemented for use. Some platforms
>>     implement functions but can't support specific parameters
>>     (e.g. "mode" for os.mkdir). Passing in an unimplemented
>>     parameter may result in the parameter being ignored,
>>     or in NotImplementedError being raised. It is intended that
>>     all conditions where ``is_implemented`` may be False be
>>     thoroughly documented.
>
> What to do about parameters which are partly implemented? E.g.
> mode='spam' is implemented but mode='ham' is not.
>
> Is there a use-case for is_implemented?
>
> [...]
>> Annotation Checker
>
>> def check_type(sig, arg_name, arg_type, arg_value):
>>     # Internal function that incapsulates arguments type checking
>
> /s/incapsulates/encapsulates
>
>> Open Issues
>> ===========
>
> inspect.getfullargspec is currently unable to introspect builtin
> functions and methods. Should builtins gain a __signature__ so they can
> be introspected?

I'm +0 on this, but care is needed as print and [].append are the same type in CPython.

>> When to construct the Signature object?
>> ---------------------------------------
>>
>> The Signature object can either be created in an eager or lazy
>> fashion. In the eager situation, the object can be created during
>> creation of the function object. In the lazy situation, one would
>> pass a function object to a function and that would generate the
>> Signature object and store it to ``__signature__`` if
>> needed, and then return the value of ``__signature__``.
>>
>> In the current implementation, signatures are created only on demand
>> ("lazy").
>
> +1

+1 also. See comment above about list comprehensions.

>> Deprecate ``inspect.getfullargspec()`` and ``inspect.getcallargs()``?
>> ---------------------------------------------------------------------
>
> -1
>
>> Since the Signature object replicates the use of ``getfullargspec()``
>> and ``getcallargs()`` from the ``inspect`` module it might make sense
>> to begin deprecating them in 3.3.
>
> I think it is way too soon to deprecate anything. I don't think we should
> even consider PendingDeprecation until at least 3.4.
>
> Actually, I would go further: leave getfullargspec to extract the
> *actual* argument spec from the code object, and __signature__ to be the
> claimed argument spec. Earlier, you state:
>
> "Changes to the Signature object, or to any of its data members,
> do not affect the function itself."
>
> which leaves the possibility that __signature__ may no longer match the
> actual argument spec, for some reason. If you remove getfullargspec,
> people will have to reinvent it to deal with such cases.

From larry at hastings.org  Wed Jun 6 18:05:48 2012
From: larry at hastings.org (Larry Hastings)
Date: Wed, 06 Jun 2012 09:05:48 -0700
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To: <4FCF7965.5070102@pearwood.info>
References: <4FCF7965.5070102@pearwood.info>
Message-ID: <4FCF7FDC.2080000@hastings.org>

On 06/06/2012 08:38 AM, Steven D'Aprano wrote:
> What's the fully qualified name of the function, and why is it needed?

Please see PEP 3155.

> "args" is just a common name for the parameter, not for the kind of
> parameter. *args (or *data, *whatever) is a varargs parameter, and so
> the attribute should be called "is_varargs".
> [...]
> Likewise for **kwargs (or **kw, etc.)
Yury will be pleased; those were his original names. I argued for "is_args" and "is_kwargs". I assert that "args" and "kwargs" are not merely "common name[s] for the parameter[s]", they are The Convention. Any seasoned Python programmer examining a Signature object who sees "is_args" and "is_kwargs" will understand immediately what they are. Jamming "var" in the middle of these names does not make their meaning any clearer--in fact I suggest it only detracts from readability.

> Is there a use-case for is_implemented?

Yes, see issue 14626.

> What happens when the existing function metadata and the __signature__
> object disagree?
>
> Are there use-cases where we want them to disagree, or is disagreement
> always a sign that something is broken?
> [...]
> "Changes to the Signature object, or to any of its data members,
> do not affect the function itself."
>
> which leaves the possibility that __signature__ may no longer match
> the actual argument spec, for some reason. If you remove
> getfullargspec, people will have to reinvent it to deal with such cases.

There's no reason why they should disagree. The "some reason" would be if some doorknob decided to change it--the objects are mutable, because there's no good reason to make them immutable. We just wanted to be explicit that information flowed from the callable to the Signature and never the other way 'round.

As for "what would happen", nothing good. My advice: don't change Signature objects for no reason.

//arry/

From yselivanov.ml at gmail.com  Wed Jun 6 18:20:32 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Wed, 6 Jun 2012 12:20:32 -0400
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To: <4FCF7965.5070102@pearwood.info>
References: <4FCF7965.5070102@pearwood.info>
Message-ID: <94F6B981-560F-45FE-95DD-4C8E2D49075A@gmail.com>

Steven,

On 2012-06-06, at 11:38 AM, Steven D'Aprano wrote:
> Brett Cannon wrote:
>> Python has always supported powerful introspection capabilities,
>> including introspecting functions and methods. (For the rest of
>> this PEP, "function" refers to both functions and methods). By
>> examining a function object you can fully reconstruct the function's
>> signature. Unfortunately this information is stored in an inconvenient
>> manner, and is spread across a half-dozen deeply nested attributes.
>> This PEP proposes a new representation for function signatures.
>> The new representation contains all necessary information about a function
>> and its parameters, and makes introspection easy and straightforward.
>
> It's already easy and straightforward, thanks to the existing inspect.getfullargspec function. If this existing inspect function is lacking something, the PEP should explain what, and why the inspect function can't be fixed.

Well, the PEP addresses this question by saying: "Unfortunately this information is stored in an inconvenient manner, and is spread across a half-dozen deeply nested attributes." And that's true. As of now 'inspect.getfullargspec' returns a named tuple, where all parameters are spread between different tuple items. Essentially, what 'inspect.getfullargspec' really does is simply pack the function's and its code object's attributes into a tuple.

Compare it with the elegance the 'Signature' and 'Parameter' classes provide.
Example:

    def foo(a, bar:int=1): pass

With signature:

    sig = signature(foo)
    print('bar: annotation', sig.parameters['bar'].annotation)
    print('bar: default', sig.parameters['bar'].default)

and with 'inspect':

    args_spec = inspect.getfullargspec(foo)
    print('bar: annotation', args_spec.annotations['bar'])
    print('bar: default', args_spec.defaults[0])  # <- '0'?

The Signature object is a much nicer API. It becomes especially obvious when you need to write more complicated stuff (you can see the PEP's examples.)

>> However, this object does not replace the existing function
>> metadata, which is used by Python itself to execute those
>> functions. The new metadata object is intended solely to make
>> function introspection easier for Python programmers.
>
> What happens when the existing function metadata and the __signature__ object disagree?

That's an issue to think about. We may want to make 'Signature.parameters' immutable by hiding it behind the newly added 'types.MappingProxyType'. But I don't foresee modifications of Signature objects being frequent. It may happen for a very specific reason in very specific user code, and I see no point in denying this.

>> Signature Object
>> ================
>> A Signature object represents the overall signature of a function.
>> It stores a `Parameter object`_ for each parameter accepted by the
>> function, as well as information specific to the function itself.
>
> There's a considerable amount of data recorded, including a number of mappings (dicts?). This potentially increases the size of functions, and the overhead of creating them. Since most functions are never introspected, or only rarely introspected, it seems rather wasteful to record all this data "just in case", particularly since it's already recorded once in the function metadata and/or code object.

The object is now created in a lazy manner, once requested.

>> A Signature object has the following public attributes and methods:
>> * name : str
>>     Name of the function.
>
> Functions already record their name (twice!), and it is simple enough to query func.__name__. What reason is there for recording it a third time, in the Signature object?

The Signature object holds the function's information and presents it in a convenient manner. It makes sense to store the function's name, together with the information about its parameters and return annotation.

> Besides, I don't consider the name of the function part of the function's signature. Functions can have multiple names, or no name at all, and the calling signature remains the same.

It always has _one_ name it was defined with, unless it's a lambda function.

> Even if we limit the discussion to distinct functions (rather than a single function with multiple names), I consider spam(x, y, z) ham(x, y, z) and eggs(x, y, z) to have the same signature. Otherwise, it makes it difficult to talk about one function having the same signature as another function, unless they also have the same name. Which would be unfortunate.

I see the point ;) Let's see what other devs think.

>> * qualname : str
>>     Fully qualified name of the function.
>
> What's the fully qualified name of the function, and why is it needed?

See PEP 3155.

> [...]
>> The structure of the Parameter object is:
>
>> * is_args : bool
>>     True if the parameter accepts variable number of arguments
>>     (``\*args``-like), else False.
>
> "args" is just a common name for the parameter, not for the kind of parameter.
> *args (or *data, *whatever) is a varargs parameter, and so the attribute should be called "is_varargs".

I had a discussion regarding that with Brett and Larry. They've convinced me that '*args' and '**kwargs' are The Ultimate Convention for naming these parameters and working with them. This convention is immediately recognized by most developers; hence, it'd be easier to understand at first glance what 'is_args' means, compared to 'is_varargs'.

>> * is_implemented : bool
>>     True if the parameter is implemented for use. Some platforms
>>     implement functions but can't support specific parameters
>>     (e.g. "mode" for os.mkdir). Passing in an unimplemented
>>     parameter may result in the parameter being ignored,
>>     or in NotImplementedError being raised. It is intended that
>>     all conditions where ``is_implemented`` may be False be
>>     thoroughly documented.
>
> What to do about parameters which are partly implemented? E.g. mode='spam' is implemented but mode='ham' is not.
>
> Is there a use-case for is_implemented?

That's a question to Larry.

> [...]
>> Annotation Checker
>
>> def check_type(sig, arg_name, arg_type, arg_value):
>>     # Internal function that incapsulates arguments type checking
>
> /s/incapsulates/encapsulates

Thanks!

>> Open Issues
>> ===========
>
> inspect.getfullargspec is currently unable to introspect builtin functions and methods. Should builtins gain a __signature__ so they can be introspected?

Again, a question to Larry. He's building a new API to work with function arguments at the C level, and basically, it can construct Signature objects automatically. Brett, on the other hand, has the idea of parsing builtin functions' docstrings, as they almost always describe the signature on their first line:

    >>> any.__doc__
    'any(iterable) -> bool\n\nReturn True if bool(x) is True for any x in the iterable.'

We can surely implement it (and in a short time), but I doubt that 3.3 is a good target for this particular feature.

>> Deprecate ``inspect.getfullargspec()`` and ``inspect.getcallargs()``?
>> ---------------------------------------------------------------------
>
> -1
>
>> Since the Signature object replicates the use of ``getfullargspec()``
>> and ``getcallargs()`` from the ``inspect`` module it might make sense
>> to begin deprecating them in 3.3.
>
> I think it is way too soon to deprecate anything. I don't think we should even consider PendingDeprecation until at least 3.4.

+0.

> Actually, I would go further: leave getfullargspec to extract the *actual* argument spec from the code object, and __signature__ to be the claimed argument spec. Earlier, you state:
>
> "Changes to the Signature object, or to any of its data members,
> do not affect the function itself."
>
> which leaves the possibility that __signature__ may no longer match the actual argument spec, for some reason. If you remove getfullargspec, people will have to reinvent it to deal with such cases.

Again, as I explained above - we can make Signatures immutable, but the cases of modifying them should be extremely rare.

Thank you,
- Yury

From larry at hastings.org  Wed Jun 6 18:36:15 2012
From: larry at hastings.org (Larry Hastings)
Date: Wed, 06 Jun 2012 09:36:15 -0700
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To: <4FCF7FDC.2080000@hastings.org>
References: <4FCF7965.5070102@pearwood.info> <4FCF7FDC.2080000@hastings.org>
Message-ID: <4FCF86FF.600@hastings.org>

On 06/06/2012 09:05 AM, Larry Hastings wrote:
>> Is there a use-case for is_implemented?
>
> Yes, see issue 14626.

I should add, there are already some places in the standard library where is_implemented would be relevant. The "mode" argument to os.mkdir comes immediately to mind; on Windows it is accepted but ignored. A counter-example would be os.symlink, which takes an extra parameter on Windows that's *not even accepted* on other platforms.

I am utterly convinced that, when faced with these sorts of platform-specific API differences, the first step towards sanity is to have the API accept a consistent signature everywhere. What you do after that is up for debate--in most cases where the parameter causes a significant semantic change, I think specifying it with a non-default value should throw a NotImplementedError. (With the specific case of os.mkdir on Windows, I can agree with silently ignoring the mode; it's not like the hapless Windows programmer could react and take a useful alternative approach.) Parameter objects exposing is_implemented allow LBYL in these situations, rather than having to react to NotImplementedError.

//arry/
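A sketch of the LBYL check Larry describes, written against the PEP's *proposed* Parameter.is_implemented attribute (not an existing API, and it also assumes builtins like os.mkdir would grow signatures):

    from inspect import signature  # location still under discussion
    import os

    sig = signature(os.mkdir)      # assumes builtins gain __signature__
    if sig.parameters['mode'].is_implemented:
        os.mkdir('spam', mode=0o700)
    else:
        os.mkdir('spam')           # mode would be silently ignored anyway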
From ericsnowcurrently at gmail.com  Wed Jun 6 19:02:43 2012
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Wed, 6 Jun 2012 11:02:43 -0600
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To: <94F6B981-560F-45FE-95DD-4C8E2D49075A@gmail.com>
References: <4FCF7965.5070102@pearwood.info> <94F6B981-560F-45FE-95DD-4C8E2D49075A@gmail.com>
Message-ID:

On Wed, Jun 6, 2012 at 10:20 AM, Yury Selivanov wrote:
> On 2012-06-06, at 11:38 AM, Steven D'Aprano wrote:
>> Functions already record their name (twice!), and it is simple enough to query func.__name__. What reason is there for recording it a third time, in the Signature object?
>
> The Signature object holds the function's information and presents it in a
> convenient manner. It makes sense to store the function's name,
> together with the information about its parameters and return
> annotation.
>
>> Besides, I don't consider the name of the function part of the function's signature. Functions can have multiple names, or no name at all, and the calling signature remains the same.
>
> It always has _one_ name it was defined with, unless it's
> a lambda function.
>
>> Even if we limit the discussion to distinct functions (rather than a single function with multiple names), I consider spam(x, y, z) ham(x, y, z) and eggs(x, y, z) to have the same signature. Otherwise, it makes it difficult to talk about one function having the same signature as another function, unless they also have the same name. Which would be unfortunate.
>
> I see the point ;) Let's see what other devs think.

I'm with Steven on this one. What's the benefit to storing the name or qualname on the signature object? That ties the signature object to a specific function. If you access the signature object by f.__signature__ then you already have f and its name. If you get it by calling signature(f), then you also have f and the name. If you are passing signature objects for some reason and there's a use case for which the name/qualname matters, wouldn't it be better to pass the functions around anyway? What about when you create a signature object on its own and you don't care about the name or qualname...why should it need them? Does Signature.bind() need them?

FWIW, I think this PEP is great and am ecstatic that someone is pushing it forward. :)

-eric

From steve at pearwood.info  Wed Jun 6 18:16:45 2012
From: steve at pearwood.info (Steven D'Aprano)
Date: Thu, 07 Jun 2012 02:16:45 +1000
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To: <4FCF7FDC.2080000@hastings.org>
References: <4FCF7965.5070102@pearwood.info> <4FCF7FDC.2080000@hastings.org>
Message-ID: <4FCF826D.7040007@pearwood.info>

Larry Hastings wrote:
>> [...]
>> "Changes to the Signature object, or to any of its data members,
>> do not affect the function itself."
>>
>> which leaves the possibility that __signature__ may no longer match
>> the actual argument spec, for some reason. If you remove
>> getfullargspec, people will have to reinvent it to deal with such cases.
>
> There's no reason why they should disagree. The "some reason" would be
> if some doorknob decided to change it--the objects are mutable, because
> there's no good reason to make them immutable.

Nevertheless, the world is full of doorknobs, and people will have to deal with their code.

The case for deprecating getfullargspec is weak. The case for deprecating it *right now* is even weaker. Let's not rush to throw away working code.

--
Steven

From yselivanov.ml at gmail.com  Wed Jun 6 19:10:16 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Wed, 6 Jun 2012 13:10:16 -0400
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To:
References: <4FCF7965.5070102@pearwood.info> <94F6B981-560F-45FE-95DD-4C8E2D49075A@gmail.com>
Message-ID:

Eric,

On 2012-06-06, at 1:02 PM, Eric Snow wrote:
> On Wed, Jun 6, 2012 at 10:20 AM, Yury Selivanov wrote:
>> On 2012-06-06, at 11:38 AM, Steven D'Aprano wrote:
>>> Functions already record their name (twice!), and it is simple enough to query func.__name__. What reason is there for recording it a third time, in the Signature object?
>>
>> The Signature object holds the function's information and presents it in a
>> convenient manner. It makes sense to store the function's name,
>> together with the information about its parameters and return
>> annotation.
>>
>>> Besides, I don't consider the name of the function part of the function's signature. Functions can have multiple names, or no name at all, and the calling signature remains the same.
>>
>> It always has _one_ name it was defined with, unless it's
>> a lambda function.
>>
>>> Even if we limit the discussion to distinct functions (rather than a single function with multiple names), I consider spam(x, y, z) ham(x, y, z) and eggs(x, y, z) to have the same signature. Otherwise, it makes it difficult to talk about one function having the same signature as another function, unless they also have the same name. Which would be unfortunate.
>>
>> I see the point ;) Let's see what other devs think.
>
> I'm with Steven on this one. What's the benefit to storing the name
> or qualname on the signature object? That ties the signature object
> to a specific function. If you access the signature object by
> f.__signature__ then you already have f and its name. If you get it
> by calling signature(f), then you also have f and the name. If you
> are passing signature objects for some reason and there's a use case
> for which the name/qualname matters, wouldn't it be better to pass the
> functions around anyway? What about when you create a signature
> object on its own and you don't care about the name or qualname...why
> should it need them? Does Signature.bind() need them?

Yes, 'Signature.bind' needs 'qualname' for error messages. But it can be stored as a private attribute.
I like the idea of 'foo(a)' and 'bar(a)' having identical signatures; however, I don't think it's possible. That is, we can't make 'signature(foo) is signature(bar)' hold. We can implement the __eq__ operator though.

For me, the signature of a function is not just a description of its parameters, so it seems practical to store its name too.

> FWIW, I think this PEP is great and am ecstatic that someone is
> pushing it forward. :)

Thanks ;)

- Yury

From alexandre.zani at gmail.com  Wed Jun 6 19:13:46 2012
From: alexandre.zani at gmail.com (Alexandre Zani)
Date: Wed, 6 Jun 2012 10:13:46 -0700
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To:
References: <4FCF7965.5070102@pearwood.info> <94F6B981-560F-45FE-95DD-4C8E2D49075A@gmail.com>
Message-ID:

A question regarding the name. I have often seen the following pattern in decorators:

    def decor(f):
        def some_func(a, b):
            ...  # do stuff using f
        some_func.__name__ = f.__name__
        return some_func

What are the name and fully qualified names in the signature for the returned function? some_func.__name__ or f.__name__?

On Wed, Jun 6, 2012 at 10:02 AM, Eric Snow wrote:
> On Wed, Jun 6, 2012 at 10:20 AM, Yury Selivanov wrote:
>> On 2012-06-06, at 11:38 AM, Steven D'Aprano wrote:
>>> Functions already record their name (twice!), and it is simple enough to query func.__name__. What reason is there for recording it a third time, in the Signature object?
>>
>> The Signature object holds the function's information and presents it in a
>> convenient manner. It makes sense to store the function's name,
>> together with the information about its parameters and return
>> annotation.
>>
>>> Besides, I don't consider the name of the function part of the function's signature. Functions can have multiple names, or no name at all, and the calling signature remains the same.
>>
>> It always has _one_ name it was defined with, unless it's
>> a lambda function.
>>
>>> Even if we limit the discussion to distinct functions (rather than a single function with multiple names), I consider spam(x, y, z) ham(x, y, z) and eggs(x, y, z) to have the same signature. Otherwise, it makes it difficult to talk about one function having the same signature as another function, unless they also have the same name. Which would be unfortunate.
>>
>> I see the point ;) Let's see what other devs think.
>
> I'm with Steven on this one. What's the benefit to storing the name
> or qualname on the signature object? That ties the signature object
> to a specific function. If you access the signature object by
> f.__signature__ then you already have f and its name. If you get it
> by calling signature(f), then you also have f and the name. If you
> are passing signature objects for some reason and there's a use case
> for which the name/qualname matters, wouldn't it be better to pass the
> functions around anyway? What about when you create a signature
> object on its own and you don't care about the name or qualname...why
> should it need them? Does Signature.bind() need them?
>
> FWIW, I think this PEP is great and am ecstatic that someone is
> pushing it forward. :)
>
> -eric

From brett at python.org  Wed Jun  6 19:15:13 2012
From: brett at python.org (Brett Cannon)
Date: Wed, 6 Jun 2012 13:15:13 -0400
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To: <4FCF826D.7040007@pearwood.info>
References: <4FCF7965.5070102@pearwood.info> <4FCF7FDC.2080000@hastings.org> <4FCF826D.7040007@pearwood.info>
Message-ID: 

On Wed, Jun 6, 2012 at 12:16 PM, Steven D'Aprano wrote:
> Larry Hastings wrote:
>>> "Changes to the Signature object, or to any of its data members,
>>> do not affect the function itself."
>>>
>>> which leaves the possibility that __signature__ may no longer match the
>>> actual argument spec, for some reason. If you remove getfullargspec, people
>>> will have to reinvent it to deal with such cases.
>>
>> There's no reason why they should disagree. The "some reason" would be
>> if some doorknob decided to change it--the objects are mutable, because
>> there's no good reason to make them immutable.
>
> Nevertheless, the world is full of doorknobs, and people will have to deal
> with their code.

This is also Python, the language that assumes everyone is a consenting
adult.

> The case for deprecating getfullargspec is weak. The case for deprecating
> it *right now* is even weaker. Let's not rush to throw away working code.

If people really want to keep getfullargspec() around then I want to at
least add a note to the function that signature objects exist as an
alternative (but not vice-versa). I personally still regret the
getopt/argparse situation and this feels like that on a smaller scale.

-Brett

From larry at hastings.org  Wed Jun  6 19:26:21 2012
From: larry at hastings.org (Larry Hastings)
Date: Wed, 06 Jun 2012 10:26:21 -0700
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To: <4FCF826D.7040007@pearwood.info>
References: <4FCF7965.5070102@pearwood.info> <4FCF7FDC.2080000@hastings.org> <4FCF826D.7040007@pearwood.info>
Message-ID: <4FCF92BD.5060602@hastings.org>

On 06/06/2012 09:16 AM, Steven D'Aprano wrote:
> Nevertheless, the world is full of doorknobs, and people will have to
> deal with their code.

I'm having a hard time seeing it. Can you propose a credible situation
where

* some programmer would have a reason (even a bad reason) to modify the
  cached Signature for a function,
* as a result it would no longer correctly match the corresponding
  function,
* you would be forced to interact with this code and the modified
  Signature, and
* it would cause you problems?

If you can, what adjustment would you make to the PEP to ameliorate this
situation? And, as Brett cites, the consenting adults rule applies here.

All of a sudden I'm thinking of "Olsen's Standard Book Of British Birds",

/arry
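To make the __eq__ subthread above concrete: with comparison-based
equality (which is what the inspect module eventually shipped with), two
functions with matching parameters compare equal without being identical.
A minimal sketch:

    import inspect

    def spam(x, y, z):
        pass

    def ham(x, y, z):
        pass

    print(inspect.signature(spam) == inspect.signature(ham))  # True
    print(inspect.signature(spam) is inspect.signature(ham))  # False

Equality here depends only on the parameters (and return annotation), not
on the function's name, which is exactly the behaviour Steven argues for.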
From yselivanov.ml at gmail.com  Wed Jun  6 19:28:23 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Wed, 6 Jun 2012 13:28:23 -0400
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To: 
References: <4FCF7965.5070102@pearwood.info> <94F6B981-560F-45FE-95DD-4C8E2D49075A@gmail.com>
Message-ID: 

On 2012-06-06, at 1:13 PM, Alexandre Zani wrote:
> A question regarding the name. I have often seen the following pattern
> in decorators:
>
>     def decor(f):
>         def some_func(a, b):
>             ...  # do stuff using f
>         some_func.__name__ = f.__name__
>         return some_func
>
> What are the name and fully qualified names in the signature for the
> returned function? some_func.__name__ or f.__name__?

Never copy attributes by hand, always use 'functools.wraps'. It copies
'__name__', '__qualname__', and a bunch of other attributes to the
decorator object.

We'll probably extend it to copy __signature__ too; then
'signature(decor(f))' will be the same as 'signature(f)'.

- Yury

From urban.dani+py at gmail.com  Wed Jun  6 19:39:28 2012
From: urban.dani+py at gmail.com (Daniel Urban)
Date: Wed, 6 Jun 2012 19:39:28 +0200
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To: 
References: 
Message-ID: 

> BoundArguments Object
> =====================
>
> Result of a ``Signature.bind`` call. Holds the mapping of arguments
> to the function's parameters.

The Signature.bind function has changed since the previous version of
the PEP. If I understand correctly, the 'arguments' attribute is the
same as the return value of bind in the previous version (more or less
the same as the return value of inspect.getcallargs). My question is:
why do we need the other 2 attributes ('args' and 'kwargs')? The
"Annotation Checker" example uses them to call the function. But if we
are able to call bind, we already have the arguments, so we can simply
call the function with them; we don't really need these attributes. I
think it would be better (easier to use and understand) if bind would
simply return the mapping, as in the previous version of the PEP.

> Has the following public attributes:
>
> * arguments : OrderedDict
>     An ordered mutable mapping of parameters' names to arguments' values.
>     Does not contain arguments' default values.

Does this mean that if the arguments passed to bind don't contain a
value for an argument that has a default value, then the returned
mapping won't contain that argument? If so, why not?
inspect.getcallargs works fine with default values.

Daniel

From larry at hastings.org  Wed Jun  6 19:48:28 2012
From: larry at hastings.org (Larry Hastings)
Date: Wed, 06 Jun 2012 10:48:28 -0700
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To: <4FCF7965.5070102@pearwood.info>
References: <4FCF7965.5070102@pearwood.info>
Message-ID: <4FCF97EC.8010406@hastings.org>

Sorry I missed answering these on my first pass.

On 06/06/2012 08:38 AM, Steven D'Aprano wrote:
> What to do about parameters which are partly implemented? E.g.
> mode='spam' is implemented but mode='ham' is not.

Parameter objects aren't sophisticated enough to represent such a
situation. If you have a use case for a more sophisticated approach, and
can propose a change to the Parameter object to handle it, I'd be
interested to see it.

In truth, the way I currently support those "unimplemented" parameters
is that passing in the default parameter is still permitted. So in a way
I suppose I already have this situation, kinda? But is_implemented as it
stands works fine for my use case.
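For context, the PEP draft under discussion gave Parameter an
'is_implemented' flag; a small hypothetical helper shows how a caller
might use it (the getattr default keeps the sketch runnable on versions
where the attribute never shipped):

    import inspect

    def check_implemented(func, *args, **kwargs):
        # Reject calls passing non-default values to unimplemented parameters
        sig = inspect.signature(func)
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            param = sig.parameters[name]
            if not getattr(param, 'is_implemented', True) \
                    and value != param.default:
                raise NotImplementedError(
                    'parameter %r is not implemented' % name)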
> inspect.getfullargspec is currently unable to introspect builtin
> functions and methods. Should builtins gain a __signature__ so they
> can be introspected?

If function signatures are useful, then they're useful, and the
implementation language for the function is irrelevant. I already sent
Yury a patch adding __signature__ to PyCFunctionObject, which I thought
he merged but I don't see in his repo.

The problem (obviously) is generating the signature. Brett has an idea
about parsing the docstring; it strikes me as hackish. I think solving
the problem definitively will require a new argument parsing API, and
that's simply not happening for 3.3. If my patch for issue 14626 and
PEP 362 both land in 3.3, my plan is to hard-code the signatures for
just those functions.

/arry

From yselivanov.ml at gmail.com  Wed Jun  6 19:55:45 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Wed, 6 Jun 2012 13:55:45 -0400
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To: 
References: 
Message-ID: <16DEE462-CC67-4065-A230-AB24256A3225@gmail.com>

Daniel,

On 2012-06-06, at 1:39 PM, Daniel Urban wrote:
>> BoundArguments Object
>> =====================
>>
>> Result of a ``Signature.bind`` call. Holds the mapping of arguments
>> to the function's parameters.
>
> The Signature.bind function has changed since the previous version of
> the PEP. If I understand correctly, the 'arguments' attribute is the
> same as the return value of bind in the previous version (more or less
> the same as the return value of inspect.getcallargs). My question is:
> why do we need the other 2 attributes ('args' and 'kwargs')? The
> "Annotation Checker" example uses them to call the function. But if we
> are able to call bind, we already have the arguments, so we can simply
> call the function with them; we don't really need these attributes. I
> think it would be better (easier to use and understand) if bind would
> simply return the mapping, as in the previous version of the PEP.

I'll try to answer you with the following code:

    >>> def foo(*args):
    ...     print(args)

    >>> bound_args = signature(foo).bind(1, 2, 3)
    >>> bound_args.arguments
    OrderedDict([('args', (1, 2, 3))])

You can't invoke 'foo' by:

    >>> foo(**bound_args.arguments)
    TypeError: foo() got an unexpected keyword argument 'args'

That's why we have two dynamic properties 'args', and 'kwargs':

    >>> bound_args.args, bound_args.kwargs
    ((1, 2, 3), {})

'BoundArguments.arguments', as told in the PEP, is a mapping to work
with 'Signature.parameters' (you should've seen it in the
"Annotation Checker" example).

'args' & 'kwargs' are for invocation. You can even modify 'arguments'.

>> Has the following public attributes:
>>
>> * arguments : OrderedDict
>>     An ordered mutable mapping of parameters' names to arguments' values.
>>     Does not contain arguments' default values.
>
> Does this mean that if the arguments passed to bind don't contain a
> value for an argument that has a default value, then the returned
> mapping won't contain that argument? If so, why not?
> inspect.getcallargs works fine with default values.

Yes, it won't. It contains only the arguments you've passed to 'bind'.
The reason is that we'd like to save as much of the actual context as
possible. If you pass some set of arguments to the bind() method, it
tries to map precisely that set. This way you can deduce from the
BoundArguments what it was bound with. And default values will be
applied by Python itself.

- Yury
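A short, runnable illustration of the behaviour Yury describes, using the
inspect module as it eventually shipped (BoundArguments.apply_defaults(),
added in Python 3.5, is much like the "enrich with defaults" method
floated later in this thread):

    import inspect

    def foo(a, b=10, *args, c=20):
        pass

    ba = inspect.signature(foo).bind(1, 2)
    print(ba.arguments)   # mapping with 'a' and 'b' only; no 'args' or 'c'

    ba.apply_defaults()   # opt in to the full mapping (Python 3.5+)
    print(ba.arguments)   # now also contains 'args' = () and 'c' = 20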
From urban.dani+py at gmail.com  Wed Jun  6 20:22:26 2012
From: urban.dani+py at gmail.com (Daniel Urban)
Date: Wed, 6 Jun 2012 20:22:26 +0200
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To: <16DEE462-CC67-4065-A230-AB24256A3225@gmail.com>
References: <16DEE462-CC67-4065-A230-AB24256A3225@gmail.com>
Message-ID: 

>>> BoundArguments Object
>>> =====================
>>>
>>> Result of a ``Signature.bind`` call. Holds the mapping of arguments
>>> to the function's parameters.
>>
>> The Signature.bind function has changed since the previous version of
>> the PEP. If I understand correctly, the 'arguments' attribute is the
>> same as the return value of bind in the previous version (more or less
>> the same as the return value of inspect.getcallargs). My question is:
>> why do we need the other 2 attributes ('args' and 'kwargs')? The
>> "Annotation Checker" example uses them to call the function. But if we
>> are able to call bind, we already have the arguments, so we can simply
>> call the function with them; we don't really need these attributes. I
>> think it would be better (easier to use and understand) if bind would
>> simply return the mapping, as in the previous version of the PEP.
>
> I'll try to answer you with the following code:
>
>     >>> def foo(*args):
>     ...     print(args)
>
>     >>> bound_args = signature(foo).bind(1, 2, 3)
>     >>> bound_args.arguments
>     OrderedDict([('args', (1, 2, 3))])
>
> You can't invoke 'foo' by:
>
>     >>> foo(**bound_args.arguments)
>     TypeError: foo() got an unexpected keyword argument 'args'

Of course, but you can invoke it with "1, 2, 3", the arguments you
used to create the BoundArguments instance in the first place: foo(1,
2, 3) will work fine.

> That's why we have two dynamic properties 'args', and 'kwargs':

Ok, but what I'm saying is that we don't really need them.

>     >>> bound_args.args, bound_args.kwargs
>     ((1, 2, 3), {})
>
> 'BoundArguments.arguments', as told in the PEP, is a mapping to work
> with 'Signature.parameters' (you should've seen it in the
> "Annotation Checker" example).
>
> 'args' & 'kwargs' are for invocation. You can even modify 'arguments'.
>
>>>> Has the following public attributes:
>>>>
>>>> * arguments : OrderedDict
>>>>     An ordered mutable mapping of parameters' names to arguments' values.
>>>>     Does not contain arguments' default values.
>>>
>>> Does this mean that if the arguments passed to bind don't contain a
>>> value for an argument that has a default value, then the returned
>>> mapping won't contain that argument? If so, why not?
>>> inspect.getcallargs works fine with default values.
>>
>> Yes, it won't. It contains only the arguments you've passed to 'bind'.
>> The reason is that we'd like to save as much of the actual context as
>> possible.

I don't really see where this "context" can be useful. Maybe an
example would help.

> If you pass some set of arguments to the bind() method, it tries to map
> precisely that set. This way you can deduce from the BoundArguments what
> it was bound with. And default values will be applied by Python itself.

Anyway, I think it would be nice to be able to obtain the full
arguments mapping that the function would see, not just a subset. Of
course, we can use inspect.getcallargs for that, but I think we should
be able to do that with the new Signature API.
Daniel

From yselivanov.ml at gmail.com  Wed Jun  6 20:35:51 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Wed, 6 Jun 2012 14:35:51 -0400
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To: 
References: <16DEE462-CC67-4065-A230-AB24256A3225@gmail.com>
Message-ID: 

On 2012-06-06, at 2:22 PM, Daniel Urban wrote:
>> I'll try to answer you with the following code:
>>
>>     >>> def foo(*args):
>>     ...     print(args)
>>
>>     >>> bound_args = signature(foo).bind(1, 2, 3)
>>     >>> bound_args.arguments
>>     OrderedDict([('args', (1, 2, 3))])
>>
>> You can't invoke 'foo' by:
>>
>>     >>> foo(**bound_args.arguments)
>>     TypeError: foo() got an unexpected keyword argument 'args'
>
> Of course, but you can invoke it with "1, 2, 3", the arguments you
> used to create the BoundArguments instance in the first place: foo(1,
> 2, 3) will work fine.

The whole point is to use BoundArguments mapping for invocation.
See Nick's idea to validate callbacks, and my response to him, below in
the thread.

>> That's why we have two dynamic properties 'args', and 'kwargs':
>
> Ok, but what I'm saying is that we don't really need them.

We need them. Again, in some contexts you don't have the arguments
you've passed to bind().

>>     >>> bound_args.args, bound_args.kwargs
>>     ((1, 2, 3), {})
>>
>> 'BoundArguments.arguments', as told in the PEP, is a mapping to work
>> with 'Signature.parameters' (you should've seen it in the
>> "Annotation Checker" example).
>>
>> 'args' & 'kwargs' are for invocation. You can even modify 'arguments'.
>>
>>>> Has the following public attributes:
>>>>
>>>> * arguments : OrderedDict
>>>>     An ordered mutable mapping of parameters' names to arguments' values.
>>>>     Does not contain arguments' default values.
>>>
>>> Does this mean that if the arguments passed to bind don't contain a
>>> value for an argument that has a default value, then the returned
>>> mapping won't contain that argument? If so, why not?
>>> inspect.getcallargs works fine with default values.
>>
>> Yes, it won't. It contains only the arguments you've passed to 'bind'.
>> The reason is that we'd like to save as much of the actual context as
>> possible.
>
> I don't really see where this "context" can be useful. Maybe an
> example would help.

For instance, for some sort of RPC mechanism, where you don't need to
store/pass arguments that have default values.

>> If you pass some set of arguments to the bind() method, it tries to map
>> precisely that set. This way you can deduce from the BoundArguments what
>> it was bound with. And default values will be applied by Python itself.
>
> Anyway, I think it would be nice to be able to obtain the full
> arguments mapping that the function would see, not just a subset. Of
> course, we can use inspect.getcallargs for that, but I think we should
> be able to do that with the new Signature API.

Well, it will take just a few lines of code to enrich BoundArguments with
default values (we can add a method to do it, if it's really a required
feature). But you won't ever be able to reconstruct what arguments the
bind() method was called with if we write default values to 'arguments'
from the start.

Also, it's better for performance. The "Annotation Checker" example does
defaults validation first, and never checks them again. If bind() would
write default values to 'BoundArguments.arguments', you would check
defaults on each call. And think about more complicated cases, where
processing of an argument's value is more involved and time-consuming.
All in all, I consider the way 'inspect.getcallargs' treats defaults as
a weakness, not as an advantage.

- Yury

From tjreedy at udel.edu  Wed Jun  6 20:30:35 2012
From: tjreedy at udel.edu (Terry Reedy)
Date: Wed, 06 Jun 2012 14:30:35 -0400
Subject: [Python-Dev] [Python-checkins] peps: PEP 422 rewrite to present an idea that a) isn't crazy and b) it turns out
In-Reply-To: 
References: 
Message-ID: <4FCFA1CB.1070405@udel.edu>

On 6/6/2012 7:40 AM, nick.coghlan wrote:
> PEP 422 rewrite to present an idea that a) isn't crazy and
> b) it turns out Thomas Heller proposed back in 2001

> +There is currently no corresponding mechanism in Python 3 that allows the
> +code executed in the class body to directly influence how the class object
> +is created. Instead, the class creation process is fully defined by the
> +class header, before the class body even begins executing.

This makes the problem for porting code much clearer.

> +* If the metaclass hint refers to an instance of ``type``, then it is

/instance/subclass/? as in your class Metaclass(type) example in
Alternatives?

> +  considered as a candidate metaclass along with the metaclasses of a

> +Easier inheritance of definition time behaviour
> +-----------------------------------------------
> +
> +Understanding Python's metaclass system requires a deep understanding of
> +the type system and the class construction process. This is legitimately
> +seen as confusing, due to the need to keep multiple moving parts (the code,

/confusing/challenging/

The challenge is inherent in the topic. Confusion is not, but is a sign
of poor explication that needs improvement.

> +the metaclass hint, the actual metaclass, the class object, instances of the
> +class object) clearly distinct in your mind.

Your clear separation of 'metaclass hint' from 'actual metaclass' and
enumeration of the multiple parts has reduced confusion, at least for me.
But it remains challenging.

> +Understanding the proposed class initialisation hook requires understanding
> +decorators and ordinary method inheritance, which is a much simpler prospect.

/much// (in my opinion)

In other words, don't underplay the alternative challenge ;-).

tjr

From urban.dani+py at gmail.com  Wed Jun  6 21:33:31 2012
From: urban.dani+py at gmail.com (Daniel Urban)
Date: Wed, 6 Jun 2012 21:33:31 +0200
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To: 
References: <16DEE462-CC67-4065-A230-AB24256A3225@gmail.com>
Message-ID: 

On Wed, Jun 6, 2012 at 8:35 PM, Yury Selivanov wrote:
> On 2012-06-06, at 2:22 PM, Daniel Urban wrote:
>>>> I'll try to answer you with the following code:
>>>>
>>>>     >>> def foo(*args):
>>>>     ...     print(args)
>>>>
>>>>     >>> bound_args = signature(foo).bind(1, 2, 3)
>>>>     >>> bound_args.arguments
>>>>     OrderedDict([('args', (1, 2, 3))])
>>>>
>>>> You can't invoke 'foo' by:
>>>>
>>>>     >>> foo(**bound_args.arguments)
>>>>     TypeError: foo() got an unexpected keyword argument 'args'
>>>
>>> Of course, but you can invoke it with "1, 2, 3", the arguments you
>>> used to create the BoundArguments instance in the first place: foo(1,
>>> 2, 3) will work fine.
>>
>> The whole point is to use BoundArguments mapping for invocation.
>> See Nick's idea to validate callbacks, and my response to him, below in
>> the thread.
>>
>>>> That's why we have two dynamic properties 'args', and 'kwargs':
>>>
>>> Ok, but what I'm saying is that we don't really need them.
>>
>> We need them. Again, in some contexts you don't have the arguments
>> you've passed to bind().
But how could we *need* bind to return 'args' and 'kwargs' to us, when
we wouldn't be able to call bind in the first place if we didn't have
the arguments?

>>>     >>> bound_args.args, bound_args.kwargs
>>>     ((1, 2, 3), {})
>>>
>>> 'BoundArguments.arguments', as told in the PEP, is a mapping to work
>>> with 'Signature.parameters' (you should've seen it in the
>>> "Annotation Checker" example).
>>>
>>> 'args' & 'kwargs' are for invocation. You can even modify 'arguments'.
>>>
>>>>> Has the following public attributes:
>>>>>
>>>>> * arguments : OrderedDict
>>>>>     An ordered mutable mapping of parameters' names to arguments' values.
>>>>>     Does not contain arguments' default values.
>>>>
>>>> Does this mean that if the arguments passed to bind don't contain a
>>>> value for an argument that has a default value, then the returned
>>>> mapping won't contain that argument? If so, why not?
>>>> inspect.getcallargs works fine with default values.
>>>
>>> Yes, it won't. It contains only the arguments you've passed to 'bind'.
>>> The reason is that we'd like to save as much of the actual context as
>>> possible.
>>
>> I don't really see where this "context" can be useful. Maybe an
>> example would help.
>
> For instance, for some sort of RPC mechanism, where you don't need to
> store/pass arguments that have default values.

I see. So, basically, it's an optimization.

>>> If you pass some set of arguments to the bind() method, it tries to map
>>> precisely that set. This way you can deduce from the BoundArguments what
>>> it was bound with. And default values will be applied by Python itself.
>>
>> Anyway, I think it would be nice to be able to obtain the full
>> arguments mapping that the function would see, not just a subset. Of
>> course, we can use inspect.getcallargs for that, but I think we should
>> be able to do that with the new Signature API.
>
> Well, it will take just a few lines of code to enrich BoundArguments with
> default values (we can add a method to do it, if it's really a required
> feature). But you won't ever be able to reconstruct what arguments the
> bind() method was called with if we write default values to 'arguments'
> from the start.

As I've mentioned above, I don't think we have to be able to
reconstruct the arguments passed to bind from the return value of bind.
If we will need the original arguments later or in another place, we
will just save them; bind doesn't have to complicate its API with them.

> Also, it's better for performance. The "Annotation Checker" example does
> defaults validation first, and never checks them again. If bind() would
> write default values to 'BoundArguments.arguments', you would check
> defaults on each call. And think about more complicated cases, where
> processing of an argument's value is more involved and time-consuming.

Ok, so again, it is an optimization.

Daniel

From yselivanov.ml at gmail.com  Wed Jun  6 22:10:22 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Wed, 6 Jun 2012 16:10:22 -0400
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To: 
References: <16DEE462-CC67-4065-A230-AB24256A3225@gmail.com>
Message-ID: <47C8126D-8D90-41A3-A8F5-FFB5554155C3@gmail.com>

On 2012-06-06, at 3:33 PM, Daniel Urban wrote:
> On Wed, Jun 6, 2012 at 8:35 PM, Yury Selivanov wrote:
>> On 2012-06-06, at 2:22 PM, Daniel Urban wrote:
>>>> I'll try to answer you with the following code:
>>>>
>>>>     >>> def foo(*args):
>>>>     ...     print(args)
>>>>
>>>>     >>> bound_args = signature(foo).bind(1, 2, 3)
>>>>     >>> bound_args.arguments
>>>>     OrderedDict([('args', (1, 2, 3))])
>>>>
>>>> You can't invoke 'foo' by:
>>>>
>>>>     >>> foo(**bound_args.arguments)
>>>>     TypeError: foo() got an unexpected keyword argument 'args'
>>>
>>> Of course, but you can invoke it with "1, 2, 3", the arguments you
>>> used to create the BoundArguments instance in the first place: foo(1,
>>> 2, 3) will work fine.
>>
>> The whole point is to use BoundArguments mapping for invocation.
>> See Nick's idea to validate callbacks, and my response to him, below in
>> the thread.
>>
>>>> That's why we have two dynamic properties 'args', and 'kwargs':
>>>
>>> Ok, but what I'm saying is that we don't really need them.
>>
>> We need them. Again, in some contexts you don't have the arguments
>> you've passed to bind().
>
> But how could we *need* bind to return 'args' and 'kwargs' to us, when
> we wouldn't be able to call bind in the first place if we didn't have
> the arguments?

You're missing the point. BoundArguments contains properly mapped
*args and **kwargs passed to Signature.bind. You can validate them after,
do type casts, modify them, overwrite etc. by manipulating
'BoundArguments.arguments'.

At the end you can't, however, invoke the function by doing:

    func(**bound_arguments.arguments)  # <- this won't work

as varargs will be screwed.

That's why you need 'args' & 'kwargs' properties on BoundArguments.

Imagine that the "Annotation Checker" example is modified to coerce all
string arguments to int (those that had 'int' in annotation) and then to
multiply them by 42.

We'd write the following code:

    for arg_name, arg_value in bound_arguments.arguments.items():
        # I'm skipping is_args & is_kwargs checks, and assuming
        # we have annotations everywhere
        if sig.parameters[arg_name].annotation is int \
                and isinstance(arg_value, str):
            bound_arguments.arguments[arg_name] = int(arg_value) * 42

    return func(*bound_arguments.args, **bound_arguments.kwargs)

Thanks,
- Yury

From ncoghlan at gmail.com  Wed Jun  6 23:25:16 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 7 Jun 2012 07:25:16 +1000
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To: <4FCF826D.7040007@pearwood.info>
References: <4FCF7965.5070102@pearwood.info> <4FCF7FDC.2080000@hastings.org> <4FCF826D.7040007@pearwood.info>
Message-ID: 

On Jun 7, 2012 3:11 AM, "Steven D'Aprano" wrote:
> Larry Hastings wrote:
>>> [...]
>>> "Changes to the Signature object, or to any of its data members,
>>> do not affect the function itself."
>>>
>>> which leaves the possibility that __signature__ may no longer match
>>> the actual argument spec, for some reason. If you remove
>>> getfullargspec, people will have to reinvent it to deal with such cases.
>>
>> There's no reason why they should disagree. The "some reason" would be
>> if some doorknob decided to change it--the objects are mutable, because
>> there's no good reason to make them immutable.
>
> Nevertheless, the world is full of doorknobs, and people will have to
> deal with their code.

Speaking as a doorknob, I plan to use this PEP to allow wrapper functions
that accept arbitrary arguments to accurately report their signature as
matching the underlying function. It will also be useful for allowing
partial() objects (and other callables that tweak their API) to report an
accurate signature.
For example, I believe bound methods will misrepresent their signature
with the current PEP implementation - they actually should copy the
function signature and then drop the first positional parameter.

However, these use cases would be easier to handle correctly with an
explicit "copy()" method.

Also, +1 for keeping the lower level inspect functions around.

Cheers,
Nick.

--
Sent from my phone, thus the relative brevity :)

From ncoghlan at gmail.com  Wed Jun  6 23:29:13 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 7 Jun 2012 07:29:13 +1000
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To: <56CEC722-1A72-4F0E-8E36-660DB7839F0D@gmail.com>
References: <56CEC722-1A72-4F0E-8E36-660DB7839F0D@gmail.com>
Message-ID: 

On Jun 7, 2012 12:20 AM, "Yury Selivanov" wrote:
>
> I agree, that we shouldn't make 'functools' be dependent on 'inspect' module.
> Moreover, this is not even currently possible, as it creates an import-loop
> that is hard to untie. But how about the following:
>
> 1. Separate 'Signature' object from 'inspect' module, and move it to a
> private '_signature.py' (that will depend only on 'collections.OrderedDict',
> 'itertools.chain' and 'types')
>
> 2. Publish it in the 'inspect' module
>
> 3. Make 'signature' method to work with any callable
>
> 4. Make 'Signature' class to accept only functions
>
> 5. Import '_signature' in the 'functools', and use 'Signature' class
> directly, as it will accept just plain functions.
>
> Would this work?

Sounds like a good plan to me.

Cheers,
Nick.

--
Sent from my phone, thus the relative brevity :)

>
> - Yury

From pje at telecommunity.com  Wed Jun  6 23:32:41 2012
From: pje at telecommunity.com (PJ Eby)
Date: Wed, 6 Jun 2012 17:32:41 -0400
Subject: [Python-Dev] [Python-checkins] peps: PEP 422 rewrite to present an idea that a) isn't crazy and b) it turns out
In-Reply-To: <4FCFA1CB.1070405@udel.edu>
References: <4FCFA1CB.1070405@udel.edu>
Message-ID: 

+1 on the PEP.

FWIW, it may be useful to note that not only has the pattern of having a
class-level init been proposed before, it's actually used: Zope has had
__class_init__ and used it as a metaclass alternative since well before
Thomas Heller's proposal. And in my case, about 80% of my non-dynamic
metaclass needs are handled by using a metaclass whose sole purpose is to
provide me with __class_init__, __class_new__, and __class_call__ methods
so they can be defined as class methods instead of as metaclass methods.

(Basically, it lets me avoid making new metaclasses when I can just define
__class_*__ methods instead. The other use cases are all esoterica like
object-relational mapping, singletons and pseudo-singletons, etc.)

So, the concept is a decades-plus proven alternative to metaclasses for
low-hanging metaclassy behavior.

This new version of the PEP does offer one challenge to my motivating use
case, though, and that's that hooking __init_class__ means any in-body
decorators have to occur *after* any __init_class__ definition, or silent
failure will occur. (Because a later definition of __init_class__ will
overwrite the dynamically added version.)

While this challenge existed for the __metaclass__ hook, it was by
convention always placed at the top of the class, or very near to it.
After all, knowing what metaclass a class is, is pretty important, *and*
not done very often.
Likewise, had the previous version of the PEP been used, it was unlikely
that anybody would bury their __decorators__ list near the end of the
class! The __init_class__ method, on the other hand, can quite rightly be
considered a minor implementation detail internal to a class that might
reasonably be placed late in the class definition.

This is a relatively small apprehension, but it makes me *slightly*
prefer the previous version to this one, at least for my motivating use
case. But I'll admit that this version might be better for
Python-as-a-whole than the previous version. Among other things, it makes
my "classy" metaclass (the one that adds __class_init__, __class_call__,
etc.) redundant for its most common usage (__class_init__).

I'm tempted to suggest adding a __call_class__ to the mix, since in
grepping my code to check my less-esoteric metaclass use cases just now,
I find I implement __class_call__ methods almost as often as
__class_init__ ones, but I suspect my use cases are atypical in this
regard. (It's mostly used for things where you want to hook instance
creation (caches, singletons, persistence, O-R mapping) while still
allowing subclasses to define __new__ and/or __init__ without needing to
integrate with the tricky bits.)

(To be clear, by esoteric, I mean cases where I'm making classes that act
like non-class objects in some regard, like a class that acts as a
mapping or sequence of its instances. If all you're doing is making a
class with a sprinkling of metaprogramming for improved DRYness, then
__class_init__ and __class_call__ are more than enough to do it, and a
full metaclass is overkill.)

From tjreedy at udel.edu  Wed Jun  6 23:58:24 2012
From: tjreedy at udel.edu (Terry Reedy)
Date: Wed, 06 Jun 2012 17:58:24 -0400
Subject: [Python-Dev] Static type analysis
In-Reply-To: 
References: 
Message-ID: 

On 6/6/2012 7:24 AM, Edward K. Ream wrote:
> Hello all,
>
> I'm wondering whether this is the appropriate place to discuss
> (global) static type analysis, a topic Guido mentioned around the 28
> min mark in his PyCon 2012 keynote,
> http://pyvideo.org/video/956/keynote-guido-van-rossum

I think either python-list or python-ideas list would be more
appropriate. Start with a proposal, statement, or question that others
can respond to.

--
Terry Jan Reedy

From ericsnowcurrently at gmail.com  Thu Jun  7 00:07:08 2012
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Wed, 6 Jun 2012 16:07:08 -0600
Subject: [Python-Dev] [Python-checkins] peps: PEP 422 rewrite to present an idea that a) isn't crazy and b) it turns out
In-Reply-To: 
References: 
Message-ID: 

On Wed, Jun 6, 2012 at 5:40 AM, nick.coghlan wrote:
> +
> +Alternatives
> +============
> +

Would it be worth also (briefly) rehashing why the class instance
couldn't be created before the class body is executed*? It might seem
like a viable alternative if you haven't looked at how classes get
created.

-eric

* i.e. meta.__new__() would have to be called before the class body is
executed for the class to exist during that execution. Perhaps in an
alternate universe classes get created like modules do...
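Since much of the PEP 422 discussion above leans on this pattern, here is
a self-contained sketch of the Zope-style class initialisation hook PJ
Eby describes; the names are illustrative only, and this is not the PEP's
actual implementation:

    class ClassInitMeta(type):
        # Metaclass whose sole job is to run a per-class init hook once
        def __init__(cls, name, bases, namespace):
            super().__init__(name, bases, namespace)
            hook = namespace.get('__class_init__')
            if hook is not None:
                hook(cls)  # runs right after the class object is built

    class Example(metaclass=ClassInitMeta):
        def __class_init__(cls):
            cls.registry = {}  # attached at class creation time

    print(Example.registry)  # {}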
From steve at pearwood.info  Thu Jun  7 00:11:13 2012
From: steve at pearwood.info (Steven D'Aprano)
Date: Thu, 07 Jun 2012 08:11:13 +1000
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To: 
References: <4FCF7965.5070102@pearwood.info> <94F6B981-560F-45FE-95DD-4C8E2D49075A@gmail.com>
Message-ID: <4FCFD581.4050607@pearwood.info>

Yury Selivanov wrote:
> I like the idea of 'foo(a)' and 'bar(a)' having identical signatures,
> however, I don't think it's possible. I.e. we can't make it so that
> 'signature(foo) is signature(bar)'. We can implement the __eq__ operator
> though.

+1 to __eq__.

I don't think we should care about them being identical. Object identity
is almost always an implementation detail.

-- 
Steven

From steve at pearwood.info  Thu Jun  7 00:27:29 2012
From: steve at pearwood.info (Steven D'Aprano)
Date: Thu, 07 Jun 2012 08:27:29 +1000
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To: <4FCF97EC.8010406@hastings.org>
References: <4FCF7965.5070102@pearwood.info> <4FCF97EC.8010406@hastings.org>
Message-ID: <4FCFD951.2050208@pearwood.info>

Larry Hastings wrote:
>> inspect.getfullargspec is currently unable to introspect builtin
>> functions and methods. Should builtins gain a __signature__ so they
>> can be introspected?
>
> If function signatures are useful, then they're useful, and the
> implementation language for the function is irrelevant. I already sent
> Yury a patch adding __signature__ to PyCFunctionObject, which I thought
> he merged but I don't see in his repo.

I would love to be able to inspect builtins for their signature. I have
a class decorator that needs to know the signature of the class
constructor, which unfortunately falls down for the simple cases of
inheriting from a builtin without a custom __init__ or __new__.

-- 
Steven

From steve at pearwood.info  Thu Jun  7 00:38:33 2012
From: steve at pearwood.info (Steven D'Aprano)
Date: Thu, 07 Jun 2012 08:38:33 +1000
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To: 
References: <4FCF7965.5070102@pearwood.info> <4FCF7FDC.2080000@hastings.org> <4FCF826D.7040007@pearwood.info>
Message-ID: <4FCFDBE9.8010808@pearwood.info>

Brett Cannon wrote:
> On Wed, Jun 6, 2012 at 12:16 PM, Steven D'Aprano wrote:
>
>> Larry Hastings wrote:
>>
>> [...]
>>>> "Changes to the Signature object, or to any of its data members,
>>>> do not affect the function itself."
>>>>
>>>> which leaves the possibility that __signature__ may no longer match the
>>>> actual argument spec, for some reason. If you remove getfullargspec, people
>>>> will have to reinvent it to deal with such cases.
>>>>
>>> There's no reason why they should disagree. The "some reason" would be
>>> if some doorknob decided to change it--the objects are mutable, because
>>> there's no good reason to make them immutable.
>>>
>> Nevertheless, the world is full of doorknobs, and people will have to deal
>> with their code.
>>
>
> This is also Python, the language that assumes everyone is a consenting
> adult.

Exactly, which is why I'm not asking for __signature__ to be immutable.
Who knows, despite Larry's skepticism (and mine!), perhaps there is a
use-case for __signature__ being modified that we haven't thought of yet.

But that's not really the point. It may be that nobody will be stupid
enough to mangle __signature__, and inspect.getfullargspec becomes
redundant. Even so, getfullargspec is not doing any harm.
We're not *adding* complexity, it's already there, and breaking currently
working code by deprecating and then removing it is not a step we should
take lightly. API churn is itself a cost.

[...]
> If people really want to keep getfullargspec() around then I want to at
> least add a note to the function that signature objects exist as an
> alternative (but not vice-versa).

+1

-- 
Steven

From pje at telecommunity.com  Thu Jun  7 00:57:01 2012
From: pje at telecommunity.com (PJ Eby)
Date: Wed, 6 Jun 2012 18:57:01 -0400
Subject: [Python-Dev] [Python-checkins] peps: PEP 422 rewrite to present an idea that a) isn't crazy and b) it turns out
In-Reply-To: 
References: 
Message-ID: 

On Wed, Jun 6, 2012 at 6:07 PM, Eric Snow wrote:
> On Wed, Jun 6, 2012 at 5:40 AM, nick.coghlan wrote:
> > +
> > +Alternatives
> > +============
> > +
>
> Would it be worth also (briefly) rehashing why the class instance
> couldn't be created before the class body is executed*? It might seem
> like a viable alternative if you haven't looked at how classes get
> created.

Backwards compatibility is really the only reason. So it'll have to wait
till Python 4000. ;-)

(Still, that approach is in some ways actually better than the current
approach: you don't need a __prepare__, for example. Actually, if one
were designing a class creation protocol from scratch today, it would
probably be simplest to borrow the __enter__/__exit__ protocol, with
__enter__() returning the namespace to be used for the suite body, and
__exit__() returning a finished class... or something similar.
Python-ideas stuff, to be sure, but it could likely be made a whole lot
simpler than the current multitude of hooks, counter-hooks, and extended
hooks.)

From tjreedy at udel.edu  Thu Jun  7 01:02:15 2012
From: tjreedy at udel.edu (Terry Reedy)
Date: Wed, 06 Jun 2012 19:02:15 -0400
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To: <4FCFDBE9.8010808@pearwood.info>
References: <4FCF7965.5070102@pearwood.info> <4FCF7FDC.2080000@hastings.org> <4FCF826D.7040007@pearwood.info> <4FCFDBE9.8010808@pearwood.info>
Message-ID: 

On 6/6/2012 6:38 PM, Steven D'Aprano wrote:
> redundant. Even so, getfullargspec is not doing any harm. We're not
> *adding* complexity, it's already there, and breaking currently working
> code by deprecating and then removing it is not a step we should take
> lightly. API churn is itself a cost.

The 3.x versions of idlelib.CallTips.get_argspec() use
inspect.formatargspec(*inspect.getfullargspec(fob)) to create the first
line of a calltip for Python coded callables (functions and (bound)
instance methods, including class(.__init__), static and class methods
and (with pending patch) instance(.__call__)). Any new class would have
to have an identical formatter to replace this.

I do not quite see the point of deprecating these functions. It seems to
me that a new presentation object should build on top of the existing
getfullargspec, when it is requested. I agree with Steven that building
the seldom-needed redundant representation upon creation of every
function object is a bad idea.

-- 
Terry Jan Reedy
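For reference, the two formatting paths Terry mentions render the same
calltip line; a quick sketch against the stdlib of that era
(inspect.formatargspec was later deprecated, and str() of a Signature
object gives the equivalent text):

    import inspect

    def fob(a, b=1, *args, **kwds):
        pass

    # The existing idiom used by IDLE's CallTips:
    print(inspect.formatargspec(*inspect.getfullargspec(fob)))
    # -> (a, b=1, *args, **kwds)

    # The same first line via the proposed Signature API:
    print(str(inspect.signature(fob)))
    # -> (a, b=1, *args, **kwds)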
From ethan at stoneleaf.us  Thu Jun  7 01:16:21 2012
From: ethan at stoneleaf.us (Ethan Furman)
Date: Wed, 06 Jun 2012 16:16:21 -0700
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To: 
References: <4FCF7965.5070102@pearwood.info> <94F6B981-560F-45FE-95DD-4C8E2D49075A@gmail.com>
Message-ID: <4FCFE4C5.2050705@stoneleaf.us>

Yury Selivanov wrote:
> We can implement the __eq__ operator though.

+1

From techtonik at gmail.com  Thu Jun  7 01:18:51 2012
From: techtonik at gmail.com (anatoly techtonik)
Date: Thu, 7 Jun 2012 02:18:51 +0300
Subject: [Python-Dev] Cross-compiling python and PyQt
In-Reply-To: 
References: 
Message-ID: 

On Wed, Jun 6, 2012 at 12:35 AM, Terry Reedy wrote:
> On 6/5/2012 4:24 PM, Tarek Sheasha wrote:
>> Hello,
>> I have been working for a long time on cross-compiling python for
>> android. I have used projects like:
>> http://code.google.com/p/android-python27/
>>
>> I am stuck in a certain area: when I am cross-compiling python I would
>> like to install SIP and PyQt4 on the cross-compiled python. I have tried
>> all the possible ways I could think of but have had no success. So if
>> you can help me by giving me some guidelines on how to install
>> third-party software for cross-compiled python for android I would be
>> very grateful.
>
> This is off-topic for pydev list (which is for development *of* Python
> rather than development *with*). I suggest python-list (post in text only,
> please) or other lists for better help.

Yes. And try PySide - it's been ported to distutils, so if distutils
supports cross-compiling you may have better luck there.
--
anatoly t.

From ncoghlan at gmail.com  Thu Jun  7 02:12:06 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 7 Jun 2012 10:12:06 +1000
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To: <4FCFDBE9.8010808@pearwood.info>
References: <4FCF7965.5070102@pearwood.info> <4FCF7FDC.2080000@hastings.org> <4FCF826D.7040007@pearwood.info> <4FCFDBE9.8010808@pearwood.info>
Message-ID: 

On Thu, Jun 7, 2012 at 8:38 AM, Steven D'Aprano wrote:
> Brett Cannon wrote:
>> This is also Python, the language that assumes everyone is a consenting
>> adult.
>
> Exactly, which is why I'm not asking for __signature__ to be immutable. Who
> knows, despite Larry's skepticism (and mine!), perhaps there is a use-case
> for __signature__ being modified that we haven't thought of yet.
> But that's not really the point. It may be that nobody will be stupid enough
> to mangle __signature__, and inspect.getfullargspec becomes redundant.

I've presented use cases for doing this already. Please stop calling me
stupid.

It will make sense to lie in __signature__ any time there are constraints
on a callable object that aren't accurately reflected in its Python level
signature. The simplest example I can think of is a decorator that passes
extra arguments in to the underlying function on every call.
For example, here's a more elegant alternative to the default argument
hack that relies on manipulating __signature__ to avoid breaking
introspection:

    def shared_vars(*shared_args):
        """Decorator factory that defines shared variables that are
        passed to every invocation of the function"""
        def decorator(f):
            @functools.wraps(f)  # Sets wrapper.__signature__ to Signature(f)
            def wrapper(*args, **kwds):
                full_args = shared_args + args
                return f(*full_args, **kwds)
            # When using this decorator, the public signature isn't the
            # same as that provided by the underlying function, as the
            # first few positional arguments are provided by the decorator
            sig = wrapper.__signature__
            for __ in shared_args:
                sig.popitem()
            return wrapper
        return decorator

    @shared_vars({})
    def example(_state, arg1, arg2, arg3):
        # _state is for private communication between "shared_vars" and
        # the function; callers can't set it, and never see it (unless
        # they dig into example.__wrapped__)
        ...

This has always been possible, but it's been a bad idea because of the way
it breaks pydoc (including help(example)) and other automatic
documentation tools. With a writable __signature__ attribute it becomes
possible to have our cake and eat it too.

>> If people really want to keep getfullargspec() around then I want to at
>> least add a note to the function that signature objects exist as an
>> alternative (but not vice-versa).
>
> +1

Also +1, since inspect.getfullargspec() and inspect.signature() operate at
different levels in order to answer different questions. The former asks
"what is the *actual* signature", while the latter provides a way to ask
"what is the *effective* signature".

That's why I see the PEP as more than just a way to more easily introspect
function signatures: the ability to set a __signature__ attribute and have
the inspect module pay attention to it means it becomes possible to
cleanly advertise the signature of callables that aren't actual functions,
and *also* possible to derive a new signature from an existing one,
*without needing to care about the details of that existing signature* (as
in the example above, it's only necessary to know how the signature will
*change*).

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ericsnowcurrently at gmail.com  Thu Jun  7 02:52:05 2012
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Wed, 6 Jun 2012 18:52:05 -0600
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To: 
References: <4FCF7965.5070102@pearwood.info> <94F6B981-560F-45FE-95DD-4C8E2D49075A@gmail.com>
Message-ID: 

On Wed, Jun 6, 2012 at 11:28 AM, Yury Selivanov wrote:
> Never copy attributes by hand, always use 'functools.wraps'. It copies
> '__name__', '__qualname__', and a bunch of other attributes to the
> decorator object.
>
> We'll probably extend it to copy __signature__ too; then
> 'signature(decor(f))' will be the same as 'signature(f)'.

Having the signature object stored on a function is useful for cases like
this, where the signature object is *explicitly* set to differ from the
function's actual signature. That's a good reason to have
inspect.signature(f) look for f.__signature__ and use it if available.

However, I'm not seeing how the other proposed purpose, as a cache for
inspect.signature(), is useful. I'd expect that if someone wants a
function's signature, they'd call inspect.signature() for it. If they
really need the speed of a cache then they can *explicitly* assign
__signature__.

Furthermore, using __signature__ as a cache may even cause problems.
If the Signature object is cached then any changes to the function
will not be reflected in the Signature object. Certainly that's an
unlikely case, but it is a real case. If f.__signature__ is set, I'd
expect it to be either an explicitly set value or exactly the same as
the first time inspect.signature() was called for that function. We
could make promises about that and do dynamic updates, etc., but it's
not useful enough to go to the trouble. And without the guarantees, I
don't think using it as a cache is a good idea. (And like I said,
allowing/using an explicitly set f.__signature__ is a good thing).

-eric

From ncoghlan at gmail.com  Thu Jun  7 03:00:20 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 7 Jun 2012 11:00:20 +1000
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To: 
References: <4FCF7965.5070102@pearwood.info> <94F6B981-560F-45FE-95DD-4C8E2D49075A@gmail.com>
Message-ID: 

On Thu, Jun 7, 2012 at 10:52 AM, Eric Snow wrote:
> Furthermore, using __signature__ as a cache may even cause problems.
> If the Signature object is cached then any changes to the function
> will not be reflected in the Signature object. Certainly that's an
> unlikely case, but it is a real case. If f.__signature__ is set, I'd
> expect it to be either an explicitly set value or exactly the same as
> the first time inspect.signature() was called for that function. We
> could make promises about that and do dynamic updates, etc., but it's
> not useful enough to go to the trouble. And without the guarantees, I
> don't think using it as a cache is a good idea. (And like I said,
> allowing/using an explicitly set f.__signature__ is a good thing).

+1

Providing a defined mechanism to declare a public signature is good,
but using that mechanism for implicit caching seems like a
questionable idea. Even when it *is* cached, I'd be happier if
inspect.signature() returned a copy rather than a direct reference to
the original.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From yselivanov.ml at gmail.com  Thu Jun  7 03:16:37 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Wed, 6 Jun 2012 21:16:37 -0400
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To: 
References: <4FCF7965.5070102@pearwood.info> <94F6B981-560F-45FE-95DD-4C8E2D49075A@gmail.com>
Message-ID: <9221BC45-B221-4A14-9E23-F2401DB63F38@gmail.com>

On 2012-06-06, at 9:00 PM, Nick Coghlan wrote:
> On Thu, Jun 7, 2012 at 10:52 AM, Eric Snow wrote:
>> Furthermore, using __signature__ as a cache may even cause problems.
>> If the Signature object is cached then any changes to the function
>> will not be reflected in the Signature object. Certainly that's an
>> unlikely case, but it is a real case. If f.__signature__ is set, I'd
>> expect it to be either an explicitly set value or exactly the same as
>> the first time inspect.signature() was called for that function. We
>> could make promises about that and do dynamic updates, etc., but it's
>> not useful enough to go to the trouble. And without the guarantees, I
>> don't think using it as a cache is a good idea. (And like I said,
>> allowing/using an explicitly set f.__signature__ is a good thing).
>
> +1
>
> Providing a defined mechanism to declare a public signature is good,
> but using that mechanism for implicit caching seems like a
> questionable idea. Even when it *is* cached, I'd be happier if
> inspect.signature() returned a copy rather than a direct reference to
> the original.
I'm leaning towards this too. Besides, constructing a Signature object
isn't an expensive operation.

So, the idea for the 'signature(obj)' function is to first check if
'obj' has the '__signature__' attribute set; if yes - return it, if no -
create a new one (but don't cache).

I have a question about fixing 'functools.wraps()' - I'm not sure
we need to. I see two solutions to the problem:

I) We fix 'functools.wraps' to do:

   'wrapper.__signature__ = signature(wrapped)'

II) We modify the 'signature(obj)' function to do the following steps:

   1. check if obj has a '__signature__' attribute. If yes - return it.

   2. check if obj has a '__wrapped__' attribute. If yes:
      obj = obj.__wrapped__; goto 1.

   3. Calculate a new signature for obj and return it.

I think that the second (II) approach is better, as we don't
implicitly cache anything, and we don't calculate Signatures
on each 'functools.wraps' call.

- Yury

From urban.dani+py at gmail.com  Thu Jun  7 06:44:56 2012
From: urban.dani+py at gmail.com (Daniel Urban)
Date: Thu, 7 Jun 2012 06:44:56 +0200
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To: <47C8126D-8D90-41A3-A8F5-FFB5554155C3@gmail.com>
References: <16DEE462-CC67-4065-A230-AB24256A3225@gmail.com> <47C8126D-8D90-41A3-A8F5-FFB5554155C3@gmail.com>
Message-ID: 

On Wed, Jun 6, 2012 at 10:10 PM, Yury Selivanov wrote:
> On 2012-06-06, at 3:33 PM, Daniel Urban wrote:
>> On Wed, Jun 6, 2012 at 8:35 PM, Yury Selivanov wrote:
>>> On 2012-06-06, at 2:22 PM, Daniel Urban wrote:
>>>>> I'll try to answer you with the following code:
>>>>>
>>>>>     >>> def foo(*args):
>>>>>     ...     print(args)
>>>>>
>>>>>     >>> bound_args = signature(foo).bind(1, 2, 3)
>>>>>     >>> bound_args.arguments
>>>>>     OrderedDict([('args', (1, 2, 3))])
>>>>>
>>>>> You can't invoke 'foo' by:
>>>>>
>>>>>     >>> foo(**bound_args.arguments)
>>>>>     TypeError: foo() got an unexpected keyword argument 'args'
>>>>
>>>> Of course, but you can invoke it with "1, 2, 3", the arguments you
>>>> used to create the BoundArguments instance in the first place: foo(1,
>>>> 2, 3) will work fine.
>>>
>>> The whole point is to use BoundArguments mapping for invocation.
>>> See Nick's idea to validate callbacks, and my response to him, below in
>>> the thread.
>>>
>>>>> That's why we have two dynamic properties 'args', and 'kwargs':
>>>>
>>>> Ok, but what I'm saying is that we don't really need them.
>>>
>>> We need them. Again, in some contexts you don't have the arguments
>>> you've passed to bind().
>>
>> But how could we *need* bind to return 'args' and 'kwargs' to us, when
>> we wouldn't be able to call bind in the first place if we didn't have
>> the arguments?
>
> You're missing the point. BoundArguments contains properly mapped
> *args and **kwargs passed to Signature.bind. You can validate them after,
> do type casts, modify them, overwrite etc. by manipulating
> 'BoundArguments.arguments'.
>
> At the end you can't, however, invoke the function by doing:
>
>     func(**bound_arguments.arguments)  # <- this won't work
>
> as varargs will be screwed.
>
> That's why you need 'args' & 'kwargs' properties on BoundArguments.
>
> Imagine that the "Annotation Checker" example is modified to coerce all
> string arguments to int (those that had 'int' in annotation) and then to
> multiply them by 42.
>
> We'd write the following code:
>
>     for arg_name, arg_value in bound_arguments.arguments.items():
>         # I'm skipping is_args & is_kwargs checks, and assuming
>         # we have annotations everywhere
>         if sig.parameters[arg_name].annotation is int \
>                 and isinstance(arg_value, str):
>             bound_arguments.arguments[arg_name] = int(arg_value) * 42
>
>     return func(*bound_arguments.args, **bound_arguments.kwargs)

I see. Thanks, this modifying example is the first convincing use case
I hear. Maybe it would be good to mention something like this in the
PEP.

Daniel

From ncoghlan at gmail.com  Thu Jun  7 08:56:56 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 7 Jun 2012 16:56:56 +1000
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To: <9221BC45-B221-4A14-9E23-F2401DB63F38@gmail.com>
References: <4FCF7965.5070102@pearwood.info> <94F6B981-560F-45FE-95DD-4C8E2D49075A@gmail.com> <9221BC45-B221-4A14-9E23-F2401DB63F38@gmail.com>
Message-ID: 

On Thu, Jun 7, 2012 at 11:16 AM, Yury Selivanov wrote:
> On 2012-06-06, at 9:00 PM, Nick Coghlan wrote:
> So, the idea for the 'signature(obj)' function is to first check if
> 'obj' has the '__signature__' attribute set; if yes - return it, if no -
> create a new one (but don't cache).

I'd say return a copy in the first case to be safe against accidental
modification. If someone actually wants in-place modification, they can
access __signature__ directly.

> I have a question about fixing 'functools.wraps()' - I'm not sure
> we need to. I see two solutions to the problem:
>
> I) We fix 'functools.wraps' to do:
>
>    'wrapper.__signature__ = signature(wrapped)'
>
> II) We modify the 'signature(obj)' function to do the following steps:
>
>    1. check if obj has a '__signature__' attribute. If yes - return it.
>
>    2. check if obj has a '__wrapped__' attribute. If yes:
>       obj = obj.__wrapped__; goto 1.
>
>    3. Calculate a new signature for obj and return it.
>
> I think that the second (II) approach is better, as we don't
> implicitly cache anything, and we don't calculate Signatures
> on each 'functools.wraps' call.

Oh, nice, I like it. Then the wrapped function only gets its own
signature attribute if it's actually being changed by one of the wrappers
and my example would become:

    def shared_vars(*shared_args):
        """Decorator factory that defines shared variables that are
        passed to every invocation of the function"""
        def decorator(f):
            @functools.wraps(f)
            def wrapper(*args, **kwds):
                full_args = shared_args + args
                return f(*full_args, **kwds)
            # Override signature
            sig = wrapper.__signature__ = inspect.signature(f)
            for __ in shared_args:
                sig.popitem()
            return wrapper
        return decorator

    @shared_vars({})
    def example(_state, arg1, arg2, arg3):
        # _state is for private communication between "shared_vars" and
        # the function; callers can't set it, and never see it (unless
        # they dig into example.__wrapped__)
        ...

Bonus: without implicit signature copying in functools, you can stick
with the plan of exposing everything via the inspect module.

We should still look into making whatever tweaks are needed to let
inspect.signature correctly handle functools.partial objects, though.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
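On the functools.partial point: the signature machinery that eventually
shipped in the inspect module does exactly this kind of tweaking,
dropping the arguments a partial object has already supplied. A small
sketch of that behaviour:

    import functools
    import inspect

    def f(a, b, c=3):
        pass

    p = functools.partial(f, 1, c=5)
    print(inspect.signature(p))  # (b, *, c=5): 'a' is gone, 'c' is pinned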
Brisbane, Australia

From jdmorgan at unca.edu Thu Jun 7 04:32:09 2012
From: jdmorgan at unca.edu (jdmorgan)
Date: Wed, 06 Jun 2012 22:32:09 -0400
Subject: [Python-Dev] next alpha sequence value from var pointing to array
Message-ID: <4FD012A9.2060901@gmail.com>

Hello,

I am hoping that this list is a good place to ask this question. I am
still fairly new to python, but find it to be a great scripting
language. Here is my issue:

I am attempting to utilize a function to receive any sequence of letter
characters and return to me the next value in alphabetic order, e.g.
send in "abc", get back "abd". I found a function on StackExchange
(Rosenfield, A 1995) that seems to work well enough (I think):

    def next(s):
        strip_zs = s.rstrip('z')
        if strip_zs:
            return strip_zs[:-1] + chr(ord(strip_zs[-1]) + 1) + 'a' * (len(s) - len(strip_zs))
        else:
            return 'a' * (len(s) + 1)

I have found this function works well if I call it directly with a
string enclosed in quotes:

    returnValue = next("abc")

However, if I call the function with a variable populated from a value
I obtain from an array[] it fails, returning only ^K. Unfortunately,
because I don't fully understand this next function, I can't really
interpret the error. Any help would be greatly appreciated.

Thanks ahead of time,
Derek

From ncoghlan at gmail.com Thu Jun 7 10:30:31 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 7 Jun 2012 18:30:31 +1000
Subject: [Python-Dev] next alpha sequence value from var pointing to array
In-Reply-To: <4FD012A9.2060901@gmail.com>
References: <4FD012A9.2060901@gmail.com>
Message-ID: 

On Thu, Jun 7, 2012 at 12:32 PM, jdmorgan wrote:
> Hello,
>
> I am hoping that this list is a good place to ask this question.

Close, but not quite the right place. This is a list for the design
and development *of* Python itself, rather than a list for using
Python. For this kind of question, you want python-list at python.org.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From sam.partington at gmail.com Thu Jun 7 12:08:09 2012
From: sam.partington at gmail.com (Sam Partington)
Date: Thu, 7 Jun 2012 11:08:09 +0100
Subject: [Python-Dev] [Python-checkins] cpython (3.2): #14957: clarify splitlines docs.
In-Reply-To: 
References: 
Message-ID: 

> On Jun 2, 2012 6:21 AM, "r.david.murray" wrote:
>> +   For example, ``'ab c\n\nde fg\rkl\r\n'.splitlines()`` returns
>> +   ``['ab c', '', 'de fg', 'kl']``, while the same call with ``splinelines(True)``
>> +   returns ``['ab c\n', '\n, 'de fg\r', 'kl\r\n']``

Wouldn't that be better written as a doctest and so avoid any other typos?

Sam

From steve at pearwood.info Thu Jun 7 14:04:20 2012
From: steve at pearwood.info (Steven D'Aprano)
Date: Thu, 07 Jun 2012 22:04:20 +1000
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To: 
References: <4FCF7965.5070102@pearwood.info> <4FCF7FDC.2080000@hastings.org> <4FCF826D.7040007@pearwood.info> <4FCFDBE9.8010808@pearwood.info>
Message-ID: <4FD098C4.7030906@pearwood.info>

Nick Coghlan wrote:
> On Thu, Jun 7, 2012 at 8:38 AM, Steven D'Aprano wrote:
>> Brett Cannon wrote:
>>> This is also Python, the language that assumes everyone is a consenting
>>> adult.
>>
>> Exactly, which is why I'm not asking for __signature__ to be immutable. Who
>> knows, despite Larry's skepticism (and mine!), perhaps there is a use-case
>> for __signature__ being modified that we haven't thought of yet.
>> But that's not really the point.
It may be that nobody will be stupid enough >> to mangle __signature__, and inspect.getfullargspec becomes redundant. > > I've presented use cases for doing this already. Please stop calling me stupid. I'm sorry Nick, I missed your email and my choice of words was poor. Please accept my apologies. -- Steven From ncoghlan at gmail.com Thu Jun 7 14:53:21 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 7 Jun 2012 22:53:21 +1000 Subject: [Python-Dev] Updated PEP 362 (Function Signature Object) In-Reply-To: <4FD098C4.7030906@pearwood.info> References: <4FCF7965.5070102@pearwood.info> <4FCF7FDC.2080000@hastings.org> <4FCF826D.7040007@pearwood.info> <4FCFDBE9.8010808@pearwood.info> <4FD098C4.7030906@pearwood.info> Message-ID: On Thu, Jun 7, 2012 at 10:04 PM, Steven D'Aprano wrote: > Nick Coghlan wrote: >> I've presented use cases for doing this already. Please stop calling me >> stupid. > > I'm sorry Nick, I missed your email and my choice of words was poor. Please > accept my apologies. Thanks and no worries. I can definitely see how it would seem like a crazy thing to do if you weren't looking at the problem specifically from a callable wrapping point of view :) Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From rdmurray at bitdance.com Thu Jun 7 14:55:57 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Thu, 07 Jun 2012 08:55:57 -0400 Subject: [Python-Dev] [Python-checkins] cpython (3.2): #14957: clarify splitlines docs. In-Reply-To: References: Message-ID: <20120607125558.41CD92500AD@webabinitio.net> On Thu, 07 Jun 2012 11:08:09 +0100, Sam Partington wrote: > > On Jun 2, 2012 6:21 AM, "r.david.murray" wrote: > >> + ?? For example, ``'ab c\n\nde fg\rkl\r\n'.splitlines()`` returns > >> + ?? ``['ab c', '', 'de fg', 'kl']``, while the same call with > >> ``splinelines(True)`` > >> + ?? returns ``['ab c\n', '\n, 'de fg\r', 'kl\r\n']`` > > Wouldn't that be better written as a doctest and so avoid any other typos? Possibly, except (1) I don't think we currently actually test the doctests in the python docs and (2) I'm following the style of the surrounding text (the split examples just above this are in the same inline style. Oh, and (3) it would make the text longer, which could be considered a negative. I have no objection myself to someone reformatting it, but if that is done the whole chapter should be gone over and given a consistent style. --David From fuzzyman at voidspace.org.uk Thu Jun 7 15:28:08 2012 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Thu, 7 Jun 2012 14:28:08 +0100 Subject: [Python-Dev] Updated PEP 362 (Function Signature Object) In-Reply-To: References: <4FCF7965.5070102@pearwood.info> <94F6B981-560F-45FE-95DD-4C8E2D49075A@gmail.com> Message-ID: On 6 Jun 2012, at 18:28, Yury Selivanov wrote: > On 2012-06-06, at 1:13 PM, Alexandre Zani wrote: >> A question regarding the name. I have often seen the following pattern >> in decorators: >> >> def decor(f): >> def some_func(a,b): >> do_stuff using f >> some_func.__name__ = f.__name__ >> return some_func >> >> What are the name and fully qualified names in the signature for the >> returned function? some_func.__name__ or f.__name__? > > Never copy attributes by hand, always use 'functools.wraps'. It copies > '__name__', '__qualname__', and bunch of other attributes to the decorator > object. > > We'll probably extend it to copy __signature__ too; then 'signature(decor(f))' > will be the same as 'signature(f)'. 
> I don't think functools.wraps can copy the signature by default - it's not uncommon to have decorators that modify signatures. A new parameter to functools.wraps defaulting to False? Michael > - > Yury > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk > -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html From yselivanov.ml at gmail.com Thu Jun 7 15:32:22 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Thu, 7 Jun 2012 09:32:22 -0400 Subject: [Python-Dev] Updated PEP 362 (Function Signature Object) In-Reply-To: References: <4FCF7965.5070102@pearwood.info> <94F6B981-560F-45FE-95DD-4C8E2D49075A@gmail.com> Message-ID: On 2012-06-07, at 9:28 AM, Michael Foord wrote: > On 6 Jun 2012, at 18:28, Yury Selivanov wrote: >> On 2012-06-06, at 1:13 PM, Alexandre Zani wrote: >> Never copy attributes by hand, always use 'functools.wraps'. It copies >> '__name__', '__qualname__', and bunch of other attributes to the decorator >> object. >> >> We'll probably extend it to copy __signature__ too; then 'signature(decor(f))' >> will be the same as 'signature(f)'. >> > > I don't think functools.wraps can copy the signature by default - it's not uncommon to have decorators that modify signatures. A new parameter to functools.wraps defaulting to False? http://mail.python.org/pipermail/python-dev/2012-June/120021.html We just won't copy it at all. See the link above. 'functools.wraps' already sets '__wrapped__' reference to the wrapped function, so we can easily traverse the chain to either first function with __signature__ defined, or to the most inner-decorated function and get a signature for it. - Yury From yselivanov.ml at gmail.com Thu Jun 7 15:37:54 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Thu, 7 Jun 2012 09:37:54 -0400 Subject: [Python-Dev] Updated PEP 362 (Function Signature Object) In-Reply-To: References: <4FCF7965.5070102@pearwood.info> <94F6B981-560F-45FE-95DD-4C8E2D49075A@gmail.com> <9221BC45-B221-4A14-9E23-F2401DB63F38@gmail.com> Message-ID: Nick, On 2012-06-07, at 2:56 AM, Nick Coghlan wrote: > On Thu, Jun 7, 2012 at 11:16 AM, Yury Selivanov wrote: >> On 2012-06-06, at 9:00 PM, Nick Coghlan wrote: >> So, the idea for the 'signature(obj)' function is to first check if >> 'obj' has '__signature__' attribute set, if yes - return it, if no - >> create a new one (but don't cache). > > I'd say return a copy in the first case to be safe against accidental > modification. If someone actually wants in-place modification, they > can access __signature__ directly. OK. And I'll implement __copy__ and __deepcopy__ for Signatures. > Bonus: without implicit signature copying in functools, you can stick > with the plan of exposing everything via the inspect module. Exactly! > We should still look into making whatever tweaks are needed to let > inspect.signature correctly handle functools.partial objects, though. Seems doable. 'partial' exposes 'func', 'args' and 'keywords' attributes, so we have all the information we need. 
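For illustration, here is a rough sketch of why those three attributes
are enough, using only today's introspection tools; 'partial_signature'
and its return format are invented for this example and are not part of
the PEP:

    import functools
    import inspect

    def partial_signature(p):
        # Positional parameters already supplied via p.args drop off
        # the front of the visible signature; parameters pre-filled
        # via p.keywords are effectively bound as well.
        spec = inspect.getfullargspec(p.func)
        remaining = spec.args[len(p.args):]
        remaining = [name for name in remaining if name not in p.keywords]
        return remaining, dict(p.keywords)

    def foo(a, b, c, d=4):
        pass

    p = functools.partial(foo, 1, c=3)
    print(partial_signature(p))  # (['b', 'd'], {'c': 3})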
Thank you, - Yury From ncoghlan at gmail.com Thu Jun 7 15:42:57 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 7 Jun 2012 23:42:57 +1000 Subject: [Python-Dev] Updated PEP 362 (Function Signature Object) In-Reply-To: References: <4FCF7965.5070102@pearwood.info> <94F6B981-560F-45FE-95DD-4C8E2D49075A@gmail.com> Message-ID: On Thu, Jun 7, 2012 at 11:28 PM, Michael Foord wrote: >> We'll probably extend it to copy __signature__ too; then 'signature(decor(f))' >> will be the same as 'signature(f)'. > > I don't think functools.wraps can copy the signature by default - it's not uncommon to have decorators that modify signatures. A new parameter to functools.wraps defaulting to False? Most wrapped functions already report the wrong signature under introspection anyway (typically (*args, **kwds)). Following the __wrapped__ chains by default will produce an answer that is more likely to be correct in such cases. The big win from my point of view is that thanks to __signature__, decorators that modify the signature will now have the opportunity to retrieve the signature from the underlying function and accurately report a *different* signature. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From larry at hastings.org Thu Jun 7 16:00:29 2012 From: larry at hastings.org (Larry Hastings) Date: Thu, 07 Jun 2012 07:00:29 -0700 Subject: [Python-Dev] Updated PEP 362 (Function Signature Object) In-Reply-To: References: <4FCF7965.5070102@pearwood.info> <94F6B981-560F-45FE-95DD-4C8E2D49075A@gmail.com> <9221BC45-B221-4A14-9E23-F2401DB63F38@gmail.com> Message-ID: <4FD0B3FD.6060708@hastings.org> On 06/06/2012 11:56 PM, Nick Coghlan wrote: > I'd say return a copy in the first case to be safe against accidental > modification. If someone actually wants in-place modification, they > can access __signature__ directly. I really don't understand this anxiety about mutable Signature objects. Can you give a plausible example of "accidental modification" of a Signature object? I for one--as clumsy as I am--cannot recall ever "accidentally" modifying an object. I really don't think signature() should bother copying/deep-copying the Signature before returning it. //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From larry at hastings.org Thu Jun 7 16:12:43 2012 From: larry at hastings.org (Larry Hastings) Date: Thu, 07 Jun 2012 07:12:43 -0700 Subject: [Python-Dev] Updated PEP 362 (Function Signature Object) In-Reply-To: References: <4FCF7965.5070102@pearwood.info> <94F6B981-560F-45FE-95DD-4C8E2D49075A@gmail.com> Message-ID: <4FD0B6DB.4010304@hastings.org> On 06/06/2012 06:00 PM, Nick Coghlan wrote: > On Thu, Jun 7, 2012 at 10:52 AM, Eric Snow wrote: >> Furthermore, using __signature__ as a cache may even cause problems. >> If the Signature object is cached then any changes to the function >> will not be reflected in the Signature object. Certainly that's an >> unlikely case, but it is a real case. [...] > +1 I'm missing something here. Can you give me an example of modifying an existing function object such that its Signature would change? Decorators implementing a closure with a different signature don't count--they return a new function object. > Providing a defined mechanism to declare a public signature is good, > but using that mechanism for implicit caching seems like a > questionable idea. Even when it *is* cached, I'd be happier if > inspect.signature() returned a copy rather than a direct reference to > the original. 
I'll say this: if we're going down this road of "don't cache Signature objects", then in the case of __signature__ we should definitely return a copy just for consistency's sakes. It'd be a miserable implementation if signature() sometimes returned the same object and sometimes returned different but equivalent objects when called multiple times on the same object. //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From yselivanov.ml at gmail.com Thu Jun 7 16:41:56 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Thu, 7 Jun 2012 10:41:56 -0400 Subject: [Python-Dev] PEP 362 Second Revision Message-ID: Hello, The new revision of PEP 362 has been posted: http://www.python.org/dev/peps/pep-0362/ Thanks to Brett, Larry, Nick, and everybody else on python-dev for your corrections/suggestions. Summary of changes: 1. We don't cache signatures in __signature__ attribute implicitly 2. signature() function is now more complex, but supports methods, partial objects, classes, callables, and decorated functions 3. Signatures are always constructed on demand 4. Dropped the deprecation section The implementation is not aligned with the latest PEP yet, I'll try to update it tonight. Thanks, - Yury From rdmurray at bitdance.com Thu Jun 7 16:45:34 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Thu, 07 Jun 2012 10:45:34 -0400 Subject: [Python-Dev] Updated PEP 362 (Function Signature Object) In-Reply-To: <4FD0B3FD.6060708@hastings.org> References: <4FCF7965.5070102@pearwood.info> <94F6B981-560F-45FE-95DD-4C8E2D49075A@gmail.com> <9221BC45-B221-4A14-9E23-F2401DB63F38@gmail.com> <4FD0B3FD.6060708@hastings.org> Message-ID: <20120607144534.92AF32500AD@webabinitio.net> On Thu, 07 Jun 2012 07:00:29 -0700, Larry Hastings wrote: > On 06/06/2012 11:56 PM, Nick Coghlan wrote: > > I'd say return a copy in the first case to be safe against accidental > > modification. If someone actually wants in-place modification, they > > can access __signature__ directly. > > I really don't understand this anxiety about mutable Signature objects. > Can you give a plausible example of "accidental modification" of a > Signature object? I for one--as clumsy as I am--cannot recall ever > "accidentally" modifying an object. Maybe it would make more sense if you read that as "naively" rather than "accidentally"? In the 3.3 email extension I made a similar decision, although there I went even further and made the objects read only. My logic for doing this is that a naive user would...naively...try to set the attributes and expect the object they got it from to change, but that object (a string subclass) is inherently read-only. I am thinking that in fact we may ultimately want to return copies of these objects that are mutable, because that might be useful, but I'm starting with read-only because it is easy to make them mutable later but pretty much impossible (backward compatibility wise) to make them immutable if they start mutable. I see the signature object as a very parallel case to this, except that it is already obvious that having them be a mutable copy is useful. 
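As a rough sketch of that copy-on-access behaviour (the
'_build_signature' helper here is a hypothetical stand-in, not the
PEP's actual code):

    import copy

    def _build_signature(obj):
        # Hypothetical stand-in: however signatures get computed from
        # scratch is left to the implementation.
        raise NotImplementedError

    def signature(obj):
        cached = getattr(obj, '__signature__', None)
        if cached is not None:
            # Hand back a mutable copy so callers can tweak it freely
            # without affecting the state of 'obj' itself.
            return copy.deepcopy(cached)
        return _build_signature(obj)

    def foo(a):
        pass

    foo.__signature__ = object()  # stand-in for a real Signature
    assert signature(foo) is not foo.__signature__  # always a copy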
--David

From yselivanov.ml at gmail.com Thu Jun 7 16:56:14 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Thu, 7 Jun 2012 10:56:14 -0400
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To: <20120607144534.92AF32500AD@webabinitio.net>
References: <4FCF7965.5070102@pearwood.info> <94F6B981-560F-45FE-95DD-4C8E2D49075A@gmail.com> <9221BC45-B221-4A14-9E23-F2401DB63F38@gmail.com> <4FD0B3FD.6060708@hastings.org> <20120607144534.92AF32500AD@webabinitio.net>
Message-ID: 

On 2012-06-07, at 10:45 AM, R. David Murray wrote:
> On Thu, 07 Jun 2012 07:00:29 -0700, Larry Hastings wrote:
>> On 06/06/2012 11:56 PM, Nick Coghlan wrote:
>>> I'd say return a copy in the first case to be safe against accidental
>>> modification. If someone actually wants in-place modification, they
>>> can access __signature__ directly.
>>
>> I really don't understand this anxiety about mutable Signature objects.
>> Can you give a plausible example of "accidental modification" of a
>> Signature object? I for one--as clumsy as I am--cannot recall ever
>> "accidentally" modifying an object.
>
> Maybe it would make more sense if you read that as "naively" rather than
> "accidentally"?
>
> In the 3.3 email extension I made a similar decision, although there I
> went even further and made the objects read only. My logic for doing
> this is that a naive user would...naively...try to set the attributes
> and expect the object they got it from to change, but that object (a
> string subclass) is inherently read-only.
>
> I am thinking that in fact we may ultimately want to return copies of
> these objects that are mutable, because that might be useful, but I'm
> starting with read-only because it is easy to make them mutable later
> but pretty much impossible (backward compatibility wise) to make them
> immutable if they start mutable.
>
> I see the signature object as a very parallel case to this, except that
> it is already obvious that having them be a mutable copy is useful.

I think the copy approach is better for Signatures than immutability.
It should be OK to, for instance, get a signature, modify some
parameter information, and then try to bind some arguments.

Again, there's the case of decorators, which get a signature and modify
it, as the wrapper's signature is different from the original
function's.

- Yury

From urban.dani+py at gmail.com Thu Jun 7 17:45:29 2012
From: urban.dani+py at gmail.com (Daniel Urban)
Date: Thu, 7 Jun 2012 17:45:29 +0200
Subject: [Python-Dev] [Python-checkins] peps: Update 422 based on python-dev feedback
In-Reply-To: 
References: 
Message-ID: 

On Thu, Jun 7, 2012 at 2:08 PM, nick.coghlan wrote:
> -* If the metaclass hint refers to an instance of ``type``, then it is
> +* If the metaclass hint refers to a subclass of ``type``, then it is
>   considered as a candidate metaclass along with the metaclasses of all of
>   the parents of the class being defined. If a more appropriate metaclass is
>   found amongst the candidates, then it will be used instead of the one

I think here "instance" was correct (see
http://hg.python.org/cpython/file/default/Lib/types.py#l76 and
http://hg.python.org/cpython/file/cedc68440a67/Python/bltinmodule.c#l90).
Daniel From ericsnowcurrently at gmail.com Thu Jun 7 18:54:15 2012 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Thu, 7 Jun 2012 10:54:15 -0600 Subject: [Python-Dev] Updated PEP 362 (Function Signature Object) In-Reply-To: References: <4FCF7965.5070102@pearwood.info> <94F6B981-560F-45FE-95DD-4C8E2D49075A@gmail.com> Message-ID: On Wed, Jun 6, 2012 at 11:10 AM, Yury Selivanov wrote: > On 2012-06-06, at 1:02 PM, Eric Snow wrote: >> I'm with Steven on this one. ?What's the benefit to storing the name >> or qualname on the signature object? ?That ties the signature object >> to a specific function. ?If you access the signature object by >> f.__signature__ then you already have f and its name. ?If you get it >> by calling signature(f), then you also have f and the name. ?If you >> are passing signature objects for some reason and there's a use case >> for which the name/qualname matters, wouldn't it be better to pass the >> functions around anyway? ?What about when you create a signature >> object on its own and you don't care about the name or qualname...why >> should it need them? ?Does Signature.bind() need them? > > Yes, 'Signature.bind' needs 'qualname' for error messages. ?But it can be > stored as a private attribute. > > I like the idea of 'foo(a)' and 'bar(a)' having the identical signatures, > however, I don't think it's possible. ?I.e. we can't make it that the > 'signature(foo) is signature(bar)'. ?We can implement the __eq__ operator > though. Adding __eq__ and __ne__ is a good thing. However, I don't care so much about the identity/equality relationship as I do about the relationship between Signature objects and functions. Regarding name and qualname, several questions: * Is the use of qualname in errors part of the PEP or an implementation detail? * Can name and qualname be None? * If so, do the errors that rely on them still work? * What do you get when you pass a lambda to inspect.signature()? Thinking more about this, for me this PEP is more about call signatures than about function declaration signatures. For a call you don't need the callable's name (think lambdas). Having a name and qualname attribute communicates that Signature objects are indelibly tied to functions. So I still don't think name and qualname are intrinsic attributes to call signatures. In the end this is a relatively minor point that does not detract from my support of this PEP. I just think we should strip any superfluous details we can now. Adding things later is easier than taking them away. :) Warning: the following may seem to contradict what I said above. Take it for what it's worth. Regardless of name/qualname, it may be useful to keep a weak reference on the Signature object to the function on which it is based, *if any*. That's not essential to the concept of call signatures, but it could still be meaningful for introspection. Alternately, let's consider the case where the PEP actually is all about introspecting function declarations (rather than call signatures). Then the attributes of a Signature object could just be properties exposing the underlying details from the attached function/code object. And finally, keep up the good work! 
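As a minimal sketch of the weak-reference idea above (the 'Signature'
shell and its 'function' property are invented for illustration and
are not part of the PEP):

    import weakref

    class Signature:
        def __init__(self, func=None):
            # A non-owning link back to the callable, *if any*; it
            # never keeps the function alive, and the property below
            # returns None once the function is gone.
            self._func_ref = weakref.ref(func) if func is not None else None

        @property
        def function(self):
            return self._func_ref() if self._func_ref is not None else None

    def foo(a):
        pass

    sig = Signature(foo)
    print(sig.function is foo)  # True, while foo is still alive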
-eric

From ericsnowcurrently at gmail.com Thu Jun 7 19:08:44 2012
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Thu, 7 Jun 2012 11:08:44 -0600
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To: <4FD0B6DB.4010304@hastings.org>
References: <4FCF7965.5070102@pearwood.info> <94F6B981-560F-45FE-95DD-4C8E2D49075A@gmail.com> <4FD0B6DB.4010304@hastings.org>
Message-ID: 

On Thu, Jun 7, 2012 at 8:12 AM, Larry Hastings wrote:
> On 06/06/2012 06:00 PM, Nick Coghlan wrote:
>
>> On Thu, Jun 7, 2012 at 10:52 AM, Eric Snow wrote:
>>
>> Furthermore, using __signature__ as a cache may even cause problems.
>> If the Signature object is cached then any changes to the function
>> will not be reflected in the Signature object. Certainly that's an
>> unlikely case, but it is a real case. [...]
>
> +1
>
>
> I'm missing something here. Can you give me an example of modifying an
> existing function object such that its Signature would change? Decorators
> implementing a closure with a different signature don't count--they return a
> new function object.

I doubt there are any but corner cases to demonstrate here. I don't
presume to say what use there may be in changing a function's state.
However, the fact is that a change to any of the following would cause
a cached __signature__ to be out of sync:

* f.__annotations__
* f.__closure__
* f.__code__
* f.__defaults__
* f.__globals__
* f.__kwdefaults__
* f.__name__
* f.__qualname__

All of these are at least replaceable, if not mutable. If
inspect.signature() is intended for function introspection purposes,
I'd expect its return value (cached on __signature__ or not) to
reflect the function and not a previous state of the function.

I prefer the idea that f.__signature__ means it was set explicitly.
Then there's no doubt as to what it's meant to reflect.

>> Providing a defined mechanism to declare a public signature is good,
>> but using that mechanism for implicit caching seems like a
>> questionable idea. Even when it *is* cached, I'd be happier if
>> inspect.signature() returned a copy rather than a direct reference to
>> the original.
>
>
> I'll say this: if we're going down this road of "don't cache Signature
> objects", then in the case of __signature__ we should definitely return a
> copy just for consistency's sakes. It'd be a miserable implementation if
> signature() sometimes returned the same object and sometimes returned
> different but equivalent objects when called multiple times on the same
> object.

+1

-eric

From yselivanov.ml at gmail.com Thu Jun 7 20:14:22 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Thu, 7 Jun 2012 14:14:22 -0400
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To: 
References: <4FCF7965.5070102@pearwood.info> <94F6B981-560F-45FE-95DD-4C8E2D49075A@gmail.com>
Message-ID: 

Eric,

On 2012-06-07, at 12:54 PM, Eric Snow wrote:
> On Wed, Jun 6, 2012 at 11:10 AM, Yury Selivanov wrote:
>> I like the idea of 'foo(a)' and 'bar(a)' having the identical signatures,
>> however, I don't think it's possible. I.e. we can't make it that the
>> 'signature(foo) is signature(bar)'. We can implement the __eq__ operator
>> though.
>
> Adding __eq__ and __ne__ is a good thing.

We still don't have a good use case for this, though.

> However, I don't care so
> much about the identity/equality relationship as I do about the
> relationship between Signature objects and functions.

Yes, that's the core issue we have - the relation between Signature
objects and functions.
> Regarding name and qualname, several questions: > > * Is the use of qualname in errors part of the PEP or an implementation detail? It's more of an implementation detail. And those errors may be not that important, after all. Probably, the 'Signature.bind()' will be used in contexts where you have either the function itself, or at least its name to catch the BindError and add some context to it. > * Can name and qualname be None? There is no explicit check for it in the code. Given the fact, that we now want ''signature()'' to work with any callable, it may be that the callable object doesn't have '__name__' and '__qualname__' attributes (unless we copy them from its __call__, and that's a doubtful move) > * If so, do the errors that rely on them still work? The code will break. > * What do you get when you pass a lambda to inspect.signature()? It will work fine, since lambdas are instances of FunctionType. > Thinking more about this, for me this PEP is more about call > signatures than about function declaration signatures. For a call you > don't need the callable's name (think lambdas). Having a name and > qualname attribute communicates that Signature objects are indelibly > tied to functions. So I still don't think name and qualname are > intrinsic attributes to call signatures. Yes, you're right. Brett and Larry also leaning towards defining Signature as a piece of information about the call semantics, disconnected from the function. I like this too, it makes everything simpler. I think we'll modify the PEP today, to drop 'name' and 'qualname' from Signature, and implement __eq__. > In the end this is a relatively minor point that does not detract from > my support of this PEP. I just think we should strip any superfluous > details we can now. Adding things later is easier than taking them > away. :) Thanks :) You're right, we can add 'name' and 'qualname' later. > Warning: the following may seem to contradict what I said above. Take > it for what it's worth. Regardless of name/qualname, it may be useful > to keep a weak reference on the Signature object to the function on > which it is based, *if any*. That's not essential to the concept of > call signatures, but it could still be meaningful for introspection. I think that maintaining an explicit link between functions and signatures (by passing them around together, or just by passing the function and calculating signature out of it when needed) in the user code is better, than do it implicitly in the Signature. Let's not overcomplicate it all ;) > Alternately, let's consider the case where the PEP actually is all > about introspecting function declarations (rather than call > signatures). Then the attributes of a Signature object could just be > properties exposing the underlying details from the attached > function/code object. How about that later, if needed, we'll add another object to 'inspect', that will have this information about the function, and will also have its signature? - Yury From larry at hastings.org Thu Jun 7 20:34:49 2012 From: larry at hastings.org (Larry Hastings) Date: Thu, 07 Jun 2012 11:34:49 -0700 Subject: [Python-Dev] Updated PEP 362 (Function Signature Object) In-Reply-To: References: <4FCF7965.5070102@pearwood.info> <94F6B981-560F-45FE-95DD-4C8E2D49075A@gmail.com> <4FD0B6DB.4010304@hastings.org> Message-ID: <4FD0F449.1080309@hastings.org> On 06/07/2012 10:08 AM, Eric Snow wrote: >> I'm missing something here. 
Can you give me an example of modifying an >> existing function object such that its Signature would change? Decorators >> implementing a closure with a different signature don't count--they return a >> new function object. > I doubt there are any but corner cases to demonstrate here. I'd don't > presume to say what use there may be in changing a function's state. > However, the fact is that a change to any of the following would cause > a cached __signature__ to be out of sync: > > * f.__annotations__ > * f.__closure__ > * f.__code__ > [... other dunder attributes elided ...] In other words: this is possible but extremely unlikely, and will only be done knowingly and with deliberate intent by a skilled practitioner. I think it's reasonable to declare that, if you're monkeying around with dunder attributes on a function, it's up to you to clear the f.__signature__ cache if it's set. Like Spiderman's uncle Cliff Robertson said: with great power comes great responsibility. I am now firmly in the "using __signature__ as a cache is fine, don't make copies for no reason" camp. //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjreedy at udel.edu Thu Jun 7 20:08:47 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 07 Jun 2012 14:08:47 -0400 Subject: [Python-Dev] [Python-checkins] cpython (3.2): Nudge readers towards a more accurate mental model for loop else clauses In-Reply-To: References: Message-ID: <4FD0EE2F.3050808@udel.edu> On 6/7/2012 8:42 AM, nick.coghlan wrote: > http://hg.python.org/cpython/rev/6e4ec47fba6a > changeset: 77369:6e4ec47fba6a > branch: 3.2 > parent: 77363:aa9cfeea07ad > user: Nick Coghlan > date: Thu Jun 07 22:41:34 2012 +1000 > summary: > Nudge readers towards a more accurate mental model for loop else clauses > > files: > Doc/tutorial/controlflow.rst | 7 +++++++ > 1 files changed, 7 insertions(+), 0 deletions(-) > > > diff --git a/Doc/tutorial/controlflow.rst b/Doc/tutorial/controlflow.rst > --- a/Doc/tutorial/controlflow.rst > +++ b/Doc/tutorial/controlflow.rst > @@ -187,6 +187,13 @@ > (Yes, this is the correct code. Look closely: the ``else`` clause belongs to > the :keyword:`for` loop, **not** the :keyword:`if` statement.) > > +When used with a loop, the ``else`` clause has more in common with the > +``else`` clause of a :keyword:`try` statement than it does that of > +:keyword:`if` statements: a :keyword:`try` statement's ``else`` clause runs > +when no exception occurs, (And there no return. But that is true of of every statement.) I think the above is wrong. I claim that try-statement else: is essentially identical to if-statement else:. The try-statement else: clause is subordinate to the except: clauses, not the try: part itself. It must follow at least one except condition just as an if-statement must follow at least one if condition. So it is really an except else, not a try else. Furthermore, 'except SomeError' is an abbreviation for 'if/elif SomeError was raised since the corresponding try' and more particularly 'if/elif isinstance(__exception__, SomeError). I use __exception__ here as the equivalent of C's errno, except that it is hidden. (If it were not hidden, we would not need 'as e', and indeed, we would not really need 'except' either.) The else clause runs when all the implied conditionals of the excepts are false. Just as an if-statement else clause runs when all the explicit conditionals of the if/elifs are false. The real contrast is between if/except else and loop else. 
The latter is subordinate to exactly one condition instead of possibly many, but that one condition may be evaluated multiple times instead of just once. It is the latter fact that seems to confuse people. > and a loop's ``else`` clause runs when no ``break`` occurs. As I explained on Python-ideas, the else clause runs when the loop condition is false. Period. This sentence is an incomplete equivalent to 'the loop condition is false'. The else clause runs when no 'break', no 'return', no 'raise' (explicit or implicit), AND no infinite loop occurs. I think your addition should be reverted. Here is a possible alternative if you think something more is needed. "An else clause used with a loop has two differences from an else clause used with an if statement. It is subordinate to just one condition instead of possibly many. (In for statements, the condition is implicit but still there.) That one condition is tested repeatedly instead of just once. A loop else is the same in that it triggers when that one condition is false." --- Terry Jan Reedy From tjreedy at udel.edu Thu Jun 7 21:54:36 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 07 Jun 2012 15:54:36 -0400 Subject: [Python-Dev] PEP 362 Second Revision In-Reply-To: References: Message-ID: On 6/7/2012 10:41 AM, Yury Selivanov wrote: > Hello, > > The new revision of PEP 362 has been posted: > http://www.python.org/dev/peps/pep-0362/ > > Thanks to Brett, Larry, Nick, and everybody else on python-dev > for your corrections/suggestions. > > Summary of changes: > > 1. We don't cache signatures in __signature__ attribute implicitly > > 2. signature() function is now more complex, but supports methods, > partial objects, classes, callables, and decorated functions > > 3. Signatures are always constructed on demand > > 4. Dropped the deprecation section I like this now. Being more able to get the actual signature of partials and wraps will be a win. If the signature object has a decent __str__/__repr__ method, I would (try to remember) to revise idle tooltips (for whichever version) to check for .__signature__ before inspect'ing. -- Terry Jan Reedy From urban.dani+py at gmail.com Thu Jun 7 22:17:15 2012 From: urban.dani+py at gmail.com (Daniel Urban) Date: Thu, 7 Jun 2012 22:17:15 +0200 Subject: [Python-Dev] [Python-checkins] peps: Update 422 based on python-dev feedback In-Reply-To: References: Message-ID: On Thu, Jun 7, 2012 at 9:47 PM, Terry Reedy wrote: > On 6/7/2012 11:45 AM, Daniel Urban wrote: >> >> On Thu, Jun 7, 2012 at 2:08 PM, nick.coghlan >> ?wrote: >>> >>> -* If the metaclass hint refers to an instance of ``type``, then it is >>> +* If the metaclass hint refers to a subclass of ``type``, then it is >>> ? considered as a candidate metaclass along with the metaclasses of all >>> of >>> ? the parents of the class being defined. If a more appropriate metaclass >>> is >>> ? found amongst the candidates, then it will be used instead of the one >> >> >> I think here "instance" was correct (see >> http://hg.python.org/cpython/file/default/Lib/types.py#l76 and >> http://hg.python.org/cpython/file/cedc68440a67/Python/bltinmodule.c#l90). > > > If so, then the behavior of the standard case of a type subclass is not > obviously (to me) covered. A subclass of type is also necessarily an instance of type, so that is also covered by this case. 
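A short demonstration of that relationship:

    class Meta(type):
        pass

    # Any subclass of type is itself created by type, so it is also an
    # instance of type; an isinstance() check therefore covers both.
    print(issubclass(Meta, type))  # True
    print(isinstance(Meta, type))  # True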
Daniel

From yselivanov.ml at gmail.com Thu Jun 7 22:54:52 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Thu, 7 Jun 2012 16:54:52 -0400
Subject: [Python-Dev] PEP 362 Second Revision
In-Reply-To: 
References: 
Message-ID: <2E0F03CF-4F50-4080-AFCE-45C9E95E915B@gmail.com>

On 2012-06-07, at 3:54 PM, Terry Reedy wrote:
> On 6/7/2012 10:41 AM, Yury Selivanov wrote:
>> Hello,
>>
>> The new revision of PEP 362 has been posted:
>> http://www.python.org/dev/peps/pep-0362/
>>
>> Thanks to Brett, Larry, Nick, and everybody else on python-dev
>> for your corrections/suggestions.
>>
>> Summary of changes:
>>
>> 1. We don't cache signatures in __signature__ attribute implicitly
>>
>> 2. signature() function is now more complex, but supports methods,
>> partial objects, classes, callables, and decorated functions
>>
>> 3. Signatures are always constructed on demand
>>
>> 4. Dropped the deprecation section
>
> I like this now. Being more able to get the actual signature of partials
> and wraps will be a win. If the signature object has a decent
> __str__/__repr__ method, I would (try to remember) to revise idle
> tooltips (for whichever version) to check for .__signature__ before
> inspect'ing.

I think we'll add a 'format' method to the Signature, that will work
like 'inspect.formatargspec'. 'Signature.__str__' will use it with
default parameters/formatters.

I'm not sure how __repr__ should look like. Maybe default repr
(object.__repr__) is good enough.

- Yury

From yselivanov.ml at gmail.com Thu Jun 7 23:53:57 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Thu, 7 Jun 2012 17:53:57 -0400
Subject: [Python-Dev] PEP 362 Second Revision
In-Reply-To: <4FD11FAA.2040308@udel.edu>
References: <2E0F03CF-4F50-4080-AFCE-45C9E95E915B@gmail.com> <4FD11FAA.2040308@udel.edu>
Message-ID: <06154370-35AD-45C6-B731-50665554E111@gmail.com>

On 2012-06-07, at 5:39 PM, Terry Reedy wrote:
> On 6/7/2012 4:54 PM, Yury Selivanov wrote:
>
>> I think we'll add a 'format' method to the Signature, that will work
>> like 'inspect.formatargspec'. 'Signature.__str__' will use it with
>> default parameters/formatters.
>
> Great. If I don't like the default, I could customize.

Can you tell me how you use those customizations (i.e. formatvarargs,
formatarg and other arguments of formatargspec)?

>> I'm not sure how __repr__ should look like. Maybe default repr
>> (object.__repr__) is good enough.
>
> __repr__ = __str__ is common.
>
> Idle tooltips use an re to strip 'self[, ]' from the inspect.formatargspec
> result*. I have revised the code to only do that when appropriate (for
> bound instance methods and callable instances), which is to say, when the
> user has already entered the object that will become the self parameter.
> If signature does the same, I might delete the code and use the signature
> object instead.
>
> *The same could be done for class methods, but I am not sure that 'cls' is
> standard enough to bother. Of course, any function using anything other
> than 'self' will also not see the deletion. Come to think of it, now that
> I am doing the search-and-replace conditionally rather than always, I can
> and should re-write the re to remove the first name rather than 'self'
> specifically.

It will be good to have all such signature manipulations done correctly
in one place.

Well, signature won't strip parameters based on their names, but rather
based on the callable type. If it's a method, then no matter how its
first parameter is named - it will be omitted. Same applies to
classmethods.
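To illustrate the behaviour being described with today's introspection
tools (this is not the new signature() code itself):

    import inspect

    class C:
        def method(this, a, b):  # first parameter deliberately not 'self'
            pass

    bound = C().method
    # getfullargspec still reports the underlying function's parameters...
    print(inspect.getfullargspec(bound).args)  # ['this', 'a', 'b']
    # ...but the bound method type tells us the first one is already
    # filled by bound.__self__, whatever its name happens to be.
    print(bound.__self__)  # the C instance that 'this' is bound to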
- Yury From ncoghlan at gmail.com Fri Jun 8 00:05:22 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 8 Jun 2012 08:05:22 +1000 Subject: [Python-Dev] [Python-checkins] cpython (3.2): Nudge readers towards a more accurate mental model for loop else clauses In-Reply-To: <4FD0EE2F.3050808@udel.edu> References: <4FD0EE2F.3050808@udel.edu> Message-ID: The inaccuracies in the analogy are why this is in the tutorial, not the language reference. All 3 else clauses are really their own thing. For if statements, the full construct is "if/elif/else", for loops it is "for/break/else" and "while/break/else" and for try statements it is "try/except/else". Early returns and uncaught exceptions will skip any of the three. However, emphasising the link to if statements has been demonstrably confusing for years, due to the interpretation of empty iterables as False in a boolean context. The new text is intended to emphasise that, no, the for loop else clause does *not* map directly to an if statement that checks that iterable. -- Sent from my phone, thus the relative brevity :) On Jun 8, 2012 5:12 AM, "Terry Reedy" wrote: > > > On 6/7/2012 8:42 AM, nick.coghlan wrote: > >> http://hg.python.org/cpython/**rev/6e4ec47fba6a >> changeset: 77369:6e4ec47fba6a >> branch: 3.2 >> parent: 77363:aa9cfeea07ad >> user: Nick Coghlan >> date: Thu Jun 07 22:41:34 2012 +1000 >> summary: >> Nudge readers towards a more accurate mental model for loop else clauses >> >> files: >> Doc/tutorial/controlflow.rst | 7 +++++++ >> 1 files changed, 7 insertions(+), 0 deletions(-) >> >> >> diff --git a/Doc/tutorial/controlflow.rst b/Doc/tutorial/controlflow.rst >> --- a/Doc/tutorial/controlflow.rst >> +++ b/Doc/tutorial/controlflow.rst >> @@ -187,6 +187,13 @@ >> (Yes, this is the correct code. Look closely: the ``else`` clause >> belongs to >> the :keyword:`for` loop, **not** the :keyword:`if` statement.) >> >> +When used with a loop, the ``else`` clause has more in common with the >> +``else`` clause of a :keyword:`try` statement than it does that of >> +:keyword:`if` statements: a :keyword:`try` statement's ``else`` clause >> runs >> +when no exception occurs, >> > > (And there no return. But that is true of of every statement.) > > I think the above is wrong. > > I claim that try-statement else: is essentially identical to if-statement > else:. The try-statement else: clause is subordinate to the except: > clauses, not the try: part itself. It must follow at least one except > condition just as an if-statement must follow at least one if condition. So > it is really an except else, not a try else. > > Furthermore, 'except SomeError' is an abbreviation for 'if/elif SomeError > was raised since the corresponding try' and more particularly 'if/elif > isinstance(__exception__, SomeError). I use __exception__ here as the > equivalent of C's errno, except that it is hidden. (If it were not hidden, > we would not need 'as e', and indeed, we would not really need 'except' > either.) The else clause runs when all the implied conditionals of the > excepts are false. Just as an if-statement else clause runs when all the > explicit conditionals of the if/elifs are false. > > The real contrast is between if/except else and loop else. The latter is > subordinate to exactly one condition instead of possibly many, but that one > condition may be evaluated multiple times instead of just once. It is the > latter fact that seems to confuse people. > > and a loop's ``else`` clause runs when no ``break`` occurs. 
>> > > As I explained on Python-ideas, the else clause runs when the loop > condition is false. Period. This sentence is an incomplete equivalent to > 'the loop condition is false'. The else clause runs when no 'break', no > 'return', no 'raise' (explicit or implicit), AND no infinite loop occurs. > > I think your addition should be reverted. Here is a possible alternative > if you think something more is needed. > > "An else clause used with a loop has two differences from an else clause > used with an if statement. It is subordinate to just one condition instead > of possibly many. (In for statements, the condition is implicit but still > there.) That one condition is tested repeatedly instead of just once. A > loop else is the same in that it triggers when that one condition is false." > > --- > Terry Jan Reedy > > ______________________________**_________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/**mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/**mailman/options/python-dev/** > ncoghlan%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjreedy at udel.edu Thu Jun 7 23:39:54 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 07 Jun 2012 17:39:54 -0400 Subject: [Python-Dev] PEP 362 Second Revision In-Reply-To: <2E0F03CF-4F50-4080-AFCE-45C9E95E915B@gmail.com> References: <2E0F03CF-4F50-4080-AFCE-45C9E95E915B@gmail.com> Message-ID: <4FD11FAA.2040308@udel.edu> On 6/7/2012 4:54 PM, Yury Selivanov wrote: > I think we'll add a 'format' method to the Signature, that will work > like 'inspect.formatargspec'. 'Signature.__str__' will use it with > default parameters/formatters. Great. If I don't like the default, I could customize. > I'm not sure how __repr__ should look like. Maybe default repr > (object.__repr__) is good enough. __repr__ = __str__ is common. Idle tooltips use an re to strip 'self[, ]' from the inspect.formatargspec result*. I have revised the code to only do that when appropriate (for bound instance methods and callable instances), which is to say, when the user has already entered the object that will become the self parameter). If signature does the same, I might delete the code and use the signature object instead. *The same could be done for class methods, but I am not sure that 'cls' is standard enough to bother. Of course, any function using anything other than 'self' will also not see the deletion. Come to think of it, now that I am doing the search-and-replace conditionally rather than always, I can and should re-write the re to remove the first name rather than 'self' specifically. It will be good to have all such signature manipulations done correctly in one place. --- Terry Jan Reedy From rdmurray at bitdance.com Fri Jun 8 02:40:00 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Thu, 07 Jun 2012 20:40:00 -0400 Subject: [Python-Dev] PEP 362 Second Revision In-Reply-To: <4FD11FAA.2040308@udel.edu> References: <2E0F03CF-4F50-4080-AFCE-45C9E95E915B@gmail.com> <4FD11FAA.2040308@udel.edu> Message-ID: <20120608004001.384BC2500AD@webabinitio.net> On Thu, 07 Jun 2012 17:39:54 -0400, Terry Reedy wrote: > On 6/7/2012 4:54 PM, Yury Selivanov wrote: > > > I think we'll add a 'format' method to the Signature, that will work > > like 'inspect.formatargspec'. 'Signature.__str__' will use it with > > default parameters/formatters. > > Great. If I don't like the default, I could customize. > > > I'm not sure how __repr__ should look like. 
Maybe default repr > > (object.__repr__) is good enough. > > __repr__ = __str__ is common. I think you meant __str__ = __repr__. __repr__ is the more fundamental of the two, and if there is no __str__, it defaults to __repr__. IMO the __repr__ should make it clear that it is a signature object somehow. --David From ncoghlan at gmail.com Fri Jun 8 03:10:26 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 8 Jun 2012 11:10:26 +1000 Subject: [Python-Dev] [Python-checkins] peps: Update 422 based on python-dev feedback In-Reply-To: References: Message-ID: On Fri, Jun 8, 2012 at 1:45 AM, Daniel Urban wrote: > On Thu, Jun 7, 2012 at 2:08 PM, nick.coghlan wrote: >> -* If the metaclass hint refers to an instance of ``type``, then it is >> +* If the metaclass hint refers to a subclass of ``type``, then it is >> ? considered as a candidate metaclass along with the metaclasses of all of >> ? the parents of the class being defined. If a more appropriate metaclass is >> ? found amongst the candidates, then it will be used instead of the one > > I think here "instance" was correct (see > http://hg.python.org/cpython/file/default/Lib/types.py#l76 and > http://hg.python.org/cpython/file/cedc68440a67/Python/bltinmodule.c#l90). Hmm, thinking back on it, the REPL experiments that persuaded me Terry was right were flawed (I tried with object directly, but the signature of __new__/__init__ would have been wrong regardless in that case). Still, I'm kinda proving my point that I find it difficult to keep *all* the details of metaclass invocation straight in my head, even though I've been hacking on the type system for years. I've never had anything even close to that kind of problem with class methods :) Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From ncoghlan at gmail.com Fri Jun 8 03:24:45 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 8 Jun 2012 11:24:45 +1000 Subject: [Python-Dev] Updated PEP 362 (Function Signature Object) In-Reply-To: <4FD0F449.1080309@hastings.org> References: <4FCF7965.5070102@pearwood.info> <94F6B981-560F-45FE-95DD-4C8E2D49075A@gmail.com> <4FD0B6DB.4010304@hastings.org> <4FD0F449.1080309@hastings.org> Message-ID: On Fri, Jun 8, 2012 at 4:34 AM, Larry Hastings wrote: > In other words: this is possible but extremely unlikely, and will only be > done knowingly and with deliberate intent by a skilled practitioner. > > I think it's reasonable to declare that, if you're monkeying around with > dunder attributes on a function, it's up to you to clear the f.__signature__ > cache if it's set.? Like Spiderman's uncle Cliff Robertson said: with great > power comes great responsibility. > > I am now firmly in the "using __signature__ as a cache is fine, don't make > copies for no reason" camp. I have a simpler rule: functions in the inspect module should not have side effects on the objects they're used to inspect. When I call "inspect.signature(f)", I expect to get something I can modify without affecting the state of "f". That means, even if f.__signature__ is set, the signature function will need to return a copy rather than a direct reference to the original. If f.__signature__ is going to be copied *anyway*, then there's no reason to cache it, *unless* we want to say something other than what the inspect module would automatically derive from other attributes like __func__, __wrapped__, __call__, __code__, __closure__, etc. 
There are lots of ways that implicit caching can go wrong and fail to
reflect the true state of the system, and unlike a centralised cache
such as linecache or those in importlib, a distributed cache is hard to
clear when the state of the system changes.

If "signature creation" ever proves to be a real bottleneck in an
application, then it is free to implement its own identity-based cache
for signature lookup.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From steve at pearwood.info Fri Jun 8 04:08:09 2012
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 08 Jun 2012 12:08:09 +1000
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To: 
References: <4FCF7965.5070102@pearwood.info> <94F6B981-560F-45FE-95DD-4C8E2D49075A@gmail.com> <4FD0B6DB.4010304@hastings.org> <4FD0F449.1080309@hastings.org>
Message-ID: <4FD15E89.2090801@pearwood.info>

Nick Coghlan wrote:
> On Fri, Jun 8, 2012 at 4:34 AM, Larry Hastings wrote:
>> In other words: this is possible but extremely unlikely, and will only be
>> done knowingly and with deliberate intent by a skilled practitioner.
>>
>> I think it's reasonable to declare that, if you're monkeying around with
>> dunder attributes on a function, it's up to you to clear the f.__signature__
>> cache if it's set. Like Spiderman's uncle Cliff Robertson said: with great
>> power comes great responsibility.
>>
>> I am now firmly in the "using __signature__ as a cache is fine, don't make
>> copies for no reason" camp.
>
> I have a simpler rule: functions in the inspect module should not have
> side effects on the objects they're used to inspect.
>
> When I call "inspect.signature(f)", I expect to get something I can
> modify without affecting the state of "f". That means, even if
> f.__signature__ is set, the signature function will need to return a
> copy rather than a direct reference to the original. If
> f.__signature__ is going to be copied *anyway*, then there's no reason
> to cache it, *unless* we want to say something other than what the
> inspect module would automatically derive from other attributes like
> __func__, __wrapped__, __call__, __code__, __closure__, etc.

There is still a potential reason to cache func.__signature__: it's a
relatively large chunk of fields, which duplicates a lot of already
existing data. Why make all function objects bigger when only a small
minority will be inspected for their __signature__?

I think that lazy evaluation of __signature__ is desirable, and caching
it is certainly desirable now that you have convinced me that there are
use-cases for setting __signature__.

Perhaps func.__signature__ should be computed the first time it is
accessed? Something conceptually like this:

class FunctionWithSignature(types.FunctionType):
    @property
    def __signature__(self):
        if hasattr(self, '_sig'):
            return self._sig
        sig = self._get_signature()  # Left as an exercise for the reader.
        self._sig = sig
        return sig

    @__signature__.setter
    def __signature__(self, sig):
        self._sig = sig

    @__signature__.deleter
    def __signature__(self):
        del self._sig

--
Steven

From larry at hastings.org Fri Jun 8 04:18:54 2012
From: larry at hastings.org (Larry Hastings)
Date: Thu, 07 Jun 2012 19:18:54 -0700
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To: <4FD15E89.2090801@pearwood.info>
References: <4FCF7965.5070102@pearwood.info> <94F6B981-560F-45FE-95DD-4C8E2D49075A@gmail.com> <4FD0B6DB.4010304@hastings.org> <4FD0F449.1080309@hastings.org> <4FD15E89.2090801@pearwood.info>
Message-ID: <4FD1610E.80403@hastings.org>

On 06/07/2012 07:08 PM, Steven D'Aprano wrote:
> Perhaps func.__signature__ should be computed the first time it is
> accessed?

The PEP already declares that signatures are lazily generated.
signature() checks to see if __signature__ is set, and if it is returns
it. (Or, rather, a deepcopy of it, assuming we go down that route.) If
__signature__ isn't set, signature() computes it and returns that.
(Possibly caching in __signature__, possibly not, possibly caching the
result then returning a deepcopy.)

//arry/

From yselivanov.ml at gmail.com Fri Jun 8 05:01:24 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Thu, 7 Jun 2012 23:01:24 -0400
Subject: [Python-Dev] PEP 362 Second Revision
In-Reply-To: <20120608004001.384BC2500AD@webabinitio.net>
References: <2E0F03CF-4F50-4080-AFCE-45C9E95E915B@gmail.com> <4FD11FAA.2040308@udel.edu> <20120608004001.384BC2500AD@webabinitio.net>
Message-ID: <9EC4398F-2D9D-4CD2-8554-A8B2826DA6E5@gmail.com>

On 2012-06-07, at 8:40 PM, R. David Murray wrote:
> IMO the __repr__ should make it clear that it is a signature object
> somehow.

+1.

- Yury

From ncoghlan at gmail.com Fri Jun 8 05:25:39 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Fri, 8 Jun 2012 13:25:39 +1000
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To: <4FD1610E.80403@hastings.org>
References: <4FCF7965.5070102@pearwood.info> <94F6B981-560F-45FE-95DD-4C8E2D49075A@gmail.com> <4FD0B6DB.4010304@hastings.org> <4FD0F449.1080309@hastings.org> <4FD15E89.2090801@pearwood.info> <4FD1610E.80403@hastings.org>
Message-ID: 

On Fri, Jun 8, 2012 at 12:18 PM, Larry Hastings wrote:
> On 06/07/2012 07:08 PM, Steven D'Aprano wrote:
>
> Perhaps func.__signature__ should be computed the first time it is
> accessed?
>
>
> The PEP already declares that signatures are lazily generated. signature()
> checks to see if __signature__ is set, and if it is returns it. (Or,
> rather, a deepcopy of it, assuming we go down that route.) If __signature__
> isn't set, signature() computes it and returns that. (Possibly caching in
> __signature__, possibly not, possibly caching the result then returning a
> deepcopy.)

The need for a consistent return value ownership guarantee (i.e. "it's
OK to mutate the return value of inspect.signature()") and the implicit
inspect module guarantee of "introspection is non-intrusive and doesn't
affect the internal state of inspected objects" are the main reasons I
think implicit caching is a bad idea.

However, there are also a couple of pragmatic reasons I don't like the
idea:

1. It's unlikely there will be a huge performance difference between
deep copying an existing Signature object and deriving a new one

2.
If someone actually *wants* cached signature lookup for speed reasons
and is willing to sacrifice some memory in order to do so (e.g. when
using signatures for early parameter checking), they can just do:

    cached_signature = functools.lru_cache(maxsize=None)(inspect.signature)

This approach will provide *genuine* caching, with no deep copying
involved when the same callable is encountered a second time. You will
also get all the benefits that the LRU caching mechanism provides,
rather than having a hidden unbounded caching mechanism that potentially
encompasses every function in the application.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
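To make that opt-in pattern concrete, a minimal sketch (assuming the
PEP's inspect.signature() is available as described; the function f is
purely illustrative):

    import functools
    import inspect

    # Genuine caching: the same callable yields the very same Signature
    # object, with lru_cache's usual controls for bounding and clearing.
    cached_signature = functools.lru_cache(maxsize=None)(inspect.signature)

    def f(a, b=1, *args, **kwargs):
        pass

    assert cached_signature(f) is cached_signature(f)  # cache hit, no copy
    cached_signature.cache_clear()  # explicit, centralised invalidation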
From yselivanov.ml at gmail.com  Fri Jun  8 06:04:19 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Fri, 8 Jun 2012 00:04:19 -0400
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To:
References: <4FCF7965.5070102@pearwood.info> <94F6B981-560F-45FE-95DD-4C8E2D49075A@gmail.com> <4FD0B6DB.4010304@hastings.org> <4FD0F449.1080309@hastings.org> <4FD15E89.2090801@pearwood.info> <4FD1610E.80403@hastings.org>
Message-ID: <0939DC35-AFBB-4C0E-8389-29186DF49937@gmail.com>

Nick,

I'm replying to your email (re 'functools.partial') in python-ideas here,
in the PEP 362 thread, as my response raises some questions regarding its
design.

> On 2012-06-07, at 11:40 PM, Nick Coghlan wrote:
>> On Fri, Jun 8, 2012 at 12:57 PM, Yury Selivanov wrote:
>>> Hello,
>>>
>>> While I was working on adding support for 'functools.partial' in PEP 362,
>>> I discovered that it doesn't do any sanity check on passed arguments
>>> upon creation.
>>>
>>> Example:
>>>
>>>     def foo(a):
>>>         pass
>>>
>>>     p = partial(foo, 1, 2, 3) # this line will execute
>>>
>>>     p() # this line will fail
>>>
>>> Is it a bug? Or is it a feature, because we deliberately don't do any
>>> checks because of performance issues? If the latter - I think it should
>>> be at least documented.
>>
>> Partly the latter, but also a matter of "this is hard to do, so we
>> don't even try". There are many other "lazy execution" APIs with the
>> same problem - they accept an arbitrary underlying callable, but you
>> don't find out until you try to call it that the arguments don't match
>> the parameters. This leads to errors being raised far away from the
>> code that actually introduced the error.
>>
>> If you dig up some of the older PEP 362 discussions, you'll find that
>> allowing developers to reduce this problem over time is the main
>> reason the Signature.bind() method was added to the PEP. While I
>> wouldn't recommend it for the base partial type, I could easily see
>> someone using PEP 362 to create a "checked partial" that ensures
>> arguments are valid as they get passed in rather than leaving the
>> validation until the call is actually made.

It's not going to be that easy with the current PEP design.

In order to add support for partial, I had to split the implementation
of 'bind' into two functions:

    def _bind(self, args, kwargs, *, partial=False):
        ...

    def bind(self, *args, **kwargs):
        return self._bind(args, kwargs)

The first one, '_bind', does all the hard work. When the 'partial' flag
is False it performs all possible checks. But if it's True, then it
allows you to bind arguments in the same way 'functools.partial' works,
but still with most of the validation.

So:

    def foo(a, b, c):
        pass

    sig = signature(foo)

    sig._bind((1, 2, 3, 4), partial=True) # <- this will fail

    sig._bind((1, 2), partial=True) # <- this is OK

    sig._bind((1, 2), partial=False) # <- this will fail too

But the problem is - '_bind' is an implementation detail.

I'd like to discuss changing PEP 362's 'bind' signature to match the
'_bind' method. This will make the API less nice, but will allow more.

Or, we can add something like 'bind_ex' (in addition to 'bind').

- Yury

From alexandre.zani at gmail.com  Fri Jun  8 06:20:37 2012
From: alexandre.zani at gmail.com (Alexandre Zani)
Date: Thu, 7 Jun 2012 21:20:37 -0700
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To: <0939DC35-AFBB-4C0E-8389-29186DF49937@gmail.com>
References: <4FCF7965.5070102@pearwood.info> <94F6B981-560F-45FE-95DD-4C8E2D49075A@gmail.com> <4FD0B6DB.4010304@hastings.org> <4FD0F449.1080309@hastings.org> <4FD15E89.2090801@pearwood.info> <4FD1610E.80403@hastings.org> <0939DC35-AFBB-4C0E-8389-29186DF49937@gmail.com>
Message-ID:

A comment on the way methods are handled. I have seen decorators that
do something like this:

import functools

def dec(f):
    @functools.wraps(f)
    def decorated(*args, **kwargs):
        cursor = databaseCursor()
        return f(cursor, *args, **kwargs)
    return decorated

As a result, the decorated function has to be written something like
this:

class SomeClass(object):
    @dec
    def func(cursor, self, whatever):
        ...

Perhaps the decorator should be smarter about this and detect the fact
that it's dealing with a method, but right now the Signature object
would drop the first argument (cursor), which doesn't seem right.
Perhaps the decorator should set __signature__. I'm not sure.

On Thu, Jun 7, 2012 at 9:04 PM, Yury Selivanov wrote:
> Nick,
>
> I'm replying to your email (re 'functools.partial') in python-ideas here,
> in the PEP 362 thread, as my response raises some questions regarding its
> design.
>
>> On 2012-06-07, at 11:40 PM, Nick Coghlan wrote:
>>> On Fri, Jun 8, 2012 at 12:57 PM, Yury Selivanov wrote:
>>>> Hello,
>>>>
>>>> While I was working on adding support for 'functools.partial' in PEP 362,
>>>> I discovered that it doesn't do any sanity check on passed arguments
>>>> upon creation.
>>>>
>>>> Example:
>>>>
>>>>     def foo(a):
>>>>         pass
>>>>
>>>>     p = partial(foo, 1, 2, 3) # this line will execute
>>>>
>>>>     p() # this line will fail
>>>>
>>>> Is it a bug? Or is it a feature, because we deliberately don't do any
>>>> checks because of performance issues? If the latter - I think it should
>>>> be at least documented.
>>>
>>> Partly the latter, but also a matter of "this is hard to do, so we
>>> don't even try". There are many other "lazy execution" APIs with the
>>> same problem - they accept an arbitrary underlying callable, but you
>>> don't find out until you try to call it that the arguments don't match
>>> the parameters. This leads to errors being raised far away from the
>>> code that actually introduced the error.
>>>
>>> If you dig up some of the older PEP 362 discussions, you'll find that
>>> allowing developers to reduce this problem over time is the main
>>> reason the Signature.bind() method was added to the PEP. While I
>>> wouldn't recommend it for the base partial type, I could easily see
>>> someone using PEP 362 to create a "checked partial" that ensures
>>> arguments are valid as they get passed in rather than leaving the
>>> validation until the call is actually made.
>
> It's not going to be that easy with the current PEP design.
>
> In order to add support for partial, I had to split the implementation
> of 'bind' into two functions:
>
>     def _bind(self, args, kwargs, *, partial=False):
>         ...
>
>     def bind(self, *args, **kwargs):
>         return self._bind(args, kwargs)
>
> The first one, '_bind', does all the hard work. When the 'partial' flag
> is False it performs all possible checks. But if it's True, then it
> allows you to bind arguments in the same way 'functools.partial' works,
> but still with most of the validation.
>
> So:
>
>     def foo(a, b, c):
>         pass
>
>     sig = signature(foo)
>
>     sig._bind((1, 2, 3, 4), partial=True) # <- this will fail
>
>     sig._bind((1, 2), partial=True) # <- this is OK
>
>     sig._bind((1, 2), partial=False) # <- this will fail too
>
> But the problem is - '_bind' is an implementation detail.
>
> I'd like to discuss changing PEP 362's 'bind' signature to match the
> '_bind' method. This will make the API less nice, but will allow more.
>
> Or, we can add something like 'bind_ex' (in addition to 'bind').
>
> - Yury
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/alexandre.zani%40gmail.com

From ncoghlan at gmail.com  Fri Jun  8 06:39:14 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Fri, 8 Jun 2012 14:39:14 +1000
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To: <0939DC35-AFBB-4C0E-8389-29186DF49937@gmail.com>
References: <4FCF7965.5070102@pearwood.info> <94F6B981-560F-45FE-95DD-4C8E2D49075A@gmail.com> <4FD0B6DB.4010304@hastings.org> <4FD0F449.1080309@hastings.org> <4FD15E89.2090801@pearwood.info> <4FD1610E.80403@hastings.org> <0939DC35-AFBB-4C0E-8389-29186DF49937@gmail.com>
Message-ID:

On Fri, Jun 8, 2012 at 2:04 PM, Yury Selivanov wrote:
>>> If you dig up some of the older PEP 362 discussions, you'll find that
>>> allowing developers to reduce this problem over time is the main
>>> reason the Signature.bind() method was added to the PEP. While I
>>> wouldn't recommend it for the base partial type, I could easily see
>>> someone using PEP 362 to create a "checked partial" that ensures
>>> arguments are valid as they get passed in rather than leaving the
>>> validation until the call is actually made.
>
> It's not going to be that easy with the current PEP design.
>
> In order to add support for partial, I had to split the implementation
> of 'bind' into two functions:
>
>     def _bind(self, args, kwargs, *, partial=False):
>         ...
>
>     def bind(self, *args, **kwargs):
>         return self._bind(args, kwargs)
>
> The first one, '_bind', does all the hard work. When the 'partial' flag
> is False it performs all possible checks. But if it's True, then it
> allows you to bind arguments in the same way 'functools.partial' works,
> but still with most of the validation.

I would keep _bind() as an implementation detail (because the signature
is ugly), and expose the two behaviours as bind() and bind_partial()
(with the cleaner existing signature).

I thought about suggesting that bind_partial() return a (Signature,
BoundArguments) tuple, but realised that doesn't make sense, since there
are at least two different ways to use bind_partial():

1. to provide or change the default arguments for one or more parameters
2.
to remove one or more parameters from the signature (forcing them to
particular values)

Rather than trying to guess specific details on how it could be used a
priori, it makes more sense to just leave it up to the caller to decide
how they want to update the Signature object. If additional convenience
methods seem like a good idea in the future, they can either be provided
in a third party enhanced signature manipulation library, and/or added
in 3.4.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From ncoghlan at gmail.com  Fri Jun  8 06:41:13 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Fri, 8 Jun 2012 14:41:13 +1000
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To:
References: <4FCF7965.5070102@pearwood.info> <94F6B981-560F-45FE-95DD-4C8E2D49075A@gmail.com> <4FD0B6DB.4010304@hastings.org> <4FD0F449.1080309@hastings.org> <4FD15E89.2090801@pearwood.info> <4FD1610E.80403@hastings.org> <0939DC35-AFBB-4C0E-8389-29186DF49937@gmail.com>
Message-ID:

On Fri, Jun 8, 2012 at 2:20 PM, Alexandre Zani wrote:
> A comment on the way methods are handled. I have seen decorators that
> do something like this:
>
> import functools
>
> def dec(f):
>     @functools.wraps(f)
>     def decorated(*args, **kwargs):
>         cursor = databaseCursor()
>         return f(cursor, *args, **kwargs)
>     return decorated
>
> As a result, the decorated function has to be written something like
> this:
>
> class SomeClass(object):
>     @dec
>     def func(cursor, self, whatever):
>         ...
>
> Perhaps the decorator should be smarter about this and detect the fact
> that it's dealing with a method, but right now the Signature object
> would drop the first argument (cursor), which doesn't seem right.
> Perhaps the decorator should set __signature__. I'm not sure.

The decorator should set __signature__, since the API of the
underlying function does not match the public API. I posted an example
earlier in the thread on how to do that correctly.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
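One way to do that, sketched under the assumption that the PEP's
Signature.replace() API is available (databaseCursor() stands in for
whatever resource factory the decorator actually uses):

    import functools
    import inspect

    def dec(f):
        @functools.wraps(f)
        def decorated(*args, **kwargs):
            cursor = databaseCursor()  # hypothetical resource factory
            return f(cursor, *args, **kwargs)
        # Publish the public API: the injected 'cursor' parameter is an
        # implementation detail, so drop it from the reported signature.
        sig = inspect.signature(f)
        decorated.__signature__ = sig.replace(
            parameters=list(sig.parameters.values())[1:])
        return decorated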
From alexandre.zani at gmail.com  Fri Jun  8 06:56:23 2012
From: alexandre.zani at gmail.com (Alexandre Zani)
Date: Thu, 7 Jun 2012 21:56:23 -0700
Subject: [Python-Dev] Updated PEP 362 (Function Signature Object)
In-Reply-To:
References: <4FCF7965.5070102@pearwood.info> <94F6B981-560F-45FE-95DD-4C8E2D49075A@gmail.com> <4FD0B6DB.4010304@hastings.org> <4FD0F449.1080309@hastings.org> <4FD15E89.2090801@pearwood.info> <4FD1610E.80403@hastings.org> <0939DC35-AFBB-4C0E-8389-29186DF49937@gmail.com>
Message-ID:

On Thu, Jun 7, 2012 at 9:41 PM, Nick Coghlan wrote:
> On Fri, Jun 8, 2012 at 2:20 PM, Alexandre Zani wrote:
>> A comment on the way methods are handled. I have seen decorators that
>> do something like this:
>>
>> import functools
>>
>> def dec(f):
>>     @functools.wraps(f)
>>     def decorated(*args, **kwargs):
>>         cursor = databaseCursor()
>>         return f(cursor, *args, **kwargs)
>>     return decorated
>>
>> As a result, the decorated function has to be written something like
>> this:
>>
>> class SomeClass(object):
>>     @dec
>>     def func(cursor, self, whatever):
>>         ...
>>
>> Perhaps the decorator should be smarter about this and detect the fact
>> that it's dealing with a method, but right now the Signature object
>> would drop the first argument (cursor), which doesn't seem right.
>> Perhaps the decorator should set __signature__. I'm not sure.
>
> The decorator should set __signature__, since the API of the
> underlying function does not match the public API. I posted an example
> earlier in the thread on how to do that correctly.

OK, makes sense.

> Cheers,
> Nick.
>
> --
> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From stephen at xemacs.org  Fri Jun  8 09:29:29 2012
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Fri, 08 Jun 2012 16:29:29 +0900
Subject: [Python-Dev] [Python-checkins] cpython (3.2): Nudge readers towards a more accurate mental model for loop else clauses
In-Reply-To:
References: <4FD0EE2F.3050808@udel.edu>
Message-ID: <87y5ny8dnq.fsf@uwakimon.sk.tsukuba.ac.jp>

Note: reply-to set to python-ideas.

Nick Coghlan writes:

 > The inaccuracies in the analogy are why this is in the tutorial, not
 > the language reference. All 3 else clauses are really their own thing.

Nick, for the purpose of the tutorial, actually there are 4 else
clauses: you need to distinguish *while* from *for*. It was much easier
for me to get confused about *for*. I think Terry is on to something
here:

 > > "An else clause used with a loop has two differences from an else
 > > clause used with an if statement. [...] [The] condition is
 > > tested repeatedly instead of just once. A loop else is the same
 > > in that it triggers when that one condition is false."

I think it would be a good idea to use an example where the loop is a
*while* loop, not a *for* loop. In the case of the *for* loop, it's easy
to confuse the implicit test used (ie, "item is true") with a test on
the iterable ("iterable is not empty"). With a *while* loop, that's not
true. I don't have trouble understanding that the while suite is
executed "while" the condition is true:

>>> for n in range(2, 10):
...     x = 2
...     while x < n:
...         if n % x == 0:
...             print(n, 'equals', x, '*', n//x)
...             break
...         x = x + 1
...     else:
...         # loop fell through without finding a factor
...         print(n, 'is a prime number')
...

Of course it's useful to point out here that use of a while loop is bad
style, and the "for x in range(2,n)" version is preferred, having
*exactly the same semantics* for the *else* clause.

Also, I find rewriting the if/else as

    if condition:
        do_when_true()
    else:
        do_when_false()

as

    while condition:
        do_when_true()
        break    # do only once, and skip else!
    else:
        do_when_false()

to be quite mnemonic for understanding what the while/else construct
really does.

 > > [The loop *else*] is subordinate to just one condition instead of
 > > possibly many. (In for statements, the condition is implicit but
 > > still there.)

FWIW, this issue doesn't affect my understanding, as I think of
if/elif/else as "indentation-optimized tail-nested" ifs anyway. I'm not
sure if anybody else will feel as I do, so I don't offer a patch here.

From fijall at gmail.com  Fri Jun  8 11:50:34 2012
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Fri, 8 Jun 2012 11:50:34 +0200
Subject: [Python-Dev] PyPy 1.9 - Yard Wolf
Message-ID:

====================
PyPy 1.9 - Yard Wolf
====================

We're pleased to announce the 1.9 release of PyPy. This release brings
mostly bugfixes, performance improvements, other small improvements and
overall progress on the `numpypy`_ effort. It also brings an improved
situation on Windows and OS X.

You can download the PyPy 1.9 release here:

    http://pypy.org/download.html

.. _`numpypy`: http://pypy.org/numpydonate.html


What is PyPy?
=============

PyPy is a very compliant Python interpreter, almost a drop-in
replacement for CPython 2.7. It's fast (`pypy 1.9 and cpython 2.7.2`_
performance comparison) due to its integrated tracing JIT compiler.
This release supports x86 machines running Linux 32/64, Mac OS X 64 or
Windows 32. Windows 64 work is still stalling; we would welcome a
volunteer to handle that.

.. _`pypy 1.9 and cpython 2.7.2`: http://speed.pypy.org


Thanks to our donors
====================

But first of all, we would like to say thank you to all people who
donated some money to one of our four calls:

* `NumPy in PyPy`_ (got so far $44502 out of $60000, 74%)

* `Py3k (Python 3)`_ (got so far $43563 out of $105000, 41%)

* `Software Transactional Memory`_ (got so far $21791 of $50400, 43%)

* as well as our general PyPy pot.

Thank you all for proving that it is indeed possible for a small team
of programmers to get funded like that, at least for some time. We want
to include this thank you in the present release announcement even
though most of the work is not finished yet. More precisely, neither
Py3k nor STM are ready to make it in an official release yet: people
interested in them need to grab and (attempt to) translate PyPy from
the corresponding branches (respectively ``py3k`` and ``stm-thread``).

.. _`NumPy in PyPy`: http://pypy.org/numpydonate.html
.. _`Py3k (Python 3)`: http://pypy.org/py3donate.html
.. _`Software Transactional Memory`: http://pypy.org/tmdonate.html


Highlights
==========

* This release still implements Python 2.7.2.

* Many bugs were corrected for Windows 32 bit. This includes new
  functionality to test the validity of file descriptors, and correct
  handling of the calling conventions for ctypes. (Still not much
  progress on Win64.) A lot of work on this has been done by Matti
  Picus and Amaury Forgeot d'Arc.

* Improvements in ``cpyext``, our emulator for CPython C extension
  modules. For example PyOpenSSL should now work. We thank various
  people for help.

* Sets now have strategies just like dictionaries. This means for
  example that a set containing only ints will be more compact (and
  faster).

* A lot of progress on various aspects of ``numpypy``. See the
  `numpy-status`_ page for the automatic report.

* It is now possible to create and manipulate C-like structures using
  the PyPy-only ``_ffi`` module. The advantage over using e.g.
  ``ctypes`` is that ``_ffi`` is very JIT-friendly, and getting/setting
  of fields is translated to few assembler instructions by the JIT.
  However, this is mostly intended as a low-level backend to be used by
  more user-friendly FFI packages, and the API might change in the
  future. Use it at your own risk.

* The non-x86 backends for the JIT are progressing but are still not
  merged (ARMv7 and PPC64).

* JIT hooks for inspecting the created assembler code have been
  improved. See `JIT hooks documentation`_ for details.

* ``select.kqueue`` has been added (BSD).

* Handling of keyword arguments has been drastically improved in the
  best-case scenario: proxy functions which simply forward ``*args``
  and ``**kwargs`` to another function now perform much better with
  the JIT.

* List comprehensions have been improved.

.. _`numpy-status`: http://buildbot.pypy.org/numpy-status/latest.html
.. _`JIT hooks documentation`: http://doc.pypy.org/en/latest/jit-hooks.html


JitViewer
=========

There will be a corresponding 1.9 release of JitViewer which is
guaranteed to work with PyPy 1.9. See the `JitViewer docs`_ for details.

.. _`JitViewer docs`: http://bitbucket.org/pypy/jitviewer

Cheers,
The PyPy Team
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From tseaver at palladion.com  Fri Jun  8 13:20:55 2012
From: tseaver at palladion.com (Tres Seaver)
Date: Fri, 08 Jun 2012 07:20:55 -0400
Subject: [Python-Dev] [Python-checkins] cpython (3.2): #14957: clarify splitlines docs.
In-Reply-To: <20120607125558.41CD92500AD@webabinitio.net>
References: <20120607125558.41CD92500AD@webabinitio.net>
Message-ID:

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On 06/07/2012 08:55 AM, R. David Murray wrote:
> On Thu, 07 Jun 2012 11:08:09 +0100, Sam Partington wrote:
>> Wouldn't that be better written as a doctest and so avoid any other
>> typos?
>
> Possibly, except (1) I don't think we currently actually test the
> doctests in the python docs and

FWIW, I've had a lot of success lately with automating testing of
doctest snippets in Sphinx docs via::

    $ /path/to/sphinx-build -b doctest \
        -d docs/_build/doctrees docs docs/_build/doctest

(That was from non-Python-core packages where the convention is that
the Sphinx docs are managed and built in the 'docs' subdirectory).

Tres.
- --
===================================================================
Tres Seaver          +1 540-429-0999          tseaver at palladion.com
Palladion Software   "Excellence by Design"    http://palladion.com
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.11 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAk/R4BAACgkQ+gerLs4ltQ426ACgzzr3WHWe8q/4QCdFJgOhYirU
9rAAoMcMZJ3ycPa6B0C4jqCihVdVY9f0
=rYxl
-----END PGP SIGNATURE-----

From matti.picus at gmail.com  Fri Jun  8 17:13:54 2012
From: matti.picus at gmail.com (Matti Picus)
Date: Fri, 8 Jun 2012 15:13:54 +0000 (UTC)
Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython
Message-ID:

The windows port of pypy makes special demands on stdlib, specifically
that files are explicitly closed. There are some other minor issues; in
order to merge all the changes necessary to get pypy windows up to
speed, around 10 modules, or at least their tests, seem to need to be
modified.

I have been doing a bit of work on the stdlib shipped with pypy 1.9
(version 2.7.2 unfortunately) to make it compliant. Assuming there is
interest, what would be the best path to get, for instance, a modified
version of mailbox.py with its tests (test_mailbox.py and
test_old_mailbox.py) backported to cpython?

Matti

PS - I know closing files on delete is also an issue for CPython 3.3;
I did merge as much of 3.3 back into mailbox as I could, but there were
still more issues.

From status at bugs.python.org  Fri Jun  8 18:06:45 2012
From: status at bugs.python.org (Python tracker)
Date: Fri, 8 Jun 2012 18:06:45 +0200 (CEST)
Subject: [Python-Dev] Summary of Python tracker Issues
Message-ID: <20120608160645.073951C869@psf.upfronthosting.co.za>

ACTIVITY SUMMARY (2012-06-01 - 2012-06-08)
Python tracker at http://bugs.python.org/

To view or respond to any of the issues listed below, click on the
issue. Do NOT respond to this message.
Issues counts and deltas: open 3460 (+10) closed 23354 (+46) total 26814 (+56) Open issues with patches: 1460 Issues opened (44) ================== #12988: Tkinter File Dialog crashes on Win7 when saving to Documents L http://bugs.python.org/issue12988 reopened by serwy #14502: Document better what happens on releasing an unacquired lock http://bugs.python.org/issue14502 reopened by petri.lehtinen #14982: pkgutil.walk_packages seems to not work properly on Python 3.3 http://bugs.python.org/issue14982 opened by Marc.Abramowitz #14983: email.generator should always add newlines after closing bound http://bugs.python.org/issue14983 opened by mitya57 #14984: netrc module allows read of non-secured .netrc file http://bugs.python.org/issue14984 opened by bruno.Piguet #14988: _elementtree: Raise ImportError when importing of pyexpat fail http://bugs.python.org/issue14988 opened by Arfrever #14990: detect_encoding should fail with SyntaxError on invalid encodi http://bugs.python.org/issue14990 opened by flox #14991: Option for regex groupdict() to show only matching names http://bugs.python.org/issue14991 opened by rhettinger #14995: PyLong_FromString documentation should state that the string m http://bugs.python.org/issue14995 opened by rfk #14997: IDLE tries to run shell window if line is completed with F5 ra http://bugs.python.org/issue14997 opened by cuulblu #14998: pprint._safe_key is not always safe enough http://bugs.python.org/issue14998 opened by Shawn.Brown #14999: ctypes ArgumentError lists arguments from 1, not 0 http://bugs.python.org/issue14999 opened by perey #15001: segmentation fault with del sys.module['__main__'] http://bugs.python.org/issue15001 opened by amaury.forgeotdarc #15002: urllib2 does not download 4 MB file completely using ftp http://bugs.python.org/issue15002 opened by sspapilin #15003: make PyNamespace_New() public http://bugs.python.org/issue15003 opened by eric.snow #15004: add weakref support to types.SimpleNamespace http://bugs.python.org/issue15004 opened by eric.snow #15005: trace corrupts return result on chained execution http://bugs.python.org/issue15005 opened by techtonik #15006: Allow equality comparison between naive and aware datetime obj http://bugs.python.org/issue15006 opened by belopolsky #15007: Unittest CLI does not support test packages very well http://bugs.python.org/issue15007 opened by r.david.murray #15008: PEP 362 "Signature Objects" reference implementation http://bugs.python.org/issue15008 opened by Yury.Selivanov #15009: urlsplit can't round-trip relative-host urls. http://bugs.python.org/issue15009 opened by Buck.Golemon #15010: unittest: _top_level_dir is incorrectly persisted between call http://bugs.python.org/issue15010 opened by r.david.murray #15011: Change Scripts to bin on Windows http://bugs.python.org/issue15011 opened by brian.curtin #15013: smtplib: add low-level APIs to doc? 
http://bugs.python.org/issue15013 opened by sandro.tosi #15014: smtplib: add support for arbitrary auth methods http://bugs.python.org/issue15014 opened by sandro.tosi #15015: Access to non-existing "future" attribute in error path of fut http://bugs.python.org/issue15015 opened by pieleric #15016: Add special case for latin messages in email.mime.text http://bugs.python.org/issue15016 opened by mitya57 #15018: Incomplete Python LDFLAGS and CPPFLAGS used for extension modu http://bugs.python.org/issue15018 opened by marcusva #15019: Sting termination on Linux http://bugs.python.org/issue15019 opened by jjanis #15020: Poor default value for progname http://bugs.python.org/issue15020 opened by Joshua.Cogliati #15021: xmlrpc server hangs http://bugs.python.org/issue15021 opened by Abhishek.Singh #15022: types.SimpleNamespace needs to be picklable http://bugs.python.org/issue15022 opened by eric.snow #15025: httplib and http.client are missing response messages for defi http://bugs.python.org/issue15025 opened by craigloftus #15026: Faster UTF-16 encoding http://bugs.python.org/issue15026 opened by storchaka #15027: Faster UTF-32 encoding http://bugs.python.org/issue15027 opened by storchaka #15028: PySys_SetArgv escapes quotes in argv[] http://bugs.python.org/issue15028 opened by RMBianchi #15029: Update Defining New Types chapter according to PEP 253 http://bugs.python.org/issue15029 opened by mloskot #15030: PyPycLoader can't read cached .pyc files http://bugs.python.org/issue15030 opened by Ronan.Lamy #15031: Split .pyc parsing from module loading http://bugs.python.org/issue15031 opened by Ronan.Lamy #15032: Provide a select.select implemented using select.poll http://bugs.python.org/issue15032 opened by gregory.p.smith #15033: Different exit status when using -m http://bugs.python.org/issue15033 opened by kisielk #15034: tutorial should use best practices in user defined execeptions http://bugs.python.org/issue15034 opened by r.david.murray #15035: array.array of UCS2 values http://bugs.python.org/issue15035 opened by ronaldoussoren #15036: mailbox.mbox fails to pop two items in a row, flushing in betw http://bugs.python.org/issue15036 opened by petri.lehtinen Most recent 15 issues with no replies (15) ========================================== #15034: tutorial should use best practices in user defined execeptions http://bugs.python.org/issue15034 #15033: Different exit status when using -m http://bugs.python.org/issue15033 #15032: Provide a select.select implemented using select.poll http://bugs.python.org/issue15032 #15027: Faster UTF-32 encoding http://bugs.python.org/issue15027 #15026: Faster UTF-16 encoding http://bugs.python.org/issue15026 #15025: httplib and http.client are missing response messages for defi http://bugs.python.org/issue15025 #15018: Incomplete Python LDFLAGS and CPPFLAGS used for extension modu http://bugs.python.org/issue15018 #15015: Access to non-existing "future" attribute in error path of fut http://bugs.python.org/issue15015 #15010: unittest: _top_level_dir is incorrectly persisted between call http://bugs.python.org/issue15010 #14999: ctypes ArgumentError lists arguments from 1, not 0 http://bugs.python.org/issue14999 #14995: PyLong_FromString documentation should state that the string m http://bugs.python.org/issue14995 #14991: Option for regex groupdict() to show only matching names http://bugs.python.org/issue14991 #14988: _elementtree: Raise ImportError when importing of pyexpat fail http://bugs.python.org/issue14988 #14978: distutils Extension fails to 
be created with unicode package n http://bugs.python.org/issue14978 #14966: Fully document subprocess.CalledProcessError http://bugs.python.org/issue14966 Most recent 15 issues waiting for review (15) ============================================= #15031: Split .pyc parsing from module loading http://bugs.python.org/issue15031 #15030: PyPycLoader can't read cached .pyc files http://bugs.python.org/issue15030 #15027: Faster UTF-32 encoding http://bugs.python.org/issue15027 #15026: Faster UTF-16 encoding http://bugs.python.org/issue15026 #15022: types.SimpleNamespace needs to be picklable http://bugs.python.org/issue15022 #15016: Add special case for latin messages in email.mime.text http://bugs.python.org/issue15016 #15015: Access to non-existing "future" attribute in error path of fut http://bugs.python.org/issue15015 #15011: Change Scripts to bin on Windows http://bugs.python.org/issue15011 #15008: PEP 362 "Signature Objects" reference implementation http://bugs.python.org/issue15008 #15006: Allow equality comparison between naive and aware datetime obj http://bugs.python.org/issue15006 #15004: add weakref support to types.SimpleNamespace http://bugs.python.org/issue15004 #15003: make PyNamespace_New() public http://bugs.python.org/issue15003 #15001: segmentation fault with del sys.module['__main__'] http://bugs.python.org/issue15001 #14998: pprint._safe_key is not always safe enough http://bugs.python.org/issue14998 #14997: IDLE tries to run shell window if line is completed with F5 ra http://bugs.python.org/issue14997 Top 10 most discussed issues (10) ================================= #14814: Implement PEP 3144 (the ipaddress module) http://bugs.python.org/issue14814 14 msgs #14908: datetime.datetime should have a timestamp() method http://bugs.python.org/issue14908 13 msgs #12510: IDLE: calltips mishandle raw strings and other examples http://bugs.python.org/issue12510 12 msgs #14982: pkgutil.walk_packages seems to not work properly on Python 3.3 http://bugs.python.org/issue14982 11 msgs #7559: TestLoader.loadTestsFromName swallows import errors http://bugs.python.org/issue7559 9 msgs #15005: trace corrupts return result on chained execution http://bugs.python.org/issue15005 8 msgs #6721: Locks in python standard library should be sanitized on fork http://bugs.python.org/issue6721 7 msgs #14983: email.generator should always add newlines after closing bound http://bugs.python.org/issue14983 7 msgs #15003: make PyNamespace_New() public http://bugs.python.org/issue15003 7 msgs #14997: IDLE tries to run shell window if line is completed with F5 ra http://bugs.python.org/issue14997 6 msgs Issues closed (41) ================== #1079: decode_header does not follow RFC 2047 http://bugs.python.org/issue1079 closed by r.david.murray #2658: decode_header() fails on multiline headers http://bugs.python.org/issue2658 closed by r.david.murray #6203: locale documentation doesn't mention that LC_CTYPE is changed http://bugs.python.org/issue6203 closed by python-dev #6745: (curses) addstr() takes str in Python 3 http://bugs.python.org/issue6745 closed by haypo #8652: Minor improvements to the "Handling Exceptions" part of the tu http://bugs.python.org/issue8652 closed by r.david.murray #8902: add datetime.time.now() for consistency http://bugs.python.org/issue8902 closed by belopolsky #9967: encoded_word regular expression in email.header.decode_header( http://bugs.python.org/issue9967 closed by r.david.murray #10365: IDLE Crashes on File Open Dialog when code window closed befor 
http://bugs.python.org/issue10365 closed by terry.reedy #10574: email.header.decode_header fails if the string contains multip http://bugs.python.org/issue10574 closed by r.david.murray #11022: locale.getpreferredencoding() must not set temporary LC_CTYPE http://bugs.python.org/issue11022 closed by python-dev #11823: disassembly needs argument counts on calls with keyword args http://bugs.python.org/issue11823 closed by belopolsky #12157: join method of multiprocessing Pool object hangs if iterable a http://bugs.python.org/issue12157 closed by sbt #13854: multiprocessing: SystemExit from child with non-int, non-str a http://bugs.python.org/issue13854 closed by sbt #14006: Improve the documentation of xml.etree.ElementTree http://bugs.python.org/issue14006 closed by eli.bendersky #14078: Add 'sourceline' property to xml.etree Elements http://bugs.python.org/issue14078 closed by eli.bendersky #14424: document PyType_GenericAlloc http://bugs.python.org/issue14424 closed by eli.bendersky #14627: Fatal Python Error when Python startup is interrupted by CTRL+ http://bugs.python.org/issue14627 closed by haypo #14673: add sys.implementation http://bugs.python.org/issue14673 closed by loewis #14711: Remove os.stat_float_times http://bugs.python.org/issue14711 closed by haypo #14712: Integrate PEP 405 http://bugs.python.org/issue14712 closed by vinay.sajip #14852: json and ElementTree parsers misbehave on streams containing m http://bugs.python.org/issue14852 closed by eli.bendersky #14907: SSL module cannot handle unicode filenames http://bugs.python.org/issue14907 closed by loewis #14918: Incorrect TypeError message for wrong function arguments http://bugs.python.org/issue14918 closed by r.david.murray #14926: random.seed docstring needs edit of "*a *is" http://bugs.python.org/issue14926 closed by sandro.tosi #14937: IDLE's deficiency in the completion of file names (Python 32, http://bugs.python.org/issue14937 closed by loewis #14957: Improve docs for str.splitlines http://bugs.python.org/issue14957 closed by r.david.murray #14981: 3.3.0a4 compilation fails von MacOS Lion /10.7.4 http://bugs.python.org/issue14981 closed by hynek #14985: os.path.isfile and os.path.isdir inconsistent on OSX Lion http://bugs.python.org/issue14985 closed by hynek #14986: print() fails on latin1 characters on OSX http://bugs.python.org/issue14986 closed by hynek #14987: inspect missing warnings import http://bugs.python.org/issue14987 closed by brett.cannon #14989: http.server option to run CGIHTTPRequestHandler http://bugs.python.org/issue14989 closed by r.david.murray #14992: os.makedirs expect_ok=True test_os failure when directory has http://bugs.python.org/issue14992 closed by gregory.p.smith #14993: GCC error when using unicodeobject.h http://bugs.python.org/issue14993 closed by haypo #14994: GCC error when using pyerrors.h http://bugs.python.org/issue14994 closed by python-dev #14996: IDLE 3.2.3 crashes saving a .py file to certain folders on Win http://bugs.python.org/issue14996 closed by terry.reedy #15000: _posixsubprocess module broken on x32 http://bugs.python.org/issue15000 closed by gregory.p.smith #15012: test issue http://bugs.python.org/issue15012 closed by brian.curtin #15017: threading.Lock documentation conflict http://bugs.python.org/issue15017 closed by r.david.murray #15023: listextend (and therefore list.extend and list.__init__) peek http://bugs.python.org/issue15023 closed by r.david.murray #15024: Split enhanced assertion support out as a unittest.TestCase ba 
http://bugs.python.org/issue15024 closed by ncoghlan #1467619: Header.decode_header eats up spaces http://bugs.python.org/issue1467619 closed by r.david.murray From jdhardy at gmail.com Fri Jun 8 18:22:29 2012 From: jdhardy at gmail.com (Jeff Hardy) Date: Fri, 8 Jun 2012 09:22:29 -0700 Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython In-Reply-To: References: Message-ID: On Fri, Jun 8, 2012 at 8:13 AM, Matti Picus wrote: > > The windows port of pypy makes special demands on stdlib, specifically that > files are explicitly closed. There are some other minor issues, in order to > merge all the changes necessary to get pypy windows up to speed, around 10 > modules or at ?least their tests seem to need to be modified. > I have been doing a bit of work on the stdlib shipped with pypy 1.9 > (version 2.7.2 unfortunately) to make it compliant. Assuming there is interest, > what would be the best path to get, for instance, a modified version of > mailbox.py with its tests (test_mailbox.py and test_old_mailbox.py) backported > to cpython? These fixes would also be useful for IronPython and possibly Jython as well. Unclosed files are probably the biggest set of failures when running CPython's tests on IronPython (along with assuming that sys.platform == 'win32' means Windows). Whether or not they get backported to CPython, it might be worth finding a way to share the 2.7 stdlib between the alternative implementations (changes to 3.x should go into CPython, obviously). - Jeff From fwierzbicki at gmail.com Fri Jun 8 18:39:47 2012 From: fwierzbicki at gmail.com (fwierzbicki at gmail.com) Date: Fri, 8 Jun 2012 09:39:47 -0700 Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython In-Reply-To: References: Message-ID: On Fri, Jun 8, 2012 at 9:22 AM, Jeff Hardy wrote: > On Fri, Jun 8, 2012 at 8:13 AM, Matti Picus wrote: >> >> The windows port of pypy makes special demands on stdlib, specifically that >> files are explicitly closed. There are some other minor issues, in order to >> merge all the changes necessary to get pypy windows up to speed, around 10 >> modules or at ?least their tests seem to need to be modified. >> I have been doing a bit of work on the stdlib shipped with pypy 1.9 >> (version 2.7.2 unfortunately) to make it compliant. Assuming there is interest, >> what would be the best path to get, for instance, a modified version of >> mailbox.py with its tests (test_mailbox.py and test_old_mailbox.py) backported >> to cpython? > > These fixes would also be useful for IronPython and possibly Jython as > well. Unclosed files are probably the biggest set of failures when > running CPython's tests on IronPython (along with assuming that > sys.platform == 'win32' means Windows). Whether or not they get > backported to CPython, it might be worth finding a way to share the > 2.7 stdlib between the alternative implementations (changes to 3.x > should go into CPython, obviously). I think it's supposed to be alright to push changes to CPython's 2.7 *tests* (like test_mailbox.py) but not other parts of the standard library (like mailbox.py) -- I'd love to find a way to share the modifications from various implementations though -- and in the 3.x future hopefully it can all just end up in CPython's Lib. -Frank From rdmurray at bitdance.com Fri Jun 8 19:08:57 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Fri, 08 Jun 2012 13:08:57 -0400 Subject: [Python-Dev] [Python-checkins] cpython (3.2): #14957: clarify splitlines docs. 
In-Reply-To:
References: <20120607125558.41CD92500AD@webabinitio.net>
Message-ID: <20120608170857.DEF472500AD@webabinitio.net>

On Fri, 08 Jun 2012 07:20:55 -0400, Tres Seaver wrote:
> On 06/07/2012 08:55 AM, R. David Murray wrote:
>> On Thu, 07 Jun 2012 11:08:09 +0100, Sam Partington wrote:
>>> Wouldn't that be better written as a doctest and so avoid any other
>>> typos?
>>
>> Possibly, except (1) I don't think we currently actually test the
>> doctests in the python docs and
>
> FWIW, I've had a lot of success lately with automating testing of
> doctest snippets in Sphinx docs via::

Oh, the *mechanics* of running the doctests in the docs are not
difficult; 'make doctest' in Doc works just fine (on Python2). There
are four issues: (1) we build the python3 docs using python2, so 'make
doctest' on python3 doesn't currently work, and (2) not all the doctest
snippets are valid doctests, (3) not all the code snippets that can
(and should) be validated are recognized as such by 'make doctest', and
(4) there is no buildbot-style automation for running the doc doctests.

(1) is the easiest one to fix.

--David

PS: A year or so ago I cleaned up the doctests for turtle and
multiprocessing, but I haven't re-run those tests since. I just did
now: multiprocessing still passes(*), and there is one failing turtle
test. The grand total on 2.7 is 1131 tests, 78 failures.

(*) If I remember correctly "cleaning up the doctests" in
multiprocessing mostly meant making them not-doctests from 'make
doctest's point of view, and then hand validating them. But
multiprocessing is a bit of a special case.

From rdmurray at bitdance.com  Fri Jun  8 19:45:47 2012
From: rdmurray at bitdance.com (R. David Murray)
Date: Fri, 08 Jun 2012 13:45:47 -0400
Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython
In-Reply-To:
References:
Message-ID: <20120608174548.6AA302500AD@webabinitio.net>

On Fri, 08 Jun 2012 09:39:47 -0700, "fwierzbicki at gmail.com" wrote:
> On Fri, Jun 8, 2012 at 9:22 AM, Jeff Hardy wrote:
> > On Fri, Jun 8, 2012 at 8:13 AM, Matti Picus wrote:
> >>
> >> The windows port of pypy makes special demands on stdlib,
> >> specifically that files are explicitly closed. There are some other
> >> minor issues; in order to merge all the changes necessary to get
> >> pypy windows up to speed, around 10 modules, or at least their
> >> tests, seem to need to be modified.
> >> I have been doing a bit of work on the stdlib shipped with pypy 1.9
> >> (version 2.7.2 unfortunately) to make it compliant. Assuming there
> >> is interest, what would be the best path to get, for instance, a
> >> modified version of mailbox.py with its tests (test_mailbox.py and
> >> test_old_mailbox.py) backported to cpython?
> >
> > These fixes would also be useful for IronPython and possibly Jython
> > as well. Unclosed files are probably the biggest set of failures when
> > running CPython's tests on IronPython (along with assuming that
> > sys.platform == 'win32' means Windows). Whether or not they get
> > backported to CPython, it might be worth finding a way to share the
> > 2.7 stdlib between the alternative implementations (changes to 3.x
> > should go into CPython, obviously).
> I think it's supposed to be alright to push changes to CPython's 2.7
> *tests* (like test_mailbox.py) but not other parts of the standard
> library (like mailbox.py) -- I'd love to find a way to share the
> modifications from various implementations though -- and in the 3.x
> future hopefully it can all just end up in CPython's Lib.
If you can write a test that shows a problem with the code, it doesn't
matter if you found it in a different implementation. As long as the
change conforms to our backward compatibility policy it can be fixed in
2.7.

And yes, I agree with your understanding that fixing tests (especially
for things like resources not getting cleaned up correctly) is
appreciated for the 2.7 tests. We should of course verify whether or
not similar changes are needed for Python3.

As for path to get them in...if you have any question about whether or
not the change is appropriate, post a proposed patch to the tracker.

--David

From brett at python.org  Fri Jun  8 19:59:08 2012
From: brett at python.org (Brett Cannon)
Date: Fri, 8 Jun 2012 13:59:08 -0400
Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython
In-Reply-To:
References:
Message-ID:

On Fri, Jun 8, 2012 at 12:39 PM, fwierzbicki at gmail.com
<fwierzbicki at gmail.com> wrote:
> On Fri, Jun 8, 2012 at 9:22 AM, Jeff Hardy wrote:
> > On Fri, Jun 8, 2012 at 8:13 AM, Matti Picus wrote:
> >>
> >> The windows port of pypy makes special demands on stdlib,
> >> specifically that files are explicitly closed. There are some other
> >> minor issues; in order to merge all the changes necessary to get
> >> pypy windows up to speed, around 10 modules, or at least their
> >> tests, seem to need to be modified.
> >> I have been doing a bit of work on the stdlib shipped with pypy 1.9
> >> (version 2.7.2 unfortunately) to make it compliant. Assuming there
> >> is interest, what would be the best path to get, for instance, a
> >> modified version of mailbox.py with its tests (test_mailbox.py and
> >> test_old_mailbox.py) backported to cpython?
> >
> > These fixes would also be useful for IronPython and possibly Jython
> > as well. Unclosed files are probably the biggest set of failures when
> > running CPython's tests on IronPython (along with assuming that
> > sys.platform == 'win32' means Windows). Whether or not they get
> > backported to CPython, it might be worth finding a way to share the
> > 2.7 stdlib between the alternative implementations (changes to 3.x
> > should go into CPython, obviously).
> I think it's supposed to be alright to push changes to CPython's 2.7
> *tests* (like test_mailbox.py) but not other parts of the standard
> library (like mailbox.py)

R. David already replied to this, but just to reiterate: tests can
always get updated, and code that fixes a bug (and leaving a file open
can be considered a bug) can also go in. It's just stuff like code
refactoring, speed improvements, etc. that can't go into Python 2.7 at
this point.

> -- I'd love to find a way to share the
> modifications from various implementations though -- and in the 3.x
> future hopefully it can all just end up in CPython's Lib.

If/until the stdlib is made into its own repo, should the various VMs
consider keeping a common Python 2.7 repo that contains nothing but the
stdlib (or at least just modifications to those) so they can modify in
ways that CPython can't accept because of compatibility policy? You
could keep it on hg.python.org (or wherever) and then all push to it.
This might also be a good way to share Python implementations of
extension modules for Python 2.7 instead of everyone maintaining their
own for the next few years (although I think those modules should go
into the stdlib directly for Python 3 as well).
Basically this could be a test to see if communication and
collaboration will be high enough among the other VMs to bother with
breaking out the actual stdlib into its own repo or if it would just be
a big waste of time.

P.S. Do we need a python-implementations mailing list or something for
discussing overall VM-related stuff among all VMs instead of always
bringing this up on python-dev? E.g. I wish I had a place where I could
get all the VM stakeholders' attention to make sure that importlib as
it stands in Python 3.3 will skip trying to import Python bytecode
properly (or if the VMs will simply provide their own setup function
and that won't be a worry). And I would have no problem with keeping it
like python-committers in terms of closed subscriptions, open archive
in order to keep the noise low.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From fwierzbicki at gmail.com  Fri Jun  8 20:21:39 2012
From: fwierzbicki at gmail.com (fwierzbicki at gmail.com)
Date: Fri, 8 Jun 2012 11:21:39 -0700
Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython
In-Reply-To:
References:
Message-ID:

On Fri, Jun 8, 2012 at 10:59 AM, Brett Cannon wrote:
> R. David already replied to this, but just to reiterate: tests can
> always get updated, and code that fixes a bug (and leaving a file open
> can be considered a bug) can also go in. It's just stuff like code
> refactoring, speed improvements, etc. that can't go into Python 2.7 at
> this point.
Thanks for the clarification!

> If/until the stdlib is made into its own repo, should the various VMs
> consider keeping a common Python 2.7 repo that contains nothing but
> the stdlib (or at least just modifications to those) so they can
> modify in ways that CPython can't accept because of compatibility
> policy? You could keep it on hg.python.org (or wherever) and then all
> push to it. This might also be a good way to share Python
> implementations of extension modules for Python 2.7 instead of
> everyone maintaining their own for the next few years (although I
> think those modules should go into the stdlib directly for Python 3 as
> well). Basically this could be a test to see if communication and
> collaboration will be high enough among the other VMs to bother with
> breaking out the actual stdlib into its own repo or if it would just
> be a big waste of time.
I'd be up for trying this. I don't think it's easy to fork a
subdirectory of CPython though - right now I just keep an unchanged
copy of the 2.7 Lib in our repo (PyPy does the same, at least the last
time I checked).

> P.S. Do we need a python-implementations mailing list or something for
> discussing overall VM-related stuff among all VMs instead of always
> bringing this up on python-dev? E.g. I wish I had a place where I
> could get all the VM stakeholders' attention to make sure that
> importlib as it stands in Python 3.3 will skip trying to import Python
> bytecode properly (or if the VMs will simply provide their own setup
> function and that won't be a worry). And I would have no problem with
> keeping it like python-committers in terms of closed subscriptions,
> open archive in order to keep the noise low.
I think a python-implementations list would be a fantastic idea - I
sometimes miss multi-implementation discussions in python-dev, or at
least come in very late.
-Frank

From brett at python.org  Fri Jun  8 20:29:08 2012
From: brett at python.org (Brett Cannon)
Date: Fri, 8 Jun 2012 14:29:08 -0400
Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython
In-Reply-To:
References:
Message-ID:

On Fri, Jun 8, 2012 at 2:21 PM, fwierzbicki at gmail.com wrote:
> On Fri, Jun 8, 2012 at 10:59 AM, Brett Cannon wrote:
> > R. David already replied to this, but just to reiterate: tests can
> > always get updated, and code that fixes a bug (and leaving a file
> > open can be considered a bug) can also go in. It's just stuff like
> > code refactoring, speed improvements, etc. that can't go into Python
> > 2.7 at this point.
> Thanks for the clarification!
>
> > If/until the stdlib is made into its own repo, should the various
> > VMs consider keeping a common Python 2.7 repo that contains nothing
> > but the stdlib (or at least just modifications to those) so they can
> > modify in ways that CPython can't accept because of compatibility
> > policy? You could keep it on hg.python.org (or wherever) and then
> > all push to it. This might also be a good way to share Python
> > implementations of extension modules for Python 2.7 instead of
> > everyone maintaining their own for the next few years (although I
> > think those modules should go into the stdlib directly for Python 3
> > as well). Basically this could be a test to see if communication and
> > collaboration will be high enough among the other VMs to bother with
> > breaking out the actual stdlib into its own repo or if it would just
> > be a big waste of time.
> I'd be up for trying this. I don't think it's easy to fork a
> subdirectory of CPython though - right now I just keep an unchanged
> copy of the 2.7 Lib in our repo (PyPy does the same, at least the last
> time I checked).

Looks like hg doesn't have support yet:
http://stackoverflow.com/questions/920355/how-do-i-clone-a-sub-folder-of-a-repository-in-mercurial .
The only sane option I can see then is to keep doing what you and PyPy
are doing and keep a copy of the stdlib, but now you all simply share
the repo instead of keeping your own copies and possibly use subrepos
to pull it into your own hg repos.

> > P.S. Do we need a python-implementations mailing list or something
> > for discussing overall VM-related stuff among all VMs instead of
> > always bringing this up on python-dev? E.g. I wish I had a place
> > where I could get all the VM stakeholders' attention to make sure
> > that importlib as it stands in Python 3.3 will skip trying to import
> > Python bytecode properly (or if the VMs will simply provide their
> > own setup function and that won't be a worry). And I would have no
> > problem with keeping it like python-committers in terms of closed
> > subscriptions, open archive in order to keep the noise low.
> I think a python-implementations list would be a fantastic idea - I
> sometimes miss multi-implementation discussions in python-dev, or at
> least come in very late.

If other people agree then I will get Barry to create it.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ericsnowcurrently at gmail.com  Fri Jun  8 20:57:30 2012
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Fri, 8 Jun 2012 12:57:30 -0600
Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython
In-Reply-To:
References:
Message-ID:

On Fri, Jun 8, 2012 at 12:29 PM, Brett Cannon wrote:
> On Fri, Jun 8, 2012 at 2:21 PM, fwierzbicki at gmail.com wrote:
>> I think a python-implementations list would be a fantastic idea - I
>> sometimes miss multi-implementation discussions in python-dev, or at
>> least come in very late.
>
> If other people agree then I will get Barry to create it.

+1

This would have been handy for feedback on sys.implementation.

-eric

From fwierzbicki at gmail.com  Fri Jun  8 20:59:31 2012
From: fwierzbicki at gmail.com (fwierzbicki at gmail.com)
Date: Fri, 8 Jun 2012 11:59:31 -0700
Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython
In-Reply-To:
References:
Message-ID:

On Fri, Jun 8, 2012 at 11:57 AM, Eric Snow wrote:
> This would have been handy for feedback on sys.implementation.
FWIW I followed the discussion and am happy with the result :)

-Frank

From meadori at gmail.com  Fri Jun  8 21:01:49 2012
From: meadori at gmail.com (Meador Inge)
Date: Fri, 8 Jun 2012 14:01:49 -0500
Subject: [Python-Dev] VS 11 Express is Metro only.
In-Reply-To: <20120525140622.Horde.U3HcfruWis5Pv3W_5yzihNA@webmail.df.eu>
References: <20120525003647.Horde.IQhrGKGZi1VPvrf-vtuimuA@webmail.df.eu> <20120525140622.Horde.U3HcfruWis5Pv3W_5yzihNA@webmail.df.eu>
Message-ID:

On Fri, May 25, 2012 at 7:06 AM, wrote:
> I hereby predict that Microsoft will revert this decision, and that VS
> Express 11 will be able to build CPython.

And your prediction was right on :-) :
http://blogs.msdn.com/b/visualstudio/archive/2012/06/08/visual-studio-express-2012-for-windows-desktop.aspx.

--
# Meador

From ericsnowcurrently at gmail.com  Fri Jun  8 21:03:18 2012
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Fri, 8 Jun 2012 13:03:18 -0600
Subject: [Python-Dev] [Python-checkins] cpython (3.2): #14957: clarify splitlines docs.
In-Reply-To: <20120608170857.DEF472500AD@webabinitio.net>
References: <20120607125558.41CD92500AD@webabinitio.net> <20120608170857.DEF472500AD@webabinitio.net>
Message-ID:

On Fri, Jun 8, 2012 at 11:08 AM, R. David Murray wrote:
> There are four issues: (1) we build the python3 docs using python2, so
> 'make doctest' on python3 doesn't currently work

For reference: http://bugs.python.org/issue10224.

Are there any others?

-eric

From solipsis at pitrou.net  Fri Jun  8 22:45:46 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Fri, 08 Jun 2012 22:45:46 +0200
Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython
In-Reply-To:
References:
Message-ID:

On 08/06/2012 20:29, Brett Cannon wrote:
> P.S. Do we need a python-implementations mailing list or something for
> discussing overall VM-related stuff among all VMs instead of always
> bringing this up on python-dev? E.g. I wish I had a place where I
> could get all the VM stakeholders' attention to make sure that
> importlib as it stands in Python 3.3 will skip trying to import Python
> bytecode properly (or if the VMs will simply provide their own setup
> function and that won't be a worry). And I would have no problem with
> keeping it like python-committers in terms of closed subscriptions,
> open archive in order to keep the noise low.
> I think a python-implementations list would be a fantastic idea - I > sometimes miss multi-implementation discussions in python-dev, or at > least come in very late. > > > If other people agree then I will get Barry to create it. Well, the question is, are many python-dev discussions CPython(specific? If not, then it doesn't make a lot of sense to create python-implementations (and it's one more subscription to manage for those of us who want to keep an eye on all core development-related discussions). Regards Antoine. From alexander.belopolsky at gmail.com Fri Jun 8 23:08:46 2012 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Fri, 8 Jun 2012 17:08:46 -0400 Subject: [Python-Dev] TZ-aware local time In-Reply-To: References: Message-ID: On Tue, Jun 5, 2012 at 6:07 PM, Guido van Rossum wrote: >> See http://bugs.python.org/issue9527 . > With datetime.timestamp() method committed, I would like to get back to this issue. In some sense, an inverse of datetime.timestamp() is missing from the datetime module. Given a POSIX timestamp, I cannot get the local time as an aware datetime object. > Reading the requirements for a timezone implementation and the > time.localtime() function, it would seem that a timezone object > representing "local time" can certainly be constructed, as long as the > time module uses or emulates the Unix/C behavior. A tzinfo subclass emulating local time rules introduces the DST ambiguity to a problem that does not inherently suffer from it. See . A typical application is an e-mail agent that has to insert local time complete with UTC offset in the message header. The time.localtime() method will produce local time components together with the dst flag from which time.strftime() can produce RFC 3339 timestamp using "%z" directive. There is no ambiguity during DST transition. The only complication is that time component TZ components exhibit canceling discontinuities at those times. For example, >>> t = mktime((2010, 11, 7, 1, 0, 0, -1, -1, 0)) >>> for i in range(5): ... print(strftime("%T%z", localtime(t + i - 2))) ... 01:59:58-0400 01:59:59-0400 01:00:00-0500 01:00:01-0500 01:00:02-0500 As I explained at , it is not possible to reproduce this sequence using LocalTimezone. > I don't like that function. It returns two different timezone objects > depending on whether DST is in effect. Also it adds a new method to > the datetime class, which I think is unnecessary here. We can avoid introducing new methods. We can add aware= flag to datetime.now() and datetime.fromtimestamp(), but it may be better to introduce special values for existing tzinfo= argument instead. For example, we can allow passing an empty string in tzinfo to signify that a timezone instance should be generated filled with appropriate local offset. This choice may seem less of a hack if we introduce some simple TZ parsing and allow string values like "UTC-0500" as valid tzinfo specifications. From ethan at stoneleaf.us Sat Jun 9 00:41:06 2012 From: ethan at stoneleaf.us (Ethan Furman) Date: Fri, 08 Jun 2012 15:41:06 -0700 Subject: [Python-Dev] Import semantics? In-Reply-To: References: <4FD279AF.1010104@stoneleaf.us> Message-ID: <4FD27F82.50801@stoneleaf.us> Dan Stromberg wrote: > On Fri, Jun 8, 2012 at 3:16 PM, Ethan Furman wrote: > Dan Stromberg wrote: >> Did the import semantics change in cpython 3.3a4? >> >> I used to be able to import treap.py even though I had a treap >> directory in my cwd. With 3.3a4, I have to rename the treap >> directory to see treap.py. 
> > Check out PEP 420 -- Implicit Namespace Packages > [http://www.python.org/dev/peps/pep-0420/] > > > Am I misinterpreting this? It seems like according to the PEP, I should > have still been able to import treap.py despite having a treap/. But I > couldn't; I had to rename treap/ to treap-dir/ first. > > During import processing, the import machinery will continue to iterate > over each directory in the parent path as it does in Python 3.2. While > looking for a module or package named "foo", for each directory in the > parent path: > > * If /foo/__init__.py is found, a regular package is > imported and returned. > * If not, but /foo.{py,pyc,so,pyd} is found, a module > is imported and returned. The exact list of extension varies > by platform and whether the -O flag is specified. The list > here is representative. > * If not, but /foo is found and is a directory, it is > recorded and the scan continues with the next directory in the > parent path. > * Otherwise the scan continues with the next directory in the > parent path. I do not understand PEP 420 well enough to say if this is intentional or a bug -- thoughts? ~Ethan~ From eric at trueblade.com Sat Jun 9 00:47:54 2012 From: eric at trueblade.com (Eric V. Smith) Date: Fri, 08 Jun 2012 18:47:54 -0400 Subject: [Python-Dev] Import semantics? In-Reply-To: <4FD27F82.50801@stoneleaf.us> References: <4FD279AF.1010104@stoneleaf.us> <4FD27F82.50801@stoneleaf.us> Message-ID: <4FD2811A.2020108@trueblade.com> On 6/8/2012 6:41 PM, Ethan Furman wrote: > Dan Stromberg wrote: >> On Fri, Jun 8, 2012 at 3:16 PM, Ethan Furman wrote: >> Dan Stromberg wrote: >>> Did the import semantics change in cpython 3.3a4? >>> >>> I used to be able to import treap.py even though I had a treap >>> directory in my cwd. With 3.3a4, I have to rename the treap >>> directory to see treap.py. >> >> Check out PEP 420 -- Implicit Namespace Packages >> [http://www.python.org/dev/peps/pep-0420/] >> >> >> Am I misinterpreting this? It seems like according to the PEP, I >> should have still been able to import treap.py despite having a >> treap/. But I couldn't; I had to rename treap/ to treap-dir/ first. >> >> During import processing, the import machinery will continue to >> iterate over each directory in the parent path as it does in Python >> 3.2. While looking for a module or package named "foo", for each >> directory in the parent path: >> >> * If /foo/__init__.py is found, a regular package is >> imported and returned. >> * If not, but /foo.{py,pyc,so,pyd} is found, a module >> is imported and returned. The exact list of extension varies >> by platform and whether the -O flag is specified. The list >> here is representative. >> * If not, but /foo is found and is a directory, it is >> recorded and the scan continues with the next directory in the >> parent path. >> * Otherwise the scan continues with the next directory in the >> parent path. > > I do not understand PEP 420 well enough to say if this is intentional or > a bug -- thoughts? I missed the beginning of this discussion and I need some more details. What directories are on sys.path, where do treap.py and treap/ appear in them, and is there an __init__.py in treap? At first blush it sounds like it should continue working. If you (Dan?) could re-create this in a small example and open a bug, that would be great. Eric. 
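A minimal reproduction along those lines might look like the sketch below (illustrative only -- the "treap" names simply mirror Dan's report, and the expected output assumes the scan order quoted above, where foo.py in a directory is found before the bare foo/ directory is recorded):

    import os, sys, tempfile

    scratch = tempfile.mkdtemp()
    os.mkdir(os.path.join(scratch, 'treap'))  # plain directory, no __init__.py
    with open(os.path.join(scratch, 'treap.py'), 'w') as f:
        f.write('marker = "module, not namespace package"\n')

    sys.path.insert(0, scratch)
    import treap
    # Per PEP 420 this should print the marker from treap.py; the report
    # above suggests 3.3a4 imports the treap/ directory instead.
    print(treap.marker)

Since treap.py and treap/ live in the same sys.path entry, the module step of the scan should fire before the namespace-package fallback ever records the directory.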
From guido at python.org Sat Jun 9 05:06:33 2012 From: guido at python.org (Guido van Rossum) Date: Fri, 8 Jun 2012 20:06:33 -0700 Subject: [Python-Dev] TZ-aware local time In-Reply-To: References: Message-ID: On Fri, Jun 8, 2012 at 2:08 PM, Alexander Belopolsky wrote: > On Tue, Jun 5, 2012 at 6:07 PM, Guido van Rossum wrote: >>> See http://bugs.python.org/issue9527 . > With datetime.timestamp() method committed, I would like to get back > to this issue. What was committed? The bug only mentions a change to the email package. > In some sense, an inverse of datetime.timestamp() is > missing from the datetime module. Given a POSIX timestamp, I cannot > get the local time as an aware datetime object. But that's because there are (almost) no tz objects in the stdlib. >> Reading the requirements for a timezone implementation and the >> time.localtime() function, it would seem that a timezone object >> representing "local time" can certainly be constructed, as long as the >> time module uses or emulates the Unix/C behavior. > > A tzinfo subclass emulating local time rules introduces the DST > ambiguity to a problem that does not inherently suffer from it. But if you knew the name for the local time zone and constructed a datetime using it (e.g. using the excellent pytz package) it would suffer from exactly the same problem, wouldn't it? (Except on some systems it might be more correct for historic dates when a different algorithm was used.) > See > . A typical application is an > e-mail agent that has to insert local time complete with UTC offset in > the message header. The time.localtime() method will produce local > time components together with the dst flag from which time.strftime() > can produce RFC 3339 timestamp using "%z" directive. There is no > ambiguity during DST transition. The only complication is that time > component TZ components exhibit canceling discontinuities at those > times. For example, > >>>> t = mktime((2010, 11, 7, 1, 0, 0, -1, -1, 0)) >>>> for i in range(5): > ...     print(strftime("%T%z", localtime(t + i - 2))) > ... > 01:59:58-0400 > 01:59:59-0400 > 01:00:00-0500 > 01:00:01-0500 > 01:00:02-0500 > > As I explained at , it is not > possible to reproduce this sequence using LocalTimezone. So LocalTimezone (or any named timezone that uses DST) should not be used for this purpose. That does not make LocalTimezone useless -- it is no less useless than any DST-supporting timezone. >> I don't like that function. It returns two different timezone objects >> depending on whether DST is in effect. Also it adds a new method to >> the datetime class, which I think is unnecessary here. > > We can avoid introducing new methods. We can add aware= flag to > datetime.now() and datetime.fromtimestamp(), but it may be better to > introduce special values for existing tzinfo= argument instead. For > example, we can allow passing an empty string in tzinfo to signify > that a timezone instance should be generated filled with appropriate > local offset. This choice may seem less of a hack if we introduce > some simple TZ parsing and allow string values like "UTC-0500" as > valid tzinfo specifications. I'm still unsure what problem you're trying to solve. Can we just introduce LocalTimezone (or whatever name it should have) and let the issue rest?
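For reference, the LocalTimezone under discussion is essentially the recipe from the tzinfo documentation examples; a condensed sketch (details assumed here for illustration, the real example file is longer):

    import time as _time
    from datetime import tzinfo, timedelta

    STDOFFSET = timedelta(seconds=-_time.timezone)
    DSTOFFSET = timedelta(seconds=-_time.altzone) if _time.daylight else STDOFFSET

    class LocalTimezone(tzinfo):
        def utcoffset(self, dt):
            return DSTOFFSET if self._isdst(dt) else STDOFFSET
        def dst(self, dt):
            return DSTOFFSET - STDOFFSET if self._isdst(dt) else timedelta(0)
        def tzname(self, dt):
            return _time.tzname[self._isdst(dt)]
        def _isdst(self, dt):
            # Classifies a *wall-clock* time by round-tripping through
            # mktime(); in the repeated hour at the end of DST this is
            # inherently ambiguous -- the one-hour-a-year problem above.
            tt = (dt.year, dt.month, dt.day, dt.hour, dt.minute, dt.second,
                  dt.weekday(), 0, -1)
            return _time.localtime(_time.mktime(tt)).tm_isdst > 0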
-- --Guido van Rossum (python.org/~guido) From lists at cheimes.de Sat Jun 9 17:43:25 2012 From: lists at cheimes.de (Christian Heimes) Date: Sat, 09 Jun 2012 17:43:25 +0200 Subject: [Python-Dev] VS 2012 Express will support desktop apps Message-ID: <4FD36F1D.5070709@cheimes.de> Good news everybody! Microsoft has announced that the free VS 2012 Express Edition (aka VS11) will support desktop and console apps, too. Formerly only Metro apps were supported by the free edition. http://blogs.msdn.com/b/visualstudio/archive/2012/06/08/visual-studio-express-2012-for-windows-desktop.aspx Christian From ironfroggy at gmail.com Sat Jun 9 18:44:37 2012 From: ironfroggy at gmail.com (Calvin Spealman) Date: Sat, 9 Jun 2012 12:44:37 -0400 Subject: [Python-Dev] PEP 362 Second Revision In-Reply-To: References: Message-ID: On Thu, Jun 7, 2012 at 10:41 AM, Yury Selivanov wrote: > Hello, > > The new revision of PEP 362 has been posted: > http://www.python.org/dev/peps/pep-0362/ While the actual implementation is more complex, the PEP is a lot more clear and direct than it first was. Great job! I'm really looking forward to this, after working through essentially the same problems as part of tracerlib. So this has a huge thumbs up from me, fwiw. > Thanks to Brett, Larry, Nick, and everybody else on python-dev > for your corrections/suggestions. > > Summary of changes: > > 1. We don't cache signatures in __signature__ attribute implicitly > > 2. signature() function is now more complex, but supports methods, > partial objects, classes, callables, and decorated functions > > 3. Signatures are always constructed on demand > > 4. Dropped the deprecation section > > The implementation is not aligned with the latest PEP yet, > I'll try to update it tonight. > > Thanks, > - > Yury > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/ironfroggy%40gmail.com -- Read my blog! I depend on your acceptance of my opinion! I am interesting! http://techblog.ironfroggy.com/ Follow me if you're into that sort of thing: http://www.twitter.com/ironfroggy From matti.picus at gmail.com Sat Jun 9 20:06:27 2012 From: matti.picus at gmail.com (Matti Picus) Date: Sat, 09 Jun 2012 21:06:27 +0300 Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython In-Reply-To: References: Message-ID: <4FD390A3.2020209@gmail.com> An HTML attachment was scrubbed... URL: From brett at yvrsfo.ca Sun Jun 10 04:05:19 2012 From: brett at yvrsfo.ca (Brett Cannon) Date: Sat, 9 Jun 2012 22:05:19 -0400 Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython In-Reply-To: References: Message-ID: On Friday, June 8, 2012, Antoine Pitrou wrote: > Le 08/06/2012 20:29, Brett Cannon a ?crit : > >> >> > P.S. Do we need a python-implementations mailing list or >> something for >> > discussing overall VM-related stuff among all VMs instead of >> always bringing >> > this up on python-dev? E.g. I wish I had a place where I could >> get all the >> > VM stakeholders' attention to make sure that importlib as it >> stands in >> > Python 3.3 will skip trying to import Python bytecode properly >> (or if the >> > VMs will simply provide their own setup function and that won't >> be a worry). >> > And I would have no problem with keeping it like >> python-committers in terms >> > of closed subscriptions, open archive in order to keep the noise >> low. 
>> I think a python-implementations list would be a fantastic idea - I >> sometimes miss multi-implementation discussions in python-dev, or at >> least come in very late. >> >> >> If other people agree then I will get Barry to create it. >> > > Well, the question is, are many python-dev discussions CPython(specific? > If not, then it doesn't make a lot of sense to create > python-implementations (and it's one more subscription to manage for those > of us who want to keep an eye on all core development-related discussions). > > But the other VMs don't necessarily care about the development of the language, so when the occasional thing comes up regarding all the VMs, should that require they follow python-dev in its entirety? And I don't see the list making sweeping decisions that would affect CPython and python-dev without bringing it up there later. Think of the proposed list more like a SIG than anything else. -Brett > Regards > > Antoine. > > ______________________________**_________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/**mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/**mailman/options/python-dev/** > brett%40python.org > -- [sent from my iPad] -------------- next part -------------- An HTML attachment was scrubbed... URL: From python-dev at masklinn.net Sun Jun 10 09:42:19 2012 From: python-dev at masklinn.net (Xavier Morel) Date: Sun, 10 Jun 2012 09:42:19 +0200 Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython In-Reply-To: References: Message-ID: <0D03BF1E-3403-42AE-BEEF-BBF124BD1FF7@masklinn.net> On 2012-06-08, at 20:29 , Brett Cannon wrote: > On Fri, Jun 8, 2012 at 2:21 PM, fwierzbicki at gmail.com > wrote: > >> On Fri, Jun 8, 2012 at 10:59 AM, Brett Cannon wrote: >>> R. David already replied to this, but just to reiterate: tests can always >>> get updated, and code that fixes a bug (and leaving a file open can be >>> considered a bug) can also go in. It's just stuff like code refactoring, >>> speed improvements, etc. that can't go into Python 2.7 at this point. >> Thanks for the clarification! >> >>> If/until the stdlib is made into its own repo, should the various VMs >>> consider keeping a common Python 2.7 repo that contains nothing but the >>> stdlib (or at least just modifications to those) so they can modify in >> ways >>> that CPython can't accept because of compatibility policy? You could >> keep it >>> on hg.python.org (or wherever) and then all push to it. This might also >> be a >>> good way to share Python implementations of extension modules for Python >> 2.7 >>> instead of everyone maintaining there own for the next few years >> (although I >>> think those modules should go into the stdlib directly for Python 3 as >>> well). Basically this could be a test to see if communication and >>> collaboration will be high enough among the other VMs to bother with >>> breaking out the actual stdlib into its own repo or if it would just be a >>> big waste of time. >> I'd be up for trying this. I don't think it's easy to fork a >> subdirectory of CPython though - right now I just keep an unchanged >> copy of the 2.7 LIb in our repo (PyPy does the same, at least the last >> time I checked). >> > > Looks like hg doesn't have support yet: > http://stackoverflow.com/questions/920355/how-do-i-clone-a-sub-folder-of-a-repository-in-mercurial > Using that would mean commits in the "externalized stdlib" would go into the Python 2.7 repo, which I assume is *not* desirable. 
A better-fitting path of action would be a hg -> hg convert using a filemap, as the first comment in your link shows. That would create a full copy (with history replay) of the standard library, in a brand new repository. Then *that* could be used by everybody. From ncoghlan at gmail.com Sun Jun 10 14:53:04 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 10 Jun 2012 22:53:04 +1000 Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython In-Reply-To: References: Message-ID: On Sun, Jun 10, 2012 at 12:05 PM, Brett Cannon wrote: >> Well, the question is, are many python-dev discussions CPython(specific? >> If not, then it doesn't make a lot of sense to create python-implementations >> (and it's one more subscription to manage for those of us who want to keep >> an eye on all core development-related discussions). >> > > But the other VMs don't necessarily care about the development of the > language, so when the occasional thing comes up regarding all the VMs, > should that require they follow python-dev in its entirety? And I don't see > the list making sweeping decisions that would affect CPython and python-dev > without bringing it up there later. Think of the proposed list more like a > SIG than anything else. Yeah, I think it makes sense. With the current situation, the bridges between the implementations are limited to those with the personal bandwidth to follow their implementation's core list *and* python-dev. With a separate list, it becomes easier to get feedback on cases where we want to check that an idea we're considering is feasible for all the major implementations. It also creates a neutral space for the other VMs to discuss stuff like collaborating on pure Python versions of C implemented modules. If we can get to the point where there's a separate "stdlib-only" pure Python mirror based on CPython's Mercurial repo that other implementations can all share, *without* requiring changes to CPython itself, that would be pretty nice. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From ncoghlan at gmail.com Sun Jun 10 15:41:55 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 10 Jun 2012 23:41:55 +1000 Subject: [Python-Dev] [Python-checkins] cpython: Note that the _asdict() method is outdated In-Reply-To: References: Message-ID: On Sun, Jun 10, 2012 at 12:15 PM, raymond.hettinger wrote: > http://hg.python.org/cpython/rev/fecbcd5c3978 > changeset: ? 77397:fecbcd5c3978 > user: ? ? ? ?Raymond Hettinger > date: ? ? ? ?Sat Jun 09 19:15:26 2012 -0700 > summary: > ?Note that the _asdict() method is outdated This checkin changed a lot of the indentation in the collections docs. Did you mean to do that? Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From benjamin at python.org Sun Jun 10 19:59:35 2012 From: benjamin at python.org (Benjamin Peterson) Date: Sun, 10 Jun 2012 10:59:35 -0700 Subject: [Python-Dev] Updated PEP 362 (Function Signature Object) In-Reply-To: References: Message-ID: 2012/6/5 Brett Cannon : > * is_keyword_only : bool > ? ? True if the parameter is keyword-only, else False. > * is_args : bool > ? ? True if the parameter accepts variable number of arguments > ? ? (``\*args``-like), else False. How about "vararg" as its named in AST. > * is_kwargs : bool > ? ? True if the parameter accepts variable number of keyword > ? ? arguments (``\*\*kwargs``-like), else False. Can the "is_" be dropped? It's quite ugly. 
Even better, since these are all mutually exclusive, they could be cascaded into a single "type" attribute. > * is_implemented : bool >     True if the parameter is implemented for use. Some platforms >     implement functions but can't support specific parameters >     (e.g. "mode" for os.mkdir). Passing in an unimplemented >     parameter may result in the parameter being ignored, >     or in NotImplementedError being raised. It is intended that >     all conditions where ``is_implemented`` may be False be >     thoroughly documented. -- Regards, Benjamin From larry at hastings.org Sun Jun 10 21:57:00 2012 From: larry at hastings.org (Larry Hastings) Date: Sun, 10 Jun 2012 12:57:00 -0700 Subject: [Python-Dev] Updated PEP 362 (Function Signature Object) In-Reply-To: References: Message-ID: <4FD4FC0C.9060500@hastings.org> On 06/10/2012 10:59 AM, Benjamin Peterson wrote: > 2012/6/5 Brett Cannon: >> * is_keyword_only : bool >> True if the parameter is keyword-only, else False. >> * is_args : bool >> True if the parameter accepts variable number of arguments >> (``\*args``-like), else False. > How about "vararg" as its named in AST. Please read the rest of the thread; this has already been discussed. > Can the "is_" be dropped? It's quite ugly. I suppose that's in the eye of the beholder: if parameter.is_kwargs: reads quite naturally to me. > Even better, since these are all mutually exclusive, > they could be cascaded into a single "type" attribute. Can you make a more concrete suggestion? "type" strikes me as a poor choice of name, as it makes one think immediately of type(), which is another, uh, variety of "type". //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From benjamin at python.org Sun Jun 10 22:27:38 2012 From: benjamin at python.org (Benjamin Peterson) Date: Sun, 10 Jun 2012 13:27:38 -0700 Subject: [Python-Dev] Updated PEP 362 (Function Signature Object) In-Reply-To: <4FD4FC0C.9060500@hastings.org> References: <4FD4FC0C.9060500@hastings.org> Message-ID: 2012/6/10 Larry Hastings : > Can you make a more concrete suggestion? "type" strikes me as a poor choice > of name, as it makes one think immediately of type(), which is another, uh, > variety of "type". kind -> "position" or "keyword_only" or "vararg" or "kwarg" -- Regards, Benjamin From guido at python.org Mon Jun 11 02:41:45 2012 From: guido at python.org (Guido van Rossum) Date: Sun, 10 Jun 2012 17:41:45 -0700 Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython In-Reply-To: References: Message-ID: Really? Are we now proposing multiple lists? That just makes it easier to miss stuff for me. On Sun, Jun 10, 2012 at 5:53 AM, Nick Coghlan wrote: > On Sun, Jun 10, 2012 at 12:05 PM, Brett Cannon wrote: >>> Well, the question is, are many python-dev discussions CPython(specific? >>> If not, then it doesn't make a lot of sense to create python-implementations >>> (and it's one more subscription to manage for those of us who want to keep >>> an eye on all core development-related discussions). >>> >> >> But the other VMs don't necessarily care about the development of the >> language, so when the occasional thing comes up regarding all the VMs, >> should that require they follow python-dev in its entirety? And I don't see >> the list making sweeping decisions that would affect CPython and python-dev >> without bringing it up there later. Think of the proposed list more like a >> SIG than anything else. > > Yeah, I think it makes sense.
With the current situation, the bridges > between the implementations are limited to those with the personal > bandwidth to follow their implementation's core list *and* python-dev. > With a separate list, it becomes easier to get feedback on cases where > we want to check that an idea we're considering is feasible for all > the major implementations. > > It also creates a neutral space for the other VMs to discuss stuff > like collaborating on pure Python versions of C implemented modules. > If we can get to the point where there's a separate "stdlib-only" pure > Python mirror based on CPython's Mercurial repo that other > implementations can all share, *without* requiring changes to CPython > itself, that would be pretty nice. > > Cheers, > Nick. > > -- > Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org -- --Guido van Rossum (python.org/~guido) From brett at yvrsfo.ca Mon Jun 11 03:10:57 2012 From: brett at yvrsfo.ca (Brett Cannon) Date: Sun, 10 Jun 2012 21:10:57 -0400 Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython In-Reply-To: References: Message-ID: I am proposing a single list to just discuss multi-vm issues so that it doesn't force all other VM contributors to sign up for python-dev if they don't care about language issues. We could hijack the stdlib-sig mailing list, but that isn't the right focus necessarily. On Jun 10, 2012 8:42 PM, "Guido van Rossum" wrote: > Really? Are we now proposing multiple lists? That just makes it easier > to miss stuff for me. > > On Sun, Jun 10, 2012 at 5:53 AM, Nick Coghlan wrote: > > On Sun, Jun 10, 2012 at 12:05 PM, Brett Cannon wrote: > >>> Well, the question is, are many python-dev discussions > CPython(specific? > >>> If not, then it doesn't make a lot of sense to create > python-implementations > >>> (and it's one more subscription to manage for those of us who want to > keep > >>> an eye on all core development-related discussions). > >>> > >> > >> But the other VMs don't necessarily care about the development of the > >> language, so when the occasional thing comes up regarding all the VMs, > >> should that require they follow python-dev in its entirety? And I don't > see > >> the list making sweeping decisions that would affect CPython and > python-dev > >> without bringing it up there later. Think of the proposed list more > like a > >> SIG than anything else. > > > > Yeah, I think it makes sense. With the current situation, the bridges > > between the implementations are limited to those with the personal > > bandwidth to follow their implementation's core list *and* python-dev. > > With a separate list, it becomes easier to get feedback on cases where > > we want to check that an idea we're considering is feasible for all > > the major implementations. > > > > It also creates a neutral space for the other VMs to discuss stuff > > like collaborating on pure Python versions of C implemented modules. > > If we can get to the point where there's a separate "stdlib-only" pure > > Python mirror based on CPython's Mercurial repo that other > > implementations can all share, *without* requiring changes to CPython > > itself, that would be pretty nice. > > > > Cheers, > > Nick. 
> > > > -- > > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > > _______________________________________________ > > Python-Dev mailing list > > Python-Dev at python.org > > http://mail.python.org/mailman/listinfo/python-dev > > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/guido%40python.org > > > > -- > --Guido van Rossum (python.org/~guido) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Mon Jun 11 03:29:07 2012 From: guido at python.org (Guido van Rossum) Date: Sun, 10 Jun 2012 18:29:07 -0700 Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython In-Reply-To: References: Message-ID: But what guarantee do you have that (a) the right people sign up for the new list, and (b) topics are correctly brought up there instead of on python-dev? I agree that python-dev is turning into a firehose, but I am reluctant to create backwaters where people might arrive at what they think is a consensus only because the important opinions aren't represented there. On Sun, Jun 10, 2012 at 6:10 PM, Brett Cannon wrote: > I am proposing a single list to just discuss multi-vm issues so that it > doesn't force all other VM contributors to sign up for python-dev if they > don't care about language issues. We could hijack the stdlib-sig mailing > list, but that isn't the right focus necessarily. > > On Jun 10, 2012 8:42 PM, "Guido van Rossum" wrote: >> >> Really? Are we now proposing multiple lists? That just makes it easier >> to miss stuff for me. >> >> On Sun, Jun 10, 2012 at 5:53 AM, Nick Coghlan wrote: >> > On Sun, Jun 10, 2012 at 12:05 PM, Brett Cannon wrote: >> >>> Well, the question is, are many python-dev discussions >> >>> CPython(specific? >> >>> If not, then it doesn't make a lot of sense to create >> >>> python-implementations >> >>> (and it's one more subscription to manage for those of us who want to >> >>> keep >> >>> an eye on all core development-related discussions). >> >>> >> >> >> >> But the other VMs don't necessarily care about the development of the >> >> language, so when the occasional thing comes up regarding all the VMs, >> >> should that require they follow python-dev in its entirety? And I don't >> >> see >> >> the list making sweeping decisions that would affect CPython and >> >> python-dev >> >> without bringing it up there later. Think of the proposed list more >> >> like a >> >> SIG than anything else. >> > >> > Yeah, I think it makes sense. With the current situation, the bridges >> > between the implementations are limited to those with the personal >> > bandwidth to follow their implementation's core list *and* python-dev. >> > With a separate list, it becomes easier to get feedback on cases where >> > we want to check that an idea we're considering is feasible for all >> > the major implementations. >> > >> > It also creates a neutral space for the other VMs to discuss stuff >> > like collaborating on pure Python versions of C implemented modules. >> > If we can get to the point where there's a separate "stdlib-only" pure >> > Python mirror based on CPython's Mercurial repo that other >> > implementations can all share, *without* requiring changes to CPython >> > itself, that would be pretty nice. >> > >> > Cheers, >> > Nick. >> > >> > -- >> > Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? 
Brisbane, Australia >> > _______________________________________________ >> > Python-Dev mailing list >> > Python-Dev at python.org >> > http://mail.python.org/mailman/listinfo/python-dev >> > Unsubscribe: >> > http://mail.python.org/mailman/options/python-dev/guido%40python.org >> >> >> >> -- >> --Guido van Rossum (python.org/~guido) -- --Guido van Rossum (python.org/~guido) From alexandre.zani at gmail.com Mon Jun 11 05:42:39 2012 From: alexandre.zani at gmail.com (Alexandre Zani) Date: Sun, 10 Jun 2012 20:42:39 -0700 Subject: [Python-Dev] Updated PEP 362 (Function Signature Object) In-Reply-To: References: <4FD4FC0C.9060500@hastings.org> Message-ID: On Sun, Jun 10, 2012 at 1:27 PM, Benjamin Peterson wrote: > 2012/6/10 Larry Hastings : >> Can you make a more concrete suggestion?? "type" strikes me as a poor choice >> of name, as it makes one think immediately of type(), which is another, uh, >> variety of "type". > > kind -> > "position" or > "keword_only" or > "vararg" or > "kwarg" > > > -- > Regards, > Benjamin > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/alexandre.zani%40gmail.com I prefer the flags. Flags means I can just look at the Parameter object. A "type" or "kind" or whatever means I need to compare to a bunch of constants. That's more stuff to remember. From benjamin at python.org Mon Jun 11 08:13:19 2012 From: benjamin at python.org (Benjamin Peterson) Date: Sun, 10 Jun 2012 23:13:19 -0700 Subject: [Python-Dev] Updated PEP 362 (Function Signature Object) In-Reply-To: References: <4FD4FC0C.9060500@hastings.org> Message-ID: 2012/6/10 Alexandre Zani : > > I prefer the flags. Flags means I can just look at the Parameter > object. A "type" or "kind" or whatever means I need to compare to a > bunch of constants. That's more stuff to remember. I don't see why remembering 4 names is any harder than remember four attributes. -- Regards, Benjamin From alexandre.zani at gmail.com Mon Jun 11 08:20:07 2012 From: alexandre.zani at gmail.com (Alexandre Zani) Date: Sun, 10 Jun 2012 23:20:07 -0700 Subject: [Python-Dev] Updated PEP 362 (Function Signature Object) In-Reply-To: References: <4FD4FC0C.9060500@hastings.org> Message-ID: On Sun, Jun 10, 2012 at 11:13 PM, Benjamin Peterson wrote: > 2012/6/10 Alexandre Zani : >> >> I prefer the flags. Flags means I can just look at the Parameter >> object. A "type" or "kind" or whatever means I need to compare to a >> bunch of constants. That's more stuff to remember. > > I don't see why remembering 4 names is any harder than remember four attributes. If it's 4 flags, you can tab-complete on the signature object itself, the meaning of the flags are self-documenting and if you make a mistake, you get an AttributeError which is easier to debug. Also, param.is_args is much simpler/cleaner than param.type == "args" or param.type == inspect.Parameter.VARARGS > > > -- > Regards, > Benjamin From ncoghlan at gmail.com Mon Jun 11 08:58:03 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 11 Jun 2012 16:58:03 +1000 Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython In-Reply-To: References: Message-ID: On Mon, Jun 11, 2012 at 11:29 AM, Guido van Rossum wrote: > But what guarantee do you have that (a) the right people sign up for > the new list, and (b) topics are correctly brought up there instead of > on python-dev? 
I agree that python-dev is turning into a firehose, but > I am reluctant to create backwaters where people might arrive at what > they think is a consensus only because the important opinions aren't > represented there. If that's a concern, I'd be happy to limit the use of the new list to "Input from other implementations needed on python-dev thread ". At the moment, it's a PITA to chase other implementations to get confirmation that they can cope with a change we're considering, so I'd like confirmation that either: 1. Asking on python-dev is considered adequate. If an implementation wants to be consulted on changes, one or more of their developers *must* follow python-dev sufficiently closely that they don't miss cross-VM compatibility questions. (My concern is that this isn't reliable - we know from experience that other VMs can miss such questions when they're mixed in with the rest of the python-dev traffic) 2. As 1, but we adopt a subject line convention to make it easier to filter out general python-dev traffic for those that are just interested in cross-vm questions 3. Create a separate list for cross-VM discussions, *including* discussions that aren't directly relevant to Python-the-language or CPython-the-reference-interpreter (e.g. collaborating on a shared standard library fork). python-dev threads may be advertised on the new list if cross-VM feedback is considered particularly necessary. As Brett pointed out, it's similar to the resurrection of import-sig - we know that decisions aren't final until they're resolved on python-dev, but it also means we're not flooding python-dev with interminable arcane discussions on import system internals. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From jnoller at gmail.com Mon Jun 11 12:26:30 2012 From: jnoller at gmail.com (Jesse Noller) Date: Mon, 11 Jun 2012 06:26:30 -0400 Subject: [Python-Dev] VS 11 Express is Metro only. In-Reply-To: References: <20120525003647.Horde.IQhrGKGZi1VPvrf-vtuimuA@webmail.df.eu> <20120525140622.Horde.U3HcfruWis5Pv3W_5yzihNA@webmail.df.eu> Message-ID: On Friday, June 8, 2012 at 3:01 PM, Meador Inge wrote: > On Fri, May 25, 2012 at 7:06 AM, wrote: > > > I hereby predict that Microsoft will revert this decision, and that VS > > Express > > 11 will be able to build CPython. > > > > And your prediction was right on :-) : > http://blogs.msdn.com/b/visualstudio/archive/2012/06/08/visual-studio-express-2012-for-windows-desktop.aspx. > Martin's predictions are sometimes eerily correct. ;) From alexander.belopolsky at gmail.com Mon Jun 11 15:33:11 2012 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Mon, 11 Jun 2012 09:33:11 -0400 Subject: [Python-Dev] TZ-aware local time In-Reply-To: References: Message-ID: On Fri, Jun 8, 2012 at 11:06 PM, Guido van Rossum wrote: > On Fri, Jun 8, 2012 at 2:08 PM, Alexander Belopolsky > wrote: .. >>>>> t = mktime((2010, 11, 7, 1, 0, 0, -1, -1, 0)) >>>>> for i in range(5): >> ... ? ? print(strftime("%T%z", localtime(t + i - 2))) >> ... >> 01:59:58-0400 >> 01:59:59-0400 >> 01:00:00-0500 >> 01:00:01-0500 >> 01:00:02-0500 >> >> As I explained at , it is not >> possible to reproduce this sequence using LocalTimezone. > > So LocalTimezone (or any named timezone that uses DST) should not be > used for this purpose. .. > I'm still unsure what problem you're trying to solve. The problem is: produce a timestamp (e.g. RFC 3339) complete with timezone information corresponding to the current time or any other unambiguously defined time. 
This is exactly what my proposed datetime.localtime() function does. > Can we just > introduce LocalTimezone (or whatever name it should have) and let the > issue rest? No. LocalTimezone adresses a different problem and does not solve this one. If you generate a timestamp using datetime.now(LocalTimezone).strftime("... %z"), your timestamps will be wrong for one hour every year. Some users may tolerate this, but the problem is not hard to solve and I think datetime module should offer one obvious and correct way to do it. From ericsnowcurrently at gmail.com Mon Jun 11 17:28:02 2012 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Mon, 11 Jun 2012 09:28:02 -0600 Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython In-Reply-To: References: Message-ID: On Mon, Jun 11, 2012 at 12:58 AM, Nick Coghlan wrote: > On Mon, Jun 11, 2012 at 11:29 AM, Guido van Rossum wrote: >> But what guarantee do you have that (a) the right people sign up for >> the new list, and (b) topics are correctly brought up there instead of >> on python-dev? I agree that python-dev is turning into a firehose, but >> I am reluctant to create backwaters where people might arrive at what >> they think is a consensus only because the important opinions aren't >> represented there. > > If that's a concern, I'd be happy to limit the use of the new list to > "Input from other implementations needed on python-dev thread ". > > At the moment, it's a PITA to chase other implementations to get > confirmation that they can cope with a change we're considering, so > I'd like confirmation that either: > > 1. Asking on python-dev is considered adequate. If an implementation > wants to be consulted on changes, one or more of their developers > *must* follow python-dev sufficiently closely that they don't miss > cross-VM compatibility questions. (My concern is that this isn't > reliable - we know from experience that other VMs can miss such > questions when they're mixed in with the rest of the python-dev > traffic) > 2. As 1, but we adopt a subject line convention to make it easier to > filter out general python-dev traffic for those that are just > interested in cross-vm questions > 3. Create a separate list for cross-VM discussions, *including* > discussions that aren't directly relevant to Python-the-language or > CPython-the-reference-interpreter (e.g. collaborating on a shared > standard library fork). python-dev threads may be advertised on the > new list if cross-VM feedback is considered particularly necessary. > > As Brett pointed out, it's similar to the resurrection of import-sig - > we know that decisions aren't final until they're resolved on > python-dev, but it also means we're not flooding python-dev with > interminable arcane discussions on import system internals. +1 While soliciting feedback on PEP 421 (sys.implementation), the first option got me nearly no responses from the other major Python implementations. In the end, I tracked down which would be the appropriate mailing lists for PyPy, Jython, and IronPython, and directly wrote to them, which seemed a less than optimal approach. It also means that I left out any other interested parties. As well, I'm still not positive I wrote to the best lists for those implementations. Nick's option 2 would be an improvement, but I imagine that option 3 would have been the most effective by far. Of course, the key thing is how closely the various implementors would follow the new list. Only they could say, though Frank Wierzbicki seemed positive about it. 
FWIW, I also like Nick's idea of "redirecting" to ongoing python-dev threads and his comparison of the proposed new list to import-sig. -eric From alex.gaynor at gmail.com Mon Jun 11 18:20:59 2012 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Mon, 11 Jun 2012 16:20:59 +0000 (UTC) Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython References: Message-ID: Eric Snow gmail.com> writes: > > Nick's option 2 would be an improvement, but I imagine that option 3 > would have been the most effective by far. Of course, the key thing > is how closely the various implementors would follow the new list. > Only they could say, though Frank Wierzbicki seemed positive about it. > -eric > I'm +1 on such a list, I don't have the time to follow every single thread on python-dev, and I'm sure I miss a lot of things, have a dedicated place for things I know are relevant to my work would be a great help. Alex From jdhardy at gmail.com Mon Jun 11 18:33:33 2012 From: jdhardy at gmail.com (Jeff Hardy) Date: Mon, 11 Jun 2012 09:33:33 -0700 Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython In-Reply-To: References: Message-ID: On Mon, Jun 11, 2012 at 8:28 AM, Eric Snow wrote: > Nick's option 2 would be an improvement, but I imagine that option 3 > would have been the most effective by far. ?Of course, the key thing > is how closely the various implementors would follow the new list. > Only they could say, though Frank Wierzbicki seemed positive about it. This has come up a couple of times recently (discussions on PEP 421 and PEP 405), so I think it would be worth while. I don't have the time to track all of the different proposals that are in flux; it would be nice to know when they're "done" and just need a sanity check to make sure everything will work for other implementations. - Jeff From guido at python.org Mon Jun 11 19:01:57 2012 From: guido at python.org (Guido van Rossum) Date: Mon, 11 Jun 2012 10:01:57 -0700 Subject: [Python-Dev] TZ-aware local time In-Reply-To: References: Message-ID: On Mon, Jun 11, 2012 at 6:33 AM, Alexander Belopolsky wrote: > On Fri, Jun 8, 2012 at 11:06 PM, Guido van Rossum wrote: >> On Fri, Jun 8, 2012 at 2:08 PM, Alexander Belopolsky >> wrote: > .. >>>>>> t = mktime((2010, 11, 7, 1, 0, 0, -1, -1, 0)) >>>>>> for i in range(5): >>> ... ? ? print(strftime("%T%z", localtime(t + i - 2))) >>> ... >>> 01:59:58-0400 >>> 01:59:59-0400 >>> 01:00:00-0500 >>> 01:00:01-0500 >>> 01:00:02-0500 >>> >>> As I explained at , it is not >>> possible to reproduce this sequence using LocalTimezone. >> >> So LocalTimezone (or any named timezone that uses DST) should not be >> used for this purpose. > .. >> I'm still unsure what problem you're trying to solve. > > The problem is: produce a timestamp (e.g. ?RFC 3339) complete with > timezone information corresponding to the current time or any other > unambiguously defined time. ?This is exactly what my proposed > datetime.localtime() function does. Maybe the problem here is the *input*? It should be a POSIX timestamp, not a datetime object. Another approach might be to tweak the existing timetuple() API to properly compute the is_dst flag? It currently always seems to return -1 for that flag. >> Can we just >> introduce LocalTimezone (or whatever name it should have) and let the >> issue rest? > > No. ?LocalTimezone adresses a different problem and does not solve > this one. ?If you generate a timestamp using > datetime.now(LocalTimezone).strftime("... %z"), your timestamps will > be wrong for one hour every year. 
Some users may tolerate this, but > the problem is not hard to solve and I think datetime module should > offer one obvious and correct way to do it. Ok, I trust that LocalTimezone doesn't solve your problem. Separately, then, I still think we should have it in the stdlib, since it probably covers the most use cases besides the utc we already have. PS. TBH I don't care for the idea that we should try to hide the time module and consider it a low-level implementation detail for the shiny high-level datetime module. They simply serve different purposes. Users should be required to understand POSIX timestamps and the importance of UTC before they try to work with multiple timezones. And they should store timestamps as POSIX timestamps, not as datetime objects with an (implicit or explicit) UTC timezone. -- --Guido van Rossum (python.org/~guido) From rdmurray at bitdance.com Mon Jun 11 19:30:38 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Mon, 11 Jun 2012 13:30:38 -0400 Subject: [Python-Dev] TZ-aware local time In-Reply-To: References: Message-ID: <20120611173039.707C025009E@webabinitio.net> On Mon, 11 Jun 2012 10:01:57 -0700, Guido van Rossum wrote: > On Mon, Jun 11, 2012 at 6:33 AM, Alexander Belopolsky > wrote: > > On Fri, Jun 8, 2012 at 11:06 PM, Guido van Rossum wrote: > >> On Fri, Jun 8, 2012 at 2:08 PM, Alexander Belopolsky > >> wrote: > > .. > >>>>>> t = mktime((2010, 11, 7, 1, 0, 0, -1, -1, 0)) > >>>>>> for i in range(5): > >>> ...     print(strftime("%T%z", localtime(t + i - 2))) > >>> ... > >>> 01:59:58-0400 > >>> 01:59:59-0400 > >>> 01:00:00-0500 > >>> 01:00:01-0500 > >>> 01:00:02-0500 > >>> > >>> As I explained at , it is not > >>> possible to reproduce this sequence using LocalTimezone. > >> > >> So LocalTimezone (or any named timezone that uses DST) should not be > >> used for this purpose. > > .. > >> I'm still unsure what problem you're trying to solve. > > > > The problem is: produce a timestamp (e.g. RFC 3339) complete with > > timezone information corresponding to the current time or any other > > unambiguously defined time. This is exactly what my proposed > > datetime.localtime() function does. > > Maybe the problem here is the *input*? It should be a POSIX timestamp, > not a datetime object. > > Another approach might be to tweak the existing timetuple() API to > properly compute the is_dst flag? It currently always seems to return > -1 for that flag. > > >> Can we just > >> introduce LocalTimezone (or whatever name it should have) and let the > >> issue rest? > > > > No. LocalTimezone addresses a different problem and does not solve > > this one. If you generate a timestamp using > > datetime.now(LocalTimezone).strftime("... %z"), your timestamps will > > be wrong for one hour every year. Some users may tolerate this, but > > the problem is not hard to solve and I think datetime module should > > offer one obvious and correct way to do it. > > Ok, I trust that LocalTimezone doesn't solve your problem. Separately, > then, I still think we should have it in the stdlib, since it probably > covers the most use cases besides the utc we already have. > > PS. TBH I don't care for the idea that we should try to hide the time > module and consider it a low-level implementation detail for the shiny > high-level datetime module. They simply serve different purposes. Users > should be required to understand POSIX timestamps and the importance > of UTC before they try to work with multiple timezones. And they > should store timestamps as POSIX timestamps, not as datetime objects > with an (implicit or explicit) UTC timezone. For the email package, I need a way to get from 'now' to an RFC 5322 date/time string, and whatever that way is it needs to (1) be unambiguous what time in what timezone offset it is when the user passes it to me, and (2) it needs to interoperate with date/time+timezone-offset that is obtained from other email headers. An aware datetime using a 'timezone' tzinfo object seems the most logical thing to use for that, which means I need a way to go from 'now' to such a datetime object. I don't care how it happens :) --David From alexander.belopolsky at gmail.com Mon Jun 11 20:42:07 2012 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Mon, 11 Jun 2012 14:42:07 -0400 Subject: [Python-Dev] TZ-aware local time In-Reply-To: References: Message-ID: On Mon, Jun 11, 2012 at 1:01 PM, Guido van Rossum wrote: .. > Maybe the problem here is the *input*? It should be a POSIX timestamp, > not a datetime object. > No. "Seconds since epoch" or "POSIX" timestamp is a foreign data type to the datetime module. An aware datetime object with tzinfo=timezone.utc or a naive datetime object representing UTC time by convention is the datetime module way to represent absolute time. If I need to convert time obtained from an e-mail header or a log file to local time, I don't want to go through a "POSIX" timestamp. I want the obvious code to work correctly: >>> t = datetime.strptime(time_string, format) >>> local_time_string = datetime.localtime(t).strftime(format) (Note that the first statement already works in 3.2 if timezone information is compatible with %z format.) .. > Ok, I trust that LocalTimezone doesn't solve your problem. Separately, > then, I still think we should have it in the stdlib, since it probably > covers the most use cases besides the utc we already have. > I am not against adding LocalTimezone to datetime. We can copy tzinfo-examples.py to datetime.py and call it the day. However, this will not eliminate the need for the localtime() function. > PS. TBH I don't care for the idea that we should try to hide the time > module and consider it a low-level implementation detail for the shiny > high-level datetime module. I don't think I ever promoted this idea. The time module has its uses, but ATM it does not completely solve the local time problem either. See . > .. Users > should be required to understand POSIX timestamps and the importance > of UTC before they try to work with multiple timezones. I disagree. First, UTC and POSIX timestamps are not the same thing. I am not talking about leap seconds or epoch. I just find this: $ TZ=UTC date Mon Jun 11 18:15:48 UTC 2012 much easier to explain than this: $ TZ=UTC date +%s 1339438586 There is no need to expose developers of e-mail servers or log analytics to 9-10 digit integers or even longer floats. Second, UTC is only special in the way that zero is special. If you write a function converting incoming time string to local time, you don't need to special case UTC as long as incoming format includes explicit offset. > And they > should store timestamps as POSIX timestamps, not as datetime objects > with an (implicit or explicit) UTC timezone. I disagree again. At the end of the day, they should "store" timestamps in a human-readable text format.
For example, http://www.w3.org/TR/xmlschema-2/#isoformats Users who want an object that stores a POSIX timestamp internally, but presents rich date-time interface can use an excellent mxDateTime class. It is one thing to provide datetime.fromtimestamp() and datetime.timestamp() methods for interoperability, it is quite another thing to require users to convert their datetime instances to timestamp for basic timezone operations. From guido at python.org Mon Jun 11 20:55:27 2012 From: guido at python.org (Guido van Rossum) Date: Mon, 11 Jun 2012 11:55:27 -0700 Subject: [Python-Dev] TZ-aware local time In-Reply-To: References: Message-ID: Let's agree to disagree. I don't have the time to argue any more but you haven't convinced me. On Mon, Jun 11, 2012 at 11:42 AM, Alexander Belopolsky wrote: > On Mon, Jun 11, 2012 at 1:01 PM, Guido van Rossum wrote: > .. >> Maybe the problem here is the *input*? It should be a POSIX timestamp, >> not a datetime object. >> > > No. ?"Seconds since epoch" or "POSIX" timestamp is a foreign data type > to the datetime module. ?An aware datetime object with > tzinfo=timezone.utc or a naive datetime object representing UTC time > by convention is the datetime module way to represent absolute time. > If I need to convert time obtained from an e-mail header or a log file > to local time, I don't want to go through a ?"POSIX" timestamp. ?I > want the obvious code to work correctly: > >>>> t = datetime.strptime(time_string, format) >>>> local_time_string = datetime.localtime(t).strftime(format) > > (Note that the first statement already works in 3.2 if timezone > information is compatible with %z format.) > > .. >> Ok, I trust that LocalTimezone doesn't solve your problem. Separately, >> then, I still think we should have it in the stdlib, since it probably >> covers the most use cases besides the utc we already have. >> > > I am not against adding LocalTimezone to datetime. ?We can copy > tzinfo-examples.py to datetime.py and call it the day. ?However, this > will not eliminate the need for the localtime() function. > >> PS. TBH I don't care for the idea that we should try to hide the time >> module and consider it a low-level implementation detail for the shiny >> high-level datetime module. > > I don't think I ever promoted this idea. ?The time module has its > uses, but ATM it does not completely solve the local time problem > either. ?See . > >> .. Users >> should be required to understand POSIX timestamps and the importance >> of UTC before they try to work with multiple timezones. > > I disagree. ?First, UTC and POSIX timestamps are not the same thing. > I am not talking about leap seconds or epoch. ?I just find this: > > $ TZ=UTC date > Mon Jun 11 18:15:48 UTC 2012 > > much easier to explain than this: > > $ TZ=UTC date +%s > 1339438586 > > There is no need to expose developers of e-mail servers or log > analytics to 9-10 digit integers or even longer floats. ?Second, UTC > is only special in the way that zero is special. ?If you write a > function converting incoming time string to local time, you don't need > to special case UTC as long as incoming format includes explicit > offset. > >> And they >> should store timestamps as POSIX timestamps, not as datetime objects >> with an (implicit or explicit) UTC timezone. > > I disagree again. ?At the end of the day, they should "store" > timestamps in a human-readable text format. 
?For example, > > http://www.w3.org/TR/xmlschema-2/#isoformats > > Users who want an object that stores a POSIX timestamp internally, but > presents rich date-time interface can use an excellent mxDateTime > class. > > It is one thing to provide datetime.fromtimestamp() and > datetime.timestamp() methods for interoperability, it is quite another > thing to require users to convert their datetime instances to > timestamp for basic timezone operations. -- --Guido van Rossum (python.org/~guido) From pje at telecommunity.com Mon Jun 11 21:35:27 2012 From: pje at telecommunity.com (PJ Eby) Date: Mon, 11 Jun 2012 15:35:27 -0400 Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython In-Reply-To: References: Message-ID: On Mon, Jun 11, 2012 at 12:33 PM, Jeff Hardy wrote: > On Mon, Jun 11, 2012 at 8:28 AM, Eric Snow > wrote: > > Nick's option 2 would be an improvement, but I imagine that option 3 > > would have been the most effective by far. Of course, the key thing > > is how closely the various implementors would follow the new list. > > Only they could say, though Frank Wierzbicki seemed positive about it. > > This has come up a couple of times recently (discussions on PEP 421 > and PEP 405), so I think it would be worth while. I don't have the > time to track all of the different proposals that are in flux; it > would be nice to know when they're "done" and just need a sanity check > to make sure everything will work for other implementations. > > Yes, perhaps if the list were *just* a place to cc: in or send a heads-up to python-dev discussions, and not to have actual list discussions per se, that would do the trick. IOW, the idea is, "If you're a contributor to a non-CPython implementation, subscribe here to get a heads-up on Python-Dev discussions you should be following." Not, "here's a list to discuss Python implementations in general", and definitely not a place to *actually conduct discussions* at all: the only things ever posted there should be cc:'d from or to Python-Dev, or be pointers to Python-Dev threads. That way, we'd have a solution for the periodic, "hmm, we should get other implementations to weigh in on this thread" problem, that wouldn't actually divide the discussion. Instead, we'd have a "Bat Signal" (Snake Signal?) to bring the other heroes in to meet with Commissioner Guido. ;-) -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Mon Jun 11 18:39:43 2012 From: barry at python.org (Barry Warsaw) Date: Mon, 11 Jun 2012 12:39:43 -0400 Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython In-Reply-To: References: Message-ID: <20120611123943.320f27a4@resist.wooz.org> On Jun 11, 2012, at 04:58 PM, Nick Coghlan wrote: >1. Asking on python-dev is considered adequate. If an implementation >wants to be consulted on changes, one or more of their developers >*must* follow python-dev sufficiently closely that they don't miss >cross-VM compatibility questions. That's certainly my preference. >(My concern is that this isn't >reliable - we know from experience that other VMs can miss such >questions when they're mixed in with the rest of the python-dev >traffic) > >2. 
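For concreteness, one possible shape for the localtime() helper Alexander proposes -- the name and signature here are illustrative, not a committed API -- is to compute the UTC offset per-instant from the platform rules and attach it as a fixed-offset timezone:

    from datetime import datetime, timezone
    import time

    def localtime(t=None):
        """Aware local datetime for POSIX timestamp t (defaults to now)."""
        if t is None:
            t = time.time()
        # The offset is derived for this specific instant, so times that
        # fall in the repeated hour around a DST transition still get an
        # unambiguous, correct offset.  (Assumes the local offset is a
        # whole number of minutes, as datetime.timezone requires.)
        offset = datetime.fromtimestamp(t) - datetime.utcfromtimestamp(t)
        return datetime.fromtimestamp(t, timezone(offset))

    print(localtime().strftime('%Y-%m-%d %H:%M:%S%z'))

The round trip through a POSIX timestamp is what resolves the ambiguity; whether that plumbing should also be reachable from a datetime-only API is the point the thread leaves open.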
>2. As 1, but we adopt a subject line convention to make it easier to
>filter out general python-dev traffic for those that are just
>interested in cross-vm questions

+1

>As Brett pointed out, it's similar to the resurrection of import-sig -
>we know that decisions aren't final until they're resolved on
>python-dev, but it also means we're not flooding python-dev with
>interminable arcane discussions on import system internals.

I personally already ignore much of python-dev and only chime in on subjects I both care about and delude myself into thinking I have something useful to contribute. For cases where I miss something and need to catch up, Gmane is perfect.

-Barry

From brett at yvrsfo.ca  Mon Jun 11 22:31:14 2012
From: brett at yvrsfo.ca (Brett Cannon)
Date: Mon, 11 Jun 2012 16:31:14 -0400
Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython
In-Reply-To:
References:
Message-ID:

On Mon, Jun 11, 2012 at 3:35 PM, PJ Eby wrote:
> On Mon, Jun 11, 2012 at 12:33 PM, Jeff Hardy wrote:
>> On Mon, Jun 11, 2012 at 8:28 AM, Eric Snow wrote:
>> > Nick's option 2 would be an improvement, but I imagine that option 3
>> > would have been the most effective by far. Of course, the key thing
>> > is how closely the various implementors would follow the new list.
>> > Only they could say, though Frank Wierzbicki seemed positive about it.
>>
>> This has come up a couple of times recently (discussions on PEP 421
>> and PEP 405), so I think it would be worthwhile. I don't have the
>> time to track all of the different proposals that are in flux; it
>> would be nice to know when they're "done" and just need a sanity check
>> to make sure everything will work for other implementations.
>
> Yes, perhaps if the list were *just* a place to cc: in or send a heads-up
> to python-dev discussions, and not to have actual list discussions per se,
> that would do the trick.
>
> IOW, the idea is, "If you're a contributor to a non-CPython
> implementation, subscribe here to get a heads-up on Python-Dev discussions
> you should be following." Not, "here's a list to discuss Python
> implementations in general", and definitely not a place to *actually
> conduct discussions* at all: the only things ever posted there should be
> cc:'d from or to Python-Dev, or be pointers to Python-Dev threads.

Do you know how much of a pain that could become if you were moderator of that list? Having to potentially clear every email that goes to some thread by hand would become nearly unmanageable.

> That way, we'd have a solution for the periodic, "hmm, we should get other
> implementations to weigh in on this thread" problem, that wouldn't actually
> divide the discussion. Instead, we'd have a "Bat Signal" (Snake Signal?)
> to bring the other heroes in to meet with Commissioner Guido. ;-)

But we already have the various SIGs carry out discussions outside of python-dev and just bring forward their results to python-dev when they are ready. Why would this list be any different?

From brett at python.org  Mon Jun 11 22:39:24 2012
From: brett at python.org (Brett Cannon)
Date: Mon, 11 Jun 2012 16:39:24 -0400
Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython
In-Reply-To: <20120611123943.320f27a4@resist.wooz.org>
References: <20120611123943.320f27a4@resist.wooz.org>
Message-ID:

On Mon, Jun 11, 2012 at 12:39 PM, Barry Warsaw wrote:
> On Jun 11, 2012, at 04:58 PM, Nick Coghlan wrote:
> >1. Asking on python-dev is considered adequate. If an implementation
> >wants to be consulted on changes, one or more of their developers
> >*must* follow python-dev sufficiently closely that they don't miss
> >cross-VM compatibility questions.
>
> That's certainly my preference.

Right, because it's easiest for you and everyone else who follows python-dev already. =) But that doesn't improve the situation; those of us who have needed to chat with the other VMs know the status quo is not an optimal (or even decent) solution. I only pull anything off because I know someone on every VM team and so I have email addresses and names. But that shouldn't be a private email conversation, nor should it require that much effort, especially if it requires pulling in someone from some other VM team that I either don't know or didn't have a clue should be included in the conversation.

> >(My concern is that this isn't
> >reliable - we know from experience that other VMs can miss such
> >questions when they're mixed in with the rest of the python-dev
> >traffic)
> >
> >2. As 1, but we adopt a subject line convention to make it easier to
> >filter out general python-dev traffic for those that are just
> >interested in cross-vm questions
>
> +1

But that then requires new people to the list to learn about this "magical" convention. We already know people don't always read the intro paragraph of the mailing list saying this is for development *of* Python, so why do you think this will be any better? The anti-top-posting happens only because everyone replies inline so people just naturally follow that. I don't see people remembering to use the magical subject line consistently. This would also be the first time one has to set up a special email filtering rule for python-dev to get a result that people are expected to have available to them.

> >As Brett pointed out, it's similar to the resurrection of import-sig -
> >we know that decisions aren't final until they're resolved on
> >python-dev, but it also means we're not flooding python-dev with
> >interminable arcane discussions on import system internals.
>
> I personally already ignore much of python-dev and only chime in on
> subjects I both care about and delude myself into thinking I have
> something useful to contribute. For cases where I miss something and
> need to catch up, Gmane is perfect.

But your search area of interest is probably quite a bit larger than that of other VM implementers. Being into VMs and compatibility <> into language design, which is what the bulk of python-dev is about (and yes, I used that not-equals operator just for you, Barry, to get the point across =).

From ben+python at benfinney.id.au  Tue Jun 12 04:10:36 2012
From: ben+python at benfinney.id.au (Ben Finney)
Date: Tue, 12 Jun 2012 12:10:36 +1000
Subject: [Python-Dev] TZ-aware local time
References:
Message-ID: <8762axz3dv.fsf@benfinney.id.au>

Alexander Belopolsky writes:

> On Mon, Jun 11, 2012 at 1:01 PM, Guido van Rossum wrote:
> > Maybe the problem here is the *input*? It should be a POSIX
> > timestamp, not a datetime object.
>
> No. "Seconds since epoch" or "POSIX" timestamp is a foreign data type
> to the datetime module.

On this point I must agree with Alexander.

Unambiguous storage of absolute time can be achieved with POSIX timestamps, but that is certainly not the only nor best way to do it.
For example, RFC 5322 specifies a standard serialisation for timestamp values that is in very wide usage, and those values are valid for transforming to a "datetime.datetime" value. POSIX timestamps are not a necessary part of the data model.

Another example is database fields storing timestamp values; they are surely a very common input for Python "datetime.datetime" values.

For many use cases a different storage is appropriate, a different input is appropriate, and POSIX timestamps are irrelevant for those use cases.

> > .. Users should be required to understand POSIX timestamps and the
> > importance of UTC before they try to work with multiple timezones.

Why? If they are using, for example, a PostgreSQL "TIMESTAMP" object to store the value, and manipulating it with Python "datetime.datetime", why should they have to know anything about POSIX timestamps?

On the contrary, for such use cases (and database timestamp values are just one such) I think it's a needless imposition on the programmer to force them to learn about POSIX timestamps, a data type irrelevant for their purposes.

--
 \      "My business is to teach my aspirations to conform themselves |
  `\     to fact, not to try and make facts harmonise with my |
_o__)    aspirations." --Thomas Henry Huxley, 1860-09-23 |
Ben Finney

From stephen at xemacs.org  Tue Jun 12 04:56:56 2012
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Tue, 12 Jun 2012 11:56:56 +0900
Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython
In-Reply-To:
References:
Message-ID: <87vcix6xvr.fsf@uwakimon.sk.tsukuba.ac.jp>

Brett Cannon writes:

> But we already have the various SIGs carry out discussions outside of
> python-dev and just bring forward their results to python-dev when they
> are ready. Why would this list be any different?

(1) Because AIUI the main problem this list is supposed to solve is contacting interested parties and getting them to come to python-dev where the actual discussion will take place. Almost certainly some of the actual discussion will take place on python-dev, no? It *is* on-topic for python-dev, right? (Guido seems to think so, anyway....) So it's not going to focus discussion the way a SIG list does.

(2) Because it delegates issue triage to people who don't actually know which of the various VMs will care about a particular change, so it's unlikely to be terribly accurate.

(3) The SIGs attract long-term interest from a body of "usual suspects". It's worth it to them to invest in the SIG list. While the VM folks will have a long-term interest, by the current definition of the new list they won't be starting threads very often! The people who should be starting threads are quite likely to have interest in only one thread, so their incentive to move it to the new list will be low; their natural tendency will be to post to python-dev and "let George move the thread if needed".

None of that means the new list is a bad idea -- it might be accurate *enough* to be a big improvement, etc -- just that it clearly is different from the various SIGs in some important ways.

From fwierzbicki at gmail.com  Tue Jun 12 05:13:53 2012
From: fwierzbicki at gmail.com (fwierzbicki at gmail.com)
Date: Mon, 11 Jun 2012 20:13:53 -0700
Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython
In-Reply-To:
References:
Message-ID:

On Sun, Jun 10, 2012 at 11:58 PM, Nick Coghlan wrote:
> 1. Asking on python-dev is considered adequate. If an implementation
> wants to be consulted on changes, one or more of their developers
> *must* follow python-dev sufficiently closely that they don't miss
> cross-VM compatibility questions. (My concern is that this isn't
> reliable - we know from experience that other VMs can miss such
> questions when they're mixed in with the rest of the python-dev
> traffic)
> 2. As 1, but we adopt a subject line convention to make it easier to
> filter out general python-dev traffic for those that are just
> interested in cross-vm questions
> 3. Create a separate list for cross-VM discussions, *including*
> discussions that aren't directly relevant to Python-the-language or
> CPython-the-reference-interpreter (e.g. collaborating on a shared
> standard library fork). python-dev threads may be advertised on the
> new list if cross-VM feedback is considered particularly necessary.

(2) and (3) work for me - I try to do (1) but often miss discussions until they have gone stale.

I bet (2) would work well enough as long as there are enough interested participants to remember to add the conventional string to the subject of an ongoing discussion. It would be very easy for me to add a filter for such a string.

-Frank

From albzey at googlemail.com  Tue Jun 12 05:17:53 2012
From: albzey at googlemail.com (Albert Zeyer)
Date: Tue, 12 Jun 2012 05:17:53 +0200
Subject: [Python-Dev] Built-in sub modules
Message-ID:

Hi,

I just created some code to support built-in sub modules.

The naive way I tried first was just to add {"Mod.Sub1.Sub2", init_modsub1sub2} to _PyImport_Inittab. This didn't work. Maybe it would be a nice addition so that this works.

Mod itself, in my case, was a package directory with pure Python code. Mod.Sub1 also. Only Mod.Sub1.Sub2 was some native code.

Now, to make it work, I added {"Mod", init_modwrapper} to _PyImport_Inittab. init_modwrapper then first loads the package just in the way it would have been loaded before (I copied some code from Python/import.c). Then, in addition, it preloads Mod.Sub1 and calls the native Mod.Sub1.Sub2 initialization (this also needs some _Py_PackageContext handling) and sets up everything as needed.

An example implementation is here:

https://github.com/albertz/python-embedded/blob/master/pycryptoutils/cryptomodule.c
https://github.com/albertz/python-embedded/blob/master/pyimportconfig.c

Maybe this is useful for someone. I also searched a bit around and didn't directly find any easier way to do this. Only a post from 2009 (http://mail.python.org/pipermail/cplusplus-sig/2009-January/014178.html) which seems like a much more ugly hack.

Btw., my example implementation is part of another Python embedded project (https://github.com/albertz/python-embedded/). It builds a single static library with Python and PyCrypto - no external dependencies. So far, some basic tests with RSA/AES work fine. Maybe that is also interesting to someone.

Regards,
Albert

From brett at yvrsfo.ca  Tue Jun 12 05:29:29 2012
From: brett at yvrsfo.ca (Brett Cannon)
Date: Mon, 11 Jun 2012 23:29:29 -0400
Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython
In-Reply-To: <87vcix6xvr.fsf@uwakimon.sk.tsukuba.ac.jp>
References: <87vcix6xvr.fsf@uwakimon.sk.tsukuba.ac.jp>
Message-ID:

On Monday, June 11, 2012, Stephen J. Turnbull wrote:
> Brett Cannon writes:
>
> > But we already have the various SIGs carry out discussions outside of
> > python-dev and just bring forward their results to python-dev when
> > they are ready. Why would this list be any different?
>
> (1) Because AIUI the main problem this list is supposed to solve is
> contacting interested parties and getting them to come to
> python-dev where the actual discussion will take place. Almost
> certainly some of the actual discussion will take place on
> python-dev, no? It *is* on-topic for python-dev, right? (Guido
> seems to think so, anyway....) So it's not going to focus
> discussion the way a SIG list does.

Not necessarily. Just like discussions on SIGs can start and end there, I see no requirement that discussions on the list end up on python-dev.

> (2) Because it delegates issue triage to people who don't actually
> know which of the various VMs will care about a particular change,
> so it's unlikely to be terribly accurate.
>
> (3) The SIGs attract long-term interest from a body of "usual
> suspects". It's worth it to them to invest in the SIG list.
> While the VM folks will have a long-term interest, by the current
> definition of the new list they won't be starting threads very
> often! The people who should be starting threads are quite likely
> to have interest in only one thread, so their incentive to move it
> to the new list will be low; their natural tendency will be to
> post to python-dev and "let George move the thread if needed".

Discussions on the list would universally affect all VMs, so there is an incentive to pay attention.

> None of that means the new list is a bad idea -- it might be accurate
> *enough* to be a big improvement, etc -- just that it clearly is
> different from the various SIGs in some important ways.

--
[sent from my iPad]

From ncoghlan at gmail.com  Tue Jun 12 05:37:14 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 12 Jun 2012 13:37:14 +1000
Subject: [Python-Dev] Built-in sub modules
In-Reply-To:
References:
Message-ID:

On Tue, Jun 12, 2012 at 1:17 PM, Albert Zeyer wrote:
> I also searched a bit around and didn't directly find any easier
> way to do this. Only a post from 2009
> (http://mail.python.org/pipermail/cplusplus-sig/2009-January/014178.html)
> which seems like a much more ugly hack.

Right, it isn't currently supported. http://bugs.python.org/issue1644818 is a long-standing feature request to add this functionality.

For Python 3.3, the old import system (written in C) has been replaced with importlib (written in Python). This should make it easier to extend and experiment with builtin submodule support. An importlib based solution should also work in Python 3.2.

Further discussions would be best directed to import-sig, until an importlib based solution is available for consideration.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From guido at python.org  Tue Jun 12 06:17:46 2012
From: guido at python.org (Guido van Rossum)
Date: Mon, 11 Jun 2012 21:17:46 -0700
Subject: [Python-Dev] TZ-aware local time
In-Reply-To: <8762axz3dv.fsf@benfinney.id.au>
References: <8762axz3dv.fsf@benfinney.id.au>
Message-ID:

On Mon, Jun 11, 2012 at 7:10 PM, Ben Finney wrote:
> Alexander Belopolsky writes:
>
>> On Mon, Jun 11, 2012 at 1:01 PM, Guido van Rossum wrote:
>> > Maybe the problem here is the *input*? It should be a POSIX
>> > timestamp, not a datetime object.
>>
>> No. "Seconds since epoch" or "POSIX" timestamp is a foreign data type
>> to the datetime module.
>
> On this point I must agree with Alexander.
>
> Unambiguous storage of absolute time can be achieved with POSIX
> timestamps, but that is certainly not the only nor best way to do it.
>
> For example, RFC 5322 specifies a standard serialisation for timestamp
> values that is in very wide usage, and those values are valid for
> transforming to a "datetime.datetime" value. POSIX timestamps are
> not a necessary part of the data model.

To the contrary, without the POSIX timestamp model to define the equivalency between the same point in time expressed using different timezones, sane comparisons and arithmetic on timestamps would be impossible.

> Another example is database fields storing timestamp values; they are
> surely a very common input for Python "datetime.datetime" values.

But how does a database represent timestamps *internally*? And does it store the timezone or not? I.e. can it distinguish between two representations of the *same* point in time using different timezones? If so, how would queries work?

> For many use cases a different storage is appropriate, a different input
> is appropriate, and POSIX timestamps are irrelevant for those use cases.

POSIX timestamps are no good for human-entered input or human-readable output. But they are hard to beat for internal storage.

>> > .. Users should be required to understand POSIX timestamps and the
>> > importance of UTC before they try to work with multiple timezones.
>
> Why? If they are using, for example, a PostgreSQL "TIMESTAMP" object to
> store the value, and manipulating it with Python "datetime.datetime",
> why should they have to know anything about POSIX timestamps?

So they'll understand that midnight in New York != midnight in San Francisco, while 4pm in New York == 1pm in San Francisco. And so they understand that, while the difference between New York and San Francisco time is always 3 hours, the difference between San Francisco time and Sydney time can vary by two hours.

> On the contrary, for such use cases (and database timestamp values are
> just one such) I think it's a needless imposition on the programmer to
> force them to learn about POSIX timestamps, a data type irrelevant for
> their purposes.

IMO you ignore the underlying POSIX timestamps at your peril as soon as you are comparing timestamps.

Anyway, we're very far from the original problem statement. If the requirement is to represent timestamps as found in email and be able to reproduce them exactly, you may have to store the original string beside some parsed-out version, since there are subtleties in the string version that are lost in parsing. Hopefully the parsed-out version can be represented as a tz-aware datetime, and hopefully for most purposes that's all you need (if you don't need to be able to verify a digital signature on the text of the headers). The fixed timezones now in the stdlib are probably best for this. The rest would seem to be specific to the email package.

--
--Guido van Rossum (python.org/~guido)

From ben+python at benfinney.id.au  Tue Jun 12 07:14:15 2012
From: ben+python at benfinney.id.au (Ben Finney)
Date: Tue, 12 Jun 2012 15:14:15 +1000
Subject: [Python-Dev] TZ-aware local time
References: <8762axz3dv.fsf@benfinney.id.au>
Message-ID: <871ullyuvs.fsf@benfinney.id.au>

Guido van Rossum writes:

> On Mon, Jun 11, 2012 at 7:10 PM, Ben Finney wrote:
> > Unambiguous storage of absolute time can be achieved with POSIX
> > timestamps, but that is certainly not the only nor best way to do
> > it.
> >
> > For example, RFC 5322 specifies a standard serialisation for
> > timestamp values that is in very wide usage, and those values are
> > valid for transforming to a "datetime.datetime" value. POSIX
> > timestamps are not a necessary part of the data model.
>
> To the contrary, without the POSIX timestamp model to define the
> equivalency between the same point in time expressed using different
> timezones, sane comparisons and arithmetic on timestamps would be
> impossible.

Why is the POSIX timestamp model the only possible model? To the contrary, there are many representations with different tradeoffs but with the common properties you name ("equivalency between the same point in time expressed using different timezones").

I'm objecting to your assertion that the *specific* data format of POSIX timestamps is necessary for this, rather than being a contingent format that is one of many real-world formats used for timestamps.

> > Another example is database fields storing timestamp values; they are
> > surely a very common input for Python "datetime.datetime" values.
>
> But how does a database represent timestamps *internally*?

My point is that the programmer using Python "datetime", and not dealing with POSIX timestamps at any point, should not need to care how "datetime" represents its values internally.

You said "the *input* [...] should be a POSIX timestamp, not a datetime object.", but these use cases (RFC 5322 timestamps, database timestamp field values) have other inputs for timestamp values and there's no need for a POSIX timestamp in the data model.

The programmer for these common use cases is dealing with input that is not POSIX timestamps, their output is not POSIX timestamps, and the data processing doesn't have any need for the concept of "seconds since epoch". So why claim that the POSIX timestamp is necessary for such use cases?

> > For many use cases a different storage is appropriate, a different
> > input is appropriate, and POSIX timestamps are irrelevant for those
> > use cases.
>
> POSIX timestamps are no good for human-entered input or human-readable
> output. But they are hard to beat for internal storage.

Perhaps so, and perhaps not. Either way, that's not an argument for requiring the user of "datetime" to deal with that internal representation. The "datetime" module provides a useful abstraction that allows different serialisation, without tying the programmer to POSIX timestamps.

> >> > .. Users should be required to understand POSIX timestamps and
> >> > the importance of UTC before they try to work with multiple
> >> > timezones.
> >
> > Why? If they are using, for example, a PostgreSQL "TIMESTAMP" object
> > to store the value, and manipulating it with Python
> > "datetime.datetime", why should they have to know anything about
> > POSIX timestamps?
>
> So they'll understand that midnight in New York != midnight in San
> Francisco, while 4pm in New York == 1pm in San Francisco. And so they
> understand that, while the difference between New York and San
> Francisco time is always 3 hours, the difference between San Francisco
> time and Sydney time can vary by two hours.

You made two assertions in "Users should be required to understand POSIX timestamps and the importance of UTC before they try to work with multiple timezones."

I'm not objecting to "Users should be required to understand the importance of UTC before they try to work with multiple timezones".
I am objecting to "Users should be required to understand POSIX timestamps before they try to work with multiple timezones". Your argument about timezones is irrelevant to my objection about requiring users to understand POSIX timestamps.

> IMO you ignore the underlying POSIX timestamps at your peril as soon
> as you are comparing timestamps.

If another representation (e.g. a database field) allows the comparison safely, why require the programmer to understand a data representation irrelevant to their use case?

> Anyway, we're very far from the original problem statement.

That might be part of the misunderstanding, and I apologise if that confused matters.

I don't want a useful abstraction like Python's "datetime" to be intentionally leaky, so I don't think "Users should be required to understand POSIX timestamps before they try to work with multiple timezones" is helpful. That's the whole of my objection, and the only point I'm currently arguing in this wide-ranging thread.

--
 \      "Whatever you do will be insignificant, but it is very |
  `\     important that you do it." --Mohandas K. Gandhi |
_o__) |
Ben Finney

From tjreedy at udel.edu  Tue Jun 12 07:24:03 2012
From: tjreedy at udel.edu (Terry Reedy)
Date: Tue, 12 Jun 2012 01:24:03 -0400
Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython
In-Reply-To:
References:
Message-ID:

On 6/11/2012 11:13 PM, fwierzbicki at gmail.com wrote:
> On Sun, Jun 10, 2012 at 11:58 PM, Nick Coghlan wrote:
>> 2. As 1, but we adopt a subject line convention to make it easier to
>> filter out general python-dev traffic for those that are just
>> interested in cross-vm questions
> (2) and (3) work for me - I try to do (1) but often miss discussions
> until they have gone stale.
>
> I bet (2) would work well enough as long as there are enough
> interested participants to remember to add the conventional string to
> the subject of an ongoing discussion. It would be very easy for me to
> add a filter for such a string.

This is simple to try and see what happens. [X] or [XI] for X(cross) implementation.

--
Terry Jan Reedy

From ncoghlan at gmail.com  Tue Jun 12 07:46:01 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 12 Jun 2012 15:46:01 +1000
Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython
In-Reply-To:
References:
Message-ID:

On Tue, Jun 12, 2012 at 3:24 PM, Terry Reedy wrote:
> This is simple to try and see what happens.
> [X] or [XI] for X(cross) implementation.

To allow easier transition to a separate list (if that seems necessary at a later date), my preferred colour for the bikeshed is [compatibility-sig].

I think a subject line marker is actually a reasonable approach - we can just add the header to the subject line whenever we reach a point in the discussion where we're asking "it would be good to get feedback from PyPy/Jython/IronPython/etc on this". It serves exactly the same purpose as posting the question to a separate list would, without risking splitting the discussion.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From stephen at xemacs.org  Tue Jun 12 10:09:14 2012
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Tue, 12 Jun 2012 17:09:14 +0900
Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython
In-Reply-To:
References: <87vcix6xvr.fsf@uwakimon.sk.tsukuba.ac.jp>
Message-ID: <87obop6jf9.fsf@uwakimon.sk.tsukuba.ac.jp>

Brett Cannon writes:
> Not necessarily. Just like discussions on SIGs can start and end
> there, I see no requirement that discussions on the list end up on
> python-dev.

You've missed my point, which is that for many people working on CPython, python-dev will be the natural place to discuss those issues, and the thread will end up on two different lists.

> Discussions on the list would universally affect all VMs, so there is
> an incentive to pay attention.

You've missed the point again, which is that the incentive only motivates those who care about VMs. It does not motivate those who care about features that affect VMs.

From fijall at gmail.com  Tue Jun 12 10:48:47 2012
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Tue, 12 Jun 2012 10:48:47 +0200
Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython
In-Reply-To:
References:
Message-ID:

On Mon, Jun 11, 2012 at 8:58 AM, Nick Coghlan wrote:
> On Mon, Jun 11, 2012 at 11:29 AM, Guido van Rossum wrote:
> > But what guarantee do you have that (a) the right people sign up for
> > the new list, and (b) topics are correctly brought up there instead of
> > on python-dev? I agree that python-dev is turning into a firehose, but
> > I am reluctant to create backwaters where people might arrive at what
> > they think is a consensus only because the important opinions aren't
> > represented there.
>
> If that's a concern, I'd be happy to limit the use of the new list to
> "Input from other implementations needed on python-dev thread ".
>
> At the moment, it's a PITA to chase other implementations to get
> confirmation that they can cope with a change we're considering, so
> I'd like confirmation that either:

Hm. Maybe we can do something to change the python-dev ethics in order to alleviate the problem? I'm skimming python-dev and read some of the discussion. The problem is when a VM-related question shows up in message 77 of a thread that does not have anything in the topic that would catch my attention. If all the interesting questions that we have to answer are in their own topic, I think it's fine to be subscribed to python-dev, provided I don't have to read all the bikeshedding. But maybe I'm too optimistic and this is not changeable.

Cheers,
fijal

From kristjan at ccpgames.com  Tue Jun 12 13:23:28 2012
From: kristjan at ccpgames.com (Kristján Valur Jónsson)
Date: Tue, 12 Jun 2012 11:23:28 +0000
Subject: [Python-Dev] issue #15038 - Optimize python Locks on Windows
Message-ID:

Hi,

Could I get some feedback on this proposed patch? It would be great to get it in before the beta.

Cheers,
Kristján

From alon at horev.net  Tue Jun 12 15:40:11 2012
From: alon at horev.net (Alon Horev)
Date: Tue, 12 Jun 2012 16:40:11 +0300
Subject: [Python-Dev] segfault - potential double free when using iterparse
Message-ID:

Hi All,

First of all, I'm not opening a bug yet as I'm not certain whether this is a CPython bug or an lxml bug. I'm getting a segfault within Python's GC (garbage collector) module.
Here's the stack trace:

#0 0x00007fc7e9f6b76e in gc_list_remove (op=0x7fc79cef3d98) at Modules/gcmodule.c:211
#1 PyObject_GC_Del (op=0x7fc79cef3d98) at Modules/gcmodule.c:1503
#2 0x00007fc7e9f2ac0f in PyEval_EvalFrameEx (f=, throwflag=) at Python/ceval.c:2894
#3 0x00007fc7e9ea5b79 in gen_send_ex (arg=None, exc=, gen=) at Objects/genobject.c:84
#4 0x00007fc7e9ea6185 in gen_close (self=) at Objects/genobject.c:130
#5 gen_del (self=) at Objects/genobject.c:165
#6 0x00007fc7e9ea5a1b in gen_dealloc (gen=0x7fc7c1ba73c0) at Objects/genobject.c:32

In order to see what object the GC is freeing, I tried casting it to a PyObject (we're freeing an lxml object):

(gdb) p (PyObject*) op
$17 =

Similar bugs (http://osdir.com/ml/python.bugs/2000-12/msg00214.html) blame the extension module for calling dealloc explicitly more than once or doing forbidden things in __del__.

This is how I use lxml:

from lxml.etree import iterparse

def safe_iterparse(*args, **kwargs):
    # Yield each (event, element) pair, clearing the element afterwards
    # so that already-processed subtrees don't accumulate in memory.
    for event, element in iterparse(*args, **kwargs):
        try:
            yield (event, element)
        finally:
            element.clear()

I don't have the data that caused the crash; hopefully I'll get it after the next crash. Anyone familiar with these kinds of bugs in C extensions/CPython/lxml? Could you give pointers to what I should be looking for?

Some version info:
CPython version: 2.7.2 on linux.
lxml: 2.3.3

thanks,
Alon Horev

From fwierzbicki at gmail.com  Tue Jun 12 15:49:51 2012
From: fwierzbicki at gmail.com (fwierzbicki at gmail.com)
Date: Tue, 12 Jun 2012 06:49:51 -0700
Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython
In-Reply-To:
References:
Message-ID:

On Mon, Jun 11, 2012 at 10:46 PM, Nick Coghlan wrote:
> To allow easier transition to a separate list (if that seems necessary
> at a later date), my preferred colour for the bikeshed is
> [compatibility-sig].

I for one approve of this bikeshed colour :)

-Frank

From brett at yvrsfo.ca  Tue Jun 12 16:40:53 2012
From: brett at yvrsfo.ca (Brett Cannon)
Date: Tue, 12 Jun 2012 10:40:53 -0400
Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython
In-Reply-To: <87obop6jf9.fsf@uwakimon.sk.tsukuba.ac.jp>
References: <87vcix6xvr.fsf@uwakimon.sk.tsukuba.ac.jp> <87obop6jf9.fsf@uwakimon.sk.tsukuba.ac.jp>
Message-ID:

On Tue, Jun 12, 2012 at 4:09 AM, Stephen J. Turnbull wrote:
> Brett Cannon writes:
>
> > Not necessarily. Just like discussions on SIGs can start and end
> > there, I see no requirement that discussions on the list end up on
> > python-dev.
>
> You've missed my point, which is that for many people working on
> CPython, python-dev will be the natural place to discuss those issues,
> and the thread will end up on two different lists.

Ah, but you helped make my point! For the people working on CPython, python-dev is a natural place. But what about PyPy, IronPython, or Jython (toss in Cython or any future VMs and it just became an even larger spread of teams)? Do they naturally think to discuss things on python-dev or their own list? If anything this new list would act as a showing of good will that python-dev does not view CPython as the centre of the world when it comes to VMs (but obviously does for the language) and that the other VM authors' time is just as important, by not forcing them to wade through python-dev (or have to set up a special filter just for python-dev to get the occasional email thread).
> > Discussions on the list would universally affect all VMs, so there is
> > an incentive to pay attention.
>
> You've missed the point again, which is that the incentive only
> motivates those who care about VMs. It does not motivate those who
> care about features that affect VMs.

But if you are doing something which will affect the VMs I hope you do care about VMs, period. We can also easily tell someone that they need to discuss something on the other list if needed (we already do this when something should be discussed in a SIG first as well).

Anyway, it looks like everyone else chiming in is capitulating to keeping it on python-dev with a proper subject line, so we will start there and if it proves ineffective I will create the list.

From brett at python.org  Tue Jun 12 16:41:21 2012
From: brett at python.org (Brett Cannon)
Date: Tue, 12 Jun 2012 10:41:21 -0400
Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython
In-Reply-To:
References:
Message-ID:

On Tue, Jun 12, 2012 at 9:49 AM, fwierzbicki at gmail.com <fwierzbicki at gmail.com> wrote:
> On Mon, Jun 11, 2012 at 10:46 PM, Nick Coghlan wrote:
> > To allow easier transition to a separate list (if that seems necessary
> > at a later date), my preferred colour for the bikeshed is
> > [compatibility-sig].
> I for one approve of this bikeshed colour :)

Fine by me. We will start with that then.

From brett at python.org  Tue Jun 12 16:52:18 2012
From: brett at python.org (Brett Cannon)
Date: Tue, 12 Jun 2012 10:52:18 -0400
Subject: [Python-Dev] [compatibility-sig] making sure importlib.machinery.SourceLoader doesn't throw an exception if bytecode is not supported by a VM
Message-ID:

I would like to have importlib just work out of the box for all VMs in 3.3 instead of requiring a minor patch in order to not throw an exception when loading from source and there is no bytecode. The relevant code for this discussion can be seen at http://hg.python.org/cpython/file/c2910971eb86/Lib/importlib/_bootstrap.py#l691 .

First question is what are all the VMs doing for imp.cache_from_source()? Are you implementing it like CPython, or are you returning None? And if you implemented it, what does marshal.loads() do? Right now cache_from_source() is implemented in importlib itself, but we can either provide a flag to control what it does or in your setup code for import you can override the function with ``lambda _, __=None: None``.

Second question, what do you set sys.dont_write_bytecode to?

The answers to those questions will dictate if there is anything to actually discuss. =)

From brett at python.org  Tue Jun 12 16:54:02 2012
From: brett at python.org (Brett Cannon)
Date: Tue, 12 Jun 2012 10:54:02 -0400
Subject: [Python-Dev] Built-in sub modules
In-Reply-To:
References:
Message-ID:

On Mon, Jun 11, 2012 at 11:37 PM, Nick Coghlan wrote:
> On Tue, Jun 12, 2012 at 1:17 PM, Albert Zeyer wrote:
> > I also searched a bit around and didn't directly find any easier
> > way to do this. Only a post from 2009
> > (http://mail.python.org/pipermail/cplusplus-sig/2009-January/014178.html)
> > which seems like a much more ugly hack.
>
> Right, it isn't currently supported.
> http://bugs.python.org/issue1644818 is a long-standing feature request
> to add this functionality.
>
> For Python 3.3, the old import system (written in C) has been replaced
> with importlib (written in Python). This should make it easier to
> extend and experiment with builtin submodule support. An importlib
> based solution should also work in Python 3.2.
>
> Further discussions would be best directed to import-sig, until an
> importlib based solution is available for consideration.

I actually had code to make extension modules work in packages as well, but removed it when it broke "compatibility".

From breamoreboy at yahoo.co.uk  Tue Jun 12 16:59:15 2012
From: breamoreboy at yahoo.co.uk (Mark Lawrence)
Date: Tue, 12 Jun 2012 15:59:15 +0100
Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython
In-Reply-To:
References: <87vcix6xvr.fsf@uwakimon.sk.tsukuba.ac.jp> <87obop6jf9.fsf@uwakimon.sk.tsukuba.ac.jp>
Message-ID:

On 12/06/2012 15:40, Brett Cannon wrote:
>
> Ah, but you helped make my point! For the people working on CPython,
> python-dev is a natural place. But what about PyPy, IronPython, or Jython
> (toss in Cython or any future VMs and it just became an even larger spread
> of teams)? Do they naturally think to discuss things on python-dev or their
> own list? If anything this new list would act as a showing of good will
> that python-dev does not view CPython as the centre of the world when it
> comes to VMs (but obviously does for the language) and that the other VM
> authors' time is just as important, by not forcing them to wade through
> python-dev (or have to set up a special filter just for python-dev to get
> the occasional email thread).

A bit late in the day and possibly the most stupid suggestion ever, but why not name python-dev cpython-dev? At least everybody would know where they stand.

--
Cheers.

Mark Lawrence.

From alex.gaynor at gmail.com  Tue Jun 12 18:38:43 2012
From: alex.gaynor at gmail.com (Alex Gaynor)
Date: Tue, 12 Jun 2012 16:38:43 +0000 (UTC)
Subject: [Python-Dev] [compatibility-sig] making sure importlib.machinery.SourceLoader doesn't throw an exception if bytecode is not supported by a VM
References:
Message-ID:

For PyPy: I'm not an expert in our import, but from looking at the source

1) imp.cache_from_source is unimplemented, it's an AttributeError.

2) sys.dont_write_bytecode is always false, we don't respect that flag (we really should IMO, but it's not a high priority for me, or anyone else apparently)

Alex

From brett at python.org  Tue Jun 12 18:44:14 2012
From: brett at python.org (Brett Cannon)
Date: Tue, 12 Jun 2012 12:44:14 -0400
Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython
In-Reply-To:
References: <87vcix6xvr.fsf@uwakimon.sk.tsukuba.ac.jp> <87obop6jf9.fsf@uwakimon.sk.tsukuba.ac.jp>
Message-ID:

On Tue, Jun 12, 2012 at 10:59 AM, Mark Lawrence wrote:
> On 12/06/2012 15:40, Brett Cannon wrote:
>
>> Ah, but you helped make my point! For the people working on CPython,
>> python-dev is a natural place. But what about PyPy, IronPython, or Jython
>> (toss in Cython or any future VMs and it just became an even larger spread
>> of teams)? Do they naturally think to discuss things on python-dev or
>> their own list?
>> If anything this new list would act as a showing of good will
>> that python-dev does not view CPython as the centre of the world when it
>> comes to VMs (but obviously does for the language) and that the other VM
>> authors' time is just as important, by not forcing them to wade through
>> python-dev (or have to set up a special filter just for python-dev to get
>> the occasional email thread).
>
> A bit late in the day and possibly the most stupid suggestion ever, but
> why not name python-dev cpython-dev? At least everybody would know where
> they stand.

Way too much stuff out there says python-dev over cpython-dev. Plus python-dev covers the language of Python as well, which is not specific to CPython.

> --
> Cheers.
>
> Mark Lawrence.

From brett at python.org  Tue Jun 12 18:47:39 2012
From: brett at python.org (Brett Cannon)
Date: Tue, 12 Jun 2012 12:47:39 -0400
Subject: [Python-Dev] [compatibility-sig] making sure importlib.machinery.SourceLoader doesn't throw an exception if bytecode is not supported by a VM
In-Reply-To:
References:
Message-ID:

On Tue, Jun 12, 2012 at 12:38 PM, Alex Gaynor wrote:
> For PyPy: I'm not an expert in our import, but from looking at the source
>
> 1) imp.cache_from_source is unimplemented, it's an AttributeError.

Well, you will have it come Python 3.3 one way or another. =)

> 2) sys.dont_write_bytecode is always false, we don't respect that flag
> (we really should IMO, but it's not a high priority for me, or anyone
> else apparently)

But doesn't PyPy read and write .pyc files (http://doc.pypy.org/en/latest/config/objspace.usepycfiles.html suggests you do)? So I would assume you are not affected by this. Jython and IronPython, though, would be (I think).

From brett at python.org  Tue Jun 12 18:48:48 2012
From: brett at python.org (Brett Cannon)
Date: Tue, 12 Jun 2012 12:48:48 -0400
Subject: [Python-Dev] [compatibility-sig] making sure importlib.machinery.SourceLoader doesn't throw an exception if bytecode is not supported by a VM
In-Reply-To:
References:
Message-ID:

On Tue, Jun 12, 2012 at 10:52 AM, Brett Cannon wrote:
> I would like to have importlib just work out of the box for all VMs in 3.3
> instead of requiring a minor patch in order to not throw an exception when
> loading from source and there is no bytecode. The relevant code for this
> discussion can be seen at
> http://hg.python.org/cpython/file/c2910971eb86/Lib/importlib/_bootstrap.py#l691
> .
>
> First question is what are all the VMs doing for imp.cache_from_source()?
> Are you implementing it like CPython, or are you returning None? And if you
> implemented it, what does marshal.loads() do? Right now cache_from_source()
> is implemented in importlib itself, but we can either provide a flag to
> control what it does or in your setup code for import you can override the
> function with ``lambda _, __=None: None``.

I should mention another option is to add sys.dont_read_bytecode (I think I have discussed this with Frank at some point).

-Brett

> Second question, what do you set sys.dont_write_bytecode to?
>
> The answers to those questions will dictate if there is anything to
> actually discuss. =)

From alex.gaynor at gmail.com  Tue Jun 12 18:50:47 2012
From: alex.gaynor at gmail.com (Alex Gaynor)
Date: Tue, 12 Jun 2012 11:50:47 -0500
Subject: [Python-Dev] [compatibility-sig] making sure importlib.machinery.SourceLoader doesn't throw an exception if bytecode is not supported by a VM
In-Reply-To:
References:
Message-ID:

On Tue, Jun 12, 2012 at 11:47 AM, Brett Cannon wrote:
> On Tue, Jun 12, 2012 at 12:38 PM, Alex Gaynor wrote:
>> For PyPy: I'm not an expert in our import, but from looking at the source
>>
>> 1) imp.cache_from_source is unimplemented, it's an AttributeError.
>
> Well, you will have it come Python 3.3 one way or another. =)

Sure, I'm not totally up to speed on the py3k effort.

>> 2) sys.dont_write_bytecode is always false, we don't respect that flag
>> (we really should IMO, but it's not a high priority for me, or anyone
>> else apparently)
>
> But doesn't PyPy read and write .pyc files (
> http://doc.pypy.org/en/latest/config/objspace.usepycfiles.html suggests
> you do)? So I would assume you are not affected by this. Jython and
> IronPython, though, would be (I think).

This is a compile time option, not a runtime option. However, it looks like I lied, someone did implement it correctly, so we have the same behavior as CPython.

Alex

--
"I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire)
"The people's good is the highest law." -- Cicero

From amauryfa at gmail.com  Tue Jun 12 19:02:02 2012
From: amauryfa at gmail.com (Amaury Forgeot d'Arc)
Date: Tue, 12 Jun 2012 19:02:02 +0200
Subject: [Python-Dev] [compatibility-sig] making sure importlib.machinery.SourceLoader doesn't throw an exception if bytecode is not supported by a VM
In-Reply-To:
References:
Message-ID:

2012/6/12 Alex Gaynor
> On Tue, Jun 12, 2012 at 11:47 AM, Brett Cannon wrote:
>> On Tue, Jun 12, 2012 at 12:38 PM, Alex Gaynor wrote:
>>> For PyPy: I'm not an expert in our import, but from looking at the source
>>>
>>> 1) imp.cache_from_source is unimplemented, it's an AttributeError.
>>
>> Well, you will have it come Python 3.3 one way or another. =)
>
> Sure, I'm not totally up to speed on the py3k effort.

It's indeed implemented in pypy's py3k branch.

>>> 2) sys.dont_write_bytecode is always false, we don't respect that flag
>>> (we really should IMO, but it's not a high priority for me, or anyone
>>> else apparently)
>>
>> But doesn't PyPy read and write .pyc files (
>> http://doc.pypy.org/en/latest/config/objspace.usepycfiles.html suggests
>> you do)? So I would assume you are not affected by this. Jython and
>> IronPython, though, would be (I think).
>
> This is a compile time option, not a runtime option. However, it looks
> like I lied, someone did implement it correctly, so we have the same
> behavior as CPython.

Yes, PyPy seems to respect sys.dont_write_bytecode.

--
Amaury Forgeot d'Arc
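(To make the override discussed in this thread concrete: a minimal sketch, assuming a hypothetical VM that can neither read nor write bytecode. The function name and the patch point are illustrative only -- the thread leaves open whether such an override would live in the VM's startup code or behind a flag in importlib itself:)

import imp
import sys

def _setup_no_bytecode():
    # Report "no cached bytecode path" for every source file, so a
    # source loader always falls back to compiling from source.
    imp.cache_from_source = lambda path, debug_override=None: None
    # And make sure nothing tries to *write* bytecode either.
    sys.dont_write_bytecode = True

_setup_no_bytecode()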
From fwierzbicki at gmail.com  Tue Jun 12 19:04:13 2012
From: fwierzbicki at gmail.com (fwierzbicki at gmail.com)
Date: Tue, 12 Jun 2012 10:04:13 -0700
Subject: [Python-Dev] [compatibility-sig] making sure importlib.machinery.SourceLoader doesn't throw an exception if bytecode is not supported by a VM
In-Reply-To:
References:
Message-ID:

On Tue, Jun 12, 2012 at 9:38 AM, Alex Gaynor wrote:
> For PyPy: I'm not an expert in our import, but from looking at the source
>
> 1) imp.cache_from_source is unimplemented, it's an AttributeError.

Jython does not (yet) have a cache_from_source.

> 2) sys.dont_write_bytecode is always false, we don't respect that flag
> (we really should IMO, but it's not a high priority for me, or anyone
> else apparently)

Jython does support sys.dont_write_bytecode, but doesn't support sys.dont_read_bytecode yet - do you happen to know when dont_read_bytecode was added? It should be pretty straightforward, and so I'll probably add it to our 2.7.

-Frank

From fwierzbicki at gmail.com  Tue Jun 12 19:06:43 2012
From: fwierzbicki at gmail.com (fwierzbicki at gmail.com)
Date: Tue, 12 Jun 2012 10:06:43 -0700
Subject: [Python-Dev] [compatibility-sig] making sure importlib.machinery.SourceLoader doesn't throw an exception if bytecode is not supported by a VM
In-Reply-To:
References:
Message-ID:

On Tue, Jun 12, 2012 at 10:04 AM, fwierzbicki at gmail.com wrote:
> Jython does support sys.dont_write_bytecode, but doesn't support
> sys.dont_read_bytecode yet - do you happen to know when
> dont_read_bytecode was added? It should be pretty straightforward, and
> so I'll probably add it to our 2.7.

Looking around it seems dont_read_bytecode came in sometime in 3.x so I'll wait for 3 (and so I'll probably just use importlib....? :)

-Frank

From brett at python.org  Tue Jun 12 19:13:50 2012
From: brett at python.org (Brett Cannon)
Date: Tue, 12 Jun 2012 13:13:50 -0400
Subject: [Python-Dev] [compatibility-sig] making sure importlib.machinery.SourceLoader doesn't throw an exception if bytecode is not supported by a VM
In-Reply-To:
References:
Message-ID:

On Tue, Jun 12, 2012 at 1:04 PM, fwierzbicki at gmail.com <fwierzbicki at gmail.com> wrote:
> On Tue, Jun 12, 2012 at 9:38 AM, Alex Gaynor wrote:
> > For PyPy: I'm not an expert in our import, but from looking at the source
> >
> > 1) imp.cache_from_source is unimplemented, it's an AttributeError.
> Jython does not (yet) have a cache_from_source.
>
> > 2) sys.dont_write_bytecode is always false, we don't respect that flag
> > (we really should IMO, but it's not a high priority for me, or anyone
> > else apparently)
> Jython does support sys.dont_write_bytecode, but doesn't support
> sys.dont_read_bytecode yet - do you happen to know when
> dont_read_bytecode was added?

It was never added since it doesn't currently exist; I said *add* sys.dont_read_bytecode, not *use*. =) Would the flag actually be beneficial? Have you had issues where people assumed that Jython should be able to read bytecode since they just didn't worry about a VM that can't read bytecode?

I mean I'm open to suggestions as ways to control bytecode reading to fail gracefully in case someone runs Jython (or IronPython) in a directory where PyPy or CPython was run previously and thus the bytecode exists from them. That's why I asked what marshal.loads() does; if it returns None or raises some exception that can be distinguished from a failure of badly formatted marshal data then I could rely on that as another option.
> It should be pretty straightforward, and
> so I'll probably add it to our 2.7.

I would assume it would just be a flag for you. Importlib and other stdlib code would be where all the work would be to start obeying the flag (if it is added).

From tjreedy at udel.edu  Tue Jun 12 19:34:39 2012
From: tjreedy at udel.edu (Terry Reedy)
Date: Tue, 12 Jun 2012 13:34:39 -0400
Subject: [Python-Dev] segfault - potential double free when using iterparse
In-Reply-To:
References:
Message-ID:

On 6/12/2012 9:40 AM, Alon Horev wrote:
> Hi All,
>
> First of all, I'm not opening a bug yet as I'm not certain whether this
> is a CPython bug or an lxml bug.

lxml is more likely, making this a topic for python-list and whatever lxml list.
...
> from lxml.etree import iterparse
...
> Anyone familiar with these kinds of bugs in C extensions/CPython/lxml?
> Could you give pointers to what I should be looking for?

I would ask the generic question on python-list and report the problem to lxml folks also.

> Some version info:
> CPython version: 2.7.2 on linux.

2.7.3 has 6 months more of bugfixes, and if there is a problem with python, you would have to demonstrate a problem with that or a more recent build.

--
Terry Jan Reedy

From tjreedy at udel.edu  Tue Jun 12 20:16:04 2012
From: tjreedy at udel.edu (Terry Reedy)
Date: Tue, 12 Jun 2012 14:16:04 -0400
Subject: [Python-Dev] #12982: Should -O be required to *read* .pyo files?
Message-ID:

http://bugs.python.org/issue12982

Currently, cpython requires the -O flag to *read* .pyo files as well as to write them. This is a nuisance to people who receive them from others, without the source. The originator of the issue quotes the following from the doc (without giving the location).

"It is possible to have a file called spam.pyc (or spam.pyo when -O is used) without a file spam.py for the same module. This can be used to distribute a library of Python code in a form that is moderately hard to reverse engineer."

There is no warning that .pyo files are viral, in a sense. The user has to use -O, which is a) a nuisance to remember if he has multiple scripts and some need it and some not, and b) makes his own .py files used with .pyo imports cached as .pyo, without docstrings, like it or not.

Currently, the easiest workaround is to rename .pyo to .pyc and all seems to work fine, even with a mixture of true .pyc and renamed .pyo files. (The same is true with the -O flag and no renaming.) This suggests that there is no current reason for the restriction in that the *execution* of bytecode is not affected by the -O flag. (Another workaround might be a custom importer -- but this is not trivial, apparently.)

So is the import restriction either an accident or obsolete holdover? If so, can removing it be treated as a bugfix and put into current releases, or should it be treated as an enhancement only for a future release?

Or is the restriction an intentional reservation of the possibility of making *execution* depend on the flag? Which would mean that the restriction should be kept and only the doc changed?
--
Terry Jan Reedy

From ericsnowcurrently at gmail.com  Tue Jun 12 20:28:09 2012
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Tue, 12 Jun 2012 12:28:09 -0600
Subject: [Python-Dev] [compatibility-sig] making sure importlib.machinery.SourceLoader doesn't throw an exception if bytecode is not supported by a VM
In-Reply-To:
References:
Message-ID:

On Tue, Jun 12, 2012 at 10:48 AM, Brett Cannon wrote:
> I should mention another option is to add sys.dont_read_bytecode (I think I
> have discussed this with Frank at some point).

Or check for "sys.implementation.cache_tag is None"...

-eric

From ethan at stoneleaf.us  Tue Jun 12 20:41:52 2012
From: ethan at stoneleaf.us (Ethan Furman)
Date: Tue, 12 Jun 2012 11:41:52 -0700
Subject: [Python-Dev] #12982: Should -O be required to *read* .pyo files?
In-Reply-To:
References:
Message-ID: <4FD78D70.4030107@stoneleaf.us>

Terry Reedy wrote:
> http://bugs.python.org/issue12982
>
> Currently, cpython requires the -O flag to *read* .pyo files as well as
> to write them. This is a nuisance to people who receive them from
> others, without the source. The originator of the issue quotes the
> following from the doc (without giving the location).
>
> "It is possible to have a file called spam.pyc (or spam.pyo when -O is
> used) without a file spam.py for the same module. This can be used to
> distribute a library of Python code in a form that is moderately hard to
> reverse engineer."
>
> There is no warning that .pyo files are viral, in a sense. The user has
> to use -O, which is a) a nuisance to remember if he has multiple scripts
> and some need it and some not, and b) makes his own .py files used with
> .pyo imports cached as .pyo, without docstrings, like it or not.
>
> Currently, the easiest workaround is to rename .pyo to .pyc and all
> seems to work fine, even with a mixture of true .pyc and renamed .pyo
> files. (The same is true with the -O flag and no renaming.) This
> suggests that there is no current reason for the restriction in that the
> *execution* of bytecode is not affected by the -O flag. (Another
> workaround might be a custom importer -- but this is not trivial,
> apparently.)
>
> So is the import restriction either an accident or obsolete holdover? If
> so, can removing it be treated as a bugfix and put into current
> releases, or should it be treated as an enhancement only for a future
> release?
>
> Or is the restriction an intentional reservation of the possibility of
> making *execution* depend on the flag? Which would mean that the
> restriction should be kept and only the doc changed?

I have no history so cannot say what was supposed to happen, but my $0.02 would be that if -O is *not* specified then we should try to read .pyc, then .pyo, and finally .py. In other words, I vote for -O being a write flag, not a read flag.

~Ethan~

From alexandre.zani at gmail.com  Tue Jun 12 20:49:04 2012
From: alexandre.zani at gmail.com (Alexandre Zani)
Date: Tue, 12 Jun 2012 11:49:04 -0700
Subject: [Python-Dev] #12982: Should -O be required to *read* .pyo files?
In-Reply-To: <4FD78D70.4030107@stoneleaf.us>
References: <4FD78D70.4030107@stoneleaf.us>
Message-ID:

On Tue, Jun 12, 2012 at 11:41 AM, Ethan Furman wrote:
> Terry Reedy wrote:
>> http://bugs.python.org/issue12982
>>
>> Currently, cpython requires the -O flag to *read* .pyo files as well as
>> to write them. This is a nuisance to people who receive them from others,
>> without the source. The originator of the issue quotes the following from
>> the doc (without giving the location).
>> "It is possible to have a file called spam.pyc (or spam.pyo when -O is
>> used) without a file spam.py for the same module. This can be used to
>> distribute a library of Python code in a form that is moderately hard to
>> reverse engineer."
>>
>> There is no warning that .pyo files are viral, in a sense. The user has to
>> use -O, which is a) a nuisance to remember if he has multiple scripts and
>> some need it and some not, and b) makes his own .py files used with .pyo
>> imports cached as .pyo, without docstrings, like it or not.
>>
>> Currently, the easiest workaround is to rename .pyo to .pyc and all seems
>> to work fine, even with a mixture of true .pyc and renamed .pyo files. (The
>> same is true with the -O flag and no renaming.) This suggests that there is
>> no current reason for the restriction in that the *execution* of bytecode is
>> not affected by the -O flag. (Another workaround might be a custom importer
>> -- but this is not trivial, apparently.)
>>
>> So is the import restriction either an accident or obsolete holdover? If
>> so, can removing it be treated as a bugfix and put into current releases, or
>> should it be treated as an enhancement only for a future release?
>>
>> Or is the restriction an intentional reservation of the possibility of
>> making *execution* depend on the flag? Which would mean that the restriction
>> should be kept and only the doc changed?
>
> I have no history so cannot say what was supposed to happen, but my $0.02
> would be that if -O is *not* specified then we should try to read .pyc, then
> .pyo, and finally .py. In other words, I vote for -O being a write flag,
> not a read flag.

What if I change .py?

>
> ~Ethan~
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/alexandre.zani%40gmail.com

From brett at python.org  Tue Jun 12 21:01:05 2012
From: brett at python.org (Brett Cannon)
Date: Tue, 12 Jun 2012 15:01:05 -0400
Subject: [Python-Dev] [compatibility-sig] making sure
	importlib.machinery.SourceLoader doesn't throw an exception if
	bytecode is not supported by a VM
In-Reply-To: 
References: 
Message-ID: 

On Tue, Jun 12, 2012 at 2:28 PM, Eric Snow wrote:

> On Tue, Jun 12, 2012 at 10:48 AM, Brett Cannon wrote:
> > I should mention another option is to add sys.dont_read_bytecode (I
> > think I have discussed this with Frank at some point).
>
> Or check for "sys.implementation.cache_tag is None"...
>

Perfect! Will that work for Jython (Frank) and IronPython (Jeff)?

This does mean, though, that imp.cache_from_source() and
imp.source_from_cache() might need to be updated to raise a reasonable
exception when sys.implementation.cache_tag is set to None as I believe
right now it will raise a TypeError because None isn't a str. But what to
raise instead? TypeError? EnvironmentError?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From salmanmk at live.com  Tue Jun 12 21:31:43 2012
From: salmanmk at live.com (Salman Malik)
Date: Tue, 12 Jun 2012 14:31:43 -0500
Subject: [Python-Dev] Using pdb with greenlet
Message-ID: 

Hi All,

I am sort of a newbie to Python (I have just started to use pdb).
My problem is that I am debugging an application that uses greenlets, and
when I encounter something in the code that spawns the coroutines or waits
for an event, I lose control over the application (I mean that after that
point I can no longer do 'n' or 's' on the code). Can anyone tell me how
to tame greenlet with pdb, so that I can see step by step what events a
coroutine sees and how it responds to them?

Any help would be highly appreciated.

Thanks,
Salman
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From brian at python.org  Tue Jun 12 21:34:04 2012
From: brian at python.org (Brian Curtin)
Date: Tue, 12 Jun 2012 14:34:04 -0500
Subject: [Python-Dev] Using pdb with greenlet
In-Reply-To: 
References: 
Message-ID: 

On Tue, Jun 12, 2012 at 2:31 PM, Salman Malik wrote:
> Hi All,
>
> I am sort of a newbie to Python (I have just started to use pdb).
> My problem is that I am debugging an application that uses greenlets, and
> when I encounter something in the code that spawns the coroutines or waits
> for an event, I lose control over the application (I mean that after that
> point I can no longer do 'n' or 's' on the code). Can anyone tell me how
> to tame greenlet with pdb, so that I can see step by step what events a
> coroutine sees and how it responds to them?
>
> Any help would be highly appreciated.

Your question is better suited for python-list rather than python-dev.
This list is for the development *of* Python, not *with* Python.

From ethan at stoneleaf.us  Tue Jun 12 21:14:14 2012
From: ethan at stoneleaf.us (Ethan Furman)
Date: Tue, 12 Jun 2012 12:14:14 -0700
Subject: [Python-Dev] #12982: Should -O be required to *read* .pyo files?
In-Reply-To: 
References: <4FD78D70.4030107@stoneleaf.us>
Message-ID: <4FD79506.4070102@stoneleaf.us>

Alexandre Zani wrote:
> On Tue, Jun 12, 2012 at 11:41 AM, Ethan Furman wrote:
>> Terry Reedy wrote:
>>> http://bugs.python.org/issue12982
>>>
>>> Currently, cpython requires the -O flag to *read* .pyo files as well as
>>> to write them. This is a nuisance to people who receive them from others,
>>> without the source. The originator of the issue quotes the following from
>>> the doc (without giving the location).
>>>
>>> "It is possible to have a file called spam.pyc (or spam.pyo when -O is
>>> used) without a file spam.py for the same module. This can be used to
>>> distribute a library of Python code in a form that is moderately hard to
>>> reverse engineer."
>>>
>>> There is no warning that .pyo files are viral, in a sense. The user has to
>>> use -O, which is a) a nuisance to remember if he has multiple scripts and
>>> some need it and some not, and b) makes his own .py files used with .pyo
>>> imports cached as .pyo, without docstrings, like it or not.
>>>
>>> Currently, the easiest workaround is to rename .pyo to .pyc and all seems
>>> to work fine, even with a mixture of true .pyc and renamed .pyo files. (The
>>> same is true with the -O flag and no renaming.) This suggests that there is
>>> no current reason for the restriction in that the *execution* of bytecode is
>>> not affected by the -O flag. (Another workaround might be a custom importer
>>> -- but this is not trivial, apparently.)
>>>
>>> So is the import restriction either an accident or obsolete holdover? If
>>> so, can removing it be treated as a bugfix and put into current releases, or
>>> should it be treated as an enhancement only for a future release?
>>> >>> Or is the restriction an intentional reservation of the possibility of >>> making *execution* depend on the flag? Which would mean that the restriction >>> should be kept and only the doc changed? >> >> I have no history so cannot say what was supposed to happen, but my $0.02 >> would be that if -O is *not* specified then we should try to read .pyc, then >> .pyo, and finally .py. In other words, I vote for -O being a write flag, >> not a read flag. > > What if I change .py? Well, the case in question is that there is no .py available. But if it were available, and you changed it, then it would and should work just like it does now -- if .py is newer, compile it; if -O was specified, compile it optimized; now run the compiled code. ~Ethan~ From salmanmk at live.com Tue Jun 12 21:35:58 2012 From: salmanmk at live.com (Salman Malik) Date: Tue, 12 Jun 2012 14:35:58 -0500 Subject: [Python-Dev] Using pdb with greenlet In-Reply-To: References: , Message-ID: I am sorry for mailing it to this list. Thanks for the correction. Salman > Date: Tue, 12 Jun 2012 14:34:04 -0500 > Subject: Re: [Python-Dev] Using pdb with greenlet > From: brian at python.org > To: salmanmk at live.com > CC: python-dev at python.org > > On Tue, Jun 12, 2012 at 2:31 PM, Salman Malik wrote: > > Hi All, > > > > I am sort of a newbie to Python ( have just started to use pdb). > > My problem is that I am debugging an application that uses greenlets and > > when I encounter something in code that spawns the coroutines or wait for an > > event, I lose control over the application (I mean that after that point I > > can no longer do 'n' or 's' on the code). Can anyone of you tell me how to > > tame greenlet with pdb, so that I can see step-by-step as to what event does > > a coroutine sees and how does it respond to it. > > > > Any help would be highly appreciated. > > Your question is better suited for python-list rather than python-dev. > This list is for the development *of* Python, not *with* Python. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jdhardy at gmail.com Tue Jun 12 21:39:48 2012 From: jdhardy at gmail.com (Jeff Hardy) Date: Tue, 12 Jun 2012 12:39:48 -0700 Subject: [Python-Dev] [compatibility-sig] making sure importlib.machinery.SourceLoader doesn't throw an exception if bytecode is not supported by a VM In-Reply-To: References: Message-ID: > 1) imp.cache_from_source is unimplemented, it's an AttributeError. Same for IronPython. > > 2) sys.dont_write_bytecode is always false, we don't respect that flag (we really > ? should IMO, but it's not a high priority for me, or anyone else apparently) Always True for IronPython. You can change it, but it doesn't affect anything. - Jeff From jdhardy at gmail.com Tue Jun 12 21:53:21 2012 From: jdhardy at gmail.com (Jeff Hardy) Date: Tue, 12 Jun 2012 12:53:21 -0700 Subject: [Python-Dev] [compatibility-sig] making sure importlib.machinery.SourceLoader doesn't throw an exception if bytecode is not supported by a VM In-Reply-To: References: Message-ID: On Tue, Jun 12, 2012 at 12:01 PM, Brett Cannon wrote: > On Tue, Jun 12, 2012 at 2:28 PM, Eric Snow > wrote: >> >> On Tue, Jun 12, 2012 at 10:48 AM, Brett Cannon wrote: >> > I should mention another option is to add sys.dont_read_bytecode (I >> > think I >> > have discussed this with Frank at some point). >> >> Or check for "sys.implementation.cache_tag is None"... > > > Perfect! Will that work for Jython (Franke) and IronPython (Jeff)? 
IronPython will probably never *write* pyc files, but it might *read* them at some point -- as I understand cache_tag, we'd set it to whatever version of CPython's pyc files we could read (that seems to violate the spirit of sys.implementation). The combination of that and sys.dont_write_bytecode should cover all of the states; I'll just lock down sys.dont_write_bytecode so that changes are completely ignored. > > This does mean, though, that imp.cache_from_source() and > imp.source_from_cache() might need to be updated to raise a reasonable > exception when sys.implementation.cache_tag is set to None as I believe > right now it will raise a TypeError because None isn't a str. But what to > raise instead? TypeError? EnvironmentError? NotImplementedError? - Jeff From ronan.lamy at gmail.com Tue Jun 12 21:57:37 2012 From: ronan.lamy at gmail.com (Ronan Lamy) Date: Tue, 12 Jun 2012 20:57:37 +0100 Subject: [Python-Dev] #12982: Should -O be required to *read* .pyo files? In-Reply-To: <4FD78D70.4030107@stoneleaf.us> References: <4FD78D70.4030107@stoneleaf.us> Message-ID: <1339531057.12486.20.camel@ronan-desktop> Le mardi 12 juin 2012 ? 11:41 -0700, Ethan Furman a ?crit : > Terry Reedy wrote: > > http://bugs.python.org/issue12982 > > > > Currently, cpython requires the -O flag to *read* .pyo files as well as > > the write them. This is a nuisance to people who receive them from > > others, without the source. The originator of the issue quotes the > > following from the doc (without giving the location). > > > > "It is possible to have a file called spam.pyc (or spam.pyo when -O is > > used) without a file spam.py for the same module. This can be used to > > distribute a library of Python code in a form that is moderately hard to > > reverse engineer." > > > > There is no warning that .pyo files are viral, in a sense. The user has > > to use -O, which is a) a nuisance to remember if he has multiple scripts > > and some need it and some not, and b) makes his own .py files used with > > .pyo imports cached as .pyo, without docstrings, like it or not. > > > > Currently, the easiest workaround is to rename .pyo to .pyc and all > > seems to work fine, even with a mixture of true .pyc and renamed .pyo > > files. (The same is true with the -O flag and no renaming.) This > > suggests that there is no current reason for the restriction in that the > > *execution* of bytecode is not affected by the -O flag. (Another > > workaround might be a custom importer -- but this is not trivial, > > apparently.) > > > > So is the import restriction either an accident or obsolete holdover? If > > so, can removing it be treated as a bugfix and put into current > > releases, or should it be treated as an enhancement only for a future > > release? > > > > Or is the restriction an intentional reservation of the possibility of > > making *execution* depend on the flag? Which would mean that the > > restriction should be kept and only the doc changed? > > I have no history so cannot say what was supposed to happen, but my > $0.02 would be that if -O is *not* specified then we should try to read > .pyc, then .pyo, and finally .py. In other words, I vote for -O being a > write flag, not a read flag. I don't know much about the history either, but under PEP 3147, there are really two cases: * .pyc and .pyo as compilation caches. These live in __pycache__/ and have a cache_tag, their filename looks like pkg/__pycache__/module.cpython-33.pyc and their only role is to speed up imports. 
* .pyc and .pyo as standalone, precompiled sources for modules. These are found in the same place as .py files (e.g. pkg/module.pyc). In the first case, I think that -O should dictate which of .pyc and .pyo is used, while the other is completely ignored. In the second case, both .pyc and .pyo should always be considered as valid module sources, because -O is a compilation flag and loading a bytecode file doesn't involve compilation. At most, -O could switch the priority between .pyc and .pyo. 2.7 doesn't really differentiate between cached .pyc and standalone .pyc, so I don't know if a consistent behaviour can be achieved. Maybe the presence or absence of a matching .py can be used to trigger the first or second case above. From brett at python.org Tue Jun 12 22:17:02 2012 From: brett at python.org (Brett Cannon) Date: Tue, 12 Jun 2012 16:17:02 -0400 Subject: [Python-Dev] [compatibility-sig] making sure importlib.machinery.SourceLoader doesn't throw an exception if bytecode is not supported by a VM In-Reply-To: References: Message-ID: On Tue, Jun 12, 2012 at 3:53 PM, Jeff Hardy wrote: > On Tue, Jun 12, 2012 at 12:01 PM, Brett Cannon wrote: > > On Tue, Jun 12, 2012 at 2:28 PM, Eric Snow > > wrote: > >> > >> On Tue, Jun 12, 2012 at 10:48 AM, Brett Cannon > wrote: > >> > I should mention another option is to add sys.dont_read_bytecode (I > >> > think I > >> > have discussed this with Frank at some point). > >> > >> Or check for "sys.implementation.cache_tag is None"... > > > > > > Perfect! Will that work for Jython (Franke) and IronPython (Jeff)? > > IronPython will probably never *write* pyc files, but it might *read* > them at some point -- as I understand cache_tag, we'd set it to > whatever version of CPython's pyc files we could read (that seems to > violate the spirit of sys.implementation). If you wanted to sneak around not writing your own bytecode but still reading it, then yes, you could do that. And yes, that does somewhat violate the concept of what sys.implementation.cache_tag was supposed to do. =) > The combination of that and sys.dont_write_bytecode should cover all of the states; I'll just lock > down sys.dont_write_bytecode so that changes are completely ignored. > > Great! > > > > This does mean, though, that imp.cache_from_source() and > > imp.source_from_cache() might need to be updated to raise a reasonable > > exception when sys.implementation.cache_tag is set to None as I believe > > right now it will raise a TypeError because None isn't a str. But what to > > raise instead? TypeError? EnvironmentError? > > NotImplementedError? > That's also a possibility. I also realized that importlib checks for None being returned by cache_from_source(), so that's another option (which ties into how the rest of the PEP 302 methods work). -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Tue Jun 12 22:28:53 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 12 Jun 2012 22:28:53 +0200 Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython In-Reply-To: References: Message-ID: <20120612222853.5960008d@pitrou.net> On Mon, 11 Jun 2012 20:13:53 -0700 "fwierzbicki at gmail.com" wrote: > On Sun, Jun 10, 2012 at 11:58 PM, Nick Coghlan wrote: > > 1. Asking on python-dev is considered adequate. If an implementation > > wants to be consulted on changes, one or more of their developers > > *must* follow python-dev sufficiently closely that they don't miss > > cross-VM compatibility questions. 
(My concern is that this isn't > > reliable - we know from experience that other VMs can miss such > > questions when they're mixed in with the rest of the python-dev > > traffic) > > 2. As 1, but we adopt a subject line convention to make it easier to > > filter out general python-dev traffic for those that are just > > interested in cross-vm questions > > 3. Create a separate list for cross-VM discussions, *including* > > discussions that aren't directly relevant to Python-the-language or > > CPython-the-reference-interpreter (e.g. collaborating on a shared > > standard library fork). python-dev threads may be advertised on the > > new list if cross-VM feedback is considered particularly necessary. > (2) and (3) work for me - I try to do (1) but often miss discussions > until they have gone stale. Either would be fine with me too. cheers Antoine. From victor.stinner at gmail.com Tue Jun 12 23:03:40 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 12 Jun 2012 23:03:40 +0200 Subject: [Python-Dev] time.clock_info() field names In-Reply-To: References: <20120504001237.GA8209@cskk.homeip.net> Message-ID: 2012/5/4 Victor Stinner : > Anyway, the implementation and/or the documentation is buggy and > should be fixed (especially the Windows case). Done, I renamed "adjusted" to "adjustable", fixed its value on Windows (time.time) and Linux (time.monotonic), and updated the doc. -- changeset: 77415:0e46e0cd368f tag: tip user: Victor Stinner date: Tue Jun 12 22:46:37 2012 +0200 files: Doc/library/time.rst Include/pytime.h Lib/test/test_time.py Misc/NEWS Modules/timemodule.c Python/pytime.c description: PEP 418: Rename adjusted attribute to adjustable in time.get_clock_info() result Fix also its value on Windows and Linux according to its documentation: "adjustable" indicates if the clock *can be* adjusted, not if it is or was adjusted. In most cases, it is not possible to indicate if a clock is or was adjusted. -- Basically, time.get_clock_info().adjustable is only True for time.time(). It can also be True for time.perf_counter() if time.monotonic() is not available. I prefer "adjustable" over "adjusted" because it is well defined and its value is well known. For example, it is not easy to say if time.monotonic() is "adjusted" or not on Linux, whereas I can say that time.monotonic() is not *adjustable* on any OS. I will update the PEP except if someone complains :-) Sorry for being late, but I was exhausted by this PEP. Victor From fwierzbicki at gmail.com Wed Jun 13 00:24:24 2012 From: fwierzbicki at gmail.com (fwierzbicki at gmail.com) Date: Tue, 12 Jun 2012 15:24:24 -0700 Subject: [Python-Dev] [compatibility-sig] making sure importlib.machinery.SourceLoader doesn't throw an exception if bytecode is not supported by a VM In-Reply-To: References: Message-ID: On Tue, Jun 12, 2012 at 12:01 PM, Brett Cannon wrote: > > > On Tue, Jun 12, 2012 at 2:28 PM, Eric Snow > wrote: >> >> On Tue, Jun 12, 2012 at 10:48 AM, Brett Cannon wrote: >> > I should mention another option is to add sys.dont_read_bytecode (I >> > think I >> > have discussed this with Frank at some point). >> >> Or check for "sys.implementation.cache_tag is None"... > > > Perfect! Will that work for Jython (Franke) and IronPython (Jeff)? So Jython does actually emit bytecodes, but they are Java bytecodes instead of Python bytecodes. Right now they end up next to the .py files just like .pyc files. 
They have the possibly unfortunate naming foo.py -> foo$py.class -- If
I understand cache_tag (I may not) I guess Python 3 is putting the .pyc
files into hidden subdirectories instead of putting them next to the
.py files? If so we may do the same with our $py.class files.

Incidentally we also have a mode for reading .pyc files -- though we
haven't implemented writing them yet (we probably will eventually).

I guess what I'm trying to say is that I don't know exactly how we
will handle these new flags, but chances are we will use them (Again,
provided my guesses about what they do are anywhere near what they
really do).

>
> This does mean, though, that imp.cache_from_source() and
> imp.source_from_cache() might need to be updated to raise a reasonable
> exception when sys.implementation.cache_tag is set to None as I believe
> right now it will raise a TypeError because None isn't a str. But what to
> raise instead? TypeError? EnvironmentError?

NotImplementedError seems fine for me too if we don't end up using this flag.

-Frank

From kristjan at ccpgames.com  Wed Jun 13 00:03:30 2012
From: kristjan at ccpgames.com (Kristján Valur Jónsson)
Date: Tue, 12 Jun 2012 22:03:30 +0000
Subject: [Python-Dev] #12982: Should -O be required to *read* .pyo files?
In-Reply-To: <1339531057.12486.20.camel@ronan-desktop>
References: <4FD78D70.4030107@stoneleaf.us>,
	<1339531057.12486.20.camel@ronan-desktop>
Message-ID: 

[Vaguely related: -B prevents the writing of .pyc and .pyo (I don't
know how it works with PEP 3147). However, it doesn't prevent the
_reading_ of said files. It's been discussed here before and considered
useful, since stale .pyc files tend to stick around. Maybe a -BB flag
should be considered?]

K
________________________________________
From: python-dev-bounces+kristjan=ccpgames.com at python.org
[python-dev-bounces+kristjan=ccpgames.com at python.org] on behalf of
Ronan Lamy [ronan.lamy at gmail.com]
Sent: 12 June 2012 19:57
To: Ethan Furman
Cc: python-dev at python.org
Subject: Re: [Python-Dev] #12982: Should -O be required to *read* .pyo files?

On Tuesday, 12 June 2012 at 11:41 -0700, Ethan Furman wrote:
> Terry Reedy wrote:
> > http://bugs.python.org/issue12982
> >
> > Currently, cpython requires the -O flag to *read* .pyo files as well as
> > to write them. This is a nuisance to people who receive them from
> > others, without the source. The originator of the issue quotes the
> > following from the doc (without giving the location).
> >
> > "It is possible to have a file called spam.pyc (or spam.pyo when -O is
> > used) without a file spam.py for the same module. This can be used to
> > distribute a library of Python code in a form that is moderately hard to
> > reverse engineer."
> >
> > There is no warning that .pyo files are viral, in a sense. The user has
> > to use -O, which is a) a nuisance to remember if he has multiple scripts
> > and some need it and some not, and b) makes his own .py files used with
> > .pyo imports cached as .pyo, without docstrings, like it or not.
> >
> > Currently, the easiest workaround is to rename .pyo to .pyc and all
> > seems to work fine, even with a mixture of true .pyc and renamed .pyo
> > files. (The same is true with the -O flag and no renaming.) This
> > suggests that there is no current reason for the restriction in that the
> > *execution* of bytecode is not affected by the -O flag. (Another
> > workaround might be a custom importer -- but this is not trivial,
> > apparently.)
> > > > So is the import restriction either an accident or obsolete holdover? If > > so, can removing it be treated as a bugfix and put into current > > releases, or should it be treated as an enhancement only for a future > > release? > > > > Or is the restriction an intentional reservation of the possibility of > > making *execution* depend on the flag? Which would mean that the > > restriction should be kept and only the doc changed? > > I have no history so cannot say what was supposed to happen, but my > $0.02 would be that if -O is *not* specified then we should try to read > .pyc, then .pyo, and finally .py. In other words, I vote for -O being a > write flag, not a read flag. I don't know much about the history either, but under PEP 3147, there are really two cases: * .pyc and .pyo as compilation caches. These live in __pycache__/ and have a cache_tag, their filename looks like pkg/__pycache__/module.cpython-33.pyc and their only role is to speed up imports. * .pyc and .pyo as standalone, precompiled sources for modules. These are found in the same place as .py files (e.g. pkg/module.pyc). In the first case, I think that -O should dictate which of .pyc and .pyo is used, while the other is completely ignored. In the second case, both .pyc and .pyo should always be considered as valid module sources, because -O is a compilation flag and loading a bytecode file doesn't involve compilation. At most, -O could switch the priority between .pyc and .pyo. 2.7 doesn't really differentiate between cached .pyc and standalone .pyc, so I don't know if a consistent behaviour can be achieved. Maybe the presence or absence of a matching .py can be used to trigger the first or second case above. _______________________________________________ Python-Dev mailing list Python-Dev at python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/kristjan%40ccpgames.com From solipsis at pitrou.net Wed Jun 13 02:26:21 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 13 Jun 2012 02:26:21 +0200 Subject: [Python-Dev] issue #15038 - Optimize python Locks on Windows References: Message-ID: <20120613022621.3ff2a966@pitrou.net> On Tue, 12 Jun 2012 11:23:28 +0000 Kristj?n Valur J?nsson wrote: > Hi, > Could I get some feedback on this proposed patch? > It would be great to get it in before the beta. The review I made on your previous patch still applies. You shouldn't ask for feedback if you aren't willing to take it into account... Regards Antoine. From brett at python.org Wed Jun 13 03:10:46 2012 From: brett at python.org (Brett Cannon) Date: Tue, 12 Jun 2012 21:10:46 -0400 Subject: [Python-Dev] [compatibility-sig] making sure importlib.machinery.SourceLoader doesn't throw an exception if bytecode is not supported by a VM In-Reply-To: References: Message-ID: On Jun 12, 2012 6:24 PM, "fwierzbicki at gmail.com" wrote: > > On Tue, Jun 12, 2012 at 12:01 PM, Brett Cannon wrote: > > > > > > On Tue, Jun 12, 2012 at 2:28 PM, Eric Snow > > wrote: > >> > >> On Tue, Jun 12, 2012 at 10:48 AM, Brett Cannon wrote: > >> > I should mention another option is to add sys.dont_read_bytecode (I > >> > think I > >> > have discussed this with Frank at some point). > >> > >> Or check for "sys.implementation.cache_tag is None"... > > > > > > Perfect! Will that work for Jython (Franke) and IronPython (Jeff)? > So Jython does actually emit bytecodes, but they are Java bytecodes > instead of Python bytecodes. 
Right now they end up next to the .py > files just like .pyc files. They have the possibly unfortunate naming > foo.py -> foo$py.class -- If I understand cache_tag (I may not) I > guess Python 3 is putting the .pyc files into hidden subdirectories > instead of putting them next to the .py files? Yes, __pycache__. The tag is to allow different versions of bytecode to exist side-by-side (eg for CPython 3.3 it's cpython33 so the file ends up being named foo-cpython33.pyc). If so we may do the > same with our $py.class files. That was part of the hope when it was designed. > > Incidentally we also have a mode for reading .pyc files -- though we > haven't implementing writing them yet (we probably will eventually) If you can read .pyc files then you should be fine. > > I guess what I'm trying to say is that I don't know exactly how we > will handle these new flags, but chances are we will use them (Again > provided my guesses about what they do are anywhere near what they > really do). IOW it's too soon to be having this discussion. :) I mean regardless of what happens you can always tweak the import lib code as necessary, I just wanted to try to avoid it. > > > > > This does mean, though, that imp.cache_from_source() and > > imp.source_from_cache() might need to be updated to raise a reasonable > > exception when sys.implementation.cache_tag is set to None as I believe > > right now it will raise a TypeError because None isn't a str. But what to > > raise instead? TypeError? EnvironmentError? > NotImplementedError seems fine for me too if we don't end up using this flag. OK, that's 2 votes for that exception. > > -Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephen at xemacs.org Wed Jun 13 06:30:00 2012 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Wed, 13 Jun 2012 13:30:00 +0900 Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython In-Reply-To: References: <87vcix6xvr.fsf@uwakimon.sk.tsukuba.ac.jp> <87obop6jf9.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: <87lijr7s1j.fsf@uwakimon.sk.tsukuba.ac.jp> Brett Cannon writes: > On Tue, Jun 12, 2012 at 4:09 AM, Stephen J. Turnbull wrote: > > Brett Cannon writes: > Ah, but you helped make my point! Not at all; your point has long since been made. I certainly agree that the current situation is unfortunate. I think it's a bit rude of you to assume that those who oppose a new discussion list don't understand that. The question is whether a new list will be a net positive for the *whole* community, and whether it will significantly benefit the VM developers (beyond giving them some leverage to say "you really should have posted this to compatibility-sig, you know!") > If anything this new list would act as a showing of good will "The road to Hell," as they say. We tried this a couple of times at XEmacs; it didn't work. In practice, threads didn't move, they split, and the actual decisions were taken on the main list, sometimes seriously offending the members of the SIG list. The analogy of topics is not exact, and Python is more disciplined, so it might work better here. But you should plan for it, not merely appeal to "men of good will". > Anyway, it looks like everyone else chiming in is capitulating to keeping > it on python-dev with a proper subject line, so we will start with there > and if it proves ineffective I will create the list. 
At that time, please consider an announce-only list that VM developers can subscribe to in lieu of python-dev (maybe with reply-to directing discussion to python-dev). From ncoghlan at gmail.com Wed Jun 13 07:18:24 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 13 Jun 2012 15:18:24 +1000 Subject: [Python-Dev] [compatibility-sig] making sure importlib.machinery.SourceLoader doesn't throw an exception if bytecode is not supported by a VM In-Reply-To: References: Message-ID: On Wed, Jun 13, 2012 at 11:10 AM, Brett Cannon wrote: >> > This does mean, though, that imp.cache_from_source() and >> > imp.source_from_cache() might need to be updated to raise a reasonable >> > exception when sys.implementation.cache_tag is set to None as I believe >> > right now it will raise a TypeError because None isn't a str. But what >> > to >> > raise instead? TypeError? EnvironmentError? >> NotImplementedError seems fine for me too if we don't end up using this >> flag. > > OK, that's 2 votes for that exception. + 1 from me as well, both for skipping any implicit reading or writing of the cache when cache_tag is None (IIRC, that's the use case we had in mind when we allowed that field to be None in the PEP 421 discussion), and for *explicit* attempts to access the cache when the tag is None triggering NotImplementedError. That way people are free to use either LBYL (checking cache_tag) or EAFP (catching NotImplementedError). Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From ncoghlan at gmail.com Wed Jun 13 07:32:59 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 13 Jun 2012 15:32:59 +1000 Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython In-Reply-To: <87lijr7s1j.fsf@uwakimon.sk.tsukuba.ac.jp> References: <87vcix6xvr.fsf@uwakimon.sk.tsukuba.ac.jp> <87obop6jf9.fsf@uwakimon.sk.tsukuba.ac.jp> <87lijr7s1j.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: On Wed, Jun 13, 2012 at 2:30 PM, Stephen J. Turnbull wrote: > Brett Cannon writes: > ?> If anything this new list would act as a showing of good will > > "The road to Hell," as they say. > > We tried this a couple of times at XEmacs; it didn't work. ?In > practice, threads didn't move, they split, and the actual decisions > were taken on the main list, sometimes seriously offending the members > of the SIG list. ?The analogy of topics is not exact, and Python is > more disciplined, so it might work better here. ?But you should plan > for it, not merely appeal to "men of good will". Aye, we already suffer the "split discussion" problem with import-sig (and any other sig once conclusions are brought to python-dev for ratification, although here every SIG already knows they will eventually have to make their case on the main list for any standard library changes). I think the idea of using topic markers as a way to allow people to set up their own filters that doesn't require spinning out a whole new list is a good compromise. Adding a subject header is even less of a burden than remembering to pass a question to a different list. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From eliben at gmail.com Wed Jun 13 09:43:04 2012 From: eliben at gmail.com (Eli Bendersky) Date: Wed, 13 Jun 2012 10:43:04 +0300 Subject: [Python-Dev] dictnotes.txt out of date? Message-ID: Hi pydev, I was looking at the memory allocation strategy of dict, out of curiosity, and noted that Objects/dictnotes.txt is out of date as far as the parameters go. 
It says about PyDict_STARTSIZE:

----
* PyDict_STARTSIZE. Starting size of dict (unless an instance dict).
    Currently set to 8. Must be a power of two.
    New dicts have to zero-out every cell.
    Increasing improves the sparseness of small dictionaries but costs
    time to read in the additional cache lines if they are not already
    in cache. That case is common when keyword arguments are passed.
    Prior to version 3.3, PyDict_MINSIZE was used as the starting size
    of a new dict.
-----

Although it mentions 3.3, I find no reference to PyDict_STARTSIZE in
the code anywhere. Also it mentions PyDict_MINSIZE, which doesn't exist
any more, having been replaced by PyDict_MINSIZE_SPLIT and
PyDict_MINSIZE_COMBINED.

I don't know what else is out of date; I just looked at those and they
were. Maybe it would make sense to kill dictnotes.txt, folding some of
its more important contents into comments in dictobject.c, since the
latter has a higher chance of being maintained along with code changes?

Eli

From kristjan at ccpgames.com  Wed Jun 13 15:47:28 2012
From: kristjan at ccpgames.com (Kristján Valur Jónsson)
Date: Wed, 13 Jun 2012 13:47:28 +0000
Subject: [Python-Dev] issue #15038 - Optimize python Locks on Windows
In-Reply-To: <20120613022621.3ff2a966@pitrou.net>
References: <20120613022621.3ff2a966@pitrou.net>
Message-ID: 

I have reworked the patch, so it might be helpful to specify what
exactly it is that you object to. Perhaps in the defect itself.
I can add here that your worries that the previous patch defaulted to
Vista-specific features were actually unfounded.
I've added my reasons for including Vista-specific support in with the
issue as well. I happen to think it is a good idea and would like to
get the input from others on that particular issue.
Cheers,
K

> -----Original Message-----
> From: python-dev-bounces+kristjan=ccpgames.com at python.org
> [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On
> Behalf Of Antoine Pitrou
> Sent: 13 June 2012 00:26
> To: python-dev at python.org
> Subject: Re: [Python-Dev] issue #15038 - Optimize python Locks on Windows
>
>
> The review I made on your previous patch still applies.
> You shouldn't ask for feedback if you aren't willing to take it into account...
>
> Regards
>
> Antoine.
>
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-
> dev/kristjan%40ccpgames.com

From solipsis at pitrou.net  Wed Jun 13 16:04:12 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 13 Jun 2012 16:04:12 +0200
Subject: [Python-Dev] issue #15038 - Optimize python Locks on Windows
References: <20120613022621.3ff2a966@pitrou.net>
Message-ID: <20120613160412.6396110d@pitrou.net>

On Wed, 13 Jun 2012 13:47:28 +0000
Kristján Valur Jónsson wrote:
> I have reworked the patch, so it might be helpful to specify what
> exactly it is that you object to. Perhaps in the defect itself.
> I can add here that your worries that the previous patch defaulted to
> Vista-specific features were actually unfounded.

Ok, thanks. I probably have misunderstood the chain of #define's.
I will let other people comment on the other aspects of the patch.

Regards

Antoine.

From mark at hotpy.org  Wed Jun 13 16:22:02 2012
From: mark at hotpy.org (Mark Shannon)
Date: Wed, 13 Jun 2012 15:22:02 +0100
Subject: [Python-Dev] dictnotes.txt out of date?
In-Reply-To: 
References: 
Message-ID: <4FD8A20A.3000701@hotpy.org>

Eli Bendersky wrote:
> Hi pydev,
>
> I was looking at the memory allocation strategy of dict, out of
> curiosity, and noted that Objects/dictnotes.txt is out of date as far
> as the parameters go. It says about PyDict_STARTSIZE:
>
> ----
> * PyDict_STARTSIZE. Starting size of dict (unless an instance dict).
>     Currently set to 8. Must be a power of two.
>     New dicts have to zero-out every cell.
>     Increasing improves the sparseness of small dictionaries but costs
>     time to read in the additional cache lines if they are not already
>     in cache. That case is common when keyword arguments are passed.
>     Prior to version 3.3, PyDict_MINSIZE was used as the starting size
>     of a new dict.
> -----
>
> Although it mentions 3.3, I find no reference to PyDict_STARTSIZE in
> the code anywhere.
> Also it mentions PyDict_MINSIZE, which doesn't exist any more, having
> been replaced by PyDict_MINSIZE_SPLIT and PyDict_MINSIZE_COMBINED.

That's my fault. I didn't update dictnotes.txt when I changed
PyDict_STARTSIZE to PyDict_MINSIZE_COMBINED.

>
> I don't know what else is out of date; I just looked at those and they
> were. Maybe it would make sense to kill dictnotes.txt, folding some of
> its more important contents into comments in dictobject.c, since the
> latter has a higher chance of being maintained along with code
> changes?

I think that the parts of dictnotes.txt that just duplicate comments in
dictobject.c should be removed.
However, I think it is worth keeping dictnotes.txt as it has historical
information and results of previous experiments.

Cheers,
Mark

From eliben at gmail.com  Wed Jun 13 17:03:05 2012
From: eliben at gmail.com (Eli Bendersky)
Date: Wed, 13 Jun 2012 18:03:05 +0300
Subject: [Python-Dev] dictnotes.txt out of date?
In-Reply-To: <4FD8A20A.3000701@hotpy.org>
References: <4FD8A20A.3000701@hotpy.org>
Message-ID: 

>> I was looking at the memory allocation strategy of dict, out of
>> curiosity, and noted that Objects/dictnotes.txt is out of date as far
>> as the parameters go. It says about PyDict_STARTSIZE:
>>
>> ----
>> * PyDict_STARTSIZE. Starting size of dict (unless an instance dict).
>>     Currently set to 8. Must be a power of two.
>>     New dicts have to zero-out every cell.
>>     Increasing improves the sparseness of small dictionaries but costs
>>     time to read in the additional cache lines if they are not already
>>     in cache. That case is common when keyword arguments are passed.
>>     Prior to version 3.3, PyDict_MINSIZE was used as the starting size
>>     of a new dict.
>> -----
>>
>> Although it mentions 3.3, I find no reference to PyDict_STARTSIZE in
>> the code anywhere.
>> Also it mentions PyDict_MINSIZE, which doesn't exist any more, having
>> been replaced by PyDict_MINSIZE_SPLIT and PyDict_MINSIZE_COMBINED.
>
> That's my fault. I didn't update dictnotes.txt when I changed
> PyDict_STARTSIZE to PyDict_MINSIZE_COMBINED.

Could you update it now?

>> I don't know what else is out of date; I just looked at those and they
>> were. Maybe it would make sense to kill dictnotes.txt, folding some of
>> its more important contents into comments in dictobject.c, since the
>> latter has a higher chance of being maintained along with code
>> changes?
>
> I think that the parts of dictnotes.txt that just duplicate comments in
> dictobject.c should be removed.
> However, I think it is worth keeping dictnotes.txt as it has historical
> information and results of previous experiments.
Personally I think that describing the customization #defines belongs
in the source, above the relevant #defines, rather than in a separate
file. No problem with leaving historical notes and misc ruminations in
the separate .txt file, though.

Eli

From g.brandl at gmx.net  Wed Jun 13 18:58:44 2012
From: g.brandl at gmx.net (Georg Brandl)
Date: Wed, 13 Jun 2012 18:58:44 +0200
Subject: [Python-Dev] what is happening with the regex module going into
	Python 3.3?
In-Reply-To: <4FCBEA8C.7080207@v.loewis.de>
References: <4FCBEA8C.7080207@v.loewis.de>
Message-ID: 

On 04.06.2012 00:51, "Martin v. Löwis" wrote:
>> That last statement basically suggests that something like regex would
>> never be accepted until a CPython core developer was actually running
>> into pain with the many flaws in the re module (especially when it comes
>> to Unicode). I disagree with that.
>>
>> Per the language summit, I think we need to just do it. Put it in as re
>> and rename the existing re module to sre.
>
> There are really places where "we" just doesn't work, even in a
> community project. "We" will never commit anything to revision control.
> Individual committers commit.
>
> So if *you* want to commit it, go ahead - I think there is general
> approval for that. Take the praise when it works, and take the (likely)
> blame for when it fails in some significant way, and then work on fixing
> it.
>
>> The issue seems to be primarily one of "who is volunteering to do it?"
>
> I don't think anybody is, or will be for the coming years. I wish I had
> trust in MRAB to stay around and work on this for the next ten years
> (and I think the author of the regex module really needs to commit for
> that timespan, see SRE's history), but I don't. So whoever commits the
> change now is in charge, and will either have to work hard on fixing the
> problems, or will be responsible for breaking Python 3 in a serious way.

Agreed with all of the above. (That's not a -1, but a warning.)

Georg

From brett at python.org  Wed Jun 13 19:11:46 2012
From: brett at python.org (Brett Cannon)
Date: Wed, 13 Jun 2012 13:11:46 -0400
Subject: [Python-Dev] [compatibility-sig] making sure
	importlib.machinery.SourceLoader doesn't throw an exception if
	bytecode is not supported by a VM
In-Reply-To: 
References: 
Message-ID: 

On Wed, Jun 13, 2012 at 1:18 AM, Nick Coghlan wrote:

> On Wed, Jun 13, 2012 at 11:10 AM, Brett Cannon wrote:
> >> > This does mean, though, that imp.cache_from_source() and
> >> > imp.source_from_cache() might need to be updated to raise a reasonable
> >> > exception when sys.implementation.cache_tag is set to None as I believe
> >> > right now it will raise a TypeError because None isn't a str. But what
> >> > to raise instead? TypeError? EnvironmentError?
> >> NotImplementedError seems fine for me too if we don't end up using this
> >> flag.
> >
> > OK, that's 2 votes for that exception.
>
> + 1 from me as well, both for skipping any implicit reading or writing
> of the cache when cache_tag is None (IIRC, that's the use case we had
> in mind when we allowed that field to be None in the PEP 421
> discussion), and for *explicit* attempts to access the cache when the
> tag is None triggering NotImplementedError.
>
> That way people are free to use either LBYL (checking cache_tag) or
> EAFP (catching NotImplementedError).
>

I'm sold: http://bugs.python.org/issue15056 for tracking the change.
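In code, the two styles might look like this (a sketch only; it assumes
the issue 15056 behaviour agreed above actually lands, and
bytecode_path is a made-up helper name):

    import imp
    import sys

    def bytecode_path(source_path):
        # LBYL: a None cache_tag means this VM has no bytecode cache
        if sys.implementation.cache_tag is None:
            return None
        return imp.cache_from_source(source_path)

    def bytecode_path_eafp(source_path):
        # EAFP: rely on cache_from_source() raising NotImplementedError
        try:
            return imp.cache_from_source(source_path)
        except NotImplementedError:
            return None

Either way, a VM like IronPython that never writes bytecode gets a
clean "no cache" answer instead of a confusing TypeError.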
From brett at python.org  Wed Jun 13 19:19:43 2012
From: brett at python.org (Brett Cannon)
Date: Wed, 13 Jun 2012 13:19:43 -0400
Subject: [Python-Dev] #12982: Should -O be required to *read* .pyo files?
In-Reply-To: 
References: 
Message-ID: 

On Tue, Jun 12, 2012 at 2:16 PM, Terry Reedy wrote:

> http://bugs.python.org/issue12982
>
> Currently, cpython requires the -O flag to *read* .pyo files as well as
> to write them. This is a nuisance to people who receive them from others,
> without the source. The originator of the issue quotes the following from
> the doc (without giving the location).
>
> "It is possible to have a file called spam.pyc (or spam.pyo when -O is
> used) without a file spam.py for the same module. This can be used to
> distribute a library of Python code in a form that is moderately hard to
> reverse engineer."
>
> There is no warning that .pyo files are viral, in a sense. The user has to
> use -O, which is a) a nuisance to remember if he has multiple scripts and
> some need it and some not, and b) makes his own .py files used with .pyo
> imports cached as .pyo, without docstrings, like it or not.
>
> Currently, the easiest workaround is to rename .pyo to .pyc and all seems
> to work fine, even with a mixture of true .pyc and renamed .pyo files. (The
> same is true with the -O flag and no renaming.) This suggests that there is
> no current reason for the restriction in that the *execution* of bytecode
> is not affected by the -O flag. (Another workaround might be a custom
> importer -- but this is not trivial, apparently.)

In Python 3.3 it's actually trivial.

> So is the import restriction either an accident or obsolete holdover?

Neither. .pyo files are actually different from .pyc files in terms of
what bytecode they may contain. Currently -O causes all asserts to be
left out of the bytecode, and -OO leaves out all docstrings on top of
what -O does. This makes a difference if you are trying to introspect at
the interpreter prompt or are testing things in development and want
those asserts to be triggered if needed.

> If so, can removing it be treated as a bugfix and put into current
> releases, or should it be treated as an enhancement only for a future
> release?

The behaviour shouldn't change. There has been talk of doing even more
aggressive optimizing under -O, which once again would cause an even
larger deviation between a .pyo file and a .pyc file (e.g. allowing
Python code to hook into the peephole optimizer or an entirely new AST
optimizer).

> Or is the restriction an intentional reservation of the possibility of
> making *execution* depend on the flag? Which would mean that the
> restriction should be kept and only the doc changed?

The docs should get updated to be more clear.
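A small example makes the difference concrete (a minimal sketch; the
module name and messages are made up):

    # spam.py -- a made-up example module
    def f():
        """A docstring that -OO leaves out of the compiled file."""
        assert False, "an assert that -O (and -OO) compiles out"

    # python     -c "import spam; spam.f()"              -> AssertionError
    # python -O  -c "import spam; spam.f()"              -> no error; the assert is gone
    # python -OO -c "import spam; print(spam.f.__doc__)" -> None

Since the stripping happens at compile time, the resulting .pyc and
.pyo are genuinely different programs, which is why treating one as the
other is more than a naming question.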
The originator of the issue quotes the following from > > the doc (without giving the location). [...] > > So is the import restriction either an accident or obsolete holdover? > > Neither. .pyo files are actually different from .pyc files in terms of what > bytecode they may emit. Currently -O causes all asserts to be left out of > the bytecode and -OO leaves out all docstrings on top of what -O does. This > makes a difference if you are trying to introspect at the interpreter > prompt or are testing things in development and want those asserts to be > triggered if needed. > > > If so, can removing it be treated as a bugfix and put into current > > releases, or should it be treated as an enhancement only for a future > > release? > > > The behaviour shouldn't change. There has been talk of doing even more > aggressive optimizing under -O, which once again would cause an even larger > deviation between a .pyo file and a .pyc file (e.g. allowing Python code to > hook into the peephole optimizer or an entirely new AST optimizer). > > > Or is the restriction an intentional reservation of the possibility of > > making *execution* depend on the flag? Which would mean that the > > restriction should be kept and only the doc changed? > > The docs should get updated to be more clear. OK, but you didn't answer the question :). If I understand correctly, everything you said applies to *writing* the bytecode, not reading it. So, is there any reason to not use the .pyo file (if that's all that is around) when -O is not specified? The only technical reason I can see why -O should be required for a .pyo file to be used (*if* it is the only thing around) is if it won't *run* without the -O switch. Is there any expectation that that will ever be the case? On the other hand, not wanting make any extra effort to support sourceless distributions could be a reason as well. But if that's the case we should be transparent about it. --David From a.badger at gmail.com Wed Jun 13 20:20:24 2012 From: a.badger at gmail.com (Toshio Kuratomi) Date: Wed, 13 Jun 2012 11:20:24 -0700 Subject: [Python-Dev] #12982: Should -O be required to *read* .pyo files? In-Reply-To: <20120613175810.D7A032500AC@webabinitio.net> References: <20120613175810.D7A032500AC@webabinitio.net> Message-ID: <20120613182024.GE4682@unaka.lan> On Wed, Jun 13, 2012 at 01:58:10PM -0400, R. David Murray wrote: > > OK, but you didn't answer the question :). If I understand correctly, > everything you said applies to *writing* the bytecode, not reading it. > > So, is there any reason to not use the .pyo file (if that's all that is > around) when -O is not specified? > > The only technical reason I can see why -O should be required for a .pyo > file to be used (*if* it is the only thing around) is if it won't *run* > without the -O switch. Is there any expectation that that will ever be > the case? > Yes. For instance, if I create a .pyo with -OO it wouldn't have docstrings. Another piece of code can legally import that and try to use the docstring for something. This would fail if only the .pyo was present. Of course, it would also fail under the present behaviour since no .py or .pyc was present to be imported. The error that's displayed might be clearer if we fail when attempting to read a .py/.pyc rather than failing when the docstring is found to be missing, though. -Toshio -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 198 bytes Desc: not available URL: From ethan at stoneleaf.us Wed Jun 13 20:46:47 2012 From: ethan at stoneleaf.us (Ethan Furman) Date: Wed, 13 Jun 2012 11:46:47 -0700 Subject: [Python-Dev] #12982: Should -O be required to *read* .pyo files? In-Reply-To: References: Message-ID: <4FD8E017.60006@stoneleaf.us> Brett Cannon wrote: > On Tue, Jun 12, 2012 at 2:16 PM, Terry Reedy wrote: > >> http://bugs.python.org/__issue12982 >> >> Currently, cpython requires the -O flag to *read* .pyo files as well >> as the write them. This is a nuisance to people who receive them >> from others, without the source. The originator of the issue quotes >> the following from the doc (without giving the location). >> >> "It is possible to have a file called spam.pyc (or spam.pyo when -O >> is used) without a file spam.py for the same module. This can be >> used to distribute a library of Python code in a form that is >> moderately hard to reverse engineer." >> >> There is no warning that .pyo files are viral, in a sense. The user >> has to use -O, which is a) a nuisance to remember if he has multiple >> scripts and some need it and some not, and b) makes his own .py >> files used with .pyo imports cached as .pyo, without docstrings, >> like it or not. >> >> Currently, the easiest workaround is to rename .pyo to .pyc and all >> seems to work fine, even with a mixture of true .pyc and renamed >> .pyo files. (The same is true with the -O flag and no renaming.) >> This suggests that there is no current reason for the restriction in >> that the *execution* of bytecode is not affected by the -O flag. >> (Another workaround might be a custom importer -- but this is not >> trivial, apparently.) > > In Python 3.3 it's actually trivial. > >> So is the import restriction either an accident or obsolete holdover? > > Neither. .pyo files are actually different from .pyc files in terms of > what bytecode they may emit. Currently -O causes all asserts to be left > out of the bytecode and -OO leaves out all docstrings on top of what -O > does. This makes a difference if you are trying to introspect at the > interpreter prompt or are testing things in development and want those > asserts to be triggered if needed. But what does this have to do with those cases where *only* the .pyo file is available, and we are trying to run it? In these cases it would have to be in the main folder (not __pycache__) which means somebody did it deliberately. ~Ethan~ From solipsis at pitrou.net Wed Jun 13 20:46:50 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 13 Jun 2012 20:46:50 +0200 Subject: [Python-Dev] #12982: Should -O be required to *read* .pyo files? References: <20120613175810.D7A032500AC@webabinitio.net> <20120613182024.GE4682@unaka.lan> Message-ID: <20120613204650.09ccbe92@pitrou.net> On Wed, 13 Jun 2012 11:20:24 -0700 Toshio Kuratomi wrote: > On Wed, Jun 13, 2012 at 01:58:10PM -0400, R. David Murray wrote: > > > > OK, but you didn't answer the question :). If I understand correctly, > > everything you said applies to *writing* the bytecode, not reading it. > > > > So, is there any reason to not use the .pyo file (if that's all that is > > around) when -O is not specified? > > > > The only technical reason I can see why -O should be required for a .pyo > > file to be used (*if* it is the only thing around) is if it won't *run* > > without the -O switch. Is there any expectation that that will ever be > > the case? > > > Yes. 
For instance, if I create a .pyo with -OO it wouldn't have docstrings. > Another piece of code can legally import that and try to use the docstring > for something. This would fail if only the .pyo was present. Not only docstrings, but also asserts. I think running a pyo without -O would be a bug. Regards Antoine. From ethan at stoneleaf.us Wed Jun 13 20:41:56 2012 From: ethan at stoneleaf.us (Ethan Furman) Date: Wed, 13 Jun 2012 11:41:56 -0700 Subject: [Python-Dev] #12982: Should -O be required to *read* .pyo files? In-Reply-To: <20120613182024.GE4682@unaka.lan> References: <20120613175810.D7A032500AC@webabinitio.net> <20120613182024.GE4682@unaka.lan> Message-ID: <4FD8DEF4.6000803@stoneleaf.us> Toshio Kuratomi wrote: > On Wed, Jun 13, 2012 at 01:58:10PM -0400, R. David Murray wrote: >> OK, but you didn't answer the question :). If I understand correctly, >> everything you said applies to *writing* the bytecode, not reading it. >> >> So, is there any reason to not use the .pyo file (if that's all that is >> around) when -O is not specified? >> >> The only technical reason I can see why -O should be required for a .pyo >> file to be used (*if* it is the only thing around) is if it won't *run* >> without the -O switch. Is there any expectation that that will ever be >> the case? >> > Yes. For instance, if I create a .pyo with -OO it wouldn't have docstrings. > Another piece of code can legally import that and try to use the docstring > for something. This would fail if only the .pyo was present. Why should it fail? -OO causes docstring access to return None, just as if a docstring had not been specified in the first place. Any decent code will be checking for an undefined docstring -- after all, they are not rare. ~Ethan~ From rdmurray at bitdance.com Wed Jun 13 20:57:35 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Wed, 13 Jun 2012 14:57:35 -0400 Subject: [Python-Dev] #12982: Should -O be required to *read* .pyo files? In-Reply-To: <20120613182024.GE4682@unaka.lan> References: <20120613175810.D7A032500AC@webabinitio.net> <20120613182024.GE4682@unaka.lan> Message-ID: <20120613185736.1B8A6250031@webabinitio.net> On Wed, 13 Jun 2012 11:20:24 -0700, Toshio Kuratomi wrote: > On Wed, Jun 13, 2012 at 01:58:10PM -0400, R. David Murray wrote: > > > > OK, but you didn't answer the question :). If I understand correctly, > > everything you said applies to *writing* the bytecode, not reading it. > > > > So, is there any reason to not use the .pyo file (if that's all that is > > around) when -O is not specified? > > > > The only technical reason I can see why -O should be required for a .pyo > > file to be used (*if* it is the only thing around) is if it won't *run* > > without the -O switch. Is there any expectation that that will ever be > > the case? > > > Yes. For instance, if I create a .pyo with -OO it wouldn't have docstrings. > Another piece of code can legally import that and try to use the docstring > for something. This would fail if only the .pyo was present. Yes, but that's not what I'm talking about. I would treat code that depends on the presence of docstrings and doesn't have a fallback for dealing with there absence as buggy code, since anyone might decide to run that code with -OO, and the code would fail in that case too. I'm talking about a case where the code runs correctly with -O (or -OO), but fails if the code from the .pyo is loaded and python is run *without* -O (or -OO). 
> Of course, it would also fail under the present behaviour since no .py or > .pyc was present to be imported. The error that's displayed might be > clearer if we fail when attempting to read a .py/.pyc rather than failing > when the docstring is found to be missing, though. Well, right now if there is only a .pyo file and you run python without -O, you get an import error. The question is, is that the way we really want it to work? --David From rdmurray at bitdance.com Wed Jun 13 21:13:54 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Wed, 13 Jun 2012 15:13:54 -0400 Subject: [Python-Dev] #12982: Should -O be required to *read* .pyo files? In-Reply-To: <20120613204650.09ccbe92@pitrou.net> References: <20120613175810.D7A032500AC@webabinitio.net> <20120613182024.GE4682@unaka.lan> <20120613204650.09ccbe92@pitrou.net> Message-ID: <20120613191355.C82AD25009E@webabinitio.net> On Wed, 13 Jun 2012 20:46:50 +0200, Antoine Pitrou wrote: > On Wed, 13 Jun 2012 11:20:24 -0700 > Toshio Kuratomi wrote: > > On Wed, Jun 13, 2012 at 01:58:10PM -0400, R. David Murray wrote: > > > > > > OK, but you didn't answer the question :). If I understand correctly, > > > everything you said applies to *writing* the bytecode, not reading it. > > > > > > So, is there any reason to not use the .pyo file (if that's all that is > > > around) when -O is not specified? > > > > > > The only technical reason I can see why -O should be required for a .pyo > > > file to be used (*if* it is the only thing around) is if it won't *run* > > > without the -O switch. Is there any expectation that that will ever be > > > the case? > > > > > Yes. For instance, if I create a .pyo with -OO it wouldn't have docstrings. > > Another piece of code can legally import that and try to use the docstring > > for something. This would fail if only the .pyo was present. > > Not only docstrings, but also asserts. I think running a pyo without -O > would be a bug. Again, a program that depends on asserts is buggy. As Ethan pointed out we are asking about the case where someone is *deliberately* setting the .pyo file up to be run as the "normal" case. I'm not sure we want to support that, I just want us to be clear about why we don't :) --David From ethan at stoneleaf.us Wed Jun 13 21:36:55 2012 From: ethan at stoneleaf.us (Ethan Furman) Date: Wed, 13 Jun 2012 12:36:55 -0700 Subject: [Python-Dev] #12982: Should -O be required to *read* .pyo files? In-Reply-To: <20120613191355.C82AD25009E@webabinitio.net> References: <20120613175810.D7A032500AC@webabinitio.net> <20120613182024.GE4682@unaka.lan> <20120613204650.09ccbe92@pitrou.net> <20120613191355.C82AD25009E@webabinitio.net> Message-ID: <4FD8EBD7.7090500@stoneleaf.us> R. David Murray wrote: > On Wed, 13 Jun 2012 20:46:50 +0200, Antoine Pitrou wrote: >> On Wed, 13 Jun 2012 11:20:24 -0700 >> Toshio Kuratomi wrote: >>> On Wed, Jun 13, 2012 at 01:58:10PM -0400, R. David Murray wrote: >>>> OK, but you didn't answer the question :). If I understand correctly, >>>> everything you said applies to *writing* the bytecode, not reading it. >>>> >>>> So, is there any reason to not use the .pyo file (if that's all that is >>>> around) when -O is not specified? >>>> >>>> The only technical reason I can see why -O should be required for a .pyo >>>> file to be used (*if* it is the only thing around) is if it won't *run* >>>> without the -O switch. Is there any expectation that that will ever be >>>> the case? >>>> >>> Yes. For instance, if I create a .pyo with -OO it wouldn't have docstrings. 
>>> Another piece of code can legally import that and try to use the docstring >>> for something. This would fail if only the .pyo was present. >> Not only docstrings, but also asserts. I think running a pyo without -O >> would be a bug. > > Again, a program that depends on asserts is buggy. > > As Ethan pointed out we are asking about the case where someone is > *deliberately* setting the .pyo file up to be run as the "normal" > case. > > I'm not sure we want to support that, I just want us to be clear > about why we don't :) Currently, the alternative to supporting this behavior is to either: 1) require the end-user to specify -O (major nuisance) or 2) have the distributor rename the .pyo file to .pyc I think 1 is a non-starter (non-finisher? ;) but I could live with 2 -- after all, if someone is going to the effort of removing the .py file and moving the .pyo file into its place, renaming the .pyo to .pyc is trivial. So the question, then, is: is option 2 better than just supporting .pyo files without -O when they are all that is available? ~Ethan~ From raymond.hettinger at gmail.com Wed Jun 13 21:52:14 2012 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Wed, 13 Jun 2012 12:52:14 -0700 Subject: [Python-Dev] dictnotes.txt out of date? In-Reply-To: References: <5DC7D3C2-4BE4-4ECC-A037-7FC5E3244DA8@gmail.com> Message-ID: On Jun 13, 2012, at 10:35 AM, Eli Bendersky wrote: > Did you mean to send this to the list, Raymond? Yes. I wanted to find out whether someone approved changing all the dict tunable parameters. I thought those weren't supposed to have changed. PEP 412 notes that the existing parameters were finely tuned and it did not make recommendations for changing them. At one point, I spent a full month testing all of the tunable parameters using dozens of popular Python applications. The values used in Py3.2 represent the best settings for most apps. They should not have been changed without deep thought and substantial testing. The reduction of the dict min size has an immediate impact on code using multiple keyword arguments. The reduced growth rate (from x4 to x2) negatively impacts apps that have dicts with a steady size but constantly changing keys (removing an old key and adding a new one). The lru_cache is an example. The reduced growth causes it to resize much more frequently than before. I think the tunable parameters should be switched back to what they were before. Tim and others spent a lot of time getting those right and my month of detailed testing confirmed that those were excellent choices. The introduction of Mark's shared-key dicts was orthogonal to the issue of correct parameter settings. Those parameters did not have to change and probably should not have been changed. Raymond -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjreedy at udel.edu Wed Jun 13 22:06:22 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 13 Jun 2012 16:06:22 -0400 Subject: [Python-Dev] #12982: Should -O be required to *read* .pyo files? In-Reply-To: <20120613204650.09ccbe92@pitrou.net> References: <20120613175810.D7A032500AC@webabinitio.net> <20120613182024.GE4682@unaka.lan> <20120613204650.09ccbe92@pitrou.net> Message-ID: On 6/13/2012 2:46 PM, Antoine Pitrou wrote: > Not only docstrings, but also asserts. I think running a pyo without -O > would be a bug. That cat is already out of the bag ;-) People are doing that now by renaming x.pyo to x.pyc. Brett claims that it is also easy to do in 3.3 with a custom importer.
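Presumably something along these lines (an untested sketch against the new importlib machinery; a real version would also need to keep the standard source and extension loaders in the details list):

    import sys
    from importlib.machinery import FileFinder, SourcelessFileLoader

    # Accept .pyo files as if they were .pyc files, regardless of -O:
    loader_details = [(SourcelessFileLoader, ['.pyc', '.pyo'])]
    sys.path_hooks.insert(0, FileFinder.path_hook(*loader_details))
    sys.path_importer_cache.clear()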
-- Terry Jan Reedy From tjreedy at udel.edu Wed Jun 13 22:32:01 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 13 Jun 2012 16:32:01 -0400 Subject: [Python-Dev] #12982: Should -O be required to *read* .pyo files? In-Reply-To: References: Message-ID: On 6/13/2012 1:19 PM, Brett Cannon wrote: > On Tue, Jun 12, 2012 at 2:16 PM, Terry Reedy > wrote: > > http://bugs.python.org/__issue12982 > > Currently, cpython requires the -O flag to *read* .pyo files as well > as the write them. This is a nuisance to people who receive them > from others, without the source. The originator of the issue quotes > the following from the doc (without giving the location). > > "It is possible to have a file called spam.pyc (or spam.pyo when -O > is used) without a file spam.py for the same module. This can be > used to distribute a library of Python code in a form that is > moderately hard to reverse engineer." > > There is no warning that .pyo files are viral, in a sense. The user > has to use -O, which is a) a nuisance to remember if he has multiple > scripts and some need it and some not, and b) makes his own .py > files used with .pyo imports cached as .pyo, without docstrings, > like it or not. > > Currently, the easiest workaround is to rename .pyo to .pyc and all > seems to work fine, even with a mixture of true .pyc and renamed > .pyo files. (The same is true with the -O flag and no renaming.) > This suggests that there is no current reason for the restriction in > that the *execution* of bytecode is not affected by the -O flag. > (Another workaround might be a custom importer -- but this is not > trivial, apparently.) > > > In Python 3.3 it's actually trivial. For you. Anyway, I am sure Michael of #12982 is using an earlier version. > So is the import restriction either an accident or obsolete holdover? > > > Neither. .pyo files are actually different from .pyc files in terms of > what bytecode they may emit. Currently -O causes all asserts to be left > out of the bytecode and -OO leaves out all docstrings on top of what -O > does. This makes a difference if you are trying to introspect at the > interpreter prompt or are testing things in development and want those > asserts to be triggered if needed. I suggested to Michael that he should request an all-.pyc library for that reason. > If so, can removing it be treated as a bugfix and put into current > releases, or should it be treated as an enhancement only for a > future release? > > > The behaviour shouldn't change. There has been talk of doing even more > aggressive optimizing under -O, which once again would cause an even > larger deviation between a .pyo file and a .pyc file (e.g. allowing > Python code to hook into the peephole optimizer or an entirely new AST > optimizer). Would such a change mean that *reading* a .pyo file as if it were a .pyc file would start failing? (It now works, by renaming the file.) If so, would not it be better to rely on having a different magic number *in* the file rather than on its mutable external name? You just said above that evading import restriction by name is now trivial. So what is the point of keeping it. If, in the future, there *are* separate execution pathways*, and we want the .pyo pathway closed unless -O is passed, then it seems that that could only be enforced by a magic number in the file. *Unladen Swallow would have *optionally* produced cache files completely different from current bytecode, with a different extension. A couple of people have suggested using wordcode instead of bytecode. 
If this were also introduced as an option, its cache files would also need a different extension and magic number. > Or is the restriction an intentional reservation of the possibility > of making *execution* depend on the flag? Which would mean that the > restriction should be kept and only the doc changed? > > The docs should get updated to be more clear. -- Terry Jan Reedy From mark at hotpy.org Wed Jun 13 23:37:29 2012 From: mark at hotpy.org (Mark Shannon) Date: Wed, 13 Jun 2012 22:37:29 +0100 Subject: [Python-Dev] Tunable parameters in dictobject.c (was dictnotes.txt out of date?) In-Reply-To: References: <5DC7D3C2-4BE4-4ECC-A037-7FC5E3244DA8@gmail.com> Message-ID: <4FD90819.4040305@hotpy.org> Raymond Hettinger wrote: > > On Jun 13, 2012, at 10:35 AM, Eli Bendersky wrote: > >> Did you mean to send this to the list, Raymond? > > > Yes. I wanted to find out whether someone approved changing > all the dict tunable parameters. I thought those weren't supposed > to have changed. PEP 412 notes that the existing parameters > were finely tuned and it did not make recommendations for changing them. The PEP says that the current (3.2) implementation is finely tuned. No mention is made of tunable parameters. > > At one point, I spent a full month testing all of the tunable parameters > using dozens of popular Python applications. The values used in Py3.2 > represent the best settings for most apps. Best in terms of speed, but what about memory use? > They should not have been > changed without deep thought and substantial testing. I have thought about it, a lot :) I have also tested it, using the Unladen Swallow benchmarks. Additional (realistic) tests would be appreciated. > > The reduction of the dict min size has an immediate impact on code > using multiple keyword arguments. There is no change to the minimum size for combined-table (old style) dictionaries. > > The reduced growth rate (from x4 to x2) negatively impacts apps that > have dicts with a steady size but constantly changing keys > (removing an old key and adding a new one). The lru_cache is > an example. The reduced growth causes it to resize much more > frequently than before. Resizing was probably the part of the implementation that took the most time. I had gained the impression from comments on this list, comments on the tracker and the web in general that the memory consumption of CPython was more of an issue than its performance. So my goal was to minimise memory usage without any significant slowdown (< 1% slowdown). The problem with resizing is that you don't know when it is going to stop. A bigger growth factor means fewer resizes, but more of a potential overshoot (a larger final size than required). So in order to reduce memory use a growth factor of x2 is required. Split-table dicts are (to a first-order approximation) never resized so the growth factor for them should be as small as possible; x2. Combined-table dicts are more challenging. Reducing the growth rate to x2 increases the number of resizes by x2 or (x2-1) *and* increases the number of items per resize by about 50%. But it is not the number of resizes that matters, it is the time spent performing those resizes. I went to considerable effort to improve the performance of dictresize() so that even benchmarks that create a lot of short-lived medium-to-large sized combined-table dicts do not suffer impaired performance. It would be possible to split the growth factor into two: one for split-tables (which would be x2) and one for combined tables.
But which is better for combined tables, x4 or x2? What is the relative value of speed and memory consumption? 50% less memory and 1% slower is good. 1% less memory and 50% slower is bad. But what about intermediate values? I think that for combined tables a growth factor of x2 is best, but I don't have any hard evidence to back that up. > > I think the tunable parameters should be switched back to what they > were before. Tim and others spent a lot of time getting those right > and my month of detailed testing confirmed that those were excellent > choices. They may have been excellent choices for the previous implementation. They are not necessarily the best for the new implementation. The current parameters seem to be the best for the new implementation. When Django and Twisted run on Python3, then it might be worthwhile to do some more experimentation. > > The introduction of Mark's shared-key dicts was orthogonal to the > issue of correct parameter settings. Those parameters did not have > to change and probably should not have been changed. Parameters for tuning code and the code itself are unlikely to be orthogonal. While I did strive to minimise the impact of the changes on combined-table dicts, the performance characteristics have necessarily changed. Cheers, Mark. From cs at zip.com.au Thu Jun 14 00:13:16 2012 From: cs at zip.com.au (Cameron Simpson) Date: Thu, 14 Jun 2012 08:13:16 +1000 Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython In-Reply-To: References: Message-ID: <20120613221316.GA26765@cskk.homeip.net> On 11Jun2012 15:35, PJ Eby wrote: | Yes, perhaps if the list were *just* a place to cc: in or send a heads-up | to python-dev discussions, and not to have actual list discussions per se, | that would do the trick. This approach has its own problems. Is the proposed list, like many lists, restricted to accept posts only from subscribers? If that is the case, when someone CCs the VM list, everyone honouring the CC in replies needs to be a VM list member if they are not to get annoying bounce messages. It is a core reason that mailing list cross posting is discouraged in many lists; the receivers are not all members of all the cross posted lists, and so get lots of shrapnel when they try to participate. Personally, I am _for_ cross posting (when on topic), except for this unfortunate issue. | IOW, the idea is, "If you're a contributor to a non-CPython implementation, | subscribe here to get a heads-up on Python-Dev discussions you should be | following." Not, "here's a list to discuss Python implementations in | general", and definitely not a place to *actually conduct discussions* at | all: +1 to this, but how to keep this etiquette maintained? | the only things ever posted there should be cc:'d from or to | Python-Dev, or be pointers to Python-Dev threads. And (premised on my concern above), do people wanting to CC: the VM list for the heads-up purpose need to join it first? Conversely, some of this discussion mentions that people don't subscribe to python-dev; do they need to subscribe to chime in when the bat signal goes off? To reiterate, I'm all for the bat signal, but will there be shrapnel? Hackish idea: suppose there were a special purpose mail forwarder, like a write-only mailing list? 
It would require special Mailman hackery, but imagine: - a bat-signal list, which rejected posts not from members of python-dev - accepted messages get forwarded to all the relevant VM-specific lists, but with a rewritten "From:" line of "bat-signal at python.org" or such, and we subscribe _that_ address to the relevant lists to allow it in. And replies directed to "<>", as done with bounce messages, perhaps? Or python-dev, probably better. - if the rewritten "From:" address is the bat-signal list itself (pleasing), the copies sent back are dropped (special hack - drop inbound from the list address) This mechanical approach would get you access control to block spam to the bat-signal and send alerts to the other lists, and send discussion back to python-dev. Cheers, -- Cameron Simpson Nothing is impossible for the man who doesn't have to do it. From alexander.belopolsky at gmail.com Thu Jun 14 02:09:23 2012 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Wed, 13 Jun 2012 20:09:23 -0400 Subject: [Python-Dev] TZ-aware local time In-Reply-To: <871ullyuvs.fsf@benfinney.id.au> References: <8762axz3dv.fsf@benfinney.id.au> <871ullyuvs.fsf@benfinney.id.au> Message-ID: On Tue, Jun 12, 2012 at 1:14 AM, Ben Finney wrote: >> To the contrary, without the POSIX timestamp model to define the >> equivalency between the same point in time expressed using different >> timezones, sane comparisons and arithmetic on timestamps would be >> impossible. > > Why is the POSIX timestamp model the only possible model? To the > contrary, there are many representations with different tradeoffs but > with the common properties you name ("equivalency between the same point > in time expressed using different timezones"). Here is my take on this. If datetime objects did not support fractions of a second, the difference between naive datetime and POSIX timestamp would be just a matter of tradeoff between human readability and efficient storage and fast calculations. This is very similar to a more familiar tradeoff between decimal and binary representation of numbers. Binary arithmetics is more efficient in terms of both storage and processing, so it is common to convert from decimal to binary on input and convert back on output. In some applications (e.g. hand-held calculators), I/O dominates internal processing, so implementing direct decimal arithmetics is not uncommon. The original designers of the datetime module chose the "decimal" internal format. Equivalent functionality can be implemented using "binary" format and in fact the popular mxDateTime library is implemented that way. (Since we do support fractions of a second, there may be a small difference in calculations performed using datetime types and float timestamps, but this issue has nothing to do with time zones, local time or UTC.) It is a common misconception that POSIX timestamps are somehow more closely tied to UTC than broken down time values. The opposite is true. At the end of this month, UTC clocks (e.g. http://tycho.usno.navy.mil/simpletime.html) will show 2012-06-30 23:59:59, 2012-06-30 23:59:60, 2012-07-01 00:00:00, while corresponding POSIX timestamps are 1341100799, 1341100800, 1341100800. Most POSIX systems will not freeze their clocks for a whole second or move them back, but instead they will be one second ahead until someone (or something like the NTP daemon) causes the adjustment.
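You can see the repeated timestamp from Python with calendar.timegm(), which performs exactly this kind of leap-second-oblivious arithmetic (a quick sketch; the values are easy to check by hand):

    import calendar
    calendar.timegm((2012, 6, 30, 23, 59, 59, 0, 0, 0))  # -> 1341100799
    calendar.timegm((2012, 6, 30, 23, 59, 60, 0, 0, 0))  # -> 1341100800 (the leap second)
    calendar.timegm((2012, 7, 1, 0, 0, 0, 0, 0, 0))      # -> 1341100800 (same value again)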
In an earlier message, Guido criticized the practice of converting local broken down time to an integer using POSIX algorithm for calculations. While keeping time using local timezone is often not the best choice, there is nothing wrong with implementing local time arithmetics using timegm/gmtime conversion to integers. In fact, the results will be exactly the same as if the calculations were performed using datetime module with tzinfo=LocalTimezone. I think many users are confused by references to "Seconds Since the Epoch" and think that time_t should contain an actual number of SI seconds elapsed since a world-wide radio broadcast of "1970-01-01T00:00:00+0000". First of all, there was no such broadcast in 1970, but the time known as UTC 1970-01-01T00:00:00+0000 was 35 or so seconds earlier than time.time() seconds ago. The POSIX standard gets away hiding the true nature of UTC by defining the integral timestamp as "a value that *approximates* the number of seconds that have elapsed since the Epoch." The choice of approximation is specified by an explicit formula: tm_sec + tm_min*60 + tm_hour*3600 + tm_yday*86400 + (tm_year-70)*31536000 + ((tm_year-69)/4)*86400 - ((tm_year-1)/100)*86400 + ((tm_year+299)/400)*86400 This formula can be used to pack any broken down time value in an integer. As long as one does not care about leap seconds, any broken down time value can be converted to an integer and back using this formula. If the broken down value was in local time, the values stored in an integer will represent local time. If the broken down value was in UTC, the values stored in an integer will represent UTC. Either value will "approximate the number of seconds that have elapsed since the Epoch" given the right definition of the Epoch and sufficient tolerance for the approximation. The bottom line: POSIX time stamps and datetime objects (interpreted as UTC) implement the same timescale. The only difference is that POSIX timestamps can be stored using fewer bytes at the expense of not having an obvious meaning. While conversion from UTC to local time may require using POSIX timestamps internally, there is no need to expose them in datetime module interfaces. From steve at pearwood.info Thu Jun 14 02:55:59 2012 From: steve at pearwood.info (Steven D'Aprano) Date: Thu, 14 Jun 2012 10:55:59 +1000 Subject: [Python-Dev] #12982: Should -O be required to *read* .pyo files? In-Reply-To: <20120613175810.D7A032500AC@webabinitio.net> References: <20120613175810.D7A032500AC@webabinitio.net> Message-ID: <20120614005559.GA29549@ando> On Wed, Jun 13, 2012 at 01:58:10PM -0400, R. David Murray wrote: > So, is there any reason to not use the .pyo file (if that's all that is > around) when -O is not specified? .pyo and .pyc files have potentially different semantics. Right now, .pyo files don't include asserts, so that's one difference right there. In the future there may be more aggressive optimizations. Good practice is to never write an assert that actually changes the semantics of your program, but in practice people don't write asserts correctly, e.g. they use them for checking user-input or function parameters. So, no, we should never use .pyo files unless explicitly told to do so, since doing so risks breaking poorly-written but otherwise working code. 
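A made-up example of the kind of misuse I mean:

    # Broken by design: the input check silently vanishes under -O.
    def set_age(record, age):
        assert age >= 0, "age must not be negative"
        record["age"] = age

    # This version behaves identically with and without -O:
    def set_age_checked(record, age):
        if age < 0:
            raise ValueError("age must not be negative")
        record["age"] = age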
-- Steven From raymond.hettinger at gmail.com Thu Jun 14 03:15:16 2012 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Wed, 13 Jun 2012 18:15:16 -0700 Subject: [Python-Dev] Tunable parameters in dictobject.c (was dictnotes.txt out of date?) In-Reply-To: <4FD90819.4040305@hotpy.org> References: <5DC7D3C2-4BE4-4ECC-A037-7FC5E3244DA8@gmail.com> <4FD90819.4040305@hotpy.org> Message-ID: <3E06E9BD-B65F-4783-B66F-105353FBDE61@gmail.com> On Jun 13, 2012, at 2:37 PM, Mark Shannon wrote: > I think that for combined tables a growth factor of x2 is best, > but I don't have any hard evidence to back that up. I believe that change should be reverted. You've undone work that was based on extensive testing and timings of many python apps. In particular, it harms the speed of building-up all large dictionaries, and it greatly harms apps with steady-size dictionaries with changing keys. The previously existing parameter were well studied and have been well-reviewed by the likes of Tim Peters. They shouldn't be changed without deep thought and study. Certainly, "I think a growth factor of x2 is best" is insufficient. Raymond P.S. In the tracker discussion of key-sharing dict, you were asked to NOT change the tunable parameters. I'm not sure why you went ahead and did it anyway. -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephen at xemacs.org Thu Jun 14 03:31:07 2012 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Thu, 14 Jun 2012 10:31:07 +0900 Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython In-Reply-To: <20120613221316.GA26765@cskk.homeip.net> References: <20120613221316.GA26765@cskk.homeip.net> Message-ID: <871uli7k84.fsf@uwakimon.sk.tsukuba.ac.jp> Cameron Simpson writes: > This approach has its own problems. Is the proposed list, like many lists, > restricted to accept posts only from subscribers? If that is the case, > when someone CCs the VM list, everyone honouring the CC in replies needs > to be a VM list member if they are not to get annoying bounce > messages. Mailman has a feature called "sibling lists", which seems to include cross-checking membership on posts, although I'm not sure if it is tuned to exactly this purpose. In any cases, if the proposed list is not a discussion list, it can be configured not to send bounce messages just because somebody honored CC. For example, by keying on the Reference and In-Reply-To headers, and discarding the message if they are present (possible by ordinary configuration of the spam filters). For bonus points, bounce such messages when python-dev is not present among the visible addressees. (Might require a special Handler, but it wouldn't be a big deal to write it because it can be installed only for the new list.) > +1 to this, but how to keep this etiquette maintained? A filter on the new list, implemented as above. It's pretty much trivial for those with the know-how. > And (premised on my concern above), do people wanting to CC: the VM list > for the heads-up purpose need to join it first? Probably not, but this is hardly burdensome; many people with the background to know when to CC: may want to subscribe anyway even if they are subscribed to python-dev, and traffic should be quite low. If even so that's a bother, they can set their subscription to no-mail. > Conversely, some of this discussion mentions that people don't subscribe > to python-dev; do they need to subscribe to chime in when the bat signal > goes off? 
Maybe not: I believe it's possible to post to python-dev via Gmane if you're not subscribed. Even if they need to be subscribed, there is a wealth of options, including Gmane and the archives, for reading traffic without receiving it as mail (ie, by subscribing and then setting the no-mail flag). > Hackish idea: suppose there were a special purpose mail forwarder, > like a write-only mailing list? It would require special Mailman > hackery, Not that special. From brian.curtin at gmail.com Thu Jun 14 03:45:00 2012 From: brian.curtin at gmail.com (Brian Curtin) Date: Wed, 13 Jun 2012 20:45:00 -0500 Subject: [Python-Dev] backporting stdlib 2.7.x from pypy to cpython In-Reply-To: <871uli7k84.fsf@uwakimon.sk.tsukuba.ac.jp> References: <20120613221316.GA26765@cskk.homeip.net> <871uli7k84.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: On Jun 13, 2012 8:31 PM, "Stephen J. Turnbull" wrote: > > Cameron Simpson writes: > > > This approach has its own problems. Is the proposed list, like many lists, > > restricted to accept posts only from subscribers? If that is the case, > > when someone CCs the VM list, everyone honouring the CC in replies needs > > to be a VM list member if they are not to get annoying bounce > > messages. > > Mailman has a feature called "sibling lists", which seems to include > cross-checking membership on posts, although I'm not sure if it is > tuned to exactly this purpose. > > In any cases, if the proposed list is not a discussion list, it can be > configured not to send bounce messages just because somebody honored > CC. For example, by keying on the Reference and In-Reply-To headers, > and discarding the message if they are present (possible by ordinary > configuration of the spam filters). For bonus points, bounce such > messages when python-dev is not present among the visible addressees. > (Might require a special Handler, but it wouldn't be a big deal to > write it because it can be installed only for the new list.) > > > +1 to this, but how to keep this etiquette maintained? > > A filter on the new list, implemented as above. It's pretty much > trivial for those with the know-how. > > > And (premised on my concern above), do people wanting to CC: the VM list > > for the heads-up purpose need to join it first? > > Probably not, but this is hardly burdensome; many people with the > background to know when to CC: may want to subscribe anyway even if > they are subscribed to python-dev, and traffic should be quite low. > If even so that's a bother, they can set their subscription to no-mail. > > > Conversely, some of this discussion mentions that people don't subscribe > > to python-dev; do they need to subscribe to chime in when the bat signal > > goes off? > > Maybe not: I believe it's possible to post to python-dev via Gmane if > you're not subscribed. Even if they need to be subscribed, there is a > wealth of options, including Gmane and the archives, for reading > traffic without receiving it as mail (ie, by subscribing and then > setting the no-mail flag). > > > Hackish idea: suppose there were a special purpose mail forwarder, > > like a write-only mailing list? It would require special Mailman > > hackery, > > Not that special. > Can someone create python-dev-meta? -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Thu Jun 14 03:48:08 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 14 Jun 2012 11:48:08 +1000 Subject: [Python-Dev] #12982: Should -O be required to *read* .pyo files? 
In-Reply-To: References: <20120613175810.D7A032500AC@webabinitio.net> <20120613182024.GE4682@unaka.lan> <20120613204650.09ccbe92@pitrou.net> Message-ID: On Thu, Jun 14, 2012 at 6:06 AM, Terry Reedy wrote: > On 6/13/2012 2:46 PM, Antoine Pitrou wrote: > >> Not only docstrings, but also asserts. I think running a pyo without -O >> would be a bug. > > > That cat is already out of the bag ;-) > People are doing that now by renaming x.pyo to x.pyc. > Brett claims that it is also easy to do in 3.3 with a custom importer. Right, but by resorting to either of those approaches, people are clearly doing something that isn't formally supported by the core. Yes, you can do it, and most of the time it will work out OK, but any weird glitches that result are officially *not our problem*. The main reason this matters is that the "__debug__" flag is *supposed* to be process global - if you check it in one place, the answer should be correct for all Python code loaded in the process. If you load a .pyo file into a process running without -O (or a .pyc file into a process running *with* -O), then you have broken that assumption. Because the compiler understands __debug__, and is explicitly free to make optimisations based on the value of that flag at compile time (such as throwing away unreachable branches in if statements or applying constant folding operations), the following code will do different things if loaded from a .pyo file instead of .pyc: print("__debug__ is not a builtin, it is checked at compile time") if __debug__: print("A .pyc file always has __debug__ == True") else: print("A .pyo file always has __debug__ == False") $ ./python -c "import foo" __debug__ is not a builtin, it is checked at compile time A .pyc file always has __debug__ == True $ ./python -O -c "import foo" __debug__ is not a builtin, it is checked at compile time A .pyo file always has __debug__ == False $ ./python __pycache__/foo.cpython-33.pyo __debug__ is not a builtin, it is checked at compile time A .pyo file always has __debug__ == False $ ./python -O __pycache__/foo.cpython-33.pyc __debug__ is not a builtin, it is checked at compile time A .pyc file always has __debug__ == True Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From tjreedy at udel.edu Thu Jun 14 03:54:30 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 13 Jun 2012 21:54:30 -0400 Subject: [Python-Dev] #12982: Should -O be required to *read* .pyo files? In-Reply-To: <20120614005559.GA29549@ando> References: <20120613175810.D7A032500AC@webabinitio.net> <20120614005559.GA29549@ando> Message-ID: On 6/13/2012 8:55 PM, Steven D'Aprano wrote: > On Wed, Jun 13, 2012 at 01:58:10PM -0400, R. David Murray wrote: > >> So, is there any reason to not use the .pyo file (if that's all that is >> around) when -O is not specified? > > .pyo and .pyc files have potentially different semantics. Right now, > .pyo files don't include asserts, so that's one difference right there. > In the future there may be more aggressive optimizations. > > Good practice is to never write an assert that actually changes the > semantics of your program, but in practice people don't write asserts > correctly, e.g. they use them for checking user-input or function > parameters. > > So, no, we You mean the interpreter? > should never use Do you mean import or execute? Current, the interpreter executes any bytecode that gets imported. > .pyo files unless explicitly told to do so, What constitutes 'explicitly told to do so'? 
Currently, an 'optimized' file written as .pyo gets imported (and hence executed) if 1) the interpreter is started with -O 2) a custom importer ignores the absence of -O 3) someone renames x.pyo to x.pyc. > since doing so risks breaking poorly-written but otherwise working code. Agreed, though a slightly different issue. Would you somehow disable 2) or 3) if not considered 'explicit' enough? -- Terry Jan Reedy From tjreedy at udel.edu Thu Jun 14 04:22:00 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 13 Jun 2012 22:22:00 -0400 Subject: [Python-Dev] Tunable parameters in dictobject.c (was dictnotes.txt out of date?) In-Reply-To: <3E06E9BD-B65F-4783-B66F-105353FBDE61@gmail.com> References: <5DC7D3C2-4BE4-4ECC-A037-7FC5E3244DA8@gmail.com> <4FD90819.4040305@hotpy.org> <3E06E9BD-B65F-4783-B66F-105353FBDE61@gmail.com> Message-ID: On 6/13/2012 9:15 PM, Raymond Hettinger wrote: > > On Jun 13, 2012, at 2:37 PM, Mark Shannon wrote: > >> I think that for combined tables a growth factor of x2 is best, >> but I don't have any hard evidence to back that up. > > I believe that change should be reverted. > You've undone work that was based on extensive testing and timings of > many python apps. > In particular, it harms the speed of building-up all large dictionaries, > and it greatly harms apps with steady-size dictionaries with changing keys. > > The previously existing parameter were well studied > and have been well-reviewed by the likes of Tim Peters. > They shouldn't be changed without deep thought and study. > Certainly, "I think a growth factor of x2 is best" is insufficient. The use case 'directly used dict that is possibly the primary data structure and possibly gigabytes in size' is sufficiently different from 'hidden instance dicts that are typically a small, constant size (for a given class) but possibly repeated a million times' that different tuning parameters might be appropriate. So I agree with Raymond and think you should leave the parameters for the standard dicts alone until there is good evidence for a change. (Good == apparent to more than just one person ;-). If you improved the resize speed of standard dicts, great. However, that does not necessarily mean that the resize factor should change. [clipped by Raymond] >> I had gained the impression from comments on this list, >> comments on the tracker and the web in general that the memory >> consumption of CPython was more of an issue than its performance. In 16 years, I seem to have missed that. I assure you there are many who feel the opposite. -- Terry Jan Reedy From rdmurray at bitdance.com Thu Jun 14 04:47:45 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Wed, 13 Jun 2012 22:47:45 -0400 Subject: [Python-Dev] #12982: Should -O be required to *read* .pyo files? In-Reply-To: References: <20120613175810.D7A032500AC@webabinitio.net> <20120613182024.GE4682@unaka.lan> <20120613204650.09ccbe92@pitrou.net> Message-ID: <20120614024746.6C21425009E@webabinitio.net> On Thu, 14 Jun 2012 11:48:08 +1000, Nick Coghlan wrote: > On Thu, Jun 14, 2012 at 6:06 AM, Terry Reedy wrote: > > On 6/13/2012 2:46 PM, Antoine Pitrou wrote:
> > Right, but by resorting to either of those approaches, people are > clearly doing something that isn't formally supported by the core. > Yes, you can do it, and most of the time it will work out OK, but any > weird glitches that result are officially *not our problem*. > > The main reason this matters is that the "__debug__" flag is > *supposed* to be process global - if you check it in one place, the OK, the above are the two concrete reasons I have heard in this thread for continuing the current behavior: 1) we do not wish to support running from .pyo files without -O being on, even if it currently happens to work 2) the __debug__ setting is supposed to be process-global Both of these are good reasons. IMO the issue should be closed with a documentation fix, which could optionally include either or both of the above motivations. --David From yselivanov.ml at gmail.com Thu Jun 14 04:52:43 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Wed, 13 Jun 2012 22:52:43 -0400 Subject: [Python-Dev] PEP 362 Third Revision Message-ID: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> Hello, The new revision of PEP 362 has been posted: http://www.python.org/dev/peps/pep-0362/ Summary: 1. Signature object now represents the call signature of a function. That said, it doesn't have 'name' and 'qualname' attributes anymore, and can be tested for equality against other signatures. 2. signature() function support all kinds of callables: classes, metaclasses, methods, class- & staticmethods, 'functools.partials', and callable objects. If a callable object has a '__signature__' attribute it does a deepcopy of it before return. 3. No implicit caching to __signature__. 4. Added 'Signature.bind_partial' and 'Signature.format' methods. A patch with the PEP implementation is attached to the issue 15008. It should be ready for code review. Thank you! PEP: 362 Title: Function Signature Object Version: $Revision$ Last-Modified: $Date$ Author: Brett Cannon , Jiwon Seo , Yury Selivanov , Larry Hastings Status: Draft Type: Standards Track Content-Type: text/x-rst Created: 21-Aug-2006 Python-Version: 3.3 Post-History: 04-Jun-2012 Abstract ======== Python has always supported powerful introspection capabilities, including introspecting functions and methods (for the rest of this PEP, "function" refers to both functions and methods). By examining a function object you can fully reconstruct the function's signature. Unfortunately this information is stored in an inconvenient manner, and is spread across a half-dozen deeply nested attributes. This PEP proposes a new representation for function signatures. The new representation contains all necessary information about a function and its parameters, and makes introspection easy and straightforward. However, this object does not replace the existing function metadata, which is used by Python itself to execute those functions. The new metadata object is intended solely to make function introspection easier for Python programmers. Signature Object ================ A Signature object represents the call signature of a function and its return annotation. For each parameter accepted by the function it stores a `Parameter object`_ in its ``parameters`` collection. A Signature object has the following public attributes and methods: * return_annotation : object The annotation for the return type of the function if specified. If the function has no annotation for its return type, this attribute is not set. 
* parameters : OrderedDict An ordered mapping of parameters' names to the corresponding Parameter objects (keyword-only arguments are in the same order as listed in ``code.co_varnames``). * bind(\*args, \*\*kwargs) -> BoundArguments Creates a mapping from positional and keyword arguments to parameters. Raises a ``BindError`` (subclass of ``TypeError``) if the passed arguments do not match the signature. * bind_partial(\*args, \*\*kwargs) -> BoundArguments Works the same way as ``bind()``, but allows the omission of some required arguments (mimics ``functools.partial`` behavior). * format(...) -> str Formats the Signature object to a string. Optional arguments allow for custom render functions for parameter names, annotations and default values, along with custom separators. Signature implements the ``__str__`` method, which falls back to the ``Signature.format()`` call. It's possible to test Signatures for equality. Two signatures are equal when they have equal parameters and return annotations. Changes to the Signature object, or to any of its data members, do not affect the function itself. Parameter Object ================ Python's expressive syntax means functions can accept many different kinds of parameters with many subtle semantic differences. We propose a rich Parameter object designed to represent any possible function parameter. The structure of the Parameter object is: * name : str The name of the parameter as a string. * default : object The default value for the parameter, if specified. If the parameter has no default value, this attribute is not set. * annotation : object The annotation for the parameter if specified. If the parameter has no annotation, this attribute is not set. * is_keyword_only : bool True if the parameter is keyword-only, else False. * is_args : bool True if the parameter accepts variable number of arguments (``*args``-like), else False. * is_kwargs : bool True if the parameter accepts variable number of keyword arguments (``**kwargs``-like), else False. * is_implemented : bool True if the parameter is implemented for use. Some platforms implement functions but can't support specific parameters (e.g. "mode" for ``os.mkdir``). Passing in an unimplemented parameter may result in the parameter being ignored, or in NotImplementedError being raised. It is intended that all conditions where ``is_implemented`` may be False be thoroughly documented. Two parameters are equal when all their attributes are equal. BoundArguments Object ===================== Result of a ``Signature.bind`` call. Holds the mapping of arguments to the function's parameters. Has the following public attributes: * arguments : OrderedDict An ordered, mutable mapping of parameters' names to arguments' values. Does not contain arguments' default values. * args : tuple Tuple of positional arguments values. Dynamically computed from the 'arguments' attribute. * kwargs : dict Dict of keyword arguments values. Dynamically computed from the 'arguments' attribute. The ``arguments`` attribute should be used in conjunction with ``Signature.parameters`` for any arguments processing purposes. ``args`` and ``kwargs`` properties can be used to invoke functions: :: def test(a, *, b): ... sig = signature(test) ba = sig.bind(10, b=20) test(*ba.args, **ba.kwargs) Implementation ============== The implementation adds a new function ``signature()`` to the ``inspect`` module. The function is the preferred way of getting a ``Signature`` for a callable object.
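For example, a short illustrative session (the output shown is hypothetical, derived from the semantics described above rather than from any particular build): ::

    >>> from inspect import signature
    >>> def greet(name, *, greeting='Hello'):
    ...     return '{}, {}!'.format(greeting, name)
    >>> sig = signature(greet)
    >>> str(sig)
    "(name, *, greeting='Hello')"
    >>> sig.parameters['greeting'].default
    'Hello'
    >>> ba = sig.bind('Guido', greeting='Hi')
    >>> ba.args, ba.kwargs
    (('Guido',), {'greeting': 'Hi'})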
The function implements the following algorithm: - If the object is not callable - raise a TypeError - If the object has a ``__signature__`` attribute and if it is not ``None`` - return a deepcopy of it - If it is ``None`` and the object is an instance of ``BuiltinFunction``, raise a ``ValueError`` - If the object is an instance of ``FunctionType``: - If it has a ``__wrapped__`` attribute, return ``signature(object.__wrapped__)`` - Or else construct a new ``Signature`` object and return it - If the object is a method or a classmethod, construct and return a new ``Signature`` object, with its first parameter (usually ``self`` or ``cls``) removed - If the object is a staticmethod, construct and return a new ``Signature`` object - If the object is an instance of ``functools.partial``, construct a new ``Signature`` from its ``partial.func`` attribute, and account for already bound ``partial.args`` and ``partial.kwargs`` - If the object is a class or metaclass: - If the object's type has a ``__call__`` method defined in its MRO, return a Signature for it - If the object has a ``__new__`` method defined in its class, return a Signature object for it - If the object has an ``__init__`` method defined in its class, return a Signature object for it - Return ``signature(object.__call__)`` Note that the ``Signature`` object is created in a lazy manner, and is not automatically cached. If, however, the Signature object was explicitly cached by the user, ``signature()`` returns a new deepcopy of it on each invocation. An implementation for Python 3.3 can be found at [#impl]_. The Python issue tracking the patch is [#issue]_. Design Considerations ===================== No implicit caching of Signature objects ---------------------------------------- The first PEP design had a provision for implicit caching of ``Signature`` objects in the ``inspect.signature()`` function. However, this has the following downsides: * If the ``Signature`` object is cached then any changes to the function it describes will not be reflected in it.
However, If the caching is needed, it can be always done manually and explicitly * It is better to reserve the ``__signature__`` attribute for the cases when there is a need to explicitly set to a ``Signature`` object that is different from the actual one Examples ======== Visualizing Callable Objects' Signature --------------------------------------- :: from inspect import signature from functools import partial, wraps class FooMeta(type): def __new__(mcls, name, bases, dct, *, bar:bool=False): return super().__new__(mcls, name, bases, dct) def __init__(cls, name, bases, dct, **kwargs): return super().__init__(name, bases, dct) class Foo(metaclass=FooMeta): def __init__(self, spam:int=42): self.spam = spam def __call__(self, a, b, *, c) -> tuple: return a, b, c print('FooMeta >', str(signature(FooMeta))) print('Foo >', str(signature(Foo))) print('Foo.__call__ >', str(signature(Foo.__call__))) print('Foo().__call__ >', str(signature(Foo().__call__))) print('partial(Foo().__call__, 1, c=3) >', str(signature(partial(Foo().__call__, 1, c=3)))) print('partial(partial(Foo().__call__, 1, c=3), 2, c=20) >', str(signature(partial(partial(Foo().__call__, 1, c=3), 2, c=20)))) The script will output: :: FooMeta > (name, bases, dct, *, bar:bool=False) Foo > (spam:int=42) Foo.__call__ > (self, a, b, *, c) -> tuple Foo().__call__ > (a, b, *, c) -> tuple partial(Foo().__call__, 1, c=3) > (b, *, c=3) -> tuple partial(partial(Foo().__call__, 1, c=3), 2, c=20) > (*, c=20) -> tuple Annotation Checker ------------------ :: import inspect import functools def checktypes(func): '''Decorator to verify arguments and return types Example: >>> @checktypes ... def test(a:int, b:str) -> int: ... return int(a * b) >>> test(10, '1') 1111111111 >>> test(10, 1) Traceback (most recent call last): ... ValueError: foo: wrong type of 'b' argument, 'str' expected, got 'int' ''' sig = inspect.signature(func) types = {} for param in sig.parameters.values(): # Iterate through function's parameters and build the list of # arguments types try: type_ = param.annotation except AttributeError: continue else: if not inspect.isclass(type_): # Not a type, skip it continue types[param.name] = type_ # If the argument has a type specified, let's check that its # default value (if present) conforms with the type. try: default = param.default except AttributeError: continue else: if not isinstance(default, type_): raise ValueError("{func}: wrong type of a default value for {arg!r}". \ format(func=sig.qualname, arg=param.name)) def check_type(sig, arg_name, arg_type, arg_value): # Internal function that incapsulates arguments type checking if not isinstance(arg_value, arg_type): raise ValueError("{func}: wrong type of {arg!r} argument, " \ "{exp!r} expected, got {got!r}". 
\ format(func=sig.qualname, arg=arg_name, exp=arg_type.__name__, got=type(arg_value).__name__)) @functools.wraps(func) def wrapper(*args, **kwargs): # Let's bind the arguments ba = sig.bind(*args, **kwargs) for arg_name, arg in ba.arguments.items(): # And iterate through the bound arguments try: type_ = types[arg_name] except KeyError: continue else: # OK, we have a type for the argument, lets get the corresponding # parameter description from the signature object param = sig.parameters[arg_name] if param.is_args: # If this parameter is a variable-argument parameter, # then we need to check each of its values for value in arg: check_type(sig, arg_name, type_, value) elif param.is_kwargs: # If this parameter is a variable-keyword-argument parameter: for subname, value in arg.items(): check_type(sig, arg_name + ':' + subname, type_, value) else: # And, finally, if this parameter a regular one: check_type(sig, arg_name, type_, arg) result = func(*ba.args, **ba.kwargs) # The last bit - let's check that the result is correct try: return_type = sig.return_annotation except AttributeError: # Looks like we don't have any restriction on the return type pass else: if isinstance(return_type, type) and not isinstance(result, return_type): raise ValueError('{func}: wrong return type, {exp} expected, got {got}'. \ format(func=sig.qualname, exp=return_type.__name__, got=type(result).__name__)) return result return wrapper References ========== .. [#impl] pep362 branch (https://bitbucket.org/1st1/cpython/overview) .. [#issue] issue 15008 (http://bugs.python.org/issue15008) Copyright ========= This document has been placed in the public domain. .. Local Variables: mode: indented-text indent-tabs-mode: nil sentence-end-double-space: t fill-column: 70 coding: utf-8 End: From yselivanov at gmail.com Thu Jun 14 05:06:47 2012 From: yselivanov at gmail.com (Yury Selivanov) Date: Wed, 13 Jun 2012 23:06:47 -0400 Subject: [Python-Dev] PEP 362 Third Revision In-Reply-To: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> Message-ID: <5BE279F4-8E3F-4F92-B08E-93FF046A4816@gmail.com> On 2012-06-13, at 10:52 PM, Yury Selivanov wrote: > 2. signature() function support all kinds of callables: > classes, metaclasses, methods, class- & staticmethods, > 'functools.partials', and callable objects. If a callable > object has a '__signature__' attribute it does a deepcopy > of it before return. Properly decorated functions are also supported. - Yury From tjreedy at udel.edu Thu Jun 14 05:06:53 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 13 Jun 2012 23:06:53 -0400 Subject: [Python-Dev] #12982: Should -O be required to *read* .pyo files? In-Reply-To: <20120614024746.6C21425009E@webabinitio.net> References: <20120613175810.D7A032500AC@webabinitio.net> <20120613182024.GE4682@unaka.lan> <20120613204650.09ccbe92@pitrou.net> <20120614024746.6C21425009E@webabinitio.net> Message-ID: On 6/13/2012 10:47 PM, R. David Murray wrote: > On Thu, 14 Jun 2012 11:48:08 +1000, Nick Coghlan wrote: >> Right, but by resorting to either of those approaches, people are >> clearly doing something that isn't formally supported by the core. That was not clear to me until I read your post -- the key word being formally (or officially). I see now that distributing a sourceless library as a mixture of .pyc and .pyo files is even crazier that I thought. >> Yes, you can do it, and most of the time it will work out OK, but any >> weird glitches that result are officially *not our problem*. 
>> >> The main reason this matters is that the "__debug__" flag is >> *supposed* to be process global - if you check it in one place, the > > OK, the above are the two concrete reasons I have heard in this thread > for continuing the current behavior: > > 1) we do not wish to support running from .pyo files without -O > being on, even if it currently happens to work > > 2) the __debug__ setting is supposed to be process-global > > Both of these are good reasons. IMO the issue should be closed with a > documentation fix, which could optionally include either or both of the > above motivations. I agree. We have gotten what we need from this thread. -- Terry Jan Reedy From steve at pearwood.info Thu Jun 14 05:12:16 2012 From: steve at pearwood.info (Steven D'Aprano) Date: Thu, 14 Jun 2012 13:12:16 +1000 Subject: [Python-Dev] #12982: Should -O be required to *read* .pyo files? In-Reply-To: <20120613191355.C82AD25009E@webabinitio.net> References: <20120613175810.D7A032500AC@webabinitio.net> <20120613182024.GE4682@unaka.lan> <20120613204650.09ccbe92@pitrou.net> <20120613191355.C82AD25009E@webabinitio.net> Message-ID: <20120614031216.GB29549@ando> On Wed, Jun 13, 2012 at 03:13:54PM -0400, R. David Murray wrote: > Again, a program that depends on asserts is buggy. > > As Ethan pointed out we are asking about the case where someone is > *deliberately* setting the .pyo file up to be run as the "normal" > case. You can't be sure that the .pyo file is there due to *deliberate* choice. It may be accidental. Perhaps the end user has ignorantly deleted the .pyc file, but failed to delete the .pyo file. Perhaps the developer has merely made a mistake. Under current behaviour, deleting the .pyc file shouldn't matter: - if the source file is available, that will be used - if not, a clear error is raised Under the proposed change: - if the source file is *newer* than the .pyo file, it will be used - but if it is missing or older, the .pyo file is used This opens a potential mismatch between the code I *think* is being run, and the actual code being run: I think the .py[c] code is running when the .pyo is actually running. Realistically, we should expect that most people don't *sufficiently* test their apps under -O (if at all!) even if they are aware that there are differences in behaviour. I know I don't, and I know I should. This is just a matter of priority: testing without -O is a higher priority for me than testing with -O and -OO. The consequence is that I may then receive a mysterious bug report that I can't duplicate, because the user correctly reports that they are running *without* -O, but unknown to anyone, they are actually running the .pyo file. > I'm not sure we want to support that, I just want us to be clear > about why we don't :) If I receive a bug report that only occurs under -O, then I immediately suspect that the bug has something to do with assert. If I receive a bug report that occurs without -O, under the proposed change I can't be sure with the optimized code or standard code is running. That adds complexity and confusion. -- Steven From steve at pearwood.info Thu Jun 14 05:15:42 2012 From: steve at pearwood.info (Steven D'Aprano) Date: Thu, 14 Jun 2012 13:15:42 +1000 Subject: [Python-Dev] #12982: Should -O be required to *read* .pyo files? 
In-Reply-To: References: <20120613175810.D7A032500AC@webabinitio.net> <20120613182024.GE4682@unaka.lan> <20120613204650.09ccbe92@pitrou.net> Message-ID: <20120614031542.GC29549@ando> On Wed, Jun 13, 2012 at 04:06:22PM -0400, Terry Reedy wrote: > On 6/13/2012 2:46 PM, Antoine Pitrou wrote: > > >Not only docstrings, but also asserts. I think running a pyo without -O > >would be a bug. > > That cat is already out of the bag ;-) > People are doing that now by renaming x.pyo to x.pyc. > Brett claims that it is also easy to do in 3.3 with a custom importer. That's fine. Both steps require an overt, deliberate act, and so is under the control of (and the responsibilty of) the developer. It's not something that could happen innocently by accident. -- Steven From steve at pearwood.info Thu Jun 14 05:25:25 2012 From: steve at pearwood.info (Steven D'Aprano) Date: Thu, 14 Jun 2012 13:25:25 +1000 Subject: [Python-Dev] #12982: Should -O be required to *read* .pyo files? In-Reply-To: References: <20120613175810.D7A032500AC@webabinitio.net> <20120614005559.GA29549@ando> Message-ID: <20120614032525.GD29549@ando> On Wed, Jun 13, 2012 at 09:54:30PM -0400, Terry Reedy wrote: > >So, no, we > > You mean the interpreter? Yes. > >should never use > > Do you mean import or execute? > Current, the interpreter executes any bytecode that gets imported. Both. > >.pyo files unless explicitly told to do so, > > What constitutes 'explicitly told to do so'? Currently, an 'optimized' > file written as .pyo gets imported (and hence executed) if > 1) the interpreter is started with -O > 2) a custom importer ignores the absence of -O > 3) someone renames x.pyo to x.pyc. Any of the above are fine by me. I oppose this one: 4) the interpreter is started without -O but there is no .pyc file. since it can lead to a mismatch between what I (the developer) thinks is being run and what is actually being run (or imported). For the avoidance of doubt, if my end-users secretly rename .pyo to .pyc files, that's my problem, not the Python's interpreter's problem. I don't expect Python to be idiot-proof. -- Steven From ethan at stoneleaf.us Thu Jun 14 05:39:04 2012 From: ethan at stoneleaf.us (Ethan Furman) Date: Wed, 13 Jun 2012 20:39:04 -0700 Subject: [Python-Dev] #12982: Should -O be required to *read* .pyo files? In-Reply-To: <20120614031216.GB29549@ando> References: <20120613175810.D7A032500AC@webabinitio.net> <20120613182024.GE4682@unaka.lan> <20120613204650.09ccbe92@pitrou.net> <20120613191355.C82AD25009E@webabinitio.net> <20120614031216.GB29549@ando> Message-ID: <4FD95CD8.7040404@stoneleaf.us> Steven D'Aprano wrote: > On Wed, Jun 13, 2012 at 03:13:54PM -0400, R. David Murray wrote: > >> Again, a program that depends on asserts is buggy. >> >> As Ethan pointed out we are asking about the case where someone is >> *deliberately* setting the .pyo file up to be run as the "normal" >> case. > > You can't be sure that the .pyo file is there due to *deliberate* > choice. It may be accidental. Perhaps the end user has ignorantly > deleted the .pyc file, but failed to delete the .pyo file. Perhaps the > developer has merely made a mistake. You can't just delete the .pyc file to get the .pyo file to run; remember in 3.x compiled files are kept in a __pycache__ folder, and if there is no .py file the compiled files are ignored (correct me if I'm wrong), so to get the either the .pyc file /or/ the .pyo file to run /without/ a .py file, you have to physically move the compiled file to where the source file should be. 
It could still be accidental, but it's far less likely to be.

> Under current behaviour, deleting the .pyc file shouldn't matter:
>
> - if the source file is available, that will be used
> - if not, a clear error is raised
>
> Under the proposed change:
>
> - if the source file is *newer* than the .pyo file, it will be used
> - but if it is missing or older, the .pyo file is used

Again, not in 3.x.

~Ethan~

From ncoghlan at gmail.com  Thu Jun 14 06:17:02 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 14 Jun 2012 14:17:02 +1000
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To: <5BE279F4-8E3F-4F92-B08E-93FF046A4816@gmail.com>
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
	<5BE279F4-8E3F-4F92-B08E-93FF046A4816@gmail.com>
Message-ID:

On Thu, Jun 14, 2012 at 1:06 PM, Yury Selivanov wrote:
> On 2012-06-13, at 10:52 PM, Yury Selivanov wrote:
>> 2. signature() function supports all kinds of callables:
>> classes, metaclasses, methods, class- & staticmethods,
>> 'functools.partials', and callable objects.  If a callable
>> object has a '__signature__' attribute it does a deepcopy
>> of it before return.
>
> Properly decorated functions are also supported.

I'd like to see the "shared state" decorator from the previous thread
included, as well as a short interactive interpreter session showing
correct reporting of the signature of functools.partial instances.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From alexandre.zani at gmail.com  Thu Jun 14 06:29:12 2012
From: alexandre.zani at gmail.com (Alexandre Zani)
Date: Wed, 13 Jun 2012 21:29:12 -0700
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To: <5BE279F4-8E3F-4F92-B08E-93FF046A4816@gmail.com>
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
	<5BE279F4-8E3F-4F92-B08E-93FF046A4816@gmail.com>
Message-ID:

On Wed, Jun 13, 2012 at 8:06 PM, Yury Selivanov wrote:
> On 2012-06-13, at 10:52 PM, Yury Selivanov wrote:
>> 2. signature() function supports all kinds of callables:
>> classes, metaclasses, methods, class- & staticmethods,
>> 'functools.partials', and callable objects.  If a callable
>> object has a '__signature__' attribute it does a deepcopy
>> of it before return.
>
> Properly decorated functions are also supported.

This is really exciting! A couple of questions/points:

Why do we look at __wrapped__ only if the object is a FunctionType?
Why not support __wrapped__ on all callables?

Why special-case functools.partial? Couldn't functools.partial just
set __signature__ itself? Is that because of inspect's dependency on
functools?

Just a thought: Do we want to include the docstring? A function's
docstring is often intimately tied to its signature. (Or at least, a
lot of us try to write docstrings that effectively describe the
function's signature.)

From ncoghlan at gmail.com  Thu Jun 14 12:00:27 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 14 Jun 2012 20:00:27 +1000
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To:
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
	<5BE279F4-8E3F-4F92-B08E-93FF046A4816@gmail.com>
Message-ID:

On Jun 14, 2012 2:31 PM, "Alexandre Zani" wrote:
> Why do we look at __wrapped__ only if the object is a FunctionType?
> Why not support __wrapped__ on all callables?

Fair question - duck typing here makes more sense to me, too.
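Something like this untested sketch is all I'd expect it to take (the
helper name is illustrative, not an existing API):

    def unwrap(obj):
        # Follow the __wrapped__ chain on any callable, not just
        # plain functions.
        while hasattr(obj, '__wrapped__'):
            obj = obj.__wrapped__
        return obj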
> Why special-case functools.partial? Couldn't functools.partial just
> set __signature__ itself? Is that because of inspect's dependency on
> functools?

Got it in one. Really, it's the same reason we don't burden the
builtin callable machinery with it - to ensure the dependencies only
flow in one direction.

> Just a thought: Do we want to include the docstring? A function's
> docstring is often intimately tied to its signature. (Or at least, a
> lot of us try to write docstrings that effectively describe the
> function's signature.)

No, combining the signature with other details like the name and
docstring is the task of higher level interfaces like pydoc.

Cheers,
Nick.

--
Sent from my phone, thus the relative brevity :)

From fijall at gmail.com  Thu Jun 14 12:11:42 2012
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Thu, 14 Jun 2012 12:11:42 +0200
Subject: [Python-Dev] #12982: Should -O be required to *read* .pyo files?
In-Reply-To: <20120613191355.C82AD25009E@webabinitio.net>
References: <20120613175810.D7A032500AC@webabinitio.net>
	<20120613182024.GE4682@unaka.lan>
	<20120613204650.09ccbe92@pitrou.net>
	<20120613191355.C82AD25009E@webabinitio.net>
Message-ID:

On Wed, Jun 13, 2012 at 9:13 PM, R. David Murray wrote:
> On Wed, 13 Jun 2012 20:46:50 +0200, Antoine Pitrou wrote:
> > On Wed, 13 Jun 2012 11:20:24 -0700
> > Toshio Kuratomi wrote:
> > > On Wed, Jun 13, 2012 at 01:58:10PM -0400, R. David Murray wrote:
> > > >
> > > > OK, but you didn't answer the question :). If I understand
> > > > correctly, everything you said applies to *writing* the bytecode,
> > > > not reading it.
> > > >
> > > > So, is there any reason to not use the .pyo file (if that's all
> > > > that is around) when -O is not specified?
> > > >
> > > > The only technical reason I can see why -O should be required for
> > > > a .pyo file to be used (*if* it is the only thing around) is if it
> > > > won't *run* without the -O switch. Is there any expectation that
> > > > that will ever be the case?
> > > >
> > > Yes. For instance, if I create a .pyo with -OO it wouldn't have
> > > docstrings. Another piece of code can legally import that and try
> > > to use the docstring for something. This would fail if only the
> > > .pyo was present.
> >
> > Not only docstrings, but also asserts. I think running a pyo without
> > -O would be a bug.
>
> Again, a program that depends on asserts is buggy.
>
> As Ethan pointed out we are asking about the case where someone is
> *deliberately* setting the .pyo file up to be run as the "normal"
> case.
>
> I'm not sure we want to support that, I just want us to be clear
> about why we don't :)

The PyPy toolchain is an example of such a buggy program. And so are
any tests. I would not be impressed if my Python read .pyo files out
of nowhere when not running with the -O flag (I try very hard to never
run python with -O, because it's a different python after all).
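The -OO docstring difference Toshio mentions above is a one-liner to
check: compile()'s optimize argument (available since 3.2) mirrors the
command-line switch. A rough session:

    >>> ns = {}
    >>> code = compile("def f():\n    'some doc'\n", '<demo>', 'exec',
    ...                optimize=2)
    >>> exec(code, ns)
    >>> print(ns['f'].__doc__)
    None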
From solipsis at pitrou.net  Thu Jun 14 12:25:24 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Thu, 14 Jun 2012 12:25:24 +0200
Subject: [Python-Dev] deprecating .pyo and -O
References: <20120613175810.D7A032500AC@webabinitio.net>
	<20120613182024.GE4682@unaka.lan>
	<20120613204650.09ccbe92@pitrou.net>
	<20120613191355.C82AD25009E@webabinitio.net>
	<4FD8EBD7.7090500@stoneleaf.us>
Message-ID: <20120614122524.01a53741@pitrou.net>

On Wed, 13 Jun 2012 12:36:55 -0700
Ethan Furman wrote:
>
> Currently, the alternative to supporting this behavior is to either:
>
> 1) require the end-user to specify -O (major nuisance)
>
> or
>
> 2) have the distributor rename the .pyo file to .pyc
>
> I think 1 is a non-starter (non-finisher? ;) but I could live with 2 --
> after all, if someone is going to the effort of removing the .py file
> and moving the .pyo file into its place, renaming the .pyo to .pyc is
> trivial.
>
> So the question, then, is: is option 2 better than just supporting .pyo
> files without -O when they are all that is available?

Honestly, I think the best option would be to deprecate .pyo files as
well as the useless -O option. They only cause confusion without
providing any significant benefits.

(also, they ironically make Python installs bigger, since both .pyc
and .pyo files have to be provided by system packages)

Regards

Antoine.

From mark at hotpy.org  Thu Jun 14 12:45:31 2012
From: mark at hotpy.org (Mark Shannon)
Date: Thu, 14 Jun 2012 11:45:31 +0100
Subject: [Python-Dev] Tunable parameters in dictobject.c (was dictnotes.txt out of date?)
In-Reply-To: <3E06E9BD-B65F-4783-B66F-105353FBDE61@gmail.com>
References: <5DC7D3C2-4BE4-4ECC-A037-7FC5E3244DA8@gmail.com>
	<4FD90819.4040305@hotpy.org>
	<3E06E9BD-B65F-4783-B66F-105353FBDE61@gmail.com>
Message-ID: <4FD9C0CB.3040005@hotpy.org>

Raymond Hettinger wrote:
>
> On Jun 13, 2012, at 2:37 PM, Mark Shannon wrote:
>
>> I think that for combined tables a growth factor of x2 is best,
>> but I don't have any hard evidence to back that up.
>
> I believe that change should be reverted.
> You've undone work that was based on extensive testing and timings of
> many python apps.
> In particular, it harms the speed of building-up all large
> dictionaries, and it greatly harms apps with steady-size dictionaries
> with changing keys.
>
> The previously existing parameters were well studied
> and have been well-reviewed by the likes of Tim Peters.
> They shouldn't be changed without deep thought and study.
> Certainly, "I think a growth factor of x2 is best" is insufficient.

Indeed, "I think a growth factor of x2 is best" is insufficient, but
so is "based on extensive testing and timings of many python apps"
unless you provide those timings and apps.

So here is some evidence. I have compared tip (always resize by x2)
with a x4 variant (resize split dicts by x2 and combined dicts by x4).
All benchmarks are from http://hg.python.org/benchmarks/

For my old 32-bit machine, the numbers are for the x4 variant relative
to tip.

For the 2n3 suite (24 micro-benchmarks):
Average speed up: none (~0.05% on average)
Average memory use: +4%.

GC: 1% faster, no change to memory use.
Mako: 4% slower, 4% more memory.
2to3: 3% faster, 32% more memory.

Overall: no change to speed, 5% more memory.

The results seem to indicate that resizing is now sufficiently fast
that changing from x4 to x2 makes no difference in terms of speed.
However, for some programs (notably 2to3) the change from x4 to x2 can
save a lot of memory.
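For intuition, a toy model of the resize count (untested, and it
deliberately ignores dictobject.c details such as dummy entries):

    def resizes(n, factor, minsize=8):
        # Count table resizes while inserting n items, assuming a
        # resize whenever the table would pass 2/3 full.
        size, count = minsize, 0
        while n > (2 * size) // 3:
            size *= factor
            count += 1
        return count, size

    for n in (100, 10000, 1000000):
        print(n, resizes(n, 2), resizes(n, 4))

x2 roughly doubles the number of resizes, but finishes with a much
tighter table, which fits the memory numbers above.

Cheers,
Mark.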
From yselivanov at gmail.com  Thu Jun 14 13:45:28 2012
From: yselivanov at gmail.com (Yury Selivanov)
Date: Thu, 14 Jun 2012 07:45:28 -0400
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To:
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
	<5BE279F4-8E3F-4F92-B08E-93FF046A4816@gmail.com>
Message-ID: <5EE14352-B60B-45E5-A44A-C93D2968E986@gmail.com>

On 2012-06-14, at 12:17 AM, Nick Coghlan wrote:
> On Thu, Jun 14, 2012 at 1:06 PM, Yury Selivanov wrote:
>> On 2012-06-13, at 10:52 PM, Yury Selivanov wrote:
>>> 2. signature() function supports all kinds of callables:
>>> classes, metaclasses, methods, class- & staticmethods,
>>> 'functools.partials', and callable objects.  If a callable
>>> object has a '__signature__' attribute it does a deepcopy
>>> of it before return.
>>
>> Properly decorated functions are also supported.
>
> I'd like to see the "shared state" decorator from the previous thread
> included, as well as a short interactive interpreter session showing
> correct reporting of the signature of functools.partial instances.

OK. Below is how I want to update the PEP. Do you want to include
anything else?


Visualizing Callable Objects' Signatures
----------------------------------------

Let's define some classes and functions:

::

    from inspect import signature
    from functools import partial, wraps


    class FooMeta(type):
        def __new__(mcls, name, bases, dct, *, bar:bool=False):
            return super().__new__(mcls, name, bases, dct)

        def __init__(cls, name, bases, dct, **kwargs):
            return super().__init__(name, bases, dct)


    class Foo(metaclass=FooMeta):
        def __init__(self, spam:int=42):
            self.spam = spam

        def __call__(self, a, b, *, c) -> tuple:
            return a, b, c


    def shared_vars(*shared_args):
        """Decorator factory that defines shared variables that are
        passed to every invocation of the function"""

        def decorator(f):
            @wraps(f)
            def wrapper(*args, **kwds):
                full_args = shared_args + args
                return f(*full_args, **kwds)
            # Override signature
            sig = wrapper.__signature__ = signature(f)
            for __ in shared_args:
                sig.parameters.popitem(last=False)
            return wrapper
        return decorator


    @shared_vars({})
    def example(_state, a, b, c):
        return _state, a, b, c


    def format_signature(obj):
        return str(signature(obj))


Now, in the python REPL:

::

    >>> format_signature(FooMeta)
    '(name, bases, dct, *, bar:bool=False)'

    >>> format_signature(Foo)
    '(spam:int=42)'

    >>> format_signature(Foo.__call__)
    '(self, a, b, *, c) -> tuple'

    >>> format_signature(Foo().__call__)
    '(a, b, *, c) -> tuple'

    >>> format_signature(partial(Foo().__call__, 1, c=3))
    '(b, *, c=3) -> tuple'

    >>> format_signature(partial(partial(Foo().__call__, 1, c=3), 2, c=20))
    '(*, c=20) -> tuple'

    >>> format_signature(example)
    '(a, b, c)'

    >>> format_signature(partial(example, 1, 2))
    '(c)'

    >>> format_signature(partial(partial(example, 1, b=2), c=3))
    '(b=2, c=3)'

-
Yury

From kristjan at ccpgames.com  Thu Jun 14 13:32:17 2012
From: kristjan at ccpgames.com (Kristján Valur Jónsson)
Date: Thu, 14 Jun 2012 11:32:17 +0000
Subject: [Python-Dev] Tunable parameters in dictobject.c (was dictnotes.txt out of date?)
In-Reply-To: <4FD90819.4040305@hotpy.org>
References: <5DC7D3C2-4BE4-4ECC-A037-7FC5E3244DA8@gmail.com>
	<4FD90819.4040305@hotpy.org>
Message-ID:

FYI, I had to visit those parameters for my PS3 port and cut down on
the bombastic memory footprint of 2.7 dicts. Some of the supposedly
tunable parameters aren't tunable at all. See
http://blog.ccpgames.com/kristjan/2012/04/25/optimizing-the-dict/

Speed and memory are most often conflicting requirements.
I would like two or more compile-time settings to choose from: memory
optimal, speed optimal (and a mix).

Cheers,

K

> -----Original Message-----
> From: python-dev-bounces+kristjan=ccpgames.com at python.org
> [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On
> Behalf Of Mark Shannon
> Sent: 13. júní 2012 21:37
> To: python-dev at python.org
> Subject: [Python-Dev] Tunable parameters in dictobject.c (was
> dictnotes.txt out of date?)
>
> Raymond Hettinger wrote:
> >
> > On Jun 13, 2012, at 10:35 AM, Eli Bendersky wrote:
> >
> >> Did you mean to send this to the list, Raymond?
> >
> > Yes. I wanted to find out whether someone approved changing
> > all the dict tunable parameters. I thought those weren't supposed
> > to have changed. PEP 412 notes that the existing parameters were
> > finely tuned and it did not make recommendations for changing them.
>
> The PEP says that the current (3.2) implementation is finely tuned.
> No mention is made of tunable parameters.
>
> > At one point, I spent a full month testing all of the tunable
> > parameters using dozens of popular Python applications. The values
> > used in Py3.2 represent the best settings for most apps.
>
> Best in terms of speed, but what about memory use?

From yselivanov at gmail.com  Thu Jun 14 13:50:45 2012
From: yselivanov at gmail.com (Yury Selivanov)
Date: Thu, 14 Jun 2012 07:50:45 -0400
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To:
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
	<5BE279F4-8E3F-4F92-B08E-93FF046A4816@gmail.com>
Message-ID: <88BE3834-4B9E-416D-A99F-FC5F30EBB896@gmail.com>

On 2012-06-14, at 12:29 AM, Alexandre Zani wrote:
> Why do we look at __wrapped__ only if the object is a FunctionType?
> Why not support __wrapped__ on all callables?

Good idea ;)  I'll add this.

Thanks,
-
Yury

From flub at devork.be  Thu Jun 14 13:58:16 2012
From: flub at devork.be (Floris Bruynooghe)
Date: Thu, 14 Jun 2012 12:58:16 +0100
Subject: [Python-Dev] deprecating .pyo and -O
In-Reply-To: <20120614122524.01a53741@pitrou.net>
References: <20120613175810.D7A032500AC@webabinitio.net>
	<20120613182024.GE4682@unaka.lan>
	<20120613204650.09ccbe92@pitrou.net>
	<20120613191355.C82AD25009E@webabinitio.net>
	<4FD8EBD7.7090500@stoneleaf.us>
	<20120614122524.01a53741@pitrou.net>
Message-ID:

On 14 June 2012 11:25, Antoine Pitrou wrote:
> Honestly, I think the best option would be to deprecate .pyo files as
> well as the useless -O option. They only cause confusion without
> providing any significant benefits.

+1

But what happens to __debug__ and assert statements? I think it should
be possible to always put assert statements inside a __debug__ block,
and then -O becomes a simple switch for setting __debug__ to False. If
desired, a simple strip tool could then easily remove __debug__ blocks
and (unused) docstrings.

--
Debian GNU/Linux -- The Power of Freedom
www.debian.org | www.gnu.org | www.kernel.org

From victor.stinner at gmail.com  Thu Jun 14 14:06:09 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Thu, 14 Jun 2012 14:06:09 +0200
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
Message-ID:

Sorry if I'm asking dummy questions, I didn't follow the discussion.

> * format(...) -> str
>     Formats the Signature object to a string.  Optional arguments
>     allow for custom render functions for parameter names,
>     annotations and default values, along with custom separators.
Hum, what are these "custom render functions"? Can you give an example?

> * is_keyword_only : bool
>     True if the parameter is keyword-only, else False.
> * is_args : bool
>     True if the parameter accepts a variable number of arguments
>     (``*args``-like), else False.
> * is_kwargs : bool
>     True if the parameter accepts a variable number of keyword
>     arguments (``**kwargs``-like), else False.

Hum, why not use an attribute with a string value instead of 3
attributes? For example:

* argtype: "index", "varargs", "keyword" or "keyword_only"

It would avoid a possible inconsistency (ex: is_args=True and
is_kwargs=True). And it would help to implement something like a C
switch/case using a dict, argtype => function, for functions using
signatures.

> * is_implemented : bool
>     True if the parameter is implemented for use.  Some platforms
>     implement functions but can't support specific parameters
>     (e.g. "mode" for ``os.mkdir``).  Passing in an unimplemented
>     parameter may result in the parameter being ignored,
>     or in NotImplementedError being raised.  It is intended that
>     all conditions where ``is_implemented`` may be False be
>     thoroughly documented.

I suppose that the value depends on the running platform? (For
example, you may get a different value on Linux and Windows.)

> Implementation
> ==============
>
>     - If the object has a ``__signature__`` attribute and if it
>       is not ``None`` - return a deepcopy of it

Oh, why copy the object? It may impact performance. If the caller
knows that it will modify the signature, it can deepcopy the
signature itself.

>         - If it is ``None`` and the object is an instance of
>           ``BuiltinFunction``, raise a ``ValueError``

What about builtin functions (ex: len)? Do you plan to add a
__signature__ attribute? If yes, something created on demand or
created at startup?

It would be nice to have a C API to create Signature objects, maybe
from the same format strings as the PyArg_Parse*() functions. But it
can be implemented later.

Is it possible to build a Signature object from a string describing
the prototype (ex: "def f(x, y): pass")? (I mean: do you plan to add
such a function?)

--

Apart from these remarks, I like this PEP :-)

Victor

From ncoghlan at gmail.com  Thu Jun 14 14:15:20 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 14 Jun 2012 22:15:20 +1000
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To: <5EE14352-B60B-45E5-A44A-C93D2968E986@gmail.com>
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
	<5BE279F4-8E3F-4F92-B08E-93FF046A4816@gmail.com>
	<5EE14352-B60B-45E5-A44A-C93D2968E986@gmail.com>
Message-ID:

Looks great! One very minor quibble is that I prefer 'ns' to 'dct' for
the namespace parameter in a metaclass, but that doesn't really matter
for the PEP.

--
Sent from my phone, thus the relative brevity :)

On Jun 14, 2012 9:45 PM, "Yury Selivanov" wrote:
> On 2012-06-14, at 12:17 AM, Nick Coghlan wrote:
> >
> > I'd like to see the "shared state" decorator from the previous
> > thread included, as well as a short interactive interpreter session
> > showing correct reporting of the signature of functools.partial
> > instances.
>
> OK. Below is how I want to update the PEP. Do you want to include
> anything else?
>
> Visualizing Callable Objects' Signatures
> ----------------------------------------
>
> Let's define some classes and functions:
>
>     class FooMeta(type):
>         def __new__(mcls, name, bases, dct, *, bar:bool=False):
>             return super().__new__(mcls, name, bases, dct)
>
>         def __init__(cls, name, bases, dct, **kwargs):
>             return super().__init__(name, bases, dct)
>
> [SNIP - the full example is in Yury's message above]
>
> -
> Yury

From solipsis at pitrou.net  Thu Jun 14 14:14:54 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Thu, 14 Jun 2012 14:14:54 +0200
Subject: [Python-Dev] deprecating .pyo and -O
References: <20120613175810.D7A032500AC@webabinitio.net>
	<20120613182024.GE4682@unaka.lan>
	<20120613204650.09ccbe92@pitrou.net>
	<20120613191355.C82AD25009E@webabinitio.net>
	<4FD8EBD7.7090500@stoneleaf.us>
	<20120614122524.01a53741@pitrou.net>
Message-ID: <20120614141454.07ce5feb@pitrou.net>

On Thu, 14 Jun 2012 12:58:16 +0100
Floris Bruynooghe wrote:
> On 14 June 2012 11:25, Antoine Pitrou wrote:
> > Honestly, I think the best option would be to deprecate .pyo files
> > as well as the useless -O option. They only cause confusion without
> > providing any significant benefits.
>
> +1
>
> But what happens to __debug__ and assert statements? I think it
> should be possible to always put assert statements inside a __debug__
> block, and then -O becomes a simple switch for setting __debug__ to
> False. If desired, a simple strip tool could then easily remove
> __debug__ blocks and (unused) docstrings.

I don't really see the point. In my experience there is no benefit to
removing assert statements in production mode. This is a C-specific
notion that doesn't really map very well to Python code.
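For reference, the semantics in question, in an approximate transcript:

    $ cat demo.py
    if __debug__:
        print("running checks")
    assert 1 == 0, "only raised without -O"

    $ python3 demo.py
    running checks
    Traceback (most recent call last):
      File "demo.py", line 3, in <module>
    AssertionError: only raised without -O

    $ python3 -O demo.py
    $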
Do other high-level languages have similar functionality?

Regards

Antoine.

From python at mrabarnett.plus.com  Thu Jun 14 14:27:15 2012
From: python at mrabarnett.plus.com (MRAB)
Date: Thu, 14 Jun 2012 13:27:15 +0100
Subject: [Python-Dev] PyPI down?
Message-ID: <4FD9D8A3.4000904@mrabarnett.plus.com>

It looks like PyPI is down. :-(

From rdmurray at bitdance.com  Thu Jun 14 15:14:59 2012
From: rdmurray at bitdance.com (R. David Murray)
Date: Thu, 14 Jun 2012 09:14:59 -0400
Subject: [Python-Dev] deprecating .pyo and -O
In-Reply-To: <20120614141454.07ce5feb@pitrou.net>
References: <20120613175810.D7A032500AC@webabinitio.net>
	<20120613182024.GE4682@unaka.lan>
	<20120613204650.09ccbe92@pitrou.net>
	<20120613191355.C82AD25009E@webabinitio.net>
	<4FD8EBD7.7090500@stoneleaf.us>
	<20120614122524.01a53741@pitrou.net>
	<20120614141454.07ce5feb@pitrou.net>
Message-ID: <20120614131459.DB762250031@webabinitio.net>

On Thu, 14 Jun 2012 14:14:54 +0200, Antoine Pitrou wrote:
> On Thu, 14 Jun 2012 12:58:16 +0100
> Floris Bruynooghe wrote:
> > On 14 June 2012 11:25, Antoine Pitrou wrote:
> > > Honestly, I think the best option would be to deprecate .pyo files
> > > as well as the useless -O option. They only cause confusion
> > > without providing any significant benefits.
> >
> > +1
> >
> > But what happens to __debug__ and assert statements? I think it
> > should be possible to always put assert statements inside a
> > __debug__ block, and then -O becomes a simple switch for setting
> > __debug__ to False. If desired, a simple strip tool could then
> > easily remove __debug__ blocks and (unused) docstrings.
>
> I don't really see the point. In my experience there is no benefit to
> removing assert statements in production mode. This is a C-specific
> notion that doesn't really map very well to Python code. Do other
> high-level languages have similar functionality?

What does matter, though, is the memory savings. I'm working with an
application where the difference between normal and -OO is around a
10% saving (about 2MB) in program DATA size at startup, and that makes
a difference for an app running in a memory-constrained environment.

A docstring stripper would enable the bulk of that savings, but it is
still nice to be able to omit code (such as debug logging statements)
as well.

--David

From solipsis at pitrou.net  Thu Jun 14 15:23:34 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Thu, 14 Jun 2012 15:23:34 +0200
Subject: [Python-Dev] deprecating .pyo and -O
References: <20120613175810.D7A032500AC@webabinitio.net>
	<20120613182024.GE4682@unaka.lan>
	<20120613204650.09ccbe92@pitrou.net>
	<20120613191355.C82AD25009E@webabinitio.net>
	<4FD8EBD7.7090500@stoneleaf.us>
	<20120614122524.01a53741@pitrou.net>
	<20120614141454.07ce5feb@pitrou.net>
	<20120614131459.DB762250031@webabinitio.net>
Message-ID: <20120614152334.3f203fdc@pitrou.net>

On Thu, 14 Jun 2012 09:14:59 -0400
"R. David Murray" wrote:
>
> What does matter, though, is the memory savings. I'm working with an
> application where the difference between normal and -OO is around a
> 10% saving (about 2MB) in program DATA size at startup, and that
> makes a difference for an app running in a memory-constrained
> environment.
>
> A docstring stripper would enable the bulk of that savings,

Probably indeed.

> but it is
> still nice to be able to omit code (such as debug logging statements)
> as well.

But does that justify all the additional complication in the core
interpreter, as well as the potential user confusion?

Regards

Antoine.
From yselivanov.ml at gmail.com  Thu Jun 14 15:50:38 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Thu, 14 Jun 2012 09:50:38 -0400
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To:
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
Message-ID: <3F53FF2F-B381-4C60-A00C-E932F3410AA5@gmail.com>

On 2012-06-14, at 8:06 AM, Victor Stinner wrote:
> Sorry if I'm asking dummy questions, I didn't follow the discussion.
>
>> * format(...) -> str
>>     Formats the Signature object to a string.  Optional arguments
>>     allow for custom render functions for parameter names,
>>     annotations and default values, along with custom separators.
>
> Hum, what are these "custom render functions"? Can you give an example?

That's how the function looks right now (I'm not sure we should load
the PEP with this):

    def format(self, *, format_name=str,
                        format_default=repr,
                        format_annotation=formatannotation,
                        format_args=(lambda param: '*' + str(param)),
                        format_kwargs=(lambda param: '**' + str(param)),

                        token_params_separator=', ',
                        token_kwonly_separator='*',
                        token_left_paren='(',
                        token_right_paren=')',
                        token_colon=':',
                        token_eq='=',
                        token_return_annotation=' -> '):

        '''Format signature to a string.

        Arguments (all optional):

        * format_name : A function to format names of parameters.
          Parameter won't be rendered if ``None`` is returned.
        * format_default : A function to format default values of
          parameters.  Default value won't be rendered if ``None`` is
          returned.
        * format_annotation : A function to format parameter
          annotations.  Annotation won't be rendered if ``None`` is
          returned.
        * format_args : A function to render ``*args``-like parameters.
          Parameter won't be rendered if ``None`` is returned.
        * format_kwargs : A function to render ``**kwargs``-like
          parameters.  Parameter won't be rendered if ``None`` is
          returned.
        * token_params_separator : A separator for parameters.  Set to
          ', ' by default.
        * token_kwonly_separator : A separator for arguments and
          keyword-only arguments.  Defaults to '*'.
        * token_left_paren : Left signature parenthesis, defaults to '('.
        * token_right_paren : Right signature parenthesis, defaults to ')'.
        * token_colon : Separates parameter from its annotation,
          defaults to ':'.
        * token_eq : Separates parameter from its default value, set to
          '=' by default.
        * token_return_annotation : Function return annotation, defaults
          to ' -> '.
        '''

I've designed it in such a way that everything is configurable, so you
can render functions to a color terminal, HTML, or whatever else.

>> * is_keyword_only : bool
>>     True if the parameter is keyword-only, else False.
>> * is_args : bool
>>     True if the parameter accepts a variable number of arguments
>>     (``*args``-like), else False.
>> * is_kwargs : bool
>>     True if the parameter accepts a variable number of keyword
>>     arguments (``**kwargs``-like), else False.
>
> Hum, why not use an attribute with a string value instead of 3
> attributes? For example:
> * argtype: "index", "varargs", "keyword" or "keyword_only"
>
> It would avoid a possible inconsistency (ex: is_args=True and
> is_kwargs=True). And it would help to implement something like a C
> switch/case using a dict, argtype => function, for functions using
> signatures.

Originally, I thought that the line:

   if parameters.is_args

is better looking than:

   if parameters.kind == 'vararg'

But I like your arguments regarding inconsistency and dispatch
through a dict (someone may find it useful).  Also, Larry gave
another one - who knows whether we'll add another kind of argument
in the future.
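E.g. a rough, untested sketch of that kind of dispatch (the 'kind'
strings here are the proposal below, nothing final):

    from inspect import signature

    def describe(obj):
        render = {
            'positional':   lambda p: p.name,
            'vararg':       lambda p: '*' + p.name,
            'keyword-only': lambda p: p.name + ' (keyword-only)',
            'varkwarg':     lambda p: '**' + p.name,
        }
        return [render[p.kind](p)
                for p in signature(obj).parameters.values()]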
I guess if nobody really wants to keep 'is_args', we can alter the
PEP.

Let's consider replacement of the 'Parameter.is_*' set of attributes
with a single 'Parameter.kind' attribute, which will have the
following possible values: 'positional', 'vararg', 'keyword-only',
'varkwarg'.

(I think 'positional' is more intuitive than 'index'?)

>> * is_implemented : bool
>>     True if the parameter is implemented for use.  Some platforms
>>     implement functions but can't support specific parameters
>>     (e.g. "mode" for ``os.mkdir``).  Passing in an unimplemented
>>     parameter may result in the parameter being ignored,
>>     or in NotImplementedError being raised.  It is intended that
>>     all conditions where ``is_implemented`` may be False be
>>     thoroughly documented.
>
> I suppose that the value depends on the running platform? (For
> example, you may get a different value on Linux and Windows.)

Correct.

>> Implementation
>> ==============
>>
>>     - If the object has a ``__signature__`` attribute and if it
>>       is not ``None`` - return a deepcopy of it
>
> Oh, why copy the object? It may impact performance. If the caller
> knows that it will modify the signature, it can deepcopy the
> signature itself.

There was a discussion on this topic earlier on python-dev.
In short - as we usually create new signatures with each 'signature()'
call, users will expect that they can modify those freely.  But if we
have one defined in __signature__, without copying it, all its
modifications will be persistent across 'signature()' calls.  So the
deepcopy here is required more for consistency reasons.  Besides,
I don't think that 'signature()' will be used extensively in
performance-critical types of code.  And even if it is - you can just
cache it manually.

>>         - If it is ``None`` and the object is an instance of
>>           ``BuiltinFunction``, raise a ``ValueError``
>
> What about builtin functions (ex: len)? Do you plan to add a
> __signature__ attribute? If yes, something created on demand or
> created at startup?

Larry is going to add signatures to some 'os' module functions. But
that would be it for 3.3, I guess.

> It would be nice to have a C API to create Signature objects, maybe
> from the same format strings as the PyArg_Parse*() functions. But it
> can be implemented later.

Then parameters will lack the 'name' attribute.  I think we need
another approach here.

> Is it possible to build a Signature object from a string describing
> the prototype (ex: "def f(x, y): pass")? (I mean: do you plan to add
> such a function?)

There are no plans to add it now (no good reasons to include such
functionality in 3.3, at least).

Thank you,

-
Yury

From raymond.hettinger at gmail.com  Thu Jun 14 16:01:08 2012
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Thu, 14 Jun 2012 07:01:08 -0700
Subject: [Python-Dev] Tunable parameters in dictobject.c (was dictnotes.txt out of date?)
In-Reply-To:
References: <5DC7D3C2-4BE4-4ECC-A037-7FC5E3244DA8@gmail.com>
	<4FD90819.4040305@hotpy.org>
Message-ID:

On Jun 14, 2012, at 4:32 AM, Kristján Valur Jónsson wrote:
> I would like two or more compile-time settings to choose from:
> memory optimal, speed optimal (and a mix).

A compile-time option would be nice. The default should be what we've
had, though. The new settings cause a lot more collisions and resizes.
The resizes themselves have more collisions than before (there are
fewer collisions when resizing into a quadrupled dict than into a
doubled dict).

Dicts get their efficiency from sparseness.
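The classic uniform-hashing estimate makes the point: for an
unsuccessful open-addressing lookup, the expected number of probes
grows roughly like 1/(1 - load). A quick sketch (only the textbook
approximation, not a measurement of the actual implementation):

    for used, size in ((5, 8), (5, 16), (2, 4)):
        load = used / size
        print('%d/%d: load %.2f, ~%.1f probes'
              % (used, size, load, 1 / (1 - load)))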
Reducing the mindict size from 8 to 4 causes substantially more
collisions in small dicts and gets closer to a linear search of a
small tuple.

Raymond

From g.brandl at gmx.net  Thu Jun 14 16:15:41 2012
From: g.brandl at gmx.net (Georg Brandl)
Date: Thu, 14 Jun 2012 16:15:41 +0200
Subject: [Python-Dev] cpython (2.7): Issue #15060: fix typo in socket doc; Patch by anatoly techtonik
In-Reply-To:
References:
Message-ID:

Am 13.06.2012 23:59, schrieb sandro.tosi:
> http://hg.python.org/cpython/rev/744fb52ffdf0
> changeset:   77417:744fb52ffdf0
> branch:      2.7
> parent:      77408:60a7b704de5c
> user:        Sandro Tosi
> date:        Wed Jun 13 23:58:35 2012 +0200
> summary:
>   Issue #15060: fix typo in socket doc; Patch by anatoly techtonik
>
> files:
>   Doc/library/socket.rst | 2 +-
>   1 files changed, 1 insertions(+), 1 deletions(-)
>
>
> diff --git a/Doc/library/socket.rst b/Doc/library/socket.rst
> --- a/Doc/library/socket.rst
> +++ b/Doc/library/socket.rst
> @@ -38,7 +38,7 @@
>  :const:`AF_UNIX` address family. A pair ``(host, port)`` is used for the
>  :const:`AF_INET` address family, where *host* is a string representing either a
>  hostname in Internet domain notation like ``'daring.cwi.nl'`` or an IPv4 address
> -like ``'100.50.200.5'``, and *port* is an integral port number. For
> +like ``'100.50.200.5'``, and *port* is an integer port number. For
>  :const:`AF_INET6` address family, a four-tuple ``(host, port, flowinfo,
>  scopeid)`` is used, where *flowinfo* and *scopeid* represents ``sin6_flowinfo``
>  and ``sin6_scope_id`` member in :const:`struct sockaddr_in6` in C. For

I don't see the typo here, isn't "integral" the adjective form of
"integer"?

Georg

From python at mrabarnett.plus.com  Thu Jun 14 16:48:27 2012
From: python at mrabarnett.plus.com (MRAB)
Date: Thu, 14 Jun 2012 15:48:27 +0100
Subject: [Python-Dev] cpython (2.7): Issue #15060: fix typo in socket doc; Patch by anatoly techtonik
In-Reply-To:
References:
Message-ID: <4FD9F9BB.2050706@mrabarnett.plus.com>

On 14/06/2012 15:15, Georg Brandl wrote:
> Am 13.06.2012 23:59, schrieb sandro.tosi:
>> [SNIP]
>> -like ``'100.50.200.5'``, and *port* is an integral port number. For
>> +like ``'100.50.200.5'``, and *port* is an integer port number. For
>> [SNIP]
>
> I don't see the typo here, isn't "integral" the adjective form of
> "integer"?

Yes, although it also means "necessary and important as a part of",
and as it's talking about a Python type (class), I think that
"integer" would be clearer, IMHO.
From rdmurray at bitdance.com  Thu Jun 14 16:48:55 2012
From: rdmurray at bitdance.com (R. David Murray)
Date: Thu, 14 Jun 2012 10:48:55 -0400
Subject: [Python-Dev] cpython (2.7): Issue #15060: fix typo in socket doc; Patch by anatoly techtonik
In-Reply-To:
References:
Message-ID: <20120614144856.AD1F525009E@webabinitio.net>

On Thu, 14 Jun 2012 16:15:41 +0200, Georg Brandl wrote:
> Am 13.06.2012 23:59, schrieb sandro.tosi:
> > [SNIP]
> > -like ``'100.50.200.5'``, and *port* is an integral port number. For
> > +like ``'100.50.200.5'``, and *port* is an integer port number. For
> > [SNIP]
>
> I don't see the typo here, isn't "integral" the adjective form of
> "integer"?

We had a discussion about this on IRC, and I believe it has now been
further updated to "...*port* is an integer.".

To a native speaker's ear, "integral port number" sounds wrong,
probably because the port number has to be an integer, so it sounds
like saying "an integral integer". The important thing the doc needs
to convey is that the *port* argument actually needs to be an
*integer*, as opposed to a string.

--David

From alexandre.zani at gmail.com  Thu Jun 14 16:50:42 2012
From: alexandre.zani at gmail.com (Alexandre Zani)
Date: Thu, 14 Jun 2012 07:50:42 -0700
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To: <3F53FF2F-B381-4C60-A00C-E932F3410AA5@gmail.com>
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
	<3F53FF2F-B381-4C60-A00C-E932F3410AA5@gmail.com>
Message-ID:

On Thu, Jun 14, 2012 at 6:50 AM, Yury Selivanov wrote:
> On 2012-06-14, at 8:06 AM, Victor Stinner wrote:
>> Sorry if I'm asking dummy questions, I didn't follow the discussion.
>>
>>> * format(...) -> str
>>>     Formats the Signature object to a string.  Optional arguments
>>>     allow for custom render functions for parameter names,
>>>     annotations and default values, along with custom separators.
>>
>> Hum, what are these "custom render functions"? Can you give an
>> example?
>
> That's how the function looks right now (I'm not sure we should load
> the PEP with this):
>
> [SNIP - the format() definition from Yury's message above]
>
> I've designed it in such a way that everything is configurable, so you
> can render functions to a color terminal, HTML, or whatever else.
>
>>> * is_keyword_only : bool
>>>     True if the parameter is keyword-only, else False.
>>> * is_args : bool
>>>     True if the parameter accepts a variable number of arguments
>>>     (``*args``-like), else False.
>>> * is_kwargs : bool
>>>     True if the parameter accepts a variable number of keyword
>>>     arguments (``**kwargs``-like), else False.
>>
>> Hum, why not use an attribute with a string value instead of 3
>> attributes? For example:
>> * argtype: "index", "varargs", "keyword" or "keyword_only"
>>
>> It would avoid a possible inconsistency (ex: is_args=True and
>> is_kwargs=True). And it would help to implement something like a C
>> switch/case using a dict, argtype => function, for functions using
>> signatures.
>
> Originally, I thought that the line:
>
>    if parameters.is_args
>
> is better looking than:
>
>    if parameters.kind == 'vararg'
>
> But I like your arguments regarding inconsistency and dispatch
> through a dict (someone may find it useful).  Also, Larry gave
> another one - who knows whether we'll add another kind of argument
> in the future.
>
> I guess if nobody really wants to keep 'is_args', we can alter the
> PEP.
>
> Let's consider replacement of the 'Parameter.is_*' set of attributes
> with a single 'Parameter.kind' attribute, which will have the
> following possible values: 'positional', 'vararg', 'keyword-only',
> 'varkwarg'.
>
> (I think 'positional' is more intuitive than 'index'?)

I disagree, largely for readability reasons. As the PEP stands, I can
look at a Parameter object and immediately understand what the
different possible values are by just listing its attributes. The kind
attribute makes that harder.

Comparing with strings is error prone. If I do param.is_varargs
(adding an s at the end of the attribute name) I will see an
AttributeError and know what is going on. If I make the same mistake
with the kind attribute, param.kind == "varargs", the expression will
just always be False, without any explanation.
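A toy session makes the difference obvious:

    >>> class P: is_vararg = True
    ...
    >>> P().is_varargs        # misspelled attribute fails loudly
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    AttributeError: 'P' object has no attribute 'is_varargs'
    >>> 'vararg' == 'varargs' # misspelled string fails silently
    False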
> [SNIP - the rest of Yury's message, quoted without further comment]

From alexandre.zani at gmail.com  Thu Jun 14 16:54:53 2012
From: alexandre.zani at gmail.com (Alexandre Zani)
Date: Thu, 14 Jun 2012 07:54:53 -0700
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To: <88BE3834-4B9E-416D-A99F-FC5F30EBB896@gmail.com>
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
	<5BE279F4-8E3F-4F92-B08E-93FF046A4816@gmail.com>
	<88BE3834-4B9E-416D-A99F-FC5F30EBB896@gmail.com>
Message-ID:

Thanks. :)

On Thu, Jun 14, 2012 at 4:50 AM, Yury Selivanov wrote:
> On 2012-06-14, at 12:29 AM, Alexandre Zani wrote:
>> Why do we look at __wrapped__ only if the object is a FunctionType?
>> Why not support __wrapped__ on all callables?
>
> > Thanks, > - > Yury From brett at python.org Thu Jun 14 16:58:55 2012 From: brett at python.org (Brett Cannon) Date: Thu, 14 Jun 2012 10:58:55 -0400 Subject: [Python-Dev] #12982: Should -O be required to *read* .pyo files? In-Reply-To: <20120614024746.6C21425009E@webabinitio.net> References: <20120613175810.D7A032500AC@webabinitio.net> <20120613182024.GE4682@unaka.lan> <20120613204650.09ccbe92@pitrou.net> <20120614024746.6C21425009E@webabinitio.net> Message-ID: On Wed, Jun 13, 2012 at 10:47 PM, R. David Murray wrote: > On Thu, 14 Jun 2012 11:48:08 +1000, Nick Coghlan > wrote: > > On Thu, Jun 14, 2012 at 6:06 AM, Terry Reedy wrote: > > > On 6/13/2012 2:46 PM, Antoine Pitrou wrote: > > > > > >> Not only docstrings, but also asserts. I think running a pyo without > -O > > >> would be a bug. > > > > > > That cat is already out of the bag ;-) > > > People are doing that now by renaming x.pyo to x.pyc. > > > Brett claims that it is also easy to do in 3.3 with a custom importer. > > > > Right, but by resorting to either of those approaches, people are > > clearly doing something that isn't formally supported by the core. > > Yes, you can do it, and most of the time it will work out OK, but any > > weird glitches that result are officially *not our problem*. > > > > The main reason this matters is that the "__debug__" flag is > > *supposed* to be process global - if you check it in one place, the > > OK, the above are the two concrete reasons I have heard in this thread > for continuing the current behavior: > > 1) we do not wish to support running from .pyo files without -O > being on, even if it currently happens to work > > 2) the __debug__ setting is supposed to be process-global > > Both of these are good reasons. IMO the issue should be closed with a > documentation fix, which could optionally include either or both of the > above motivations. > Just for completeness, there is a third reason: 3) Would lead to an extra stat call per module when doing sourceless loads. While minor, it could add up if you ship only .pyo files but never run with -O. -------------- next part -------------- An HTML attachment was scrubbed... URL: From yselivanov.ml at gmail.com Thu Jun 14 17:05:05 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Thu, 14 Jun 2012 11:05:05 -0400 Subject: [Python-Dev] PEP 362 Third Revision In-Reply-To: References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> <3F53FF2F-B381-4C60-A00C-E932F3410AA5@gmail.com> Message-ID: <34355DDB-9D57-4C64-B1B8-DFF32BBFA4DD@gmail.com> On 2012-06-14, at 10:50 AM, Alexandre Zani wrote: > On Thu, Jun 14, 2012 at 6:50 AM, Yury Selivanov wrote: >> I guess if nobody really wants to keep 'is_args', we can alter the >> PEP. >> >> Let's consider replacement of 'Parameter.is_*' set of attributes with >> a single 'Parameter.kind' attribute, which will have the following >> possible values: 'positional', 'vararg', 'keyword-only', 'varkwarg'. >> >> (I think 'positional' is more intuitive than 'index'?) >> > > I disagree largely for readability reasons. As the PEP stands, I can > look at a Parameter object and immediately understand what the > different possible values are by just listing its attributes. The kind > attribute makes that harder. > > Comparing with strings is error prone. If I do param.is_varargs > (adding an s at the end of the attribute name) I will see an attribute > error and know what is going on. 
> If I make the same mistake with the kind attribute, param.kind ==
> 'varargs', the expression will just always be False, without any
> explanation.

Agree on this one, good point (although unit tests generally help to
avoid those problems).

From rdmurray at bitdance.com  Thu Jun 14 17:06:01 2012
From: rdmurray at bitdance.com (R. David Murray)
Date: Thu, 14 Jun 2012 11:06:01 -0400
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To:
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
	<3F53FF2F-B381-4C60-A00C-E932F3410AA5@gmail.com>
Message-ID: <20120614150601.8172D25009E@webabinitio.net>

On Thu, 14 Jun 2012 07:50:42 -0700, Alexandre Zani wrote:
> On Thu, Jun 14, 2012 at 6:50 AM, Yury Selivanov wrote:
> > [SNIP]
> >
> > Let's consider replacement of the 'Parameter.is_*' set of
> > attributes with a single 'Parameter.kind' attribute, which will
> > have the following possible values: 'positional', 'vararg',
> > 'keyword-only', 'varkwarg'.
> >
> > (I think 'positional' is more intuitive than 'index'?)
>
> I disagree, largely for readability reasons. As the PEP stands, I can
> look at a Parameter object and immediately understand what the
> different possible values are by just listing its attributes. The
> kind attribute makes that harder.
>
> [SNIP]

I don't have strong feelings about this, but to me the fact that
there are combinations of the individual attributes that would be
invalid if they occurred on the same object at the same time is a
code smell. If the thing can be one and only one of a list of
possible types, it makes sense to me that this be indicated as a
single attribute with a list of possible values, rather than as a set
of boolean options, one for each type.

For the attribute error issue, we could have module attributes that
give names to the strings:

    if parameter.kind == inspect.VARARG_KIND:
        stuff

Or, if we don't want that in the stdlib, the individual programmer
who cares about it can define their own constants.
--David

From p.f.moore at gmail.com  Thu Jun 14 17:14:14 2012
From: p.f.moore at gmail.com (Paul Moore)
Date: Thu, 14 Jun 2012 16:14:14 +0100
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To:
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
	<3F53FF2F-B381-4C60-A00C-E932F3410AA5@gmail.com>
Message-ID:

On 14 June 2012 15:50, Alexandre Zani wrote:
> Comparing with strings is error prone. If I do param.is_varargs
> (adding an s at the end of the attribute name) I will see an
> AttributeError and know what is going on. If I make the same mistake
> with the kind attribute, param.kind == "varargs", the expression will
> just always be False, without any explanation.

Agreed. Particularly in this case, a lot of the possible values are
far from completely standardised terms, so misspellings are quite
possible. Apart from the varargs case mentioned, I'd have to look at
the docs to know what name was used for the kind of a "normal"
parameter.

To be honest, I'm not too keen on is_args/is_kwargs as names, but
they are short and match common usage. I could go with is_vararg and
is_kwarg (or is_varargs and is_kwargs, but I hate the abbreviation
varkwarg :-)) because they are closer parallels, but either way this
is trivial bikeshedding, not worth spending time on. If anyone
*really* wants a "kind" string, parameter_kind(param) isn't hard to
define in your app, and you can choose your own terms...

So my view is -1 to a "kind" parameter, and +1 to the current is_XXX
names, simply because there's no point bikeshedding.

Oh, and +1 to the PEP as a whole :-)

Paul.

From brett at python.org  Thu Jun 14 17:24:52 2012
From: brett at python.org (Brett Cannon)
Date: Thu, 14 Jun 2012 11:24:52 -0400
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To: <3F53FF2F-B381-4C60-A00C-E932F3410AA5@gmail.com>
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
	<3F53FF2F-B381-4C60-A00C-E932F3410AA5@gmail.com>
Message-ID:

On Thu, Jun 14, 2012 at 9:50 AM, Yury Selivanov wrote:

[SNIP]

> Let's consider replacement of the 'Parameter.is_*' set of attributes
> with a single 'Parameter.kind' attribute, which will have the
> following possible values: 'positional', 'vararg', 'keyword-only',
> 'varkwarg'.
>
> (I think 'positional' is more intuitive than 'index'?)

+1 if this change is made.

-Brett

From ethan at stoneleaf.us  Thu Jun 14 17:52:10 2012
From: ethan at stoneleaf.us (Ethan Furman)
Date: Thu, 14 Jun 2012 08:52:10 -0700
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To:
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
	<3F53FF2F-B381-4C60-A00C-E932F3410AA5@gmail.com>
Message-ID: <4FDA08AA.10408@stoneleaf.us>

Brett Cannon wrote:
> On Thu, Jun 14, 2012 at 9:50 AM, Yury Selivanov wrote:
>
> [SNIP]
>
>> Let's consider replacement of the 'Parameter.is_*' set of attributes
>> with a single 'Parameter.kind' attribute, which will have the
>> following possible values: 'positional', 'vararg', 'keyword-only',
>> 'varkwarg'.
>>
>> (I think 'positional' is more intuitive than 'index'?)
>
> +1 if this change is made.

+1 to using 'kind', and another +1 to using 'kwarg' instead of
'varkwarg'.
~Ethan~

From yselivanov.ml at gmail.com  Thu Jun 14 18:16:27 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Thu, 14 Jun 2012 12:16:27 -0400
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To: 
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
	<3F53FF2F-B381-4C60-A00C-E932F3410AA5@gmail.com>
Message-ID: <49BE4084-AAEC-40B9-AF4B-1274FB285806@gmail.com>

On 2012-06-14, at 11:24 AM, Brett Cannon wrote:
> On Thu, Jun 14, 2012 at 9:50 AM, Yury Selivanov wrote:
>
> [SNIP]
>
> Let's consider replacement of 'Parameter.is_*' set of attributes with
> a single 'Parameter.kind' attribute, which will have the following
> possible values: 'positional', 'vararg', 'keyword-only', 'varkwarg'.
>
> (I think 'positional' is more intuitive than 'index'?)
>
> +1 if this change is made.

How about adding 'kind' and keeping 'is_*' attributes,
but making them read-only dynamic properties, i.e.:

    class Parameter:
        ...

        @property
        def is_vararg(self):
            return self.kind == 'vararg'

        ...

?

Thanks,
-
Yury

From ethan at stoneleaf.us  Thu Jun 14 18:37:37 2012
From: ethan at stoneleaf.us (Ethan Furman)
Date: Thu, 14 Jun 2012 09:37:37 -0700
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To: <49BE4084-AAEC-40B9-AF4B-1274FB285806@gmail.com>
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
	<3F53FF2F-B381-4C60-A00C-E932F3410AA5@gmail.com>
	<49BE4084-AAEC-40B9-AF4B-1274FB285806@gmail.com>
Message-ID: <4FDA1351.5020701@stoneleaf.us>

Yury Selivanov wrote:
> On 2012-06-14, at 11:24 AM, Brett Cannon wrote:
>> On Thu, Jun 14, 2012 at 9:50 AM, Yury Selivanov wrote:
>>
>> [SNIP]
>>
>> Let's consider replacement of 'Parameter.is_*' set of attributes with
>> a single 'Parameter.kind' attribute, which will have the following
>> possible values: 'positional', 'vararg', 'keyword-only', 'varkwarg'.
>>
>> (I think 'positional' is more intuitive than 'index'?)
>>
>> +1 if this change is made.
>
> How about adding 'kind' and keeping 'is_*' attributes,
> but making them read-only dynamic properties, i.e.:
>
>     class Parameter:
>         ...
>
>         @property
>         def is_vararg(self):
>             return self.kind == 'vararg'
>
>         ...
>
> ?

I like it!  +1!

~Ethan~

From benjamin at python.org  Thu Jun 14 18:32:59 2012
From: benjamin at python.org (Benjamin Peterson)
Date: Thu, 14 Jun 2012 09:32:59 -0700
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To: <49BE4084-AAEC-40B9-AF4B-1274FB285806@gmail.com>
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
	<3F53FF2F-B381-4C60-A00C-E932F3410AA5@gmail.com>
	<49BE4084-AAEC-40B9-AF4B-1274FB285806@gmail.com>
Message-ID: 

2012/6/14 Yury Selivanov:
> On 2012-06-14, at 11:24 AM, Brett Cannon wrote:
>> On Thu, Jun 14, 2012 at 9:50 AM, Yury Selivanov wrote:
>>
>> [SNIP]
>>
>> Let's consider replacement of 'Parameter.is_*' set of attributes with
>> a single 'Parameter.kind' attribute, which will have the following
>> possible values: 'positional', 'vararg', 'keyword-only', 'varkwarg'.
>>
>> (I think 'positional' is more intuitive than 'index'?)
>>
>> +1 if this change is made.
>
> How about adding 'kind' and keeping 'is_*' attributes,
> but making them read-only dynamic properties, i.e.:
>
>     class Parameter:
>         ...
>
>         @property
>         def is_vararg(self):
>             return self.kind == 'vararg'
>
>         ...
>
> ?

Seems a bit bloatly to me. (One way to do it.)
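For a sense of what is being weighed here, the hybrid written out in
full looks roughly like the sketch below. It is illustrative only; the
kind strings are taken from Yury's list above and nothing here is
settled API.

    class Parameter:
        # Illustrative sketch, not the PEP's class: one validated
        # 'kind' value plus derived, read-only 'is_*' properties.
        _KINDS = ('positional', 'vararg', 'keyword-only', 'varkwarg')

        def __init__(self, name, kind):
            if kind not in self._KINDS:   # validate, so typos fail loudly
                raise ValueError('unknown parameter kind: %r' % (kind,))
            self.name = name
            self._kind = kind

        @property
        def kind(self):
            return self._kind

        @property
        def is_vararg(self):
            return self._kind == 'vararg'

        @property
        def is_kwargs(self):
            return self._kind == 'varkwarg'

        # ... plus one such property per remaining kind.

The attributes can never fall out of sync, because only '_kind' is
actually stored; the price is the property boilerplate above.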
-- 
Regards,
Benjamin

From yselivanov.ml at gmail.com  Thu Jun 14 18:39:42 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Thu, 14 Jun 2012 12:39:42 -0400
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To: 
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
	<3F53FF2F-B381-4C60-A00C-E932F3410AA5@gmail.com>
	<49BE4084-AAEC-40B9-AF4B-1274FB285806@gmail.com>
Message-ID: 

On 2012-06-14, at 12:32 PM, Benjamin Peterson wrote:
> 2012/6/14 Yury Selivanov:
>> On 2012-06-14, at 11:24 AM, Brett Cannon wrote:
>>> On Thu, Jun 14, 2012 at 9:50 AM, Yury Selivanov wrote:
>>>
>>> [SNIP]
>>>
>>> Let's consider replacement of 'Parameter.is_*' set of attributes with
>>> a single 'Parameter.kind' attribute, which will have the following
>>> possible values: 'positional', 'vararg', 'keyword-only', 'varkwarg'.
>>>
>>> (I think 'positional' is more intuitive than 'index'?)
>>>
>>> +1 if this change is made.
>>
>> How about adding 'kind' and keeping 'is_*' attributes,
>> but making them read-only dynamic properties, i.e.:
>>
>>     class Parameter:
>>         ...
>>
>>         @property
>>         def is_vararg(self):
>>             return self.kind == 'vararg'
>>
>>         ...
>>
>> ?
>
> Seems a bit bloatly to me. (One way to do it.)

Yes, but on the other hand it solves the "strings are error prone"
argument, keeps all 'is_*' attributes in sync, and makes them
read-only.

The 'kind' property may do validation on set, to diminish the
probability of mistakes even further.

-
Yury

From ethan at stoneleaf.us  Thu Jun 14 18:03:40 2012
From: ethan at stoneleaf.us (Ethan Furman)
Date: Thu, 14 Jun 2012 09:03:40 -0700
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
Message-ID: <4FDA0B5C.4080206@stoneleaf.us>

Yury Selivanov wrote:
> Hello,
>
> The new revision of PEP 362 has been posted:
> http://www.python.org/dev/peps/pep-0362/
>
> It's possible to test Signatures for equality.  Two signatures
> are equal when they have equal parameters and return annotations.

Possibly a dumb question, but do the parameter names have to be the
same to compare equal?  If yes, is there an easy way to compare two
signatures by annotations alone?

~Ethan~

From yselivanov.ml at gmail.com  Thu Jun 14 19:09:25 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Thu, 14 Jun 2012 13:09:25 -0400
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To: <4FDA0B5C.4080206@stoneleaf.us>
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
	<4FDA0B5C.4080206@stoneleaf.us>
Message-ID: <1C910CD4-CAA4-41B9-AB99-293BED69A9EA@gmail.com>

On 2012-06-14, at 12:03 PM, Ethan Furman wrote:
> Yury Selivanov wrote:
>> Hello,
>> The new revision of PEP 362 has been posted:
>> http://www.python.org/dev/peps/pep-0362/
>> It's possible to test Signatures for equality.  Two signatures
>> are equal when they have equal parameters and return annotations.
>
> Possibly a dumb question, but do the parameter names have to be the
> same to compare equal?  If yes, is there an easy way to compare two
> signatures by annotations alone?

Yes, parameter names have to be the same.

You need to write a custom compare function for Parameters that checks
that the two have equal *kinds* and equal annotations (or that both
lack annotations), and then a compare function for Signatures that
tests return_annotations and the 'parameters' collections.  All in all,
it shouldn't be longer than 10-15 lines of code.
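For illustration, such a helper might look roughly like this. It is a
sketch against the draft API, where 'parameters' is an ordered mapping
of names to Parameter objects and a Parameter may simply lack an
'annotation' attribute; none of this is code from the PEP itself.

    _NOTHING = object()   # sentinel for "no annotation"

    def params_match(p1, p2):
        # Equal kinds, and equal annotations (or both without one).
        return (p1.kind == p2.kind and
                getattr(p1, 'annotation', _NOTHING) ==
                    getattr(p2, 'annotation', _NOTHING))

    def signatures_match(sig1, sig2):
        # Like Signature equality, but deliberately ignoring names.
        if (getattr(sig1, 'return_annotation', _NOTHING) !=
                getattr(sig2, 'return_annotation', _NOTHING)):
            return False
        p1 = list(sig1.parameters.values())
        p2 = list(sig2.parameters.values())
        return (len(p1) == len(p2) and
                all(params_match(a, b) for a, b in zip(p1, p2)))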
Another "solution" to the problem could be adding a new 'annotations'
read-only dynamic property to the Signature that would iterate through
parameters and produce a single dict.  But this solution has a serious
flaw, as the signature of:

    def foo(a:int, *, b:int) -> float

is not equal to the signature of:

    def bar(a:int, b:int) -> float

and certainly not the signature of:

    def spam(*args:int, **kwargs:int) -> float

So the most correct approach here is the one I described in the
first place.

Thanks,
-
Yury

From brett at python.org  Thu Jun 14 19:10:13 2012
From: brett at python.org (Brett Cannon)
Date: Thu, 14 Jun 2012 13:10:13 -0400
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To: 
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
	<3F53FF2F-B381-4C60-A00C-E932F3410AA5@gmail.com>
	<49BE4084-AAEC-40B9-AF4B-1274FB285806@gmail.com>
Message-ID: 

On Thu, Jun 14, 2012 at 12:39 PM, Yury Selivanov wrote:
> On 2012-06-14, at 12:32 PM, Benjamin Peterson wrote:
>> 2012/6/14 Yury Selivanov:
>>> On 2012-06-14, at 11:24 AM, Brett Cannon wrote:
>>>> On Thu, Jun 14, 2012 at 9:50 AM, Yury Selivanov wrote:
>>>>
>>>> [SNIP]
>>>>
>>>> Let's consider replacement of 'Parameter.is_*' set of attributes with
>>>> a single 'Parameter.kind' attribute, which will have the following
>>>> possible values: 'positional', 'vararg', 'keyword-only', 'varkwarg'.
>>>>
>>>> (I think 'positional' is more intuitive than 'index'?)
>>>>
>>>> +1 if this change is made.
>>>
>>> How about adding 'kind' and keeping 'is_*' attributes,
>>> but making them read-only dynamic properties, i.e.:
>>>
>>>     class Parameter:
>>>         ...
>>>
>>>         @property
>>>         def is_vararg(self):
>>>             return self.kind == 'vararg'
>>>
>>>         ...
>>>
>>> ?
>>
>> Seems a bit bloatly to me. (One way to do it.)
>
> Yes, but on the other hand it solves the "strings are error prone"
> argument, keeps all 'is_*' attributes in sync, and makes them
> read-only.
>
> The 'kind' property may do validation on set, to diminish the
> probability of mistakes even further.

I agree with Benjamin, it goes against TOOWTDI without enough of a
justification to break the rule. Just make the strings constants on the
Parameter class and you solve the lack of enum issue.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From edreamleo at gmail.com  Thu Jun 14 18:25:31 2012
From: edreamleo at gmail.com (Edward K. Ream)
Date: Thu, 14 Jun 2012 11:25:31 -0500
Subject: [Python-Dev] Announcing the python-static-type-checking google group
Message-ID: 

Hello all,

GvR has asked me to announce the python-static-type-checking google group

http://groups.google.com/group/python-static-type-checking

to python-dev.  Consider it announced.  Anyone from python-dev who
likes may become a member.

Here is the "About this group" posting:

QQQQQ
This group got its start as a response to GvR's Keynote address at
PyCon 2012, http://pyvideo.org/video/956/keynote-guido-van-rossum
specifically, his remarks about static type checking beginning in the
28th minute.

Guido and I have been emailing about this topic for a day or so.  These
emails will form the first few posts of this group.

This group is public, in the sense that anyone can see it, but initial
invitations will go only to those who have written significant python
tools related to type checking.  It will probably be best to have the
group keep a low profile at first.  Having said that, I encourage
members to invite others who may be able to contribute.
Experience with other projects shows that crucial contributions may come from unlikely sources. Because of spam, this will be a moderated group. New members must request permission to post. QQQQQ Edward ------------------------------------------------------------------------------ Edward K. Ream email: edreamleo at gmail.com Leo: http://webpages.charter.net/edreamleo/front.html ------------------------------------------------------------------------------ From alexandre.zani at gmail.com Thu Jun 14 19:13:26 2012 From: alexandre.zani at gmail.com (Alexandre Zani) Date: Thu, 14 Jun 2012 10:13:26 -0700 Subject: [Python-Dev] PEP 362 Third Revision In-Reply-To: <49BE4084-AAEC-40B9-AF4B-1274FB285806@gmail.com> References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> <3F53FF2F-B381-4C60-A00C-E932F3410AA5@gmail.com> <49BE4084-AAEC-40B9-AF4B-1274FB285806@gmail.com> Message-ID: +1 On Thu, Jun 14, 2012 at 9:16 AM, Yury Selivanov wrote: > On 2012-06-14, at 11:24 AM, Brett Cannon wrote: >> On Thu, Jun 14, 2012 at 9:50 AM, Yury Selivanov wrote: >> >> [SNIP] >> >> Let's consider replacement of 'Parameter.is_*' set of attributes with >> a single 'Parameter.kind' attribute, which will have the following >> possible values: 'positional', 'vararg', 'keyword-only', 'varkwarg'. >> >> (I think 'positional' is more intuitive than 'index'?) >> >> >> +1 if this change is made. > > How about adding 'kind' and keeping 'is_*' attributes, > but making them read-only dynamic properties, i.e.: > > ? class Parameter: > ? ? ? ... > > ? ? ? @property > ? ? ? def is_vararg(self): > ? ? ? ? ? return self.kind == 'vararg' > > ? ? ? ... > > ? > > Thanks, > - > Yury > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/alexandre.zani%40gmail.com From tjreedy at udel.edu Thu Jun 14 19:51:59 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 14 Jun 2012 13:51:59 -0400 Subject: [Python-Dev] PEP 362 Third Revision In-Reply-To: References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> <5BE279F4-8E3F-4F92-B08E-93FF046A4816@gmail.com> Message-ID: On 6/14/2012 6:00 AM, Nick Coghlan wrote: > > Just a thought: Do we want to include the docstring? A function's > > docstring is often intimately tied to its signature. (Or at least, a > > lot of us try to write docstrings that effectively describe the > > function's signature) > > No, combining the signature with other details like the name and > docstring is the task of higher level interfaces like pydoc. Idle tooltips are potentially two lines: the signature and the first line of the docstring. (Some builtins need a better first line, but thats another issue.) 
-- 
Terry Jan Reedy

From tjreedy at udel.edu  Thu Jun 14 20:02:13 2012
From: tjreedy at udel.edu (Terry Reedy)
Date: Thu, 14 Jun 2012 14:02:13 -0400
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To: 
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
	<3F53FF2F-B381-4C60-A00C-E932F3410AA5@gmail.com>
	<49BE4084-AAEC-40B9-AF4B-1274FB285806@gmail.com>
Message-ID: 

On 6/14/2012 1:10 PM, Brett Cannon wrote:
> On Thu, Jun 14, 2012 at 12:39 PM, Yury Selivanov wrote:
>> On 2012-06-14, at 12:32 PM, Benjamin Peterson wrote:
>>> 2012/6/14 Yury Selivanov:
>>>> On 2012-06-14, at 11:24 AM, Brett Cannon wrote:
>>>>> On Thu, Jun 14, 2012 at 9:50 AM, Yury Selivanov wrote:
>>>>>
>>>>> [SNIP]
>>>>>
>>>>> Let's consider replacement of 'Parameter.is_*' set of attributes with
>>>>> a single 'Parameter.kind' attribute, which will have the following
>>>>> possible values: 'positional', 'vararg', 'keyword-only', 'varkwarg'.
>>>>>
>>>>> (I think 'positional' is more intuitive than 'index'?)
>>>>>
>>>>> +1 if this change is made.
>>>>
>>>> How about adding 'kind' and keeping 'is_*' attributes,
>>>> but making them read-only dynamic properties, i.e.:
>>>>
>>>>     class Parameter:
>>>>         ...
>>>>
>>>>         @property
>>>>         def is_vararg(self):
>>>>             return self.kind == 'vararg'
>>>>
>>>>         ...
>>>>
>>>> ?
>>>
>>> Seems a bit bloatly to me. (One way to do it.)
>>
>> Yes, but on the other hand it solves the "strings are error prone"
>> argument, keeps all 'is_*' attributes in sync, and makes them
>> read-only.
>>
>> The 'kind' property may do validation on set, to diminish the
>> probability of mistakes even further.
>
> I agree with Benjamin, it goes against TOOWTDI without enough of a
> justification to break the rule. Just make the strings constants on the
> Parameter class and you solve the lack of enum issue.

My opinion: I don't like the multiple radio-button attributes either.
It is one attribute with multiple values.  We use constants elsewhere.
In the absence of an enum type that maps ints to a string, the constants
should be strings for display and printing.

-- 
Terry Jan Reedy

From yselivanov.ml at gmail.com  Thu Jun 14 20:09:36 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Thu, 14 Jun 2012 14:09:36 -0400
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To: 
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
	<5BE279F4-8E3F-4F92-B08E-93FF046A4816@gmail.com>
Message-ID: 

On 2012-06-14, at 1:51 PM, Terry Reedy wrote:
> On 6/14/2012 6:00 AM, Nick Coghlan wrote:
>>> Just a thought: Do we want to include the docstring? A function's
>>> docstring is often intimately tied to its signature. (Or at least, a
>>> lot of us try to write docstrings that effectively describe the
>>> function's signature)
>>
>> No, combining the signature with other details like the name and
>> docstring is the task of higher level interfaces like pydoc.
>
> Idle tooltips are potentially two lines: the signature and the first
> line of the docstring. (Some builtins need a better first line, but
> thats another issue.)

We've decided to make Signature represent only the call-signature part
of objects.  The docstring, along with the object's name, isn't part
of it.
In any case, it's easy to get all the information you need:

    def introspect_function(func):
        sig = signature(func)
        return (func.__name__, func.__doc__, str(sig))

-
Yury

From ericsnowcurrently at gmail.com  Thu Jun 14 20:23:37 2012
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Thu, 14 Jun 2012 12:23:37 -0600
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To: 
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
	<3F53FF2F-B381-4C60-A00C-E932F3410AA5@gmail.com>
	<49BE4084-AAEC-40B9-AF4B-1274FB285806@gmail.com>
Message-ID: 

On Thu, Jun 14, 2012 at 9:06 AM, R. David Murray wrote:
> I don't have strong feelings about this, but to me the fact that some
> combinations of the individual attribute values would be invalid if they
> occurred on the same object at the same time is a code smell.  If the
> thing can be one and only one of a list of possible types, it makes
> sense to me that this be indicated as a single attribute with a list of
> possible values, rather than a set of boolean options, one for each type.
>
> For the attribute error issue, we could have module attributes that give
> names to the strings:
>
>     if parameter.kind == inspect.VARARG_KIND:
>         stuff
>
> Or if we don't want that in the stdlib, the individual programmer who
> cares about it can define their own constants.

On Thu, Jun 14, 2012 at 11:10 AM, Brett Cannon wrote:
> I agree with Benjamin, it goes against TOOWTDI without enough of a
> justification to break the rule. Just make the strings constants on the
> Parameter class and you solve the lack of enum issue.

+1

-eric

From alexandre.zani at gmail.com  Thu Jun 14 20:36:03 2012
From: alexandre.zani at gmail.com (Alexandre Zani)
Date: Thu, 14 Jun 2012 11:36:03 -0700
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To: 
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
	<3F53FF2F-B381-4C60-A00C-E932F3410AA5@gmail.com>
	<49BE4084-AAEC-40B9-AF4B-1274FB285806@gmail.com>
Message-ID: 

On Thu, Jun 14, 2012 at 10:10 AM, Brett Cannon wrote:
> On Thu, Jun 14, 2012 at 12:39 PM, Yury Selivanov wrote:
>> On 2012-06-14, at 12:32 PM, Benjamin Peterson wrote:
>>> 2012/6/14 Yury Selivanov:
>>>> On 2012-06-14, at 11:24 AM, Brett Cannon wrote:
>>>>> On Thu, Jun 14, 2012 at 9:50 AM, Yury Selivanov wrote:
>>>>>
>>>>> [SNIP]
>>>>>
>>>>> Let's consider replacement of 'Parameter.is_*' set of attributes with
>>>>> a single 'Parameter.kind' attribute, which will have the following
>>>>> possible values: 'positional', 'vararg', 'keyword-only', 'varkwarg'.
>>>>>
>>>>> (I think 'positional' is more intuitive than 'index'?)
>>>>>
>>>>> +1 if this change is made.
>>>>
>>>> How about adding 'kind' and keeping 'is_*' attributes,
>>>> but making them read-only dynamic properties, i.e.:
>>>>
>>>>     class Parameter:
>>>>         ...
>>>>
>>>>         @property
>>>>         def is_vararg(self):
>>>>             return self.kind == 'vararg'
>>>>
>>>>         ...
>>>>
>>>> ?
>>>
>>> Seems a bit bloatly to me. (One way to do it.)
>>
>> Yes, but on the other hand it solves the "strings are error prone"
>> argument, keeps all 'is_*' attributes in sync, and makes them
>> read-only.
>>
>> The 'kind' property may do validation on set, to diminish the
>> probability of mistakes even further.
>
> I agree with Benjamin, it goes against TOOWTDI without enough of a
> justification to break the rule. Just make the strings constants on the
> Parameter class and you solve the lack of enum issue.
> > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/alexandre.zani%40gmail.com > I don't think it really breaks TOOWTDI because you're talking about two use-cases. In one case, you're checking if something is a particular kind of parameter. In the other case, you're doing some sort of dict-based dispatch. I also think is_args etc is cleaner to use when doing a comparison: if param.is_arg: vs if param.kind == param.ARG: That said, it's not a huge deal and so I won't push this any more than I already have. From ethan at stoneleaf.us Thu Jun 14 21:12:57 2012 From: ethan at stoneleaf.us (Ethan Furman) Date: Thu, 14 Jun 2012 12:12:57 -0700 Subject: [Python-Dev] PEP 362 Third Revision In-Reply-To: References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> <3F53FF2F-B381-4C60-A00C-E932F3410AA5@gmail.com> <49BE4084-AAEC-40B9-AF4B-1274FB285806@gmail.com> Message-ID: <4FDA37B9.9000705@stoneleaf.us> Alexandre Zani wrote: > I don't think it really breaks TOOWTDI because you're talking about > two use-cases. In one case, you're checking if something is a > particular kind of parameter. In the other case, you're doing some > sort of dict-based dispatch. I also think is_args etc is cleaner to > use when doing a comparison: > > if param.is_arg: > > vs > > if param.kind == param.ARG: +1 > That said, it's not a huge deal and so I won't push this any more than > I already have. ditto ~Ethan~ From solipsis at pitrou.net Thu Jun 14 21:24:03 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 14 Jun 2012 21:24:03 +0200 Subject: [Python-Dev] PEP 362 Third Revision References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> <3F53FF2F-B381-4C60-A00C-E932F3410AA5@gmail.com> <49BE4084-AAEC-40B9-AF4B-1274FB285806@gmail.com> Message-ID: <20120614212403.669cecf4@pitrou.net> On Thu, 14 Jun 2012 09:32:59 -0700 Benjamin Peterson wrote: > > > > How about adding 'kind' and keeping 'is_*' attributes, > > but making them read-only dynamic properties, i.e.: > > > > ? class Parameter: > > ? ? ? ... > > > > ? ? ? @property > > ? ? ? def is_vararg(self): > > ? ? ? ? ? return self.kind == 'vararg' > > > > ? ? ? ... > > > > ? > > Seems a bit bloatly to me. (One way to do it.) Agreed with Benjamin. Also, the "is_*" attributes are misleading: it looks like they are orthogonal but only one of them can be true at any time. Regards Antoine. From ethan at stoneleaf.us Thu Jun 14 21:46:38 2012 From: ethan at stoneleaf.us (Ethan Furman) Date: Thu, 14 Jun 2012 12:46:38 -0700 Subject: [Python-Dev] PEP 362 Third Revision In-Reply-To: <20120614212403.669cecf4@pitrou.net> References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> <3F53FF2F-B381-4C60-A00C-E932F3410AA5@gmail.com> <49BE4084-AAEC-40B9-AF4B-1274FB285806@gmail.com> <20120614212403.669cecf4@pitrou.net> Message-ID: <4FDA3F9E.4090901@stoneleaf.us> Antoine Pitrou wrote: > On Thu, 14 Jun 2012 09:32:59 -0700 > Benjamin Peterson wrote: >>> How about adding 'kind' and keeping 'is_*' attributes, >>> but making them read-only dynamic properties, i.e.: >>> >>> class Parameter: >>> ... >>> >>> @property >>> def is_vararg(self): >>> return self.kind == 'vararg' >>> >>> ... >>> >>> ? >> Seems a bit bloatly to me. (One way to do it.) > > Agreed with Benjamin. > Also, the "is_*" attributes are misleading: it looks like they are > orthogonal but only one of them can be true at any time. 
This is no different from what we have with strings now: --> 'aA'.islower() False --> 'aA'.isupper() False --> 'a'.islower() True --> 'A'.isupper() True We know that a string cannot be both all-upper and all-lower at the same time; likewise we know a variable cannot be both positional and kwargs. ~Ethan~ From solipsis at pitrou.net Thu Jun 14 21:57:34 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 14 Jun 2012 21:57:34 +0200 Subject: [Python-Dev] PEP 362 Third Revision References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> <3F53FF2F-B381-4C60-A00C-E932F3410AA5@gmail.com> <49BE4084-AAEC-40B9-AF4B-1274FB285806@gmail.com> <20120614212403.669cecf4@pitrou.net> <4FDA3F9E.4090901@stoneleaf.us> Message-ID: <20120614215734.0b8f62bb@pitrou.net> On Thu, 14 Jun 2012 12:46:38 -0700 Ethan Furman wrote: > > This is no different from what we have with strings now: > > --> 'aA'.islower() > False > --> 'aA'.isupper() > False > --> 'a'.islower() > True > --> 'A'.isupper() > True > > We know that a string cannot be both all-upper and all-lower at the same > time; We know that because it's common wisdom for everyone (although who knows what oddities the unicode consortium may come up with in the future). Whether a given function argument may be of several kinds at the same time is much less obvious to most people. Regards Antoine. From benjamin at python.org Thu Jun 14 22:19:19 2012 From: benjamin at python.org (Benjamin Peterson) Date: Thu, 14 Jun 2012 13:19:19 -0700 Subject: [Python-Dev] PEP 362 Third Revision In-Reply-To: <4FDA3F9E.4090901@stoneleaf.us> References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> <3F53FF2F-B381-4C60-A00C-E932F3410AA5@gmail.com> <49BE4084-AAEC-40B9-AF4B-1274FB285806@gmail.com> <20120614212403.669cecf4@pitrou.net> <4FDA3F9E.4090901@stoneleaf.us> Message-ID: 2012/6/14 Ethan Furman : > This is no different from what we have with strings now: > > --> 'aA'.islower() > False > --> 'aA'.isupper() > False > --> 'a'.islower() > True > --> 'A'.isupper() > True > > We know that a string cannot be both all-upper and all-lower at the same > time; likewise we know a variable cannot be both positional and kwargs. This is much less clear cut as there's no clause in the Unicode standard saying (!islower() or !isupper()) must be true. -- Regards, Benjamin From alexandre.zani at gmail.com Thu Jun 14 22:21:03 2012 From: alexandre.zani at gmail.com (Alexandre Zani) Date: Thu, 14 Jun 2012 13:21:03 -0700 Subject: [Python-Dev] PEP 362 Third Revision In-Reply-To: <20120614215734.0b8f62bb@pitrou.net> References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> <3F53FF2F-B381-4C60-A00C-E932F3410AA5@gmail.com> <49BE4084-AAEC-40B9-AF4B-1274FB285806@gmail.com> <20120614212403.669cecf4@pitrou.net> <4FDA3F9E.4090901@stoneleaf.us> <20120614215734.0b8f62bb@pitrou.net> Message-ID: On Thu, Jun 14, 2012 at 12:57 PM, Antoine Pitrou wrote: > On Thu, 14 Jun 2012 12:46:38 -0700 > Ethan Furman wrote: >> >> This is no different from what we have with strings now: >> >> --> 'aA'.islower() >> False >> --> 'aA'.isupper() >> False >> --> 'a'.islower() >> True >> --> 'A'.isupper() >> True >> >> We know that a string cannot be both all-upper and all-lower at the same >> time; > > We know that because it's common wisdom for everyone (although who knows > what oddities the unicode consortium may come up with in the future). > Whether a given function argument may be of several kinds at the same > time is much less obvious to most people. Is it obvious to most people? No. 
Is it obvious to most users of this functionality? I would expect so. This isn't some implementation detail, this is a characteristic of python parameters. If you don't understand it, you are probably not the audience for signature. > > Regards > > Antoine. > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/alexandre.zani%40gmail.com From benjamin at python.org Thu Jun 14 22:24:39 2012 From: benjamin at python.org (Benjamin Peterson) Date: Thu, 14 Jun 2012 13:24:39 -0700 Subject: [Python-Dev] PEP 362 Third Revision In-Reply-To: References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> <3F53FF2F-B381-4C60-A00C-E932F3410AA5@gmail.com> <49BE4084-AAEC-40B9-AF4B-1274FB285806@gmail.com> <20120614212403.669cecf4@pitrou.net> <4FDA3F9E.4090901@stoneleaf.us> <20120614215734.0b8f62bb@pitrou.net> Message-ID: 2012/6/14 Alexandre Zani : > On Thu, Jun 14, 2012 at 12:57 PM, Antoine Pitrou wrote: >> On Thu, 14 Jun 2012 12:46:38 -0700 >> Ethan Furman wrote: >>> >>> This is no different from what we have with strings now: >>> >>> --> 'aA'.islower() >>> False >>> --> 'aA'.isupper() >>> False >>> --> 'a'.islower() >>> True >>> --> 'A'.isupper() >>> True >>> >>> We know that a string cannot be both all-upper and all-lower at the same >>> time; >> >> We know that because it's common wisdom for everyone (although who knows >> what oddities the unicode consortium may come up with in the future). >> Whether a given function argument may be of several kinds at the same >> time is much less obvious to most people. > > Is it obvious to most people? No. Is it obvious to most users of this > functionality? I would expect so. This isn't some implementation > detail, this is a characteristic of python parameters. If you don't > understand it, you are probably not the audience for signature. Consequently, the "kind" model should match up very well with their understanding that a parameter can only be one "kind" at a time. -- Regards, Benjamin From tjreedy at udel.edu Thu Jun 14 22:27:23 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 14 Jun 2012 16:27:23 -0400 Subject: [Python-Dev] PEP 362 Third Revision In-Reply-To: <4FDA3F9E.4090901@stoneleaf.us> References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> <3F53FF2F-B381-4C60-A00C-E932F3410AA5@gmail.com> <49BE4084-AAEC-40B9-AF4B-1274FB285806@gmail.com> <20120614212403.669cecf4@pitrou.net> <4FDA3F9E.4090901@stoneleaf.us> Message-ID: On 6/14/2012 3:46 PM, Ethan Furman wrote: > Antoine Pitrou wrote: >> Also, the "is_*" attributes are misleading: it looks like they are >> orthogonal but only one of them can be true at any time. > > This is no different from what we have with strings now: > > --> 'aA'.islower() > False > --> 'aA'.isupper() > False > --> 'a'.islower() > True > --> 'A'.isupper() > True The analogy does not hold. These are not attributes. They are methods that scan the attributes of individual characters in the string. Also, for many alphabets, characters are both upper and lower case, unless you prefer no case or uncased. Then there is also titlecase. Of course, multiple character strings can be mixed case. So str.casekind does not make much sense. So your example convinces me even more that 'kind' is the way to go ;-). 
--- Letter upper/lower case, if they follow the unicode definition below, are primarily derived from 'Lu' and 'Ll', which are two of about 30 possible general categories (one attribute). But they also use Other_Uppercase and Other_Lowercase. ''' DerivedCoreProperties.txt Lowercase B I Characters with the Lowercase property. For more information, see Chapter 4, Character Properties in [Unicode]. Generated from: Ll + Other_Lowercase Uppercase B I Characters with the Uppercase property. For more information, see Chapter 4, Character Properties in [Unicode]. Generated from: Lu + Other_Uppercase ''' But these are all implementation details depending on the particular organization of the unicode character database. Defining cross-alphabet 'character' properties is a non-trivial endeavor, not at all like argument kind. -- Terry Jan Reedy From rdmurray at bitdance.com Thu Jun 14 22:28:07 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Thu, 14 Jun 2012 16:28:07 -0400 Subject: [Python-Dev] PEP 362 Third Revision In-Reply-To: <20120614215734.0b8f62bb@pitrou.net> References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> <3F53FF2F-B381-4C60-A00C-E932F3410AA5@gmail.com> <49BE4084-AAEC-40B9-AF4B-1274FB285806@gmail.com> <20120614212403.669cecf4@pitrou.net> <4FDA3F9E.4090901@stoneleaf.us> <20120614215734.0b8f62bb@pitrou.net> Message-ID: <20120614202807.C1B9125009E@webabinitio.net> On Thu, 14 Jun 2012 21:57:34 +0200, Antoine Pitrou wrote: > On Thu, 14 Jun 2012 12:46:38 -0700 > Ethan Furman wrote: > > > > This is no different from what we have with strings now: > > > > --> 'aA'.islower() > > False > > --> 'aA'.isupper() > > False > > --> 'a'.islower() > > True > > --> 'A'.isupper() > > True > > > > We know that a string cannot be both all-upper and all-lower at the same > > time; > > We know that because it's common wisdom for everyone (although who knows > what oddities the unicode consortium may come up with in the future). Indeed, there is at least one letter that is used in both upper case and lower case, so the consortium could reasonably declare that it should return True for both isupper and islower :). I'm not going to claim that there was that much foresight in the creation of those two methods. I will, however, note that we aren't perfectly consistent in the application of our rules. --David From steve at pearwood.info Thu Jun 14 22:38:45 2012 From: steve at pearwood.info (Steven D'Aprano) Date: Fri, 15 Jun 2012 06:38:45 +1000 Subject: [Python-Dev] deprecating .pyo and -O In-Reply-To: References: <20120613175810.D7A032500AC@webabinitio.net> <20120613182024.GE4682@unaka.lan> <20120613204650.09ccbe92@pitrou.net> <20120613191355.C82AD25009E@webabinitio.net> <4FD8EBD7.7090500@stoneleaf.us> <20120614122524.01a53741@pitrou.net> Message-ID: <4FDA4BD5.6060108@pearwood.info> Floris Bruynooghe wrote: > On 14 June 2012 11:25, Antoine Pitrou wrote: >> Honestly, I think the best option would be to deprecate .pyo files as >> well as the useless -O option. They only cause confusion without >> providing any significant benefits. > > +1 > > But what happens to __debug__ and assert statements? I think it > should be possible to always put assert statements inside a __debug__ > block and then create -O a simple switch for setting __debug__ to > False. If desired a simple strip tool could then easily remove > __debug__ blocks and (unused) docstrings. So in other words, you want to keep the functionality of -O, but make it the responsibility of the programmer to write an external tool to implement it? 
Apart from the duplication of effort (everyone who wants to optimize
their code has to write their own source-code strip tool), that surely
is only going to decrease the reliability and usefulness of Python
optimization, not increase it.

-O may be under-utilized by programmers who don't need or want it, but
that doesn't mean it isn't useful to those who do want it.

-1 on deprecation.

-- 
Steven

From yselivanov.ml at gmail.com  Thu Jun 14 22:45:08 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Thu, 14 Jun 2012 16:45:08 -0400
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To: 
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
	<3F53FF2F-B381-4C60-A00C-E932F3410AA5@gmail.com>
	<49BE4084-AAEC-40B9-AF4B-1274FB285806@gmail.com>
	<20120614212403.669cecf4@pitrou.net>
	<4FDA3F9E.4090901@stoneleaf.us>
	<20120614215734.0b8f62bb@pitrou.net>
Message-ID: <6E7FA6CE-CA1A-4180-A6F6-A48ED2EF0CD7@gmail.com>

On 2012-06-14, at 4:24 PM, Benjamin Peterson wrote:
> 2012/6/14 Alexandre Zani:
>> On Thu, Jun 14, 2012 at 12:57 PM, Antoine Pitrou wrote:
>>> On Thu, 14 Jun 2012 12:46:38 -0700 Ethan Furman wrote:
>>>>
>>>> This is no different from what we have with strings now:
>>>>
>>>> --> 'aA'.islower()
>>>> False
>>>> --> 'aA'.isupper()
>>>> False
>>>> --> 'a'.islower()
>>>> True
>>>> --> 'A'.isupper()
>>>> True
>>>>
>>>> We know that a string cannot be both all-upper and all-lower at the
>>>> same time;
>>>
>>> We know that because it's common wisdom for everyone (although who knows
>>> what oddities the unicode consortium may come up with in the future).
>>> Whether a given function argument may be of several kinds at the same
>>> time is much less obvious to most people.
>>
>> Is it obvious to most people? No. Is it obvious to most users of this
>> functionality? I would expect so. This isn't some implementation
>> detail, this is a characteristic of python parameters. If you don't
>> understand it, you are probably not the audience for signature.
>
> Consequently, the "kind" model should match up very well with their
> understanding that a parameter can only be one "kind" at a time.

I myself now like the 'kind' attribute more than the 'is_*' family.
Brett and Larry also voted for it, as did the majority here.

I'll amend the PEP this evening to replace 'is_args', 'is_kwargs',
and 'is_keyword_only' with a 'kind' attribute, with possible
values: 'positional', 'vararg', 'varkw', 'kwonly'.

Parameter class will have four constants, respectively:

    class Parameter:
        KIND_POSITIONAL = 'positional'
        KIND_VARARG = 'vararg'
        KIND_VARKW = 'varkw'
        KIND_KWONLY = 'kwonly'

'Parameter.is_implemented' will be renamed to 'Parameter.implemented'

Is everybody OK with this?  Thoughts?
I, for instance, like 'varkwarg' more than 'varkw' (+ it is more consistent with **kwargs) - Yury From solipsis at pitrou.net Thu Jun 14 22:46:36 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 14 Jun 2012 22:46:36 +0200 Subject: [Python-Dev] deprecating .pyo and -O References: <20120613175810.D7A032500AC@webabinitio.net> <20120613182024.GE4682@unaka.lan> <20120613204650.09ccbe92@pitrou.net> <20120613191355.C82AD25009E@webabinitio.net> <4FD8EBD7.7090500@stoneleaf.us> <20120614122524.01a53741@pitrou.net> <4FDA4BD5.6060108@pearwood.info> Message-ID: <20120614224636.37d7a402@pitrou.net> On Fri, 15 Jun 2012 06:38:45 +1000 Steven D'Aprano wrote: > > Apart from the duplication of effort (everyone who wants to optimize their > code has to write their own source-code strip tool), Actually, it could be shipped with Python, or even done dynamically at runtime (instead of relying on separate bytecode files). Regards Antoine. From steve at pearwood.info Thu Jun 14 22:54:28 2012 From: steve at pearwood.info (Steven D'Aprano) Date: Fri, 15 Jun 2012 06:54:28 +1000 Subject: [Python-Dev] deprecating .pyo and -O In-Reply-To: <20120614141454.07ce5feb@pitrou.net> References: <20120613175810.D7A032500AC@webabinitio.net> <20120613182024.GE4682@unaka.lan> <20120613204650.09ccbe92@pitrou.net> <20120613191355.C82AD25009E@webabinitio.net> <4FD8EBD7.7090500@stoneleaf.us> <20120614122524.01a53741@pitrou.net> <20120614141454.07ce5feb@pitrou.net> Message-ID: <4FDA4F84.1050207@pearwood.info> Antoine Pitrou wrote: > Do other high-level languages have similar functionality? Parrot (does anyone actually use Parrot?) has a byte-code optimizer. javac -O is supposed to emit optimized byte-code, but allegedly it is a no-op. On the other hand, the Java ecosystem includes third-party Java compilers which claim to be faster/better than Oracle's compiler, including emitting much tighter byte-code. There are also Java byte-code optimizers such as Proguard and Soot. By default, Perl doesn't write byte-code to files. But when it does, there are various "optimization back-ends" that you can use. Until version 1.9, Ruby didn't even use byte-code at all. -- Steven From solipsis at pitrou.net Thu Jun 14 22:53:37 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 14 Jun 2012 22:53:37 +0200 Subject: [Python-Dev] PEP 362 Third Revision References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> Message-ID: <20120614225337.2b1fe5a9@pitrou.net> On Wed, 13 Jun 2012 22:52:43 -0400 Yury Selivanov wrote: > * is_implemented : bool > True if the parameter is implemented for use. Some platforms > implement functions but can't support specific parameters > (e.g. "mode" for ``os.mkdir``). Passing in an unimplemented > parameter may result in the parameter being ignored, > or in NotImplementedError being raised. It is intended that > all conditions where ``is_implemented`` may be False be > thoroughly documented. I don't understand what the purpose of is_implemented is, or how it is supposed to be computed. > * bind(\*args, \*\*kwargs) -> BoundArguments > Creates a mapping from positional and keyword arguments to > parameters. Raises a ``BindError`` (subclass of ``TypeError``) > if the passed arguments do not match the signature. Why a dedicated exception class? TypeError is good enough, and the proliferation of exception classes is a nuisance. Regards Antoine. 
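Whichever exception type wins, the intended use of bind() is the same.
A rough sketch, assuming the draft's signature() entry point and a
BoundArguments.arguments mapping, neither of which was in the stdlib at
this point:

    from inspect import signature   # the PEP's proposed API

    def check_call(func, *args, **kwargs):
        # Validate a call against func's signature without invoking func.
        sig = signature(func)
        try:
            bound = sig.bind(*args, **kwargs)
        except TypeError as exc:
            print('would not bind:', exc)
            return None
        return bound.arguments       # mapping of parameter name -> value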
From alexandre.zani at gmail.com Thu Jun 14 22:58:02 2012 From: alexandre.zani at gmail.com (Alexandre Zani) Date: Thu, 14 Jun 2012 13:58:02 -0700 Subject: [Python-Dev] PEP 362 Third Revision In-Reply-To: <6E7FA6CE-CA1A-4180-A6F6-A48ED2EF0CD7@gmail.com> References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> <3F53FF2F-B381-4C60-A00C-E932F3410AA5@gmail.com> <49BE4084-AAEC-40B9-AF4B-1274FB285806@gmail.com> <20120614212403.669cecf4@pitrou.net> <4FDA3F9E.4090901@stoneleaf.us> <20120614215734.0b8f62bb@pitrou.net> <6E7FA6CE-CA1A-4180-A6F6-A48ED2EF0CD7@gmail.com> Message-ID: On Thu, Jun 14, 2012 at 1:45 PM, Yury Selivanov wrote: > On 2012-06-14, at 4:24 PM, Benjamin Peterson wrote: > >> 2012/6/14 Alexandre Zani : >>> On Thu, Jun 14, 2012 at 12:57 PM, Antoine Pitrou wrote: >>>> On Thu, 14 Jun 2012 12:46:38 -0700 >>>> Ethan Furman wrote: >>>>> >>>>> This is no different from what we have with strings now: >>>>> >>>>> --> 'aA'.islower() >>>>> False >>>>> --> 'aA'.isupper() >>>>> False >>>>> --> 'a'.islower() >>>>> True >>>>> --> 'A'.isupper() >>>>> True >>>>> >>>>> We know that a string cannot be both all-upper and all-lower at the same >>>>> time; >>>> >>>> We know that because it's common wisdom for everyone (although who knows >>>> what oddities the unicode consortium may come up with in the future). >>>> Whether a given function argument may be of several kinds at the same >>>> time is much less obvious to most people. >>> >>> Is it obvious to most people? No. Is it obvious to most users of this >>> functionality? I would expect so. This isn't some implementation >>> detail, this is a characteristic of python parameters. If you don't >>> understand it, you are probably not the audience for signature. >> >> Consequently, the "kind" model should match up very well with their >> understanding that a parameter can only be one "kind" at a time. > > I myself now like the 'kind' attribute more than 'is_*' family. > Brett and Larry also voted for it, as well the majority here. > > I'll amend the PEP this evening to replace 'is_args', 'is_kwargs', > and 'is_keyword_only' with a 'kind' attribute, with possible > values: 'positional', 'vararg', 'varkw', 'kwonly'. > > Parameter class will have four constants, respectively: > > ? ? class Parameter: > ? ? ? ? KIND_POSITIONAL = 'positional' > ? ? ? ? KIND_VARARG = 'vararg' > ? ? ? ? KIND_VARKW = 'varkw' > ? ? ? ? KIND_KWONLY = 'kwonly' > > 'Parameter.is_implemented' will be renamed to 'Parameter.implemented' > > Is everybody OK with this? ?Thoughts? > > I, for instance, like 'varkwarg' more than 'varkw' (+ it is more > consistent with **kwargs) > > - > Yury How about keyword instead of kwonly? I find kwonly clear when side-by-side with varkw, but ambiguous standalone. I like the idea of using args and kwargs just because those are the defacto standard way we refer to that type of argument. From benjamin at python.org Thu Jun 14 23:00:00 2012 From: benjamin at python.org (Benjamin Peterson) Date: Thu, 14 Jun 2012 14:00:00 -0700 Subject: [Python-Dev] PEP 362 Third Revision In-Reply-To: References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> <3F53FF2F-B381-4C60-A00C-E932F3410AA5@gmail.com> <49BE4084-AAEC-40B9-AF4B-1274FB285806@gmail.com> <20120614212403.669cecf4@pitrou.net> <4FDA3F9E.4090901@stoneleaf.us> <20120614215734.0b8f62bb@pitrou.net> <6E7FA6CE-CA1A-4180-A6F6-A48ED2EF0CD7@gmail.com> Message-ID: 2012/6/14 Alexandre Zani : > How about keyword instead of kwonly? I find kwonly clear when > side-by-side with varkw, but ambiguous standalone. 
> > I like the idea of using args and kwargs just because those are the > defacto standard way we refer to that type of argument. That conflates the name of the parameter with what it does. -- Regards, Benjamin From ethan at stoneleaf.us Thu Jun 14 23:07:07 2012 From: ethan at stoneleaf.us (Ethan Furman) Date: Thu, 14 Jun 2012 14:07:07 -0700 Subject: [Python-Dev] PEP 362 Third Revision In-Reply-To: <6E7FA6CE-CA1A-4180-A6F6-A48ED2EF0CD7@gmail.com> References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> <3F53FF2F-B381-4C60-A00C-E932F3410AA5@gmail.com> <49BE4084-AAEC-40B9-AF4B-1274FB285806@gmail.com> <20120614212403.669cecf4@pitrou.net> <4FDA3F9E.4090901@stoneleaf.us> <20120614215734.0b8f62bb@pitrou.net> <6E7FA6CE-CA1A-4180-A6F6-A48ED2EF0CD7@gmail.com> Message-ID: <4FDA527B.1050703@stoneleaf.us> Yury Selivanov wrote: > I'll amend the PEP this evening to replace 'is_args', 'is_kwargs', > and 'is_keyword_only' with a 'kind' attribute, with possible > values: 'positional', 'vararg', 'varkw', 'kwonly'. > > Parameter class will have four constants, respectively: > > class Parameter: > KIND_POSITIONAL = 'positional' > KIND_VARARG = 'vararg' > KIND_VARKW = 'varkw' > KIND_KWONLY = 'kwonly' > > 'Parameter.is_implemented' will be renamed to 'Parameter.implemented' > > Is everybody OK with this? Thoughts? > > I, for instance, like 'varkwarg' more than 'varkw' (+ it is more > consistent with **kwargs) +1 I like these names, and the similarity between 'vararg' and 'varkw'. I would also be happy with 'args' and 'kwargs'. ~Ethan~ From alexandre.zani at gmail.com Thu Jun 14 23:01:04 2012 From: alexandre.zani at gmail.com (Alexandre Zani) Date: Thu, 14 Jun 2012 14:01:04 -0700 Subject: [Python-Dev] PEP 362 Third Revision In-Reply-To: References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> <3F53FF2F-B381-4C60-A00C-E932F3410AA5@gmail.com> <49BE4084-AAEC-40B9-AF4B-1274FB285806@gmail.com> <20120614212403.669cecf4@pitrou.net> <4FDA3F9E.4090901@stoneleaf.us> <20120614215734.0b8f62bb@pitrou.net> <6E7FA6CE-CA1A-4180-A6F6-A48ED2EF0CD7@gmail.com> Message-ID: On Thu, Jun 14, 2012 at 2:00 PM, Benjamin Peterson wrote: > 2012/6/14 Alexandre Zani : >> How about keyword instead of kwonly? I find kwonly clear when >> side-by-side with varkw, but ambiguous standalone. >> >> I like the idea of using args and kwargs just because those are the >> defacto standard way we refer to that type of argument. > > That conflates the name of the parameter with what it does. Agreed, but also easily understood. > > > -- > Regards, > Benjamin From martin at v.loewis.de Thu Jun 14 22:57:40 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 14 Jun 2012 22:57:40 +0200 Subject: [Python-Dev] deprecating .pyo and -O In-Reply-To: <20120614141454.07ce5feb@pitrou.net> References: <20120613175810.D7A032500AC@webabinitio.net> <20120613182024.GE4682@unaka.lan> <20120613204650.09ccbe92@pitrou.net> <20120613191355.C82AD25009E@webabinitio.net> <4FD8EBD7.7090500@stoneleaf.us> <20120614122524.01a53741@pitrou.net> <20120614141454.07ce5feb@pitrou.net> Message-ID: <4FDA5044.3060603@v.loewis.de> > I don't really see the point. In my experience there is no benefit to > removing assert statements in production mode. This is a C-specific > notion that doesn't really map very well to Python code. Do other > high-level languages have similar functionality? It's not at all C specific. 
C# also has it: http://msdn.microsoft.com/en-us/library/ttcc4x86(v=vs.80).aspx Java makes it a VM option (rather than a compiler option), but it's still a flag to the VM (-enableassertions): http://docs.oracle.com/javase/1.4.2/docs/tooldocs/windows/java.html Delphi also has assertions that can be disabled at compile time. Regards, Martin From aahz at pythoncraft.com Thu Jun 14 23:12:42 2012 From: aahz at pythoncraft.com (Aahz) Date: Thu, 14 Jun 2012 14:12:42 -0700 Subject: [Python-Dev] FWD: Windows 3.2.3 64 bit installers are actually 3.2 Message-ID: <20120614211242.GA26953@panix.com> Note: I'm no-mail on python-dev ----- Forwarded message from Sean Johnson ----- > Date: Thu, 14 Jun 2012 03:48:55 -0400 > From: Sean Johnson > To: webmaster at python.org > Subject: Windows 3.2.3 64 bit installers are actually 3.2 > > The installers on both this page: > > http://www.python.org/getit/releases/3.2.3/ > > and > > http://www.python.org/download/ > > For the x86-64 MSI Installer are both builds for version 3.2, not 3.2.3 (even though the filename says 3.2.3). > > I just tried for about 30 minutes to find out why the input() bug mentioned here: http://bugs.python.org/issue11272 was occuring in what I thought was the latest release - then I realized that my terminal windows stated version 3.2, not 3.2.3 after several uninstalls/installs. ----- End forwarded message ----- -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ https://en.wikipedia.org/wiki/Mary_Anning From yselivanov.ml at gmail.com Thu Jun 14 23:20:47 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Thu, 14 Jun 2012 17:20:47 -0400 Subject: [Python-Dev] PEP 362 Third Revision In-Reply-To: <20120614225337.2b1fe5a9@pitrou.net> References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> <20120614225337.2b1fe5a9@pitrou.net> Message-ID: <8774FD66-63C9-408F-A11C-C049529ECDA8@gmail.com> On 2012-06-14, at 4:53 PM, Antoine Pitrou wrote: > On Wed, 13 Jun 2012 22:52:43 -0400 > Yury Selivanov wrote: > >> * bind(\*args, \*\*kwargs) -> BoundArguments >> Creates a mapping from positional and keyword arguments to >> parameters. Raises a ``BindError`` (subclass of ``TypeError``) >> if the passed arguments do not match the signature. > > Why a dedicated exception class? TypeError is good enough, and the > proliferation of exception classes is a nuisance. Agree. Will fix this. Thanks, - Yury From brian at python.org Thu Jun 14 23:32:23 2012 From: brian at python.org (Brian Curtin) Date: Thu, 14 Jun 2012 16:32:23 -0500 Subject: [Python-Dev] FWD: Windows 3.2.3 64 bit installers are actually 3.2 In-Reply-To: <20120614211242.GA26953@panix.com> References: <20120614211242.GA26953@panix.com> Message-ID: On Thu, Jun 14, 2012 at 4:12 PM, Aahz wrote: > Note: I'm no-mail on python-dev > > ----- Forwarded message from Sean Johnson ----- > >> Date: Thu, 14 Jun 2012 03:48:55 -0400 >> From: Sean Johnson >> To: webmaster at python.org >> Subject: Windows 3.2.3 64 bit installers are actually 3.2 >> >> The installers on both this page: >> >> http://www.python.org/getit/releases/3.2.3/ >> >> and >> >> http://www.python.org/download/ >> >> For the x86-64 MSI Installer are both builds for version 3.2, not 3.2.3 (even though the filename says 3.2.3). >> >> I just tried for about 30 minutes to find out why the input() bug mentioned here: ?http://bugs.python.org/issue11272 was occuring in what I thought was the latest release - then I realized that my terminal windows stated version 3.2, not 3.2.3 after several uninstalls/installs. 
I think you're doing something wrong, either installing a different file
than you just downloaded or not allowing it to overwrite an existing 3.2
installation.  Both links have 3.2.3 labeled installers which I just
downloaded, and both of them install 3.2.3 executables.

From ncoghlan at gmail.com  Fri Jun 15 00:24:16 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Fri, 15 Jun 2012 08:24:16 +1000
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To: <4FDA527B.1050703@stoneleaf.us>
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
	<3F53FF2F-B381-4C60-A00C-E932F3410AA5@gmail.com>
	<49BE4084-AAEC-40B9-AF4B-1274FB285806@gmail.com>
	<20120614212403.669cecf4@pitrou.net>
	<4FDA3F9E.4090901@stoneleaf.us>
	<20120614215734.0b8f62bb@pitrou.net>
	<6E7FA6CE-CA1A-4180-A6F6-A48ED2EF0CD7@gmail.com>
	<4FDA527B.1050703@stoneleaf.us>
Message-ID: 

I like the idea of a kind attribute, but I don't like the current names
for the possible values.

At the very least, "positional only" needs to be supported to handle
nameless parameters in C functions (or those that unpack *args
internally).

The level of abbreviation used also seems unnecessary and internally
inconsistent.

My proposal:

POSITIONAL - positional only
NAMED_POSITIONAL - normal parameter
VAR_POSITIONAL - *args
KEYWORD - keyword only
VAR_KEYWORDS - **kwds

-- 
Sent from my phone, thus the relative brevity :)

On Jun 15, 2012 7:07 AM, "Ethan Furman" wrote:
> Yury Selivanov wrote:
>> I'll amend the PEP this evening to replace 'is_args', 'is_kwargs',
>> and 'is_keyword_only' with a 'kind' attribute, with possible
>> values: 'positional', 'vararg', 'varkw', 'kwonly'.
>>
>> Parameter class will have four constants, respectively:
>>
>>     class Parameter:
>>         KIND_POSITIONAL = 'positional'
>>         KIND_VARARG = 'vararg'
>>         KIND_VARKW = 'varkw'
>>         KIND_KWONLY = 'kwonly'
>>
>> 'Parameter.is_implemented' will be renamed to 'Parameter.implemented'
>>
>> Is everybody OK with this?  Thoughts?
>>
>> I, for instance, like 'varkwarg' more than 'varkw' (+ it is more
>> consistent with **kwargs)
>
> +1
>
> I like these names, and the similarity between 'vararg' and 'varkw'. I
> would also be happy with 'args' and 'kwargs'.
>
> ~Ethan~
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/ncoghlan%40gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From benjamin at python.org  Fri Jun 15 00:37:44 2012
From: benjamin at python.org (Benjamin Peterson)
Date: Thu, 14 Jun 2012 15:37:44 -0700
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To: 
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
	<3F53FF2F-B381-4C60-A00C-E932F3410AA5@gmail.com>
	<49BE4084-AAEC-40B9-AF4B-1274FB285806@gmail.com>
	<20120614212403.669cecf4@pitrou.net>
	<4FDA3F9E.4090901@stoneleaf.us>
	<20120614215734.0b8f62bb@pitrou.net>
	<6E7FA6CE-CA1A-4180-A6F6-A48ED2EF0CD7@gmail.com>
	<4FDA527B.1050703@stoneleaf.us>
Message-ID: 

2012/6/14 Nick Coghlan:
> I like the idea of a kind attribute, but I don't like the current names
> for the possible values.
>
> At the very least, "positional only" needs to be supported to handle
> nameless parameters in C functions (or those that unpack *args
> internally).
>
> The level of abbreviation used also seems unnecessary and internally
> inconsistent.
>
> My proposal:
>
> POSITIONAL - positional only
> NAMED_POSITIONAL - normal parameter

Probably POSITIONAL should be the normal one, and there should be
ONLY_POSITIONAL for the weirdos.

-- 
Regards,
Benjamin

From ethan at stoneleaf.us  Fri Jun 15 00:47:45 2012
From: ethan at stoneleaf.us (Ethan Furman)
Date: Thu, 14 Jun 2012 15:47:45 -0700
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To: 
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
	<3F53FF2F-B381-4C60-A00C-E932F3410AA5@gmail.com>
	<49BE4084-AAEC-40B9-AF4B-1274FB285806@gmail.com>
	<20120614212403.669cecf4@pitrou.net>
	<4FDA3F9E.4090901@stoneleaf.us>
	<20120614215734.0b8f62bb@pitrou.net>
	<6E7FA6CE-CA1A-4180-A6F6-A48ED2EF0CD7@gmail.com>
	<4FDA527B.1050703@stoneleaf.us>
Message-ID: <4FDA6A11.9040801@stoneleaf.us>

Nick Coghlan wrote:
> I like the idea of a kind attribute, but I don't like the current names
> for the possible values.
>
> At the very least, "positional only" needs to be supported to handle
> nameless parameters in C functions (or those that unpack *args
> internally).
>
> The level of abbreviation used also seems unnecessary and internally
> inconsistent.
>
> My proposal:
>
> POSITIONAL - positional only
> NAMED_POSITIONAL - normal parameter
> VAR_POSITIONAL - *args
> KEYWORD - keyword only
> VAR_KEYWORDS - **kwds

I could live with this, too -- as long as we lose the 'S' on
'VAR_KEYWORDS'.  :)

~Ethan~

From ncoghlan at gmail.com  Fri Jun 15 01:16:42 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Fri, 15 Jun 2012 09:16:42 +1000
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To: 
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
	<3F53FF2F-B381-4C60-A00C-E932F3410AA5@gmail.com>
	<49BE4084-AAEC-40B9-AF4B-1274FB285806@gmail.com>
	<20120614212403.669cecf4@pitrou.net>
	<4FDA3F9E.4090901@stoneleaf.us>
	<20120614215734.0b8f62bb@pitrou.net>
	<6E7FA6CE-CA1A-4180-A6F6-A48ED2EF0CD7@gmail.com>
	<4FDA527B.1050703@stoneleaf.us>
Message-ID: 

On Jun 15, 2012 8:37 AM, "Benjamin Peterson" wrote:
>
> 2012/6/14 Nick Coghlan:
> > My proposal:
> > POSITIONAL - positional only
> > NAMED_POSITIONAL - normal parameter
>
> Probably POSITIONAL should be the normal one, and there should be
> ONLY_POSITIONAL for the weirdos.

I did think about that, but it's both inaccurate and internally
inconsistent without further changes. Normal parameters can be specified
by index (as a positional argument) or by name (as a keyword argument).
Thus, the distinctions to be made are between "positional only",
"positional or keyword" and "keyword only".

If positional only parameters are allowed to have names for documentation
purposes (as I believe they should), then using the "positional" kind for
the "positional or keyword" case is just plain wrong. In addition, if the
"only" isn't considered implied in the positional case, then it can't be
implied in the keyword case, either.

That gives the other set of internally consistent names I thought of:

POSITIONAL_ONLY
POSITIONAL_OR_KEYWORD
VAR_POSITIONAL
KEYWORD_ONLY
VAR_KEYWORD

I slightly prefer the first set I posted, but would be fine with this
more explicit approach, too.

Cheers,
Nick.
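To see how the more explicit set lines up against an ordinary
definition, here is an illustrative mapping; the names were still under
discussion at this point, so treat them as provisional:

    def f(a, b=1, *args, c, **kwargs):
        pass

    # a, b   -> POSITIONAL_OR_KEYWORD  (passable by position or by name)
    # args   -> VAR_POSITIONAL         (*args)
    # c      -> KEYWORD_ONLY           (declared after *args)
    # kwargs -> VAR_KEYWORD            (**kwargs)
    #
    # POSITIONAL_ONLY has no pure-Python spelling; it covers C functions
    # whose parameters cannot be passed by name.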
-- Sent from my phone, thus the relative brevity :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From yselivanov.ml at gmail.com Fri Jun 15 03:37:52 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Thu, 14 Jun 2012 21:37:52 -0400 Subject: [Python-Dev] PEP 362 Third Revision In-Reply-To: References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> <3F53FF2F-B381-4C60-A00C-E932F3410AA5@gmail.com> <49BE4084-AAEC-40B9-AF4B-1274FB285806@gmail.com> <20120614212403.669cecf4@pitrou.net> <4FDA3F9E.4090901@stoneleaf.us> <20120614215734.0b8f62bb@pitrou.net> <6E7FA6CE-CA1A-4180-A6F6-A48ED2EF0CD7@gmail.com> <4FDA527B.1050703@stoneleaf.us> Message-ID: On 2012-06-14, at 7:16 PM, Nick Coghlan wrote: > POSITIONAL_ONLY > POSITIONAL_OR_KEYWORD > VAR_POSITIONAL > KEYWORD_ONLY > VAR_KEYWORD I like those. A bit too lengthy and verbose, but the names are consistent. - Yury From larry at hastings.org Fri Jun 15 04:29:34 2012 From: larry at hastings.org (Larry Hastings) Date: Thu, 14 Jun 2012 19:29:34 -0700 Subject: [Python-Dev] PEP 362 Third Revision In-Reply-To: References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> Message-ID: <4FDA9E0E.3020200@hastings.org> On 06/14/2012 05:06 AM, Victor Stinner wrote: >> * is_implemented : bool >> True if the parameter is implemented for use. Some platforms >> implement functions but can't support specific parameters >> (e.g. "mode" for ``os.mkdir``). Passing in an unimplemented >> parameter may result in the parameter being ignored, >> or in NotImplementedError being raised. It is intended that >> all conditions where ``is_implemented`` may be False be >> thoroughly documented. > I suppose that the value depends on the running platform? (For > example, you may get a different value on Linux and Windows.) Exactly. I expect it to vary mainly by platform. //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From larry at hastings.org Fri Jun 15 04:34:36 2012 From: larry at hastings.org (Larry Hastings) Date: Thu, 14 Jun 2012 19:34:36 -0700 Subject: [Python-Dev] PEP 362 Third Revision In-Reply-To: <20120614225337.2b1fe5a9@pitrou.net> References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> <20120614225337.2b1fe5a9@pitrou.net> Message-ID: <4FDA9F3C.2000208@hastings.org> On 06/14/2012 01:53 PM, Antoine Pitrou wrote: >> * is_implemented : bool >> True if the parameter is implemented for use. Some platforms >> implement functions but can't support specific parameters >> (e.g. "mode" for ``os.mkdir``). Passing in an unimplemented >> parameter may result in the parameter being ignored, >> or in NotImplementedError being raised. It is intended that >> all conditions where ``is_implemented`` may be False be >> thoroughly documented. > I don't understand what the purpose of is_implemented is, or how it is > supposed to be computed. It's computed based on locally available functionality. Its purpose is to allow LBYL when using functionality that may not be available on all platforms. See issue 14626 for a specific use-case--which is why I pushed for this. When all the chips fall into place, I expect to have some code that looks like this: os.chown.__signature__.parameters['fd'].is_implemented = sysconfig.get_config_var('HAVE_FCHOWN') That's oversimplified (and almost certainly not spelled correctly) but you get the general idea. os.chown will soon sprout an "fd" parameter, but it will only work on platforms where we have fchown(). Using it will result in a NotImplementedError. 
But some folks want to LBYL.

//arry/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From benjamin at python.org  Fri Jun 15 04:49:44 2012
From: benjamin at python.org (Benjamin Peterson)
Date: Thu, 14 Jun 2012 19:49:44 -0700
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To: <4FDA9F3C.2000208@hastings.org>
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
	<20120614225337.2b1fe5a9@pitrou.net> <4FDA9F3C.2000208@hastings.org>
Message-ID: 

2012/6/14 Larry Hastings :
>
> On 06/14/2012 01:53 PM, Antoine Pitrou wrote:
>
> * is_implemented : bool
>     True if the parameter is implemented for use. Some platforms
>     implement functions but can't support specific parameters
>     (e.g. "mode" for ``os.mkdir``). Passing in an unimplemented
>     parameter may result in the parameter being ignored,
>     or in NotImplementedError being raised. It is intended that
>     all conditions where ``is_implemented`` may be False be
>     thoroughly documented.
>
> I don't understand what the purpose of is_implemented is, or how it is
> supposed to be computed.
>
> It's computed based on locally available functionality. Its purpose is to
> allow LBYL when using functionality that may not be available on all
> platforms. See issue 14626 for a specific use-case--which is why I pushed
> for this.

In that case wouldn't it be nicer to have an os-level attribute a la
os.path.supports_unicode_filenames?

os.supports_atfunctions

is gobs nicer than

os.chown.__signature__.parameters['fd'].is_implemented

Not "implementing" all parameters (whatever exactly that means) is not
a very common case for a function, so I don't see why it needs to
pollute a signature object for every Python function.

--
Regards,
Benjamin

From larry at hastings.org  Fri Jun 15 05:17:26 2012
From: larry at hastings.org (Larry Hastings)
Date: Thu, 14 Jun 2012 20:17:26 -0700
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To: 
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
	<20120614225337.2b1fe5a9@pitrou.net> <4FDA9F3C.2000208@hastings.org>
Message-ID: <4FDAA946.9070607@hastings.org>

On 06/14/2012 07:49 PM, Benjamin Peterson wrote:
> In that case wouldn't it be nicer to have an os-level attribute a la
> os.path.supports_unicode_filenames?
>
> os.supports_atfunctions
>
> is gobs nicer than
>
> os.chown.__signature__.parameters['fd'].is_implemented
>
> Not "implementing" all parameters (whatever exactly that means) is not
> a very common case for a function, so I don't see why it needs to
> pollute a signature object for every Python function.

We can safely agree to disagree here.

Also, it's more granular than that. For example, Python now understands
symbolic links on Windows--but only haphazardly at best. The
"follow_symlinks" argument works on Windows for os.stat() but not for
os.chmod().

//arry/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From benjamin at python.org  Fri Jun 15 05:20:02 2012
From: benjamin at python.org (Benjamin Peterson)
Date: Thu, 14 Jun 2012 20:20:02 -0700
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To: <4FDAA946.9070607@hastings.org>
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
	<20120614225337.2b1fe5a9@pitrou.net> <4FDA9F3C.2000208@hastings.org>
	<4FDAA946.9070607@hastings.org>
Message-ID: 

2012/6/14 Larry Hastings :
> Also, it's more granular than that. For example, Python now understands
> symbolic links on Windows--but only haphazardly at best.
The > "follow_symlinks" argument works on Windows for os.stat() but not for > os.chmod(). Then indeed it's more granular than a parameter being "implemented" or not. A parameter may have a more restricted or extended meaning on different operating systems. (sendfile() on files for example). -- Regards, Benjamin From ncoghlan at gmail.com Fri Jun 15 05:21:20 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 15 Jun 2012 13:21:20 +1000 Subject: [Python-Dev] PEP 362 Third Revision In-Reply-To: References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> <3F53FF2F-B381-4C60-A00C-E932F3410AA5@gmail.com> <49BE4084-AAEC-40B9-AF4B-1274FB285806@gmail.com> <20120614212403.669cecf4@pitrou.net> <4FDA3F9E.4090901@stoneleaf.us> <20120614215734.0b8f62bb@pitrou.net> <6E7FA6CE-CA1A-4180-A6F6-A48ED2EF0CD7@gmail.com> <4FDA527B.1050703@stoneleaf.us> Message-ID: On Fri, Jun 15, 2012 at 11:37 AM, Yury Selivanov wrote: > On 2012-06-14, at 7:16 PM, Nick Coghlan wrote: > >> POSITIONAL_ONLY >> POSITIONAL_OR_KEYWORD >> VAR_POSITIONAL >> KEYWORD_ONLY >> VAR_KEYWORD > > I like those. ?A bit too lengthy and verbose, but the names > are consistent. In this case, I'm willing to trade a bit of verbosity for accuracy. It also suggests a possible, more explicit name for the attribute: "binding". Parameter.binding - describes how argument values are bound to the parameter POSITIONAL_ONLY - value must be supplied as a positional argument [1] POSITIONAL_OR_KEYWORD - value may be supplied as either a keyword or positional argument [2] KEYWORD_ONLY - value must be supplied as a keyword argument [3] VAR_POSITIONAL - a tuple of positional arguments that aren't bound to any other parameter [4] VAR_KEYWORD - a dict of keyword arguments that aren't bound to any other parameter [5] [1] Python has no explicit syntax for defining positional only parameters, but they may be implemented by processing the contents of a VAR_POSITIONAL parameter and customising the contents of __signature__. Many builtin and extension module functions (especially those that accept only one or two parameters) accept positional-only parameters. [2] This is the standard binding behaviour for functions implemented in Python [3] Keyword only parameters are those which appear after a "*" or "*args" entry in a Python function definition. They may also be implemented by processing the contents of a VAR_KEYWORD parameter and customising the contents of __signature__. [4] This corresponds to a "*args" parameter in a Python function definition [5] This corresponds to a "**kwds" parameter in a Python function definition Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From ncoghlan at gmail.com Fri Jun 15 05:41:47 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 15 Jun 2012 13:41:47 +1000 Subject: [Python-Dev] PEP 362 Third Revision In-Reply-To: References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> <20120614225337.2b1fe5a9@pitrou.net> <4FDA9F3C.2000208@hastings.org> <4FDAA946.9070607@hastings.org> Message-ID: On Fri, Jun 15, 2012 at 1:20 PM, Benjamin Peterson wrote: > 2012/6/14 Larry Hastings : >> Also, it's more granular than that.? For example, Python now understands >> symbolic links on Windows--but only haphazardly at best.? The >> "follow_symlinks" argument works on Windows for os.stat() but not for >> os.chmod(). > > Then indeed it's more granular than a parameter being "implemented" or > not. A parameter may have a more restricted or extended meaning on > different operating systems. 
(sendfile() on files for example). I agree with Benjamin here: I'd like to leave the flag out for now. I can see there could be a legitimate use case for something *like* that, but: 1. Context-specific function annotations may be a better answer 2. Context-specific "info" containers (such as sys.flags, sys.int_info, sys.float_info, time.get_clock_info) may be a better answer 3. A multi-valued attribute or an arbitrary string attribute (parameter docstrings, anyone?) may be a better answer There's no need to enshrine a flag for a currently ill-defined concept in the initial version of the API. If it still seems like a good idea by the time 3.4 rolls around, then we can add it than as a new attribute on inspect.Parameter objects Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From larry at hastings.org Fri Jun 15 06:43:34 2012 From: larry at hastings.org (Larry Hastings) Date: Thu, 14 Jun 2012 21:43:34 -0700 Subject: [Python-Dev] PEP 362 Third Revision In-Reply-To: References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> <20120614225337.2b1fe5a9@pitrou.net> <4FDA9F3C.2000208@hastings.org> <4FDAA946.9070607@hastings.org> Message-ID: <4FDABD76.3020705@hastings.org> On 06/14/2012 08:20 PM, Benjamin Peterson wrote: > 2012/6/14 Larry Hastings: >> Also, it's more granular than that. For example, Python now understands >> symbolic links on Windows--but only haphazardly at best. The >> "follow_symlinks" argument works on Windows for os.stat() but not for >> os.chmod(). > Then indeed it's more granular than a parameter being "implemented" or > not. A parameter may have a more restricted or extended meaning on > different operating systems. (sendfile() on files for example). If you can suggest a representation that can convey this sort of subtle complexity without being miserable to use, I for one would be very interested to see it. I suggest that "is_implemented" solves a legitimate problem in a reasonable way; I wasn't attempting to be all things to all use cases. //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From larry at hastings.org Fri Jun 15 07:23:30 2012 From: larry at hastings.org (Larry Hastings) Date: Thu, 14 Jun 2012 22:23:30 -0700 Subject: [Python-Dev] PEP 362 Third Revision In-Reply-To: References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> <20120614225337.2b1fe5a9@pitrou.net> <4FDA9F3C.2000208@hastings.org> <4FDAA946.9070607@hastings.org> Message-ID: <4FDAC6D2.7020807@hastings.org> On 06/14/2012 08:41 PM, Nick Coghlan wrote: > On Fri, Jun 15, 2012 at 1:20 PM, Benjamin Peterson wrote: >> 2012/6/14 Larry Hastings: >>> Also, it's more granular than that. For example, Python now understands >>> symbolic links on Windows--but only haphazardly at best. The >>> "follow_symlinks" argument works on Windows for os.stat() but not for >>> os.chmod(). >> Then indeed it's more granular than a parameter being "implemented" or >> not. A parameter may have a more restricted or extended meaning on >> different operating systems. (sendfile() on files for example). > I agree with Benjamin here: I'd like to leave the flag out for now. I > can see there could be a legitimate use case for something *like* > that, but: > > 1. Context-specific function annotations may be a better answer > 2. Context-specific "info" containers (such as sys.flags, > sys.int_info, sys.float_info, time.get_clock_info) may be a better > answer > 3. A multi-valued attribute or an arbitrary string attribute > (parameter docstrings, anyone?) 
may be a better answer

I disagree that 2. would be better. I would prefer a standardized way
of introspecting the availability of functionality to a collection of
unique approaches stored in unpredictable locations. I disagree with 1.
for much the same reason, though I like it more than 2.--at least it's
bound directly to the function.

Regarding 3., "parameter docstrings" suggest docstrings, which suggest
not-machine-readable. The purpose of having it at all is so one can
LBYL programmatically--if human-readable documentation is sufficient
then we don't need this at all.

As for "multi-valued attribute", I take it you're suggesting something
more complex than "is_implemented". As I just said in a reply to
Benjamin: I can't come up with a representation that's all things to
all people. I contend "is_implemented" solves a legitimate problem in a
reasonable way. If you can propose a superior representation, one that
can convey more complex situations without becoming miserable to use,
I'd like to see it.

However, you appear to be saying you don't know what such a
representation would be--you only conjecture that it *might* exist. I
can't debate hypothetical representations. Furthermore, I suggest that
if such a representation is possible, it would be implementable in
current Python. So again I ask: please suggest a superior
representation. I would be genuinely interested in seeing it. Failing
that, I'd prefer to restrict the discussion to whether or not the use
case merits adding the flag.

(I apologize in advance if I have misrepresented your position.)

> There's no need to enshrine a flag for a currently ill-defined concept
> in the initial version of the API. If it still seems like a good idea
> by the time 3.4 rolls around, then we can add it than as a new
> attribute on inspect.Parameter objects

I disagree with the description "ill-defined". I would be very surprised
indeed if either you or Benjamin genuinely didn't understand exactly what
"is_implemented" represents. If you're suggesting that the documentation
is inadequate we can certainly address that.

Perhaps you meant "ill-conceived"?

//arry/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From eliben at gmail.com  Fri Jun 15 07:43:50 2012
From: eliben at gmail.com (Eli Bendersky)
Date: Fri, 15 Jun 2012 08:43:50 +0300
Subject: [Python-Dev] investigating bot failures for ElementTree
Message-ID: 

Hi,

I committed a significant patch to _elementtree, which causes some bots to
fail on test_xml_etree[_c]. I'm currently investigating the possible cause
of this - and will be disabling a couple of tests in the suite temporarily,
since the problems are most likely due to the ugly monkey-patching the
test_xml_etree is based on.

Eli
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From victor.stinner at gmail.com  Fri Jun 15 08:37:40 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Fri, 15 Jun 2012 08:37:40 +0200
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To: <4FDA9F3C.2000208@hastings.org>
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
	<20120614225337.2b1fe5a9@pitrou.net> <4FDA9F3C.2000208@hastings.org>
Message-ID: 

>> I don't understand what the purpose of is_implemented is, or how it is
>> supposed to be computed.
>
> It's computed based on locally available functionality. Its purpose is to
> allow LBYL when using functionality that may not be available on all
> platforms.
> See issue 14626 for a specific use-case--which is why I pushed
> for this.
>
> When all the chips fall into place, I expect to have some code that looks
> like this:
>
> os.chown.__signature__.parameters['fd'].is_implemented =
>     sysconfig.get_config_var('HAVE_FCHOWN')

(Do you mean "fd" or "dirfd"?)

I don't like such a function, how can it be portable? How do you decide
in your program if you can use it on any platform or not?

I prefer to have a clear distinction between chown() and fchown() for
example. So you can simply test hasattr(os, "fchown") and decide what to
do if the function is not supported.

How do you decide if the parameter is supported or not? For example,
some platforms may not support all available values for a parameter.
Dummy example, maybe not the best one: the clocks supported by
time.clock_gettime() depend heavily on the platform.

Victor

From larry at hastings.org  Fri Jun 15 08:56:29 2012
From: larry at hastings.org (Larry Hastings)
Date: Thu, 14 Jun 2012 23:56:29 -0700
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To: 
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
	<20120614225337.2b1fe5a9@pitrou.net> <4FDA9F3C.2000208@hastings.org>
Message-ID: <4FDADC9D.30309@hastings.org>

On 06/14/2012 11:37 PM, Victor Stinner wrote:
>> os.chown.__signature__.parameters['fd'].is_implemented =
>>     sysconfig.get_config_var('HAVE_FCHOWN')
> (Do you mean "fd" or "dirfd"?)

I meant "fd". "dir_fd" is contingent on fchownat(). But it was only an
example anyway.

> I don't like such a function, how can it be portable?

I suggest that's a separate discussion; please see issue 14626.

> How do you decide in your program if you can use it on any platform or not?

I can suggest two ways:

1) Attempt to use it and catch NotImplementedError.

2) Check the "is_implemented" flag for that parameter in the function's
signature, assuming that detail is accepted as part of PEP 362.

Thank you for independently confirming the legitimacy of the use case
for "is_implemented". ;-)

> How do you decide if the parameter is supported or not?

Please see my example above for one plausible approach. I assume the
code would look different for other implementations.

> For example, some platforms may not support all available values for a parameter.

This is the third time this has been brought up today by my count.
Rather than repeat myself, I ask that you read my remarks elsewhere in
this thread.

//arry/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ncoghlan at gmail.com  Fri Jun 15 09:18:01 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Fri, 15 Jun 2012 17:18:01 +1000
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To: <4FDAC6D2.7020807@hastings.org>
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
	<20120614225337.2b1fe5a9@pitrou.net> <4FDA9F3C.2000208@hastings.org>
	<4FDAA946.9070607@hastings.org> <4FDAC6D2.7020807@hastings.org>
Message-ID: 

On Fri, Jun 15, 2012 at 3:23 PM, Larry Hastings wrote:
> I disagree with the description "ill-defined". I would be very surprised
> indeed if either you or Benjamin genuinely didn't understand exactly what
> "is_implemented" represents. If you're suggesting that the documentation is
> inadequate we can certainly address that.
>
> Perhaps you meant "ill-conceived"?

No, I mean ill-defined. The criteria for when a particular platform
should flip that bit for an arbitrary parameter is highly unclear, as
whether or not a particular parameter is "implemented" or not depends
on the operation and the parameter.
Let's take the "buffering" parameter to the open() builtin. It has three interesting settings: - unbuffered - line buffered - fixed size buffering What counts as "implemented" in that case? Supporting all 3? At least 2? Any 1 of them? If there's a maximum (or minimum) buffer size, does that still count as implemented? To know what "is_implemented" means for any given parameter, it's going to have to be documented *for that parameter*. In that case, better to define an interface specific mechanism that lets people ask the questions they want to ask. It's not appropriate to lump it into a general purpose introspection facility (certainly not one that hasn't even been added yet). Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From solipsis at pitrou.net Fri Jun 15 09:53:55 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 15 Jun 2012 09:53:55 +0200 Subject: [Python-Dev] PEP 362 Third Revision References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> <20120614225337.2b1fe5a9@pitrou.net> <4FDA9F3C.2000208@hastings.org> <4FDAA946.9070607@hastings.org> <4FDABD76.3020705@hastings.org> Message-ID: <20120615095355.35ecf6e7@pitrou.net> On Thu, 14 Jun 2012 21:43:34 -0700 Larry Hastings wrote: > On 06/14/2012 08:20 PM, Benjamin Peterson wrote: > > 2012/6/14 Larry Hastings: > >> Also, it's more granular than that. For example, Python now understands > >> symbolic links on Windows--but only haphazardly at best. The > >> "follow_symlinks" argument works on Windows for os.stat() but not for > >> os.chmod(). > > Then indeed it's more granular than a parameter being "implemented" or > > not. A parameter may have a more restricted or extended meaning on > > different operating systems. (sendfile() on files for example). > > If you can suggest a representation that can convey this sort of subtle > complexity without being miserable to use, I for one would be very > interested to see it. I suggest that "is_implemented" solves a > legitimate problem in a reasonable way; I wasn't attempting to be all > things to all use cases. I don't think it solves a legitimate problem. As Benjamin pointed out, people want to know whether a functionality is supported, not whether a specific parameter is "implemented". Also, the incantation to look up that information on a signature object is definitely too complicated to be helpful. Regards Antoine. From steve at pearwood.info Fri Jun 15 10:02:40 2012 From: steve at pearwood.info (Steven D'Aprano) Date: Fri, 15 Jun 2012 18:02:40 +1000 Subject: [Python-Dev] PEP 362 Third Revision In-Reply-To: <20120614202807.C1B9125009E@webabinitio.net> References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> <3F53FF2F-B381-4C60-A00C-E932F3410AA5@gmail.com> <49BE4084-AAEC-40B9-AF4B-1274FB285806@gmail.com> <20120614212403.669cecf4@pitrou.net> <4FDA3F9E.4090901@stoneleaf.us> <20120614215734.0b8f62bb@pitrou.net> <20120614202807.C1B9125009E@webabinitio.net> Message-ID: <4FDAEC20.5010909@pearwood.info> R. David Murray wrote: >>> We know that a string cannot be both all-upper and all-lower at the same >>> time; >> We know that because it's common wisdom for everyone (although who knows >> what oddities the unicode consortium may come up with in the future). > > Indeed, there is at least one letter that is used in both upper case and > lower case, so the consortium could reasonably declare that it should > return True for both isupper and islower :). 
If you're talking about the German double-s ß, technically it is a
lowercase letter and should be written as SS or SZ in uppercase. Just to
add complication, historically German used to have an uppercase ß, and
in recent years some designers have started to reintroduce it.

(Note: I am not a German speaker, and everything I say above is taken
from Wikipedia.)

--
Steven

From eliben at gmail.com  Fri Jun 15 10:20:40 2012
From: eliben at gmail.com (Eli Bendersky)
Date: Fri, 15 Jun 2012 11:20:40 +0300
Subject: [Python-Dev] investigating bot failures for ElementTree
In-Reply-To: 
References: 
Message-ID: 

On Fri, Jun 15, 2012 at 8:43 AM, Eli Bendersky wrote:

> Hi,
>
> I committed a significant patch to _elementtree, which causes some bots to
> fail on test_xml_etree[_c]. I'm currently investigating the possible cause
> of this - and will be disabling a couple of tests in the suite temporarily,
> since the problems are most likely due to the ugly monkey-patching the
> test_xml_etree is based on.
>
> Eli

That's it, the stable bots are fairly green now. Some Windows failures
were there before my commits also (seems like there's instability in
some tests, like Locks). One test in test_xml_etree remains disabled for
the time being. I've opened http://bugs.python.org/issue15075 to track
this and will continue to investigate. The problem most likely happens
due to the monkey-patching done by the test to import the Python and not
the C implementation of ET. Since it's the test that is problematic and
not the implementation, I've only marked the issue priority as "normal".
Anyone partial to debugging hairy import-related problems, feel free to
chime in ;-)

Eli
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From larry at hastings.org  Fri Jun 15 12:51:16 2012
From: larry at hastings.org (Larry Hastings)
Date: Fri, 15 Jun 2012 03:51:16 -0700
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To: 
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
	<20120614225337.2b1fe5a9@pitrou.net> <4FDA9F3C.2000208@hastings.org>
	<4FDAA946.9070607@hastings.org> <4FDAC6D2.7020807@hastings.org>
Message-ID: <4FDB13A4.8020104@hastings.org>

On 06/15/2012 12:18 AM, Nick Coghlan wrote:
>> Perhaps you meant "ill-conceived"?
> No, I mean ill-defined. The criteria for when a particular platform
> should flip that bit for an arbitrary parameter is highly unclear, as
> whether or not a particular parameter is "implemented" or not depends
> on the operation and the parameter.

I guess I really did do a poor job of explaining it then. Here's
another pass.

My working definition for "is_implemented" is "is this functionality
available at all?" Pressed to produce a stricter definition, I *would*
define "is_implemented" as:

    True if it is possible to produce any valid inputs for the
    parameter on the current platform, and False otherwise.

However, I don't think parameters should be added and removed from a
function's signature based on what functionality is available on the
local platform--that way lies madness. Instead I think it best to
define a no-op default value for every optional parameter, and always
accept that even when the functionality behind it isn't locally
available. Function signatures should be stable. Therefore I amend the
definition to:

    True if it is possible to produce any valid non-default inputs
    for the parameter on the current platform, and False otherwise.
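To illustrate the check this amended definition is meant to support, here
is a hedged sketch. Both the "is_implemented" attribute and the
"__signature__" attribute are the PEP's proposals rather than existing
APIs, and the helper name is invented:

    import os

    def parameter_implemented(func, name):
        # LBYL helper: could a non-default value for *name* work here?
        try:
            sig = func.__signature__       # set by the implementation, per the PEP
            return sig.parameters[name].is_implemented
        except (AttributeError, KeyError):
            return False                   # no metadata available: assume not

    # Hypothetical call site, reusing the os.mkdir "mode" example:
    if parameter_implemented(os.mkdir, 'mode'):
        os.mkdir('spam', mode=0o700)
    else:
        os.mkdir('spam')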
If I understand you correctly, you seem to be trying to apply
"is_implemented" to the problem of predicting which specific inputs to a
parameter would be valid. I don't think that problem is tractable--it's
way too context-specific.

//arry/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From yselivanov.ml at gmail.com  Fri Jun 15 13:25:40 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Fri, 15 Jun 2012 07:25:40 -0400
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To: 
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
	<3F53FF2F-B381-4C60-A00C-E932F3410AA5@gmail.com>
	<49BE4084-AAEC-40B9-AF4B-1274FB285806@gmail.com>
	<20120614212403.669cecf4@pitrou.net> <4FDA3F9E.4090901@stoneleaf.us>
	<20120614215734.0b8f62bb@pitrou.net>
	<6E7FA6CE-CA1A-4180-A6F6-A48ED2EF0CD7@gmail.com>
	<4FDA527B.1050703@stoneleaf.us>
Message-ID: <420B0C05-93C8-47F6-97AE-C1A72768782C@gmail.com>

On 2012-06-14, at 11:21 PM, Nick Coghlan wrote:
> On Fri, Jun 15, 2012 at 11:37 AM, Yury Selivanov
> wrote:
>> On 2012-06-14, at 7:16 PM, Nick Coghlan wrote:
>>
>>> POSITIONAL_ONLY
>>> POSITIONAL_OR_KEYWORD
>>> VAR_POSITIONAL
>>> KEYWORD_ONLY
>>> VAR_KEYWORD
>>
>> I like those. A bit too lengthy and verbose, but the names
>> are consistent.
>
> In this case, I'm willing to trade a bit of verbosity for accuracy. It
> also suggests a possible, more explicit name for the attribute:
> "binding".

Can we keep it called 'kind'?

While technically 'binding' is a more precise description for this, it's
not as intuitive as 'kind'.

Thank you,
- Yury

From ncoghlan at gmail.com  Fri Jun 15 13:32:49 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Fri, 15 Jun 2012 21:32:49 +1000
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To: <4FDB13A4.8020104@hastings.org>
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
	<20120614225337.2b1fe5a9@pitrou.net> <4FDA9F3C.2000208@hastings.org>
	<4FDAA946.9070607@hastings.org> <4FDAC6D2.7020807@hastings.org>
	<4FDB13A4.8020104@hastings.org>
Message-ID: 

On Fri, Jun 15, 2012 at 8:51 PM, Larry Hastings wrote:
> If I understand you correctly, you seem to be trying to apply
> "is_implemented" to the problem of predicting which specific inputs to a
> parameter would be valid. I don't think that problem is tractable--it's way
> too context-specific.

Correct, but the more important point is that I don't think the
question you're proposing to ask is worth answering. I *don't care* if
there is some value that's supported on the current platform, I only
care if the value *I am about to pass* is supported.

Since I don't believe your proposed flag will answer any question that
actually matters in practice, I consider it useless noise that should
be dropped from the proposal.

To go back to my simple buffering parameter example:
1. A hypothetical platform supports line buffered and fixed chunk
buffered IO. Therefore, it sets the "is_implemented" flag for
"buffering" to True (under your proposed definition)
2. My LBYL program checks the flag, sees that it is implemented and
passes "buffering=0"
3. My program fails with NotImplementedError or
UnsupportedOperationError, since my LBYL check wasn't strict enough

A simple "is this parameter implemented?" does not provide enough
useful information to justify being part of the standard signature
objects.

Now, what a function *could* do is set __signature__ to a Signature
subclass that provided an additional "validate()" method, or provided
arbitrary additional information about supported features.
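As a concrete sketch of that subclass idea: validate() and the buffering
rule below are invented for illustration, and inspect.Signature /
inspect.signature stand in for the classes the PEP proposes.

    import inspect

    class PlatformSignature(inspect.Signature):
        # A Signature subclass carrying interface-specific checks.
        def validate(self, *args, **kwargs):
            bound = self.bind(*args, **kwargs)   # TypeError if the call doesn't fit
            # Platform-specific rule, invented for this example:
            if bound.arguments.get('buffering') == 0:
                raise NotImplementedError("unbuffered IO not supported here")

    def my_open(path, buffering=-1):
        pass

    params = inspect.signature(my_open).parameters.values()
    my_open.__signature__ = PlatformSignature(params)

    my_open.__signature__.validate('/tmp/spam', buffering=0)  # raises NotImplementedError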
That's a perfectly reasonable option. But what we definitely *shouldn't* be doing is supporting a niche use case directly on Parameter objects without allowing adequate time to explore alternative solutions that may be better in the long run. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From ncoghlan at gmail.com Fri Jun 15 13:36:54 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 15 Jun 2012 21:36:54 +1000 Subject: [Python-Dev] PEP 362 Third Revision In-Reply-To: <420B0C05-93C8-47F6-97AE-C1A72768782C@gmail.com> References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> <3F53FF2F-B381-4C60-A00C-E932F3410AA5@gmail.com> <49BE4084-AAEC-40B9-AF4B-1274FB285806@gmail.com> <20120614212403.669cecf4@pitrou.net> <4FDA3F9E.4090901@stoneleaf.us> <20120614215734.0b8f62bb@pitrou.net> <6E7FA6CE-CA1A-4180-A6F6-A48ED2EF0CD7@gmail.com> <4FDA527B.1050703@stoneleaf.us> <420B0C05-93C8-47F6-97AE-C1A72768782C@gmail.com> Message-ID: On Fri, Jun 15, 2012 at 9:25 PM, Yury Selivanov wrote: > On 2012-06-14, at 11:21 PM, Nick Coghlan wrote: >> On Fri, Jun 15, 2012 at 11:37 AM, Yury Selivanov >> wrote: >>> On 2012-06-14, at 7:16 PM, Nick Coghlan wrote: >>> >>>> POSITIONAL_ONLY >>>> POSITIONAL_OR_KEYWORD >>>> VAR_POSITIONAL >>>> KEYWORD_ONLY >>>> VAR_KEYWORD >>> >>> I like those. ?A bit too lengthy and verbose, but the names >>> are consistent. >> >> In this case, I'm willing to trade a bit of verbosity for accuracy. It >> also suggests a possible, more explicit name for the attribute: >> "binding". > > Can we keep it called 'kind'? > > While technically 'binding' is more precise description for this, it's > not as intuitive as 'kind'. Sure, I'm still OK with "kind". Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From p.f.moore at gmail.com Fri Jun 15 14:14:41 2012 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 15 Jun 2012 13:14:41 +0100 Subject: [Python-Dev] PEP 362 Third Revision In-Reply-To: References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> <20120614225337.2b1fe5a9@pitrou.net> <4FDA9F3C.2000208@hastings.org> <4FDAA946.9070607@hastings.org> <4FDAC6D2.7020807@hastings.org> <4FDB13A4.8020104@hastings.org> Message-ID: On 15 June 2012 12:32, Nick Coghlan wrote: > Now, what a function *could* do is set __signature__ to a Signature > subclass that provided an additional "validate()" method, or provided > arbitrary additional information about supported features. That's a > perfectly reasonable option. It might be worth mentioning this option in the PEP - it hadn't occurred to me that using subclasses of Signature would be something people could do... Having said that, I also find it hard to imagine a case where I'd check is_implemented (Larry's explanation notwithstanding). Paul From status at bugs.python.org Fri Jun 15 18:07:08 2012 From: status at bugs.python.org (Python tracker) Date: Fri, 15 Jun 2012 18:07:08 +0200 (CEST) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20120615160708.A00591CA8B@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2012-06-08 - 2012-06-15) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. 
Issues counts and deltas: open 3472 (+12) closed 23384 (+30) total 26856 (+42) Open issues with patches: 1464 Issues opened (31) ================== #14599: Windows test_import failure thanks to ImportError.path http://bugs.python.org/issue14599 reopened by skrah #15037: test_curses fails with OverflowError http://bugs.python.org/issue15037 opened by jcbollinger #15038: Optimize python Locks on Windows http://bugs.python.org/issue15038 opened by kristjan.jonsson #15039: module/ found before module.py when both are in the CWD http://bugs.python.org/issue15039 opened by strombrg #15040: stdlib compatability with pypy: mailbox.py http://bugs.python.org/issue15040 opened by mattip #15041: tkinter documentation: update "see also" list http://bugs.python.org/issue15041 opened by weirdink13 #15042: Implemented PyState_AddModule, PyState_RemoveModule http://bugs.python.org/issue15042 opened by Robin.Schreiber #15043: test_gdb is disallowed by default security settings in Fedora http://bugs.python.org/issue15043 opened by ncoghlan #15044: _dbm not building on Fedora 17 http://bugs.python.org/issue15044 opened by ncoghlan #15045: Make textwrap.dedent() consistent with str.splitlines(True) an http://bugs.python.org/issue15045 opened by ncoghlan #15047: Cygwin install (regen) problem http://bugs.python.org/issue15047 opened by jlt63 #15049: line buffering isn't always http://bugs.python.org/issue15049 opened by r.david.murray #15050: Python 3.2.3 fail to make http://bugs.python.org/issue15050 opened by shojnhv #15052: Outdated comments in build_ssl.py http://bugs.python.org/issue15052 opened by jkloth #15053: imp.lock_held() "Changed in Python 3.3" mention accidentally o http://bugs.python.org/issue15053 opened by brett.cannon #15054: bytes literals erroneously tokenized http://bugs.python.org/issue15054 opened by flox #15055: dictnotes.txt is out of date http://bugs.python.org/issue15055 opened by Mark.Shannon #15056: Have imp.cache_from_source() raise NotImplementedError when ca http://bugs.python.org/issue15056 opened by brett.cannon #15061: hmac.secure_compare() leaks information about length of string http://bugs.python.org/issue15061 opened by christian.heimes #15062: argparse: option for a positional argument http://bugs.python.org/issue15062 opened by wrobell #15063: Source code links for JSON documentation http://bugs.python.org/issue15063 opened by louiscipher #15064: multiprocessing should use more context manager http://bugs.python.org/issue15064 opened by sbt #15066: make install error: ImportError: No module named _struct http://bugs.python.org/issue15066 opened by suzhengchun #15067: sqlite3 docs reference PEP 246, which was rejected http://bugs.python.org/issue15067 opened by zach.ware #15068: fileinput requires two EOF when reading stdin http://bugs.python.org/issue15068 opened by jason.coombs #15071: TLS get keys and randoms http://bugs.python.org/issue15071 opened by llaniscudani #15074: Strange behaviour of python cmd module. 
(Ignores slash) http://bugs.python.org/issue15074 opened by jsevilleja #15075: XincludeTest failure in test_xml_etree http://bugs.python.org/issue15075 opened by eli.bendersky #15076: Sometimes couldn't import os, shown 'import site' failed, use http://bugs.python.org/issue15076 opened by qtld614 #15077: Regexp match goes into infinite loop http://bugs.python.org/issue15077 opened by moriyoshi #15078: Change os.sendfile so its arguments are stable http://bugs.python.org/issue15078 opened by larry Most recent 15 issues with no replies (15) ========================================== #15076: Sometimes couldn't import os, shown 'import site' failed, use http://bugs.python.org/issue15076 #15064: multiprocessing should use more context manager http://bugs.python.org/issue15064 #15063: Source code links for JSON documentation http://bugs.python.org/issue15063 #15054: bytes literals erroneously tokenized http://bugs.python.org/issue15054 #15045: Make textwrap.dedent() consistent with str.splitlines(True) an http://bugs.python.org/issue15045 #15039: module/ found before module.py when both are in the CWD http://bugs.python.org/issue15039 #15037: test_curses fails with OverflowError http://bugs.python.org/issue15037 #15032: Provide a select.select implemented using select.poll http://bugs.python.org/issue15032 #15025: httplib and http.client are missing response messages for defi http://bugs.python.org/issue15025 #15018: Incomplete Python LDFLAGS and CPPFLAGS used for extension modu http://bugs.python.org/issue15018 #15010: unittest: _top_level_dir is incorrectly persisted between call http://bugs.python.org/issue15010 #14999: ctypes ArgumentError lists arguments from 1, not 0 http://bugs.python.org/issue14999 #14995: PyLong_FromString documentation should state that the string m http://bugs.python.org/issue14995 #14991: Option for regex groupdict() to show only matching names http://bugs.python.org/issue14991 #14988: _elementtree: Raise ImportError when importing of pyexpat fail http://bugs.python.org/issue14988 Most recent 15 issues waiting for review (15) ============================================= #15068: fileinput requires two EOF when reading stdin http://bugs.python.org/issue15068 #15063: Source code links for JSON documentation http://bugs.python.org/issue15063 #15061: hmac.secure_compare() leaks information about length of string http://bugs.python.org/issue15061 #15055: dictnotes.txt is out of date http://bugs.python.org/issue15055 #15047: Cygwin install (regen) problem http://bugs.python.org/issue15047 #15044: _dbm not building on Fedora 17 http://bugs.python.org/issue15044 #15042: Implemented PyState_AddModule, PyState_RemoveModule http://bugs.python.org/issue15042 #15040: stdlib compatability with pypy: mailbox.py http://bugs.python.org/issue15040 #15038: Optimize python Locks on Windows http://bugs.python.org/issue15038 #15036: mailbox.mbox fails to pop two items in a row, flushing in betw http://bugs.python.org/issue15036 #15031: Split .pyc parsing from module loading http://bugs.python.org/issue15031 #15030: PyPycLoader can't read cached .pyc files http://bugs.python.org/issue15030 #15027: Faster UTF-32 encoding http://bugs.python.org/issue15027 #15026: Faster UTF-16 encoding http://bugs.python.org/issue15026 #15022: types.SimpleNamespace needs to be picklable http://bugs.python.org/issue15022 Top 10 most discussed issues (10) ================================= #15061: hmac.secure_compare() leaks information about length of string http://bugs.python.org/issue15061 54 msgs #15068: 
fileinput requires two EOF when reading stdin http://bugs.python.org/issue15068 19 msgs #12982: Document that importing .pyo files needs python -O http://bugs.python.org/issue12982 14 msgs #14599: Windows test_import failure thanks to ImportError.path http://bugs.python.org/issue14599 8 msgs #14850: The inconsistency of codecs.charmap_decode http://bugs.python.org/issue14850 8 msgs #14119: Ability to adjust queue size in Executors http://bugs.python.org/issue14119 7 msgs #15040: stdlib compatability with pypy: mailbox.py http://bugs.python.org/issue15040 7 msgs #13598: string.Formatter doesn't support empty curly braces "{}" http://bugs.python.org/issue13598 6 msgs #13691: pydoc help (or help('help')) should show the doc for help http://bugs.python.org/issue13691 6 msgs #4442: document immutable type subclassing via __new__ http://bugs.python.org/issue4442 4 msgs Issues closed (27) ================== #3955: maybe doctest doesn't understand unicode_literals? http://bugs.python.org/issue3955 closed by r.david.murray #7699: strptime, strftime documentation http://bugs.python.org/issue7699 closed by r.david.murray #8028: self.terminate() from a multiprocessing.Process raises Attribu http://bugs.python.org/issue8028 closed by sbt #8289: multiprocessing.Process.__init__ pickles all arguments http://bugs.python.org/issue8289 closed by sbt #10037: multiprocessing.pool processes started by worker handler stops http://bugs.python.org/issue10037 closed by sbt #10133: multiprocessing: conn_recv_string() broken error handling http://bugs.python.org/issue10133 closed by sbt #10469: test_socket fails using Visual Studio 2010 http://bugs.python.org/issue10469 closed by kristjan.jonsson #12897: Support for iterators in multiprocessing map http://bugs.python.org/issue12897 closed by sbt #13841: multiprocessing should use sys.exit() where possible http://bugs.python.org/issue13841 closed by sbt #13857: Add textwrap.indent() as counterpart to textwrap.dedent() http://bugs.python.org/issue13857 closed by python-dev #14908: datetime.datetime should have a timestamp() method http://bugs.python.org/issue14908 closed by belopolsky #14936: PEP 3121, 384 refactoring applied to curses_panel module http://bugs.python.org/issue14936 closed by loewis #14968: Section "Inplace Operators" of :mod:`operator` should be a sub http://bugs.python.org/issue14968 closed by eric.araujo #15015: Access to non-existing "future" attribute in error path of fut http://bugs.python.org/issue15015 closed by bquinlan #15021: xmlrpc server hangs http://bugs.python.org/issue15021 closed by Abhishek.Singh #15046: Missing cast to Py_ssize_t in socket_connection.c http://bugs.python.org/issue15046 closed by amaury.forgeotdarc #15048: Manually Installed Python Includes System Wide Paths http://bugs.python.org/issue15048 closed by ronaldoussoren #15051: Can't compile Python 3.3a4 on OS X http://bugs.python.org/issue15051 closed by ronaldoussoren #15057: Potential Bug in mpd_qdivint and mpd_qrem http://bugs.python.org/issue15057 closed by skrah #15058: Potential Bug in dlpvalloc and dlvalloc http://bugs.python.org/issue15058 closed by amaury.forgeotdarc #15059: Potential Bug in mpd_qresize and mpd_qresize_zero http://bugs.python.org/issue15059 closed by skrah #15060: docs: socket typo http://bugs.python.org/issue15060 closed by sandro.tosi #15065: strftime format string %F %T consistency problem http://bugs.python.org/issue15065 closed by r.david.murray #15069: Dictionary Creation Fails with integer key http://bugs.python.org/issue15069 closed by 
amaury.forgeotdarc #15070: VS9.0 build doesn't work http://bugs.python.org/issue15070 closed by pitrou #15072: Segfault on OSX http://bugs.python.org/issue15072 closed by ronaldoussoren #15073: commands.getoutput() does not work on windows http://bugs.python.org/issue15073 closed by amaury.forgeotdarc From benjamin at python.org Fri Jun 15 18:52:05 2012 From: benjamin at python.org (Benjamin Peterson) Date: Fri, 15 Jun 2012 09:52:05 -0700 Subject: [Python-Dev] PEP 362 Third Revision In-Reply-To: <4FDB13A4.8020104@hastings.org> References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> <20120614225337.2b1fe5a9@pitrou.net> <4FDA9F3C.2000208@hastings.org> <4FDAA946.9070607@hastings.org> <4FDAC6D2.7020807@hastings.org> <4FDB13A4.8020104@hastings.org> Message-ID: 2012/6/15 Larry Hastings : > If I understand you correctly, you seem to be trying to apply > "is_implemented" to the problem of predicting which specific inputs to a > parameter would be valid.? I don't think that problem is tractable--it's way > too context-specific. Exactly! It's too context sensitive to belong on a generic signature object. Without is_implemented, all the properties of the signature object should only change if you alter the parameter list. How a parameter is dealt with in the function should not affect the signature of a function. -- Regards, Benjamin From alexandre.zani at gmail.com Fri Jun 15 19:21:00 2012 From: alexandre.zani at gmail.com (Alexandre Zani) Date: Fri, 15 Jun 2012 10:21:00 -0700 Subject: [Python-Dev] PEP 362 Third Revision In-Reply-To: References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> <20120614225337.2b1fe5a9@pitrou.net> <4FDA9F3C.2000208@hastings.org> <4FDAA946.9070607@hastings.org> <4FDAC6D2.7020807@hastings.org> <4FDB13A4.8020104@hastings.org> Message-ID: On Fri, Jun 15, 2012 at 9:52 AM, Benjamin Peterson wrote: > 2012/6/15 Larry Hastings : >> If I understand you correctly, you seem to be trying to apply >> "is_implemented" to the problem of predicting which specific inputs to a >> parameter would be valid.? I don't think that problem is tractable--it's way >> too context-specific. > > Exactly! It's too context sensitive to belong on a generic signature > object. Without is_implemented, all the properties of the signature > object should only change if you alter the parameter list. How a > parameter is dealt with in the function should not affect the > signature of a function. > I agree. It seems to me is_implemented solves too small a class of the problem it attacks to be worth including in the signature. > > -- > Regards, > Benjamin > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/alexandre.zani%40gmail.com From larry at hastings.org Fri Jun 15 20:17:09 2012 From: larry at hastings.org (Larry Hastings) Date: Fri, 15 Jun 2012 11:17:09 -0700 Subject: [Python-Dev] PEP 362 Third Revision In-Reply-To: References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> <20120614225337.2b1fe5a9@pitrou.net> <4FDA9F3C.2000208@hastings.org> <4FDAA946.9070607@hastings.org> <4FDAC6D2.7020807@hastings.org> <4FDB13A4.8020104@hastings.org> Message-ID: <4FDB7C25.20209@hastings.org> On 06/15/2012 04:32 AM, Nick Coghlan wrote: > Since I don't believe your proposed flag will answer any question that > actually matters in practice, I consider it useless noise that should > be dropped from the proposal. 
I can cite a situation where it matters in practice: the implementation of os.fwalk is effectively gated on hasattr(posix, "openat"). I expect to remove openat() in favor of adding a dir_fd parameter to open (see issue 14626). > Now, what a function *could* do is set __signature__ to a Signature > subclass that provided an additional "validate()" method, or provided > arbitrary additional information about supported features. That's a > perfectly reasonable option. What would the validate() function for os.close do? //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdmurray at bitdance.com Fri Jun 15 20:46:09 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Fri, 15 Jun 2012 14:46:09 -0400 Subject: [Python-Dev] PEP 362 Third Revision In-Reply-To: <4FDB7C25.20209@hastings.org> References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> <20120614225337.2b1fe5a9@pitrou.net> <4FDA9F3C.2000208@hastings.org> <4FDAA946.9070607@hastings.org> <4FDAC6D2.7020807@hastings.org> <4FDB13A4.8020104@hastings.org> <4FDB7C25.20209@hastings.org> Message-ID: <20120615184609.708A6250031@webabinitio.net> On Fri, 15 Jun 2012 11:17:09 -0700, Larry Hastings wrote: > On 06/15/2012 04:32 AM, Nick Coghlan wrote: > > Since I don't believe your proposed flag will answer any question that > > actually matters in practice, I consider it useless noise that should > > be dropped from the proposal. > > I can cite a situation where it matters in practice: the implementation > of os.fwalk is effectively gated on hasattr(posix, "openat"). I expect > to remove openat() in favor of adding a dir_fd parameter to open (see > issue 14626). I don't think that justifies adding an attribute to __signature__, though. As someone pointed out, it isn't part of the function's signature, it is part of the function's function. Adding a os.have_openat seems more reasonable than adding is_implemented to every __signature__ object. And more useful, as well; it provides a much more specific piece of information. > > Now, what a function *could* do is set __signature__ to a Signature > > subclass that provided an additional "validate()" method, or provided > > arbitrary additional information about supported features. That's a > > perfectly reasonable option. > > What would the validate() function for os.close do? Why would you need one? --David From larry at hastings.org Fri Jun 15 20:52:16 2012 From: larry at hastings.org (Larry Hastings) Date: Fri, 15 Jun 2012 11:52:16 -0700 Subject: [Python-Dev] PEP 362 Third Revision In-Reply-To: References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> <20120614225337.2b1fe5a9@pitrou.net> <4FDA9F3C.2000208@hastings.org> <4FDAA946.9070607@hastings.org> <4FDAC6D2.7020807@hastings.org> <4FDB13A4.8020104@hastings.org> Message-ID: <4FDB8460.5010804@hastings.org> On Fri, Jun 15, 2012 at 9:52 AM, Benjamin Peterson wrote: > 2012/6/15 Larry Hastings: >> If I understand you correctly, you seem to be trying to apply >> "is_implemented" to the problem of predicting which specific inputs to a >> parameter would be valid. I don't think that problem is tractable--it's way >> too context-specific. > Exactly! It's too context sensitive to belong on a generic signature > object. Without is_implemented, all the properties of the signature > object should only change if you alter the parameter list. How a > parameter is dealt with in the function should not affect the > signature of a function. 
My opinion is that function introspection allows you to answer questions
about that function, and the question "Can I use this parameter at all?"
is relevant.

On 06/15/2012 10:21 AM, Alexandre Zani wrote:
> I agree. It seems to me is_implemented solves too small a class of the
> problem it attacks to be worth including in the signature.

I concede that I appear to be in an extremely small minority. (Has a
single other person stepped forward in support of is_implemented? I
don't recall one.)

//arry/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From larry at hastings.org  Fri Jun 15 21:04:53 2012
From: larry at hastings.org (Larry Hastings)
Date: Fri, 15 Jun 2012 12:04:53 -0700
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To: <20120615184609.708A6250031@webabinitio.net>
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
	<20120614225337.2b1fe5a9@pitrou.net> <4FDA9F3C.2000208@hastings.org>
	<4FDAA946.9070607@hastings.org> <4FDAC6D2.7020807@hastings.org>
	<4FDB13A4.8020104@hastings.org> <4FDB7C25.20209@hastings.org>
	<20120615184609.708A6250031@webabinitio.net>
Message-ID: <4FDB8755.3040707@hastings.org>

On 06/15/2012 11:46 AM, R. David Murray wrote:
> Adding a os.have_openat seems more reasonable than adding is_implemented
> to every __signature__ object. And more useful, as well; it provides
> a much more specific piece of information.

We already have "os.have_openat"; it's spelled
sysconfig.get_config_var('HAVE_OPENAT'). But, assuming I land issue
14626, this leads us to:

Q: Can I use the dir_fd parameter to os.open?
A: Only if sysconfig.get_config_var('HAVE_OPENAT') is true.

Q: Can I use the fd parameter to os.utime?
A: Only if sysconfig.get_config_var('HAVE_FUTIMENS') or
   sysconfig.get_config_var('HAVE_FUTIMES') is true.

I feel this interface lacks civility.

//arry/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From benjamin at python.org  Fri Jun 15 21:06:43 2012
From: benjamin at python.org (Benjamin Peterson)
Date: Fri, 15 Jun 2012 12:06:43 -0700
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To: <4FDB8755.3040707@hastings.org>
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com>
	<20120614225337.2b1fe5a9@pitrou.net> <4FDA9F3C.2000208@hastings.org>
	<4FDAA946.9070607@hastings.org> <4FDAC6D2.7020807@hastings.org>
	<4FDB13A4.8020104@hastings.org> <4FDB7C25.20209@hastings.org>
	<20120615184609.708A6250031@webabinitio.net>
	<4FDB8755.3040707@hastings.org>
Message-ID: 

2012/6/15 Larry Hastings :
> On 06/15/2012 11:46 AM, R. David Murray wrote:
>
> Adding a os.have_openat seems more reasonable than adding is_implemented
> to every __signature__ object. And more useful, as well; it provides
> a much more specific piece of information.
>
> We already have "os.have_openat"; it's spelled
> sysconfig.get_config_var('HAVE_OPENAT'). But, assuming I land issue 14626,
> this leads us to:
>
> Q: Can I use the dir_fd parameter to os.open?
> A: Only if sysconfig.get_config_var('HAVE_OPENAT') is true.
>
> Q: Can I use the fd parameter to os.utime?
> A: Only if sysconfig.get_config_var('HAVE_FUTIMENS') or
> sysconfig.get_config_var('HAVE_FUTIMES') is true.
>
> I feel this interface lacks civility.

There's no reason this couldn't be wrapped into some sort of os level
attribute or function.
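One possible shape for such an os-level check, sketched with invented
names (no such registry existed when this was written): a set of functions
whose fd-accepting variants are locally available, so the test stays
granular per function rather than per C-level configure symbol.

    import os

    # Invented registry: functions that can accept a file descriptor
    # on this platform, populated from what the build actually found.
    supports_fd = set()
    if hasattr(os, 'fchown'):
        supports_fd.add(os.chown)
    if hasattr(os, 'fchmod'):
        supports_fd.add(os.chmod)

    # LBYL at the call site is a simple membership check:
    if getattr(os, 'chown', None) in supports_fd:
        print("os.chown can work on a file descriptor here")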
-- Regards, Benjamin From larry at hastings.org Fri Jun 15 21:31:03 2012 From: larry at hastings.org (Larry Hastings) Date: Fri, 15 Jun 2012 12:31:03 -0700 Subject: [Python-Dev] PEP 362 Third Revision In-Reply-To: References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> <20120614225337.2b1fe5a9@pitrou.net> <4FDA9F3C.2000208@hastings.org> <4FDAA946.9070607@hastings.org> <4FDAC6D2.7020807@hastings.org> <4FDB13A4.8020104@hastings.org> <4FDB7C25.20209@hastings.org> <20120615184609.708A6250031@webabinitio.net> <4FDB8755.3040707@hastings.org> Message-ID: <4FDB8D77.9080509@hastings.org> On 06/15/2012 12:06 PM, Benjamin Peterson wrote: > 2012/6/15 Larry Hastings: >> On 06/15/2012 11:46 AM, R. David Murray wrote: >> >> Adding a os.have_openat seems more reasonable than adding is_implemented >> to every __signature__ object. And more useful, as well; it provides >> a much more specific piece of information. >> >> >> We already have "os.have_openat"; it's spelled >> sysconfig.get_config_var('HAVE_OPENAT'). But, assuming I land issue 14626, >> this leads us to: >> >> Q: Can I use the dir_fd parameter to os.open? >> A: Only if sysconfig.get_config_var('HAVE_OPENAT') is true. >> >> Q: Can I use the fd parameter to os.utime? >> A: Only if sysconfig.get_config_var('HAVE_FUTIMENS') or >> sysconfig.get_config_var('HAVE_FUTIMES') is true. >> >> I feel this interface lacks civility. > There's no reason this couldn't be wrapped into some sort of os level > attribute or function. No, but how would you spell it in a graceful way? It really is specific to a particular parameter on a particular function. And there are a bunch of parameters that are available if any one of a couple C functions is locally available--fd and follow_symlinks on utime, follow_symlinks on chown. follow_symlinks for stat is available if you have lstat *or* you're on Windows. ('HAS_LSTAT' isn't set on Windows, we don't use configure there.) So certainly I don't like the idea of just checking if the C function(s) is (are) available. Note that I'm genuinely interested in your answer--"is_implemented" appears to have a groundswell of anti-support and I rather suspect will be axed. Meantime I still need to solve this problem. I can't say I like any of these: os.can_use(os.stat, "fd") # too generic os.can_use_fd(os.stat) # too specific Binding it to the function itself seems to be Just Right to me. But since they're PyCFunctionObjects I can't add arbitrary attributes. (Perhaps the right thing would be to take this discussion to issue 14626.) //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From yselivanov.ml at gmail.com Fri Jun 15 21:34:43 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Fri, 15 Jun 2012 15:34:43 -0400 Subject: [Python-Dev] PEP 362 Third Revision In-Reply-To: <4FDB8D77.9080509@hastings.org> References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> <20120614225337.2b1fe5a9@pitrou.net> <4FDA9F3C.2000208@hastings.org> <4FDAA946.9070607@hastings.org> <4FDAC6D2.7020807@hastings.org> <4FDB13A4.8020104@hastings.org> <4FDB7C25.20209@hastings.org> <20120615184609.708A6250031@webabinitio.net> <4FDB8755.3040707@hastings.org> <4FDB8D77.9080509@hastings.org> Message-ID: On 2012-06-15, at 3:31 PM, Larry Hastings wrote: > (Perhaps the right thing would be to take this discussion to issue 14626.) Let's keep it in this thread while it's related to Signature. 
From larry at hastings.org  Fri Jun 15 21:39:41 2012
From: larry at hastings.org (Larry Hastings)
Date: Fri, 15 Jun 2012 12:39:41 -0700
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To:
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> <20120614225337.2b1fe5a9@pitrou.net> <4FDA9F3C.2000208@hastings.org> <4FDAA946.9070607@hastings.org> <4FDAC6D2.7020807@hastings.org> <4FDB13A4.8020104@hastings.org> <4FDB7C25.20209@hastings.org> <20120615184609.708A6250031@webabinitio.net> <4FDB8755.3040707@hastings.org> <4FDB8D77.9080509@hastings.org>
Message-ID: <4FDB8F7D.1020905@hastings.org>

On 06/15/2012 12:34 PM, Yury Selivanov wrote:
> On 2012-06-15, at 3:31 PM, Larry Hastings wrote:
>> (Perhaps the right thing would be to take this discussion to issue 14626.)
> Let's keep it in this thread while it's related to Signature.

I can assure you, however Benjamin might spell it, it won't use Signature ;-)

//arry/

From yselivanov.ml at gmail.com  Fri Jun 15 21:50:25 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Fri, 15 Jun 2012 15:50:25 -0400
Subject: [Python-Dev] PEP 362: 4th edition
Message-ID: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com>

Hello,

The new revision of PEP 362 has been posted:
http://www.python.org/dev/peps/pep-0362/

Summary:

1. Signature's 'is_args', 'is_kwargs', and 'is_keyword_only' were all replaced with a single 'kind' attribute.  (Nick, I borrowed your description of the 'kind' attribute and its possible values.)

2. 'signature()' checks for the '__wrapped__' attribute on all callables.

3. 'POSITIONAL_ONLY' parameters should be fully supported by 'bind()' and other Signature class methods.

Open questions:

1. Should we keep 'Parameter.implemented' or not.  *Please vote*

2. Should 'Signature.format()', instead of 7 or so arguments, accept just a SignatureFormatter class that implements all the formatting logic and is easy to override?

All in all, I think the PEP is almost ready for acceptance.  As 3.3 beta 1 is just a week away, I want to ask for a BDFAP (Nick?)

Please also do the patch review: http://bugs.python.org/issue15008

Thank you!


PEP: 362
Title: Function Signature Object
Version: $Revision$
Last-Modified: $Date$
Author: Brett Cannon , Jiwon Seo , Yury Selivanov , Larry Hastings
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 21-Aug-2006
Python-Version: 3.3
Post-History: 04-Jun-2012


Abstract
========

Python has always supported powerful introspection capabilities, including introspecting functions and methods (for the rest of this PEP, "function" refers to both functions and methods).  By examining a function object you can fully reconstruct the function's signature.  Unfortunately this information is stored in an inconvenient manner, and is spread across a half-dozen deeply nested attributes.

This PEP proposes a new representation for function signatures.  The new representation contains all necessary information about a function and its parameters, and makes introspection easy and straightforward.

However, this object does not replace the existing function metadata, which is used by Python itself to execute those functions.  The new metadata object is intended solely to make function introspection easier for Python programmers.


Signature Object
================

A Signature object represents the call signature of a function and its return annotation.
For each parameter accepted by the function it stores a `Parameter object`_ in its ``parameters`` collection.

A Signature object has the following public attributes and methods:

* return_annotation : object
    The annotation for the return type of the function if specified.
    If the function has no annotation for its return type, this
    attribute is not set.

* parameters : OrderedDict
    An ordered mapping of parameters' names to the corresponding
    Parameter objects (keyword-only arguments are in the same order
    as listed in ``code.co_varnames``).

* bind(\*args, \*\*kwargs) -> BoundArguments
    Creates a mapping from positional and keyword arguments to
    parameters.  Raises a ``TypeError`` if the passed arguments do
    not match the signature.

* bind_partial(\*args, \*\*kwargs) -> BoundArguments
    Works the same way as ``bind()``, but allows the omission of
    some required arguments (mimics ``functools.partial`` behavior.)
    Raises a ``TypeError`` if the passed arguments do not match the
    signature.

* format(...) -> str
    Formats the Signature object to a string.  Optional arguments
    allow for custom render functions for parameter names,
    annotations and default values, along with custom separators.

Signature implements the ``__str__`` method, which falls back to the ``Signature.format()`` call.

It's possible to test Signatures for equality.  Two signatures are equal when they have equal parameters and return annotations.

Changes to the Signature object, or to any of its data members, do not affect the function itself.


Parameter Object
================

Python's expressive syntax means functions can accept many different kinds of parameters with many subtle semantic differences.  We propose a rich Parameter object designed to represent any possible function parameter.

The structure of the Parameter object is:

* name : str
    The name of the parameter as a string.

* default : object
    The default value for the parameter, if specified.  If the
    parameter has no default value, this attribute is not set.

* annotation : object
    The annotation for the parameter if specified.  If the parameter
    has no annotation, this attribute is not set.

* kind : str
    Describes how argument values are bound to the parameter.
    Possible values:

    * ``Parameter.POSITIONAL_ONLY`` - value must be supplied as a
      positional argument.  Python has no explicit syntax for defining
      positional-only parameters, but many builtin and extension
      module functions (especially those that accept only one or two
      parameters) accept them.

    * ``Parameter.POSITIONAL_OR_KEYWORD`` - value may be supplied as
      either a keyword or positional argument (this is the standard
      binding behaviour for functions implemented in Python.)

    * ``Parameter.KEYWORD_ONLY`` - value must be supplied as a keyword
      argument.  Keyword-only parameters are those which appear after
      a "*" or "\*args" entry in a Python function definition.

    * ``Parameter.VAR_POSITIONAL`` - a tuple of positional arguments
      that aren't bound to any other parameter.  This corresponds to a
      "\*args" parameter in a Python function definition.

    * ``Parameter.VAR_KEYWORD`` - a dict of keyword arguments that
      aren't bound to any other parameter.  This corresponds to a
      "\*\*kwds" parameter in a Python function definition.

* implemented : bool
    True if the parameter is implemented for use.  Some platforms
    implement functions but can't support specific parameters (e.g.
    "mode" for ``os.mkdir``).  Passing in an unimplemented parameter
    may result in the parameter being ignored, or in
    NotImplementedError being raised.  It is intended that all
    conditions where ``implemented`` may be False be thoroughly
    documented.

Two parameters are equal when all their attributes are equal.
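By way of illustration only (this is not part of the specification), the attributes above are enough to summarize any callable's parameters against the proposed API::

    from inspect import signature

    def describe(callable):
        # Print each parameter's name, kind, and default (if any).
        # 'default' may be absent entirely, hence the getattr().
        for param in signature(callable).parameters.values():
            default = getattr(param, 'default', '<required>')
            print(param.name, param.kind, default)

    def f(a, b=10, *args, c, **kwargs):
        pass

    describe(f)   # lists a, b, args, c, kwargs with their kinds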
BoundArguments Object
=====================

Result of a ``Signature.bind`` call.  Holds the mapping of arguments to the function's parameters.

Has the following public attributes:

* arguments : OrderedDict
    An ordered, mutable mapping of parameters' names to arguments'
    values.  Does not contain arguments' default values.

* args : tuple
    Tuple of positional argument values.  Dynamically computed from
    the 'arguments' attribute.

* kwargs : dict
    Dict of keyword argument values.  Dynamically computed from the
    'arguments' attribute.

The ``arguments`` attribute should be used in conjunction with ``Signature.parameters`` for any argument processing purposes.  The ``args`` and ``kwargs`` properties can be used to invoke functions:

::

    def test(a, *, b):
        ...

    sig = signature(test)
    ba = sig.bind(10, b=20)
    test(*ba.args, **ba.kwargs)


Implementation
==============

The implementation adds a new function ``signature()`` to the ``inspect`` module.  The function is the preferred way of getting a ``Signature`` for a callable object.

The function implements the following algorithm:

- If the object is not callable - raise a TypeError

- If the object has a ``__signature__`` attribute and if it is not
  ``None`` - return a deepcopy of it

    - If it is ``None`` and the object is an instance of
      ``BuiltinFunction``, raise a ``ValueError``

- If it has a ``__wrapped__`` attribute, return
  ``signature(object.__wrapped__)``

- If the object is an instance of ``FunctionType``, construct and
  return a new ``Signature`` for it

- If the object is a method or a classmethod, construct and return a
  new ``Signature`` object, with its first parameter (usually ``self``
  or ``cls``) removed

- If the object is a staticmethod, construct and return a new
  ``Signature`` object

- If the object is an instance of ``functools.partial``, construct a
  new ``Signature`` from its ``partial.func`` attribute, and account
  for already bound ``partial.args`` and ``partial.kwargs``

- If the object is a class or metaclass:

    - If the object's type has a ``__call__`` method defined in its
      MRO, return a Signature for it

    - If the object has a ``__new__`` method defined in its class,
      return a Signature object for it

    - If the object has a ``__init__`` method defined in its class,
      return a Signature object for it

- Return ``signature(object.__call__)``

Note that the ``Signature`` object is created in a lazy manner, and is not automatically cached.  If, however, the Signature object was explicitly cached by the user, ``signature()`` returns a new deepcopy of it on each invocation.

An implementation for Python 3.3 can be found at [#impl]_.  The python issue tracking the patch is [#issue]_.


Design Considerations
=====================

No implicit caching of Signature objects
----------------------------------------

The first PEP design had a provision for implicit caching of ``Signature`` objects in the ``inspect.signature()`` function.  However, this has the following downsides:

* If the ``Signature`` object is cached then any changes to the
  function it describes will not be reflected in it.  However, if
  caching is needed, it can always be done manually and explicitly.

* It is better to reserve the ``__signature__`` attribute for the
  cases when there is a need to explicitly set a ``Signature`` object
  that is different from the actual one.
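As noted above, explicit caching is just an opt-in attribute assignment; a sketch (not part of the specification)::

    import inspect

    def cache_signature(func):
        # Opt-in caching: signature(func) will now return a deepcopy
        # of this stored object instead of recomputing it each time.
        func.__signature__ = inspect.signature(func)
        return func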
Examples
========

Visualizing Callable Objects' Signature
---------------------------------------

Let's define some classes and functions:

::

    from inspect import signature
    from functools import partial, wraps


    class FooMeta(type):
        def __new__(mcls, name, bases, dct, *, bar:bool=False):
            return super().__new__(mcls, name, bases, dct)

        def __init__(cls, name, bases, dct, **kwargs):
            return super().__init__(name, bases, dct)


    class Foo(metaclass=FooMeta):
        def __init__(self, spam:int=42):
            self.spam = spam

        def __call__(self, a, b, *, c) -> tuple:
            return a, b, c


    def shared_vars(*shared_args):
        """Decorator factory that defines shared variables that are
        passed to every invocation of the function"""

        def decorator(f):
            @wraps(f)
            def wrapper(*args, **kwds):
                full_args = shared_args + args
                return f(*full_args, **kwds)

            # Override signature
            sig = wrapper.__signature__ = signature(f)
            for __ in shared_args:
                sig.parameters.popitem(last=False)

            return wrapper
        return decorator


    @shared_vars({})
    def example(_state, a, b, c):
        return _state, a, b, c


    def format_signature(obj):
        return str(signature(obj))

Now, in the python REPL:

::

    >>> format_signature(FooMeta)
    '(name, bases, dct, *, bar:bool=False)'

    >>> format_signature(Foo)
    '(spam:int=42)'

    >>> format_signature(Foo.__call__)
    '(self, a, b, *, c) -> tuple'

    >>> format_signature(Foo().__call__)
    '(a, b, *, c) -> tuple'

    >>> format_signature(partial(Foo().__call__, 1, c=3))
    '(b, *, c=3) -> tuple'

    >>> format_signature(partial(partial(Foo().__call__, 1, c=3), 2, c=20))
    '(*, c=20) -> tuple'

    >>> format_signature(example)
    '(a, b, c)'

    >>> format_signature(partial(example, 1, 2))
    '(c)'

    >>> format_signature(partial(partial(example, 1, b=2), c=3))
    '(b=2, c=3)'


Annotation Checker
------------------

::

    import inspect
    import functools

    def checktypes(func):
        '''Decorator to verify arguments and return types

        Example:

            >>> @checktypes
            ... def test(a:int, b:str) -> int:
            ...     return int(a * b)

            >>> test(10, '1')
            1111111111

            >>> test(10, 1)
            Traceback (most recent call last):
              ...
            ValueError: test: wrong type of 'b' argument, 'str' expected, got 'int'
        '''

        sig = inspect.signature(func)

        types = {}
        for param in sig.parameters.values():
            # Iterate through the function's parameters and build the
            # list of argument types
            try:
                type_ = param.annotation
            except AttributeError:
                continue
            else:
                if not inspect.isclass(type_):
                    # Not a type, skip it
                    continue
                types[param.name] = type_

                # If the argument has a type specified, let's check that its
                # default value (if present) conforms with the type.
                try:
                    default = param.default
                except AttributeError:
                    continue
                else:
                    if not isinstance(default, type_):
                        raise ValueError("{func}: wrong type of a default value for {arg!r}".
                                         format(func=func.__qualname__, arg=param.name))

        def check_type(sig, arg_name, arg_type, arg_value):
            # Internal function that encapsulates argument type checking
            if not isinstance(arg_value, arg_type):
                raise ValueError("{func}: wrong type of {arg!r} argument, "
                                 "{exp!r} expected, got {got!r}".
                                 format(func=func.__qualname__, arg=arg_name,
                                        exp=arg_type.__name__,
                                        got=type(arg_value).__name__))

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # Let's bind the arguments
            ba = sig.bind(*args, **kwargs)
            for arg_name, arg in ba.arguments.items():
                # And iterate through the bound arguments
                try:
                    type_ = types[arg_name]
                except KeyError:
                    continue
                else:
                    # OK, we have a type for the argument; let's get the
                    # corresponding parameter description from the
                    # signature object
                    param = sig.parameters[arg_name]
                    if param.kind == param.VAR_POSITIONAL:
                        # If this parameter is a variable-argument parameter,
                        # then we need to check each of its values
                        for value in arg:
                            check_type(sig, arg_name, type_, value)
                    elif param.kind == param.VAR_KEYWORD:
                        # If this parameter is a variable-keyword-argument parameter:
                        for subname, value in arg.items():
                            check_type(sig, arg_name + ':' + subname, type_, value)
                    else:
                        # And, finally, if this parameter is a regular one:
                        check_type(sig, arg_name, type_, arg)

            result = func(*ba.args, **ba.kwargs)

            # The last bit - let's check that the result is correct
            try:
                return_type = sig.return_annotation
            except AttributeError:
                # Looks like we don't have any restriction on the return type
                pass
            else:
                if isinstance(return_type, type) and not isinstance(result, return_type):
                    raise ValueError('{func}: wrong return type, {exp} expected, got {got}'.
                                     format(func=func.__qualname__,
                                            exp=return_type.__name__,
                                            got=type(result).__name__))
            return result

        return wrapper


Render Function Signature to HTML
---------------------------------

::

    import inspect

    def format_to_html(func):
        sig = inspect.signature(func)
        html = sig.format(token_params_separator=',',
                          token_colon=':',
                          token_eq='=',
                          token_return_annotation='->',
                          token_left_paren='(',
                          token_right_paren=')',
                          token_kwonly_separator='*',
                          format_name=lambda name: '<b>' + name + '</b>')
        return '<span class="signature">{}</span>'.format(html)


References
==========

.. [#impl] pep362 branch (https://bitbucket.org/1st1/cpython/overview)
.. [#issue] issue 15008 (http://bugs.python.org/issue15008)


Copyright
=========

This document has been placed in the public domain.


..
   Local Variables:
   mode: indented-text
   indent-tabs-mode: nil
   sentence-end-double-space: t
   fill-column: 70
   coding: utf-8
   End:

From larry at hastings.org  Fri Jun 15 21:56:16 2012
From: larry at hastings.org (Larry Hastings)
Date: Fri, 15 Jun 2012 12:56:16 -0700
Subject: [Python-Dev] PEP 362: 4th edition
In-Reply-To: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com>
References: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com>
Message-ID: <4FDB9360.7050804@hastings.org>

On 06/15/2012 12:50 PM, Yury Selivanov wrote:
> Open questions:
>
> 1. Should we keep 'Parameter.implemented' or not.  *Please vote*

+1 to keeping Parameter.implemented.

Let's get this over with,

//arry/

From solipsis at pitrou.net  Fri Jun 15 22:10:00 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Fri, 15 Jun 2012 22:10:00 +0200
Subject: [Python-Dev] PEP 362: 4th edition
References: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com>
Message-ID: <20120615221000.11351eed@pitrou.net>

On Fri, 15 Jun 2012 15:50:25 -0400
Yury Selivanov wrote:
> Open questions:
>
> 1. Should we keep 'Parameter.implemented' or not.  *Please vote*

-1 to keeping it.

Regards

Antoine.
From alexandre.zani at gmail.com  Fri Jun 15 22:17:24 2012
From: alexandre.zani at gmail.com (Alexandre Zani)
Date: Fri, 15 Jun 2012 13:17:24 -0700
Subject: [Python-Dev] PEP 362: 4th edition
In-Reply-To: <4FDB9360.7050804@hastings.org>
References: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com> <4FDB9360.7050804@hastings.org>
Message-ID:

-1 implemented

It appears to target the problem of platform-dependent parameters.  However, as was discussed previously, much more common than a parameter simply not being supported on a platform is a parameter supporting different values on different platforms.  As such, I think that it solves too small a subset of the problem that it attacks to be useful.  Raising exceptions solves the problem.

Furthermore, whether a parameter is implemented or not is not properly a part of the callable's signature.  It's a part of the function's internals.

On Fri, Jun 15, 2012 at 12:56 PM, Larry Hastings wrote:
> On 06/15/2012 12:50 PM, Yury Selivanov wrote:
>
> Open questions:
>
> 1. Should we keep 'Parameter.implemented' or not.  *Please vote*
>
> +1 to keeping Parameter.implemented.
>
> Let's get this over with,
>
> /arry

From victor.stinner at gmail.com  Fri Jun 15 22:48:42 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Fri, 15 Jun 2012 22:48:42 +0200
Subject: [Python-Dev] PEP 362: 4th edition
In-Reply-To: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com>
References: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com>
Message-ID:

> 1. Should we keep 'Parameter.implemented' or not.  *Please vote*

It looks like only a few functions of the os module will use it.  Even for the os module, I'm not convinced that it is the appropriate solution to indicate whether a feature is supported or not.  I would prefer something like os.path.supports_unicode_filename, hasattr(os, 'fork'), or something else.  And such an attribute might only be useful if the changes proposed in issue #14626 are accepted, but the issue is still open and not everybody agrees on the idea...

> - If the object has a ``__signature__`` attribute and if it
>   is not ``None`` - return a deepcopy of it

I still disagree with the deepcopy.  I read somewhere that Python developers are consenting adults.  If someone really wants to modify a Signature, it would be nice to provide a simple method to copy it.  But I don't see why it should be copied *by default*.  I expect that modifying a signature is rarer than just reading a signature.

> - If it is ``None`` and the object is an instance of
>   ``BuiltinFunction``, raise a ``ValueError``

Would it be possible to only create a signature for a builtin the first time its __signature__ attribute is read?  I don't know how to implement such behaviour on a builtin function.  I don't know if it's important to decide this right now.

I don't want to create a signature at startup if it is not used, because it would waste memory (as docstrings? :-)).  Or we might drop __signature__ if python -O is used?

Victor
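In pure Python, the lazy compute-on-first-read behaviour Victor asks about could be sketched with a non-data descriptor; for real builtins it would have to live at the C level, so the following is purely illustrative:

    import inspect

    class lazy_signature:
        # Non-data descriptor: the signature is computed on first read,
        # then cached in the instance dict, which shadows the descriptor
        # on all later lookups.
        def __get__(self, obj, objtype=None):
            if obj is None:
                return self
            sig = inspect.signature(obj.func)
            obj.__dict__['__signature__'] = sig
            return sig

    class Wrapper:
        __signature__ = lazy_signature()

        def __init__(self, func):
            self.func = func

        def __call__(self, *args, **kwargs):
            return self.func(*args, **kwargs)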
From rdmurray at bitdance.com  Fri Jun 15 23:03:43 2012
From: rdmurray at bitdance.com (R. David Murray)
Date: Fri, 15 Jun 2012 17:03:43 -0400
Subject: [Python-Dev] PEP 362: 4th edition
In-Reply-To:
References: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com>
Message-ID: <20120615210343.87279250031@webabinitio.net>

On Fri, 15 Jun 2012 22:48:42 +0200, Victor Stinner wrote:
> > 1. Should we keep 'Parameter.implemented' or not.  *Please vote*

-1 to implemented.

> I still disagree with the deepcopy.  I read somewhere that Python
> developers are consenting adults.  If someone really wants to modify a
> Signature, it would be nice to provide a simple method to copy it.  But
> I don't see why it should be copied *by default*.  I expect that
> modifying a signature is rarer than just reading a signature.

The issue isn't "consenting adults", the issue is consistency.  Without the deepcopy, sometimes what you get back from the inspect function is freely modifiable and sometimes it is not.  That inconsistency is a bad thing.

--David

From yselivanov.ml at gmail.com  Fri Jun 15 23:07:46 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Fri, 15 Jun 2012 17:07:46 -0400
Subject: [Python-Dev] PEP 362: 4th edition
In-Reply-To:
References: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com>
Message-ID:

On 2012-06-15, at 4:48 PM, Victor Stinner wrote:
[snip]
> Would it be possible to only create a signature for a builtin the first
> time its __signature__ attribute is read?  I don't know how to
> implement such behaviour on a builtin function.
>
> I don't want to create a signature at startup if it is not used,
> because it would waste memory (as docstrings? :-)).

I think when we have the working mechanism to generate them in place, we can make it lazy.

> Or we might drop
> __signature__ if python -O is used?

No, that'd be unacceptable.

- Yury

From solipsis at pitrou.net  Fri Jun 15 23:13:16 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Fri, 15 Jun 2012 23:13:16 +0200
Subject: [Python-Dev] PEP 362: 4th edition
References: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com>
Message-ID: <20120615231316.583422a4@pitrou.net>

On Fri, 15 Jun 2012 17:07:46 -0400
Yury Selivanov wrote:
> I think when we have the working mechanism to generate them in place,
> we can make it lazy.

I'm not sure I understand.  The PEP already says signatures are computed lazily.  Is there an exception for built-in functions?

Regards

Antoine.

From larry at hastings.org  Fri Jun 15 23:24:58 2012
From: larry at hastings.org (Larry Hastings)
Date: Fri, 15 Jun 2012 14:24:58 -0700
Subject: [Python-Dev] PEP 362 implementation issue: C callables
Message-ID: <4FDBA82A.8030709@hastings.org>

One wrinkle of PEP 362 in CPython is callables implemented directly in C.  Right now there's no good way to generate such signatures automatically.  That's why we kept __signature__ in the spec, to allow implementors to generate signatures by hand for these callables.

But the wrinkle gets wrinklier.
The problem is, you often can't add arbitrary attributes to these objects:

    >>> import os
    >>> os.stat.random_new_attribute = 3
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    AttributeError: 'builtin_function_or_method' object has no attribute 'random_new_attribute'

If users are to attach precomputed signatures to callables implemented in C, those callables may need to *already* have a __signature__ attribute.

I've already added __signature__ to PyCFunctionObject in the PEP 362 patch.  This takes care of the vast bulk of such callables.  But there are a bunch of obscure callable objects in CPython, and I suspect some of them may need a __signature__ attribute.  Here's an almost-complete list, listed as their Python type:

    functools.KeyWrapper (result of functools.cmp_to_key)
    weakref.weakref
    method (bound method objects, aka "x = Class(); x.method")
    instancemethod (is this still even used? it's never used in trunk)
    operator.itemgetter
    operator.attrgetter
    operator.methodcaller
    _json.Scanner (result of _json.make_scanner)
    _json.Encoder (result of _json.make_encoder)
    _ctypes.DictRemover (I'm not even sure how you can get your hands on one of these)
    sqlite3.Connection

I produced this by grepping for "/* tp_call */" and ignoring all of them that were set to "0", then creating one of those objects and trying (and failing) to set a __signature__ attribute.

There are four more candidates I found with the grep but couldn't figure out how to instantiate and test.  They have to do with the descriptor protocol, aka properties, but the types aren't directly exposed by Python.  They're all defined in Objects/descrobject.c.  The internal class names are:

    method_descriptor
    classmethod_descriptor
    wrapper_descriptor
    method-wrapper (you get one of these out of a "wrapper_descriptor")

I'm happy to do the work, but I don't have a good sense of which callables need a __signature__ attribute.  Can someone here tell me which ones might need a __signature__ attribute?

//arry/

From yselivanov.ml at gmail.com  Fri Jun 15 23:26:25 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Fri, 15 Jun 2012 17:26:25 -0400
Subject: [Python-Dev] PEP 362: 4th edition
In-Reply-To: <20120615231316.583422a4@pitrou.net>
References: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com> <20120615231316.583422a4@pitrou.net>
Message-ID:

On 2012-06-15, at 5:13 PM, Antoine Pitrou wrote:
> On Fri, 15 Jun 2012 17:07:46 -0400
> Yury Selivanov wrote:
>> I think when we have the working mechanism to generate them in place,
>> we can make it lazy.
>
> I'm not sure I understand.  The PEP already says signatures are computed
> lazily.  Is there an exception for built-in functions?

Right now, if there is no '__signature__' attribute set on a builtin function, there is no way of generating it (PyCFunctionObject doesn't have __code__), so a ValueError will be raised.
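For example, under the proposed behaviour the failure mode would look something like this (hypothetical session; the exact error message is unspecified):

    >>> import inspect
    >>> inspect.signature(len)      # builtin, no __signature__ set
    Traceback (most recent call last):
      ...
    ValueError: ...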
Maybe, if in the future we replace PyArg_ParseTuple and family with something that can generate metadata for builtins (or generate a Signature object, or generate some callable that generates it, or something else), we can make 'signature()' use it.  But again, that's not for 3.3.

And yes, signature() is still lazy.

- Yury

From solipsis at pitrou.net  Fri Jun 15 23:30:01 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Fri, 15 Jun 2012 23:30:01 +0200
Subject: [Python-Dev] PEP 362: 4th edition
In-Reply-To:
References: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com> <20120615231316.583422a4@pitrou.net>
Message-ID: <20120615233001.29e33120@pitrou.net>

On Fri, 15 Jun 2012 17:26:25 -0400
Yury Selivanov wrote:
[snip]
> Right now, if there is no '__signature__' attribute set on a builtin
> function, there is no way of generating it (PyCFunctionObject doesn't
> have __code__), so a ValueError will be raised.

Ok, but what does this mean for 3.3?  Does the PEP propose that all builtins get a non-lazy __signature__, or simply that ValueError always be raised?

Regards

Antoine.

From yselivanov.ml at gmail.com  Fri Jun 15 23:33:29 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Fri, 15 Jun 2012 17:33:29 -0400
Subject: [Python-Dev] PEP 362 implementation issue: C callables
In-Reply-To: <4FDBA82A.8030709@hastings.org>
References: <4FDBA82A.8030709@hastings.org>
Message-ID: <8059E06E-6D49-471F-9DC1-F7BF4A244183@gmail.com>

On 2012-06-15, at 5:24 PM, Larry Hastings wrote:
[snip]
> I'm happy to do the work, but I don't have a good sense of which
> callables need a __signature__ attribute.  Can someone here tell me
> which ones might need a __signature__ attribute?

I think we may need to be able to write the __signature__ attribute to 'classmethod', 'staticmethod', 'functools.partial' and 'method' ('types.MethodType').

- Yury

From yselivanov.ml at gmail.com  Fri Jun 15 23:35:41 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Fri, 15 Jun 2012 17:35:41 -0400
Subject: [Python-Dev] PEP 362: 4th edition
In-Reply-To: <20120615233001.29e33120@pitrou.net>
References: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com> <20120615231316.583422a4@pitrou.net> <20120615233001.29e33120@pitrou.net>
Message-ID:

On 2012-06-15, at 5:30 PM, Antoine Pitrou wrote:
[snip]
> Ok, but what does this mean for 3.3?  Does the PEP propose that all
> builtins get a non-lazy __signature__, or simply that ValueError always
> be raised?

Simply ValueError.

- Yury

From larry at hastings.org  Fri Jun 15 23:38:03 2012
From: larry at hastings.org (Larry Hastings)
Date: Fri, 15 Jun 2012 14:38:03 -0700
Subject: [Python-Dev] PEP 362: 4th edition
In-Reply-To: <20120615231316.583422a4@pitrou.net>
References: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com> <20120615231316.583422a4@pitrou.net>
Message-ID: <4FDBAB3B.9040304@hastings.org>

On 06/15/2012 02:13 PM, Antoine Pitrou wrote:
> I'm not sure I understand.  The PEP already says signatures are computed
> lazily.  Is there an exception for built-in functions?

Right now we have no way to automatically generate signatures for built-in functions.  So, as of current trunk, any such signatures would have to be built by hand.

If we could somehow produce signature information in C, what then?  Two possible approaches suggest themselves:

1. Pre-generate the signatures and cache them in the __signature__ attribute.
2. Add a callback, perhaps named __get_signature__(), which calculates the signature and returns it.

Both approaches have downsides.  The former means generation is not lazy and therefore consumes more memory.  The latter adds a new attribute to builtin functions.  Which is a better approach?  Who knows?  We don't even have the signature information in C yet.  As we all learned from Star Trek VI, the future is an undiscovered country.  So the PEP deliberately makes no ruling here.

//arry/

From yselivanov.ml at gmail.com  Fri Jun 15 23:38:10 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Fri, 15 Jun 2012 17:38:10 -0400
Subject: [Python-Dev] PEP 362: 4th edition
In-Reply-To:
References: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com> <20120615231316.583422a4@pitrou.net> <20120615233001.29e33120@pitrou.net>
Message-ID: <5316D6E6-27E0-469F-98E5-43E09235DDEE@gmail.com>

On 2012-06-15, at 5:35 PM, Yury Selivanov wrote:
[snip]
> Simply ValueError.

Although, Larry added a __signature__ attribute to PyCFunctionObject (None by default).  So those who want to manually create signatures for their 'C' functions would be able to do that.

- Yury

From steve at pearwood.info  Fri Jun 15 23:48:19 2012
From: steve at pearwood.info (Steven D'Aprano)
Date: Sat, 16 Jun 2012 07:48:19 +1000
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To: <8774FD66-63C9-408F-A11C-C049529ECDA8@gmail.com>
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> <20120614225337.2b1fe5a9@pitrou.net> <8774FD66-63C9-408F-A11C-C049529ECDA8@gmail.com>
Message-ID: <4FDBADA3.10606@pearwood.info>

Yury Selivanov wrote:
> On 2012-06-14, at 4:53 PM, Antoine Pitrou wrote:
>> On Wed, 13 Jun 2012 22:52:43 -0400
>> Yury Selivanov wrote:
>>> * bind(\*args, \*\*kwargs) -> BoundArguments
>>>     Creates a mapping from positional and keyword arguments to
>>>     parameters.  Raises a ``BindError`` (subclass of ``TypeError``)
>>>     if the passed arguments do not match the signature.
>> Why a dedicated exception class?  TypeError is good enough, and the
>> proliferation of exception classes is a nuisance.
>
> Agree.  Will fix this.

It's not broken.  Within reason, more specific exceptions are better than less specific.  I have always considered it a wart that there was no supported way to programmatically distinguish between (say) len(42) and len("a", "b").  Both raise TypeError.  Why is the second case a type error?  It has nothing to do with the type of either len, "a" or "b".

The introduction of BindError will allow functions to raise a more specific, and less misleading, exception when they are called with the wrong number of arguments, or invalid keywords, etc.

-- 
Steven

From solipsis at pitrou.net  Fri Jun 15 23:51:46 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Fri, 15 Jun 2012 23:51:46 +0200
Subject: [Python-Dev] PEP 362 Third Revision
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> <20120614225337.2b1fe5a9@pitrou.net> <8774FD66-63C9-408F-A11C-C049529ECDA8@gmail.com> <4FDBADA3.10606@pearwood.info>
Message-ID: <20120615235146.53510d8c@pitrou.net>

On Sat, 16 Jun 2012 07:48:19 +1000
Steven D'Aprano wrote:
> The introduction of BindError will allow functions to raise a more specific,
> and less misleading, exception when they are called with the wrong number of
> arguments, or invalid keywords, etc.

If that's what you want, a separate PEP is needed.  Right now we are talking about using it only in the context of PEP 362, where it is not useful over a plain TypeError (bind() only binds, so there is no possible confusion between different "kinds" of TypeErrors).

Regards

Antoine.
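For contrast, the calling pattern with a plain TypeError stays the usual EAFP one; a sketch against the 4th-edition API (where bind() raises TypeError):

    import inspect

    def accepts(func, *args, **kwargs):
        """Return True if func could be called with these arguments."""
        sig = inspect.signature(func)
        try:
            sig.bind(*args, **kwargs)
        except TypeError:
            # bind() only binds, so a plain TypeError is unambiguous here
            return False
        return True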
From lists at cheimes.de  Sat Jun 16 00:05:32 2012
From: lists at cheimes.de (Christian Heimes)
Date: Sat, 16 Jun 2012 00:05:32 +0200
Subject: [Python-Dev] PEP 362: 4th edition
In-Reply-To: <20120615210343.87279250031@webabinitio.net>
References: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com> <20120615210343.87279250031@webabinitio.net>
Message-ID:

Am 15.06.2012 23:03, schrieb R. David Murray:
> The issue isn't "consenting adults", the issue is consistency.
> Without the deepcopy, sometimes what you get back from the
> inspect function is freely modifiable and sometimes it is not.
> That inconsistency is a bad thing.

This must be addressed one way or the other.  Otherwise you will break the isolation of subinterpreters.  Builtin objects, types and methods *must* be immutable because they are shared across subinterpreters.  This topic has been addressed by the PEP.

Proposal: You could store the signature objects for builtin methods in a dict in each PyInterpreterState and use the qualname to reference the signature object.  This ensures full isolation.

Christian

From steve at pearwood.info  Sat Jun 16 00:38:09 2012
From: steve at pearwood.info (Steven D'Aprano)
Date: Sat, 16 Jun 2012 08:38:09 +1000
Subject: [Python-Dev] PEP 362: 4th edition
In-Reply-To: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com>
References: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com>
Message-ID: <4FDBB951.2050004@pearwood.info>

Yury Selivanov wrote:
> Hello,
>
> The new revision of PEP 362 has been posted:
> http://www.python.org/dev/peps/pep-0362/
>
> Open questions:
>
> 1. Should we keep 'Parameter.implemented' or not.  *Please vote*

-0.5

It is easier to add it later if it is needed, than to take it away if not needed.

> 2. Should 'Signature.format()' instead of 7 or so arguments accept
>    just a SignatureFormatter class, that implements all the formatting
>    logic, that is easy to override?

-1

It is easier to pass different arguments to a function than it is to subclass and modify a class.

-- 
Steven

From lists at cheimes.de  Sat Jun 16 00:58:43 2012
From: lists at cheimes.de (Christian Heimes)
Date: Sat, 16 Jun 2012 00:58:43 +0200
Subject: [Python-Dev] PEP 362: 4th edition
In-Reply-To: <5316D6E6-27E0-469F-98E5-43E09235DDEE@gmail.com>
References: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com> <20120615231316.583422a4@pitrou.net> <20120615233001.29e33120@pitrou.net> <5316D6E6-27E0-469F-98E5-43E09235DDEE@gmail.com>
Message-ID: <4FDBBE23.1060301@cheimes.de>

Am 15.06.2012 23:38, schrieb Yury Selivanov:
> Although, Larry added a __signature__ attribute to PyCFunctionObject
> (None by default).  So those who want to manually create signatures
> for their 'C' functions would be able to do that.

As I explained in my other posting: You can't add a mutable attribute to builtin functions and types.  It breaks the isolation between subinterpreters.

Christian

From victor.stinner at gmail.com  Sat Jun 16 01:19:22 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Sat, 16 Jun 2012 01:19:22 +0200
Subject: [Python-Dev] PEP 362: 4th edition
In-Reply-To: <4FDBAB3B.9040304@hastings.org>
References: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com> <20120615231316.583422a4@pitrou.net> <4FDBAB3B.9040304@hastings.org>
Message-ID:

> Right now we have no way to automatically generate signatures for built-in
> functions.  So, as of current trunk, any such signatures would have to be
> built by hand.
>
> If we could somehow produce signature information in C, what then?
> Two possible approaches suggest themselves:
>
> 1. Pre-generate the signatures and cache them in the __signature__ attribute.

This solution has an impact on the memory footprint.

> 2. Add a callback, perhaps named __get_signature__(), which calculates the
>    signature and returns it.

This might be decided before the PEP is accepted, because this is a new attribute not listed in the PEP.  Or would it be possible to call a function when the __signature__ attribute is read?

Victor

From benjamin at python.org  Sat Jun 16 02:52:07 2012
From: benjamin at python.org (Benjamin Peterson)
Date: Fri, 15 Jun 2012 17:52:07 -0700
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To: <4FDB8D77.9080509@hastings.org>
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> <20120614225337.2b1fe5a9@pitrou.net> <4FDA9F3C.2000208@hastings.org> <4FDAA946.9070607@hastings.org> <4FDAC6D2.7020807@hastings.org> <4FDB13A4.8020104@hastings.org> <4FDB7C25.20209@hastings.org> <20120615184609.708A6250031@webabinitio.net> <4FDB8755.3040707@hastings.org> <4FDB8D77.9080509@hastings.org>
Message-ID:

2012/6/15 Larry Hastings :
> Note that I'm genuinely interested in your answer--"is_implemented" appears
> to have a groundswell of anti-support and I rather suspect will be axed.
> Meantime I still need to solve this problem.

How about a list of functions that support it.  Then you could do things like

    "chown" in os.supports_at_variant

-- 
Regards,
Benjamin

From larry at hastings.org  Sat Jun 16 03:17:59 2012
From: larry at hastings.org (Larry Hastings)
Date: Fri, 15 Jun 2012 18:17:59 -0700
Subject: [Python-Dev] VS 11 Express is Metro only.
In-Reply-To: <4FC7A0B6.7050705@v.loewis.de>
References: <20120525003647.Horde.IQhrGKGZi1VPvrf-vtuimuA@webmail.df.eu> <20120525140622.Horde.U3HcfruWis5Pv3W_5yzihNA@webmail.df.eu> <4FC7A0B6.7050705@v.loewis.de>
Message-ID: <4FDBDEC7.6000107@hastings.org>

On 05/31/2012 09:47 AM, "Martin v. Löwis" wrote:
>> I hereby predict that Microsoft will revert this decision, and that
>> VS Express 11 will be able to build CPython.
>>
>> But will it be able to target Windows XP?
>
> I have now tried, and it seems that the chances are really low (unless
> you use the VS 2010 tool chain, in which case you can just as well use
> VS 2010 Express in the first place).

Microsoft is now claiming that it will be able to, using something called "multi-targeting", later this year.

http://blogs.msdn.com/b/vcblog/archive/2012/06/15/10320645.aspx

//arry/

From jimjjewett at gmail.com  Sat Jun 16 05:56:31 2012
From: jimjjewett at gmail.com (Jim J. Jewett)
Date: Fri, 15 Jun 2012 20:56:31 -0700 (PDT)
Subject: [Python-Dev] PEP 362: 4th edition
In-Reply-To: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com>
Message-ID: <4fdc03ef.8759320a.5112.6c67@mx.google.com>

Summary:

    *Every* Parameter attribute is optional, even name.  (Think of
    builtins, even if they aren't automatically supported yet.)
    So go ahead and define some others that are sometimes useful.

    Instead of defining a BoundArguments class, just return a copy
    of the Signature, with "value" attributes added to the Parameters.

    Use subclasses to distinguish the parameter kind.  (Replacing
    most of the is_ methods from the 3rd version.)

    [is_]implemented is important information, but the API isn't
    quite right; even with tweaks, maybe we should wait a version
    before freezing it on the base class.  But I would be happy to
    have Larry create a Signature for the os.* functions, whether
    that means a subclass or just an extra instance attribute.

    I favor passing a class to Signature.format, because so many of
    the formatting arguments would normally change in parallel.
    But my tolerance for nested structures may be unusually high.

I make some more specific suggestions below.

In http://mail.python.org/pipermail/python-dev/2012-June/120305.html Yury Selivanov wrote:

> A Signature object has the following public attributes and methods:
>
> * return_annotation : object
>     The annotation for the return type of the function if specified.
>     If the function has no annotation for its return type, this
>     attribute is not set.

This means users must already be prepared to use hasattr with the Signature as well as the Parameters -- in which case, I don't see any harm in a few extra optional properties.

I would personally prefer to see the name (and qualname) and docstring, but it would make perfect sense to implement these by keeping a weakref to the original callable, and just delegating there unless/until the properties are explicitly changed.  I suspect others will have a use for additional delegated attributes, such as the self of bound methods.

I do agree that __eq__ and __hash__ should depend at most on the parameters (including their order) and the annotation.

> * parameters : OrderedDict
>     An ordered mapping of parameters' names to the corresponding
>     Parameter objects (keyword-only arguments are in the same order
>     as listed in ``code.co_varnames``).

For a specification, that feels a little too tied to the specific implementation.  How about:

    Parameters will be ordered as they are in the function declaration.

or even just:

    Positional parameters will be ordered as they are in the function
    declaration.

because:

    def f(*, a=4, b=5): pass

and:

    def f(*, b=5, a=4): pass

should probably have equal signatures.

Wild thought: Instead of just *having* an OrderedDict of Parameters, should a Signature *be* that OrderedDict (with other attributes)?  That is, should signature(testfn)["foo"] get the foo parameter?

> * bind(\*args, \*\*kwargs) -> BoundArguments
>     Creates a mapping from positional and keyword arguments to
>     parameters.  Raises a ``BindError`` (subclass of ``TypeError``)
>     if the passed arguments do not match the signature.
>
> * bind_partial(\*args, \*\*kwargs) -> BoundArguments
>     Works the same way as ``bind()``, but allows the omission
>     of some required arguments (mimics ``functools.partial``
>     behavior.)

Are those descriptions actually correct?  I would expect the mapping to be from parameters (or parameter names) to values extracted from *args and **kwargs.  And I'm not sure the current patch does even that, since it seems to instead return a non-Mapping object (but with a mapping attribute) that could be used to re-create *args, **kwargs in canonical form.  (Though that canonicalization is valuable for calls; it might even be worth an as_call method.)

I think it should be explicit that this mapping does not include parameters which would be filled by default arguments.  In fact, if you stick with this interface, I would like a 3rd method that does fill out everything.

But I think it would be simpler to just add an optional attribute to each Parameter instance, and let bind fill that in on the copies, so that the return value is also a Signature.  (No need for the BoundArguments class.)  Then the user can decide whether or not to plug in the defaults for missing values.

> * format(...) -> str
>     Formats the Signature object to a string.  Optional arguments allow
>     for custom render functions for parameter names,
>     annotations and default values, along with custom separators.

I think it should state explicitly that by default, the return value will be a string that could be used to declare an equivalent function, if "Signature" were replaced with "def funcname".

There are enough customization parameters that would often be changed together (e.g., to produce HTML output) that it might make sense to use overridable class defaults -- or even to make format a class itself.

I also think it would make sense to delegate formatting the individual parameters to the parameter objects.  Yes, this does mean that the subclasses would be more than marker classes.

> It's possible to test Signatures for equality.  Two signatures
> are equal when they have equal parameters and return annotations.

I would be more explicit about parameter order mattering.  Perhaps:

    It's possible to test Signatures for equality.  Two signatures are
    equal when their parameters are equal, their positional parameters
    appear in the same order, and they have equal return annotations.

> The structure of the Parameter object is:
>
> * name : str
>     The name of the parameter as a string.

If there is no name, as with some builtins, will this be "", None or not set?

(3rd edition)
> * is_keyword_only : bool ...
> * is_args : bool ...
> * is_kwargs : bool ...

(4th edition)
> ... Parameter.POSITIONAL_ONLY ...
> ... Parameter.POSITIONAL_OR_KEYWORD ...
> ... Parameter.KEYWORD_ONLY ...
> ... Parameter.VAR_POSITIONAL ...
> ... Parameter.VAR_KEYWORD ...

This set has already grown, and I can think of others I would like to use.  (Pseudo-parameters, such as a method's self instance, or an auxiliary variable.)  To me, that suggests marker subclasses and isinstance checking.

    class BaseParameter: ...

    # These two really are different
    class ArgsParameter(BaseParameter): ...
    class KwargsParameter(BaseParameter): ...

    class KeywordParameter(BaseParameter): ...
    class PositionalParameter(BaseParameter): ...
    class Parameter(KeywordParameter, PositionalParameter): ...

(I'm not sure that normal parameters should really be the bottom of an inheritance diamond, as opposed to a sibling.)

It may also be worth distinguishing BoundParameter at the class level, but since the only distinction is an extra attribute (value), I'm not sure about that.

Question: These names are getting long.  Was there a reason not to use Sig and Param?

> * implemented : bool

This information is valuable, and should be at least an optional part of a signature (or a specific parameter, if there are no interactions).  But I don't think a boolean is sufficient.  At a minimum, let it return an object such that bool(obj) indicates whether non-default values are ever useful, but obj *could* provide more information.  For example, I would be happy with the following:

    >>> s = signature(open("myfile").seek)
    >>> v = s.parameters["whence"].is_implemented
    >>> v[0]
    True
    >>> v["SEEK_DATA"]
    False

That said, if the decision is to leave is_implemented up to subclasses for now, I won't object.

> Two parameters are equal when all their attributes are equal.

I think that it would be better to specify which attributes matter, and let them be equal so long as those attributes matched.  I'll try to post details on the ticket, but roughly only the attributes specifically mentioned in this PEP should matter.  I'm not sure if positional parameters should also check position, or if that can be left to the Signature.

-jJ

--
If there are still threading problems with my replies, please
email me with details, so that I can try to resolve them.  -jJ
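A sketch of the Signature-with-values alternative Jim describes, written against the draft API discussed in this thread (where ``parameters`` is a plain mutable OrderedDict); the ``value`` attribute is Jim's proposal, not part of the PEP:

    import copy
    import inspect

    def bind_to_signature(func, *args, **kwargs):
        # Return a copy of the Signature whose Parameters carry the
        # bound values, instead of a separate BoundArguments object.
        sig = inspect.signature(func)
        bound = sig.bind(*args, **kwargs)
        new_sig = copy.deepcopy(sig)
        for name, value in bound.arguments.items():
            new_sig.parameters[name].value = value  # proposed attribute
        return new_sig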
From solipsis at pitrou.net  Sat Jun 16 10:23:30 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat, 16 Jun 2012 10:23:30 +0200
Subject: [Python-Dev] cpython: Optimize _PyUnicode_FastCopyCharacters() when maxchar(from) > maxchar(to)
References:
Message-ID: <20120616102330.152e1a04@pitrou.net>

On Sat, 16 Jun 2012 02:29:09 +0200 victor.stinner wrote:
> +    if (from_kind == to_kind) {
> +        if (!PyUnicode_IS_ASCII(from) && PyUnicode_IS_ASCII(to)) {
> +            /* Writing Latin-1 characters into an ASCII string requires to
> +               check that all written characters are pure ASCII */
> +#ifndef Py_DEBUG
> +            if (check_maxchar) {
> +                Py_UCS4 max_char;
> +                max_char = ucs1lib_find_max_char(from_data,
> +                                                 (char*)from_data + how_many);
> +                if (max_char >= 128)
> +                    return -1;
> +            }
> +#else
> +            const Py_UCS4 to_maxchar = PyUnicode_MAX_CHAR_VALUE(to);
> +            Py_UCS4 ch;
> +            Py_ssize_t i;
> +            for (i=0; i < how_many; i++) {
> +                ch = PyUnicode_READ(from_kind, from_data, from_start + i);
> +                assert(ch <= to_maxchar);
> +            }
> +#endif

So you're returning -1 in release mode but you're crashing (assert()) in debug mode?  Why is that?

> +#ifndef Py_DEBUG
> +    if (!check_maxchar) {
[...]

This means the optimizations are not exercised in debug mode?  That sounds like a bad idea.

Regards

Antoine.

From urban.dani+py at gmail.com  Sat Jun 16 10:32:56 2012
From: urban.dani+py at gmail.com (Daniel Urban)
Date: Sat, 16 Jun 2012 10:32:56 +0200
Subject: [Python-Dev] PEP 362: 4th edition
In-Reply-To: <4fdc03ef.8759320a.5112.6c67@mx.google.com>
References: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com> <4fdc03ef.8759320a.5112.6c67@mx.google.com>
Message-ID:

On Sat, Jun 16, 2012 at 5:56 AM, Jim J. Jewett wrote:
> I think it should be explicit that this mapping does not include
> parameters which would be filled by default arguments.  In fact, if
> you stick with this interface, I would like a 3rd method that does
> fill out everything.

+1

Daniel

From urban.dani+py at gmail.com  Sat Jun 16 12:21:34 2012
From: urban.dani+py at gmail.com (Daniel Urban)
Date: Sat, 16 Jun 2012 12:21:34 +0200
Subject: [Python-Dev] PEP 362 implementation issue: C callables
In-Reply-To: <4FDBA82A.8030709@hastings.org>
References: <4FDBA82A.8030709@hastings.org>
Message-ID:

On Fri, Jun 15, 2012 at 11:24 PM, Larry Hastings wrote:
> There are four more candidates I found with the grep but couldn't figure out
> how to instantiate and test.  They have to do with the descriptor protocol,
> aka properties, but the types aren't directly exposed by Python.  They're
> all defined in Objects/descrobject.c.
> The internal class names are:
>
>     method_descriptor
>     classmethod_descriptor
>     wrapper_descriptor
>     method-wrapper (you get one of these out of a "wrapper_descriptor")

'method_descriptor' is apparently used for the methods of built-in (implemented in C) objects:

    >>> set.__dict__['union'].__class__
    <class 'method_descriptor'>

'classmethod_descriptor' is similarly for the classmethods of built-in classes:

    >>> import itertools
    >>> itertools.chain.__dict__['from_iterable'].__class__
    <class 'classmethod_descriptor'>

'wrapper_descriptor' is used, for example, for the operators of built-in types:

    >>> int.__dict__['__add__'].__class__
    <class 'wrapper_descriptor'>

And 'method-wrapper':

    >>> (5).__add__.__class__
    <class 'method-wrapper'>

Daniel

From ncoghlan at gmail.com  Sat Jun 16 16:51:30 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 17 Jun 2012 00:51:30 +1000
Subject: [Python-Dev] PEP 362 Third Revision
In-Reply-To:
References: <895A397B-8380-43B1-9877-ABF27F1B83F7@gmail.com> <20120614225337.2b1fe5a9@pitrou.net> <4FDA9F3C.2000208@hastings.org> <4FDAA946.9070607@hastings.org> <4FDAC6D2.7020807@hastings.org> <4FDB13A4.8020104@hastings.org> <4FDB7C25.20209@hastings.org> <20120615184609.708A6250031@webabinitio.net> <4FDB8755.3040707@hastings.org> <4FDB8D77.9080509@hastings.org>
Message-ID:

On Sat, Jun 16, 2012 at 10:52 AM, Benjamin Peterson wrote:
> 2012/6/15 Larry Hastings :
>> Note that I'm genuinely interested in your answer--"is_implemented" appears
>> to have a groundswell of anti-support and I rather suspect will be axed.
>> Meantime I still need to solve this problem.
>
> How about a list of functions that support it.  Then you could do things like
>
>     "chown" in os.supports_at_variant

Indeed, PEP 362 is *not* the right answer for obtaining a more user friendly interface to sysconfig.

The real answer for this kind of case is to just try whatever you want to do and see if it throws an exception.  Just as LBYL is sometimes unusable due to problems with race conditions, in this case a dedicated LBYL interface should be rejected because it requires double-keying data.

If you try it and it fails, you know it's not supported.  If some separate metadata *claims* it's not supported, it might just be that the metadata is wrong.  We don't have to implement bad ideas just because some people don't like to rely on exceptions - LBYL vs EAFP isn't always just a stylistic choice, there are cases where one or the other can be a genuinely poor concept.

If someone really does want to do LBYL in this case, then finding out the underlying criteria and performing the same check themselves is the *right way to do it*.  They'll get the wrong answer if the criteria ever change, but that's the inevitable risk with LBYL in cases like this.

At most, it may make sense to provide more user friendly interfaces for specifically requested sysconfig checks (perhaps in sysconfig or platform).  For example:

    if platform.provides_openat():
        # Can use dirfd param to os.open()

    if platform.provides_fd_times():
        # Can use fd param to os.utime()

I'm not convinced this needs to be in the standard library, though - we will already be providing an obvious way to do these checks (i.e. catch the exceptions).  It definitely shouldn't be rushed into 3.3.

Cheers,
Nick.

--
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
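For comparison, the EAFP spelling of the same check might look like the sketch below; the exact exception raised for an unsupported dir_fd depends on how issue 14626 lands, and NotImplementedError is assumed here:

    import os

    def dir_fd_usable(dirname='.'):
        # Probe whether os.open() honours dir_fd on this platform.
        dfd = os.open(dirname, os.O_RDONLY)
        try:
            fd = os.open('probe-file', os.O_RDONLY, dir_fd=dfd)
        except NotImplementedError:
            return False        # parameter exists but isn't supported
        except OSError:
            return True         # parameter accepted; the file just wasn't there
        else:
            os.close(fd)
            return True
        finally:
            os.close(dfd)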
Brisbane, Australia From ncoghlan at gmail.com Sat Jun 16 17:27:39 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 17 Jun 2012 01:27:39 +1000 Subject: [Python-Dev] PEP 362: 4th edition In-Reply-To: <4fdc03ef.8759320a.5112.6c67@mx.google.com> References: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com> <4fdc03ef.8759320a.5112.6c67@mx.google.com> Message-ID: On Sat, Jun 16, 2012 at 1:56 PM, Jim J. Jewett wrote: > > Summary: > > ? ?*Every* Parameter attribute is optional, even name. ?(Think of > ? ?builtins, even if they aren't automatically supported yet.) > ? ?So go ahead and define some others that are sometimes useful. No, that's not the right attitude to take when designing a new API. Add only stuff we know is interesting and useful. Call YAGNI on everything else. If we later decided we missed something, then we can add it easily. If something we add up front turns out to be useless (or, worse, an attractive nuisance), then it's a pain to eliminate. For parameters, that list is only: - kind - name (should be given meaningful content, even for POSITIONAL_ONLY parameters) - default (may be missing, since "None" is allowed as a default value) - annotation (may be missing, since "None" is allowed as an annotation) > ? ?Instead of defining a BoundArguments class, just return a copy > ? ?of the Signature, with "value" attributes added to the Parameters. No, the "BoundArguments" class is designed to be easy to feed to a function call as f(*args, **kwds) > ? ?Use subclasses to distinguish the parameter kind. ?(Replacing > ? ?most of the is_ methods from the 3rd version.) Please, no, using subclasses when there is no behavioural change is annoying. A "kind" attribute will handle this just fine. > ? ?I favor passing a class to Signature.format, because so many of > ? ?the formatting arguments would normally change in parallel. > ? ?But my tolerance for nested structures may be unusually high. I'm actually inclined to drop a lot of that complexity altogether - better to provide a simple default implementation, and if people want anything else they can write their own. >> A Signature object has the following public attributes and methods: > >> * return_annotation : object >> ? ?The annotation for the return type of the function if specified. >> ? ?If the function has no annotation for its return type, this >> ? ?attribute is not set. > > This means users must already be prepared to use hasattr with the > Signature as well as the Parameters -- in which case, I don't see any > harm in a few extra optional properties. No, this gets the reasoning wrong. The optional properties are optional solely because "None" and "not present" don't mean the same thing - the attributes potentially being missing represents the fact that annotations and default values may be omitted altogether. > I would personally prefer to see the name (and qualname) and docstring, > but it would make perfect sense to implement these by keeping a weakref > to the original callable, and just delegating there unless/until the > properties are explicitly changed. ?I suspect others will have a use > for additional delegated attributes, such as the self of boundmethods. It is expected that higher level interfaces will often compose the signature object with more information from the callable. That is already well supported today, and is not the role of the signature object. 
The signature object is being added purely to provide a standard way to describe how a callable binds the supplied arguments to the expected parameters, that's all. > I do agree that __eq__ and __hash__ should depend at most on the > parameters (including their order) and the annotation. > >> * parameters : OrderedDict >> ? ? An ordered mapping of parameters' names to the corresponding >> ? ? Parameter objects (keyword-only arguments are in the same order >> ? ? as listed in ``code.co_varnames``). > > For a specification, that feels a little too tied to the specific > implementation. ?How about: > > ? ? Parameters will be ordered as they are in the function declaration. > > or even just: > > ? ? Positional parameters will be ordered as they are in the function > ? ? declaration. > > because: > ? ?def f(*, a=4, b=5): pass > > and: > ? ?def f(*, b=5, a=4): pass > > should probably have equal signatures. > > > Wild thought: ?Instead of just *having* an OrderedDict of Parameters, > should a Signature *be* that OrderedDict (with other attributes)? > That is, should signature(testfn)["foo"] get the foo parameter? No. While the sequence of parameters is obviously the most important part of the signature, it's still much clearer to expose it as a distinct attribute. If return annotations didn't exist, you may be able to make a stronger case. > I think it should state explicitly that by default, the return value > will be a string that could be used to declare an equivalent function, > if "Signature" were replaced with "def funcname". > > There are enough customization parameters that would often be changed > together (e.g., to produce HTML output) that it might make sense to use > overridable class defaults -- or even to make format a class itself. > > I also think it would make sense to delegate formatting the individual > parameters to the parameter objects. ?Yes, this does mean that the > subclasses would be more than markers classes. I'd like to see support for customising the formatted output dropped from the PEP entirely. We can add that later after seeing how people use the class, there's no need to try to guess in advance. >> The structure of the Parameter object is: > >> * name : str >> ? ? The name of the parameter as a string. > > If there is no name, as with some builtins, will this be "", None or > not set? Positional parameters should still be given a meaningful name (preferably matching the name used in their docstring and prose documentation). > > (3rd edition) >> * is_keyword_only : bool ... >> * is_args : bool ... >> * is_kwargs : bool ... > > (4th edition) >> ... Parameter.POSITIONAL_ONLY ... >> ... Parameter.POSITIONAL_OR_KEYWORD ... >> ... Parameter.KEYWORD_ONLY ... >> ... Parameter.VAR_POSITIONAL ... >> ... Parameter.VAR_KEYWORD ... > > This set has already grown, and I can think of others I would like to > use. ?(Pseudo-parameters, such as a method's self instance, or an > auxiliary variable.) No. This is the full set of binding behaviours. "self" is just an ordinary "POSITIONAL_OR_KEYWORD" argument (or POSITIONAL_ONLY, in some builtin cases). > > ? ?class BaseParameter: ... > > ? ?# These two really are different > ? ?class ArgsParameter(BaseParameter): ... > ? ?class KwargsParameter(BaseParameter): ... > > ? ?class KeywordParameter(BaseParameter): ... > ? ?class PositionalParameter(BaseParameter): ... > ? ?class Parameter(KeywordParameter, PositionalParameter): ... 
> > (I'm not sure that normal parameters should really be the bottom of an > inheritance diamond, as opposed to a sibling.) Completely unnecessary complexity. "kind" is not a free-form list, it's the full set of available binding behaviours when mapping arguments to parameters. > Question: ?These names are getting long. ?Was there a reason not to use > Sig and Param? Yes, because the full names are much clearer, and there's no reason people should be writing them out very often. >> * implemented : bool > > This information is valuable, and should be at least an optional part > of a signature (or a specific parameter, if there are no interactions). > But I don't think a boolean is sufficient. Users are free to subclass Signature (and Parameter) and add whatever additional attributes and behaviour they like. inspect.signature(obj) will faithfully copy whatever they place in __signature__. >> Two parameters are equal when all their attributes are equal. > > I think that it would be better to specify which attributes matter, > and let them be equal so long as those attributes matched. ?I'll try > to post details on the ticket, but roughly only the attributes > specifically mentioned in this PEP should matter. Agreed, since it should be made clear that subclasses will need to override __eq__ if they want it to take additional attributes into account. >?I'm not sure > if positional parameters should also check position, or if that > can be left to the Signature. Positional parameters don't know their relative position, so it *has* to be left to the signature. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From yselivanov.ml at gmail.com Sun Jun 17 00:30:29 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Sat, 16 Jun 2012 18:30:29 -0400 Subject: [Python-Dev] PEP 362: 4th edition In-Reply-To: <4fdc03ef.8759320a.5112.6c67@mx.google.com> References: <4fdc03ef.8759320a.5112.6c67@mx.google.com> Message-ID: Jim, On 2012-06-15, at 11:56 PM, Jim J. Jewett wrote: > because: > def f(*, a=4, b=5): pass > > and: > def f(*, b=5, a=4): pass > > should probably have equal signatures. That's a very good catch -- I'll fix the implementation. >> * bind(\*args, \*\*kwargs) -> BoundArguments >> Creates a mapping from positional and keyword arguments to >> parameters. Raises a ``BindError`` (subclass of ``TypeError``) >> if the passed arguments do not match the signature. >> * bind_partial(\*args, \*\*kwargs) -> BoundArguments >> Works the same way as ``bind()``, but allows the omission >> of some required arguments (mimics ``functools.partial`` >> behavior.) > > Are those descriptions actually correct? > > I would expect the mapping to be from parameters (or parameter names) > to values extracted from *args and **kwargs. > > And I'm not sure the current patch does even that, since it seems to > instead return a non-Mapping object (but with a mapping attribute) > that could be used to re-create *args, **kwargs in canonical form. > (Though that canonicalization is valuable for calls; it might even > be worth an as_call method.) > > > I think it should be explicit that this mapping does not include > parameters which would be filled by default arguments. In fact, if > you stick with this interface, I would like a 3rd method that does > fill out everything. You're right, the fact that the defaults are left out should be manifested in the PEP. I'm not sure that we need a separate method for defaults though. 
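Filling them in by hand is easy enough. A rough sketch against the names
in the current draft ('parameters', 'default' and
'BoundArguments.arguments'; 'fill_in_defaults' itself is just an
illustrative name, not a proposed API):

    def fill_in_defaults(sig, bound):
        # copy a parameter's default into the bound mapping whenever
        # the caller didn't supply a value and a default exists (the
        # 'default' attribute may be absent entirely, per the draft)
        for name, param in sig.parameters.items():
            if name not in bound.arguments and hasattr(param, 'default'):
                bound.arguments[name] = param.default
        return bound
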
It's just a matter of 3-4 lines of code to iterate through parameters and add defaults to 'BoundArguments.arguments'. Let's keep the API clear. > But I think it would be simpler to just add an optional attribute > to each Parameter instance, and let bind fill that in on the copies, > so that the return value is also a Signature. (No need for the > BoundArguments class.) Then the user can decide whether or not to > plug in the defaults for missing values. Too complicated. And doesn't make the API easier to use. >> It's possible to test Signatures for equality. Two signatures >> are equal when they have equal parameters and return annotations. > > I would be more explicit about parameter order mattering. Perhaps: > > It's possible to test Signatures for equality. Two signatures are > equal when their parameters are equal, their positional parameters > appear in the same order, and they have equal return annotations. OK. Thanks, - Yury From yselivanov.ml at gmail.com Sun Jun 17 00:40:02 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Sat, 16 Jun 2012 18:40:02 -0400 Subject: [Python-Dev] PEP 362: 4th edition In-Reply-To: References: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com> <4fdc03ef.8759320a.5112.6c67@mx.google.com> Message-ID: On 2012-06-16, at 11:27 AM, Nick Coghlan wrote: > On Sat, Jun 16, 2012 at 1:56 PM, Jim J. Jewett wrote: >> >> Summary: >> >> *Every* Parameter attribute is optional, even name. (Think of >> builtins, even if they aren't automatically supported yet.) >> So go ahead and define some others that are sometimes useful. > > No, that's not the right attitude to take when designing a new API. > Add only stuff we know is interesting and useful. Call YAGNI on > everything else. If we later decided we missed something, then we can > add it easily. If something we add up front turns out to be useless > (or, worse, an attractive nuisance), then it's a pain to eliminate. > > For parameters, that list is only: > > - kind > - name (should be given meaningful content, even for POSITIONAL_ONLY parameters) +1. >>> A Signature object has the following public attributes and methods: >> >>> * return_annotation : object >>> The annotation for the return type of the function if specified. >>> If the function has no annotation for its return type, this >>> attribute is not set. >> >> This means users must already be prepared to use hasattr with the >> Signature as well as the Parameters -- in which case, I don't see any >> harm in a few extra optional properties. > > No, this gets the reasoning wrong. The optional properties are > optional solely because "None" and "not present" don't mean the same > thing - the attributes potentially being missing represents the fact > that annotations and default values may be omitted altogether. Exactly. And I'd stay away from adding back 'has_annotation' and 'has_default' attributes. It's easy enough to call 'hasattr' or do try-catch-else. >> I would personally prefer to see the name (and qualname) and docstring, >> but it would make perfect sense to implement these by keeping a weakref >> to the original callable, and just delegating there unless/until the >> properties are explicitly changed. I suspect others will have a use >> for additional delegated attributes, such as the self of boundmethods. > > It is expected that higher level interfaces will often compose the > signature object with more information from the callable. That is > already well supported today, and is not the role of the signature > object. 
The signature object is being added purely to provide a
> standard way to describe how a callable binds the supplied arguments
> to the expected parameters, that's all.
>
>> I do agree that __eq__ and __hash__ should depend at most on the
>> parameters (including their order) and the annotation.

Actually, Signature and Parameter are currently non-hashable (they are
mutable). I'm not sure if it was the right decision.

>> I think it should state explicitly that by default, the return value
>> will be a string that could be used to declare an equivalent function,
>> if "Signature" were replaced with "def funcname".
>>
>> There are enough customization parameters that would often be changed
>> together (e.g., to produce HTML output) that it might make sense to use
>> overridable class defaults -- or even to make format a class itself.
>>
>> I also think it would make sense to delegate formatting the individual
>> parameters to the parameter objects. Yes, this does mean that the
>> subclasses would be more than marker classes.
>
> I'd like to see support for customising the formatted output dropped
> from the PEP entirely. We can add that later after seeing how people
> use the class, there's no need to try to guess in advance.

I'd be OK with that, as I don't like the current 'format()' design
either. But we should keep Signature.__str__.

>>> The structure of the Parameter object is:
>>
>>> * name : str
>>>     The name of the parameter as a string.
>>
>> If there is no name, as with some builtins, will this be "", None or
>> not set?
>
> Positional parameters should still be given a meaningful name
> (preferably matching the name used in their docstring and prose
> documentation).

Again, +1 on the required name for positional-only args.

- Yury

From justin.venus at gmail.com  Sat Jun 16 23:57:15 2012
From: justin.venus at gmail.com (Justin Venus)
Date: Sat, 16 Jun 2012 16:57:15 -0500
Subject: [Python-Dev] Anyone interested in a Solaris11 Build Slave?
Message-ID:

I have an idle x86 Solaris 11 workstation (with SolarisStudio12.3) that
is ready to be a build slave, it just needs credentials to participate.

Thank you,

--
Justin Venus
justin.venus at gmail.com

From greg at krypto.org  Sun Jun 17 02:04:17 2012
From: greg at krypto.org (Gregory P. Smith)
Date: Sat, 16 Jun 2012 17:04:17 -0700
Subject: [Python-Dev] deprecating .pyo and -O
In-Reply-To: <4FDA5044.3060603@v.loewis.de>
References: <20120613175810.D7A032500AC@webabinitio.net>
	<20120613182024.GE4682@unaka.lan>
	<20120613204650.09ccbe92@pitrou.net>
	<20120613191355.C82AD25009E@webabinitio.net>
	<4FD8EBD7.7090500@stoneleaf.us>
	<20120614122524.01a53741@pitrou.net>
	<20120614141454.07ce5feb@pitrou.net>
	<4FDA5044.3060603@v.loewis.de>
Message-ID:

On Thu, Jun 14, 2012 at 1:57 PM, "Martin v. Löwis" wrote:

> > I don't really see the point. In my experience there is no benefit to
> > removing assert statements in production mode. This is a C-specific
> > notion that doesn't really map very well to Python code. Do other
> > high-level languages have similar functionality?
>
> It's not at all C specific. C# also has it:
>
> http://msdn.microsoft.com/en-us/library/ttcc4x86(v=vs.80).aspx
>
> Java makes it a VM option (rather than a compiler option), but it's
> still a flag to the VM (-enableassertions):
>
> http://docs.oracle.com/javase/1.4.2/docs/tooldocs/windows/java.html
>
> Delphi also has assertions that can be disabled at compile time.
>
It may be a commonly supported feature but that doesn't mean it is a good
idea. :)

It seems to me that assertion-laden code where the assertions are removed
before shipping it or running it in production is largely a concept from
the pre-unit-testing era. One big issue with them in Python is that
enabling or disabling them is VM-global rather than controlled on a per
module/library basis. That isn't true in C/C++, where you can control it
on a per-file basis at compile time (NDEBUG).

As a developer I agree that it is very convenient to toss asserts into
code while you are iterating on writing it. But I regularly have people
remove assert statements during code reviews at work and, if the
condition is important, replace them with actual condition checks that
handle the problem or raise an appropriate module-specific exception,
documenting that as part of their API. At this point, I wish Python just
didn't support assert as part of the language, because it causes fewer
headaches in the end if code is just written without them to begin with.
Too late now.

In agreement with others: docstring memory consumption is a big deal;
some large Python apps at work strip them or use -O (I don't remember
which technique they're using today) before deployment. It does seem like
something we should provide a standard way of doing. Docstring data is
not needed within non-interactive code.

-gps
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From tjreedy at udel.edu  Sun Jun 17 02:19:54 2012
From: tjreedy at udel.edu (Terry Reedy)
Date: Sat, 16 Jun 2012 20:19:54 -0400
Subject: [Python-Dev] PEP 362: 4th edition
In-Reply-To:
References: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com>
	<4fdc03ef.8759320a.5112.6c67@mx.google.com>
Message-ID:

On 6/16/2012 6:40 PM, Yury Selivanov wrote:
> Actually, Signature and Parameter are currently non-hashable (they are
> mutable). I'm not sure if it was the right decision.

If this is added for 3.3, I think it would be a candidate for
'provisional' (i.e., API subject to change) status. (That is not to
suggest not *trying* to get it right the first time ;-).

--
Terry Jan Reedy

From eliben at gmail.com  Sun Jun 17 12:20:32 2012
From: eliben at gmail.com (Eli Bendersky)
Date: Sun, 17 Jun 2012 13:20:32 +0300
Subject: [Python-Dev] [Python-checkins] cpython: Elaborate that sizeof only
	accounts for the object itself.
In-Reply-To:
References:
Message-ID:

On Sun, Jun 17, 2012 at 11:42 AM, martin.v.loewis <
python-checkins at python.org> wrote:

> http://hg.python.org/cpython/rev/cddaf96c8149
> changeset:   77484:cddaf96c8149
> parent:      77482:1f6c23ed8218
> user:        Martin v. Löwis
> date:        Sun Jun 17 10:40:16 2012 +0200
> summary:
>   Elaborate that sizeof only accounts for the object itself.
>
> files:
>   Doc/library/sys.rst |  3 +++
>   1 files changed, 3 insertions(+), 0 deletions(-)
>
>
> diff --git a/Doc/library/sys.rst b/Doc/library/sys.rst
> --- a/Doc/library/sys.rst
> +++ b/Doc/library/sys.rst
> @@ -441,6 +441,9 @@
>     does not have to hold true for third-party extensions as it is
> implementation
>     specific.
>
> +   Only the memory consumption directly attributed to the object is
> +   accounted for, not the memory consumption of objects it refers to.
> +
>     If given, *default* will be returned if the object does not provide
> means to
>     retrieve the size. Otherwise a :exc:`TypeError` will be raised.
>

Great, thanks.
Eli
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From martin at v.loewis.de  Sun Jun 17 12:37:55 2012
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Sun, 17 Jun 2012 12:37:55 +0200
Subject: [Python-Dev] VS 11 Express is Metro only.
In-Reply-To: <4FDBDEC7.6000107@hastings.org>
References: <20120525003647.Horde.IQhrGKGZi1VPvrf-vtuimuA@webmail.df.eu>
	<20120525140622.Horde.U3HcfruWis5Pv3W_5yzihNA@webmail.df.eu>
	<4FC7A0B6.7050705@v.loewis.de>
	<4FDBDEC7.6000107@hastings.org>
Message-ID: <4FDDB383.2000001@v.loewis.de>

> Microsoft is now claiming that it will be able to target Windows XP
> using something called "multi-targeting", "later this year".
>
> http://blogs.msdn.com/b/vcblog/archive/2012/06/15/10320645.aspx

Interesting. Unlike the re-addition of desktop app support for Express,
which was a purely political decision, this change likely involves
significant code changes to the CRT, in particular if they aim for a
single CRT that works on XP but uses newer features on Vista+.

On the downside for CPython, this probably also means that they will
not extend the life of VS 2010 just to support XP, as they can tell
people to switch to VS 2012 and still support XP. So VS 2010 will
probably expire on 07/14/2015 for mainstream support as planned today.

So for 3.4, we have to ask whether to switch to VS 2012 or stay with
VS 2010, and for 3.5, we have to ask whether to switch to VS 2014
(assuming that's when the next release is made).

Regards,
Martin

From lists at cheimes.de  Sun Jun 17 13:45:22 2012
From: lists at cheimes.de (Christian Heimes)
Date: Sun, 17 Jun 2012 13:45:22 +0200
Subject: [Python-Dev] Raw string syntax inconsistency
Message-ID:

Hello,

the topic came up on the python-users list today. The raw string syntax
has a minor inconsistency. The ru"" notation is a syntax error although
we support rb"". Neither rb"" nor ru"" are supported on Python 2.7.

Python 3.3:

  works: r"", ur"", br"", rb""
  syntax error: ru""

Python 2.7:

  works: r"", ur"", br""
  syntax error: ru"", rb""

The ru"" notation isn't necessary for Python 2 compatibility but it's
still an inconsistency. The docs [1] also state that 'r' is a prefix,
not a suffix. On the other hand the lexical definition doesn't mention
the u"" notation yet.

Christian

[1] http://docs.python.org/py3k/reference/lexical_analysis.html#string-and-bytes-literals

From ncoghlan at gmail.com  Sun Jun 17 14:26:01 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 17 Jun 2012 22:26:01 +1000
Subject: [Python-Dev] Raw string syntax inconsistency
In-Reply-To:
References:
Message-ID:

On Sun, Jun 17, 2012 at 9:45 PM, Christian Heimes wrote:
> Hello,
>
> the topic came up on the python-users list today. The raw string syntax
> has a minor inconsistency. The ru"" notation is a syntax error although
> we support rb"". Neither rb"" nor ru"" are supported on Python 2.7.
>
> Python 3.3:
>
>   works: r"", ur"", br"", rb""
>   syntax error: ru""
>
> Python 2.7:
>
>   works: r"", ur"", br""
>   syntax error: ru"", rb""
>
> The ru"" notation isn't necessary for Python 2 compatibility but it's
> still an inconsistency. The docs [1] also state that 'r' is a prefix,
> not a suffix. On the other hand the lexical definition doesn't mention
> the u"" notation yet.
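As a quick aside, the matrix above is easy to re-check mechanically on
any interpreter -- a small sketch that runs unchanged on 2.7 and 3.x
(nothing here is invented; it only uses compile() and print()):

    for prefix in ('r', 'u', 'b', 'ur', 'ru', 'br', 'rb'):
        try:
            compile(prefix + "''", '<prefix test>', 'eval')
            print(prefix + ': ok')
        except SyntaxError:
            print(prefix + ': syntax error')
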
I suggest we drop the "raw Unicode" syntax altogether for 3.3, as its
current implementation in 3.x doesn't match 2.x, and the 2.x "not really
raw" mechanism only made any sense because the language support for
embedding Unicode characters directly in string literals wasn't as
robust as it is in 3.x (Terry Reedy pointed out this problem a while
back, but I failed to follow up on it at the time).

$ python
Python 2.7.3 (default, Apr 30 2012, 21:18:11)
[GCC 4.7.0 20120416 (Red Hat 4.7.0-2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> print(ur'\u03B3')
γ

$ ./python
Python 3.3.0a4+ (default:cfbf6aa5c9e3+, Jun 17 2012, 15:25:45)
[GCC 4.7.0 20120507 (Red Hat 4.7.0-5)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> ur'\u03B3'
'\\u03B3'
[73691 refs]
>>> r'\u03B3'
'\\u03B3'
[73691 refs]
>>> '\u03B3'
'γ'

Better to have a noisy conversion error than silently risking producing
different output.

So, while PEP 414 will allow u"" to run unmodified, ur"" will still
need to be changed to something else, because that partially escaped
behaviour isn't available in 3.x and we don't want to reintroduce it.

I've created http://bugs.python.org/issue15096 to track this reversion.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From nadeem.vawda at gmail.com  Sun Jun 17 16:33:27 2012
From: nadeem.vawda at gmail.com (Nadeem Vawda)
Date: Sun, 17 Jun 2012 16:33:27 +0200
Subject: [Python-Dev] [Python-checkins] cpython: Improve an internal
	ipaddress test, add a comment explaining why treating
In-Reply-To:
References:
Message-ID:

On Sun, Jun 17, 2012 at 8:33 AM, nick.coghlan wrote:
> +    @property
> +    def version(self):
> +        msg = '%200s has no version specified' % (type(self),)
> +        raise NotImplementedError(msg)
> +

Shouldn't that be "%.200s", rather than "%200s"?

From martin at v.loewis.de  Sun Jun 17 16:59:00 2012
From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=)
Date: Sun, 17 Jun 2012 16:59:00 +0200
Subject: [Python-Dev] Raw string syntax inconsistency
In-Reply-To:
References:
Message-ID: <4FDDF0B4.3020805@v.loewis.de>

> So, while PEP 414 will allow u"" to run unmodified, ur"" will still
> need to be changed to something else, because that partially escaped
> behaviour isn't available in 3.x and we don't want to reintroduce it.

Given that the PEP currently explicitly supports ur, I think the
reversal of the reversal will need some discussion in the PEP.

(this reminds me of Germany's path wrt. nuclear power - where the
previous government decided to pull out from nuclear power (der
Ausstieg), the current government reverted that (Ausstieg vom
Ausstieg), and then, after the Fukushima accident, decided to revert
that decision (der Ausstieg vom Ausstieg vom Ausstieg aus der
Atomenergie)).

Regards,
Martin

From tjreedy at udel.edu  Sun Jun 17 19:54:14 2012
From: tjreedy at udel.edu (Terry Reedy)
Date: Sun, 17 Jun 2012 13:54:14 -0400
Subject: [Python-Dev] Raw string syntax inconsistency
In-Reply-To: <4FDDF0B4.3020805@v.loewis.de>
References: <4FDDF0B4.3020805@v.loewis.de>
Message-ID:

On 6/17/2012 10:59 AM, "Martin v. Löwis" wrote:
>> So, while PEP 414 will allow u"" to run unmodified, ur"" will still
>> need to be changed to something else, because that partially escaped
>> behaviour isn't available in 3.x and we don't want to reintroduce it.
> > Given that the PEP currently explicitly supports ur, I think the > reversal of the reversal will need some discussion in the PEP. Definitely. The current version of the PEP is contradictory. "Combination of the unicode prefix with the raw string prefix will also be supported, just as it was in Python 2. No changes are proposed to Python 3's actual Unicode handling, only to the acceptable forms for string literals." Because there is an (unintuitive and obviously forgettable) interaction effect between 'u' and 'r' in 2.7, truly supporting 'ur', *just as it was in Python 2*, means changing "Python 3's actual Unicode handling". The premise of the discussion of adding 'u', and of Guido's acceptance, was that "it's about as harmless as they come". I do not remember any discussion of 'ur' and what it really means in 2.x, and that supporting it meant adding back 2.x's interaction effect. Indeed, Nick's version goes on to say "This PEP was originally written by Armin Ronacher, and Guido's approval was given based on that version." Armin's original version (and subsequent edit) only proposed adding 'u' (and 'U') and made no mention of 'ur'. Nick's seemingly innocuous addition of also adding 'ur' came after Guido's approval, and as discovered, is not so innocuous. I do not think he needs to discuss adding and deleting support, but merely state that 'ur' support is not added because 'ur' has a special meaning that would require changing literal handling. The sentence about supporting 'ur' could be negated and moved after the sentence about not changing Unicode handling. A possibility: "Combination of the unicode prefix with the raw string prefix will not be supported because in Python 2, the combination 'ur' has a special meaning that would require changing the handling of unicode literals" -- Terry Jan Reedy From ncoghlan at gmail.com Sun Jun 17 22:11:27 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 18 Jun 2012 06:11:27 +1000 Subject: [Python-Dev] Raw string syntax inconsistency In-Reply-To: References: <4FDDF0B4.3020805@v.loewis.de> Message-ID: On Mon, Jun 18, 2012 at 3:54 AM, Terry Reedy wrote: > The premise of the discussion of adding 'u', and of Guido's acceptance, was > that "it's about as harmless as they come". I do not remember any discussion > of 'ur' and what it really means in 2.x, and that supporting it meant adding > back 2.x's interaction effect. Indeed, Nick's version goes on to say "This > PEP was originally written by Armin Ronacher, and Guido's approval was given > based on that version." Armin's original version (and subsequent edit) only > proposed adding 'u' (and 'U') and made no mention of 'ur'. Nick's seemingly > innocuous addition of also adding 'ur' came after Guido's approval, and as > discovered, is not so innocuous. Right, that matches my recollection as well - we (or least I) thought mapping "ur" to the Python 3 "r" prefix was sufficient, but it turns out doing so means there are some 2.x string literals that will silently behave differently in 3.x. Martin's right that that part of the PEP should definitely be amended (along with the relevant section in What's New) > I do not think he needs to discuss adding and deleting support, but merely > state that 'ur' support is not added because 'ur' has a special meaning that > would require changing literal handling. The sentence about supporting 'ur' > could be negated and moved after the sentence about not changing Unicode > handling. 
A possibility: > > "Combination of the unicode prefix with the raw string prefix will not be > supported because in Python 2, the combination 'ur' has a special meaning > that would require changing the handling of unicode literals" In addition to changing the proposal section to only cover "u" and "U", I'll actually add a new subsection along the lines of the following: Exclusion of Raw Unicode Strings ------------------------------------------------- Python 2.x includes a concept of "raw Unicode" strings. These are partially raw string literals that still support the "\u" and "\U" escape codes for Unicode character entry, but otherwise treat "\" as a literal backslash character. As 3.x has no such concept of a partially raw string literal, explicit raw Unicode literals are still not supported. Such literals in Python 2 code will need to be converted to ordinary Unicode literals for forward compatibility with Python 3. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From guido at python.org Sun Jun 17 22:41:35 2012 From: guido at python.org (Guido van Rossum) Date: Sun, 17 Jun 2012 13:41:35 -0700 Subject: [Python-Dev] Raw string syntax inconsistency In-Reply-To: References: <4FDDF0B4.3020805@v.loewis.de> Message-ID: Would it make sense to detect and reject these in 3.3 if the 2.7 syntax is used? --Guido van Rossum (sent from Android phone) On Jun 17, 2012 1:13 PM, "Nick Coghlan" wrote: > On Mon, Jun 18, 2012 at 3:54 AM, Terry Reedy wrote: > > The premise of the discussion of adding 'u', and of Guido's acceptance, > was > > that "it's about as harmless as they come". I do not remember any > discussion > > of 'ur' and what it really means in 2.x, and that supporting it meant > adding > > back 2.x's interaction effect. Indeed, Nick's version goes on to say > "This > > PEP was originally written by Armin Ronacher, and Guido's approval was > given > > based on that version." Armin's original version (and subsequent edit) > only > > proposed adding 'u' (and 'U') and made no mention of 'ur'. Nick's > seemingly > > innocuous addition of also adding 'ur' came after Guido's approval, and > as > > discovered, is not so innocuous. > > Right, that matches my recollection as well - we (or least I) thought > mapping "ur" to the Python 3 "r" prefix was sufficient, but it turns > out doing so means there are some 2.x string literals that will > silently behave differently in 3.x. > > Martin's right that that part of the PEP should definitely be amended > (along with the relevant section in What's New) > > > I do not think he needs to discuss adding and deleting support, but > merely > > state that 'ur' support is not added because 'ur' has a special meaning > that > > would require changing literal handling. The sentence about supporting > 'ur' > > could be negated and moved after the sentence about not changing Unicode > > handling. A possibility: > > > > "Combination of the unicode prefix with the raw string prefix will not be > > supported because in Python 2, the combination 'ur' has a special meaning > > that would require changing the handling of unicode literals" > > In addition to changing the proposal section to only cover "u" and > "U", I'll actually add a new subsection along the lines of the > following: > > Exclusion of Raw Unicode Strings > ------------------------------------------------- > > Python 2.x includes a concept of "raw Unicode" strings. 
These are
> partially raw string literals that still support the "\u" and "\U"
> escape codes for Unicode character entry, but otherwise treat "\" as a
> literal backslash character. As 3.x has no such concept of a partially
> raw string literal, explicit raw Unicode literals are still not
> supported. Such literals in Python 2 code will need to be converted to
> ordinary Unicode literals for forward compatibility with Python 3.
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/guido%40python.org
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ncoghlan at gmail.com  Mon Jun 18 01:55:13 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 18 Jun 2012 09:55:13 +1000
Subject: [Python-Dev] Raw string syntax inconsistency
In-Reply-To:
References: <4FDDF0B4.3020805@v.loewis.de>
Message-ID:

On Mon, Jun 18, 2012 at 6:41 AM, Guido van Rossum wrote:
> Would it make sense to detect and reject these in 3.3 if the 2.7 syntax is
> used?

Possibly - I'm trying not to actually *change* any of the internals of
the string literal processing, though. (If I recall the way we
implemented the change correctly, by the time we get to processing the
string contents, we've forgotten which specific prefix was used.)

However, this question did remind me of another detail I wanted to
check after realising this discrepancy existed: it turns out this
semantic inconsistency already arises if you use "from __future__
import unicode_literals" to get supposedly "Python 3 style" string
literals in 2.x

Python 2.7.3 (default, May 29 2012, 14:54:22)
>>> from __future__ import unicode_literals
>>> print(r"\u03b3")
γ
>>> print("\u03b3")
γ

Python 3.2.1 (default, Jul 11 2011, 18:54:42)
>>> print(r"\u03b3")
\u03b3
>>> print("\u03b3")
γ

So, perhaps the answer is to leave this as is, and try to make 2to3
smart enough to detect such escapes and replace them with their
properly encoded (according to the source code encoding) Unicode
equivalent? After all, that's already the way to include such
characters in a forward compatible way when using the future import:

Python 2.7.3 (default, May 29 2012, 14:54:22)
>>> from __future__ import unicode_literals
>>> print("γ")
γ
>>> print(r"γ\n")
γ\n

Python 3.2.1 (default, Jul 11 2011, 18:54:42)
>>> print("γ")
γ
>>> print(r"γ\n")
γ\n

So, rather than going ahead with reverting "ur" support as I first
suggested (since it turns out that's not a *new* problem, but just a
different way of spelling an *existing* problem), how about I do the
following:

1. Add a note to PEP 414 and the Py3k porting guide regarding the
discrepancy in escaping semantics for raw Unicode strings between 2.x
and 3.x
2. Reject the tracker issue for reverting the ur support (the semantic
problem already exists, and any solution we come up with for
__future__.unicode_literals should handle the ur prefix as well)
3. Create a new feature request for 2to3 to see if it can
automatically handle the problem of translating "\u" and "\U" escapes
into properly encoded Unicode characters

The scope of the problem is really quite small: you have to be using a
raw Unicode string in 2.x (either via the string prefix, or the future
import) *and* using a "\u" or "\U" escape within that string.

Regards,
Nick.
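P.S. The escape expansion in point 3 only needs to touch "\u" and "\U"
sequences and leave every other backslash alone. As a sketch of the
core transformation (the helper name is invented; this is not part of
any existing 2to3 fixer):

    import re

    _UESCAPE = re.compile(r'\\u([0-9a-fA-F]{4})|\\U([0-9a-fA-F]{8})')

    def expand_unicode_escapes(body):
        # replace each \uXXXX or \UXXXXXXXX escape with the character
        # it denotes, leaving all other backslashes untouched
        return _UESCAPE.sub(
            lambda m: chr(int(m.group(m.lastindex), 16)), body)

A real fixer would also have to respect the 2.x rule that only an
odd-numbered backslash starts an escape, so treat the above as a
starting point only.
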
> > --Guido van Rossum (sent from Android phone) > > On Jun 17, 2012 1:13 PM, "Nick Coghlan" wrote: >> >> On Mon, Jun 18, 2012 at 3:54 AM, Terry Reedy wrote: >> > The premise of the discussion of adding 'u', and of Guido's acceptance, >> > was >> > that "it's about as harmless as they come". I do not remember any >> > discussion >> > of 'ur' and what it really means in 2.x, and that supporting it meant >> > adding >> > back 2.x's interaction effect. Indeed, Nick's version goes on to say >> > "This >> > PEP was originally written by Armin Ronacher, and Guido's approval was >> > given >> > based on that version." Armin's original version (and subsequent edit) >> > only >> > proposed adding 'u' (and 'U') and made no mention of 'ur'. Nick's >> > seemingly >> > innocuous addition of also adding 'ur' came after Guido's approval, and >> > as >> > discovered, is not so innocuous. >> >> Right, that matches my recollection as well - we (or least I) thought >> mapping "ur" to the Python 3 "r" prefix was sufficient, but it turns >> out doing so means there are some 2.x string literals that will >> silently behave differently in 3.x. >> >> Martin's right that that part of the PEP should definitely be amended >> (along with the relevant section in What's New) >> >> > I do not think he needs to discuss adding and deleting support, but >> > merely >> > state that 'ur' support is not added because 'ur' has a special meaning >> > that >> > would require changing literal handling. The sentence about supporting >> > 'ur' >> > could be negated and moved after the sentence about not changing Unicode >> > handling. A possibility: >> > >> > "Combination of the unicode prefix with the raw string prefix will not >> > be >> > supported because in Python 2, the combination 'ur' has a special >> > meaning >> > that would require changing the handling of unicode literals" >> >> In addition to changing the proposal section to only cover "u" and >> "U", I'll actually add a new subsection along the lines of the >> following: >> >> Exclusion of Raw Unicode Strings >> ------------------------------------------------- >> >> Python 2.x includes a concept of "raw Unicode" strings. These are >> partially raw string literals that still support the "\u" and "\U" >> escape codes for Unicode character entry, but otherwise treat "\" as a >> literal backslash character. As 3.x has no such concept of a partially >> raw string literal, explicit raw Unicode literals are still not >> supported. Such literals in Python 2 code will need to be converted to >> ordinary Unicode literals for forward compatibility with Python 3. >> >> Cheers, >> Nick. >> >> -- >> Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> http://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> http://mail.python.org/mailman/options/python-dev/guido%40python.org -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From guido at python.org Mon Jun 18 03:07:29 2012 From: guido at python.org (Guido van Rossum) Date: Sun, 17 Jun 2012 18:07:29 -0700 Subject: [Python-Dev] Raw string syntax inconsistency In-Reply-To: References: <4FDDF0B4.3020805@v.loewis.de> Message-ID: On Sun, Jun 17, 2012 at 4:55 PM, Nick Coghlan wrote: > On Mon, Jun 18, 2012 at 6:41 AM, Guido van Rossum > wrote: > > Would it make sense to detect and reject these in 3.3 if the 2.7 syntax > is > > used? 
> Possibly - I'm trying not to actually *change* any of the internals of
> the string literal processing, though. (If I recall the way we
> implemented the change correctly, by the time we get to processing the
> string contents, we've forgotten which specific prefix was used.)
>
> However, this question did remind me of another detail I wanted to
> check after realising this discrepancy existed: it turns out this
> semantic inconsistency already arises if you use "from __future__
> import unicode_literals" to get supposedly "Python 3 style" string
> literals in 2.x
>
> Python 2.7.3 (default, May 29 2012, 14:54:22)
> >>> from __future__ import unicode_literals
> >>> print(r"\u03b3")
> γ
> >>> print("\u03b3")
> γ
>
> Python 3.2.1 (default, Jul 11 2011, 18:54:42)
> >>> print(r"\u03b3")
> \u03b3
> >>> print("\u03b3")
> γ
>
> So, perhaps the answer is to leave this as is, and try to make 2to3
> smart enough to detect such escapes and replace them with their
> properly encoded (according to the source code encoding) Unicode
> equivalent?

But the whole point of the reintroduction of u"..." is to support code
that isn't run through 2to3. Frankly, I don't care how it's done, but
I'd say it's important not to silently have different behavior for the
same notation in the two versions. If that means we have to add an extra
step to the compiler to reject r"\u03b3", so be it.

> After all, that's already the way to include such
> characters in a forward compatible way when using the future import:
>
> Python 2.7.3 (default, May 29 2012, 14:54:22)
> >>> from __future__ import unicode_literals
> >>> print("γ")
> γ
> >>> print(r"γ\n")
> γ\n
>
> Python 3.2.1 (default, Jul 11 2011, 18:54:42)
> >>> print("γ")
> γ
> >>> print(r"γ\n")
> γ\n

Hm. I still encounter enough environments that don't know how to display
such characters that I would prefer to have a rock-solid \u escape
mechanism. I can think of two ways to support "expanded" unicode
characters in raw strings a la Python 2: (a) let the re module interpret
the escapes (like it does for \r and \n); (b) the user can write
r"someblah" "\u03b3" r"moreblah".

> So, rather than going ahead with reverting "ur" support as I first
> suggested (since it turns out that's not a *new* problem, but just a
> different way of spelling an *existing* problem), how about I do the
> following:
>
> 1. Add a note to PEP 414 and the Py3k porting guide regarding the
> discrepancy in escaping semantics for raw Unicode strings between 2.x
> and 3.x
> 2. Reject the tracker issue for reverting the ur support (the semantic
> problem already exists, and any solution we come up with for
> __future__.unicode_literals should handle the ur prefix as well)
> 3. Create a new feature request for 2to3 to see if it can
> automatically handle the problem of translating "\u" and "\U" escapes
> into properly encoded Unicode characters
>
> The scope of the problem is really quite small: you have to be using a
> raw Unicode string in 2.x (either via the string prefix, or the future
> import) *and* using a "\u" or "\U" escape within that string.

Yeah, but if you do this and it breaks you likely won't notice until way
late in your QA cycle, when it may be tough to track down the origin.
I'd rather make ru"\u03b3" a syntax error if we can't give it the same
meaning as in Python 2. (I'm not sure what to do about the same bug with
__future__. Maybe we should declare that a bug and "fix" it in a future
2.7 bugfix release?)

> Regards,
> Nick.
> > > > > --Guido van Rossum (sent from Android phone) > > > > On Jun 17, 2012 1:13 PM, "Nick Coghlan" wrote: > >> > >> On Mon, Jun 18, 2012 at 3:54 AM, Terry Reedy wrote: > >> > The premise of the discussion of adding 'u', and of Guido's > acceptance, > >> > was > >> > that "it's about as harmless as they come". I do not remember any > >> > discussion > >> > of 'ur' and what it really means in 2.x, and that supporting it meant > >> > adding > >> > back 2.x's interaction effect. Indeed, Nick's version goes on to say > >> > "This > >> > PEP was originally written by Armin Ronacher, and Guido's approval was > >> > given > >> > based on that version." Armin's original version (and subsequent edit) > >> > only > >> > proposed adding 'u' (and 'U') and made no mention of 'ur'. Nick's > >> > seemingly > >> > innocuous addition of also adding 'ur' came after Guido's approval, > and > >> > as > >> > discovered, is not so innocuous. > >> > >> Right, that matches my recollection as well - we (or least I) thought > >> mapping "ur" to the Python 3 "r" prefix was sufficient, but it turns > >> out doing so means there are some 2.x string literals that will > >> silently behave differently in 3.x. > >> > >> Martin's right that that part of the PEP should definitely be amended > >> (along with the relevant section in What's New) > >> > >> > I do not think he needs to discuss adding and deleting support, but > >> > merely > >> > state that 'ur' support is not added because 'ur' has a special > meaning > >> > that > >> > would require changing literal handling. The sentence about supporting > >> > 'ur' > >> > could be negated and moved after the sentence about not changing > Unicode > >> > handling. A possibility: > >> > > >> > "Combination of the unicode prefix with the raw string prefix will not > >> > be > >> > supported because in Python 2, the combination 'ur' has a special > >> > meaning > >> > that would require changing the handling of unicode literals" > >> > >> In addition to changing the proposal section to only cover "u" and > >> "U", I'll actually add a new subsection along the lines of the > >> following: > >> > >> Exclusion of Raw Unicode Strings > >> ------------------------------------------------- > >> > >> Python 2.x includes a concept of "raw Unicode" strings. These are > >> partially raw string literals that still support the "\u" and "\U" > >> escape codes for Unicode character entry, but otherwise treat "\" as a > >> literal backslash character. As 3.x has no such concept of a partially > >> raw string literal, explicit raw Unicode literals are still not > >> supported. Such literals in Python 2 code will need to be converted to > >> ordinary Unicode literals for forward compatibility with Python 3. > >> > >> Cheers, > >> Nick. > >> > >> -- > >> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > >> _______________________________________________ > >> Python-Dev mailing list > >> Python-Dev at python.org > >> http://mail.python.org/mailman/listinfo/python-dev > >> Unsubscribe: > >> http://mail.python.org/mailman/options/python-dev/guido%40python.org > > > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From python at mrabarnett.plus.com Mon Jun 18 03:13:31 2012 From: python at mrabarnett.plus.com (MRAB) Date: Mon, 18 Jun 2012 02:13:31 +0100 Subject: [Python-Dev] Raw string syntax inconsistency In-Reply-To: References: <4FDDF0B4.3020805@v.loewis.de> Message-ID: <4FDE80BB.3010407@mrabarnett.plus.com> On 18/06/2012 00:55, Nick Coghlan wrote: > On Mon, Jun 18, 2012 at 6:41 AM, Guido van Rossum wrote: >> Would it make sense to detect and reject these in 3.3 if the 2.7 syntax is >> used? > > Possibly - I'm trying not to actually *change* any of the internals of > the string literal processing, though. (If I recall the way we > implemented the change correctly, by the time we get to processing the > string contents, we've forgotten which specific prefix was used) > > However, tis question did remind me of another detail I wanted to > check after realising this discrepancy existed: it turns out this > semantic inconsistency already arises if you use "from __future__ > import unicode_literals" to get supposedly "Python 3 style" string > literals in 2.x > > Python 2.7.3 (default, May 29 2012, 14:54:22) >>>> from __future__ import unicode_literals >>>> print(r"\u03b3") > ? >>>> print("\u03b3") > ? > > Python 3.2.1 (default, Jul 11 2011, 18:54:42) >>>> print(r"\u03b3") > \u03b3 >>>> print("\u03b3") > ? > > So, perhaps the answer is to leave this as is, and try to make 2to3 > smart enough to detect such escapes and replace them with their > properly encoded (according to the source code encoding) Unicode > equivalent? What if it's not possible to encode that character? I suppose that it could be expanded into a string expression so that a non-raw string literal could be used, possibly using implicit concatenation, parenthesised, if necessary (or always?). > After all, that's already the way to include such characters in a > forward compatible way when using the future import: > > Python 2.7.3 (default, May 29 2012, 14:54:22) >>>> from __future__ import unicode_literals >>>> print("?") > ? >>>> print(r"?\n") > ?\n > > Python 3.2.1 (default, Jul 11 2011, 18:54:42) >>>> print("?") > ? >>>> print(r"?\n") > ?\n > > So, rather than going ahead with reverting "ur" support as I first > suggested (since it turns out that's not a *new* problem, but just a > different way of spelling an *existing* problem), how about I do the > following: > > 1. Add a note to PEP 414 and the Py3k porting guide regarding the > discrepancy in escaping semantics for raw Unicode strings between 2.x > and 3.x > 2. Reject the tracker issue for reverting the ur support (the semantic > problem already exists, and any solution we come up with for > __future__.unicode_literals should handle the ur prefix as well) > 3. Create a new feature request for 2to3 to see if it can > automatically handle the problem of translating "\u" and "\U" escapes > into properly encoded Unicode characters > > The scope of the problem is really quite small: you have to be using a > raw Unicode string in 2.x (either via the string prefix, or the future > import) *and* using a "\u" or "\U" escape within that string. > [snip] From gmspro at yahoo.com Mon Jun 18 06:43:07 2012 From: gmspro at yahoo.com (gmspro) Date: Sun, 17 Jun 2012 21:43:07 -0700 (PDT) Subject: [Python-Dev] What's the best way to debug python3 source code? Message-ID: <1339994587.58533.YahooMailClassic@web164606.mail.gq1.yahoo.com> Hi, What's the best way to debug python3 source code? To fix a bug i need to debug source code(C files). I use gdb to debug. 
But how can i get the exact file/point to fix the bug? How can i know quickly where the bug is? How can i run python>>> from gdb and giving input there how can i test and debug to fix a bug? Someone please explain/elaborate the process you use/do as usual with examples. Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephen at xemacs.org Mon Jun 18 07:54:46 2012 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Mon, 18 Jun 2012 14:54:46 +0900 Subject: [Python-Dev] Raw string syntax inconsistency In-Reply-To: <4FDDF0B4.3020805@v.loewis.de> References: <4FDDF0B4.3020805@v.loewis.de> Message-ID: <87d34xb1w9.fsf@uwakimon.sk.tsukuba.ac.jp> "Martin v. L?wis" writes: > (this reminds me of Germany's path wrt. nuclear power Yeah, except presumably Python won't be buying cheap "raw Unicode" support from Perl. ;-) From martin at v.loewis.de Mon Jun 18 07:59:38 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 18 Jun 2012 07:59:38 +0200 Subject: [Python-Dev] Raw string syntax inconsistency In-Reply-To: References: <4FDDF0B4.3020805@v.loewis.de> Message-ID: <4FDEC3CA.9070109@v.loewis.de> On 17.06.2012 22:41, Guido van Rossum wrote: > Would it make sense to detect and reject these in 3.3 if the 2.7 syntax > is used? Maybe we are talking about different things: The (new) proposal is that the ur prefix in 3.3 is a syntax error (again, as it was before PEP 414). So, yes: the raw unicode literals will be rejected (not by explicitly detecting them, though). Regards, Martin From tjreedy at udel.edu Mon Jun 18 07:59:39 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Mon, 18 Jun 2012 01:59:39 -0400 Subject: [Python-Dev] Raw string syntax inconsistency In-Reply-To: References: <4FDDF0B4.3020805@v.loewis.de> Message-ID: On 6/17/2012 9:07 PM, Guido van Rossum wrote: > On Sun, Jun 17, 2012 at 4:55 PM, Nick Coghlan So, perhaps the answer is to leave this as is, and try to make 2to3 > smart enough to detect such escapes and replace them with their > properly encoded (according to the source code encoding) Unicode > equivalent? > > > But the whole point of the reintroduction of u"..." is to support code > that isn't run through 2to3. People writing 2&3 code sometimes use 2to3 once (or a few times) on their 2.6/7 version during development to find things they must pay attention to. So Nick's idea could be helpful to people who do not want to use 2to3 routinely either in development or deployment. > Frankly, I don't care how it's done, but > I'd say it's important not to silently have different behavior for the > same notation in the two versions. The fundamental problem was giving the 'u' prefix two different meanings in 2.x: 'change the storage type from bytes to unicode', and 'change the contents by partially cooking the literal even when raw processing is requested'*. The only way to silently have the same behavior is to re-introduce the second meaning of partial cooking. (But I would rather make it unnecessary.) But that would freeze the 'u' prefix, or at least 'ur' ('un-raw') forever. It would be better to introduce a new, separate 'p' prefix, to mean partially raw, partially cooked. (But I am opposes to *I think this non-orthogonal interaction effect was a design mistake and that it would have been better to have re do all the cooking needed by also interpreting \u and \U sequences. I also think we should add this now for 3.3 if possible, to make partial cooking at the parsing stage unnecessary. 
Putting the processing in re makes it work for all strings, not just those given as literals. > If that means we have to add an extra > step to the compiler to reject r"\u03b3", so be it. I do not get this. Surely you cannot mean to suddenly start rejecting, in 3.3, a large set of perfectly legal and sensible 6 and 10 character sequences when embedded in literals? > Hm. I still encounter enough environments that don't know how to display > such characters that I would prefer to have a rock solid \u escape > mechanism. I can think of two ways to support "expanded" unicode > characters in raw strings a la Python 2; (a) let the re module interpret the escapes (like it does for \r and \n); As said above, I favor this. The 2.x partial cooking (with 'ur' prefix) was primarily a substitute for this. (b) the user can write r"someblah" "\u03b3" r"moreblah". This is somewhat orthogonal to (a). Users can this whenever they want partial processing of backslashes without doubling those they want left as is. A generic example is r'someraw' 'somecooked' r'moreraw' 'morecooked'. -- Terry Jan Reedy From tjreedy at udel.edu Mon Jun 18 08:02:11 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Mon, 18 Jun 2012 02:02:11 -0400 Subject: [Python-Dev] What's the best way to debug python3 source code? In-Reply-To: <1339994587.58533.YahooMailClassic@web164606.mail.gq1.yahoo.com> References: <1339994587.58533.YahooMailClassic@web164606.mail.gq1.yahoo.com> Message-ID: On 6/18/2012 12:43 AM, gmspro wrote: > What's the best way to debug python3 source code? ... The pydev list is for development *of* future python releases. For questions about development *with* current releases, please ask on python-list or other user oriented forums. -- Terry Jan Reedy From martin at v.loewis.de Mon Jun 18 08:06:34 2012 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Mon, 18 Jun 2012 08:06:34 +0200 Subject: [Python-Dev] Raw string syntax inconsistency In-Reply-To: References: <4FDDF0B4.3020805@v.loewis.de> Message-ID: <4FDEC56A.30801@v.loewis.de> > But the whole point of the reintroduction of u"..." is to support code > that isn't run through 2to3. Frankly, I don't care how it's done, but > I'd say it's important not to silently have different behavior for the > same notation in the two versions. If that means we have to add an extra > step to the compiler to reject r"\u03b3", so be it. It's actually ur"\u03b3" that will be rejected, and that falls out easily by just not being able to parse it. The 2.x r"\u03b3" denotes a 6-character (byte) string, which continues to be understood as a 6-character Unicode string in 3.3. > Hm. I still encounter enough environments that don't know how to display > such characters that I would prefer to have a rock solid \u escape > mechanism. If you want to use them under the revised PEP 414, you will have to avoid making them raw, and just use a plain u prefix. IOW, you need to double all backslashes that you want to stand on their own, and then use \u escapes to denote non-typable characters. > Yeah, but if you do this and it breaks you likely won't notice until way > late in your QA cycle, when it may be tough to track down the origin. > I'd rather make ru"\u03b3" a syntax error if we can't give it the same > meaning as in Python 2. 
That's exactly the proposal, see

http://bugs.python.org/issue15096
http://bugs.python.org/file26036/issue15096-1.patch

Regards, Martin

From martin at v.loewis.de Mon Jun 18 08:16:07 2012
From: martin at v.loewis.de (Martin v. Löwis)
Date: Mon, 18 Jun 2012 08:16:07 +0200
Subject: [Python-Dev] What's the best way to debug python3 source code?
In-Reply-To: <1339994587.58533.YahooMailClassic@web164606.mail.gq1.yahoo.com>
References: <1339994587.58533.YahooMailClassic@web164606.mail.gq1.yahoo.com>
Message-ID: <4FDEC7A7.7070003@v.loewis.de>

> What's the best way to debug python3 source code?
> To fix a bug I need to debug the source code (C files).
> I use gdb to debug.

If the bug is presumably in C, then using gdb works fine for me.

> But how can I get the exact file/point to fix the bug?

As usual: set breakpoints and watch points. In some cases, also augment the C code to print out trace messages. Use the excellent python-gdb.py, which requires a recent gdb version.

> How can I know quickly where the bug is?

The fastest way is probably Linus Torvalds' approach: just look at the code for 20 seconds, and see the bug without running the code. YMMV.

> How can I run the python >>> prompt from gdb, give input there, and test
> and debug to fix a bug?

If you start Python in gdb, then do "r", it will automatically start interactive mode.

> Could someone please explain/elaborate the process you usually use, with
> examples.

I wasn't quite sure whether your question is off-topic for python-dev: this last request definitely is. python-dev is not a place to get free education. Instead, it is a place where *you* contribute to Python. If you work on a specific bug and have a specific question about it, feel free to ask. However, "teach me how to debug" is best asked on other mailing lists.

Regards, Martin

From tjreedy at udel.edu Mon Jun 18 08:23:59 2012
From: tjreedy at udel.edu (Terry Reedy)
Date: Mon, 18 Jun 2012 02:23:59 -0400
Subject: [Python-Dev] Raw string syntax inconsistency
In-Reply-To: <4FDEC56A.30801@v.loewis.de>
References: <4FDDF0B4.3020805@v.loewis.de> <4FDEC56A.30801@v.loewis.de>
Message-ID: 

On 6/18/2012 2:06 AM, "Martin v. Löwis" wrote:
>> Hm. I still encounter enough environments that don't know how to display
>> such characters that I would prefer to have a rock solid \u escape
>> mechanism.
>
> If you want to use them under the revised PEP 414, you will have to
> avoid making them raw, and just use a plain u prefix. IOW, you need
> to double all backslashes that you want to stand on their own, and
> then use \u escapes to denote non-typable characters.

And such literals will mean the same thing in 2.x and 3.3+.

-- Terry Jan Reedy

From ncoghlan at gmail.com Mon Jun 18 08:31:54 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 18 Jun 2012 16:31:54 +1000
Subject: [Python-Dev] Raw string syntax inconsistency
In-Reply-To: <4FDEC3CA.9070109@v.loewis.de>
References: <4FDDF0B4.3020805@v.loewis.de> <4FDEC3CA.9070109@v.loewis.de>
Message-ID: 

On Mon, Jun 18, 2012 at 3:59 PM, "Martin v. Löwis" wrote:
> On 17.06.2012 22:41, Guido van Rossum wrote:
>> Would it make sense to detect and reject these in 3.3 if the 2.7 syntax
>> is used?
>
> Maybe we are talking about different things: The (new) proposal is that
> the ur prefix in 3.3 is a syntax error (again, as it was before PEP
> 414). So, yes: the raw unicode literals will be rejected (not by
> explicitly detecting them, though).
I think GvR was replying to my email where I was briefly reconsidering the idea of keeping them around (because the unicode_literals future import already suffers from this problem of literals that don't mean the same things in 2.x and in 3.x). However, that was flawed reasoning on my part - simply banning them altogether in 3.x is the simplest option to ensure this particular error doesn't pass silently, especially since there are alternate forward compatible ways to write them, such as:

Python 2.7.3 (default, May 29 2012, 14:54:22)
>>> from __future__ import unicode_literals
>>> print(u"\u03b3" r"\n")
γ\n
>>> print(u"\u03b3\\n")
γ\n

Python 3.3.0a4 (default:f1dd70bfb4c5, May 31 2012, 09:47:51)
>>> print(u"\u03b3" r"\n")
γ\n
>>> print(u"\u03b3\\n")
γ\n

Cheers, Nick.

-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From jimjjewett at gmail.com Mon Jun 18 09:08:40 2012
From: jimjjewett at gmail.com (Jim Jewett)
Date: Mon, 18 Jun 2012 03:08:40 -0400
Subject: [Python-Dev] PEP 362: 4th edition
In-Reply-To: References: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com> <4fdc03ef.8759320a.5112.6c67@mx.google.com>
Message-ID: 

On Sat, Jun 16, 2012 at 11:27 AM, Nick Coghlan wrote:
> On Sat, Jun 16, 2012 at 1:56 PM, Jim J. Jewett wrote:
>> *Every* Parameter attribute is optional, even name.
>> (Think of builtins, even if they aren't automatically
>> supported yet.) So go ahead and define some others
>> that are sometimes useful.

> Add only stuff we know is interesting and useful.

Agreed, but it doesn't have to be useful in all cases, or even available on all Signatures; if users are already prepared for missing data, it is enough that the attribute be well-defined, and be useful when it does appear. That said, it looks like is_implemented isn't sufficiently well-defined.

> - kind
> - name (should be given meaningful content, even for POSITIONAL_ONLY parameters)

I agree that it *should* be given meaningful content, but I don't think the Parameter (or Signature) should be blocked without it. I also don't think that a documentation-only name that cannot be used for keyword calls should participate in equality. The existence of the parameter should participate, and its annotation is more important than usual, but its name is not.

> - default (may be missing, since "None" is allowed as a default value)
> - annotation (may be missing, since "None" is allowed as an annotation)

Position is also important, but I'm not certain whether it should be represented in the Parameter, or only in the Signature.

    copy(source, target)
    copy(target, source)

have different signatures, but I'm not sure whether it would be appropriate to reuse the same parameter objects.

>> Instead of defining a BoundArguments class, just return
>> a copy of the Signature, with "value" attributes added to
>> the Parameters.

> No, the "BoundArguments" class is designed to be easy to
> feed to a function call as f(*args, **kwds)

Why does that take a full class, as opposed to a method returning a tuple and a dict?

>> Use subclasses to distinguish the parameter kind.

> Please, no, using subclasses when there is no behavioural
> change is annoying.

A **kwargs argument is very different from an ordinary parameter. Its name doesn't matter (and therefore should not be considered in __eq__), it can only appear once per signature, and the possible location of its appearance is different. It is formatted differently (which I would prefer to do in the Parameter, rather than in Signature).
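To illustrate the point about the name (the function names below are arbitrary, chosen just for this sketch):

    def f(**opts): return opts
    def g(**kwargs): return kwargs

    f(a=1) == g(a=1)   # True -- the name is invisible to callers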
It also holds very different data, and must be treated specially by several Signature methods, particularly when either validating or binding. (It is bound to a Mapping, rather than to a single value, so you have to keep it around longer and use a different "bind method".)

>>> A Signature object has the following public attributes and methods:

The more I try to work with it, the more I want direct references to the two special arguments (*args, **kwargs) if they exist. FWIW, the current bind logic to find them -- particularly kwargs -- seems contorted, compared to self.kwargsparameter.

>> (3rd edition)
>>> * is_keyword_only : bool ...
>>> * is_args : bool ...
>>> * is_kwargs : bool ...

>> (4th edition)
>>> ... Parameter.POSITIONAL_ONLY ...
>>> ... Parameter.POSITIONAL_OR_KEYWORD ...
>>> ... Parameter.KEYWORD_ONLY ...
>>> ... Parameter.VAR_POSITIONAL ...
>>> ... Parameter.VAR_KEYWORD ...

>> This set has already grown, and I can think of others I would like to
>> use. (Pseudo-parameters, such as a method's self instance, or an
>> auxiliary variable.)

> No. This is the full set of binding behaviours. "self" is just an
> ordinary "POSITIONAL_OR_KEYWORD" argument (or POSITIONAL_ONLY, in some
> builtin cases).

Or no longer a "parameter" at all, once the method is bound. Except it sort of still is. Same for the space parameter in PyPy. I don't expect the stdlib implementation to support them initially, but I don't want it to get in the way, either. A supposedly closed set gets in the way.

>> I'm not sure
>> if positional parameters should also check position, or if that
>> can be left to the Signature.

> Positional parameters don't know their relative position, so it *has*
> to be left to the signature.

But perhaps they *should* know their relative position. Also, positional_only, *args, and **kwargs should be able to remove name from the list of compared attributes.

-jJ

From g.brandl at gmx.net Mon Jun 18 08:56:12 2012
From: g.brandl at gmx.net (Georg Brandl)
Date: Mon, 18 Jun 2012 08:56:12 +0200
Subject: [Python-Dev] 3.3 beta in one week
Message-ID: 

Hi all,

this is just a quick reminder that the feature freeze for 3.3 will start next weekend with the release of beta1. Since I won't be able to shift that date for short periods (the next possible date for me would be around July 16), I hope that everybody has planned ahead accordingly.

Let me also say that it's great to see how 3.3 is shaping up; the team is doing very well and I want to thank everybody for pushing, but not rushing, awesome features :)

cheers, Georg

From martin at v.loewis.de Mon Jun 18 09:17:56 2012
From: martin at v.loewis.de (Martin v. Löwis)
Date: Mon, 18 Jun 2012 09:17:56 +0200
Subject: [Python-Dev] 3.3 beta in one week
In-Reply-To: References: 
Message-ID: <4FDED624.60801@v.loewis.de>

> this is just a quick reminder that the feature freeze for 3.3
> will start next weekend with the release of beta1. Since I
> won't be able to shift that date for short periods (the next
> possible date for me would be around July 16), I hope that
> everybody has planned ahead accordingly.

Expect some rushing of PEP 397 then.

Regards, Martin

From martin at v.loewis.de Mon Jun 18 09:20:09 2012
From: martin at v.loewis.de (Martin v. Löwis)
Date: Mon, 18 Jun 2012 09:20:09 +0200
Subject: [Python-Dev] Tunable parameters in dictobject.c (was dictnotes.txt out of date?)
In-Reply-To: References: <5DC7D3C2-4BE4-4ECC-A037-7FC5E3244DA8@gmail.com> <4FD90819.4040305@hotpy.org>
Message-ID: <4FDED6A9.1090209@v.loewis.de>

> The default should be what we've had though.
> The new settings cause a lot more collisions
> and resizes.

Raymond, can you kindly point to an application that demonstrates this claim (in particular the "a lot more" part, which I'd translate to "more than 20% more").

I'm fine with reverting changes, but I agree that any benchmarking performed should be repeatable, and public. I agree it's sad to see a month worth of benchmarking reverted - but had that benchmarking been documented publicly, rather than just reporting the outcome, such reversal might have been avoided.

> Dicts get their efficiency from sparseness.
> Reducing the mindict size from 8 to 4 causes
> substantially more collisions in small dicts
> and gets closer to a linear search of a small tuple.

Why do you think the dictsize has been reduced from 8 to 4? It has not.

Regards, Martin

From ncoghlan at gmail.com Mon Jun 18 09:49:09 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 18 Jun 2012 17:49:09 +1000
Subject: [Python-Dev] PEP 362: 4th edition
In-Reply-To: References: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com> <4fdc03ef.8759320a.5112.6c67@mx.google.com>
Message-ID: 

On Mon, Jun 18, 2012 at 5:08 PM, Jim Jewett wrote:
> But perhaps they *should* know their relative position.

No, relative position is a property of the Signature - a parameter has no position until it is made part of a signature.

> Also,
> positional_only, *args, and **kwargs should be able to remove name
> from the list of compared attributes.

As you yourself said, removing the name from consideration for positional arguments is potentially dangerous - a function that accepts (source, dest) is very different from one that accepts (dest, source). If you don't care about names, then call bind() to see if it works or write your own more permissive comparison operation. If you do care about names, then the default equality definition is appropriate.

Ultimately, people are going to be free to do their own thing. The heart of the new API is to get all of this already available information out into a more conveniently manipulable format. Once we see what people do with it, *then* it is time to discuss additional convenience APIs (such as Signature level properties for parameter subsets). Don't overengineer it at the foundational level - let people build their own additional layers of customisation on top.

Cheers, Nick.

-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From gmspro at yahoo.com Mon Jun 18 13:10:44 2012
From: gmspro at yahoo.com (gmspro)
Date: Mon, 18 Jun 2012 04:10:44 -0700 (PDT)
Subject: [Python-Dev] What's the best way to debug python3 source code? (for this bug: http://bugs.python.org/issue15068)
Message-ID: <1340017844.52376.YahooMailClassic@web164606.mail.gq1.yahoo.com>

@martin,

I'm working on this bug, http://bugs.python.org/issue15068

I tried this with gdb:

(gdb) run
>>> from sys import stdin
>>> str=sys.stdin.read()
blabla
blabla
blabla
CTRL+D
CTRL+D
>>> CTRL+C

(gdb) backtrace
0xb7f08348 in ___newselect_nocancel () at ../sysdeps/unix/syscall-template.S:82
82      ../sysdeps/unix/syscall-template.S: No such file or directory.
        in ../sysdeps/unix/syscall-template.S
Current language: auto
The current source language is "auto; currently asm".

(gdb) backtrace
#0  0xb7f08348 in ___newselect_nocancel () at ../sysdeps/unix/syscall-template.S:82
#1  0xb7b43f9f in readline_until_enter_or_signal (prompt=0xb7b578d0 ">>> ", signal=0xbfffed54) at /home/user1/python/Python-3.2.3/Modules/readline.c:987
#2  0xb7b440ed in call_readline (sys_stdin=0xb7f84420, sys_stdout=0xb7f844c0, prompt=0xb7b578d0 ">>> ") at /home/user1/python/Python-3.2.3/Modules/readline.c:1082
#3  0x0811a15b in PyOS_Readline (sys_stdin=0xb7f84420, sys_stdout=0xb7f844c0, prompt=0xb7b578d0 ">>> ") at ../Parser/myreadline.c:200
#4  0x0811b92d in tok_nextc (tok=0x82e1d08) at ../Parser/tokenizer.c:897
#5  0x0811c39b in tok_get (tok=0x82e1d08, p_start=0xbfffef20, p_end=0xbfffef1c) at ../Parser/tokenizer.c:1306
#6  0x0811cda2 in PyTokenizer_Get (tok=0x82e1d08, p_start=0xbfffef20, p_end=0xbfffef1c) at ../Parser/tokenizer.c:1687
#7  0x08119866 in parsetok (tok=0x82e1d08, g=0x81df0c0, start=256, err_ret=0xbfffefd8, flags=0xbfffefd4) at ../Parser/parsetok.c:150
#8  0x081197a6 in PyParser_ParseFileFlagsEx (fp=0xb7f84420, filename=0x818ac7a "", enc=0xb7c57750 "UTF-8", g=0x81df0c0, start=256, ps1=0xb7b578d0 ">>> ", ps2=0xb7b578e8 "... ", err_ret=0xbfffefd8, flags=0xbfffefd4) at ../Parser/parsetok.c:100
#9  0x080cb96d in PyParser_ASTFromFile (fp=0xb7f84420, filename=0x818ac7a "", enc=0xb7c57750 "UTF-8", start=256, ps1=0xb7b578d0 ">>> ", ps2=0xb7b578e8 "... ", flags=0xbffff194, errcode=0xbffff044, arena=0x8285a50) at ../Python/pythonrun.c:1941
#10 0x080c9c9c in PyRun_InteractiveOneFlags (fp=0xb7f84420, filename=0x818ac7a "", flags=0xbffff194) at ../Python/pythonrun.c:1175
#11 0x080c99ed in PyRun_InteractiveLoopFlags (fp=0xb7f84420, filename=0x818ac7a "", flags=0xbffff194) at ../Python/pythonrun.c:1086
#12 0x080c98b5 in PyRun_AnyFileExFlags (fp=0xb7f84420, filename=0x818ac7a "", closeit=0, flags=0xbffff194) at ../Python/pythonrun.c:1055
#13 0x080deb5c in run_file (fp=0xb7f84420, filename=0x0, p_cf=0xbffff194) at ../Modules/main.c:307
#14 0x080df5c0 in Py_Main (argc=1, argv=0x81f7008) at ../Modules/main.c:733
#15 0x0805c3d9 in main (argc=1, argv=0xbffff2f4) at ../Modules/python.c:63

But I can't get any clue which file to look at.

Thanks

From arigo at tunes.org Mon Jun 18 15:14:25 2012
From: arigo at tunes.org (Armin Rigo)
Date: Mon, 18 Jun 2012 15:14:25 +0200
Subject: [Python-Dev] CFFI released
Message-ID: 

Hi all,

We (=fijal and myself) finally released the beta-0.1 version of CFFI.

http://cffi.readthedocs.org/

It is a(nother) simple Foreign Function Interface for Python calling C code. I talked about it with a few python core people during the PyCon sprint; now it's done, with a pure Python part and a compact (but still 3000 lines) piece of C code. The goal is for it to be simple yet as complete as possible; it can be used in places where ctypes (say) is not applicable or only with platform-specific difficulties, e.g. to rewrite a "_curses" module in pure Python, or access the X libraries, etc.

Of course I'm not going to suggest that it should be part of the standard library right now, but I do hope that over time, should it prove useful and used, I could come back and make such a suggestion. In any case it looks like we are going to write native and JITted PyPy support for it, and change our pure Python "_ctypes" implementation to be based on "cffi". As it is much more compact to support than the full _ctypes, it is also good for Jython and IronPython.

A bientôt, Armin.
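P.S. To give a taste of it, here is (roughly) the first example from the docs:

    from cffi import FFI
    ffi = FFI()
    ffi.cdef("""
        int printf(const char *format, ...);   // copied from the man page
    """)
    C = ffi.dlopen(None)               # loads the entire C namespace
    arg = ffi.new("char[]", "world")   # equivalent to a C char[] initialized from "world"
    C.printf("hi there, %s!\n", arg)   # call printf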
From mark at hotpy.org Mon Jun 18 16:28:24 2012
From: mark at hotpy.org (Mark Shannon)
Date: Mon, 18 Jun 2012 15:28:24 +0100
Subject: [Python-Dev] Tunable parameters in dictobject.c (was dictnotes.txt out of date?)
In-Reply-To: <4FDED6A9.1090209@v.loewis.de>
References: <5DC7D3C2-4BE4-4ECC-A037-7FC5E3244DA8@gmail.com> <4FD90819.4040305@hotpy.org> <4FDED6A9.1090209@v.loewis.de>
Message-ID: <4FDF3B08.4070700@hotpy.org>

Martin v. Löwis wrote:
>> The default should be what we've had though.
>> The new settings cause a lot more collisions
>> and resizes.
>
> Raymond, can you kindly point to an application that demonstrates this
> claim (in particular the "a lot more" part, which I'd translate to
> "more than 20% more").

It is quite easy to find a program that results in a lot more resizes. The issue is not whether there are more resizes, but whether it is faster or slower. The evidence suggests that the new default settings are no slower and reduce memory use.

> I'm fine with reverting changes, but I agree that any benchmarking
> performed should be repeatable, and public. I agree it's sad to see
> a month worth of benchmarking reverted - but had that benchmarking
> been documented publicly, rather than just reporting the outcome,
> such reversal might have been avoided.
>
>> Dicts get their efficiency from sparseness.

But do they? The results of benchmarking would seem to suggest (at least on my test machine) that overly-sparse dicts are slower. Possibly due to increased cache misses.

>> Reducing the mindict size from 8 to 4 causes
>> substantially more collisions in small dicts
>> and gets closer to a linear search of a small tuple.
>
> Why do you think the dictsize has been reduced from 8 to 4? It has not.

For combined tables it remains 8 as before. For split tables it *has* been reduced to 4. This will increase collisions, and it is possible that a linear search would be faster for these very small dictionaries. However, a 4 entry table fits into a single cache line (for a 64 byte cache line on a 32 bit machine) which may save a lot of cache misses. But this is all conjecture. Whatever the reason, the current parameters give the best performance empirically.

Cheers, Mark.

From yselivanov.ml at gmail.com Mon Jun 18 16:37:15 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Mon, 18 Jun 2012 10:37:15 -0400
Subject: [Python-Dev] PEP 362: 4th edition
In-Reply-To: References: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com> <4fdc03ef.8759320a.5112.6c67@mx.google.com>
Message-ID: 

Jim,

On 2012-06-18, at 3:08 AM, Jim Jewett wrote:
> On Sat, Jun 16, 2012 at 11:27 AM, Nick Coghlan wrote:
>> On Sat, Jun 16, 2012 at 1:56 PM, Jim J. Jewett wrote:
>
>>> Instead of defining a BoundArguments class, just return
>>> a copy of the Signature, with "value" attributes added to
>>> the Parameters.
>
>> No, the "BoundArguments" class is designed to be easy to
>> feed to a function call as f(*args, **kwds)
>
> Why does that take a full class, as opposed to a method returning a
> tuple and a dict?

Read this thread, please: http://mail.python.org/pipermail/python-dev/2012-June/120000.html

And also take a look at the check types example in the pep. In short - it's easy to work with 'BoundArguments.arguments', but it is not enough for invocation, thus the 'BoundArguments.args' & '.kwargs' properties.

>>> Use subclasses to distinguish the parameter kind.
>
>> Please, no, using subclasses when there is no behavioural
>> change is annoying.
>
> A **kwargs argument is very different from an ordinary parameter.
> Its
> name doesn't matter (and therefore should not be considered in
> __eq__),

The importance of its name depends hugely on the use context. In some it may be very important.

> it can only appear once per signature, and the possible
> location of its appearance is different.

I'll fix the __eq__ to ignore positions of **kwargs & keyword-only parameters.

> It is formatted differently
> (which I would prefer to do in the Parameter, rather than in
> Signature).

I think we'll remove the 'Signature.format()' from the PEP, leaving just 'Signature.__str__'. And just for '__str__' I don't think it's a bad idea to let Signature format its parameters.

> It also holds very different data, and must be treated
> specially by several Signature methods, particularly when either
> validating or binding. (It is bound to a Mapping, rather than to a
> single value, so you have to keep it around longer and use a different
> "bind method".)

And it is treated specially, along with the *args.

>>>> A Signature object has the following public attributes and methods:
>
> The more I try to work with it, the more I want direct references to
> the two special arguments (*args, **kwargs) if they exist. FWIW, the
> current bind logic to find them -- particularly kwargs -- seems
> contorted, compared to self.kwargsparameter.

Well, 'self.kwargsparameter' will break the 'self.parameters' collection, unless you want one parameter to be in two places. I've already had three or four implementations of this PEP, with the first couple having the **kwargs parameter stored separately, but keeping all parameters in one collection turned out to be the most elegant solution.

In fact, the check types example (in the PEP) is currently shorter and easier to read with 'Signature.parameters' than with a dedicated property for the '**kwargs' parameter. And if after all you need direct references to *args or **kwargs - write a little helper, which finds them in 'Signature.parameters'.

>>> I'm not sure
>>> if positional parameters should also check position, or if that
>>> can be left to the Signature.
>
>> Positional parameters don't know their relative position, so it *has*
>> to be left to the signature.
>
> But perhaps they *should* know their relative position.

I disagree here. That will just complicate things.

> Also,
> positional_only, *args, and **kwargs should be able to remove name
> from the list of compared attributes.

I still believe that in most contexts the name of a parameter matters (even if it's **kwargs). Besides, how can we make __eq__ configurable? Let's make it do the most explicit and simple logic, and those who need a custom one will implement such for themselves.

Thank you,
- Yury

From solipsis at pitrou.net Mon Jun 18 17:04:10 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Mon, 18 Jun 2012 17:04:10 +0200
Subject: [Python-Dev] Tunable parameters in dictobject.c (was dictnotes.txt out of date?)
References: <5DC7D3C2-4BE4-4ECC-A037-7FC5E3244DA8@gmail.com> <4FD90819.4040305@hotpy.org> <4FDED6A9.1090209@v.loewis.de> <4FDF3B08.4070700@hotpy.org>
Message-ID: <20120618170410.12fbf1a0@pitrou.net>

On Mon, 18 Jun 2012 15:28:24 +0100 Mark Shannon wrote:
>
> But do they? The results of benchmarking would seem to suggest (at least
> on my test machine) that overly-sparse dicts are slower.
> Possibly due to increased cache misses.

Or, at least, they are not faster. See the synthetic experiments in http://bugs.python.org/issue10408

That said, Raymond might have witnessed different results at the time.
Hardware evolves quickly and the parameters change (memory latency today is at least 50+ CPU cycles, which is quite a lot of wasted work on a pipelined superscalar CPU).

Regards

Antoine.

From guido at python.org Mon Jun 18 17:11:05 2012
From: guido at python.org (Guido van Rossum)
Date: Mon, 18 Jun 2012 08:11:05 -0700
Subject: [Python-Dev] Raw string syntax inconsistency
In-Reply-To: References: <4FDDF0B4.3020805@v.loewis.de>
Message-ID: 

On Sun, Jun 17, 2012 at 10:59 PM, Terry Reedy wrote:
> On 6/17/2012 9:07 PM, Guido van Rossum wrote:
>> On Sun, Jun 17, 2012 at 4:55 PM, Nick Coghlan wrote:
>>> So, perhaps the answer is to leave this as is, and try to make 2to3
>>> smart enough to detect such escapes and replace them with their
>>> properly encoded (according to the source code encoding) Unicode
>>> equivalent?
>>
>> But the whole point of the reintroduction of u"..." is to support code
>> that isn't run through 2to3.
>
> People writing 2&3 code sometimes use 2to3 once (or a few times) on their
> 2.6/7 version during development to find things they must pay attention to.
> So Nick's idea could be helpful to people who do not want to use 2to3
> routinely either in development or deployment.
>
>> Frankly, I don't care how it's done, but
>> I'd say it's important not to silently have different behavior for the
>> same notation in the two versions.
>
> The fundamental problem was giving the 'u' prefix two different meanings
> in 2.x: 'change the storage type from bytes to unicode', and 'change the
> contents by partially cooking the literal even when raw processing is
> requested'*. The only way to silently have the same behavior is to
> re-introduce the second meaning of partial cooking. (But I would rather
> make it unnecessary.) But that would freeze the 'u' prefix, or at least
> 'ur' ('un-raw'), forever. It would be better to introduce a new, separate
> 'p' prefix, to mean partially raw, partially cooked. (But I am opposed to
> adding that as well.)
>
> *I think this non-orthogonal interaction effect was a design mistake and
> that it would have been better to have re do all the cooking needed by also
> interpreting \u and \U sequences. I also think we should add this now for
> 3.3 if possible, to make partial cooking at the parsing stage unnecessary.
> Putting the processing in re makes it work for all strings, not just those
> given as literals.
>
>> If that means we have to add an extra
>> step to the compiler to reject r"\u03b3", so be it.
>
> I do not get this. Surely you cannot mean to suddenly start rejecting, in
> 3.3, a large set of perfectly legal and sensible 6 and 10 character
> sequences when embedded in literals?

Sorry, I meant rejecting ru"...." (and ur"....") if it contains a \u or \U escape that would be expanded by Python 2.

>> Hm. I still encounter enough environments that don't know how to display
>> such characters that I would prefer to have a rock solid \u escape
>> mechanism. I can think of two ways to support "expanded" unicode
>> characters in raw strings a la Python 2;
>> (a) let the re module interpret the escapes (like it does for \r and \n);
>
> As said above, I favor this. The 2.x partial cooking (with 'ur' prefix) was
> primarily a substitute for this.
>
>> (b) the user can write r"someblah" "\u03b3" r"moreblah".
>
> This is somewhat orthogonal to (a). Users can do this whenever they want
> partial processing of backslashes without doubling those they want left as
> is. A generic example is r'someraw' 'somecooked' r'moreraw' 'morecooked'.
> -- Terry Jan Reedy

--
--Guido van Rossum (python.org/~guido)

From guido at python.org Mon Jun 18 17:12:53 2012
From: guido at python.org (Guido van Rossum)
Date: Mon, 18 Jun 2012 08:12:53 -0700
Subject: [Python-Dev] Raw string syntax inconsistency
In-Reply-To: References: <4FDDF0B4.3020805@v.loewis.de> <4FDEC3CA.9070109@v.loewis.de>
Message-ID: 

Ok, banning ru"..." and ur"..." altogether is fine too (assuming it's fine with the originators of the PEP).

On Sun, Jun 17, 2012 at 11:31 PM, Nick Coghlan wrote:
> On Mon, Jun 18, 2012 at 3:59 PM, "Martin v. Löwis" wrote:
>> On 17.06.2012 22:41, Guido van Rossum wrote:
>>> Would it make sense to detect and reject these in 3.3 if the 2.7 syntax
>>> is used?
>>
>> Maybe we are talking about different things: The (new) proposal is that
>> the ur prefix in 3.3 is a syntax error (again, as it was before PEP
>> 414). So, yes: the raw unicode literals will be rejected (not by
>> explicitly detecting them, though).
>
> I think GvR was replying to my email where I was briefly reconsidering
> the idea of keeping them around (because the unicode_literals future
> import already suffers from this problem of literals that don't mean
> the same things in 2.x and in 3.x). However, that was flawed reasoning
> on my part - simply banning them altogether in 3.x is the simplest
> option to ensure this particular error doesn't pass silently,
> especially since there are alternate forward compatible ways to write
> them, such as:
>
> Python 2.7.3 (default, May 29 2012, 14:54:22)
> >>> from __future__ import unicode_literals
> >>> print(u"\u03b3" r"\n")
> γ\n
> >>> print(u"\u03b3\\n")
> γ\n
>
> Python 3.3.0a4 (default:f1dd70bfb4c5, May 31 2012, 09:47:51)
> >>> print(u"\u03b3" r"\n")
> γ\n
> >>> print(u"\u03b3\\n")
> γ\n
>
> Cheers, Nick.
>
> -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

--
--Guido van Rossum (python.org/~guido)

From brian at python.org Mon Jun 18 17:23:51 2012
From: brian at python.org (Brian Curtin)
Date: Mon, 18 Jun 2012 10:23:51 -0500
Subject: [Python-Dev] 3.3 beta in one week
In-Reply-To: <4FDED624.60801@v.loewis.de>
References: <4FDED624.60801@v.loewis.de>
Message-ID: 

On Mon, Jun 18, 2012 at 2:17 AM, "Martin v. Löwis" wrote:
>> this is just a quick reminder that the feature freeze for 3.3
>> will start next weekend with the release of beta1. Since I
>> won't be able to shift that date for short periods (the next
>> possible date for me would be around July 16), I hope that
>> everybody has planned ahead accordingly.
>
> Expect some rushing of PEP 397 then.

FYI: Martin requested that I be the PEP czar for 397, so the rush kicks off...now :)

From tjreedy at udel.edu Mon Jun 18 18:49:02 2012
From: tjreedy at udel.edu (Terry Reedy)
Date: Mon, 18 Jun 2012 12:49:02 -0400
Subject: [Python-Dev] What's the best way to debug python3 source code?
 (for this bug: http://bugs.python.org/issue15068)
In-Reply-To: <1340017844.52376.YahooMailClassic@web164606.mail.gq1.yahoo.com>
References: <1340017844.52376.YahooMailClassic@web164606.mail.gq1.yahoo.com>
Message-ID: 

On 6/18/2012 7:10 AM, gmspro wrote:
> I'm working on this bug, http://bugs.python.org/issue15068

Oh. From your first message, I thought you were asking about a personal bug. I will mention that real names are customary on this list. (Unless you have a good professional reason otherwise.) They are also necessary for contributor forms, which we need from anyone submitting an acceptable patch large enough to be covered by copyright.

> I tried this with gdb
> (gdb) run
> >>> from sys import stdin
> >>> str=sys.stdin.read()
> blabla
> blabla
> blabla
> CTRL+D
> CTRL+D
> >>> CTRL+C
> (gdb) backtrace

The backtrace is from when you hit ^C after the prompt and should have nothing to do with the double ^D behavior, except that it shows what we would like the state to be after the first ^D.

> 0xb7f08348 in ___newselect_nocancel () at
> ../sysdeps/unix/syscall-template.S:82
> 82      ../sysdeps/unix/syscall-template.S: No such file or directory.
>         in ../sysdeps/unix/syscall-template.S
> Current language: auto
> The current source language is "auto; currently asm".

It appears you interrupted in a system call written in assembler.

> (gdb) backtrace
> #0  0xb7f08348 in ___newselect_nocancel () at
> ../sysdeps/unix/syscall-template.S:82
> #1  0xb7b43f9f in readline_until_enter_or_signal (prompt=0xb7b578d0 ">>> ", signal=0xbfffed54)
> at /home/user1/python/Python-3.2.3/Modules/readline.c:987

As far as Python goes, you were at line 987 of readline.c in readline_until_enter_or_signal(prompt, signal), where prompt was ">>>". I would try again twice, hitting ^C once before and once after the first ^D. This should tell you where Python and the system are when ^D is received and does return to the prompt (as in the backtrace you got), and where Python went instead after receiving ^D, and where it is when it gets the second ^D and does return.

-- Terry Jan Reedy

From tjreedy at udel.edu Mon Jun 18 19:02:45 2012
From: tjreedy at udel.edu (Terry Reedy)
Date: Mon, 18 Jun 2012 13:02:45 -0400
Subject: [Python-Dev] CFFI released
In-Reply-To: References: 
Message-ID: 

On 6/18/2012 9:14 AM, Armin Rigo wrote:
> Hi all,
>
> We (=fijal and myself) finally released the beta-0.1 version of CFFI.
>
> http://cffi.readthedocs.org/
>
> It is a(nother) simple Foreign Function Interface for Python calling C
> code. I talked about it with a few python core people during the
> PyCon sprint; now it's done, with a pure Python part and a compact
> (but still 3000 lines) piece of C code. The goal is for it to be
> simple yet as complete as possible; it can be used in places where
> ctypes (say) is not applicable or only with platform-specific
> difficulties, e.g. to rewrite a "_curses" module in pure Python, or
> access the X libraries, etc.
>
> Of course I'm not going to suggest that it should be part of the
> standard library right now, but I do hope that over time, should it
> prove useful and used, I could come back and make such a suggestion.

Make cffi less buggy (check the tracker for new test cases ;-), faster (closer to swig type wrappers), and easier to use than ctypes, and I am sure there will be interest.
-- Terry Jan Reedy

From tjreedy at udel.edu Mon Jun 18 19:19:07 2012
From: tjreedy at udel.edu (Terry Reedy)
Date: Mon, 18 Jun 2012 13:19:07 -0400
Subject: [Python-Dev] Raw string syntax inconsistency
In-Reply-To: References: <4FDDF0B4.3020805@v.loewis.de> <4FDEC3CA.9070109@v.loewis.de>
Message-ID: 

On 6/18/2012 11:12 AM, Guido van Rossum wrote:
> Ok, banning ru"..." and ur"..." altogether is fine too (assuming it's
> fine with the originators of the PEP).

The original PEP never proposed ur or ru, only u/U.

It turns out that ur is problematical even in 2.x, as its meaning is changed by the future import. 2&3 code should skip the convenience of the r prefix and just use u and doubled \s. The PEP should probably say that.

-- Terry Jan Reedy

From pje at telecommunity.com Mon Jun 18 19:35:44 2012
From: pje at telecommunity.com (PJ Eby)
Date: Mon, 18 Jun 2012 13:35:44 -0400
Subject: [Python-Dev] PEP 362: 4th edition
In-Reply-To: <20120615210343.87279250031@webabinitio.net>
References: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com> <20120615210343.87279250031@webabinitio.net>
Message-ID: 

On Fri, Jun 15, 2012 at 5:03 PM, R. David Murray wrote:
> On Fri, 15 Jun 2012 22:48:42 +0200, Victor Stinner wrote:
>> 1. Should we keep 'Parameter.implemented' or not. *Please vote*
>
> -1 to implemented.
>
>> I still disagree with the deepcopy. I read somewhere that Python
>> developers are consenting adult. If someone really want to modify a
>> Signature, it would be nice to provide a simple method to copy it. But
>> I don't see why it should be copied *by default*. I expect that
>> modifying a signature is more rare than just reading a signature.
>
> The issue isn't "consenting adults", the issue is consistency.
> Without the deepcopy, sometimes what you get back from the
> inspect function is freely modifiable and sometimes it is not.
> That inconsistency is a bad thing.

Then just copy the signature itself; as currently written, this is going to copy the annotation objects, which could produce weird side-effects from introspection. Using deepcopy seems like overkill when all that's needed is a new Signature instance with a fresh OrderedDict.

Or, better yet: make signature and parameter objects immutable (along with the OrderedDict) and the whole problem of modification and copying goes away altogether. Or is there some reason not mentioned in the PEP why mutability is necessary? (The PEP provides no rationale at present for making any part of a signature mutable)

From tjreedy at udel.edu Mon Jun 18 20:05:02 2012
From: tjreedy at udel.edu (Terry Reedy)
Date: Mon, 18 Jun 2012 14:05:02 -0400
Subject: [Python-Dev] 3.3 beta in one week
In-Reply-To: References: <4FDED624.60801@v.loewis.de>
Message-ID: 

On 6/18/2012 11:23 AM, Brian Curtin wrote:
> On Mon, Jun 18, 2012 at 2:17 AM, "Martin v. Löwis" wrote:
>>> this is just a quick reminder that the feature freeze for 3.3
>>> will start next weekend with the release of beta1. Since I
>>> won't be able to shift that date for short periods (the next
>>> possible date for me would be around July 16), I hope that
>>> everybody has planned ahead accordingly.
>>
>> Expect some rushing of PEP 397 then.
>
> FYI: Martin requested that I be the PEP czar for 397, so the rush
> kicks off...now :)

Some people who have downloaded the standalone version have praised it and recommended it on python-list.
So I hope you include something for testing, even if details are changed later. It seems to solve real problems on Windows for many people.

A couple of comments on the PEP.

"Independent installations will always only overwrite newer versions of the launcher with older versions."

'always only' is a bit awkward and the sentence looks backwards to me. I would expect only overwriting older versions with newer versions.

---

These seem contradictory:

"The 32-bit distribution of Python will not install a 32-bit version of the launcher on a 64-bit system."

I presume this means that it will install the 64-bit version and that there will always be only one version of the launcher on the system.

"On 64bit Windows with both 32bit and 64bit implementations of the same (major.minor) Python version installed, the 64bit version will always be preferred. This will be true for both 32bit and 64bit implementations of the launcher - a 32bit launcher will prefer to execute a 64bit Python installation of the specified version if available."

This implies to me that the 32bit installation *will* install a 32bit launcher and that there could be both versions of the launcher installed.

-- Terry Jan Reedy

From yselivanov.ml at gmail.com Mon Jun 18 20:09:17 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Mon, 18 Jun 2012 14:09:17 -0400
Subject: [Python-Dev] PEP 362: 4th edition
In-Reply-To: References: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com> <20120615210343.87279250031@webabinitio.net>
Message-ID: 

On 2012-06-18, at 1:35 PM, PJ Eby wrote:
> Then just copy the signature itself; as currently written, this is going
> to copy the annotation objects, which could produce weird side-effects
> from introspection. Using deepcopy seems like overkill when all that's
> needed is a new Signature instance with a fresh OrderedDict.

That's an excerpt from Signature.__deepcopy__:

    cls = type(self)
    sig = cls.__new__(cls)
    sig.parameters = OrderedDict((name, param.__copy__())
                                 for name, param in self.parameters.items())

And Parameter.__copy__:

    cls = type(self)
    copy = cls.__new__(cls)
    copy.__dict__.update(self.__dict__)
    return copy

So we don't recursively deepcopy parameters in Signature.__deepcopy__ (I hope that we don't violate the deepcopy meaning here)

> Or, better yet: make signature and parameter objects immutable (along
> with the OrderedDict) and the whole problem of modification and copying
> goes away altogether. Or is there some reason not mentioned in the PEP
> why mutability is necessary? (The PEP provides no rationale at present
> for making any part of a signature mutable)

The rationale is that sometimes you need to modify signatures. For instance, in decorators.
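To illustrate, a rough sketch against the current draft API (the decorator name is mine; it assumes that signature() returns our own mutable copy and that it honors a '__signature__' attribute, as in the draft):

    import functools
    from inspect import signature

    def strip_first_parameter(func):
        # Advertise func's signature minus its first parameter.
        sig = signature(func)              # a private copy - safe to mutate
        first = next(iter(sig.parameters))
        del sig.parameters[first]

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            return func(None, *args, **kwargs)

        wrapper.__signature__ = sig
        return wrapper

Without the copy-by-default behavior, the 'del' above would corrupt the signature of the undecorated 'func'.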
- Yury

From fijall at gmail.com Mon Jun 18 21:10:41 2012
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Mon, 18 Jun 2012 21:10:41 +0200
Subject: [Python-Dev] CFFI released
In-Reply-To: References: 
Message-ID: 

On Mon, Jun 18, 2012 at 7:02 PM, Terry Reedy wrote:
> On 6/18/2012 9:14 AM, Armin Rigo wrote:
>> We (=fijal and myself) finally released the beta-0.1 version of CFFI.
>> [...]
>
> Make cffi less buggy (check the tracker for new test cases ;-), faster
> (closer to swig type wrappers), and easier to use than ctypes, and I am
> sure there will be interest.

I would say it's already fulfilling those three, but I suppose you should try for yourself.

Cheers, fijal

From fijall at gmail.com Mon Jun 18 21:31:27 2012
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Mon, 18 Jun 2012 21:31:27 +0200
Subject: [Python-Dev] Tunable parameters in dictobject.c (was dictnotes.txt out of date?)
In-Reply-To: <20120618170410.12fbf1a0@pitrou.net>
References: <5DC7D3C2-4BE4-4ECC-A037-7FC5E3244DA8@gmail.com> <4FD90819.4040305@hotpy.org> <4FDED6A9.1090209@v.loewis.de> <4FDF3B08.4070700@hotpy.org> <20120618170410.12fbf1a0@pitrou.net>
Message-ID: 

On Mon, Jun 18, 2012 at 5:04 PM, Antoine Pitrou wrote:
> On Mon, 18 Jun 2012 15:28:24 +0100 Mark Shannon wrote:
>>
>> But do they?
>> The results of benchmarking would seem to suggest (at least
>> on my test machine) that overly-sparse dicts are slower.
>> Possibly due to increased cache misses.
>
> Or, at least, they are not faster. See the synthetic experiments in
> http://bugs.python.org/issue10408
>
> That said, Raymond might have witnessed different results at the time.
> Hardware evolves quickly and the parameters change (memory latency
> today is at least 50+ CPU cycles, which is quite a lot of wasted work on
> a pipelined superscalar CPU).
>
> Regards
>
> Antoine.

More like 200-500 CPU cycles on modern CPUs.

From solipsis at pitrou.net Mon Jun 18 21:35:49 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Mon, 18 Jun 2012 21:35:49 +0200
Subject: [Python-Dev] Tunable parameters in dictobject.c (was dictnotes.txt out of date?)
In-Reply-To: References: <5DC7D3C2-4BE4-4ECC-A037-7FC5E3244DA8@gmail.com> <4FD90819.4040305@hotpy.org> <4FDED6A9.1090209@v.loewis.de> <4FDF3B08.4070700@hotpy.org> <20120618170410.12fbf1a0@pitrou.net>
Message-ID: <20120618213549.2b22bc81@pitrou.net>

On Mon, 18 Jun 2012 21:31:27 +0200 Maciej Fijalkowski wrote:
> On Mon, Jun 18, 2012 at 5:04 PM, Antoine Pitrou wrote:
>> On Mon, 18 Jun 2012 15:28:24 +0100 Mark Shannon wrote:
>>>
>>> But do they?
When a dictionary is less sparse, it has more collisions which means there are more cache misses. Resizing into a dictionary growing at 2x means that we're going from 2/3 full to 1/3 full and resizing at 4x means going from 2/3 full to 1/6 full. That latter will have fewer collisions (that said, one-third full is still very good). So, the performance is worse but not much worse. For dictionaries large enough to require multiple resizes, the 4x factor cuts the number of resizes in half and makes each resize faster (because of few collisions). Only the growth phase is affected though. It is more problematic for use cases such as caching where a dict is constantly deleting old entries to make space for new ones. Such a dict never reaches a steady-state because the dummy entries accumulate and trigger a resize. Under the 2x scheme this happens much more often. Under the 4x scheme, some dicts are left larger (more sparse) than they otherwise would be (i.e. formerly it grew 8, 32, 128, ...) and now it grows to (8, 16, 32, 64, ...). Some dicts will end-up the same dicts. Others might fit in 16 rather than 32. That decreases their sparsity, increases the number of collisions, and slows the lookup speed. The effect is not large though (the number of collisions between 1/4 full and 1/2 full is better but 1/2 is still pretty good). In the timings, I had done a few years ago, the results were that just about anything that increased the number of collisions or resizings would impact performance. I expect that effect will be accentuated on modern processors but I'll have to do updated tests to be sure. From a high-level view, I question efforts to desparsify dictionaries. When you have a bucket of water, the weight is in the water, not in the bucket itself. The actual keys and values dominate dict size unless you're reusing the same values over and over again. That said, the 4x growth factor was capped at 50,000. For larger dicts it fell back to 2x. Some the only dicts affected by the 2x vs 4x decision lie by in the 6 to 50k ranges. The only apps that see any noticeable difference in memory size are ones that have many dicts of that size range alive at the same time. Sorry I can make a more detailed post right now. I'll make time in the next couple of weeks to post some code and timings that document the collision counts, total memory size, and its affect on various dict use cases. Raymond -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin at v.loewis.de Mon Jun 18 23:03:16 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 18 Jun 2012 23:03:16 +0200 Subject: [Python-Dev] What's the best way to debug python3 source code? (for this bug: http://bugs.python.org/issue15068) In-Reply-To: <1340017844.52376.YahooMailClassic@web164606.mail.gq1.yahoo.com> References: <1340017844.52376.YahooMailClassic@web164606.mail.gq1.yahoo.com> Message-ID: <4FDF9794.20801@v.loewis.de> > But i can't get any clue which file to look at. I personally wouldn't use gdb but strace to establish what system calls are being made, and decide whether these system calls are correct or not. If you then think that some call is being made that shouldn't, set a breakpoint on the syscall function, and see when it gets hit. I'd also attach to a running Python interpreter instead of running python from gdb, since the interaction between gdb's stdin and python's stdin may be confusing. 
Regards, Martin From jimjjewett at gmail.com Mon Jun 18 23:06:50 2012 From: jimjjewett at gmail.com (Jim Jewett) Date: Mon, 18 Jun 2012 17:06:50 -0400 Subject: [Python-Dev] PEP 362: 4th edition In-Reply-To: References: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com> <4fdc03ef.8759320a.5112.6c67@mx.google.com> Message-ID: On Mon, Jun 18, 2012 at 10:37 AM, Yury Selivanov wrote: > Jim, > > On 2012-06-18, at 3:08 AM, Jim Jewett wrote: >> On Sat, Jun 16, 2012 at 11:27 AM, Nick Coghlan wrote: >>> On Sat, Jun 16, 2012 at 1:56 PM, Jim J. Jewett wrote: >>>> ? ?Instead of defining a BoundArguments class, just return >>>> ? a copy ?of the Signature, with "value" attributes added to >>>> ? the Parameters. >>> No, the "BoundArguments" class is designed to be easy to >>> feed to a function call as f(*args, **kwds) >> Why does that take a full class, as opposed to a method returning a >> tuple and a dict? > Read this thread, please: http://mail.python.org/pipermail/python-dev/2012-June/120000.html I reread that. I still don't see why it needs to be an instance of a specific independent class, as opposed to a Signature method that returns a (tuple of) a tuple and a dict. ((arg1, arg2, arg3...), {key1: val2, key2: val2}) >>>> ? ?Use subclasses to distinguish the parameter kind. >>> Please, no, using subclasses when there is no behavioural >>> change is annoying. [Examples of how the "kinds" of parameters are qualitatively different.] >> A **kwargs argument is very different from an ordinary parameter. ?Its >> name doesn't matter (and therefore should not be considered in >> __eq__), > The importance of its name depends hugely on the use context. ?In some > it may be very important. The name of kwargs can only be for documentation purposes. Like an annotation or a docstring, it won't affect the success of an attempted call. Annotations are kept because (often) their entire purpose is to document the signature. But docstrings are being dropped, because they often serve other purposes. I've had far more use for docstrings than for the names of positional-only parameters. (In fact, knowing the name of a positional-only parameter has sometimes been an attractive nuisance.) > And it is treated specially, along with the *args. Right -- but this was in response to Nick's claim that the distinctions should not be represented as a subclass, because the behavior wasn't different. I consider different __eq__ implementations or formatting concers to be sufficient on their own; I also consider different possible use locations and counts, different used-by-the-system attributes (name), or different value types (object vs collection) to be sufficiently behavioral. >>>>> A Signature object has the following public attributes and methods: >> The more I try to work with it, the more I want direct references to >> the two special arguments (*args, **kwargs) if they exist. ?FWIW, the >> current bind logic to find them -- particularly kwargs -- seems >> contorted, compared to self.kwargsparameter. > Well, 'self.kwargsparameter' ?will break 'self.parameters' collection, > unless you want one parameter to be in two places. Correct; it should be redundant. Signature.kwargsparameter should be the same object that occurs as the nth element of Signature.parameters.values(). It is just more convenient to retrieve the parameter directly than it is to iterate through a collection inspecting each element for the value of a specific attribute. 
> In fact, the check types example (in the PEP) is currently shorter and > easier to read with 'Signature.parameters' than with dedicated property > for '**kwargs' parameter. Agreed; the short-cuts *args and **kwargs are only useful because they are special; they aren't needed when you're doing the same thing to all parameters regardless of type. > And if after all you need direct references to *args or **kwargs - write > a little helper, which finds them in 'Signature.parameters'. Looking at http://bugs.python.org/review/15008/diff/5143/Lib/inspect.py you already need one in _bind; it is just that saving the info when you pass it isn't too bad if you're already iterating through the whole collection anyhow. >> ?Also, >> positional_only, *args, and **kwargs should be able to remove name >> from the list of compared attributes. > I still believe in the most contexts the name of a parameter matters > (even if it's **kwargs). ?Besides, how can we make __eq__ to be > configurable? __eq__ can can an _eq_fields attribute to see which other attributes matter -- but it makes more sense for that to be (sub-) class property. -jJ From steve at pearwood.info Mon Jun 18 23:08:21 2012 From: steve at pearwood.info (Steven D'Aprano) Date: Tue, 19 Jun 2012 07:08:21 +1000 Subject: [Python-Dev] Tunable parameters in dictobject.c (was dictnotes.txt out of date?) In-Reply-To: <8423854D-5ECD-4061-8CA8-50EA41CB1D51@gmail.com> References: <5DC7D3C2-4BE4-4ECC-A037-7FC5E3244DA8@gmail.com> <4FD90819.4040305@hotpy.org> <4FDED6A9.1090209@v.loewis.de> <4FDF3B08.4070700@hotpy.org> <20120618170410.12fbf1a0@pitrou.net> <20120618213549.2b22bc81@pitrou.net> <8423854D-5ECD-4061-8CA8-50EA41CB1D51@gmail.com> Message-ID: <4FDF98C5.4000408@pearwood.info> Raymond Hettinger wrote: > Sorry I can make a more detailed post right now. I'll make time in > the next couple of weeks to post some code and timings that > document the collision counts, total memory size, and its affect > on various dict use cases. Is there some way to instrument dictionary sparseness, number of hits and misses, etc. from Python? A secret command-line switch, perhaps, or a compile-time option? And if there isn't, perhaps there should be. -- Steven From solipsis at pitrou.net Mon Jun 18 23:11:42 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 18 Jun 2012 23:11:42 +0200 Subject: [Python-Dev] Tunable parameters in dictobject.c (was dictnotes.txt out of date?) In-Reply-To: <8423854D-5ECD-4061-8CA8-50EA41CB1D51@gmail.com> References: <5DC7D3C2-4BE4-4ECC-A037-7FC5E3244DA8@gmail.com> <4FD90819.4040305@hotpy.org> <4FDED6A9.1090209@v.loewis.de> <4FDF3B08.4070700@hotpy.org> <20120618170410.12fbf1a0@pitrou.net> <20120618213549.2b22bc81@pitrou.net> <8423854D-5ECD-4061-8CA8-50EA41CB1D51@gmail.com> Message-ID: <20120618231142.3f4a389f@pitrou.net> On Mon, 18 Jun 2012 13:46:25 -0700 Raymond Hettinger wrote: > > On Jun 18, 2012, at 12:35 PM, Antoine Pitrou wrote: > > > You are right. I was thinking 50 nanoseconds (which for a - relatively > > high-end - 3GHz CPU puts us at 150 cycles). > > The last guidance I read from Intel said that > a cache miss was roughly as expensive as a floating-point divide. Floating-point divides are not performance-critical in most Python code, so the comparison is not very useful. Besides, this statement does not state which kind of cache would be involved in the cache miss. On recent Intel CPUs a floating-point divide takes between 10 and 24 cycles (*), which could be in line with a L3 cache access (i.e. 
a L2 cache miss), but certainly not a main memory access (150+ cycles). (*) according to http://www.agner.org/optimize/instruction_tables.pdf (also, modern CPUs have automatic prefetchers which try to hide the latency of accessing main memory, but AFAIK this only really helps with large streaming accesses) > When a dictionary is less sparse, it has more collisions > which means there are more cache misses. Only in the eventuality that a single dict access is done (or a very small number of them compared to the number of stored elements). But if you are doing many accesses with good temporal locality, then the sparse dict's bigger size implies a bigger cache footprint and therefore a lesser efficiency - not only for the dict accesses themselves, but for the rest of the code since more data will get evicted to make place for the dict. Furthermore, total memory consumption can be important regardless of execution time. RAM is cheap but VMs are common where memory is much smaller than on entire systems. > For dictionaries large enough to require multiple resizes, > the 4x factor cuts the number of resizes in half and makes > each resize faster (because of few collisions). Only the > growth phase is affected though. Indeed. > From a high-level view, I question efforts to desparsify dictionaries. > When you have a bucket of water, the weight is in the water, not > in the bucket itself. But caches store whole cache lines, not individual bytes, so you do care about the bucket's size. With 16-byte or 24-byte dict entries, and 64-byte cache lines, it is easy to see that a sparse dict could result in a significant waste of the cache lines' storage if e.g. only 1 of 3 entries were used (and used entries were distributed in a regular manner). > The actual keys and values dominate dict size > unless you're reusing the same values over and over again. It certainly depends a lot on the use case. Regards Antoine. From arigo at tunes.org Mon Jun 18 23:29:12 2012 From: arigo at tunes.org (Armin Rigo) Date: Mon, 18 Jun 2012 23:29:12 +0200 Subject: [Python-Dev] CFFI released In-Reply-To: References: Message-ID: Hi, On Mon, Jun 18, 2012 at 9:10 PM, Maciej Fijalkowski wrote: >> Make cffi less buggy (check the tracker for new test cases ;-), faster >> (closer to swig type wrappers), and easier to use than ctypes, and I am sure >> there will be interest. > > I would say it's already fulfilling those three, but I suppose you should > try for yourself. I don't think the first is fulfilled so far, as we found various issues on various Linux and non-Linux platforms (and fixed them, so I suppose that release 0.1.1 is coming soon). But I agree with Fijal about speed and ease of use. Like SWIG it generates wrappers in the form of a CPython C extension with built-in functions, so I suppose the performance is similar to SWIG and not ctypes. Well, SWIG wrappers typically have custom logic written in C, whereas in cffi this logic is typically written as Python code, so I suppose it ends up being slower (on CPython; on PyPy small wrapping functions are inlined and have little cost). But the same argument can be pushed further to "why did you use a slow language like Python to write your app in the first place", which I hope we agree is bogus :-) A bient?t, Armin. From martin at v.loewis.de Mon Jun 18 23:55:54 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 18 Jun 2012 23:55:54 +0200 Subject: [Python-Dev] Tunable parameters in dictobject.c (was dictnotes.txt out of date?) 
In-Reply-To: <4FDF98C5.4000408@pearwood.info>
References: <5DC7D3C2-4BE4-4ECC-A037-7FC5E3244DA8@gmail.com>
	<4FD90819.4040305@hotpy.org> <4FDED6A9.1090209@v.loewis.de>
	<4FDF3B08.4070700@hotpy.org> <20120618170410.12fbf1a0@pitrou.net>
	<20120618213549.2b22bc81@pitrou.net>
	<8423854D-5ECD-4061-8CA8-50EA41CB1D51@gmail.com>
	<4FDF98C5.4000408@pearwood.info>
Message-ID: <4FDFA3EA.4070105@v.loewis.de>

On 18.06.2012 23:08, Steven D'Aprano wrote:
> Raymond Hettinger wrote:
>
>> Sorry I can't make a more detailed post right now. I'll make time in
>> the next couple of weeks to post some code and timings that
>> document the collision counts, total memory size, and their effect
>> on various dict use cases.
>
> Is there some way to instrument dictionary sparseness, number of hits
> and misses, etc. from Python?
>
> A secret command-line switch, perhaps, or a compile-time option?

Not that I know of, no.

> And if there isn't, perhaps there should be.

If so, only compile time options could be acceptable. However, in my
experience with profiling, the specific statistics that you want to
obtain are rarely known in advance, so you have to write specific
instrumentation every time you want to do an experiment - and then the
instrumentation is only good for that single experiment. Thus, nobody
publishes the instrumentation, since it would accumulate as clutter.

Regards,
Martin

From yselivanov.ml at gmail.com  Tue Jun 19 00:54:54 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Mon, 18 Jun 2012 18:54:54 -0400
Subject: [Python-Dev] PEP 362: 4th edition
In-Reply-To:
References: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com>
	<4fdc03ef.8759320a.5112.6c67@mx.google.com>
Message-ID: <0142FA9D-C413-4874-A8BD-7639F1A0621A@gmail.com>

Jim,

On 2012-06-18, at 5:06 PM, Jim Jewett wrote:
> On Mon, Jun 18, 2012 at 10:37 AM, Yury Selivanov
> wrote:
>> Jim,
>>
>> On 2012-06-18, at 3:08 AM, Jim Jewett wrote:
>>> On Sat, Jun 16, 2012 at 11:27 AM, Nick Coghlan wrote:
>>>> On Sat, Jun 16, 2012 at 1:56 PM, Jim J. Jewett wrote:
>
>>>>> Instead of defining a BoundArguments class, just return
>>>>> a copy of the Signature, with "value" attributes added to
>>>>> the Parameters.
>
>>>> No, the "BoundArguments" class is designed to be easy to
>>>> feed to a function call as f(*args, **kwds)
>
>>> Why does that take a full class, as opposed to a method returning a
>>> tuple and a dict?
>
>> Read this thread, please:
>> http://mail.python.org/pipermail/python-dev/2012-June/120000.html
>
> I reread that.  I still don't see why it needs to be an instance of a
> specific independent class, as opposed to a Signature method that
> returns a (tuple of) a tuple and a dict.
>
> ((arg1, arg2, arg3...), {key1: val2, key2: val2})

I'll try to explain with the code:

    def foo(a, *args, b, **kwargs):
        pass

    sig = signature(foo)

* Case one.  We have BoundArguments:

    ba = sig.bind(1, 2, 3, b=123, c='foo', d='bar')

Now, in 'ba.arguments':

    {'a': 1, 'args': (2, 3), 'b': 123, 'kwargs': {'c': 'foo', 'd': 'bar'}}

It's easy to work with 'ba.arguments', just traverse it as follows:

    for arg_name, arg_value in ba.arguments.items():
        param = sig.parameters[arg_name]

So you have argument name and value, and the corresponding parameter.

* Case two.  We return a tuple and a dict:

    ba = sig.bind(1, 2, 3, b=123, c='foo', d='bar')
    ((1, 2, 3), {'b': 123, 'c': 'foo', 'd': 'bar'})

Now, how are you going to work with that?  How will you map those
values to the corresponding parameters?  The whole point of having
'Signature.bind()' is to provide you that mapping.
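For concreteness, a runnable sketch of that round trip, assuming the
inspect.signature() API described in the PEP ('ba.arguments' is a
mapping, so it is traversed with .items()):

    from inspect import signature

    def foo(a, *args, b, **kwargs):
        pass

    sig = signature(foo)
    ba = sig.bind(1, 2, 3, b=123, c='foo', d='bar')

    # every bound argument can be matched to its Parameter...
    for name, value in ba.arguments.items():
        print(name, sig.parameters[name].kind, value)

    # ...and the binding can be replayed against the function itself:
    foo(*ba.args, **ba.kwargs)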
In the link I gave you, I also explained why we need
'BoundArguments.args' and 'BoundArguments.kwargs'.

>>>>> Use subclasses to distinguish the parameter kind.
>
>>>> Please, no, using subclasses when there is no behavioural
>>>> change is annoying.
>
> [Examples of how the "kinds" of parameters are qualitatively different.]
>
>>> A **kwargs argument is very different from an ordinary parameter.  Its
>>> name doesn't matter (and therefore should not be considered in
>>> __eq__),
>
>> The importance of its name depends hugely on the use context.  In some
>> it may be very important.
>
> The name of kwargs can only be for documentation purposes.

Not really.  I once saw code where the types of acceptable values were
encoded in parameter names (after '__', like 'kwargs__int').  This is a
rather extreme example, but it illustrates that names may be important.

> Like an
> annotation or a docstring, it won't affect the success of an attempted
> call.

It very well may affect it.  For instance, I use annotations to specify
argument types in RPC dispatch.  The call will not be dispatched in
case of a wrong type.  So 'foo(a:int)' in my context is not equal to
'foo(a:str)'.

>> And it is treated specially, along with the *args.
>
> Right -- but this was in response to Nick's claim that the
> distinctions should not be represented as a subclass, because the
> behavior wasn't different.

Yes, it's behaviour of the outer code (Signature) that is different,
not the Parameter instances.  It's Signature who makes a decision of
how to map parameters, not parameters themselves.

> I consider different __eq__ implementations or formatting concerns to
> be sufficient on their own; I also consider different possible use
> locations and counts, different used-by-the-system attributes (name),
> or different value types (object vs collection) to be sufficiently
> behavioral.

>>>>>> A Signature object has the following public attributes and methods:
>
>>> The more I try to work with it, the more I want direct references to
>>> the two special arguments (*args, **kwargs) if they exist.  FWIW, the
>>> current bind logic to find them -- particularly kwargs -- seems
>>> contorted, compared to self.kwargsparameter.
>
>> Well, 'self.kwargsparameter' will break 'self.parameters' collection,
>> unless you want one parameter to be in two places.
>
> Correct; it should be redundant.  Signature.kwargsparameter should be
> the same object that occurs as the nth element of
> Signature.parameters.values().

It will always be the last parameter (if specified at all).

> It is just more convenient to retrieve
> the parameter directly than it is to iterate through a collection
> inspecting each element for the value of a specific attribute.

Again, depends on the use case.  Can you show a realistic use case that
needs that?  (The use case should also be common and widespread, i.e.
it should be worth uglifying the Signature class structure.)

>> In fact, the check types example (in the PEP) is currently shorter and
>> easier to read with 'Signature.parameters' than with dedicated property
>> for '**kwargs' parameter.
>
> Agreed; the short-cuts *args and **kwargs are only useful because they
> are special; they aren't needed when you're doing the same thing to
> all parameters regardless of type.
>
>> And if after all you need direct references to *args or **kwargs - write
>> a little helper, which finds them in 'Signature.parameters'.
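For illustration, such a helper stays short - a sketch against
'Signature.parameters' (the helper name is hypothetical, not part of
the PEP):

    from inspect import Parameter

    def find_var_keyword(sig):
        # Scan the ordered parameter mapping for the **kwargs
        # parameter, if the signature has one.
        for param in sig.parameters.values():
            if param.kind == Parameter.VAR_KEYWORD:
                return param
        return None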
> Looking at http://bugs.python.org/review/15008/diff/5143/Lib/inspect.py
> you already need one in _bind; it is just that saving the info when
> you pass it isn't too bad if you're already iterating through the
> whole collection anyhow.

Having a 'Signature.kwargs_param' would save just 2 lines of code
(1736:'kwargs_param = None', and 1745:'kwargs_param = param') out of
120.  And that, BTW, is a pretty complex piece of code.  Again, it all
depends on the use-case.

>>> Also,
>>> positional_only, *args, and **kwargs should be able to remove name
>>> from the list of compared attributes.
>
>> I still believe that in most contexts the name of a parameter matters
>> (even if it's **kwargs).  Besides, how can we make __eq__
>> configurable?
>
> __eq__ can check an _eq_fields attribute to see which other attributes
> matter -- but it makes more sense for that to be a (sub-) class
> property.

Well we can make __eq__ customizable, but I'm afraid it will just
complicate things too much.  It should take you 10-20 lines of code to
implement any comparison algorithm you want - why try to predict all of
them and put them in the stdlib?

Thank you,

From yselivanov.ml at gmail.com  Tue Jun 19 01:10:33 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Mon, 18 Jun 2012 19:10:33 -0400
Subject: [Python-Dev] PEP 362: 4th edition
In-Reply-To:
References: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com>
	<20120615210343.87279250031@webabinitio.net>
Message-ID: <2F008309-9E9E-467C-A2DC-3AE04792D51C@gmail.com>

On 2012-06-18, at 4:25 PM, Guido van Rossum wrote:
> On Mon, Jun 18, 2012 at 11:09 AM, Yury Selivanov
> wrote:
>> The rationale is that sometimes you need to modify signatures.
>> For instance, in decorators.
>
> A decorator should make a modified copy, not modify it in place (since
> the signature of the decorated function does not change, and you have
> no guarantee that that function is no longer accessible.)

It seems that we have the following options for 'signature(obj)':

1. If 'obj' has a '__signature__' attribute - return a copy of it,
   if not - create a new one.

2. If 'obj' has a '__signature__' attribute - return it,
   if not - create a new one.

3. Same as '2', but Signature is also immutable.

The first option is the one currently implemented.  Its advantage
is consistency - we always have a Signature we can safely modify.

The second option has a design flaw - sometimes the result Signature
is safe to modify, sometimes not, you never know.

The third option is hard to work with.
Instead of:

    sig = signature(wrapper)
    sig.parameters.popitem(last=False)
    decorator.__signature__ = sig

We will have (because Signature is immutable):

    sig = signature(wrapper)
    params = OrderedDict(sig.parameters.items())
    params.popitem(last=False)

    attrs = {'parameters': params}
    try:
        ra = sig.return_annotation
    except AttributeError:
        pass
    else:
        attrs['return_annotation'] = ra

    decorator.__signature__ = Signature.from_attrs(**attrs)

It looks like total overkill (unless we can come up with a nicer
API).

So it was decided to go with the first option, as it has the least
complications.  Plus, the copying itself should be fast, as
Signatures contain little information.

What do you think?

- Yury
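For illustration, the decorator workflow that option 1 enables - a
minimal sketch assuming the draft implementation from issue 15008,
where signature() returns a safe-to-modify copy and 'parameters' is a
mutable OrderedDict (the final API may differ):

    from functools import wraps
    from inspect import signature

    def hide_first_parameter(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            return func(None, *args, **kwargs)
        sig = signature(func)                # a fresh, modifiable copy
        sig.parameters.popitem(last=False)   # drop the first parameter
        wrapper.__signature__ = sig
        return wrapper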
From ncoghlan at gmail.com  Tue Jun 19 03:22:37 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 19 Jun 2012 11:22:37 +1000
Subject: [Python-Dev] PEP 362: 4th edition
In-Reply-To:
References: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com>
	<4fdc03ef.8759320a.5112.6c67@mx.google.com>
Message-ID:

On Mon, Jun 18, 2012 at 5:08 PM, Jim Jewett wrote:
> On Sat, Jun 16, 2012 at 11:27 AM, Nick Coghlan wrote:
>> No. This is the full set of binding behaviours. "self" is just an
>> ordinary "POSITIONAL_OR_KEYWORD" argument (or POSITIONAL_ONLY, in some
>> builtin cases).
>
> Or no longer a "parameter" at all, once the method is bound.  Except
> it sort of still is.  Same for the space parameter in PyPy.  I don't
> expect the stdlib implementation to support them initially, but I
> don't want it to get in the way, either.  A supposedly closed set gets
> in the way.

It's not supposedly closed, it *is* closed: Python doesn't support any
other ways of binding arguments to parameters.

Now, you can have additional state on a callable that gets used by
that callable (such as __closure__ and __globals__ on a function, or
__self__ on a method, or arbitrary state on an object that implements
__call__) but that extra state is not part of the call *signature*,
and thus should not be exposed on the result of
inspect.getsignature().

Remember, this object is not meant to be a representation of the full
state of a callable, it's solely about providing a consistent
introspection mechanism that allows arbitrary callables to define how
they will bind arguments to parameters.  That's why I keep pointing out
that there will always need to be a higher level object that brings in
other related information, such as the docstring, the name of the
callable, etc.  This is not an API that describes *everything* that is
worth knowing about an arbitrary callable, nor is it intended to be.

I believe you have raised a legitimate question regarding whether or
not there is sufficient variation in behaviour amongst the parameter
kinds for it to be worth using a subclassing approach to override
__str__ and __eq__, compared to just implementing a few "kind" checks
in the base implementation.  However, given that the possible binding
behaviours (positional only, keyword-or-positional, excess positional,
keyword-only, excess keywords) *are* a closed set, I think a subclass
based solution is overkill and adds excessive complexity to the public
API.

Cheers,
Nick.

P.S. A more complete response to the question of what constitutes
"suitable complexity" for this API.
Consider the following implementation sketch for a kind based
implementation that pushes as much behaviour as is reasonable into the
Parameter object (including the definition of equality and decisions on
how the parameter should be displayed):

    _sentinel = object()

    class Parameter:
        def __init__(self, name, kind, default=_sentinel, annotation=_sentinel):
            if not name:
                raise ValueError("All parameters must be named for "
                                 "introspection purposes (even "
                                 "positional-only parameters)")
            self.name = name
            if kind not in Parameter.KINDS:
                raise ValueError("Unrecognised parameter binding type {}".format(kind))
            self.kind = kind
            if default is not _sentinel:
                if kind.startswith("VAR"):
                    raise ValueError("Cannot specify default value for {} parameter".format(kind))
                self.default = default
            if annotation is not _sentinel:
                self.annotation = annotation

        def _get_key(self):
            default = getattr(self, "default", _sentinel)
            annotation = getattr(self, "annotation", _sentinel)
            if self.kind in (Parameter.KEYWORD_OR_POSITIONAL, Parameter.KEYWORD_ONLY):
                # The name is considered significant for parameters that
                # can be specified as keyword arguments
                return (self.name, self.kind, default, annotation)
            # Otherwise, we don't really care about the name
            return (self.kind, default, annotation)

        def __eq__(self, other):
            if not isinstance(other, Parameter):
                return NotImplemented
            return self._get_key() == other._get_key()

        def __str__(self):
            kind = self.kind
            components = []
            if kind == Parameter.VAR_POSITIONAL:
                components += ["*"]
            elif kind == Parameter.VAR_KEYWORD:
                components += ["**"]
            if kind == Parameter.POSITIONAL_ONLY:
                components += ["<", self.name, ">"]
            else:
                components += [self.name]
            try:
                default = self.default
            except AttributeError:
                pass
            else:
                components += ["=", repr(default)]
            try:
                annotation = self.annotation
            except AttributeError:
                pass
            else:
                components += [":", repr(annotation)]
            return "".join(components)

The points of variation:
- VAR_POSITIONAL and VAR_KEYWORD do not permit a "default" attribute
- VAR_POSITIONAL adds a "*" before the name when printed out
- VAR_KEYWORD adds a "**" before the name when printed out
- POSITIONAL_ONLY adds "<>" around the name when printed out
- all three of those ignore "name" for comparisons

Now, suppose we dispense with the kind attribute and use subclassing
instead:

    _sentinel = object()

    class _ParameterBase:
        """Common behaviour for all parameter types"""
        def __init__(self, name, default=_sentinel, annotation=_sentinel):
            if not name:
                raise ValueError("All parameters must be named for "
                                 "introspection purposes")
            self.name = name
            if default is not _sentinel:
                self.default = default
            if annotation is not _sentinel:
                self.annotation = annotation

        def _get_key(self):
            default = getattr(self, "default", _sentinel)
            annotation = getattr(self, "annotation", _sentinel)
            return (self.name, default, annotation)

        def __eq__(self, other):
            if not isinstance(other, type(self)):
                return NotImplemented
            return self._get_key() == other._get_key()

        def __str__(self):
            components = [self.name]
            try:
                default = self.default
            except AttributeError:
                pass
            else:
                components += ["=", repr(default)]
            try:
                annotation = self.annotation
            except AttributeError:
                pass
            else:
                components += [":", repr(annotation)]
            return "".join(components)

    class Parameter(_ParameterBase):
        """Representation of a normal Python function parameter"""

    class KeywordOnlyParameter(_ParameterBase):
        """Representation of a keyword-only Python function parameter"""

    class PositionalOnlyParameter(_ParameterBase):
        """Representation of a positional-only parameter"""
        def __init__(self, index, name=None, default=_sentinel, annotation=_sentinel):
            if name is not None:
                display_name = "<{}:{}>".format(index, name)
            else:
                display_name = "<{}>".format(index)
            super().__init__(display_name, default, annotation)

        def _get_key(self):
            default = getattr(self, "default", _sentinel)
            annotation = getattr(self, "annotation", _sentinel)
            return (default, annotation)

    class _VarParameterBase(_ParameterBase):
        """Common behaviour for variable argument parameters"""

    class VarPositionalParameter(_VarParameterBase):
        """Representation of a parameter for excess positional arguments"""
        def __str__(self):
            return "*" + super().__str__()

    class VarKeywordParameter(_VarParameterBase):
        """Representation of a parameter for excess keyword arguments"""
        def __str__(self):
            return "**" + super().__str__()

Cheers,
Nick.

-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From ethan at stoneleaf.us  Tue Jun 19 03:17:57 2012
From: ethan at stoneleaf.us (Ethan Furman)
Date: Mon, 18 Jun 2012 18:17:57 -0700
Subject: [Python-Dev] PEP 362: 4th edition
In-Reply-To: <2F008309-9E9E-467C-A2DC-3AE04792D51C@gmail.com>
References: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com>
	<20120615210343.87279250031@webabinitio.net>
	<2F008309-9E9E-467C-A2DC-3AE04792D51C@gmail.com>
Message-ID: <4FDFD345.2090706@stoneleaf.us>

Yury Selivanov wrote:
> On 2012-06-18, at 4:25 PM, Guido van Rossum wrote:
>
>> On Mon, Jun 18, 2012 at 11:09 AM, Yury Selivanov
>> wrote:
>>> The rationale is that sometimes you need to modify signatures.
>>> For instance, in decorators.
>> A decorator should make a modified copy, not modify it in place (since
>> the signature of the decorated function does not change, and you have
>> no guarantee that that function is no longer accessible.)
>
> It seems that we have the following options for 'signature(obj)':
>
> 1. If 'obj' has a '__signature__' attribute - return a copy of it,
>    if not - create a new one.
>
> 2. If 'obj' has a '__signature__' attribute - return it,
>    if not - create a new one.
>
> 3. Same as '2', but Signature is also immutable.
>
> The first option is the one currently implemented.  Its advantage
> is consistency - we always have a Signature we can safely modify.
>
> The second option has a design flaw - sometimes the result Signature
> is safe to modify, sometimes not, you never know.
>
> The third option is hard to work with.
> Instead of:
>
>     sig = signature(wrapper)
>     sig.parameters.popitem(last=False)
>     decorator.__signature__ = sig
>
> We will have (because Signature is immutable):
>
>     sig = signature(wrapper)
>     params = OrderedDict(sig.parameters.items())
>     params.popitem(last=False)
>
>     attrs = {'parameters': params}
>     try:
>         ra = sig.return_annotation
>     except AttributeError:
>         pass
>     else:
>         attrs['return_annotation'] = ra
>
>     decorator.__signature__ = Signature.from_attrs(**attrs)
>
> It looks like total overkill (unless we can come up with a nicer
> API).
>
> So it was decided to go with the first option, as it has the least
> complications.  Plus, the copying itself should be fast, as
> Signatures contain little information.
>
> What do you think?

Option 1 makes sense to me -- we already know we'll have cases where we
want to modify a given signature, so why make it hard on ourselves?
~Ethan~ From ncoghlan at gmail.com Tue Jun 19 03:29:56 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 19 Jun 2012 11:29:56 +1000 Subject: [Python-Dev] PEP 362: 4th edition In-Reply-To: References: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com> <4fdc03ef.8759320a.5112.6c67@mx.google.com> Message-ID: On Tue, Jun 19, 2012 at 7:06 AM, Jim Jewett wrote: > Correct; it should be redundant. ?Signature.kwargsparameter should be > the same object that occurs as the nth element of > Signature.parameters.values(). ?It is just more convenient to retrieve > the parameter directly than it is to iterate through a collection > inspecting each element for the value of a specific attribute. I suspect in 3.4 we will add the following additional convenience properties: Signature.positional -> list[Parameter] List of POSITIONAL_ONLY and KEYWORD_OR_POSITIONAL parameters Signature.var_positional -> None or Parameter Reference to the VAR_POSITIONAL parameter, if any Signature.keyword -> dict{name:Parameter} Mapping of all KEYWORD_ONLY and KEYWORD_OR_POSITIONAL parameters Signature.var_keyword -> None or Parameter Reference to the VAR_KEYWORD parameter, if any However, I don't think we should add such convenience properties *right now*. One step at a time. > __eq__ can can an _eq_fields attribute to see which other attributes > matter -- but it makes more sense for that to be (sub-) class > property. Only if you accept the premise that there are other possible parameter binding behaviours beyond the five already defined. Hypergeneralisation is a great way to make an API far more complex than it needs to be. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From yselivanov.ml at gmail.com Tue Jun 19 03:33:54 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Mon, 18 Jun 2012 21:33:54 -0400 Subject: [Python-Dev] PEP 362: 4th edition In-Reply-To: References: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com> <4fdc03ef.8759320a.5112.6c67@mx.google.com> Message-ID: <7DE182F1-7D07-4D56-A273-0761AA61BD2B@gmail.com> On 2012-06-18, at 9:29 PM, Nick Coghlan wrote: > On Tue, Jun 19, 2012 at 7:06 AM, Jim Jewett wrote: >> Correct; it should be redundant. Signature.kwargsparameter should be >> the same object that occurs as the nth element of >> Signature.parameters.values(). It is just more convenient to retrieve >> the parameter directly than it is to iterate through a collection >> inspecting each element for the value of a specific attribute. > > I suspect in 3.4 we will add the following additional convenience properties: > > Signature.positional -> list[Parameter] > List of POSITIONAL_ONLY and KEYWORD_OR_POSITIONAL parameters > Signature.var_positional -> None or Parameter > Reference to the VAR_POSITIONAL parameter, if any > Signature.keyword -> dict{name:Parameter} > Mapping of all KEYWORD_ONLY and KEYWORD_OR_POSITIONAL parameters > Signature.var_keyword -> None or Parameter > Reference to the VAR_KEYWORD parameter, if any Maybe. But I'd suggest to avoid the intersection of 'Signature.positional' and 'Signature.keyword'. Better to have 'Signature.keywordonly'. > However, I don't think we should add such convenience properties > *right now*. One step at a time. +1. 'Signature.parameters' seems to be enough right now. 
- Yury

From ncoghlan at gmail.com  Tue Jun 19 03:36:20 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 19 Jun 2012 11:36:20 +1000
Subject: [Python-Dev] PEP 362: 4th edition
In-Reply-To:
References: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com>
	<20120615210343.87279250031@webabinitio.net>
Message-ID:

On Tue, Jun 19, 2012 at 4:09 AM, Yury Selivanov wrote:
> On 2012-06-18, at 1:35 PM, PJ Eby wrote:
>> Then just copy the signature itself; as currently written, this is
>> going to copy the annotation objects, which could produce weird
>> side-effects from introspection.  Using deepcopy seems like overkill
>> when all that's needed is a new Signature instance with a fresh
>> OrderedDict.
>
> That's an excerpt from Signature.__deepcopy__:
>
>     cls = type(self)
>     sig = cls.__new__(cls)
>     sig.parameters = OrderedDict((name, param.__copy__()) \
>                           for name, param in self.parameters.items())
>
> And Parameter.__copy__:
>
>     cls = type(self)
>     copy = cls.__new__(cls)
>     copy.__dict__.update(self.__dict__)
>     return copy
>
> So we don't recursively deepcopy parameters in Signature.__deepcopy__
> (I hope that we don't violate the deepcopy meaning here)

In my opinion, it's better to redefine what you mean by a shallow copy
(making it a bit deeper than just the direct attributes) rather than
making a so-called deep copy shallower.

So keep the current copying semantics for Signature objects (i.e.
creating new copies of the Parameter objects as well), but call it a
shallow copy rather than a deep copy.  Make it clear in the
documentation that any defaults and annotations are still shared with
the underlying callable.

Cheers,
Nick.

-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From steve at pearwood.info  Tue Jun 19 03:50:49 2012
From: steve at pearwood.info (Steven D'Aprano)
Date: Tue, 19 Jun 2012 11:50:49 +1000
Subject: [Python-Dev] PEP 362: 4th edition
In-Reply-To: <2F008309-9E9E-467C-A2DC-3AE04792D51C@gmail.com>
References: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com>
	<20120615210343.87279250031@webabinitio.net>
	<2F008309-9E9E-467C-A2DC-3AE04792D51C@gmail.com>
Message-ID: <20120619015048.GA14418@ando>

On Mon, Jun 18, 2012 at 07:10:33PM -0400, Yury Selivanov wrote:

> It seems that we have the following options for 'signature(obj)':
>
> 1. If 'obj' has a '__signature__' attribute - return a copy of it,
>    if not - create a new one.
>
> 2. If 'obj' has a '__signature__' attribute - return it,
>    if not - create a new one.
>
> 3. Same as '2', but Signature is also immutable.

There's a slight ambiguity there.  Do you mean, create a __signature__
attribute, or just create a new Signature instance?

I presume you mean the latter, a Signature instance, and not cache it in
__signature__.

-- Steven

From yselivanov.ml at gmail.com  Tue Jun 19 03:57:38 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Mon, 18 Jun 2012 21:57:38 -0400
Subject: [Python-Dev] PEP 362: 4th edition
In-Reply-To: <20120619015048.GA14418@ando>
References: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com>
	<20120615210343.87279250031@webabinitio.net>
	<2F008309-9E9E-467C-A2DC-3AE04792D51C@gmail.com>
	<20120619015048.GA14418@ando>
Message-ID: <4F2A5A0F-6BD2-4DA3-9BA9-D05AB6A361F6@gmail.com>

On 2012-06-18, at 9:50 PM, Steven D'Aprano wrote:
> On Mon, Jun 18, 2012 at 07:10:33PM -0400, Yury Selivanov wrote:
>
>> It seems that we have the following options for 'signature(obj)':
>>
>> 1. If 'obj' has a '__signature__' attribute - return a copy of it,
>>    if not - create a new one.
>>
>> 2. If 'obj' has a '__signature__' attribute - return it,
>>    if not - create a new one.
>>
>> 3. Same as '2', but Signature is also immutable.
>
> There's a slight ambiguity there.  Do you mean, create a __signature__
> attribute, or just create a new Signature instance?
>
> I presume you mean the latter, a Signature instance, and not cache it in
> __signature__.

Right.  No implicit caching to __signature__ by the signature()
function.

So, more verbose:

1. If 'obj' has a '__signature__' attribute - return a copy of its
   value, if not - create a new Signature and return it.

2. If 'obj' has a '__signature__' attribute - return its value
   (without copying), if not - create a new Signature and return it.

3. Same as '2', but Signature is also immutable.

Thanks!

- Yury

From yselivanov.ml at gmail.com  Tue Jun 19 04:00:57 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Mon, 18 Jun 2012 22:00:57 -0400
Subject: [Python-Dev] PEP 362: 4th edition
In-Reply-To:
References: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com>
	<20120615210343.87279250031@webabinitio.net>
Message-ID: <2F1B1C35-6FEB-4816-9EB5-53D9D2AD146C@gmail.com>

On 2012-06-18, at 9:36 PM, Nick Coghlan wrote:
> On Tue, Jun 19, 2012 at 4:09 AM, Yury Selivanov wrote:
>> On 2012-06-18, at 1:35 PM, PJ Eby wrote:
>>> Then just copy the signature itself; as currently written, this is
>>> going to copy the annotation objects, which could produce weird
>>> side-effects from introspection.  Using deepcopy seems like overkill
>>> when all that's needed is a new Signature instance with a fresh
>>> OrderedDict.
>>
>> That's an excerpt from Signature.__deepcopy__:
>>
>>     cls = type(self)
>>     sig = cls.__new__(cls)
>>     sig.parameters = OrderedDict((name, param.__copy__()) \
>>                           for name, param in self.parameters.items())
>>
>> And Parameter.__copy__:
>>
>>     cls = type(self)
>>     copy = cls.__new__(cls)
>>     copy.__dict__.update(self.__dict__)
>>     return copy
>>
>> So we don't recursively deepcopy parameters in Signature.__deepcopy__
>> (I hope that we don't violate the deepcopy meaning here)
>
> In my opinion, it's better to redefine what you mean by a shallow copy
> (making it a bit deeper than just the direct attributes) rather than
> making a so-called deep copy shallower.

Agree.  That's the only thing about the implementation that I really
didn't like - a deepcopy that's not exactly deep.

> So keep the current copying semantics for Signature objects (i.e.
> creating new copies of the Parameter objects as well), but call it a
> shallow copy rather than a deep copy.  Make it clear in the
> documentation that any defaults and annotations are still shared with
> the underlying callable.

So, 'Signature.__deepcopy__()' -> 'Signature.shallow_copy()'?  Or make
it private - 'Signature._shallow_copy()'?
- Yury From steve at pearwood.info Tue Jun 19 04:02:03 2012 From: steve at pearwood.info (Steven D'Aprano) Date: Tue, 19 Jun 2012 12:02:03 +1000 Subject: [Python-Dev] PEP 362: 4th edition In-Reply-To: References: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com> <20120615210343.87279250031@webabinitio.net> Message-ID: <20120619020203.GB14418@ando> On Mon, Jun 18, 2012 at 02:09:17PM -0400, Yury Selivanov wrote: > That's an excerpt from Signature.__deepcopy__: > > cls = type(self) > sig = cls.__new__(cls) > sig.parameters = OrderedDict((name, param.__copy__()) \ > for name, param in self.parameters.items()) > > And Parameter.__copy__: > > cls = type(self) > copy = cls.__new__(cls) > copy.__dict__.update(self.__dict__) > return copy > > So we don't recursively deepcopy parameters in Signature.__deepcopy__ > (I hope that we don't violate the deepcopy meaning here) I think you are. I would describe the above as a shallow copy, not a deep copy. I expect a deep copy to go *all the way down*, as deep as possible. Even if it makes no practical difference, I think it will be less confusing to just describe it as a "copy" rather than a deep copy, unless you recursively copy everything all the way down. -- Steven From ncoghlan at gmail.com Tue Jun 19 04:06:40 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 19 Jun 2012 12:06:40 +1000 Subject: [Python-Dev] PEP 362: 4th edition In-Reply-To: <2F1B1C35-6FEB-4816-9EB5-53D9D2AD146C@gmail.com> References: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com> <20120615210343.87279250031@webabinitio.net> <2F1B1C35-6FEB-4816-9EB5-53D9D2AD146C@gmail.com> Message-ID: On Tue, Jun 19, 2012 at 12:00 PM, Yury Selivanov wrote: > On 2012-06-18, at 9:36 PM, Nick Coghlan wrote: >> So keep the current copying semantics for Signature objects (i.e. >> creating new copies of the Parameter objects as well), but call it a >> shallow copy rather than a deep copy. Make it clear in the >> documentation that any defaults and annotations are still shared with >> the underlying callable. > > So, 'Signature.__deepcopy__()' -> 'Signature.shallow_copy()'? ?Or make > it private - 'Signature._shallow_copy()'? I'd just call it Signature.__copy__ :) You're not doing anything unusual here, just declaring that the list of parameters is a part of the Signature object's state, and thus even a shallow copy shouldn't share the parameter objects. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From yselivanov.ml at gmail.com Tue Jun 19 04:35:06 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Mon, 18 Jun 2012 22:35:06 -0400 Subject: [Python-Dev] PEP 362: 4th edition In-Reply-To: References: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com> <20120615210343.87279250031@webabinitio.net> <2F1B1C35-6FEB-4816-9EB5-53D9D2AD146C@gmail.com> Message-ID: <6DF46636-6F93-4A06-B6CC-BE8E2D411A1E@gmail.com> On 2012-06-18, at 10:06 PM, Nick Coghlan wrote: > On Tue, Jun 19, 2012 at 12:00 PM, Yury Selivanov > wrote: >> On 2012-06-18, at 9:36 PM, Nick Coghlan wrote: >>> So keep the current copying semantics for Signature objects (i.e. >>> creating new copies of the Parameter objects as well), but call it a >>> shallow copy rather than a deep copy. Make it clear in the >>> documentation that any defaults and annotations are still shared with >>> the underlying callable. >> >> So, 'Signature.__deepcopy__()' -> 'Signature.shallow_copy()'? Or make >> it private - 'Signature._shallow_copy()'? 
> I'd just call it Signature.__copy__ :)
>
> You're not doing anything unusual here, just declaring that the list
> of parameters is a part of the Signature object's state, and thus even
> a shallow copy shouldn't share the parameter objects.

OK, done ;)

BTW, http://bugs.python.org/issue15008 has the latest implementation
attached (if anybody wants to play with it)

- Yury

From tjreedy at udel.edu  Tue Jun 19 04:35:35 2012
From: tjreedy at udel.edu (Terry Reedy)
Date: Mon, 18 Jun 2012 22:35:35 -0400
Subject: [Python-Dev] CFFI released
In-Reply-To:
References:
Message-ID:

On 6/18/2012 5:29 PM, Armin Rigo wrote:
> On Mon, Jun 18, 2012 at 9:10 PM, Maciej Fijalkowski wrote:
>> Me
>>> Make cffi less buggy (check the tracker for new test cases ;-), faster
>>> (closer to swig type wrappers), and easier to use than ctypes, and I am sure
>>> there will be interest.
>>
>> I would say it's already fulfilling those three, but I suppose you should
>> try for yourself.

> I don't think the first is fulfilled so far, as we found various
> issues on various Linux and non-Linux platforms (and fixed them, so I
> suppose that release 0.1.1 is coming soon).  But I agree with Fijal
> about speed and ease of use.  Like SWIG it generates wrappers in the
> form of a CPython C extension with built-in functions, so I suppose
> the performance is similar to SWIG and not ctypes.  Well, SWIG
> wrappers typically have custom logic written in C, whereas in cffi
> this logic is typically written as Python code, so I suppose it ends
> up being slower (on CPython; on PyPy small wrapping functions are
> inlined and have little cost).  But the same argument can be pushed
> further to "why did you use a slow language like Python to write your
> app in the first place", which I hope we agree is bogus :-)

Yes, because languages have no speed, only implementations do; and yes,
because when CPython really is too slow for a particular task, it can be
pushed onto C.  But some people (pygame, others on python-list) have
reported that for their project, ctypes negates too much of the C
speedup, relative to swig or similar.  So it has not been quite the C
wrapper generator killer that some people hoped for.  (This is not to
say that it is not great for the uses it does succeed at.)

-- Terry Jan Reedy

From ncoghlan at gmail.com  Tue Jun 19 05:38:26 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 19 Jun 2012 13:38:26 +1000
Subject: [Python-Dev] CFFI released
In-Reply-To:
References:
Message-ID:

On Tue, Jun 19, 2012 at 12:35 PM, Terry Reedy wrote:
> Yes, because languages have no speed, only implementations do; and yes,
> because when CPython really is too slow for a particular task, it can be
> pushed onto C.  But some people (pygame, others on python-list) have
> reported that for their project, ctypes negates too much of the C
> speedup, relative to swig or similar.  So it has not been quite the C
> wrapper generator killer that some people hoped for.  (This is not to
> say that it is not great for the uses it does succeed at.)

There's also another reason ctypes hasn't taken over from Cython and
SWIG: because it's entirely ABI based and doesn't look at the C header
files, it loses even what little type safety C possesses.  SWIG and
Cython, on the other hand, suffer from the fact that you can't just
decide to wrap an arbitrary DLL on the fly *without* predefining an
extension module, and you also can't just use C syntax to define the
ABI you want to access (although SWIG actually gets pretty close in
many cases).
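For readers following along, a small sketch of the ctypes pattern just
described - the declarations below are never checked against the real C
headers, which is exactly the type safety being lost (Unix-style
library lookup assumed):

    import ctypes
    import ctypes.util

    # Purely ABI-based: we assert pow()'s prototype ourselves.
    libm = ctypes.CDLL(ctypes.util.find_library("m"))
    libm.pow.restype = ctypes.c_double
    libm.pow.argtypes = [ctypes.c_double, ctypes.c_double]
    print(libm.pow(2.0, 10.0))  # 1024.0, correct only because we declared it right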
The approach Armin and Maciej have chosen here (using C declarations to define the ABI, and supporting verification against the C headers as a separate step) looks very promising. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From brian at python.org Tue Jun 19 06:31:21 2012 From: brian at python.org (Brian Curtin) Date: Mon, 18 Jun 2012 23:31:21 -0500 Subject: [Python-Dev] PEP 397 - Last Comments Message-ID: Martin approached me earlier and requested that I act as PEP czar for 397. I haven't been involved in the writing of the PEP and have been mostly observing from the outside, so I accepted and hope to get this wrapped up quickly and implemented in time for the beta. The PEP is pretty complete, but there are a few outstanding issues. On Mon, Jun 18, 2012 at 1:05 PM, Terry Reedy wrote: > "Independent installations will always only overwrite newer versions of the > launcher with older versions." 'always only' is a bit awkward and the > sentence looks backwards to me. I would expect only overwriting older > versions with newer versions. Agreed, I would expect the same. I would think taking out the word "only" and then flipping newer and older in the sentence would correct it. On Mon, Jun 18, 2012 at 1:05 PM, Terry Reedy wrote: > These seem contradictory: > > "The 32-bit distribution of Python will not install a 32-bit version of the > launcher on a 64-bit system." > > I presume this mean that it will install the 64-bit version and that there > will always be only one version of the launcher on the system. > > "On 64bit Windows with both 32bit and 64bit implementations of the same > (major.minor) Python version installed, the 64bit version will always be > preferred. ?This will be true for both 32bit and 64bit implementations of > the launcher - a 32bit launcher will prefer to execute a 64bit Python > installation of the specified version if available." > > This implies to me that the 32bit installation *will* install a 32bit > launcher and that there could be both versions of the launcher installed. I took that as covering an independently-installed launcher. You could always install your own 32-bit launcher, and it'd prefer to launch a binary matching the machine type. So yes, there could be multiple launchers installed for different machine types, and I'm not sure why we'd want to (or how we could) prevent people from installing them. You could have a 64-bit launcher available system-wide in your Windows folder, then you could have a 32-bit launcher running out of C:\Users\Terry for some purposes. Martin - is that correct? === Outside of Terry's concerns, I find the updated PEP almost ready to go as-is. Many of the updates were in line with what Martin and I briefly talked about at PyCon, and I believe some of them came out of previous PEP discussions on here, so I see nothing unexpected at this point. My only additional comment would be to have the "Configuration file" implementation details supplemented with a readable example of where the py.ini file should be placed. On my machine that is "C:\Users\brian\AppData\Local", rather than making people have to run that parameter through the listed function via pywin32. 
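For example, something along these lines (hypothetical contents; see
the PEP's "Configuration file" section for the exact keys it defines):

    ; C:\Users\brian\AppData\Local\py.ini
    [defaults]
    python=3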
From stefan_ml at behnel.de  Tue Jun 19 08:28:37 2012
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Tue, 19 Jun 2012 08:28:37 +0200
Subject: [Python-Dev] CFFI released
In-Reply-To:
References:
Message-ID:

Armin Rigo, 18.06.2012 23:29:
> On Mon, Jun 18, 2012 at 9:10 PM, Maciej Fijalkowski wrote:
>>> Make cffi less buggy (check the tracker for new test cases ;-), faster
>>> (closer to swig type wrappers), and easier to use than ctypes, and I am sure
>>> there will be interest.
>>
>> I would say it's already fulfilling those three, but I suppose you should
>> try for yourself.
>
> I don't think the first is fulfilled so far, as we found various
> issues on various Linux and non-Linux platforms (and fixed them, so I
> suppose that release 0.1.1 is coming soon).  But I agree with Fijal
> about speed and ease of use.  Like SWIG it generates wrappers in the
> form of a CPython C extension with built-in functions, so I suppose
> the performance is similar to SWIG and not ctypes.  Well, SWIG
> wrappers typically have custom logic written in C, whereas in cffi
> this logic is typically written as Python code, so I suppose it ends
> up being slower

Any reason you didn't write the C parts in Cython?  Skimming through the
code, it seems like it could benefit, both in terms of code overhead and
performance.  For something that heavily relies on Python calls,
reducing the call overhead some more should be quite interesting.

It would also be cool to have Cython integration in the sense that cffi
could generate a .pxd file from the parsed C declarations (and
eventually pass the declarations directly into the compiler), which
Cython could use for its on-the-fly code compilation (similar to what
cffi does in the background, as I understand it) or even in its pure
Python mode to allow for C code interaction (which currently requires
Cython syntax).  Both could then support the same interface for C
declarations.

That would allow users to start with cffi and then compile the
performance critical parts of their code (e.g. a couple of functions in
a Python module) through Cython without major changes - in some cases,
applying the "cython.compile" decorator could be enough already.

Sounds great to me.

Stefan

From martin at v.loewis.de  Tue Jun 19 08:30:40 2012
From: martin at v.loewis.de (Martin v. Löwis)
Date: Tue, 19 Jun 2012 08:30:40 +0200
Subject: [Python-Dev] PEP 397 - Last Comments
In-Reply-To:
References:
Message-ID: <4FE01C90.7010603@v.loewis.de>

> Agreed, I would expect the same. I would think taking out the word
> "only" and then flipping newer and older in the sentence would correct
> it.

Will change.

>>> "On 64bit Windows with both 32bit and 64bit implementations of the same
>>> (major.minor) Python version installed, the 64bit version will always be
>>> preferred.  This will be true for both 32bit and 64bit implementations of
>>> the launcher - a 32bit launcher will prefer to execute a 64bit Python
>>> installation of the specified version if available."
>>>
>>> This implies to me that the 32bit installation *will* install a 32bit
>>> launcher and that there could be both versions of the launcher installed.

No - this paragraph talks about the Python being launched, not the
bitness of the launcher.  As the launcher (currently) always creates a
subprocess, this is quite feasible.

The bitness of the launcher really doesn't matter, except that a 32-bit
launcher cannot access all directories, and a 64-bit launcher does not
work on a 32-bit system.

Now that I think about it, it might be that it's best to always have the
launcher as a 32-bit binary.  It could disable the filesystem and
registry redirection if it really wanted to, and would work on both
32-bit and 64-bit systems.

> I took that as covering an independently-installed launcher.
>
> You could always install your own 32-bit launcher, and it'd prefer to
> launch a binary matching the machine type.

No, that's not the plan.  The binary being launched is entirely
controlled by command line arguments, ini files, and shebang lines.

I personally find it sad that it always creates a subprocess, and it
could avoid doing so if the launched Python has the same bitness, but
alas, the arguments against doing so are mostly convincing.

> So yes, there could be
> multiple launchers installed for different machine types, and I'm not
> sure why we'd want to (or how we could) prevent people from installing
> them.  You could have a 64-bit launcher available system-wide in your
> Windows folder, then you could have a 32-bit launcher running out of
> C:\Users\Terry for some purposes.

The PEP doesn't really consider launcher binaries not installed into
the standard location.  It would work, but it's out of scope of the PEP.

The PEP actually only talks about launcher binaries in c:\windows, and
essentially says that they must match the bitness of the system.

> My only additional comment would be to have the "Configuration file"
> implementation details supplemented with a readable example of where
> the py.ini file should be placed.  On my machine that is
> "C:\Users\brian\AppData\Local", rather than making people have to run
> that parameter through the listed function via pywin32.

Will do.

Martin

From greg.ewing at canterbury.ac.nz  Tue Jun 19 08:30:57 2012
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Tue, 19 Jun 2012 18:30:57 +1200
Subject: [Python-Dev] CFFI released
In-Reply-To:
References:
Message-ID: <4FE01CA1.6020605@canterbury.ac.nz>

Is there any provision for keeping the compiled C code and distributing
it along with an application?  Requiring a C compiler to be present at
all times could be a difficulty for Windows.

-- Greg

From arigo at tunes.org  Tue Jun 19 09:08:08 2012
From: arigo at tunes.org (Armin Rigo)
Date: Tue, 19 Jun 2012 09:08:08 +0200
Subject: [Python-Dev] CFFI released
In-Reply-To: <4FE01CA1.6020605@canterbury.ac.nz>
References: <4FE01CA1.6020605@canterbury.ac.nz>
Message-ID:

Hi Greg,

On Tue, Jun 19, 2012 at 8:30 AM, Greg Ewing wrote:
> Is there any provision for keeping the compiled
> C code and distributing it along with an application?
> Requiring a C compiler to be present at all times
> could be a difficulty for Windows.

We are aware of it and doing that is work in progress.

Armin.

From arigo at tunes.org  Tue Jun 19 09:59:14 2012
From: arigo at tunes.org (Armin Rigo)
Date: Tue, 19 Jun 2012 09:59:14 +0200
Subject: [Python-Dev] CFFI released
In-Reply-To:
References:
Message-ID:

Hi Stefan,

On Tue, Jun 19, 2012 at 8:28 AM, Stefan Behnel wrote:
> Any reason you didn't write the C parts in Cython?

'''As a general rule, when there is a design issue to resolve, we pick
the solution that is the "most C-like".''' (from the documentation).
But you are welcome to write Cython bindings for/from/around cffi if
you like.

A bientôt,

Armin.
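For reference, the flavour of ABI-level interface under discussion -
adapted from the cffi 0.1 announcement (Python 3 spelling of the
strings assumed):

    from cffi import FFI

    ffi = FFI()
    ffi.cdef("""
        int printf(const char *format, ...);
    """)                                  # plain C declarations
    C = ffi.dlopen(None)                  # load the standard C library
    arg = ffi.new("char[]", b"world")
    C.printf(b"hi there, %s!\n", arg)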
From larry at hastings.org Tue Jun 19 11:43:30 2012 From: larry at hastings.org (Larry Hastings) Date: Tue, 19 Jun 2012 02:43:30 -0700 Subject: [Python-Dev] PEP 362: 4th edition In-Reply-To: <6DF46636-6F93-4A06-B6CC-BE8E2D411A1E@gmail.com> References: <739E47A2-F1A3-4FBB-8549-01B745288AA5@gmail.com> <20120615210343.87279250031@webabinitio.net> <2F1B1C35-6FEB-4816-9EB5-53D9D2AD146C@gmail.com> <6DF46636-6F93-4A06-B6CC-BE8E2D411A1E@gmail.com> Message-ID: <4FE049C2.3090101@hastings.org> On 06/18/2012 07:35 PM, Yury Selivanov wrote: > BTW, http://bugs.python.org/issue15008 has the latest implementation > attached (if anybody wants to play with it) I've also posted the latest minor tweaks to the PEP, on behalf of Yury. The new version is already live: http://www.python.org/dev/peps/pep-0362 Cheers, //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Tue Jun 19 11:48:19 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 19 Jun 2012 11:48:19 +0200 Subject: [Python-Dev] cpython: Fix #14772: Return the destination from some shutil functions. References: Message-ID: <20120619114819.6312859a@pitrou.net> On Tue, 19 Jun 2012 01:41:44 +0200 brian.curtin wrote: > http://hg.python.org/cpython/rev/8281233ec648 > changeset: 77514:8281233ec648 > user: Brian Curtin > date: Mon Jun 18 18:41:07 2012 -0500 > summary: > Fix #14772: Return the destination from some shutil functions. > > files: > Doc/library/shutil.rst | 14 ++++++--- > Lib/shutil.py | 13 +++++++-- > Lib/test/test_shutil.py | 41 +++++++++++++++++++++++++++++ > Misc/NEWS | 2 + > 4 files changed, 62 insertions(+), 8 deletions(-) > > > diff --git a/Doc/library/shutil.rst b/Doc/library/shutil.rst > --- a/Doc/library/shutil.rst > +++ b/Doc/library/shutil.rst You forgot to add some versionchanged tags for these changes. Regards Antoine. From gmspro at yahoo.com Tue Jun 19 13:39:30 2012 From: gmspro at yahoo.com (gmspro) Date: Tue, 19 Jun 2012 04:39:30 -0700 (PDT) Subject: [Python-Dev] Help to fix this bug http://bugs.python.org/issue15068 Message-ID: <1340105970.7662.YahooMailClassic@web164603.mail.gq1.yahoo.com> Hi, I'm working on this bug to fix it. http://bugs.python.org/issue15068 >>> from sys import stdin >>> str=stdin.read() hello hello world CTRL+D CTRL+D Can anyone tell me where is stdin.read() function defined? Or where is sys.stdin defined? Or which function is called for str=stdin.read() ? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From eliben at gmail.com Tue Jun 19 14:01:01 2012 From: eliben at gmail.com (Eli Bendersky) Date: Tue, 19 Jun 2012 15:01:01 +0300 Subject: [Python-Dev] Help to fix this bug http://bugs.python.org/issue15068 In-Reply-To: <1340105970.7662.YahooMailClassic@web164603.mail.gq1.yahoo.com> References: <1340105970.7662.YahooMailClassic@web164603.mail.gq1.yahoo.com> Message-ID: It depends on the Python version. In 3.3, for example, look into Modules/_io/fileio.c Eli On Tue, Jun 19, 2012 at 2:39 PM, gmspro wrote: > Hi, > > I'm working on this bug to fix it. http://bugs.python.org/issue15068 > > >>> from sys import stdin > >>> str=stdin.read() > hello > hello world > CTRL+D > CTRL+D > > Can anyone tell me where is stdin.read() function defined? > Or where is sys.stdin defined? > Or which function is called for str=stdin.read() ? 
> > Thanks > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/eliben%40gmail.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Tue Jun 19 14:13:29 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 19 Jun 2012 14:13:29 +0200 Subject: [Python-Dev] Help to fix this bug http://bugs.python.org/issue15068 References: <1340105970.7662.YahooMailClassic@web164603.mail.gq1.yahoo.com> Message-ID: <20120619141329.6d002e82@pitrou.net> Hi, On Tue, 19 Jun 2012 04:39:30 -0700 (PDT) gmspro wrote: > Hi, > > I'm working on this bug to fix it. http://bugs.python.org/issue15068 I'm not sure why you think this is fixable, given the comments on the tracker. What is your plan? > >>> from sys import stdin > >>> str=stdin.read() > hello > hello world > CTRL+D > CTRL+D > > Can anyone tell me where is stdin.read() function defined? > Or where is sys.stdin defined? Can I suggest you try to investigate it a bit yourself: >>> sys.stdin <_io.TextIOWrapper name='' mode='r' encoding='UTF-8'> So it's a TextIOWrapper from the _io module (which is really the implementation of the io module). You'll find its source in Modules/_io. TextIOWrapper objects are defined in Modules/_io/textio.c. But as you know, they wrap buffered I/O objects, which are defined in Modules/_io/bufferedio.c. In sys.stdin's case, the buffered I/O object wraps a raw FileIO object, defined in Modules/_io/fileio.c: >>> sys.stdin.buffer <_io.BufferedReader name=''> >>> sys.stdin.buffer.raw <_io.FileIO name='' mode='rb'> Regards Antoine. From storchaka at gmail.com Tue Jun 19 16:28:28 2012 From: storchaka at gmail.com (Serhiy Storchaka) Date: Tue, 19 Jun 2012 17:28:28 +0300 Subject: [Python-Dev] Help to fix this bug http://bugs.python.org/issue15068 In-Reply-To: <20120619141329.6d002e82@pitrou.net> References: <1340105970.7662.YahooMailClassic@web164603.mail.gq1.yahoo.com> <20120619141329.6d002e82@pitrou.net> Message-ID: On 19.06.12 15:13, Antoine Pitrou wrote: >>>> sys.stdin > <_io.TextIOWrapper name='' mode='r' encoding='UTF-8'> > > So it's a TextIOWrapper from the _io module (which is really the > implementation of the io module). You'll find its source in > Modules/_io. TextIOWrapper objects are defined in Modules/_io/textio.c. > But as you know, they wrap buffered I/O objects, which are defined in > Modules/_io/bufferedio.c. In sys.stdin's case, the buffered I/O object > wraps a raw FileIO object, defined in Modules/_io/fileio.c: > >>>> sys.stdin.buffer > <_io.BufferedReader name=''> >>>> sys.stdin.buffer.raw > <_io.FileIO name='' mode='rb'> And don't forget about _pyio module. From brian at python.org Tue Jun 19 17:13:28 2012 From: brian at python.org (Brian Curtin) Date: Tue, 19 Jun 2012 10:13:28 -0500 Subject: [Python-Dev] PEP 397 - Last Comments In-Reply-To: <4FE01C90.7010603@v.loewis.de> References: <4FE01C90.7010603@v.loewis.de> Message-ID: On Tue, Jun 19, 2012 at 1:30 AM, "Martin v. L?wis" wrote: >> Agreed, I would expect the same. I would think taking out the word >> "only" and then flipping newer and older in the sentence would correct >> it. > > Will change. > >>> "On 64bit Windows with both 32bit and 64bit implementations of the same >>> (major.minor) Python version installed, the 64bit version will always be >>> preferred. 
This will be true for both 32bit and 64bit implementations of
>>> the launcher - a 32bit launcher will prefer to execute a 64bit Python
>>> installation of the specified version if available."
>>>
>>> This implies to me that the 32bit installation *will* install a 32bit
>>> launcher and that there could be both versions of the launcher installed.
>
> No - this paragraph talks about the Python being launched, not the
> bitness of the launcher.  As the launcher (currently) always creates a
> subprocess, this is quite feasible.
>
> The bitness of the launcher really doesn't matter, except that a 32-bit
> launcher cannot access all directories, and a 64-bit launcher does not
> work on a 32-bit system.
>
> Now that I think about it, it might be that it's best to always have the
> launcher as a 32-bit binary.  It could disable the filesystem and
> registry redirection if it really wanted to, and would work on both
> 32-bit and 64-bit systems.

Always doing a 32-bit binary seems like a better move to me.

>> I took that as covering an independently-installed launcher.
>>
>> You could always install your own 32-bit launcher, and it'd prefer to
>> launch a binary matching the machine type.
>
> No, that's not the plan.  The binary being launched is entirely
> controlled by command line arguments, ini files, and shebang lines.
>
> I personally find it sad that it always creates a subprocess, and it
> could avoid doing so if the launched Python has the same bitness, but
> alas, the arguments against doing so are mostly convincing.
>
>> So yes, there could be
>> multiple launchers installed for different machine types, and I'm not
>> sure why we'd want to (or how we could) prevent people from installing
>> them.  You could have a 64-bit launcher available system-wide in your
>> Windows folder, then you could have a 32-bit launcher running out of
>> C:\Users\Terry for some purposes.
>
> The PEP doesn't really consider launcher binaries not installed into
> the standard location.  It would work, but it's out of scope of the PEP.
>
> The PEP actually only talks about launcher binaries in c:\windows, and
> essentially says that they must match the bitness of the system.

True, got it.

>> My only additional comment would be to have the "Configuration file"
>> implementation details supplemented with a readable example of where
>> the py.ini file should be placed.  On my machine that is
>> "C:\Users\brian\AppData\Local", rather than making people have to run
>> that parameter through the listed function via pywin32.
>
> Will do.
>
> Martin

From yselivanov.ml at gmail.com  Tue Jun 19 17:27:20 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Tue, 19 Jun 2012 11:27:20 -0400
Subject: [Python-Dev] pep 362 - 5th edition
Message-ID: <5ECF5062-678E-4281-A89C-C2E14B2662DB@gmail.com>

Hello,

The new revision of PEP 362 has been posted:
http://www.python.org/dev/peps/pep-0362/

Summary:

1. What was 'Signature.__deepcopy__' is now 'Signature.__copy__'.
__copy__ creates a shallow copy of Signature, shallow copying its
Parameters as well.

2. 'Signature.format()' was removed.  I think we'll add something to
customize formatting later, in 3.4.  Signature still has its __str__
method, though.

3. Built-in ('C') functions no longer have a mutable '__signature__'
attribute; that patch was reverted.  In the "Design Considerations"
section we stated clearly that we don't support some callables.

4. Positions of keyword-only parameters no longer affect equality
testing of Signatures, i.e.
'foo(*, a, b)' is equal to 'foo(*, b, a)' (Thanks to Jim Jewett for pointing that out) The only question we have now is: when we do an equality test between Signatures, should we account for positional-only, var_positional and var_keyword argument names? So that: 'foo(*args)' will be equal to 'bar(*arguments)', but not to 'spam(*coordinates:int)' (Again, I think that's Jim's idea) Thank you! -- PEP: 362 Title: Function Signature Object Version: $Revision$ Last-Modified: $Date$ Author: Brett Cannon , Jiwon Seo , Yury Selivanov , Larry Hastings Status: Draft Type: Standards Track Content-Type: text/x-rst Created: 21-Aug-2006 Python-Version: 3.3 Post-History: 04-Jun-2012 Abstract ======== Python has always supported powerful introspection capabilities, including introspecting functions and methods (for the rest of this PEP, "function" refers to both functions and methods). By examining a function object you can fully reconstruct the function's signature. Unfortunately this information is stored in an inconvenient manner, and is spread across a half-dozen deeply nested attributes. This PEP proposes a new representation for function signatures. The new representation contains all necessary information about a function and its parameters, and makes introspection easy and straightforward. However, this object does not replace the existing function metadata, which is used by Python itself to execute those functions. The new metadata object is intended solely to make function introspection easier for Python programmers. Signature Object ================ A Signature object represents the call signature of a function and its return annotation. For each parameter accepted by the function it stores a `Parameter object`_ in its ``parameters`` collection. A Signature object has the following public attributes and methods: * return_annotation : object The annotation for the return type of the function if specified. If the function has no annotation for its return type, this attribute is not set. * parameters : OrderedDict An ordered mapping of parameters' names to the corresponding Parameter objects (keyword-only arguments are in the same order as listed in ``code.co_varnames``). * bind(\*args, \*\*kwargs) -> BoundArguments Creates a mapping from positional and keyword arguments to parameters. Raises a ``TypeError`` if the passed arguments do not match the signature. * bind_partial(\*args, \*\*kwargs) -> BoundArguments Works the same way as ``bind()``, but allows the omission of some required arguments (mimics ``functools.partial`` behavior.) Raises a ``TypeError`` if the passed arguments do not match the signature. It's possible to test Signatures for equality. Two signatures are equal when their parameters are equal, their positional and positional-only parameters appear in the same order, and they have equal return annotations. Changes to the Signature object, or to any of its data members, do not affect the function itself. Signature also implements ``__str__`` and ``__copy__`` methods. The latter creates a shallow copy of Signature, with all Parameter objects copied as well. Parameter Object ================ Python's expressive syntax means functions can accept many different kinds of parameters with many subtle semantic differences. We propose a rich Parameter object designed to represent any possible function parameter. The structure of the Parameter object is: * name : str The name of the parameter as a string. * default : object The default value for the parameter, if specified.
If the parameter has no default value, this attribute is not set. * annotation : object The annotation for the parameter if specified. If the parameter has no annotation, this attribute is not set. * kind : str Describes how argument values are bound to the parameter. Possible values: * ``Parameter.POSITIONAL_ONLY`` - value must be supplied as a positional argument. Python has no explicit syntax for defining positional-only parameters, but many builtin and extension module functions (especially those that accept only one or two parameters) accept them. * ``Parameter.POSITIONAL_OR_KEYWORD`` - value may be supplied as either a keyword or positional argument (this is the standard binding behaviour for functions implemented in Python.) * ``Parameter.KEYWORD_ONLY`` - value must be supplied as a keyword argument. Keyword only parameters are those which appear after a "*" or "\*args" entry in a Python function definition. * ``Parameter.VAR_POSITIONAL`` - a tuple of positional arguments that aren't bound to any other parameter. This corresponds to a "\*args" parameter in a Python function definition. * ``Parameter.VAR_KEYWORD`` - a dict of keyword arguments that aren't bound to any other parameter. This corresponds to a "\*\*kwds" parameter in a Python function definition. Two parameters are equal when they have equal names, kinds, defaults, and annotations. BoundArguments Object ===================== Result of a ``Signature.bind`` call. Holds the mapping of arguments to the function's parameters. Has the following public attributes: * arguments : OrderedDict An ordered, mutable mapping of parameters' names to arguments' values. Does not contain arguments' default values. * args : tuple Tuple of positional arguments values. Dynamically computed from the 'arguments' attribute. * kwargs : dict Dict of keyword arguments values. Dynamically computed from the 'arguments' attribute. The ``arguments`` attribute should be used in conjunction with ``Signature.parameters`` for any arguments processing purposes. ``args`` and ``kwargs`` properties can be used to invoke functions: :: def test(a, *, b): ... sig = signature(test) ba = sig.bind(10, b=20) test(*ba.args, **ba.kwargs) Implementation ============== The implementation adds a new function ``signature()`` to the ``inspect`` module. The function is the preferred way of getting a ``Signature`` for a callable object. 
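For example (an illustrative session; the output format matches the examples later in this PEP): ::

    >>> from inspect import signature
    >>> def foo(a, *, b:int, **kwargs) -> float:
    ...     pass
    >>> str(signature(foo))
    '(a, *, b:int, **kwargs) -> float'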
The function implements the following algorithm: - If the object is not callable - raise a TypeError - If the object has a ``__signature__`` attribute and if it is not ``None`` - return a shallow copy of it - If it has a ``__wrapped__`` attribute, return ``signature(object.__wrapped__)`` - If the object is an instance of ``FunctionType``, construct and return a new ``Signature`` for it - If the object is a method or a classmethod, construct and return a new ``Signature`` object, with its first parameter (usually ``self`` or ``cls``) removed - If the object is a staticmethod, construct and return a new ``Signature`` object - If the object is an instance of ``functools.partial``, construct a new ``Signature`` from its ``partial.func`` attribute, and account for already bound ``partial.args`` and ``partial.kwargs`` - If the object is a class or metaclass: - If the object's type has a ``__call__`` method defined in its MRO, return a Signature for it - If the object has a ``__new__`` method defined in its class, return a Signature object for it - If the object has a ``__init__`` method defined in its class, return a Signature object for it - Return ``signature(object.__call__)`` Note, that the ``Signature`` object is created in a lazy manner, and is not automatically cached. If, however, the Signature object was explicitly cached by the user, ``signature()`` returns a new shallow copy of it on each invocation. An implementation for Python 3.3 can be found at [#impl]_. The Python issue tracking the patch is [#issue]_. Design Considerations ===================== No implicit caching of Signature objects ---------------------------------------- The first PEP design had a provision for implicit caching of ``Signature`` objects in the ``inspect.signature()`` function. However, this has the following downsides: * If the ``Signature`` object is cached then any changes to the function it describes will not be reflected in it. However, if the caching is needed, it can always be done manually and explicitly * It is better to reserve the ``__signature__`` attribute for the cases when there is a need to explicitly set it to a ``Signature`` object that is different from the actual one Some functions may not be introspectable ---------------------------------------- Some functions may not be introspectable in certain implementations of Python. For example, in CPython, builtin functions defined in C provide no metadata about their arguments. Adding support for them is out of scope for this PEP.
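For instance, the metadata that drives signature construction for plain Python functions simply is not there for builtins: ::

    >>> len.__code__
    Traceback (most recent call last):
      ...
    AttributeError: 'builtin_function_or_method' object has no attribute '__code__'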
Examples ======== Visualizing Callable Objects' Signature --------------------------------------- Let's define some classes and functions: :: from inspect import signature from functools import partial, wraps class FooMeta(type): def __new__(mcls, name, bases, dct, *, bar:bool=False): return super().__new__(mcls, name, bases, dct) def __init__(cls, name, bases, dct, **kwargs): return super().__init__(name, bases, dct) class Foo(metaclass=FooMeta): def __init__(self, spam:int=42): self.spam = spam def __call__(self, a, b, *, c) -> tuple: return a, b, c def shared_vars(*shared_args): """Decorator factory that defines shared variables that are passed to every invocation of the function""" def decorator(f): @wraps(f) def wrapper(*args, **kwds): full_args = shared_args + args return f(*full_args, **kwds) # Override signature sig = wrapper.__signature__ = signature(f) for __ in shared_args: sig.parameters.popitem(last=False) return wrapper return decorator @shared_vars({}) def example(_state, a, b, c): return _state, a, b, c def format_signature(obj): return str(signature(obj)) Now, in the Python REPL: :: >>> format_signature(FooMeta) '(name, bases, dct, *, bar:bool=False)' >>> format_signature(Foo) '(spam:int=42)' >>> format_signature(Foo.__call__) '(self, a, b, *, c) -> tuple' >>> format_signature(Foo().__call__) '(a, b, *, c) -> tuple' >>> format_signature(partial(Foo().__call__, 1, c=3)) '(b, *, c=3) -> tuple' >>> format_signature(partial(partial(Foo().__call__, 1, c=3), 2, c=20)) '(*, c=20) -> tuple' >>> format_signature(example) '(a, b, c)' >>> format_signature(partial(example, 1, 2)) '(c)' >>> format_signature(partial(partial(example, 1, b=2), c=3)) '(b=2, c=3)' Annotation Checker ------------------ :: import inspect import functools def checktypes(func): '''Decorator to verify arguments and return types Example: >>> @checktypes ... def test(a:int, b:str) -> int: ... return int(a * b) >>> test(10, '1') 1111111111 >>> test(10, 1) Traceback (most recent call last): ... ValueError: test: wrong type of 'b' argument, 'str' expected, got 'int' ''' sig = inspect.signature(func) types = {} for param in sig.parameters.values(): # Iterate through function's parameters and build the list of # argument types try: type_ = param.annotation except AttributeError: continue else: if not inspect.isclass(type_): # Not a type, skip it continue types[param.name] = type_ # If the argument has a type specified, let's check that its # default value (if present) conforms with the type. try: default = param.default except AttributeError: continue else: if not isinstance(default, type_): raise ValueError("{func}: wrong type of a default value for {arg!r}". \ format(func=func.__qualname__, arg=param.name)) def check_type(sig, arg_name, arg_type, arg_value): # Internal function that encapsulates arguments type checking if not isinstance(arg_value, arg_type): raise ValueError("{func}: wrong type of {arg!r} argument, " \ "{exp!r} expected, got {got!r}".
\ format(func=func.__qualname__, arg=arg_name, exp=arg_type.__name__, got=type(arg_value).__name__)) @functools.wraps(func) def wrapper(*args, **kwargs): # Let's bind the arguments ba = sig.bind(*args, **kwargs) for arg_name, arg in ba.arguments.items(): # And iterate through the bound arguments try: type_ = types[arg_name] except KeyError: continue else: # OK, we have a type for the argument, lets get the corresponding # parameter description from the signature object param = sig.parameters[arg_name] if param.kind == param.VAR_POSITIONAL: # If this parameter is a variable-argument parameter, # then we need to check each of its values for value in arg: check_type(sig, arg_name, type_, value) elif param.kind == param.VAR_KEYWORD: # If this parameter is a variable-keyword-argument parameter: for subname, value in arg.items(): check_type(sig, arg_name + ':' + subname, type_, value) else: # And, finally, if this parameter a regular one: check_type(sig, arg_name, type_, arg) result = func(*ba.args, **ba.kwargs) # The last bit - let's check that the result is correct try: return_type = sig.return_annotation except AttributeError: # Looks like we don't have any restriction on the return type pass else: if isinstance(return_type, type) and not isinstance(result, return_type): raise ValueError('{func}: wrong return type, {exp} expected, got {got}'. \ format(func=func.__qualname__, exp=return_type.__name__, got=type(result).__name__)) return result return wrapper References ========== .. [#impl] pep362 branch (https://bitbucket.org/1st1/cpython/overview) .. [#issue] issue 15008 (http://bugs.python.org/issue15008) Copyright ========= This document has been placed in the public domain. .. Local Variables: mode: indented-text indent-tabs-mode: nil sentence-end-double-space: t fill-column: 70 coding: utf-8 End: From jimjjewett at gmail.com Tue Jun 19 17:33:41 2012 From: jimjjewett at gmail.com (Jim Jewett) Date: Tue, 19 Jun 2012 11:33:41 -0400 Subject: [Python-Dev] PEP 362 minor nits Message-ID: I've limited this to minor issues, but kept python-dev in the loop because some are questions, rather than merely editorial. Based on: http://hg.python.org/peps/file/tip/pep-0362.txt view pep-0362.txt @ 4466:659639095ace Committing the latest changes to PEP 362 on behalf of Yury Selivanov. author Larry Hastings date Tue, 19 Jun 2012 02:38:15 -0700 (3 hours ago) parents c1f693b39292 ================== 44 * return_annotation : object 45 The annotation for the return type of the function if specified. 46 If the function has no annotation for its return type, this 47 attribute is not set. I don't think you need the "if specified", given the next line. Similar comments around line 89 (Parameter.default) and 93 (Parameter.annotation). 48 * parameters : OrderedDict 49 An ordered mapping of parameters' names to the corresponding 50 Parameter objects (keyword-only arguments are in the same order 51 as listed in ``code.co_varnames``). Are you really sure you want to promise the keyword-only order in the PEP? [BoundArguments] 139 * arguments : OrderedDict 140 An ordered, mutable mapping of parameters' names to arguments' values. 141 Does not contain arguments' default values. I think 141 should be reworded, but I'm not certain my wording doesn't have similar problems, so I merely offer it: arguments contains only explicitly bound parameters; parameters for which the binding relied on a default value do not appear in arguments. 142 * args : tuple 143 Tuple of positional arguments values. 
Dynamically computed from 144 the 'arguments' attribute. 145 * kwargs : dict 146 Dict of keyword arguments values. Dynamically computed from 147 the 'arguments' attribute. Do you want to specify which will contain the normal parameters, that could be called either way? My naive assumption would be that as much as possible gets shoved into args, but once a positional parameter is left to default, remaining parameters are stuck in kwargs. 172 - If the object is not callable - raise a TypeError 173 174 - If the object has a ``__signature__`` attribute and if it 175 is not ``None`` - return a shallow copy of it Should these two be reversed? 183 - If the object is a method or a classmethod, construct and return 184 a new ``Signature`` object, with its first parameter (usually 185 ``self`` or ``cls``) removed 187 - If the object is a staticmethod, construct and return 188 a new ``Signature`` object I would reverse these two, to make it clear that a staticmethod is not treated as a method. 194 - If the object is a class or metaclass: 195 196 - If the object's type has a ``__call__`` method defined in 197 its MRO, return a Signature for it 198 199 - If the object has a ``__new__`` method defined in its class, 200 return a Signature object for it 201 202 - If the object has a ``__init__`` method defined in its class, 203 return a Signature object for it What happens if it inherits a __new__ or __init__ from something more derived than object? 207 Note, that I would remove the comma. 235 Some functions may not be introspectable 236 ---------------------------------------- 237 238 Some functions may not be introspectable in certain implementations of 239 Python. For example, in CPython, builtin functions defined in C provide 240 no metadata about their arguments. Adding support for them is out of 241 scope for this PEP. Ideally, it would at least be possible to manually construct signatures and register them in some central location. (Similar to what is done with pickle or copy.) Checking that location would then have to be an early step in the signature algorithm. -jJ From ethan at stoneleaf.us Tue Jun 19 17:57:34 2012 From: ethan at stoneleaf.us (Ethan Furman) Date: Tue, 19 Jun 2012 08:57:34 -0700 Subject: [Python-Dev] PEP 362 minor nits In-Reply-To: References: Message-ID: <4FE0A16E.7080702@stoneleaf.us> Jim Jewett wrote: > 48 * parameters : OrderedDict > 49 An ordered mapping of parameters' names to the corresponding > 50 Parameter objects (keyword-only arguments are in the same order > 51 as listed in ``code.co_varnames``). > > Are you really sure you want to promise the keyword-only order in the PEP? Is keyword order even important? We're already ignoring it for equality tests. ~Ethan~ From yselivanov.ml at gmail.com Tue Jun 19 17:53:44 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Tue, 19 Jun 2012 11:53:44 -0400 Subject: [Python-Dev] PEP 362 minor nits In-Reply-To: References: Message-ID: Jim, On 2012-06-19, at 11:33 AM, Jim Jewett wrote: > I've limited this to minor issues, but kept python-dev in the loop > because some are questions, rather than merely editorial. > > Based on: http://hg.python.org/peps/file/tip/pep-0362.txt > > view pep-0362.txt @ 4466:659639095ace > > Committing the latest changes to PEP 362 on behalf of Yury Selivanov. > author Larry Hastings > date Tue, 19 Jun 2012 02:38:15 -0700 (3 hours ago) > parents c1f693b39292 > > ================== > > 44 * return_annotation : object > 45 The annotation for the return type of the function if specified.
> 46 If the function has no annotation for its return type, this > 47 attribute is not set. > > I don't think you need the "if specified", given the next line. > Similar comments around line 89 (Parameter.default) and 93 > (Parameter.annotation). +1. > 48 * parameters : OrderedDict > 49 An ordered mapping of parameters' names to the corresponding > 50 Parameter objects (keyword-only arguments are in the same order > 51 as listed in ``code.co_varnames``). > > Are you really sure you want to promise the keyword-only order in the PEP? Well we can remove that sentence (it seems that at least for CPython we can guarantee so, as we work with '__code__.co_varnames', but again, even for CPython there may be some opcode optimizer that can screw up the order in some way) > [BoundArguments] > 139 * arguments : OrderedDict > 140 An ordered, mutable mapping of parameters' names to > arguments' values. > 141 Does not contain arguments' default values. > > I think 141 should be reworded, but I'm not certain my wording doesn't > have similar problems, so I merely offer it: > > arguments contains only explicitly bound parameters; parameters for > which the binding relied on a default value do not appear in > arguments. +1. > 142 * args : tuple > 143 Tuple of positional arguments values. Dynamically computed from > 144 the 'arguments' attribute. > 145 * kwargs : dict > 146 Dict of keyword arguments values. Dynamically computed from > 147 the 'arguments' attribute. > > Do you want to specify which will contain the normal parameters, that > could be called either way? My naive assumption would be that as much > as possible gets shoved into args, but once a positional parameter is > left to default, remaining parameters are stuck in kwargs. Correct, we push as much as possible to 'args'. Only var_keyword and keyword_only args go to 'kwargs'. But the words "positional" and "keyword" here refer more to what *args and **kwargs actually do, disconnected from the Signature's parameters. > 172 - If the object is not callable - raise a TypeError > 173 > 174 - If the object has a ``__signature__`` attribute and if it > 175 is not ``None`` - return a shallow copy of it > > Should these two be reversed? Do you have a use-case? I think we should only support callable types, otherwise we may mask an error in the user code. > 183 - If the object is a method or a classmethod, construct and return > 184 a new ``Signature`` object, with its first parameter (usually > 185 ``self`` or ``cls``) removed > > 187 - If the object is a staticmethod, construct and return > 188 a new ``Signature`` object > > I would reverse these two, to make it clear that a staticmethod is not > treated as a method. It's actually not how it's implemented. https://bitbucket.org/1st1/cpython/src/bf009eb6e1b4/Lib/inspect.py#cl-1262 We first check for (types.MethodType, classmethod), because they relay all attributes from the underlying function (including __signature__). But that's an implementation detail, the algorithm in the PEP just shows the big picture (is it OK?). > 194 - If the object is a class or metaclass: > 195 > 196 - If the object's type has a ``__call__`` method defined in > 197 its MRO, return a Signature for it > 198 > 199 - If the object has a ``__new__`` method defined in its class, > 200 return a Signature object for it > 201 > 202 - If the object has a ``__init__`` method defined in its class, > 203 return a Signature object for it > > What happens if it inherits a __new__ or __init__ from something more > derived than object?
What do you mean by "more derived than object"? > 207 Note, that > > I would remove the comma. > > > 235 Some functions may not be introspectable > 236 ---------------------------------------- > 237 > 238 Some functions may not be introspectable in certain implementations of > 239 Python. For example, in CPython, builtin functions defined in C provide > 240 no metadata about their arguments. Adding support for them is out of > 241 scope for this PEP. > > Ideally, it would at least be possible to manually construct > signatures and register them in some central location. (Similar to > what is done with pickle or copy.) Checking that location would then > have to be an early step in the signature algorithm. I think it's too early for this type of mechanism. Maybe in 3.4 we'll have a new way of defining parameters at the C level? I'd not rush this in 3.3. - Yury From jimjjewett at gmail.com Tue Jun 19 18:33:52 2012 From: jimjjewett at gmail.com (Jim Jewett) Date: Tue, 19 Jun 2012 12:33:52 -0400 Subject: [Python-Dev] PEP 362 minor nits In-Reply-To: References: Message-ID: On Tue, Jun 19, 2012 at 11:53 AM, Yury Selivanov wrote: >> Based on: http://hg.python.org/peps/file/tip/pep-0362.txt >> view pep-0362.txt @ 4466:659639095ace >> ================== >> 142 * args : tuple >> 143 Tuple of positional arguments values. Dynamically computed from >> 144 the 'arguments' attribute. >> 145 * kwargs : dict >> 146 Dict of keyword arguments values. Dynamically computed from >> 147 the 'arguments' attribute. >> Do you want to specify which will contain the normal parameters, that >> could be called either way? My naive assumption would be that as much >> as possible gets shoved into args, but once a positional parameter is >> left to default, remaining parameters are stuck in kwargs. > Correct, we push as much as possible to 'args'. Only var_keyword > and keyword_only args go to 'kwargs'. > But the words "positional" and "keyword" here refer more to what > *args and **kwargs actually do, disconnected from the Signature's parameters. Which is why there is some ambiguity, and I wondered if you were intentionally leaving it open or not. >>> def f(a): pass >>> s=signature(f) >>> ba1=s.bind(1) Now which of the following are true? >>> # Ambiguous parameters to args >>> ba.args==(1,) and ba.kwargs=={} >>> # or ambiguous parameters to kwargs >>> ba.args==() and ba.kwargs=={'a': 1} Does it matter how the argument was bound? As in, would >>> ba2=s.bind(a=2) produce a different answer? If as much as possible goes to args, then: >>> def g(a=1, b=2, c=3): pass >>> s=signature(g) >>> ba=s.bind(a=10, c=13) would imply >>> ba.args == (10,) and ba.kwargs == {'c': 13} True because a can be written positionally, but c can't unless b is, and b shouldn't be because it relied on the default value. >> 172 - If the object is not callable - raise a TypeError >> 173 >> 174 - If the object has a ``__signature__`` attribute and if it >> 175 is not ``None`` - return a shallow copy of it >> Should these two be reversed? > Do you have a use-case? Not really; the only cases that come to mind are cases where it makes sense to look at an explicit signature attribute, instead of calling the factory. >> 183 - If the object is a method or a classmethod, construct and return >> 184 a new ``Signature`` object, with its first parameter (usually >> 185 ``self`` or ``cls``) removed >> >> 187 - If the object is a staticmethod, construct and return >> 188
a new ``Signature`` object >> I would reverse these two, to make it clear that a staticmethod is not >> treated as a method. > It's actually not how it's implemented. ... > But that's an implementation detail, the algorithm in the PEP just > shows the big picture (is it OK?). Right; implementing it in the other order is fine, so long as the actual tests for methods exclude staticmethods. But for someone trying to understand it, staticmethods sound like a kind of method, and I would expect them to be included in something that handles methods, unless they were already excluded by a prior clause. >> 194 - If the object is a class or metaclass: >> 195 >> 196 - If the object's type has a ``__call__`` method defined in >> 197 its MRO, return a Signature for it >> 198 >> 199 - If the object has a ``__new__`` method defined in its class, >> 200 return a Signature object for it >> 201 >> 202 - If the object has a ``__init__`` method defined in its class, >> 203 return a Signature object for it >> >> What happens if it inherits a __new__ or __init__ from something more >> derived than object? > What do you mean by "more derived than object"? >>> class A: ... def __init__(self): pass >>> class B(A): ... Because of the distinction between "in its MRO" and "in its class", it looks like the signature of A is based on its __init__, but the signature of subclass B is not. -jJ From ethan at stoneleaf.us Tue Jun 19 17:55:19 2012 From: ethan at stoneleaf.us (Ethan Furman) Date: Tue, 19 Jun 2012 08:55:19 -0700 Subject: [Python-Dev] pep 362 - 5th edition In-Reply-To: <5ECF5062-678E-4281-A89C-C2E14B2662DB@gmail.com> References: <5ECF5062-678E-4281-A89C-C2E14B2662DB@gmail.com> Message-ID: <4FE0A0E7.1020804@stoneleaf.us> Yury Selivanov wrote: > Hello, > The new revision of PEP 362 has been posted: > http://www.python.org/dev/peps/pep-0362/ > Summary: > 1. What was 'Signature.__deepcopy__' is now 'Signature.__copy__'. > __copy__ creates a shallow copy of Signature, shallow copying its > Parameters as well. > 2. 'Signature.format()' was removed. I think we'll add something > to customize formatting later, in 3.4. Signature still has > its __str__ method, though. > 3. Built-in ('C') functions no longer have a mutable '__signature__' > attribute; that patch was reverted. In the "Design Considerations" > section we stated clearly that we don't support some callables. > 4. Positions of keyword-only parameters no longer affect equality > testing of Signatures, i.e. 'foo(*, a, b)' is equal to 'foo(*, b, a)' > (Thanks to Jim Jewett for pointing that out) > The only question we have now is: when we do an equality test between > Signatures, should we account for positional-only, var_positional > and var_keyword argument names? So that: 'foo(*args)' will > be equal to 'bar(*arguments)', but not to 'spam(*coordinates:int)' > (Again, I think that's Jim's idea) There are obviously cases where the names should be considered (such as foo(source, dest) and bar(dest, source)) and cases where it should not be (spam(text, count) and eggs(txt, cnt))... I think the default implementation should be strict (names are considered) as it is safer to have a false negative than a false positive. However, rather than force everyone who is willing to cope with the possible false positives from rewriting a Signature equality routine that ignores names, perhaps a method can be added to the class that does so? class Signature: . . .
def equivalent(self, other): "compares two Signatures for equality (ignores parameter names)" . . .
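Something along these lines, perhaps (just a sketch against the draft API -- it assumes 'parameters' is the ordered mapping described in the PEP and that 'default'/'annotation' may be absent):

    def equivalent(self, other):
        "compares two Signatures for equality (ignores parameter names)"
        if len(self.parameters) != len(other.parameters):
            return False
        pairs = zip(self.parameters.values(), other.parameters.values())
        for p1, p2 in pairs:
            # names deliberately ignored; kind, default and annotation must match
            if (p1.kind != p2.kind
                    or getattr(p1, 'default', None) != getattr(p2, 'default', None)
                    or getattr(p1, 'annotation', None) != getattr(p2, 'annotation', None)):
                return False
        return True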
In the "Design Considerations" >>> section we stated clear that we don't support some callables. >>> 4. Positions of keyword-only parameters now longer affect equality >>> testing of Signatures, i.e. 'foo(*, a, b)' is equal to 'foo(*, b, a)' >>> (Thanks to Jim Jewett for pointing that out) >>> The only question we have now is: when we do equality test between >>> Signatures, should we account for positional-only, var_positional >>> and var_keyword arguments names? So that: 'foo(*args)' will >>> be equal to 'bar(*arguments)', but not to 'spam(*coordinates:int)' >>> (Again, I think that's a Jim's idea) >> There are obviously cases where the names should be considered (such as foo(source, dest) and bar(dest, source) ) and cases where it should not be (spam(text, count) and eggs(txt, cnt) )... >> >> I think the default implementation should be strict (names are considered) as it is safer to have a false negative than a false positive. > > +1 > >> However, rather than force everyone who is willing to cope with the possible false positives from rewriting a Signature equality routine that ignores names, perhaps a method can be added to the class that does so? >> >> class Signature: >> . . . >> def equivalent(self, other): >> "compares two Signatures for equality (ignores parameter names)" >> . . . > > I don't think that comparing signatures will be a common operation, > so maybe we can postpone adding any additional methods for that? Sure, toss it in the list of possible adds for 3.4. At some point it was suggested that Signature be put in provisionally so we could modify the API if needed -- are we doing that? ~Ethan~ From yselivanov.ml at gmail.com Tue Jun 19 19:53:29 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Tue, 19 Jun 2012 13:53:29 -0400 Subject: [Python-Dev] pep 362 - 5th edition In-Reply-To: <4FE0B0E5.2000405@stoneleaf.us> References: <5ECF5062-678E-4281-A89C-C2E14B2662DB@gmail.com> <4FE0A0E7.1020804@stoneleaf.us> <4FE0B0E5.2000405@stoneleaf.us> Message-ID: On 2012-06-19, at 1:03 PM, Ethan Furman wrote: > Yury Selivanov wrote: >> On 2012-06-19, at 11:55 AM, Ethan Furman wrote: >>> Yury Selivanov wrote: >>>> Hello, >>>> The new revision of PEP 362 has been posted: >>>> http://www.python.org/dev/peps/pep-0362/ >>>> Summary: >>>> 1. What was 'Signature.__deepcopy__' is now 'Signature.__copy__'. >>>> __copy__ creates a shallow copy of Signature, shallow copying its >>>> Parameters as well. >>>> 2. 'Signature.format()' was removed. I think we'll add something >>>> to customize formatting later, in 3.4. Although, Signature still has >>>> its __str__ method. >>>> 3. Built-in ('C') functions no longer have mutable '__signature__' >>>> attribute, that patch was reverted. In the "Design Considerations" >>>> section we stated clear that we don't support some callables. >>>> 4. Positions of keyword-only parameters now longer affect equality >>>> testing of Signatures, i.e. 'foo(*, a, b)' is equal to 'foo(*, b, a)' >>>> (Thanks to Jim Jewett for pointing that out) >>>> The only question we have now is: when we do equality test between >>>> Signatures, should we account for positional-only, var_positional >>>> and var_keyword arguments names? So that: 'foo(*args)' will >>>> be equal to 'bar(*arguments)', but not to 'spam(*coordinates:int)' >>>> (Again, I think that's a Jim's idea) >>> There are obviously cases where the names should be considered (such as foo(source, dest) and bar(dest, source) ) and cases where it should not be (spam(text, count) and eggs(txt, cnt) )... 
>>> >>> I think the default implementation should be strict (names are considered) as it is safer to have a false negative than a false positive. >> +1 >>> However, rather than force everyone who is willing to cope with the possible false positives from rewriting a Signature equality routine that ignores names, perhaps a method can be added to the class that does so? >>> class Signature: >>> . . . >>> def equivalent(self, other): >>> "compares two Signatures for equality (ignores parameter names)" >>> . . . >> I don't think that comparing signatures will be a common operation, >> so maybe we can postpone adding any additional methods for that? > Sure, toss it in the list of possible adds for 3.4. > At some point it was suggested that Signature be put in provisionally so we could modify the API if needed -- are we doing that? Well, it doesn't have much of an API right now (just a few methods) - Yury From yselivanov.ml at gmail.com Tue Jun 19 20:10:23 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Tue, 19 Jun 2012 14:10:23 -0400 Subject: [Python-Dev] PEP 362 minor nits In-Reply-To: References: Message-ID: <569CD2C1-FDA8-4309-8F9A-931A889DE5C4@gmail.com> On 2012-06-19, at 12:33 PM, Jim Jewett wrote: > On Tue, Jun 19, 2012 at 11:53 AM, Yury Selivanov > wrote: > >>> Based on: http://hg.python.org/peps/file/tip/pep-0362.txt >>> view pep-0362.txt @ 4466:659639095ace >>> ================== > >>>> 142 * args : tuple >>>> 143 Tuple of positional arguments values. Dynamically computed from >>>> 144 the 'arguments' attribute. >>>> 145 * kwargs : dict >>>> 146 Dict of keyword arguments values. Dynamically computed from >>>> 147 the 'arguments' attribute. > >>> Do you want to specify which will contain the normal parameters, that >>> could be called either way? My naive assumption would be that as much >>> as possible gets shoved into args, but once a positional parameter is >>> left to default, remaining parameters are stuck in kwargs. > >> Correct, we push as much as possible to 'args'. Only var_keyword >> and keyword_only args go to 'kwargs'. > >> But the words "positional" and "keyword" here refer more to what >> *args and **kwargs actually do, disconnected from the Signature's parameters. > > Which is why there is some ambiguity, and I wondered if you were > intentionally leaving it open or not. > >>>> def f(a): pass >>>> s=signature(f) > >>>> ba1=s.bind(1) > > Now which of the following are true? > >>>> # Ambiguous parameters to args >>>> ba.args==(1,) and ba.kwargs=={} > >>>> # or ambiguous parameters to kwargs >>>> ba.args==() and ba.kwargs=={'a': 1} The first one (ba.args == (1,)) > Does it matter how the argument was bound? As in, would > >>>> ba2=s.bind(a=2) > > produce a different answer? No. > If as much as possible goes to args, then: > >>>> def g(a=1, b=2, c=3): pass >>>> s=signature(g) >>>> ba=s.bind(a=10, c=13) > > would imply > >>>> ba.args == (10,) and ba.kwargs == {'c': 13} > True Right (there was a bug in BoundArguments.args & kwargs - fixed and unit-tested now) > because a can be written positionally, but c can't unless b is, and b > shouldn't be because it relied on the default value. OK, so do you want to elaborate on BoundArguments.args & kwargs? (I think it's fine now) ...
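(For completeness, this is roughly how the fixed properties behave -- an illustrative session with your example, plus one more case showing that 'b' goes to 'args' when 'a' is explicitly bound:)

>>> def g(a=1, b=2, c=3): pass
>>> ba = signature(g).bind(a=10, c=13)
>>> ba.args
(10,)
>>> ba.kwargs
{'c': 13}
>>> ba2 = signature(g).bind(1, b=20)
>>> ba2.args
(1, 20)
>>> ba2.kwargs
{}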
>>> 183 - If the object is a method or a classmethod, construct and return >>> 184 a new ``Signature`` object, with its first parameter (usually >>> 185 ``self`` or ``cls``) removed >>> >>> 187 - If the object is a staticmethod, construct and return >>> 188 a new ``Signature`` object >>> >>> I would reverse these two, to make it clear that a staticmethod is not >>> treated as a method. >> It's actually not how it's implemented. > ... >> But that's an implementation detail, the algorithm in the PEP just >> shows the big picture (is it OK?). > Right; implementing it in the other order is fine, so long as the > actual tests for methods exclude staticmethods. But for someone > trying to understand it, staticmethods sound like a kind of method, > and I would expect them to be included in something that handles > methods, unless they were already excluded by a prior clause. I can tweak the PEP to make it more clear for those who don't know that staticmethods are not exactly methods, but do we really need that? (I don't want to change the implementation, because every 'isinstance' call matters, and we need to check on 'types.FunctionType' as soon as possible) >>> 194 - If the object is a class or metaclass: >>> 195 >>> 196 - If the object's type has a ``__call__`` method defined in >>> 197 its MRO, return a Signature for it >>> 198 >>> 199 - If the object has a ``__new__`` method defined in its class, >>> 200 return a Signature object for it >>> 201 >>> 202 - If the object has a ``__init__`` method defined in its class, >>> 203 return a Signature object for it >>> >>> What happens if it inherits a __new__ or __init__ from something more >>> derived than object? >> What do you mean by "more derived than object"? >>>> class A: ... def __init__(self): pass >>>> class B(A): ... > Because of the distinction between "in its MRO" and "in its class", it > looks like the signature of A is based on its __init__, but the > signature of subclass B is not. Got it. I currently check the MRO in the implementation, so your example will work as expected - B will have a signature of A.__init__. I'll update the PEP wording. - Yury From jimjjewett at gmail.com Tue Jun 19 22:17:46 2012 From: jimjjewett at gmail.com (Jim Jewett) Date: Tue, 19 Jun 2012 16:17:46 -0400 Subject: [Python-Dev] PEP 362 minor nits In-Reply-To: <569CD2C1-FDA8-4309-8F9A-931A889DE5C4@gmail.com> References: <569CD2C1-FDA8-4309-8F9A-931A889DE5C4@gmail.com> Message-ID: On Tue, Jun 19, 2012 at 2:10 PM, Yury Selivanov wrote: > On 2012-06-19, at 12:33 PM, Jim Jewett wrote: >> On Tue, Jun 19, 2012 at 11:53 AM, Yury Selivanov >> wrote: >>>> Based on: http://hg.python.org/peps/file/tip/pep-0362.txt >>>> view pep-0362.txt @ 4466:659639095ace >>>> ================== >>>> 142 * args : tuple >>>> 143 Tuple of positional arguments values. Dynamically computed from >>>> 144 the 'arguments' attribute. >>>> 145 * kwargs : dict >>>> 146 Dict of keyword arguments values. Dynamically computed from >>>> 147 the 'arguments' attribute. >>> Correct, we push as much as possible to 'args'. [examples to clarify] OK, I would just add a sentence and commented example then, something like: Arguments which could be passed as part of either *args or **kwargs will be included only in the args attribute. In the following example: >>> def g(a=1, b=2, c=3): pass >>> s=signature(g) >>> ba=s.bind(a=10, c=13) >>> ba.args (10,) >>> ba.kwargs {'c': 13} Parameter a is part of args, because it can be.
Parameter c must be passed as a keyword, because (earlier) parameter b is not being passed an explicit value. >> I can tweak the PEP to make it more clear for those who don't know >> that staticmethods are not exactly methods, but do we really need that? I would prefer it, if only because it surprised me. When we do distinguish between methods, staticmethod isn't usually the odd man out. And I also agree that the implementation doesn't need to change (except to add a comment), only the PEP. -jJ From merwok at netwok.org Tue Jun 19 23:46:30 2012 From: merwok at netwok.org (Éric Araujo) Date: Tue, 19 Jun 2012 17:46:30 -0400 Subject: [Python-Dev] Status of packaging in 3.3 Message-ID: <4FE0F336.7030709@netwok.org> Hi all, We need to make a decision about the packaging module in Python 3.3. Please read this message and breathe deeply before replying :) [Sorry this ends up being so long; Tarek, Georg, Guido, I hope you have the time to read it.] Let me first summarize the history of packaging in the standard library. (Please correct if I misremember something; this email expresses my opinion and I did not talk with Tarek or Alexis before writing it.) Two years ago the distutils2 (hereafter d2) project was started outside of the stdlib, to allow for fast-paced changes, releases and testing before merging it back. Changes in distutils were reverted to go back to misfeature-for-misfeature compatibility (but not bug-for-bug: bug fixes are allowed, unless we know for sure everyone is depending on them or working around them). Tarek's first hope was to have something that could be included in 2.7 and 3.2, but these deadlines came too fast. At one point near the start of 2011 (didn't find the email) there was a discussion with Martin about adding support for the stable ABI or parallel builds to distutils, in which Tarek and I opposed adding this new feature to distutils as per the freeze policy, and Martin declared he was not willing to work outside of the standard library. We (d2 developers and python-dev) then quickly agreed that distutils2 would be merged back after the release of 3.2, which was done. There was no PEP requested for this addition, maybe because this was not a fully new module but an improvement of an existing one with real-world-tested features, or maybe just because nobody thought about the process. In retrospect, a PEP would have really helped define the scope of the changes and the roadmap for packaging. Real life caused contributors to come and go, and the primary maintainer (Tarek at first, me since last December) to be at times very busy (like me these last three months), with the result that packaging is in my opinion just not ready. Many big and small things need more work: the three packaging PEPs implemented in d2 have small flaws or missing pieces (I'm not repeating the list here to avoid spawning subthreads) that need to be addressed, we've started to get feedback from users and developers only recently (pip and buildout devs since last PyCon, for example), the public Python API of d2 is far from great, the implementation is of very unequal quality, important features have patches that are not fully ready (and I do acknowledge that I am the blocker for reviews on many of them), the compiler system has not been revised, tests are not all clear and robust, some of the bdist commands need to be removed, a new bdist format needs to be designed, etc. With beta coming, a way to deal with that unfortunate situation needs to be found.
We could (a) grant an exception to packaging to allow changes after beta1; (b) keep packaging as it is now under a provisional status, with due warnings that many things are expected to change; (c) remove the unstable parts and deliver a subset that works (proposed by Tarek to the Pyramid author on distutils-sig); (d) not release packaging as part of Python 3.3 (I think that was also suggested on distutils-sig last month). I don't think (a) would give us enough time; we really want a few months (and releases) to hash out the API (most notably with the pip and buildout developers) and clean the bdist situation. Likewise (c) would require developer (my) time that is currently in short supply. (b) also requires time and would make development harder, not to mention probable user pain. This leaves (d), after long reflection, as my preferred choice, even though I disliked the idea at first (and I fully expect Tarek to feel the same way). I'd like to stress that this is not as bad as it appears at first. We (I) will have to craft reassuring wording to explain why 3.3b1 does not include packaging any more, but I believe that it would be worse for our users (code-wise and PR-wise) to deliver a half-finished version in 3.3 rather than to retract it and wait for 3.4. And if we keep in mind that many people are still using a 2.x version, releasing in 3.3 or 3.4 makes no difference for them: the standalone releases on PyPI will keep coming. Developer-wise, this would *not* mean that the considerable effort that went into porting and merging, and the really appreciated patches from developers such as Vinay, would have been in vain: even if packaging is removed from the main repo (or just from the release systems), there would be a clone to continue development, or the module would be added back right after the 3.3 release, or we would develop in the d2 repo and merge it back when it's ready. This is really an implementation detail for the decision; my point is that the work will not be lost. Thanks for reading; please express your opinion (especially Tarek as d2 project lead, Georg as RM and Guido as BDFL). From ncoghlan at gmail.com Wed Jun 20 00:14:54 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 20 Jun 2012 08:14:54 +1000 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <4FE0F336.7030709@netwok.org> References: <4FE0F336.7030709@netwok.org> Message-ID: Reverting and writing a full packaging PEP for 3.4 sounds like a wise course of action to me. Regards, Nick. -- Sent from my phone, thus the relative brevity :) From ethan at stoneleaf.us Wed Jun 20 00:09:42 2012 From: ethan at stoneleaf.us (Ethan Furman) Date: Tue, 19 Jun 2012 15:09:42 -0700 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <4FE0F336.7030709@netwok.org> References: <4FE0F336.7030709@netwok.org> Message-ID: <4FE0F8A6.3070708@stoneleaf.us> Éric Araujo wrote: > This leaves (d), after long reflection, as my preferred > choice, even though I disliked the idea at first (and I fully expect > Tarek to feel the same way). > > Thanks for reading; please express your opinion (especially Tarek as > d2 project lead, Georg as RM and Guido as BDFL). I would go with (d) -- it's still available on PyPI, and having a half-done product in the final release would not be good.
~Ethan~ (as an ordinary user ;) From p.f.moore at gmail.com Wed Jun 20 00:54:04 2012 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 19 Jun 2012 23:54:04 +0100 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <4FE0F336.7030709@netwok.org> References: <4FE0F336.7030709@netwok.org> Message-ID: On 19 June 2012 22:46, Éric Araujo wrote: [...] > This leaves (d), after long reflection, as my preferred choice, even though > I disliked the idea at first (and I fully expect Tarek to feel the same > way). I agree with Nick. It's regrettable, but this is probably the wisest course of action. Remove packaging from 3.3, create a PEP clearly defining what packaging should be, and aim to implement for 3.4. It seems to me that there's a lot of interest in the packaging module, but it's fragmented and people have very different goals and expectations. Developing a PEP will likely be a big task in itself, but I'd hope that a well-crafted PEP will provide something the various people with an interest could get behind and work together on, which might help ease the developer time issue. (Assuming, of course, that championing the PEP doesn't burn Éric out completely...) Paul. From chrism at plope.com Wed Jun 20 01:34:14 2012 From: chrism at plope.com (Chris McDonough) Date: Tue, 19 Jun 2012 19:34:14 -0400 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <4FE0F336.7030709@netwok.org> References: <4FE0F336.7030709@netwok.org> Message-ID: <4FE10C76.4080500@plope.com> On 06/19/2012 05:46 PM, Éric Araujo wrote: > Hi all, > > We need to make a decision about the packaging module in Python 3.3. > Please read this message and breathe deeply before replying :) ... > With beta coming, a way to deal with that unfortunate situation needs to > be found. We could (a) grant an exception to packaging to allow changes > after beta1; (b) keep packaging as it is now under a provisional status, > with due warnings that many things are expected to change; (c) remove > the unstable parts and deliver a subset that works (proposed by Tarek to > the Pyramid author on distutils-sig); (d) not release packaging as part > of Python 3.3 (I think that was also suggested on distutils-sig last > month). I think it'd be very wise to choose (d) here. We've lived so long without a credible packaging story that waiting one (or even two) more major release cycles isn't going to make a huge difference in the long run but including a version of packaging now which gets fixed in a rush would probably end up muddying the already dark waters of Python software distribution. - C From yselivanov.ml at gmail.com Wed Jun 20 02:11:26 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Tue, 19 Jun 2012 20:11:26 -0400 Subject: [Python-Dev] PEP 362 minor nits In-Reply-To: References: <569CD2C1-FDA8-4309-8F9A-931A889DE5C4@gmail.com> Message-ID: On 2012-06-19, at 4:17 PM, Jim Jewett wrote: >> I can tweak the PEP to make it more clear for those who don't know >> that staticmethods are not exactly methods, but do we really need that? > > I would prefer it, if only because it surprised me. When we do > distinguish between methods, staticmethod isn't usually the odd man > out. > > And I also agree that the implementation doesn't need to change > (except to add a comment), only the PEP. Actually, it appears we don't need those special checks (for classmethod and staticmethod) at all.
class Foo: @staticmethod def bar(): pass >>> Foo.bar <function Foo.bar at 0x...> >>> Foo().bar <function Foo.bar at 0x...> >>> Foo.__dict__['bar'] <staticmethod object at 0x...> So using the signature will be OK for 'Foo.bar' and 'Foo().bar', but not for 'Foo.__dict__['bar']' - which I think is fine (since staticmethod & classmethod instances are not callable) I'll just remove checks for static- and class-methods from the PEP signature() algorithm section. - Yury From ncoghlan at gmail.com Wed Jun 20 02:39:34 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 20 Jun 2012 10:39:34 +1000 Subject: [Python-Dev] pep 362 - 5th edition In-Reply-To: References: <5ECF5062-678E-4281-A89C-C2E14B2662DB@gmail.com> <4FE0A0E7.1020804@stoneleaf.us> <4FE0B0E5.2000405@stoneleaf.us> Message-ID: On Wed, Jun 20, 2012 at 3:53 AM, Yury Selivanov wrote: > On 2012-06-19, at 1:03 PM, Ethan Furman wrote: >> At some point it was suggested that Signature be put in provisionally so we could modify the API if needed -- are we doing that? > > Well, it doesn't have much of an API right now (just a few methods) Right, provisional API status is a fairly blunt instrument that we should only use if we can't find any other way to allow the standard library to progress. In this particular case, we have a superior alternative: distill the API down to the bare minimum that is needed to provide a more robust, cross-implementation format for describing callable signatures. We can then implement that bare minimum API for 3.3 as the foundation for higher level layered APIs that offer more flexibility, and/or capture more information about the callables. Further comments on the PEP and implementation: 1. The PEP should specify the constructor signatures for Signature and Parameter objects (I'd also prefer it if "kind" was permitted as a positional argument) 2. The constructor for Parameter objects should require that names for positional-only parameters start with "<" and end with ">" to ensure they can always be distinguished from standard parameters in signature string representations and in BoundArguments.parameters 3. The standard Signature constructor should accept an iterable of Parameter objects directly (with the return annotation as an optional keyword-only "annotation" argument) and enforce the following constraints: - permitted parameter binding order is strictly (POSITIONAL_ONLY, POSITIONAL_OR_KEYWORD, VAR_POSITIONAL, KEYWORD_ONLY, VAR_KEYWORD) - all parameter names must be distinct (otherwise bind() won't work properly) - if a positional only parameter is not named (param.name is None), it is given a name based on its position in the parameter list ("<0>", "<1>", etc) 4. The current Signature constructor should become a "from_function" class method With these changes, the following would become straightforward: >>> def f1(a): pass >>> str(signature(f1)) (a) >>> def f2(*args): a, = args >>> f2.__signature__ = Signature([Parameter("<a>", Parameter.POSITIONAL_ONLY)]) >>> str(signature(f2)) (<a>) >>> def f3(*args): a, = args >>> f3.__signature__ = Signature([Parameter(None, Parameter.POSITIONAL_ONLY)]) >>> str(signature(f3)) (<0>) 5. Given the updated constructor signature, we can revisit the question of immutable signature objects (even just using read-only properties for public attributes and exposing a dict proxy for the parameter list). Instead of mutating the parameter list, you would instead write code like: new_sig = Signature(old_sig.parameters[1:]) In my opinion, that's a *much* nicer approach than copying an existing signature object and mutating it. 6.
I think "return_annotation" can safely be abbreviated to just "annotation". The fact it is on the Signature object rather than an individual parameter is enough to identify it as the return annotation. 7. The idea of immutable Signature objects does highlight an annoyance with the "attribute may be missing" style APIs. To actually duplicate a signature correctly, including its return annotation (and assuming the attribute is renamed), you would have to do something like: try: note = {"annotation": old_sig.annotation} except AttributeError: note = {} new_sig = Signature(old_sig.parameters[1:], **note) There's an alternative approach to optional attributes that's often easier to work with: expose them as containers. In this case, since we want to make them easy to pass as keyword-only arguments, one way to achieve that would be expose an "optional" attribute on both Signature and Parameter objects. Then the above would be written as: new_sig = Signature(old_sig.parameters[1:], **old_sig.optional) And copying a Parameter would be: new_param = Parameter("new name", old_param.kind, **old_param.optional) If we even keep that at all for the initial version of the API, the direct "default" and "annotation" attributes would just be read-only properties that accessed the "optional" container (reporting AttributeError if the corresponding attribute was missing) 8. Not essential, but I suggest moving most of the parameter formatting details to a Parameter.__str__ method 9. The PEP should explicitly note that we're taking a deliberately strict approach to the notion of signature and parameter equivalence by assuming that all parameter names have semantic significance. Looser checks that ignore the names of positional and variable keyword parameters can be handled with Signature.bind() or by implementing custom key or comparison functions. 10. Similar to the discussion of convenience properties on Signature objects themselves, I now think we should drop the "args" and "kwargs" properties from the initial version of BoundArguments. Instead, I propose the following attributes: - arguments (mapping of parameter names to values supplied as arguments) - defaults (mapping of unbound parameter names with defaults to their default values) - unbound (set of unbound names, always empty for bind(), may have entries for bind_partial()) Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From yselivanov.ml at gmail.com Wed Jun 20 03:22:58 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Tue, 19 Jun 2012 21:22:58 -0400 Subject: [Python-Dev] pep 362 - 5th edition In-Reply-To: References: <5ECF5062-678E-4281-A89C-C2E14B2662DB@gmail.com> <4FE0A0E7.1020804@stoneleaf.us> <4FE0B0E5.2000405@stoneleaf.us> Message-ID: <8DC9B729-9254-4CCB-9291-033A3B62D9B7@gmail.com> On 2012-06-19, at 8:39 PM, Nick Coghlan wrote: > On Wed, Jun 20, 2012 at 3:53 AM, Yury Selivanov wrote: >> On 2012-06-19, at 1:03 PM, Ethan Furman wrote: >>> At some point it was suggested that Signature be put in provisionally so we could modify the API if needed -- are we doing that? >> >> Well, it doesn't have much of an API right now (just few methods) > > Right, provisional API status is a fairly blunt instrument that we > should only use if we can't find any other way to allow the standard > library to progress. 
>
> In this particular case, we have a superior alternative: distill the
> API down to the bare minimum that is needed to provide a more robust,
> cross-implementation format for describing callable signatures. We can
> then implement that bare minimum API for 3.3 as the foundation for
> higher level layered APIs that offer more flexibility, and/or capture
> more information about the callables.
>
> Further comments on the PEP and implementation:
>
> 1. The PEP should specify the constructor signatures for Signature and
> Parameter objects (I'd also prefer it if "kind" was permitted as a
> positional argument)

+1

> 2. The constructor for Parameter objects should require that names for
> positional-only parameters start with "<" and end with ">" to ensure
> they can always be distinguished from standard parameters in signature
> string representations and in BoundArguments.parameters

+1

> 3. The standard Signature constructor should accept an iterable of
> Parameter objects directly (with the return annotation as an optional
> keyword-only "annotation" argument) and enforce the following
> constraints:
> - permitted parameter binding order is strictly (POSITIONAL_ONLY,
> POSITIONAL_OR_KEYWORD, VAR_POSITIONAL, KEYWORD_ONLY, VAR_KEYWORD)
> - all parameter names must be distinct (otherwise bind() won't work properly)
> - if a positional only parameter is not named (param.name is None), it
> is given a name based on its position in the parameter list ("<0>",
> "<1>", etc)

+1

> 4. The current Signature constructor should become a "from_function"
> class method

+1

> With these changes, the following would become straightforward:
>
>>>> def f1(a): pass
>>>> str(signature(f1))
> (a)
>>>> def f2(*args): a, = args
>>>> f2.__signature__ = Signature([Parameter("<a>",
> Parameter.POSITIONAL_ONLY)])
>>>> str(signature(f2))
> (<a>)
>>>> def f3(*args): a, = args
>>>> f3.__signature__ = Signature([Parameter(None, Parameter.POSITIONAL_ONLY)])
>>>> str(signature(f3))
> (<0>)
>
> 5. Given the updated constructor signature, we can revisit the
> question of immutable signature objects (even just using read-only
> properties for public attributes and exposing a dict proxy for the
> parameter list). Instead of mutating the parameter list, you would
> instead write code like:
>     new_sig = Signature(old_sig.parameters[1:])

I think that emulating immutability here is a better decision than implementing a truly immutable object in C. Read-only properties and a dict-proxy for Signature.parameters will work.

> In my opinion, that's a *much* nicer approach than copying an existing
> signature object and mutating it.

+1

> 6. I think "return_annotation" can safely be abbreviated to just
> "annotation". The fact it is on the Signature object rather than an
> individual parameter is enough to identify it as the return
> annotation.

I'm not sure about this one. 'return_annotation' is a very self-descriptive and clear name.

> 7. The idea of immutable Signature objects does highlight an annoyance
> with the "attribute may be missing" style APIs. To actually duplicate
> a signature correctly, including its return annotation (and assuming
> the attribute is renamed), you would have to do something like:
>
>     try:
>         note = {"annotation": old_sig.annotation}
>     except AttributeError:
>         note = {}
>     new_sig = Signature(old_sig.parameters[1:], **note)
>
> There's an alternative approach to optional attributes that's often
> easier to work with: expose them as containers. In this case, since we
> want to make them easy to pass as keyword-only arguments, one way to
> achieve that would be to expose an "optional" attribute on both Signature
> and Parameter objects. Then the above would be written as:
>
>     new_sig = Signature(old_sig.parameters[1:], **old_sig.optional)
>
> And copying a Parameter would be:
>
>     new_param = Parameter("new name", old_param.kind, **old_param.optional)
>
> If we even keep that at all for the initial version of the API, the
> direct "default" and "annotation" attributes would just be read-only
> properties that accessed the "optional" container (reporting
> AttributeError if the corresponding attribute was missing)

+0. I think that 'optional' is a bit of an unusual attribute for the stdlib, but it will work if we make Signature immutable.

> 8. Not essential, but I suggest moving most of the parameter
> formatting details to a Parameter.__str__ method

+1

> 9. The PEP should explicitly note that we're taking a deliberately
> strict approach to the notion of signature and parameter equivalence
> by assuming that all parameter names have semantic significance.
> Looser checks that ignore the names of positional and variable keyword
> parameters can be handled with Signature.bind() or by implementing
> custom key or comparison functions.

+1

> 10. Similar to the discussion of convenience properties on Signature
> objects themselves, I now think we should drop the "args" and "kwargs"

Big -1 on this one. Look at the current implementation of those properties - it's quite complex. One of the major points of the new API is to allow easy modifications of arguments. Without .args & .kwargs it will be a PITA to call a function.

Imagine that the "check types" example from the PEP is modified to coerce arguments to specified types. It won't be possible to do without .args & .kwargs. I, for instance, use this API to bind, validate, and coerce arguments for RPC calls. The whole point for me to work on this PEP was to make these types of functionality easy to implement.

> properties from the initial version of BoundArguments. Instead, I
> propose the following attributes:
> - arguments (mapping of parameter names to values supplied as arguments)
> - defaults (mapping of unbound parameter names with defaults to
> their default values)

Why would you need 'defaults'? It's very easy to compile that list manually (and I believe the use cases will be limited)

> - unbound (set of unbound names, always empty for bind(), may have
> entries for bind_partial())

This may be practical. But again - those are easy to deduce from 'BoundArguments.arguments' and 'Signature.parameters'.

In summary - I like everything you've proposed, except comment #10.

- Yury

From solipsis at pitrou.net  Wed Jun 20 03:23:38 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 20 Jun 2012 03:23:38 +0200
Subject: [Python-Dev] Status of packaging in 3.3
References: <4FE0F336.7030709@netwok.org>
Message-ID: <20120620032338.2008f28d@pitrou.net>

On Tue, 19 Jun 2012 17:46:30 -0400
Éric Araujo wrote:
> I don't think (a) would give us enough time; we really want a few
> months (and releases) to hash out the API (most notably with the pip and
> buildout developers) and clean the bdist situation. Likewise (c) would
> require developer (my) time that is currently in short supply. (b) also
> requires time and would make development harder, not to mention probable
> user pain. This leaves (d), after long reflection, as my preferred
> choice, even though I disliked the idea at first (and I fully expect
> Tarek to feel the same way).

The question is what will happen after 3.3. There doesn't seem to be a lot of activity around the project, does it?

Regards

Antoine.

From yselivanov at gmail.com  Wed Jun 20 03:29:29 2012
From: yselivanov at gmail.com (Yury Selivanov)
Date: Tue, 19 Jun 2012 21:29:29 -0400
Subject: [Python-Dev] pep 362 - 5th edition
In-Reply-To: <8DC9B729-9254-4CCB-9291-033A3B62D9B7@gmail.com>
References: <5ECF5062-678E-4281-A89C-C2E14B2662DB@gmail.com>
	<4FE0A0E7.1020804@stoneleaf.us> <4FE0B0E5.2000405@stoneleaf.us>
	<8DC9B729-9254-4CCB-9291-033A3B62D9B7@gmail.com>
Message-ID:

On 2012-06-19, at 9:22 PM, Yury Selivanov wrote:
> On 2012-06-19, at 8:39 PM, Nick Coghlan wrote:
>
>> 2. The constructor for Parameter objects should require that names for
>> positional-only parameters start with "<" and end with ">" to ensure
>> they can always be distinguished from standard parameters in signature
>> string representations and in BoundArguments.parameters
>
> +1

Actually, can we just make positional-only parameters render brackets in their/Signature's __str__ methods? I think Parameter.kind should be enough, without adding additional obstacles.

- Yury

From ncoghlan at gmail.com  Wed Jun 20 04:06:18 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 20 Jun 2012 12:06:18 +1000
Subject: [Python-Dev] pep 362 - 5th edition
In-Reply-To:
References: <5ECF5062-678E-4281-A89C-C2E14B2662DB@gmail.com>
	<4FE0A0E7.1020804@stoneleaf.us> <4FE0B0E5.2000405@stoneleaf.us>
	<8DC9B729-9254-4CCB-9291-033A3B62D9B7@gmail.com>
Message-ID:

On Wed, Jun 20, 2012 at 11:29 AM, Yury Selivanov wrote:
> On 2012-06-19, at 9:22 PM, Yury Selivanov wrote:
>
>> On 2012-06-19, at 8:39 PM, Nick Coghlan wrote:
>>
>>> 2. The constructor for Parameter objects should require that names for
>>> positional-only parameters start with "<" and end with ">" to ensure
>>> they can always be distinguished from standard parameters in signature
>>> string representations and in BoundArguments.parameters
>>
>> +1
>
> Actually, can we just make positional-only parameters render brackets
> in their/Signature's __str__ methods? I think Parameter.kind should be
> enough, without adding additional obstacles.

True, the check for name clashes in Signature (and the implied numeric "names") will cover the BoundArguments.parameters case

Cheers, Nick.

-- 
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From yselivanov at gmail.com  Wed Jun 20 04:15:04 2012
From: yselivanov at gmail.com (Yury Selivanov)
Date: Tue, 19 Jun 2012 22:15:04 -0400
Subject: [Python-Dev] pep 362 - 5th edition
In-Reply-To:
References: <5ECF5062-678E-4281-A89C-C2E14B2662DB@gmail.com>
	<4FE0A0E7.1020804@stoneleaf.us> <4FE0B0E5.2000405@stoneleaf.us>
	<8DC9B729-9254-4CCB-9291-033A3B62D9B7@gmail.com>
Message-ID: <25AB6DBE-0E77-484B-B4F4-95AEC2DFAC01@gmail.com>

On 2012-06-19, at 10:06 PM, Nick Coghlan wrote:
> On Wed, Jun 20, 2012 at 11:29 AM, Yury Selivanov wrote:
>> On 2012-06-19, at 9:22 PM, Yury Selivanov wrote:
>>
>>> On 2012-06-19, at 8:39 PM, Nick Coghlan wrote:
>>>
>>>> 2. The constructor for Parameter objects should require that names for
>>>> positional-only parameters start with "<" and end with ">" to ensure
>>>> they can always be distinguished from standard parameters in signature
>>>> string representations and in BoundArguments.parameters
>>>
>>> +1
>>
>> Actually, can we just make positional-only parameters render brackets
>> in their/Signature's __str__ methods? I think Parameter.kind should be
>> enough, without adding additional obstacles.
>
> True, the check for name clashes in Signature (and the implied numeric
> "names") will cover the BoundArguments.parameters case

Nick, I also would like to keep Parameter.name required. I understand that *currently* we have no parameter names specified for builtin methods, but we don't have any mechanisms to introspect them either.

Now, in 3.3 (I hope) we introduce a brand new mechanism, and, probably, in 3.4 we'll have a way to define Signatures for builtins. Why not do it right? This whole positional-only case is just a weird anachronism of CPython.

- Yury

From ncoghlan at gmail.com  Wed Jun 20 04:16:14 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 20 Jun 2012 12:16:14 +1000
Subject: [Python-Dev] pep 362 - 5th edition
In-Reply-To: <8DC9B729-9254-4CCB-9291-033A3B62D9B7@gmail.com>
References: <5ECF5062-678E-4281-A89C-C2E14B2662DB@gmail.com>
	<4FE0A0E7.1020804@stoneleaf.us> <4FE0B0E5.2000405@stoneleaf.us>
	<8DC9B729-9254-4CCB-9291-033A3B62D9B7@gmail.com>
Message-ID:

On Wed, Jun 20, 2012 at 11:22 AM, Yury Selivanov wrote:
> On 2012-06-19, at 8:39 PM, Nick Coghlan wrote:
>> 6. I think "return_annotation" can safely be abbreviated to just
>> "annotation". The fact it is on the Signature object rather than an
>> individual parameter is enough to identify it as the return
>> annotation.
>
> I'm not sure about this one. 'return_annotation' is a very
> self-descriptive and clear name.

I'm not too worried about that one. If you prefer the longer name, feel free to keep it.

>> If we even keep that at all for the initial version of the API, the
>> direct "default" and "annotation" attributes would just be read-only
>> properties that accessed the "optional" container (reporting
>> AttributeError if the corresponding attribute was missing)
>
> +0. I think that 'optional' is a bit of an unusual attribute for the stdlib,
> but it will work if we make Signature immutable.

The name isn't great, but the mapping is a lot more convenient when you need to handle the case of attributes potentially being missing.

>> 10. Similar to the discussion of convenience properties on Signature
>> objects themselves, I now think we should drop the "args" and "kwargs"
>
> Big -1 on this one. Look at the current implementation of those
> properties - it's quite complex. One of the major points of the new
> API is to allow easy modifications of arguments. Without .args &
> .kwargs it will be a PITA to call a function.
>
> Imagine that the "check types" example from the PEP is modified
> to coerce arguments to specified types. It won't be possible
> to do without .args & .kwargs. I, for instance, use this API to bind,
> validate, and coerce arguments for RPC calls. The whole point for me
> to work on this PEP was to make these types of functionality easy to
> implement.
The one thing that slightly concerns me is that given:

    def f(a): pass
    s = signature(f)

The following produce the same result:

    s.bind(1)
    s.bind(a=1)

That said, I guess if a parameter is proclaiming itself to be POSITIONAL_OR_KEYWORD, then it really shouldn't care which way the arguments are passed, so a stated preference for binding those as positional parameters is fine.

>> properties from the initial version of BoundArguments. Instead, I
>> propose the following attributes:
>> - arguments (mapping of parameter names to values supplied as arguments)
>> - defaults (mapping of unbound parameter names with defaults to
>> their default values)
>
> Why would you need 'defaults'? It's very easy to compile that list
> manually (and I believe the use cases will be limited)
>
>> - unbound (set of unbound names, always empty for bind(), may have
>> entries for bind_partial())
>
> This may be practical. But again - those are easy to deduce from
> 'BoundArguments.arguments' and 'Signature.parameters'.

OK, leave them out for now. Perhaps add a simple example showing how to calculate them if you want them? (The most obvious use case I can see is calculating a new signature after using bind_partial)

Cheers, Nick.

-- 
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From ncoghlan at gmail.com  Wed Jun 20 04:31:45 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 20 Jun 2012 12:31:45 +1000
Subject: [Python-Dev] pep 362 - 5th edition
In-Reply-To: <25AB6DBE-0E77-484B-B4F4-95AEC2DFAC01@gmail.com>
References: <5ECF5062-678E-4281-A89C-C2E14B2662DB@gmail.com>
	<4FE0A0E7.1020804@stoneleaf.us> <4FE0B0E5.2000405@stoneleaf.us>
	<8DC9B729-9254-4CCB-9291-033A3B62D9B7@gmail.com>
	<25AB6DBE-0E77-484B-B4F4-95AEC2DFAC01@gmail.com>
Message-ID:

On Wed, Jun 20, 2012 at 12:15 PM, Yury Selivanov wrote:
> On 2012-06-19, at 10:06 PM, Nick Coghlan wrote:
>> True, the check for name clashes in Signature (and the implied numeric
>> "names") will cover the BoundArguments.parameters case
>
> Nick, I also would like to keep Parameter.name required.
> I understand that *currently* we have no parameter names specified
> for builtin methods, but we don't have any mechanisms to introspect
> them either.

Sure, so long as the name requirement is enforced at construction time - the current code will happily accept None as the first argument to Parameter.

> Now, in 3.3 (I hope) we introduce a brand new mechanism, and, probably, in
> 3.4 we'll have a way to define Signatures for builtins. Why not do it right?
> This whole positional-only case is just a weird anachronism of CPython.

No, it's a characteristic of any FFI - not all target languages will support keyword arguments. CPython just happens to use such an FFI as part of its implementation (due to the nature of the PyArg_Parse* APIs).
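A quick illustration of that FFI limitation: most builtins accept their arguments only positionally, so the keyword spelling simply fails (the exact wording of the error message may vary between versions):

>>> ord("a")
97
>>> ord(c="a")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: ord() takes no keyword arguments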
There have been serious (albeit failed so far) attempts at coming up with an acceptable language level syntax for positional only arguments on python-ideas, since they're a useful concept when you want to avoid name clashes with arbitrary keyword arguments, and you *can* effectively implement them in Python using a nested function call to get the correct kind of error:

    def _positional_only(a, b, c):
        return a, b, c

    def f(*args, **kwds):
        # "a", "b", "c" are supported as values in "kwds"
        a, b, c = _positional_only(*args)

With PEP 362, we can at least *represent* the above API cleanly, even though there's still no dedicated syntax:

>>> def _param(name): return Parameter(name, Parameter.POSITIONAL_ONLY)
>>> s = Signature([_param("a"), _param("b"), _param("c"),
                   Parameter("kwds", Parameter.VAR_KEYWORD)])
>>> str(s)
(<a>, <b>, <c>, **kwds)

If a syntax for positional only parameters is ever defined in the future, then the Signature __str__ implementation can be updated to use it at the time.

Cheers, Nick.

-- 
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From yselivanov.ml at gmail.com  Wed Jun 20 04:36:58 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Tue, 19 Jun 2012 22:36:58 -0400
Subject: [Python-Dev] pep 362 - 5th edition
In-Reply-To: <8DC9B729-9254-4CCB-9291-033A3B62D9B7@gmail.com>
References: <5ECF5062-678E-4281-A89C-C2E14B2662DB@gmail.com>
	<4FE0A0E7.1020804@stoneleaf.us> <4FE0B0E5.2000405@stoneleaf.us>
	<8DC9B729-9254-4CCB-9291-033A3B62D9B7@gmail.com>
Message-ID: <3D85C52B-D49A-4D61-AA8A-969BA054F525@gmail.com>

On 2012-06-19, at 9:22 PM, Yury Selivanov wrote:
> On 2012-06-19, at 8:39 PM, Nick Coghlan wrote:
>> 7. The idea of immutable Signature objects does highlight an annoyance
>> with the "attribute may be missing" style APIs. To actually duplicate
>> a signature correctly, including its return annotation (and assuming
>> the attribute is renamed), you would have to do something like:
>>
>>     try:
>>         note = {"annotation": old_sig.annotation}
>>     except AttributeError:
>>         note = {}
>>     new_sig = Signature(old_sig.parameters[1:], **note)

BTW, we don't have slices for OrderedDict. Since the slice object is not hashable, we can implement it safely. I can create an issue (and draft implementation), as I think it'd be quite a useful feature.

- Yury

From yselivanov.ml at gmail.com  Wed Jun 20 05:04:22 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Tue, 19 Jun 2012 23:04:22 -0400
Subject: [Python-Dev] pep 362 - 5th edition
In-Reply-To:
References: <5ECF5062-678E-4281-A89C-C2E14B2662DB@gmail.com>
	<4FE0A0E7.1020804@stoneleaf.us> <4FE0B0E5.2000405@stoneleaf.us>
	<8DC9B729-9254-4CCB-9291-033A3B62D9B7@gmail.com>
Message-ID: <03E87583-BF10-4570-8F1A-32F0AFDA4DD8@gmail.com>

On 2012-06-19, at 10:16 PM, Nick Coghlan wrote:
> On Wed, Jun 20, 2012 at 11:22 AM, Yury Selivanov wrote:
>>> 10. Similar to the discussion of convenience properties on Signature
>>> objects themselves, I now think we should drop the "args" and "kwargs"
>>
>> Big -1 on this one. Look at the current implementation of those
>> properties - it's quite complex. One of the major points of the new
>> API is to allow easy modifications of arguments. Without .args &
>> .kwargs it will be a PITA to call a function.
>>
>> Imagine that the "check types" example from the PEP is modified
>> to coerce arguments to specified types. It won't be possible
>> to do without .args & .kwargs. I, for instance, use this API to bind,
>> validate, and coerce arguments for RPC calls.
>> The whole point for me
>> to work on this PEP was to make these types of functionality easy to
>> implement.
>
> The one thing that slightly concerns me is that given:
>
>     def f(a): pass
>     s = signature(f)
>
> The following produce the same result:
>
>     s.bind(1)
>     s.bind(a=1)
>
> That said, I guess if a parameter is proclaiming itself to be
> POSITIONAL_OR_KEYWORD, then it really shouldn't care which way the
> arguments are passed, so a stated preference for binding those as
> positional parameters is fine.

Right.

>>> properties from the initial version of BoundArguments. Instead, I
>>> propose the following attributes:
>>> - arguments (mapping of parameter names to values supplied as arguments)
>>> - defaults (mapping of unbound parameter names with defaults to
>>> their default values)
>>
>> Why would you need 'defaults'? It's very easy to compile that list
>> manually (and I believe the use cases will be limited)
>>
>>> - unbound (set of unbound names, always empty for bind(), may have
>>> entries for bind_partial())
>>
>> This may be practical. But again - those are easy to deduce from
>> 'BoundArguments.arguments' and 'Signature.parameters'.
>
> OK, leave them out for now. Perhaps add a simple example showing how
> to calculate them if you want them? (The most obvious use case I can
> see is calculating a new signature after using bind_partial)

OK, will add an example to the PEP

- Yury

From yselivanov.ml at gmail.com  Wed Jun 20 05:07:38 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Tue, 19 Jun 2012 23:07:38 -0400
Subject: [Python-Dev] pep 362 - 5th edition
In-Reply-To:
References: <5ECF5062-678E-4281-A89C-C2E14B2662DB@gmail.com>
	<4FE0A0E7.1020804@stoneleaf.us> <4FE0B0E5.2000405@stoneleaf.us>
	<8DC9B729-9254-4CCB-9291-033A3B62D9B7@gmail.com>
Message-ID:

Nick, I started a new branch to experiment with immutable Signatures. So far, almost everything works (except a couple of unit tests that modify now-immutable Parameters & Signatures):

https://bitbucket.org/1st1/cpython/changesets/tip/branch(%22pep362-2%22)

I hope tomorrow we get some feedback on this, and if it's positive - I'll finish off the implementation and update the PEP. I hope we still can make it to 3.3.

- Yury

From ncoghlan at gmail.com  Wed Jun 20 05:24:50 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 20 Jun 2012 13:24:50 +1000
Subject: [Python-Dev] pep 362 - 5th edition
In-Reply-To: <3D85C52B-D49A-4D61-AA8A-969BA054F525@gmail.com>
References: <5ECF5062-678E-4281-A89C-C2E14B2662DB@gmail.com>
	<4FE0A0E7.1020804@stoneleaf.us> <4FE0B0E5.2000405@stoneleaf.us>
	<8DC9B729-9254-4CCB-9291-033A3B62D9B7@gmail.com>
	<3D85C52B-D49A-4D61-AA8A-969BA054F525@gmail.com>
Message-ID:

On Wed, Jun 20, 2012 at 12:36 PM, Yury Selivanov wrote:
> On 2012-06-19, at 9:22 PM, Yury Selivanov wrote:
>> On 2012-06-19, at 8:39 PM, Nick Coghlan wrote:
>>> 7. The idea of immutable Signature objects does highlight an annoyance
>>> with the "attribute may be missing" style APIs. To actually duplicate
>>> a signature correctly, including its return annotation (and assuming
>>> the attribute is renamed), you would have to do something like:
>>>
>>>     try:
>>>         note = {"annotation": old_sig.annotation}
>>>     except AttributeError:
>>>         note = {}
>>>     new_sig = Signature(old_sig.parameters[1:], **note)
>
> BTW, we don't have slices for OrderedDict. Since the slice object is
> not hashable, we can implement it safely. I can create an issue (and draft
> implementation), as I think it'd be quite a useful feature.
No need, my example was just wrong; it should be:

    new_sig = Signature(list(old_sig.parameters.values())[1:])

The constructor accepts an iterable of Parameter objects rather than a mapping.

Cheers, Nick.

-- 
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From yselivanov.ml at gmail.com  Wed Jun 20 05:28:29 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Tue, 19 Jun 2012 23:28:29 -0400
Subject: [Python-Dev] pep 362 - 5th edition
In-Reply-To:
References: <5ECF5062-678E-4281-A89C-C2E14B2662DB@gmail.com>
	<4FE0A0E7.1020804@stoneleaf.us> <4FE0B0E5.2000405@stoneleaf.us>
	<8DC9B729-9254-4CCB-9291-033A3B62D9B7@gmail.com>
	<3D85C52B-D49A-4D61-AA8A-969BA054F525@gmail.com>
Message-ID:

On 2012-06-19, at 11:24 PM, Nick Coghlan wrote:
> On Wed, Jun 20, 2012 at 12:36 PM, Yury Selivanov wrote:
>> On 2012-06-19, at 9:22 PM, Yury Selivanov wrote:
>>> On 2012-06-19, at 8:39 PM, Nick Coghlan wrote:
>>>> 7. The idea of immutable Signature objects does highlight an annoyance
>>>> with the "attribute may be missing" style APIs. To actually duplicate
>>>> a signature correctly, including its return annotation (and assuming
>>>> the attribute is renamed), you would have to do something like:
>>>>
>>>>     try:
>>>>         note = {"annotation": old_sig.annotation}
>>>>     except AttributeError:
>>>>         note = {}
>>>>     new_sig = Signature(old_sig.parameters[1:], **note)
>>
>> BTW, we don't have slices for OrderedDict. Since the slice object is
>> not hashable, we can implement it safely. I can create an issue (and draft
>> implementation), as I think it'd be quite a useful feature.
>
> No need, my example was just wrong; it should be:
>
>     new_sig = Signature(list(old_sig.parameters.values())[1:])
>
> The constructor accepts an iterable of Parameter objects rather than a mapping.

That's the code I've ended up with:

    sig = signature(obj.__func__)
    return Signature(OrderedDict(tuple(sig.parameters.items())[1:]),
                     **sig.optional)

Still looks better than creating implicit & explicit copies ;)

As for slices support in OrderedDict -- it would return values, so it won't solve the problem anyways.

- Yury

From yselivanov.ml at gmail.com  Wed Jun 20 05:51:56 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Tue, 19 Jun 2012 23:51:56 -0400
Subject: [Python-Dev] pep 362 - 5th edition
In-Reply-To:
References: <5ECF5062-678E-4281-A89C-C2E14B2662DB@gmail.com>
	<4FE0A0E7.1020804@stoneleaf.us> <4FE0B0E5.2000405@stoneleaf.us>
	<8DC9B729-9254-4CCB-9291-033A3B62D9B7@gmail.com>
Message-ID: <6BFC3C69-97DA-41AB-BDAC-6EDC10D2AFDA@gmail.com>

On 2012-06-19, at 10:16 PM, Nick Coghlan wrote:
> On Wed, Jun 20, 2012 at 11:22 AM, Yury Selivanov wrote:
>> On 2012-06-19, at 8:39 PM, Nick Coghlan wrote:
>>> If we even keep that at all for the initial version of the API, the
>>> direct "default" and "annotation" attributes would just be read-only
>>> properties that accessed the "optional" container (reporting
>>> AttributeError if the corresponding attribute was missing)
>>
>> +0. I think that 'optional' is a bit of an unusual attribute for the stdlib,
>> but it will work if we make Signature immutable.
>
> The name isn't great, but the mapping is a lot more convenient when
> you need to handle the case of attributes potentially being missing.

What if instead of 'optional', we have 'base_signature' (or 'from_signature')?
    sig = signature(func)
    params = OrderedDict(tuple(sig.parameters.items())[1:])
    new_sig = Signature(params, base_signature=sig)

And for Parameter:

    param = sig.parameters['foo']

    param1 = Parameter('bar', base_parameter=param)
    param2 = Parameter('spam', annotation=int, base_parameter=param)
    param3 = Parameter(base_parameter=param)
    param4 = Parameter(default=42, base_parameter=param)

So 'base_parameter' will be a template from which Parameter's constructor will copy the missing arguments.

- Yury

From skippy.hammond at gmail.com  Wed Jun 20 06:16:23 2012
From: skippy.hammond at gmail.com (Mark Hammond)
Date: Wed, 20 Jun 2012 14:16:23 +1000
Subject: [Python-Dev] PEP 397 - Last Comments
In-Reply-To:
References:
Message-ID: <4FE14E97.2020302@gmail.com>

Sorry, but I missed the announcement of an updated PEP. It looks good to me! Also, I see no reason not to always use a 32bit version of the launcher, other than (a) the 64bit code already exists and works, and (b) it might mean it is no longer possible to do a complete build of a 64bit Python without the 32bit compilers installed. But (b) is really only a theoretical problem, so I think in practice it would be fine either way.

Thanks to Martin for updating it - I agree it is vastly improved!

Cheers,

Mark

On 19/06/2012 2:31 PM, Brian Curtin wrote:
> Martin approached me earlier and requested that I act as PEP czar for
> 397. I haven't been involved in the writing of the PEP and have been
> mostly observing from the outside, so I accepted and hope to get this
> wrapped up quickly and implemented in time for the beta. The PEP is
> pretty complete, but there are a few outstanding issues.
>
> On Mon, Jun 18, 2012 at 1:05 PM, Terry Reedy wrote:
>> "Independent installations will always only overwrite newer versions of the
>> launcher with older versions." 'always only' is a bit awkward and the
>> sentence looks backwards to me. I would expect only overwriting older
>> versions with newer versions.
>
> Agreed, I would expect the same. I would think taking out the word
> "only" and then flipping newer and older in the sentence would correct
> it.
>
> On Mon, Jun 18, 2012 at 1:05 PM, Terry Reedy wrote:
>> These seem contradictory:
>>
>> "The 32-bit distribution of Python will not install a 32-bit version of the
>> launcher on a 64-bit system."
>>
>> I presume this means that it will install the 64-bit version and that there
>> will always be only one version of the launcher on the system.
>>
>> "On 64bit Windows with both 32bit and 64bit implementations of the same
>> (major.minor) Python version installed, the 64bit version will always be
>> preferred. This will be true for both 32bit and 64bit implementations of
>> the launcher - a 32bit launcher will prefer to execute a 64bit Python
>> installation of the specified version if available."
>>
>> This implies to me that the 32bit installation *will* install a 32bit
>> launcher and that there could be both versions of the launcher installed.
>
> I took that as covering an independently-installed launcher.
>
> You could always install your own 32-bit launcher, and it'd prefer to
> launch a binary matching the machine type. So yes, there could be
> multiple launchers installed for different machine types, and I'm not
> sure why we'd want to (or how we could) prevent people from installing
> them. You could have a 64-bit launcher available system-wide in your
> Windows folder, then you could have a 32-bit launcher running out of
> C:\Users\Terry for some purposes.
>
> Martin - is that correct?
>
> ===
>
> Outside of Terry's concerns, I find the updated PEP almost ready to go
> as-is. Many of the updates were in line with what Martin and I briefly
> talked about at PyCon, and I believe some of them came out of previous
> PEP discussions on here, so I see nothing unexpected at this point.
>
> My only additional comment would be to have the "Configuration file"
> implementation details supplemented with a readable example of where
> the py.ini file should be placed. On my machine that is
> "C:\Users\brian\AppData\Local", rather than making people have to run
> that parameter through the listed function via pywin32.
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/skippy.hammond%40gmail.com

From guido at python.org  Wed Jun 20 06:36:35 2012
From: guido at python.org (Guido van Rossum)
Date: Tue, 19 Jun 2012 21:36:35 -0700
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To:
References: <4FE0F336.7030709@netwok.org>
Message-ID:

Nick nailed it (again).

On Tue, Jun 19, 2012 at 3:14 PM, Nick Coghlan wrote:
> Reverting and writing a full packaging PEP for 3.4 sounds like a wise
> course of action to me.
>
> Regards,
> Nick.
> --
> Sent from my phone, thus the relative brevity :)
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org

-- 
--Guido van Rossum (python.org/~guido)

From ncoghlan at gmail.com  Wed Jun 20 06:39:41 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 20 Jun 2012 14:39:41 +1000
Subject: [Python-Dev] pep 362 - 5th edition
In-Reply-To:
References: <5ECF5062-678E-4281-A89C-C2E14B2662DB@gmail.com>
	<4FE0A0E7.1020804@stoneleaf.us> <4FE0B0E5.2000405@stoneleaf.us>
	<8DC9B729-9254-4CCB-9291-033A3B62D9B7@gmail.com>
	<3D85C52B-D49A-4D61-AA8A-969BA054F525@gmail.com>
Message-ID:

On Wed, Jun 20, 2012 at 1:28 PM, Yury Selivanov wrote:
> On 2012-06-19, at 11:24 PM, Nick Coghlan wrote:
>> The constructor accepts an iterable of Parameter objects rather than a mapping.
>
> That's the code I've ended up with:
>
>     sig = signature(obj.__func__)
>     return Signature(OrderedDict(tuple(sig.parameters.items())[1:]),
>                      **sig.optional)

Why require a mapping as the argument? A simple iterable of Parameter objects seems like a more convenient constructor API to me. The constructor can take care of building the mapping from that internally via:

    param_map = OrderedDict((param.name, param) for param in parameters)

> Still looks better than creating implicit & explicit copies ;)

Indeed :)

> As for slices support in OrderedDict -- it would return values, so
> it won't solve the problem anyways.

You wouldn't want to do it anyway - while slices happen to not be hashable *now*, there's no strong reason behind that.

Cheers, Nick.

-- 
Nick Coghlan | ncoghlan at gmail.com |
Brisbane, Australia

From ncoghlan at gmail.com  Wed Jun 20 06:55:54 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 20 Jun 2012 14:55:54 +1000
Subject: [Python-Dev] pep 362 - 5th edition
In-Reply-To: <6BFC3C69-97DA-41AB-BDAC-6EDC10D2AFDA@gmail.com>
References: <5ECF5062-678E-4281-A89C-C2E14B2662DB@gmail.com>
	<4FE0A0E7.1020804@stoneleaf.us> <4FE0B0E5.2000405@stoneleaf.us>
	<8DC9B729-9254-4CCB-9291-033A3B62D9B7@gmail.com>
	<6BFC3C69-97DA-41AB-BDAC-6EDC10D2AFDA@gmail.com>
Message-ID:

On Wed, Jun 20, 2012 at 1:51 PM, Yury Selivanov wrote:
> What if instead of 'optional', we have 'base_signature'
> (or 'from_signature')?
>
>     sig = signature(func)
>     params = OrderedDict(tuple(sig.parameters.items())[1:])
>     new_sig = Signature(params, base_signature=sig)
>
> And for Parameter:
>
>     param = sig.parameters['foo']
>
>     param1 = Parameter('bar', base_parameter=param)
>     param2 = Parameter('spam', annotation=int, base_parameter=param)
>     param3 = Parameter(base_parameter=param)
>     param4 = Parameter(default=42, base_parameter=param)
>
> So 'base_parameter' will be a template from which Parameter's constructor
> will copy the missing arguments.

Good thought (and better than my initial idea), but I'd follow the model of namedtuple._replace and make it a separate instance method:

    sig = signature(f)
    new_sig = sig.replace(parameters=list(sig.parameters.values())[1:])

    param = sig.parameters['foo']
    param1 = param.replace(name='bar')
    param2 = param.replace(name='spam', annotation=int)
    param3 = param.replace()  # namedtuple._replace also allows this edge case
    param4 = param.replace(default=42)

Such a copy-and-override method should be the last piece needed to make immutable Signature objects a viable approach.

Cheers, Nick.

-- 
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From ncoghlan at gmail.com  Wed Jun 20 07:00:52 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 20 Jun 2012 15:00:52 +1000
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To: <20120620032338.2008f28d@pitrou.net>
References: <4FE0F336.7030709@netwok.org> <20120620032338.2008f28d@pitrou.net>
Message-ID:

On Wed, Jun 20, 2012 at 11:23 AM, Antoine Pitrou wrote:
> The question is what will happen after 3.3. There doesn't seem to be a
> lot of activity around the project, does it?

I think the desire is there, but there are enough "good enough" approaches around that people find other more immediately satisfying things to do with their time (hammering out consensus on packaging issues takes quite a bit of lead time to fully resolve).

This will make a good guinea pig for my "release alphas early" proposal, though :)

Cheers, Nick.

-- 
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From martin at v.loewis.de  Wed Jun 20 08:31:42 2012
From: martin at v.loewis.de ("Martin v. Löwis")
Date: Wed, 20 Jun 2012 08:31:42 +0200
Subject: [Python-Dev] PEP 397 - Last Comments
In-Reply-To: <4FE14E97.2020302@gmail.com>
References: <4FE14E97.2020302@gmail.com>
Message-ID: <4FE16E4E.7050400@v.loewis.de>

> It looks good to me! Also, I see no reason not to always use a 32bit
> version of the launcher other than

I'll change it, then - the strong reason *for* always using a 32-bit launcher is packaging, as the 32-bit installer would otherwise have to include both a 32-bit launcher and a 64-bit launcher, and install the right one depending on what the target system is.

> Thanks to Martin for updating it - I agree it is vastly improved!

Thanks!
Martin

From victor.stinner at gmail.com  Wed Jun 20 08:36:42 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Wed, 20 Jun 2012 08:36:42 +0200
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To: <4FE0F336.7030709@netwok.org>
References: <4FE0F336.7030709@netwok.org>
Message-ID:

What is the status of the third party module on PyPI (distutils2)? Does it contain all fixes done in the packaging module? Does it have exactly the same API? Does it support Python 2.5 to 3.3, or maybe also 2.4?

How is the distutils2 module installed? Installed manually? Using pip or setuptools? Is distutils2 included in some Linux distributions?

If it's simple to install distutils2, it's not a big deal to not have it in the stdlib.

--

It is sometimes a pain to have a buggy module in Python. For example, I got a lot of issues with the subprocess module of Python 2.5. I started to include a copy of the subprocess module from Python 2.7 in my projects to work around these issues.

In my previous work we did also backport various modules to get the last version of the xmlrpc client on Python 2.5 (especially for HTTP/1.1, to not open a new TCP socket at each request).

I don't want to reopen the discussion "the stdlib should be an external project". I just want to confirm that it is better to wait until important users of the packaging API have finished their work (on porting their project to distutils2, especially pip), before we can declare the module (and its API) as stable.

By the way, what is the status of "pip using distutils2"?

Victor

From g.brandl at gmx.net  Wed Jun 20 09:09:19 2012
From: g.brandl at gmx.net (Georg Brandl)
Date: Wed, 20 Jun 2012 09:09:19 +0200
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To: <4FE0F336.7030709@netwok.org>
References: <4FE0F336.7030709@netwok.org>
Message-ID:

On 19.06.2012 23:46, Éric Araujo wrote:

Thanks for the detailed explanation, Éric. Just quoting this paragraph, since it contains the possibilities to judge:

> With beta coming, a way to deal with that unfortunate situation needs
> to be found. We could (a) grant an exception to packaging to allow
> changes after beta1; (b) keep packaging as it is now under a provisional
> status, with due warnings that many things are expected to change; (c)
> remove the unstable parts and deliver a subset that works (proposed by
> Tarek to the Pyramid author on distutils-sig); (d) not release packaging
> as part of Python 3.3 (I think that was also suggested on distutils-sig
> last month).

(a) and (b) are certainly out of the question. packaging must be solid when shipped, and there's not enough time. (c) might work (depending on what features we're talking about), but you say yourself that you won't be able to spend the time required, so I agree with basically everybody else that (d) is the way to go (together with a PEP).

Georg

From dirkjan at ochtman.nl  Wed Jun 20 09:32:09 2012
From: dirkjan at ochtman.nl (Dirkjan Ochtman)
Date: Wed, 20 Jun 2012 09:32:09 +0200
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To: <4FE0F336.7030709@netwok.org>
References: <4FE0F336.7030709@netwok.org>
Message-ID:

On Tue, Jun 19, 2012 at 11:46 PM, Éric Araujo wrote:
> I don't think (a) would give us enough time; we really want a few months
> (and releases) to hash out the API (most notably with the pip and buildout
> developers) and clean the bdist situation. Likewise (c) would require
> developer (my) time that is currently in short supply. (b) also requires
> time and would make development harder, not to mention probable user pain.
?(b) also requires > time and would make development harder, not to mention probable user pain. > ?This leaves (d), after long reflection, as my preferred choice, even though > I disliked the idea at first (and I fully expect Tarek to feel the same > way). It's a pity, but it sounds like the way to go. This may be crazy, but just idly wondering: is there an opportunity for the PSF to make things better by throwing some money at it? Packaging appears to be one of those Hard problems, it might be a good investment. Cheers, Dirkjan From donald.stufft at gmail.com Wed Jun 20 09:36:03 2012 From: donald.stufft at gmail.com (Donald Stufft) Date: Wed, 20 Jun 2012 03:36:03 -0400 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> Message-ID: <09788D3B19C545D8AEBD5507E7797072@gmail.com> On Wednesday, June 20, 2012 at 2:36 AM, Victor Stinner wrote: > What is the status of the third party module on PyPI (distutils2)? > Does it contain all fixes done in the packaging module? Does it have > exactly the same API? Does it support Python 2.5 to 3.3, or maybe also > 2.4? > > How is the distutils2 module installed? Installed manually? Using pip > or setuptools? Is distutils2 included in some Linux distributions? > > If it's simple to install distutils2, it's not a big deal to not have > it in the stdlib. > > -- > > It is sometimes a pain to have a buggy module in Python. For example, > I got a lot of issues with the subprocess module of Python 2.5. I > started to include a copy of the subprocess module from Python 2.7 in > my projects to workaround these issues. > > In my previous work we did also backport various modules to get the > last version of the xmlrpc client on Python 2.5 (especially for > HTTP/1.1, to not open a new TCP socket at each request). > > I don't want to reopen the discussion "the stdlib should be an > external project". I just want to confirm that it is better to wait > until important users of the packaging API have finished their work > (on porting their project to distutils2, especially pip), before we > can declare the module (and its API) as stable. > > By the way, what is the status of "pip using distutils2"? Some students started on a pip2 that was based on distutils2, but I don't think they've really done much/anything with actually using distutils2 and have mostly been working on other parts. > > Victor > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org (mailto:Python-Dev at python.org) > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/donald.stufft%40gmail.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at pearwood.info Wed Jun 20 10:15:57 2012 From: steve at pearwood.info (Steven D'Aprano) Date: Wed, 20 Jun 2012 18:15:57 +1000 Subject: [Python-Dev] pep 362 - 5th edition In-Reply-To: References: <5ECF5062-678E-4281-A89C-C2E14B2662DB@gmail.com> <4FE0A0E7.1020804@stoneleaf.us> Message-ID: <20120620081556.GA26786@ando> On Tue, Jun 19, 2012 at 12:38:38PM -0400, Yury Selivanov wrote: > > class Signature: > > . . . > > def equivalent(self, other): > > "compares two Signatures for equality (ignores parameter names)" > > . . . > > I don't think that comparing signatures will be a common operation, > so maybe we can postpone adding any additional methods for that? I think it may be. 
Consider callback functions: the caller may wish to check that the callback function takes the right number of positional arguments, but without caring whether those arguments are given any particular name.

Checking for compatible signatures is a little more subtle than just ignoring names. For example, keyword-only arguments need to always be compared by names. Also, you might want to ignore optional arguments with defaults, or at least _private optional arguments.

I think equality should be strict, including names, and we should defer any decision about less-strict comparisons until 3.4, when we'll have more solid use-cases for it.

I guess that's a long-winded way of saying +1 :)

-- 
Steven

From martin at v.loewis.de  Wed Jun 20 10:18:09 2012
From: martin at v.loewis.de ("Martin v. Löwis")
Date: Wed, 20 Jun 2012 10:18:09 +0200
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To:
References: <4FE0F336.7030709@netwok.org>
Message-ID: <4FE18741.70503@v.loewis.de>

> This may be crazy, but just idly wondering: is there an opportunity
> for the PSF to make things better by throwing some money at it?
> Packaging appears to be one of those Hard problems, it might be a good
> investment.

Only if somebody steps forward to take the money - and somebody who can be trusted to achieve something, as well.

The general problem is that issues may only occur when packages actually use the library; so it may even be difficult to fix it in a concerted effort since that fixing may actually spread over several months (or years).

Regards,

Martin

From steve at pearwood.info  Wed Jun 20 10:30:07 2012
From: steve at pearwood.info (Steven D'Aprano)
Date: Wed, 20 Jun 2012 18:30:07 +1000
Subject: [Python-Dev] PEP 362 minor nits
In-Reply-To:
References: <569CD2C1-FDA8-4309-8F9A-931A889DE5C4@gmail.com>
Message-ID: <20120620083007.GB26786@ando>

On Tue, Jun 19, 2012 at 08:11:26PM -0400, Yury Selivanov wrote:
> So using the signature will be OK for 'Foo.bar' and 'Foo().bar', but
> not for 'Foo.__dict__['bar']' - which I think is fine (since
> staticmethod & classmethod instances are not callable)

There has been some talk on Python-ideas about making staticmethod and classmethod instances callable.

Speaking of non-instance method descriptors, please excuse this silly question, I haven't quite understood the implementation well enough to answer this question myself. Is there anything needed to make signature() work correctly with custom method-like descriptors such as this?

http://code.activestate.com/recipes/577030-dualmethod-descriptor

-- 
Steven

From tarek at ziade.org  Wed Jun 20 10:55:02 2012
From: tarek at ziade.org (Tarek Ziadé)
Date: Wed, 20 Jun 2012 10:55:02 +0200
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To: <4FE0F336.7030709@netwok.org>
References: <4FE0F336.7030709@netwok.org>
Message-ID: <4FE18FE6.3030703@ziade.org>

On 6/19/12 11:46 PM, Éric Araujo wrote:
...
> I don't think (a) would give us enough time; we really want a few
> months (and releases) to hash out the API (most notably with the pip
> and buildout developers) and clean the bdist situation. Likewise (c)
> would require developer (my) time that is currently in short supply.
> (b) also requires time and would make development harder, not to
> mention probable user pain. This leaves (d), after long reflection,
> as my preferred choice, even though I disliked the idea at first (and
> I fully expect Tarek to feel the same way).

Yeah I feel the same way. +1 for (d).
I had unfortunately no time lately. Thanks for picking up things.

We want a solid distutils replacement, and I think we wrote solid PEPs and seem to have found consensus for most issues in the past two years. So I prefer to hold it and have a solid implementation in the stdlib. The only thing I am asking is that we restrain ourselves from doing *anything* in distutils and continue to declare it frozen, because I know it will be tempting to do stuff there...

Cheers

Tarek

From dirkjan at ochtman.nl  Wed Jun 20 11:05:43 2012
From: dirkjan at ochtman.nl (Dirkjan Ochtman)
Date: Wed, 20 Jun 2012 11:05:43 +0200
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To: <4FE18FE6.3030703@ziade.org>
References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org>
Message-ID:

On Wed, Jun 20, 2012 at 10:55 AM, Tarek Ziadé wrote:
> So I prefer to hold it and have a solid implementation in the stdlib. The
> only thing I am asking is that we restrain ourselves from doing *anything*
> in distutils and continue to declare it frozen, because I know it will be
> tempting to do stuff there...

That policy has been a bit annoying. Gentoo has been carrying patches forever to improve compilation with C++ stuff (mostly about correctly passing on environment variables), and forward-porting them on every release gets tedious, but the packaging/distutils2 effort has made it harder to get them included in plain distutils. I understand there shouldn't be crazy patching in distutils, but allowing it to inch forward a little would make the lives of the Gentoo Python team easier, at least.

Cheers,

Dirkjan

From solipsis at pitrou.net  Wed Jun 20 11:04:55 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 20 Jun 2012 11:04:55 +0200
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To:
References: <4FE0F336.7030709@netwok.org> <20120620032338.2008f28d@pitrou.net>
Message-ID: <20120620110455.6ae5f12e@pitrou.net>

On Wed, 20 Jun 2012 15:00:52 +1000
Nick Coghlan wrote:
> On Wed, Jun 20, 2012 at 11:23 AM, Antoine Pitrou wrote:
> > The question is what will happen after 3.3. There doesn't seem to be a
> > lot of activity around the project, does it?
>
> I think the desire is there,

What makes you think that, exactly?

Regards

Antoine.

From solipsis at pitrou.net  Wed Jun 20 11:12:27 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 20 Jun 2012 11:12:27 +0200
Subject: [Python-Dev] Status of packaging in 3.3
References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org>
Message-ID: <20120620111227.1058a864@pitrou.net>

On Wed, 20 Jun 2012 11:05:43 +0200
Dirkjan Ochtman wrote:
> On Wed, Jun 20, 2012 at 10:55 AM, Tarek Ziadé wrote:
> > So I prefer to hold it and have a solid implementation in the stdlib. The
> > only thing I am asking is that we restrain ourselves from doing *anything*
> > in distutils and continue to declare it frozen, because I know it will be
> > tempting to do stuff there...
>
> That policy has been a bit annoying. Gentoo has been carrying patches
> forever to improve compilation with C++ stuff (mostly about correctly
> passing on environment variables), and forward-porting them on every
> release gets tedious, but the packaging/distutils2 effort has made it
> harder to get them included in plain distutils. I understand there
> shouldn't be crazy patching in distutils, but allowing it to inch
> forward a little would make the lives of the Gentoo Python team
> easier, at least.

I think the whole idea that distutils should be frozen and improvements should only go in distutils2 was misguided.
Had distutils been improved instead, many of those enhancements would already have been available in 3.2 (and others would soon be released in 3.3).

Deciding to remove packaging from 3.3 is another instance of the same mistake, IMO.

Regards

Antoine.

From lists at cheimes.de  Wed Jun 20 11:22:09 2012
From: lists at cheimes.de (Christian Heimes)
Date: Wed, 20 Jun 2012 11:22:09 +0200
Subject: [Python-Dev] Raw string syntax inconsistency
In-Reply-To:
References: <4FDDF0B4.3020805@v.loewis.de> <4FDEC3CA.9070109@v.loewis.de>
Message-ID: <4FE19641.7040706@cheimes.de>

On 18.06.2012 17:12, Guido van Rossum wrote:
> Ok, banning ru"..." and ur"..." altogether is fine too (assuming it's
> fine with the originators of the PEP).

It's gone for good.

http://hg.python.org/cpython/rev/8e47e9af826e

(My first push for a very long time. Man, that feels good!)

From solipsis at pitrou.net  Wed Jun 20 11:27:06 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 20 Jun 2012 11:27:06 +0200
Subject: [Python-Dev] Status of packaging in 3.3
References: <4FE0F336.7030709@netwok.org>
Message-ID: <20120620112706.105ca077@pitrou.net>

On Tue, 19 Jun 2012 21:36:35 -0700
Guido van Rossum wrote:
> Nick nailed it (again).

Let's make things clear: packaging is suffering from a lack of developer involvement, and a lack of user interest. What makes you think that removing packaging from 3.3, and adding the constraint of a new PEP to be written, will actually *improve* things?

Regards

Antoine.

From tarek at ziade.org  Wed Jun 20 11:17:13 2012
From: tarek at ziade.org (Tarek Ziadé)
Date: Wed, 20 Jun 2012 11:17:13 +0200
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To:
References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org>
Message-ID: <4FE19519.3090006@ziade.org>

On 6/20/12 11:05 AM, Dirkjan Ochtman wrote:
> On Wed, Jun 20, 2012 at 10:55 AM, Tarek Ziadé wrote:
>> So I prefer to hold it and have a solid implementation in the stdlib. The
>> only thing I am asking is that we restrain ourselves from doing *anything*
>> in distutils and continue to declare it frozen, because I know it will be
>> tempting to do stuff there...
> That policy has been a bit annoying. Gentoo has been carrying patches
> forever to improve compilation with C++ stuff (mostly about correctly
> passing on environment variables), and forward-porting them on every
> release gets tedious, but the packaging/distutils2 effort has made it
> harder to get them included in plain distutils. I understand there
> shouldn't be crazy patching in distutils, but allowing it to inch
> forward a little would make the lives of the Gentoo Python team
> easier, at least.
>
> Cheers,

If distutils gets new features I think it's killing the packaging effort. Maybe these new features could be implemented in packaging, then bridged in Distutils? The compilation feature is isolated enough to do this.

In any case, I guess we should have some kind of policy in place where we list the exceptions when distutils can be changed. Maybe in the packaging PEP?
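A rough sketch of the kind of bridge I mean - treat the module layout and names here as illustrative assumptions, not the actual code:

    # distutils/ccompiler.py (hypothetical bridge sketch)
    try:
        # reuse the maintained compiler code from packaging, so that
        # compilation fixes only need to land in one place
        from packaging.compiler.ccompiler import CCompiler
    except ImportError:
        # packaging unavailable: keep using the frozen legacy code
        from distutils._ccompiler_legacy import CCompiler  # hypothetical module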
Cheers

Tarek

From tarek at ziade.org  Wed Jun 20 11:18:58 2012
From: tarek at ziade.org (Tarek Ziadé)
Date: Wed, 20 Jun 2012 11:18:58 +0200
Subject: Re: [Python-Dev] Status of packaging in 3.3
In-Reply-To: <20120620110455.6ae5f12e@pitrou.net>
References: <4FE0F336.7030709@netwok.org> <20120620032338.2008f28d@pitrou.net>
	<20120620110455.6ae5f12e@pitrou.net>
Message-ID: <4FE19582.1010206@ziade.org>

On 6/20/12 11:04 AM, Antoine Pitrou wrote:
> On Wed, 20 Jun 2012 15:00:52 +1000
> Nick Coghlan wrote:
>> On Wed, Jun 20, 2012 at 11:23 AM, Antoine Pitrou wrote:
>>> The question is what will happen after 3.3. There doesn't seem to be a
>>> lot of activity around the project, does it?
>> I think the desire is there,
> What makes you think that, exactly?

Maybe because the packaging fatigue occurs around 3 years after you start fighting that beast, and we do have fresh blood working on it? :)

>
> Regards
>
> Antoine.
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/ziade.tarek%40gmail.com

From tarek at ziade.org  Wed Jun 20 11:22:07 2012
From: tarek at ziade.org (Tarek Ziadé)
Date: Wed, 20 Jun 2012 11:22:07 +0200
Subject: Re: [Python-Dev] Status of packaging in 3.3
In-Reply-To: <20120620111227.1058a864@pitrou.net>
References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org>
	<20120620111227.1058a864@pitrou.net>
Message-ID: <4FE1963F.3040603@ziade.org>

On 6/20/12 11:12 AM, Antoine Pitrou wrote:
> On Wed, 20 Jun 2012 11:05:43 +0200
> Dirkjan Ochtman wrote:
>> On Wed, Jun 20, 2012 at 10:55 AM, Tarek Ziadé wrote:
>>> So I prefer to hold it and have a solid implementation in the stdlib. The
>>> only thing I am asking is that we restrain ourselves from doing *anything*
>>> in distutils and continue to declare it frozen, because I know it will be
>>> tempting to do stuff there...
>> That policy has been a bit annoying. Gentoo has been carrying patches
>> forever to improve compilation with C++ stuff (mostly about correctly
>> passing on environment variables), and forward-porting them on every
>> release gets tedious, but the packaging/distutils2 effort has made it
>> harder to get them included in plain distutils. I understand there
>> shouldn't be crazy patching in distutils, but allowing it to inch
>> forward a little would make the lives of the Gentoo Python team
>> easier, at least.
> I think the whole idea that distutils should be frozen and improvements
> should only go in distutils2 was misguided. Had distutils been
> improved instead, many of those enhancements would already have been
> available in 3.2 (and others would soon be released in 3.3).

I tried to improve Distutils and I was stopped and told to start distutils2, because distutils is so rotten, any *real* change/improvement potentially breaks the outside world. This has not changed.

>
> Deciding to remove packaging from 3.3 is another instance of the same
> mistake, IMO.

So what are you suggesting, since you seem to know what's a mistake and what's not? (time-travel machine not allowed)

> Regards
>
> Antoine.
From vinay_sajip at yahoo.co.uk Wed Jun 20 11:51:03 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 20 Jun 2012 09:51:03 +0000 (UTC) Subject: [Python-Dev] Status of packaging in 3.3 References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> Message-ID: Antoine Pitrou <solipsis at pitrou.net> writes: > > Deciding to remove packaging from 3.3 is another instance of the same > mistake, IMO. > What's the rationale for leaving it in, when it's known to be incomplete/unfinished? Regards, Vinay Sajip From solipsis at pitrou.net Wed Jun 20 11:49:22 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 20 Jun 2012 11:49:22 +0200 Subject: [Python-Dev] Status of packaging in 3.3 References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <4FE1963F.3040603@ziade.org> Message-ID: <20120620114922.74c4dd18@pitrou.net> On Wed, 20 Jun 2012 11:22:07 +0200 Tarek Ziadé wrote: > I tried to improve Distutils and I was stopped and told to start > distutils2, because > distutils is so rotten that any *real* change/improvement potentially breaks > the outside world. If distutils was so rotten, distutils2 would not reuse much of its structure and concepts (and test suite), would it? Most of the distutils2 improvements (new PEPs, setup.cfg, etc.) were totally possible in distutils, weren't they? > > Deciding to remove packaging from 3.3 is another instance of the same > mistake, IMO. > So what are you suggesting, since you seem to know what's a mistake and > what's not? I don't have any suggestion apart from keeping packaging in 3.3. But I also think it would be better for the community if people were not delusional when making decisions. Removing packaging from 3.3 is a big risk: users and potential contributors will be even less interested than they already are. Here's a datapoint: distribute (*) is downloaded 100x more times than distutils2 (**). (*) http://pypi.python.org/pypi/distribute/ (**) http://pypi.python.org/pypi/Distutils2/ Regards Antoine. From solipsis at pitrou.net Wed Jun 20 11:54:23 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 20 Jun 2012 11:54:23 +0200 Subject: [Python-Dev] Status of packaging in 3.3 References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> Message-ID: <20120620115423.435d7604@pitrou.net> On Wed, 20 Jun 2012 09:51:03 +0000 (UTC) Vinay Sajip wrote: > Antoine Pitrou <solipsis at pitrou.net> writes: > > > > > Deciding to remove packaging from 3.3 is another instance of the same > > mistake, IMO. > > > > What's the rationale for leaving it in, when it's known to be > incomplete/unfinished? As an incentive for users to start using the features that are finished enough, and exercise the migration path from distutils. The module can be marked "provisional" so as to allow further API variations. Regards Antoine.
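To give a concrete flavour of the "finished enough" parts under discussion, the PEP 386 support is the easiest to show. A minimal sketch, assuming packaging.version keeps the names used by PEP 386's reference implementation (NormalizedVersion and suggest_normalized_version); the exact API is precisely what would be marked provisional:

    >>> from packaging.version import NormalizedVersion as V
    >>> V('1.0a1') < V('1.0b2') < V('1.0c1') < V('1.0')
    True
    >>> from packaging.version import suggest_normalized_version
    >>> suggest_normalized_version('1.0alpha1')
    '1.0a1'

Versions compare by their parsed components rather than as strings, which is the whole point of standardising the scheme.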
From tarek at ziade.org Wed Jun 20 12:30:51 2012 From: tarek at ziade.org (Tarek Ziadé) Date: Wed, 20 Jun 2012 12:30:51 +0200 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <20120620114922.74c4dd18@pitrou.net> References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <4FE1963F.3040603@ziade.org> <20120620114922.74c4dd18@pitrou.net> Message-ID: <4FE1A65B.7060609@ziade.org> On 6/20/12 11:49 AM, Antoine Pitrou wrote: > On Wed, 20 Jun 2012 11:22:07 +0200 > Tarek Ziadé wrote: >> I tried to improve Distutils and I was stopped and told to start >> distutils2, because >> distutils is so rotten that any *real* change/improvement potentially breaks >> the outside world. > If distutils was so rotten, distutils2 would not reuse much of its > structure and concepts (and test suite), would it? "much" is pretty vague here. distutils2 is a fork of distutils that has evolved a *lot*: if you look at the code, beside the compilation part and some commands, most things are different. distutils is "rotten" because when you change its internals, you might break some software that relies on them. > > Most of the distutils2 improvements (new PEPs, setup.cfg, etc.) were > totally possible in distutils, weren't they? I started there, remember? And we ended up saying it was impossible to continue without breaking the packaging world. >>> Deciding to remove packaging from 3.3 is another instance of the same >>> mistake, IMO. >> So what are you suggesting, since you seem to know what's a mistake and >> what's not? > I don't have any suggestion apart from keeping packaging in 3.3. > > But I also think it would be better for the community if people were > not delusional when making decisions. Removing packaging from 3.3 is a > big risk: users and potential contributors will be even less interested > than they already are. That's a good point. But if no one works on its polishing *now*, it's going to be the same effect on people: they're likely to be very annoyed if the replacer is not rock solid. > > Here's a datapoint: distribute (*) is downloaded 100x more times than > distutils2 (**). > > (*) http://pypi.python.org/pypi/distribute/ > (**) http://pypi.python.org/pypi/Distutils2/ Why would you expect a different datapoint? - Distutils2 was released as beta software, and not really promoted yet - Distribute is downloaded automatically by many stacks out there, and PyPI does not distinguish whether the hit was from a human behind pip, or from a stack like zc.buildout > > Regards > > Antoine. From tarek at ziade.org Wed Jun 20 12:34:12 2012 From: tarek at ziade.org (Tarek Ziadé) Date: Wed, 20 Jun 2012 12:34:12 +0200 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <20120620115423.435d7604@pitrou.net> References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620115423.435d7604@pitrou.net> Message-ID: <4FE1A724.4050104@ziade.org> On 6/20/12 11:54 AM, Antoine Pitrou wrote: > On Wed, 20 Jun 2012 09:51:03 +0000 (UTC) > Vinay Sajip wrote: >> Antoine Pitrou <solipsis at pitrou.net> writes: >> >>> Deciding to remove packaging from 3.3 is another instance of the same >>> mistake, IMO.
>>> >> What's the rationale for leaving it in, when it's known to be >> incomplete/unfinished? > As an incentive for users to start using the features that are > finished enough, and exercise the migration path from distutils. > The module can be marked "provisional" so as to allow further API > variations. It's true that some modules are quite mature and already useful: - packaging.version (PEP 386) - packaging.pypi - packaging.metadata (PEP 345) - packaging.database (PEP 376) the part that is not ready is the installer and some setuptools bridging > > Regards > > Antoine. From solipsis at pitrou.net Wed Jun 20 12:39:34 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 20 Jun 2012 12:39:34 +0200 Subject: [Python-Dev] Status of packaging in 3.3 References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <4FE1963F.3040603@ziade.org> <20120620114922.74c4dd18@pitrou.net> <4FE1A65B.7060609@ziade.org> Message-ID: <20120620123934.6cbbc1e9@pitrou.net> On Wed, 20 Jun 2012 12:30:51 +0200 Tarek Ziadé wrote: > > > > > Most of the distutils2 improvements (new PEPs, setup.cfg, etc.) were > > totally possible in distutils, weren't they? > I started there, remember? And we ended up saying it was impossible to > continue without > breaking the packaging world. "we" were only certain people, AFAIR. > why would you expect a different datapoint? I wasn't expecting a different datapoint, I'm pointing out that shipping packaging in the stdlib would provide much better exposure. Regards Antoine. From tarek at ziade.org Wed Jun 20 12:54:32 2012 From: tarek at ziade.org (Tarek Ziadé) Date: Wed, 20 Jun 2012 12:54:32 +0200 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <20120620123934.6cbbc1e9@pitrou.net> References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <4FE1963F.3040603@ziade.org> <20120620114922.74c4dd18@pitrou.net> <4FE1A65B.7060609@ziade.org> <20120620123934.6cbbc1e9@pitrou.net> Message-ID: <4FE1ABE8.8070205@ziade.org> On 6/20/12 12:39 PM, Antoine Pitrou wrote: > On Wed, 20 Jun 2012 12:30:51 +0200 > Tarek Ziadé wrote: >>> Most of the distutils2 improvements (new PEPs, setup.cfg, etc.) were >>> totally possible in distutils, weren't they? >> I started there, remember? And we ended up saying it was impossible to >> continue without >> breaking the packaging world. > "we" were only certain people, AFAIR. That was the BDFL decision after a language summit. Having tried to innovate in Distutils in the past, I think it's a very good decision. From g.brandl at gmx.net Wed Jun 20 13:06:39 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Wed, 20 Jun 2012 13:06:39 +0200 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <20120620123934.6cbbc1e9@pitrou.net> References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <4FE1963F.3040603@ziade.org> <20120620114922.74c4dd18@pitrou.net> <4FE1A65B.7060609@ziade.org> <20120620123934.6cbbc1e9@pitrou.net> Message-ID: On 20.06.2012 12:39, Antoine Pitrou wrote: > On Wed, 20 Jun 2012 12:30:51 +0200 > Tarek Ziadé wrote: >> >> > >> > Most of the distutils2 improvements (new PEPs, setup.cfg, etc.) were
>> > totally possible in distutils, weren't they? >> I started there, remember? And we ended up saying it was impossible to >> continue without >> breaking the packaging world. > > "we" were only certain people, AFAIR. Yes. The people willing to work on packaging in Python, to be exact. >> why would you expect a different datapoint? > > I wasn't expecting a different datapoint, I'm pointing out that shipping > packaging in the stdlib would provide much better exposure. But not if it's not ready for prime time. (And providing the finished distutils2 for Python 2 will provide even better exposure at the moment.) Georg From p.f.moore at gmail.com Wed Jun 20 13:11:03 2012 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 20 Jun 2012 12:11:03 +0100 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <20120620111227.1058a864@pitrou.net> References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> Message-ID: On 20 June 2012 10:12, Antoine Pitrou wrote: > I think the whole idea that distutils should be frozen and improvements > should only go in distutils2 has been misguided. Had distutils been > improved instead, many of those enhancements would already have been > available in 3.2 (and others would soon be released in 3.3). The problem seems to be that in the past, any changes in distutils have been met with cries of "you can't do that", basically because the lack of a clearly distinguished extension API means that the assumption is that for any bit of the internals, someone somewhere has monkeypatched or relied on it. The issue is compounded by the fact that a lot of distutils is undocumented, or at least badly documented, so saying "if it's not documented, it's internal" doesn't work :-( Maybe if we could be a bit more aggressive in saying what counts as "internal" and resist the cries of "but I use it", modifying distutils might be a more viable option. But there doesn't seem to be anyone willing to take and defend that step. IIRC, Tarek proposed distutils2/packaging after getting frustrated with how little he could usefully do on distutils itself. > Deciding to remove packaging from 3.3 is another instance of the same > mistake, IMO. I see your point, but without sufficient developer resource, the question is whether packaging is in a usable state at all. Nobody but Éric is actually working on packaging (and as he says, even he is not able to at the moment), so what alternatives are there? I guess one extra option not mentioned by Éric is to make the packaging issues into release blocker bugs. That would stall the release of 3.3 until packaging could be brought to an acceptable state, effectively a form of blackmail. I can't imagine anyone wants to take that approach. And yet, some of the existing bugs would clearly be release blockers if they were in any other part of Python. I think the first question is, do we need an enhanced distutils in the stdlib? As far as I can see, this one has been answered strongly, in the affirmative, a few times now. And yet, the need seems to be a diffuse thing, with no real champion (Tarek and Éric have both tried to take that role, and both appear to have been unable to invest the necessary amount of time - which doesn't surprise me, it seems to be a massive task). Removing packaging from 3.3, to my mind, acknowledges that as it stands the approach was a failed experiment[1]. Better to get it taken out before it appears in a released version of Python. We need to rethink the approach.
I see a number of options going forward, all of which are based around the need to ensure enough developer involvement, so that Tarek and Éric get help, and don't simply burn out before we have anything useful.

1. Reconsider the decision to freeze distutils, with a view to migrating incrementally to the feature set we want from packaging. That'll be hard as we'd need to take a much stronger line on making changes that could break existing code (stronger in the sense of "so we broke your code, tough - you were using undocumented/internal APIs"). And I suspect Tarek wouldn't be willing to take this route, so we immediately lose one resource. Maybe the other core developers could take up the slack, though. For example, Antoine, you seem to be implying that you would have worked on distutils if this had happened.

2. Free up distutils2 to develop as an external package, and then have a PEP proposing its inclusion in the stdlib in due course, when it is ready and has been proven in the wild. The benefit here is that I assume that as a separate project, becoming a committer would be easier than becoming a Python core developer, so there's a wider pool of developers. The downside is that the timescales would be a lot longer (I doubt we'd see anything in 3.4 this way, and maybe not even 3.5).

3. Write a PEP describing precisely what the packaging module will do, get consensus/agreement, and then restart development based on a solid scope and spec. This is the correct route for getting something added directly to the stdlib, and it's a shame it didn't happen in the first place for packaging. Having said that, the PEP would likely be huge, given the scope of packaging, and would require a lot of time from a champion. There's no guarantee that championing a PEP wouldn't burn someone out just as rapidly as developing the module itself :-( And also, given that the packaging documentation is one of its weak areas, I'd have to say I have concerns as to whether a PEP would come together in any case... The assumption here, though, is that the PEP process creates the debate, and results in interested parties coming together in the discussion. If we can keep that momentum, we get a pool of interested developers who may well assist in the coding aspects.

The one option I don't like is taking packaging out, releasing 3.3, and then putting it straight back in as is, and simply carrying on as now, in the hope that it'll be ready for 3.4. I honestly doubt that the only issue is that we've run out of time before 3.3. There are more fundamental problems that need to be addressed as well - specifically the reliance on one individual to bear all of the load. Just my thoughts, Paul. [1] That reads really harshly. I don't mean to criticise any of the work that's been done, I'm a huge fan of the idea of packaging, and its goals. The "experiment" in this case is around process - developing something as big and important as packaging with limited developer resource, relatively directly in the core (bringing it in from distutils2 sooner rather than later) and working from a series of smaller PEPs focused on particular details, rather than an overall one covering the whole package.
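A short aside on what "using undocumented/internal APIs" looks like in practice, since it is the crux of option 1 above: a contrived sketch (hypothetical project code, not taken from any real package) of the kind of subclassing that turns distutils internals into a de facto public API:

    # Hypothetical setup.py helper: overriding an undocumented hook of a
    # distutils command to inject extra compiler flags.
    from distutils.core import setup
    from distutils.command.build_ext import build_ext as _build_ext

    class build_ext(_build_ext):
        def build_extension(self, ext):
            # Relies on the Extension attributes and the command's method
            # names staying exactly as they are today.
            ext.extra_compile_args = list(ext.extra_compile_args or []) + ['-O3']
            _build_ext.build_extension(self, ext)

    setup(name='example', cmdclass={'build_ext': build_ext})

Once code like this exists in the wild, renaming build_extension() or changing how extra_compile_args is consumed breaks someone's build, which is why every distutils change meets the cries Paul describes.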
From p.f.moore at gmail.com Wed Jun 20 13:19:53 2012 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 20 Jun 2012 12:19:53 +0100 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <4FE1A724.4050104@ziade.org> References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620115423.435d7604@pitrou.net> <4FE1A724.4050104@ziade.org> Message-ID: On 20 June 2012 11:34, Tarek Ziadé wrote: > On 6/20/12 11:54 AM, Antoine Pitrou wrote: >> >> On Wed, 20 Jun 2012 09:51:03 +0000 (UTC) >> Vinay Sajip wrote: >>> >>> Antoine Pitrou <solipsis at pitrou.net> writes: >>> >>>> Deciding to remove packaging from 3.3 is another instance of the same >>>> mistake, IMO. >>>> >>> What's the rationale for leaving it in, when it's known to be >>> incomplete/unfinished? >> >> As an incentive for users to start using the features that are >> finished enough, and exercise the migration path from distutils. >> The module can be marked "provisional" so as to allow further API >> variations. > > It's true that some modules are quite mature and already useful: > > - packaging.version (PEP 386) > - packaging.pypi > - packaging.metadata (PEP 345) > - packaging.database (PEP 376) > > the part that is not ready is the installer and some setuptools bridging I've never seen that information mentioned before. So that's (good) news. A question, then. Why is it not an option to: 1. Rip out all bar those 4 modules. 2. Make sure they are documented and tested solidly (they may already be, I don't know). 3. Declare that to be what packaging *is* for Python 3.3. Whether any of those modules are of any use in isolation is a slightly more complex question. As is whether the APIs are guaranteed to be sufficient for further development on "the rest" of packaging, given that by doing this we commit to API stability and backward compatibility. Your comment "quite mature and already useful" is not quite firm enough to reassure me that we're ready to set those modules in stone (although presumably the 3 relating to the PEPs are, simply because they implement what the PEPs say). Paul. From hs at ox.cx Wed Jun 20 13:20:04 2012 From: hs at ox.cx (Hynek Schlawack) Date: Wed, 20 Jun 2012 13:20:04 +0200 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <20120620112706.105ca077@pitrou.net> References: <4FE0F336.7030709@netwok.org> <20120620112706.105ca077@pitrou.net> Message-ID: <20120620112004.GA11316@omega.vm.ag> On 06/20, Antoine Pitrou wrote: > Let's make things clear: packaging is suffering from a lack of > developer involvement, Absolutely. And to be more precise: solid hands-on leadership. Eric wrote it in his original mail: both packaging maintainers are burned out/busy. That's a state that is very unlikely to attract more developers - myself included. > and a lack of user interest. Maybe I'm getting you wrong here, but ISTM that proper packaging is on the short list of nearly everybody's "things I wish they'd fix in Python". From tarek at ziade.org Wed Jun 20 13:31:50 2012 From: tarek at ziade.org (Tarek Ziadé) Date: Wed, 20 Jun 2012 13:31:50 +0200 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620115423.435d7604@pitrou.net> <4FE1A724.4050104@ziade.org> Message-ID: <4FE1B4A6.6030405@ziade.org> On 6/20/12 1:19 PM, Paul Moore wrote: > On 20 June 2012 11:34, Tarek Ziadé
wrote: >> On 6/20/12 11:54 AM, Antoine Pitrou wrote: >>> On Wed, 20 Jun 2012 09:51:03 +0000 (UTC) >>> Vinay Sajip wrote: >>>> Antoine Pitrou <solipsis at pitrou.net> writes: >>>> >>>>> Deciding to remove packaging from 3.3 is another instance of the same >>>>> mistake, IMO. >>>>> >>>> What's the rationale for leaving it in, when it's known to be >>>> incomplete/unfinished? >>> As an incentive for users to start using the features that are >>> finished enough, and exercise the migration path from distutils. >>> The module can be marked "provisional" so as to allow further API >>> variations. >> It's true that some modules are quite mature and already useful: >> >> - packaging.version (PEP 386) >> - packaging.pypi >> - packaging.metadata (PEP 345) >> - packaging.database (PEP 376) >> >> the part that is not ready is the installer and some setuptools bridging > I've never seen that information mentioned before. So that's (good) news. > > A question, then. Why is it not an option to: > > 1. Rip out all bar those 4 modules. > 2. Make sure they are documented and tested solidly (they may already > be, I don't know). > 3. Declare that to be what packaging *is* for Python 3.3. > > Whether any of those modules are of any use in isolation is a > slightly more complex question. As is whether the APIs are guaranteed > to be sufficient for further development on "the rest" of packaging, > given that by doing this we commit to API stability and backward > compatibility. Your comment "quite mature and already useful" is not > quite firm enough to reassure me that we're ready to set those modules > in stone (although presumably the 3 relating to the PEPs are, simply > because they implement what the PEPs say). The last time I checked: packaging.version is the implementation of PEP 386, and stable. It's one building block that would be helpful as-is in the stdlib. It's completely standalone. packaging.metadata is the implementation of all metadata versions. Standalone too. packaging.pypi is the PyPI crawler, and has fairly advanced features. I defer to Alexis to tell us if it's completely stable. packaging.database is where PEP 376 is. It has the most innovations; it implements PEP 376. packaging.config is the setup.cfg reader. It's awesome because together with packaging.database and packaging.markers, it gives you OS-independent data files. see http://docs.python.org/dev/packaging/setupcfg.html#resources Yeah, maybe this subset could be left in 3.3 and we'd remove the packaging-the-installer part (pysetup, commands, compilers). I think it's a good idea! > > Paul. From solipsis at pitrou.net Wed Jun 20 13:29:48 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 20 Jun 2012 13:29:48 +0200 Subject: [Python-Dev] Status of packaging in 3.3 References: <4FE0F336.7030709@netwok.org> <20120620112706.105ca077@pitrou.net> <20120620112004.GA11316@omega.vm.ag> Message-ID: <20120620132948.2a61efda@pitrou.net> On Wed, 20 Jun 2012 13:20:04 +0200 Hynek Schlawack wrote: > > > and a lack of user interest. > > Maybe I'm getting you wrong here, but ISTM that proper packaging is on the > short list of nearly everybody's "things I wish they'd fix in Python". I agree, but I think people have also been burnt by the setuptools maintenance problem, the setuptools -> distribute migration, the easy_install / pip duality, and other stuff. I'm not sure they want to try out "yet another third-party distutils improvement from the cheeseshop". Regards Antoine.
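For readers who have not seen the setup.cfg that packaging.config reads, here is a rough sketch of the declarative format Tarek refers to above. The [metadata] and [files] field names follow the distutils2 documentation; the resources line is illustrative only, with a placeholder category of the kind described at the setupcfg URL above:

    [metadata]
    name = example
    version = 0.1.0
    summary = An example distribution
    description-file = README.txt

    [files]
    packages = example
    resources =
        example/data/sample.cfg = {confdir}

The point of the {category} placeholders is exactly the OS-independent data files Tarek mentions: the destination directory is resolved at install time on the target system, not hard-coded by the project.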
From ncoghlan at gmail.com Wed Jun 20 13:47:34 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 20 Jun 2012 21:47:34 +1000 Subject: [Python-Dev] Raw string syntax inconsistency In-Reply-To: <4FE19641.7040706@cheimes.de> References: <4FDDF0B4.3020805@v.loewis.de> <4FDEC3CA.9070109@v.loewis.de> <4FE19641.7040706@cheimes.de> Message-ID: On Wed, Jun 20, 2012 at 7:22 PM, Christian Heimes wrote: > On 18.06.2012 17:12, Guido van Rossum wrote: >> Ok, banning ru"..." and ur"..." altogether is fine too (assuming it's >> fine with the originators of the PEP). > > It's gone for good. http://hg.python.org/cpython/rev/8e47e9af826e > > (My first push for a very long time. Man, that feels good!) And I just pushed an update to PEP 414 that should appear on the site soon. There's probably more text in that section than the issue really deserves, but at least it's captured now :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From solipsis at pitrou.net Wed Jun 20 13:46:12 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 20 Jun 2012 13:46:12 +0200 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> Message-ID: <20120620134612.3a9f7cfe@pitrou.net> On Wed, 20 Jun 2012 12:11:03 +0100 Paul Moore wrote: > > I think the first question is, do we need an enhanced distutils in the > stdlib? I would answer a different question: we definitely need a better distutils/packaging story. Whether it's in the form of distutils enhancements, or another package, is not clear-cut. By the way, let me point out that the "distutils freeze" has been broken to implement PEP 405 (I approved the breakage myself): http://hg.python.org/cpython/rev/294f8aeb4d4b#l4.1 > As far as I can see, this one has been answered strongly, in > the affirmative, a few times now. And yet, the need seems to be a > diffuse thing, with no real champion Packaging is not a very motivating topic for many developers (myself included). It's like the build process or the buildbot fleet :-) > 2. Free up distutils2 to develop as an external package, and then have > a PEP proposing its inclusion in the stdlib in due course, when it is > ready and has been proven in the wild. [...] > The downside is that the timescales would be a lot > longer (I doubt we'd see anything in 3.4 this way, and maybe not even > 3.5). Agreed, especially if the "proven in the wild" criterion is required (people won't rush to another third-party distutils replacement, IMHO). > 3. Write a PEP describing precisely what the packaging module will do, > get consensus/agreement, and then restart development based on a solid > scope and spec. I think it's the best way to sink the project definitively. Our community isn't organized for such huge monolithic undertakings. > [1] That reads really harshly. I don't mean to criticise any of the > work that's been done, I'm a huge fan of the idea of packaging, and > its goals. The "experiment" in this case is around process - > developing something as big and important as packaging with limited > developer resource, relatively directly in the core (bringing it in > from distutils2 sooner rather than later) and working from a series of > smaller PEPs focused on particular details, rather than an overall one > covering the whole package.
I cannot speak for Tarek, but one of the reasons it's been done as a set of smaller PEPs is that these PEPs were meant to be included in *distutils*, not distutils2. That is, the module already existed and the PEPs were individual, incremental improvements. Regards Antoine. From ncoghlan at gmail.com Wed Jun 20 13:54:46 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 20 Jun 2012 21:54:46 +1000 Subject: [Python-Dev] PEP 362 minor nits In-Reply-To: <20120620083007.GB26786@ando> References: <569CD2C1-FDA8-4309-8F9A-931A889DE5C4@gmail.com> <20120620083007.GB26786@ando> Message-ID: On Wed, Jun 20, 2012 at 6:30 PM, Steven D'Aprano wrote: > Speaking of non-instance method descriptors, please excuse this silly > question, I haven't quite understood the implementation well enough to > answer this question myself. Is there anything needed to make > signature() work correctly with custom method-like descriptors such as > this? > > http://code.activestate.com/recipes/577030-dualmethod-descriptor They're odd enough that they would probably need to implement __signature__ as a property (or custom descriptor) and construct the appropriate signature on the fly. However, that's only necessary if you wanted to support passing the descriptor directly to inspect.signature. The result of Example.method or Example().method would be an ordinary function, which would delegate to signature(self.func) by default (thanks to the use of functools.wraps). To account for the hidden argument correctly, you would actually want to set a custom signature that dropped the first parameter. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Wed Jun 20 14:53:07 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 20 Jun 2012 22:53:07 +1000 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <4FE1B4A6.6030405@ziade.org> References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620115423.435d7604@pitrou.net> <4FE1A724.4050104@ziade.org> <4FE1B4A6.6030405@ziade.org> Message-ID: On Wed, Jun 20, 2012 at 9:31 PM, Tarek Ziadé wrote: > Yeah, maybe this subset could be left in 3.3 > > and we'd remove the packaging-the-installer part (pysetup, commands, compilers) > > I think it's a good idea! OK, to turn this into a concrete suggestion based on the packaging docs.

Declare stable, include in 3.3
------------------------------------------
packaging.version -- Version number classes
packaging.metadata -- Metadata handling
packaging.markers -- Environment markers
packaging.database -- Database of installed distributions

Maybe needed as dependencies for above?
------------------------------------------------
packaging.errors -- Packaging exceptions
packaging.util -- Miscellaneous utility functions

It seems to me that stripping the library, docs and tests back to just these 4 modules and their dependencies shouldn't be much harder than stripping packaging in its entirety, but my question is what benefit would we gain from having these (and just these) in the 3.3 stdlib over having them available on PyPI in distutils2? Third party projects over the next couple of years are going to want to support more than just 3.3, so simply depending on distutils2 for this functionality seems like a far more sensible option.
OTOH, it does send a clear message that progress *is* being made, we just tried to take too big a jump from implementing these lower level standards up to "wholesale replacement of distutils" without first clearly documenting exactly what was wrong with the status quo and what we wanted to change about it as a sequence of PEPs. I've broken up the rest of packaging's functionality below into a series of candidate PEPs that may be more manageable than a single monolithic "fix packaging" PEP. If we can get multiple interested parties focusing on different aspects, that may also help with reducing the burnout factor. Python's current packaging and distribution story is held together with duct tape and baling wire due to decisions that were made years ago - unwinding some of those decisions and replacing them with an alternative that is built on a solid architectural foundation backed by some level of community consensus is *not* an easy task, and not one that will be completed quickly (undue haste will fail the "some level of community consensus" part, thus missing much of the point of the exercise). That said, I don't think it's unsolvable either, and there's a reasonable chance to get something cleaner in place for 3.4.

3.4 PEP: Distutils replacement: Packaging, Distribution & Installation
--------------------------------------------
# This is one of the big balls of mud w.r.t. distutils where third party projects dig deep into the implementation details because that is the only way to get things done
# It may even be the case that this can be broken up even further
packaging.install -- Installation tools
packaging.dist -- The Distribution class
packaging.manifest -- The Manifest class
packaging.command -- Standard Packaging commands
packaging.command.cmd -- Abstract base class for Packaging commands

3.4 PEP: Distutils replacement: Compiling Extension Modules
--------------------------------------------
# Another big ball of mud. It sounds like the Gentoo folks may have some feedback in this space.
packaging.compiler -- Compiler classes
packaging.compiler.ccompiler -- CCompiler base class
packaging.compiler.extension -- The Extension class

3.4 PEP: Standard library package downloader (pysetup)
----------------------------------
# Amongst other things, this needs to have a really good security story (refusing to install unsigned packages by default, etc)
packaging.depgraph -- Dependency graph builder
packaging.pypi -- Interface to projects indexes
packaging.pypi.client -- High-level query API
packaging.pypi.base -- Base class for index crawlers
packaging.pypi.dist -- Classes representing query results
packaging.pypi.simple -- Crawler using the PyPI "simple" interface
packaging.pypi.xmlrpc -- Crawler using the PyPI XML-RPC interface
packaging.tests.pypi_server -- PyPI mock server
packaging.fancy_getopt -- Wrapper around the getopt module  # Why does this exist?

3.4 PEP: Simple binary package distribution format
--------------------------------------------------------------------------
bdist_simple has been discussed enough times, finally seeing a PEP for it would be nice :)

I think the main lesson to be learned here is that "fix packaging" is simply too big a task to be managed sensibly. Smaller goals like "Standardise versioning", "Fix package metadata", "Support uninstall" are hard enough to achieve, but also provide concrete milestones along the way to the larger goal. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com |
Brisbane, Australia From yselivanov.ml at gmail.com Wed Jun 20 15:01:23 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Wed, 20 Jun 2012 09:01:23 -0400 Subject: [Python-Dev] PEP 362 minor nits In-Reply-To: <20120620083007.GB26786@ando> References: <569CD2C1-FDA8-4309-8F9A-931A889DE5C4@gmail.com> <20120620083007.GB26786@ando> Message-ID: <917BF4EB-BE9C-4317-B5AC-F74DB2330310@gmail.com> On 2012-06-20, at 4:30 AM, Steven D'Aprano wrote: > On Tue, Jun 19, 2012 at 08:11:26PM -0400, Yury Selivanov wrote: > >> So using the signature will be OK for 'Foo.bar' and 'Foo().bar', but >> not for 'Foo.__dict__['bar']' - which I think is fine (since >> staticmethod & classmethod instances are not callable) > > There has been some talk on Python-ideas about making staticmethod and > classmethod instances callable. > > Speaking of non-instance method descriptors, please excuse this silly > question, I haven't quite understood the implementation well enough to > answer this question myself. Is there anything needed to make > signature() work correctly with custom method-like descriptors such as > this? > > http://code.activestate.com/recipes/577030-dualmethod-descriptor Well, as Nick said -- the PEP way is to create a new Signature with the first parameter skipped. But in this particular case you can rewrite it (I'd say preferred way):

    import types

    class dualmethod:
        def __init__(self, func):
            self.func = func

        def __get__(self, instance, owner):
            # Bind to the class when accessed via the class,
            # to the instance otherwise.
            if instance is None:
                return types.MethodType(self.func, owner)
            else:
                return types.MethodType(self.func, instance)

Or another way, using functools.partial:

    import functools

    class dualmethod:
        def __init__(self, func):
            self.func = func

        def __get__(self, instance, owner):
            # Same idea, but pre-binding the first argument with partial.
            if instance is None:
                return functools.partial(self.func, owner)
            else:
                return functools.partial(self.func, instance)

Since 'MethodType' and 'partial' are supported by signature(), everything will work automatically (i.e. the first argument will be skipped) - Yury From ncoghlan at gmail.com Wed Jun 20 15:02:25 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 20 Jun 2012 23:02:25 +1000 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <20120620134612.3a9f7cfe@pitrou.net> References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> Message-ID: On Wed, Jun 20, 2012 at 9:46 PM, Antoine Pitrou wrote: > Agreed, especially if the "proven in the wild" criterion is required > (people won't rush to another third-party distutils replacement, IMHO). The existence of setuptools means that "proven in the wild" is never going to fly - a whole lot of people use setuptools and easy_install happily, because they just don't care about the downsides it has in terms of loss of control of a system configuration. > I cannot speak for Tarek, but one of the reasons it's been done as a > set of smaller PEPs is that these PEPs were meant to be included in > *distutils*, not distutils2. That is, the module already existed and > the PEPs were individual, incremental improvements. That initial set of PEPs was also aimed at defining interoperability standards that multiple projects could implement independently, even *without* support in the standard library. As I wrote in my other email, I think one key aspect of where we went wrong after that point was in never clearly spelling out just what we collectively meant by "fix packaging". Most of the burden of interpreting that phrase thus landed directly on the shoulders of the distutils2 project lead.
Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From p.f.moore at gmail.com Wed Jun 20 15:10:46 2012 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 20 Jun 2012 14:10:46 +0100 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620115423.435d7604@pitrou.net> <4FE1A724.4050104@ziade.org> <4FE1B4A6.6030405@ziade.org> Message-ID: On 20 June 2012 13:53, Nick Coghlan wrote: [...] > 3.4 PEP: Simple binary package distribution format > -------------------------------------------------------------------------- > > bdist_simple has been discussed enough times, finally seeing a PEP > for it would be nice :) I had a PEP for this one part written - Éric had a brief look at it but I never posted it publicly. Before it'll go anywhere, a few more of the "infrastructure PEPs" you mentioned (and I trimmed) would need to be completed, but I'd be willing to resurrect it when we get to that stage... Paul. From alexis at notmyidea.org Wed Jun 20 15:19:22 2012 From: alexis at notmyidea.org (Alexis Métaireau) Date: Wed, 20 Jun 2012 15:19:22 +0200 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620115423.435d7604@pitrou.net> <4FE1A724.4050104@ziade.org> <4FE1B4A6.6030405@ziade.org> Message-ID: <4FE1CDDA.2060105@notmyidea.org> On 20/06/2012 14:53, Nick Coghlan wrote: > 3.4 PEP: Standard library package downloader (pysetup) > ---------------------------------- > # Amongst other things, this needs to have a really good security > story (refusing to install unsigned packages by default, etc) > packaging.depgraph -- Dependency graph builder > packaging.pypi -- Interface to projects indexes > packaging.pypi.client -- High-level query API > packaging.pypi.base -- Base class for index crawlers > packaging.pypi.dist -- Classes representing query results > packaging.pypi.simple -- Crawler using the PyPI "simple" interface > packaging.pypi.xmlrpc -- Crawler using the PyPI XML-RPC interface > packaging.tests.pypi_server -- PyPI mock server > packaging.fancy_getopt -- Wrapper around the getopt module  # Why > does this exist? I'm okay and willing to work on this part. I started a full review of the code I wrote years ago, which clearly needs some cleaning. Also, I'm not sure I understand what having a PEP to manage this means: should I describe all the API in a text document (with examples) so we can discuss this and validate it before doing the changes/cleanings to the API? Alexis From alexis at notmyidea.org Wed Jun 20 15:16:25 2012 From: alexis at notmyidea.org (Alexis Métaireau) Date: Wed, 20 Jun 2012 15:16:25 +0200 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <4FE1B4A6.6030405@ziade.org> References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620115423.435d7604@pitrou.net> <4FE1A724.4050104@ziade.org> <4FE1B4A6.6030405@ziade.org> Message-ID: <4FE1CD29.2050300@notmyidea.org> On 20/06/2012 13:31, Tarek Ziadé wrote: > packaging.metadata is the implementation of all metadata versions. > Standalone too. > > packaging.pypi is the PyPI crawler, and has fairly advanced features.
> I defer to Alexis to tell us > if it's completely stable packaging.pypi is functionally working but IMO the API can (and probably should) be improved (we really lack feedback to know that). From ncoghlan at gmail.com Wed Jun 20 15:28:56 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 20 Jun 2012 23:28:56 +1000 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <4FE1CDDA.2060105@notmyidea.org> References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620115423.435d7604@pitrou.net> <4FE1A724.4050104@ziade.org> <4FE1B4A6.6030405@ziade.org> <4FE1CDDA.2060105@notmyidea.org> Message-ID: On Wed, Jun 20, 2012 at 11:19 PM, Alexis Métaireau wrote: > On 20/06/2012 14:53, Nick Coghlan wrote: >> >> 3.4 PEP: Standard library package downloader (pysetup) >> ---------------------------------- >> # Amongst other things, this needs to have a really good security >> story (refusing to install unsigned packages by default, etc) >> packaging.depgraph -- Dependency graph builder >> packaging.pypi -- Interface to projects indexes >> packaging.pypi.client -- High-level query API >> packaging.pypi.base -- Base class for index crawlers >> packaging.pypi.dist -- Classes representing query results >> packaging.pypi.simple -- Crawler using the PyPI "simple" interface >> packaging.pypi.xmlrpc -- Crawler using the PyPI XML-RPC interface >> packaging.tests.pypi_server -- PyPI mock server >> packaging.fancy_getopt -- Wrapper around the getopt module  # Why >> does this exist? > > I'm okay and willing to work on this part. I started a full review of the > code I wrote years ago, which clearly needs some cleaning. > Also, I'm not sure I understand what having a PEP to manage this means: > should I describe all the API in a text document (with examples) so we can > discuss this and validate it before doing the changes/cleanings to the API? There would be two main parts to such a PEP: - defining the command line interface and capabilities (pysetup) - defining the programmatic API (packaging.pypi and the dependency graph management) I would suggest looking at PEP 405 (venv) and PEP 397 (Windows launcher) to get an idea of the kind of content that might be appropriate. It's definitely not necessary to reproduce the full API details verbatim in the PEP text - it's OK to provide highlights and point to a reference implementation for the full details. The PEP process can also be a good way to get feedback on an API design that otherwise may not be forthcoming (cf. the recent inspect.Signature discussions). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From alexis at notmyidea.org Wed Jun 20 15:36:08 2012 From: alexis at notmyidea.org (Alexis Métaireau) Date: Wed, 20 Jun 2012 15:36:08 +0200 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620115423.435d7604@pitrou.net> <4FE1A724.4050104@ziade.org> <4FE1B4A6.6030405@ziade.org> <4FE1CDDA.2060105@notmyidea.org> Message-ID: <4FE1D1C8.4050104@notmyidea.org> On Wed, 20 Jun 2012 15:28:56 CEST, Nick Coghlan wrote: > There would be two main parts to such a PEP: > - defining the command line interface and capabilities (pysetup) > - defining the programmatic API (packaging.pypi and the dependency > graph management) Okay.
I don't think that the command line has anything to do with packaging.pypi and dependency management tools. One is dealing with the whole CLI for different things (install / remove / search, etc.) while the other one is only about how to communicate with the indexes and build dependency graphs from there. We probably should put the CLI part in a separate PEP, as the scopes aren't the same as the ones I see for packaging.pypi / depgraph > I would suggest looking at PEP 405 (venv) and PEP 397 (Windows > launcher) to get an idea of the kind of content that might be > appropriate. It's definitely not necessary to reproduce the full API > details verbatim in the PEP text - it's OK to provide highlights and > point to a reference implementation for the full details. Thanks for the pointers, will read them and try to come back with a PEP proposal. Alexis From p.f.moore at gmail.com Wed Jun 20 15:47:25 2012 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 20 Jun 2012 14:47:25 +0100 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <4FE1CD29.2050300@notmyidea.org> References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620115423.435d7604@pitrou.net> <4FE1A724.4050104@ziade.org> <4FE1B4A6.6030405@ziade.org> <4FE1CD29.2050300@notmyidea.org> Message-ID: On 20 June 2012 14:16, Alexis Métaireau wrote: > On 20/06/2012 13:31, Tarek Ziadé wrote: >> >> packaging.metadata is the implementation of all metadata versions. >> Standalone too. >> >> packaging.pypi is the PyPI crawler, and has fairly advanced features. I >> defer to Alexis to tell us >> if it's completely stable > > > packaging.pypi is functionally working but IMO the API can (and probably > should) be improved (we really lack feedback to know that). I wasn't aware of this - I've had a look and my first thought is that the documentation needs completing. At the moment, there's a lot that isn't documented, and we should avoid getting into the same situation as with distutils where people have to use undocumented APIs to get anything done. There are a lot of examples, but not so much formal API documentation. I don't mean to pick on this one module - unless things have changed a lot, the same is probably true of much of the rest of packaging. Lack of documentation is the #1 criticism I've seen. Are there people willing to do some serious documentation work to get the docs for the "agreed stable" parts of packaging complete? There's more time to do this (doc changes don't have to be done before the beta), but by deciding to retain parts of packaging, we *are* making a commitment to complete that documentation, in my view. Paul. PS packaging.pypi is nice - I wish I'd known of its existence for a bit of work I was doing a little while ago... From janssen at parc.com Wed Jun 20 16:46:51 2012 From: janssen at parc.com (Bill Janssen) Date: Wed, 20 Jun 2012 07:46:51 PDT Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> Message-ID: <46364.1340203611@parc.com> Nick Coghlan wrote: > On Wed, Jun 20, 2012 at 9:46 PM, Antoine Pitrou wrote: > > Agreed, especially if the "proven in the wild" criterion is required > > (people won't rush to another third-party distutils replacement, IMHO).
> > The existence of setuptools means that "proven in the wild" is never > going to fly - a whole lot of people use setuptools and easy_install > happily, because they just don't care about the downsides it has in > terms of loss of control of a system configuration. Maybe not "happily" :-). Speaking for myself, I'd love to find an alternative, but setuptools seems to be the only system that knows how to build shared libraries across all my platforms. I've got little interest in a packaging module that doesn't include the compiler magic to do that. Bill From p.f.moore at gmail.com Wed Jun 20 17:29:32 2012 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 20 Jun 2012 16:29:32 +0100 Subject: [Python-Dev] Packaging documentation and packaging.pypi API Message-ID: On 20 June 2012 14:47, Paul Moore wrote: > On 20 June 2012 14:16, Alexis M?taireau wrote: >> packaging.pypi is functionally working but IMO the API can (and probably >> should) be improved (we really lack feedback to know that). > > I wasn't aware of this - I've had a look and my first thought is that > the documentation needs completing. At the moment, there's a lot that > isn't documented, and we should avoid getting into the same situation > as with distutils where people have to use undocumented APIs to get > anything done. There are a lot of examples, but not so much formal API > documentation. As a specific example, one thing I would like to do is to be able to set up a packaging.pypi client object that lets me query and download distributions. However, rather than just querying PyPI (the default) I'd like to be able to set up a list of locations (PyPI, a local server, and maybe some distribution files stored on my PC) and combine results from all of them. This differs from the mirror support in that I want to combine the lists, not use one as a fallback if the other doesn't exist. From the documentation, I can't tell if this is possible, or a feature request, or unsupported... (Actually, there's not even any documentation saying how the URL(s) in index_url should behave, so how exactly do I set up a local repository anyway...?) On a similar note, at some point, crawler.get_releases('pywin32') needs to work. I believe the issue here is technically with pywin32, which uses non-standard version numbers (214) and is hosted externally (Sourceforge) but I'd expect that packaging.pypi should be able to access anything that's on PyPI, even if other APIs like packaging.version can't deal with them. Ideally, these would be simply things I'd raise as issues on bugs.python.org. But as things stand, such issues aren't getting fixed, and we don't move forward. And without the documentation to back up a debate, it's hard to argue "X is a bug, Y is a feature request, Z is behaving correctly". Paul. PS As I write this, it suggests to me that maybe even the "pick out the stable APIs" approach isn't as simple as we'd like to think it is. You can either read this email as a set of specific documentation points to fix, or as evidence that we should drop packaging.pypi as well for now, even though it's a pretty cool feature... 
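To anchor the API questions Paul raises, the usage under discussion looks roughly like this. A sketch only: Crawler, index_url and get_releases() all appear in the packaging.pypi documentation, but the exact signatures, and the attributes shown on the results, are assumptions of the kind the documentation would need to pin down:

    from packaging.pypi.simple import Crawler

    # Hypothetical local index laid out like PyPI's /simple/ pages.
    crawler = Crawler(index_url='http://localhost:8000/simple/')
    for release in crawler.get_releases('FooBar'):
        print(release.name, release.version)

Combining several such sources, as Paul wants, would mean either instantiating one crawler per location and merging the result lists by hand, or wrapping the crawlers in a class of the kind Alexis suggests later in the thread.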
From merwok at netwok.org Wed Jun 20 17:34:09 2012 From: merwok at netwok.org (Éric Araujo) Date: Wed, 20 Jun 2012 11:34:09 -0400 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <20120620112706.105ca077@pitrou.net> References: <4FE0F336.7030709@netwok.org> <20120620112706.105ca077@pitrou.net> Message-ID: <4FE1ED71.8060403@netwok.org> Hi all, Sorry I can't take the time to reply to all messages, this week I'm fully busy with work and moving out. To answer or correct a few things: - I am lacking time these months, but that's because I'm still getting used to having a full-time job and being settled into a new country. With the feedback we've been getting from people recently, I am motivated, not burned out. - I have started building a group of distutils2 contributors here in Montreal. People are motivated, but it takes time to learn the codebase and tackle the big things. - The four modules identified as a minimal, standalone, good subset all have big problems (the PEPs have open issues, and the modules' APIs need improvements). - Tarek, Georg and Guido have pronounced. With all the respect I have for Antoine's opinion, and the valid concerns he raises and that I don't answer here, I consider option (d) accepted and will scrape together one hour to do it before b1. Regards From g.brandl at gmx.net Wed Jun 20 17:37:52 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Wed, 20 Jun 2012 17:37:52 +0200 Subject: [Python-Dev] cpython: Prefer assertEqual to simply assert per recommendation in issue6727. In-Reply-To: References: Message-ID: On 20.06.2012 16:25, jason.coombs wrote: > http://hg.python.org/cpython/rev/24369f6c4a22 > changeset: 77525:24369f6c4a22 > user: Jason R. Coombs > date: Wed Jun 20 10:24:24 2012 -0400 > summary: > Prefer assertEqual to simply assert per recommendation in issue6727. > Clarified comment on disabled code to reference issue15093. > > files: > Lib/test/test_import.py | 11 ++++++++--- > 1 files changed, 8 insertions(+), 3 deletions(-)
>
> diff --git a/Lib/test/test_import.py b/Lib/test/test_import.py
> --- a/Lib/test/test_import.py
> +++ b/Lib/test/test_import.py
> @@ -707,14 +707,19 @@
>          os.mkdir(self.tagged)
>          init_file = os.path.join(self.tagged, '__init__.py')
>          open(init_file, 'w').close()
> -        assert os.path.exists(init_file)
> +        self.assertEqual(os.path.exists(init_file), True)
>
>          # now create a symlink to the tagged package
>          # sample -> sample-tagged
>          os.symlink(self.tagged, self.package_name)
>
> -        # assert os.path.isdir(self.package_name) # currently fails
> -        assert os.path.isfile(os.path.join(self.package_name, '__init__.py'))
> +        # disabled because os.isdir currently fails (see issue 15093)
> +        # self.assertEqual(os.path.isdir(self.package_name), True)
> +
> +        self.assertEqual(
> +            os.path.isfile(os.path.join(self.package_name, '__init__.py')),
> +            True,
> +        )

Actually, in this case self.assertTrue() is the correct method. cheers, Georg From g.brandl at gmx.net Wed Jun 20 17:44:22 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Wed, 20 Jun 2012 17:44:22 +0200 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <4FE1ED71.8060403@netwok.org> References: <4FE0F336.7030709@netwok.org> <20120620112706.105ca077@pitrou.net> <4FE1ED71.8060403@netwok.org> Message-ID: On 20.06.2012 17:34, Éric Araujo wrote: > Hi all, > > Sorry I can't take the time to reply to all messages, this week I'm > fully busy with work and moving out.
> > To answer or correct a few things: > > - I am lacking time these months, but that's because I'm still getting > used to having a full-time job and being settled into a new country. > With the feedback we've been getting from people recently, I am > motivated, not burned out. > > - I have started building a group of distutils2 contributors here in > Montreal. People are motivated, but it takes time to learn the codebase > and tackle the big things. > > - The four modules identified as a minimal, standalone, good subset all > have big problems (the PEPs have open issues, and the modules' APIs need > improvements). Tarek seems to think otherwise... looks like in the end, this subset could only be included as "provisional". Georg From carl at oddbird.net Wed Jun 20 18:07:46 2012 From: carl at oddbird.net (Carl Meyer) Date: Wed, 20 Jun 2012 10:07:46 -0600 Subject: [Python-Dev] Packaging documentation and packaging.pypi API In-Reply-To: References: Message-ID: <4FE1F552.4000908@oddbird.net> Hi Paul, On 06/20/2012 09:29 AM, Paul Moore wrote: > As a specific example, one thing I would like to do is to be able to > set up a packaging.pypi client object that lets me query and download > distributions. However, rather than just querying PyPI (the default) > I'd like to be able to set up a list of locations (PyPI, a local > server, and maybe some distribution files stored on my PC) and combine > results from all of them. This differs from the mirror support in that > I want to combine the lists, not use one as a fallback if the other > doesn't exist. From the documentation, I can't tell if this is > possible, or a feature request, or unsupported... (Actually, there's > not even any documentation saying how the URL(s) in index_url should > behave, so how exactly do I set up a local repository anyway...?) This is perhaps a tangent, as your point here is to point out what the API of packaging.pypi ought to allow - but pip's PackageFinder class can easily do exactly this for you. Feel free to follow up with me for details if this is actually still a problem you need to solve. Carl From tarek at ziade.org Wed Jun 20 18:24:44 2012 From: tarek at ziade.org (Tarek Ziadé) Date: Wed, 20 Jun 2012 18:24:44 +0200 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <20120620112706.105ca077@pitrou.net> <4FE1ED71.8060403@netwok.org> Message-ID: <4FE1F94C.1000401@ziade.org> On 6/20/12 5:44 PM, Georg Brandl wrote: > On 20.06.2012 17:34, Éric Araujo wrote: >> Hi all, >> >> Sorry I can't take the time to reply to all messages, this week I'm >> fully busy with work and moving out. >> >> To answer or correct a few things: >> >> - I am lacking time these months, but that's because I'm still getting >> used to having a full-time job and being settled into a new country. >> With the feedback we've been getting from people recently, I am >> motivated, not burned out. >> >> - I have started building a group of distutils2 contributors here in >> Montreal. People are motivated, but it takes time to learn the codebase >> and tackle the big things. >> >> - The four modules identified as a minimal, standalone, good subset all >> have big problems (the PEPs have open issues, and the modules' APIs need >> improvements). > Tarek seems to think otherwise... looks like in the end, this subset could > only be included as "provisional".
I defer to Eric -- my answers are probably missing recent changes he
knows about.

>
> Georg
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/ziade.tarek%40gmail.com

From p.f.moore at gmail.com  Wed Jun 20 18:45:23 2012
From: p.f.moore at gmail.com (Paul Moore)
Date: Wed, 20 Jun 2012 17:45:23 +0100
Subject: [Python-Dev] Packaging documentation and packaging.pypi API
In-Reply-To: <4FE1F552.4000908@oddbird.net>
References: <4FE1F552.4000908@oddbird.net>
Message-ID:

On 20 June 2012 17:07, Carl Meyer wrote:
> Hi Paul,
>
> On 06/20/2012 09:29 AM, Paul Moore wrote:
>> As a specific example, one thing I would like to do is to be able to
>> set up a packaging.pypi client object that lets me query and download
>> distributions. However, rather than just querying PyPI (the default)
>> I'd like to be able to set up a list of locations (PyPI, a local
>> server, and maybe some distribution files stored on my PC) and combine
>> results from all of them. This differs from the mirror support in that
>> I want to combine the lists, not use one as a fallback if the other
>> doesn't exist. From the documentation, I can't tell if this is
>> possible, or a feature request, or unsupported... (Actually, there's
>> not even any documentation saying how the URL(s) in index_url should
>> behave, so how exactly do I set up a local repository anyway...?)
>
> This is perhaps a tangent, as your point here is to point out what the
> API of packaging.pypi ought to allow - but pip's PackageFinder class can
> easily do exactly this for you. Feel free to follow up with me for
> details if this is actually still a problem you need to solve.

Thanks - as you say, it's not so much the actual problem as the
principle of what the packaging API offers that matters here. Although
it does make a good point - to what extent do the packaging APIs draw
on existing experience like that of pip? Given that tools like pip are
used widely to address real requirements, it would seem foolish to
*not* draw on that experience in designing a stdlib API.

(As regards my actual use, it's very much a "back burner" project of
mine - I keep dabbling with writing utilities to grab bdist_wininst
installers and unpack/install them in virtualenvs. The PackageFinder
class might well be useful there. I'll keep it in mind for the next
time I go back to that problem).

Paul.

From brian at python.org  Wed Jun 20 18:54:12 2012
From: brian at python.org (Brian Curtin)
Date: Wed, 20 Jun 2012 11:54:12 -0500
Subject: [Python-Dev] Accepting PEP 397
Message-ID:

As the PEP czar for 397, after Martin's final updates, I hereby
pronounce this PEP "accepted"!

Thanks to Mark Hammond for kicking it off, Vinay Sajip for writing up
the code, Martin von Loewis for recent updates, and everyone in the
community who contributed to the discussions.

I will begin integration work this evening.

From alexis at notmyidea.org  Wed Jun 20 18:57:32 2012
From: alexis at notmyidea.org (Alexis Métaireau)
Date: Wed, 20 Jun 2012 18:57:32 +0200
Subject: [Python-Dev] Packaging documentation and packaging.pypi API
In-Reply-To:
References: <4FE1F552.4000908@oddbird.net>
Message-ID: <4FE200FC.4090206@notmyidea.org>

Le mer. 20 juin 2012 18:45:23 CEST, Paul Moore a écrit :
> Thanks - as you say, it's not so much the actual problem as the
> principle of what the packaging API offers that matters here. Although
> it does make a good point - to what extent do the packaging APIs draw
> on existing experience like that of pip? Given that tools like pip are
> used widely to address real requirements, it would seem foolish to
> *not* draw on that experience in designing a stdlib API.

IIRC, pip relies only on the XML/RPC API to get information about the
distributions from the cheeseshop. The code that's in packaging.pypi was
built with the implementation in setuptools in mind, so we keep
compatibility with setuptools "easy_install". But this is for the
internal implementation. You're right, and I will have a deep look at
what the API in pip looks like when it comes to downloading packages
from PyPI.

That said, this raises one more question on my side: was/is pip intended
to be used as a library rather than as a tool / are there some people
that are actually building tools on top of pip this way?

From alexis at notmyidea.org  Wed Jun 20 19:05:57 2012
From: alexis at notmyidea.org (Alexis Métaireau)
Date: Wed, 20 Jun 2012 19:05:57 +0200
Subject: [Python-Dev] Packaging documentation and packaging.pypi API
In-Reply-To:
References:
Message-ID: <4FE202F5.1060808@notmyidea.org>

On 20/06/2012 17:29, Paul Moore wrote:
>>
>> I wasn't aware of this - I've had a look and my first thought is that
>> the documentation needs completing. At the moment, there's a lot that
>> isn't documented, and we should avoid getting into the same situation
>> as with distutils where people have to use undocumented APIs to get
>> anything done. There are a lot of examples, but not so much formal API
>> documentation. So that's something we definitely want to fix.

The code is heavily annotated, and it was written from the start so that
the documentation could be generated automatically with Sphinx, so it
would make no sense not to do it. That takes care of the formal API
documentation, which seems easy to hook up to Sphinx. I'll also review
all the documentation there to make sure that it perfectly makes sense.

> As a specific example, one thing I would like to do is to be able to
> set up a packaging.pypi client object that lets me query and download
> distributions. However, rather than just querying PyPI (the default)
> I'd like to be able to set up a list of locations (PyPI, a local
> server, and maybe some distribution files stored on my PC) and combine
> results from all of them. This differs from the mirror support in that
> I want to combine the lists, not use one as a fallback if the other
> doesn't exist. From the documentation, I can't tell if this is
> possible, or a feature request, or unsupported... (Actually, there's
> not even any documentation saying how the URL(s) in index_url should
> behave, so how exactly do I set up a local repository anyway...?)

That's not something possible out of the box using the crawlers the way
they are defined (iow, that's not one supported use case), *but* it's
possible to make a class on top of the existing ones which could provide
this kind of fallback feature. I'm not sure whether we want that to be a
part of packaging.pypi, but that's definitely something this API makes
possible without too much trouble.

> On a similar note, at some point, crawler.get_releases('pywin32')
> needs to work. I believe the issue here is technically with pywin32,
> which uses non-standard version numbers (214) and is hosted externally
> (Sourceforge) but I'd expect that packaging.pypi should be able to
> access anything that's on PyPI, even if other APIs like
> packaging.version can't deal with them.

If this is not working / not following the links that are present in the
cheeseshop, then this should be considered a bug.

> Ideally, these would be simply things I'd raise as issues on
> bugs.python.org. But as things stand, such issues aren't getting
> fixed, and we don't move forward. And without the documentation to
> back up a debate, it's hard to argue "X is a bug, Y is a feature
> request, Z is behaving correctly".

Alright, so this is a true documentation issue. I will make it clearer
what packaging.pypi makes possible and what it doesn't.

From carl at oddbird.net  Wed Jun 20 19:09:25 2012
From: carl at oddbird.net (Carl Meyer)
Date: Wed, 20 Jun 2012 11:09:25 -0600
Subject: [Python-Dev] Packaging documentation and packaging.pypi API
In-Reply-To: <4FE200FC.4090206@notmyidea.org>
References: <4FE1F552.4000908@oddbird.net> <4FE200FC.4090206@notmyidea.org>
Message-ID: <4FE203C5.1020602@oddbird.net>

Hi Alexis,

On 06/20/2012 10:57 AM, Alexis Métaireau wrote:
> Le mer. 20 juin 2012 18:45:23 CEST, Paul Moore a écrit :
>> Thanks - as you say, it's not so much the actual problem as the
>> principle of what the packaging API offers that matters here. Although
>> it does make a good point - to what extent do the packaging APIs draw
>> on existing experience like that of pip? Given that tools like pip are
>> used widely to address real requirements, it would seem foolish to
>> *not* draw on that experience in designing a stdlib API.
>
> IIRC, pip relies only on the XML/RPC API to get information about the
> distributions from the cheeseshop. The code that's in packaging.pypi was
> built with the implementation in setuptools in mind, so we keep
> compatibility with setuptools "easy_install".

No, this is not accurate. Pip's PackageFinder uses setuptools-compatible
link-scraping, not the XMLRPC API, and it is the PackageFinder that is
used to actually find distributions to install. I think PackageFinder is
pretty much equivalent to what packaging.pypi is intended to do.

Pip does have a separate "search" command that uses the XMLRPC API -
this is the only part of pip that uses XMLRPC. I consider this a bug in
pip, because the results can be inconsistent with actual installation
using PackageFinder, and "search" can't be used with mirrors or private
indexes (unless they implement the XMLRPC API). The "search" command
should just use PackageFinder instead.

> That said, this raises one more question on my side: was/is pip
> intended to be used as a library rather than as a tool / are there some
> people that are actually building tools on top of pip this way?

Pip's internal APIs are not documented, and they aren't the cleanest
APIs ever, but some of them (particularly PackageFinder and
InstallRequirement/RequirementSet) can be used without too much
difficulty, and some people are using them. Not a lot of people, I don't
think; I don't have hard numbers. I haven't seen much in the way of
public reusable tools built atop pip, but I've talked with a few people
building internal deployment tools that use pip as a library.
Carl

From pje at telecommunity.com  Wed Jun 20 19:29:36 2012
From: pje at telecommunity.com (PJ Eby)
Date: Wed, 20 Jun 2012 13:29:36 -0400
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To:
References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org>
	<20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net>
Message-ID:

On Wed, Jun 20, 2012 at 9:02 AM, Nick Coghlan wrote:

> On Wed, Jun 20, 2012 at 9:46 PM, Antoine Pitrou
> wrote:
> > Agreed, especially if the "proven in the wild" criterion is required
> > (people won't rush to another third-party distutils replacement, IMHO).
>
> The existence of setuptools means that "proven in the wild" is never
> going to fly - a whole lot of people use setuptools and easy_install
> happily, because they just don't care about the downsides it has in
> terms of loss of control of a system configuration.
>

Um, this may be a smidge off topic, but what "loss of control" are we
talking about here? AFAIK, there isn't anything it does that you can't
override with command line options or the config file. (In most cases,
standard distutils options or config files.) Do you just mean that most
people use the defaults and don't care about there being other options?
And if that's the case, which other options are you referring to?

If the long-term goal is to draw setuptools users over to packaging, then
AFAIK the packaging effort is still missing a few things, like build-time
dependencies and alternatives to setuptools' entry points and "extras", as
well as the ability to integrate version control for building sdists
(without requiring the sdist's recipient to *also* have the version
control integration in order to build the package or recreate a new
sdist).

These are just the missing features that I know of, from recent
distutils-sig discussions; I don't know how complete a list this is.

While no single one of these features is directly used by every project or
even a majority of such projects, there is a correlation between size of a
project and the likelihood that they are depending on one or more of these
features. I.e., the bigger and more widely-used the project, the more
likely it is to either use one of these features, or depend on a project
that does.

Some of these features could be built on top of packaging, in more or less
the same way setuptools is built on top of distutils. But whether they're
done inside or outside of the packaging library, somebody's going to have
to do them, for people to be able to migrate off of setuptools.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From alexis at notmyidea.org  Wed Jun 20 19:44:58 2012
From: alexis at notmyidea.org (Alexis Métaireau)
Date: Wed, 20 Jun 2012 19:44:58 +0200
Subject: [Python-Dev] Packaging documentation and packaging.pypi API
In-Reply-To: <4FE203C5.1020602@oddbird.net>
References: <4FE1F552.4000908@oddbird.net> <4FE200FC.4090206@notmyidea.org>
	<4FE203C5.1020602@oddbird.net>
Message-ID: <4FE20C1A.5090403@notmyidea.org>

Hi Carl,

Thanks for clarifying this. This means that indeed we have the same
goals. I'll have a closer look at the internal pip APIs, as they are
probably really useful and already used in production environments :)

From ncoghlan at gmail.com  Thu Jun 21 05:57:00 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 21 Jun 2012 13:57:00 +1000
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To:
References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org>
	<20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net>
Message-ID:

On Thu, Jun 21, 2012 at 3:29 AM, PJ Eby wrote:
> On Wed, Jun 20, 2012 at 9:02 AM, Nick Coghlan wrote:
>>
>> On Wed, Jun 20, 2012 at 9:46 PM, Antoine Pitrou
>> wrote:
>> > Agreed, especially if the "proven in the wild" criterion is required
>> > (people won't rush to another third-party distutils replacement, IMHO).
>>
>> The existence of setuptools means that "proven in the wild" is never
>> going to fly - a whole lot of people use setuptools and easy_install
>> happily, because they just don't care about the downsides it has in
>> terms of loss of control of a system configuration.
>
> Um, this may be a smidge off topic, but what "loss of control" are we
> talking about here? AFAIK, there isn't anything it does that you can't
> override with command line options or the config file. (In most cases,
> standard distutils options or config files.) Do you just mean that most
> people use the defaults and don't care about there being other options?
> And if that's the case, which other options are you referring to?

No, I mean there are design choices in setuptools that explain why
many people don't like it and are irritated when software they want to
use depends on it without a good reason. Clearly articulating the
reasons that "just include setuptools" is no longer being considered
as an option should be one of the goals of any PEPs associated with
adding packaging back for 3.4.

The reasons I'm personally aware of:
- it's a unilateral runtime fork of the standard library that bears a
lot of responsibility for the ongoing feature freeze in distutils.
Standard assumptions about the behaviour of site and distutils cease
to be valid once setuptools is installed
- overuse of "*.pth" files and the associated sys.path changes for all
Python programs running on a system. setuptools gleefully encourages
the inclusion of non-trivial code snippets in *.pth files that will be
executed by all programs.
- advocacy for the "egg" format and the associated sys.path changes
that result for all Python programs running on a system
- too much magic that is enabled by default and is hard to switch off
(e.g. http://rhodesmill.org/brandon/2009/eby-magic/)

System administrators (and developers that think like system
administrators when it comes to configuration management) *hate* what
setuptools (and setuptools based installers) can do to their systems.
It doesn't matter that package developers don't *have* to do those
things - what matters is that the needs and concerns of system
administrators simply don't appear to have been anywhere on the radar
when setuptools was being designed. (If those concerns actually were
taken into account at some point, it's sure hard to tell from the end
result and the choices of default behaviour)

setuptools is a masterful achievement built on shaky foundations that
will work most of the time. However, when it doesn't work, you're
probably screwed, and as soon as it's present on a system, you know
that your assumptions about understanding the Python interpreter's
startup sequences are probably off.
The efforts around distutils2/packaging have been focused on taking the
time to *fix the foundations first* rather than accepting the inevitable
shortcomings of trying to build something in the middle of a swamp.

> If the long-term goal is to draw setuptools users over to packaging, then
> AFAIK the packaging effort is still missing a few things, like build-time
> dependencies and alternatives to setuptools' entry points and "extras", as
> well as the ability to integrate version control for building sdists
> (without requiring the sdist's recipient to *also* have the version control
> integration in order to build the package or recreate a new sdist).

Right - clearly enumerating the features that draw people to use
setuptools over just using distutils should be a key element in any
PEP for 3.4.

I honestly think a big part of why packaging ended up being incomplete
for 3.3 is that we still don't have a clearly documented answer to two
critical questions:
1. Why do people choose setuptools over distutils?
2. What's wrong with setuptools that meant the idea of including it
directly in the stdlib was ultimately dropped and eventually replaced
with the goal of incorporating distutils2?

I imagine there are answers to both of those questions embedded in
past python-dev, distutils-sig, setuptools and distutils2 mailing list
discussions, but that's no substitute for having them clearly
documented in a PEP (or PEPs, given the scope of the questions).

We've tried to short-circuit this process twice now, first with "just
include setuptools" back around 2.5, and again now with "just include
distutils2 as packaging" for 3.3. It hasn't worked, so maybe it's time
to try doing it properly and clearly articulating the desired end
result. If the end goal is "the bulk of the setuptools feature set
without the problematic features and default behaviours that make
system administrators break out the torches and pitchforks", then we
should *write that down* (and spell out the implications) rather than
assuming that everyone knows the purpose of the exercise.

Regards,
Nick.

-- 
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From chrism at plope.com  Thu Jun 21 06:44:55 2012
From: chrism at plope.com (Chris McDonough)
Date: Thu, 21 Jun 2012 00:44:55 -0400
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To:
References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org>
	<20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net>
Message-ID: <4FE2A6C7.9070207@plope.com>

On 06/20/2012 11:57 PM, Nick Coghlan wrote:
> On Thu, Jun 21, 2012 at 3:29 AM, PJ Eby wrote:
>> On Wed, Jun 20, 2012 at 9:02 AM, Nick Coghlan wrote:
>>>
>>> On Wed, Jun 20, 2012 at 9:46 PM, Antoine Pitrou
>>> wrote:
>>>> Agreed, especially if the "proven in the wild" criterion is required
>>>> (people won't rush to another third-party distutils replacement, IMHO).
>>>
>>> The existence of setuptools means that "proven in the wild" is never
>>> going to fly - a whole lot of people use setuptools and easy_install
>>> happily, because they just don't care about the downsides it has in
>>> terms of loss of control of a system configuration.
>>
>> Um, this may be a smidge off topic, but what "loss of control" are we
>> talking about here? AFAIK, there isn't anything it does that you can't
>> override with command line options or the config file. (In most cases,
>> standard distutils options or config files.) Do you just mean that most
>> people use the defaults and don't care about there being other options?
>> And if that's the case, which other options are you referring to?
>
> No, I mean there are design choices in setuptools that explain why
> many people don't like it and are irritated when software they want to
> use depends on it without a good reason. Clearly articulating the
> reasons that "just include setuptools" is no longer being considered
> as an option should be one of the goals of any PEPs associated with
> adding packaging back for 3.4.
>
> The reasons I'm personally aware of:
> - it's a unilateral runtime fork of the standard library that bears a
> lot of responsibility for the ongoing feature freeze in distutils.
> Standard assumptions about the behaviour of site and distutils cease
> to be valid once setuptools is installed
> - overuse of "*.pth" files and the associated sys.path changes for all
> Python programs running on a system. setuptools gleefully encourages
> the inclusion of non-trivial code snippets in *.pth files that will be
> executed by all programs.
> - advocacy for the "egg" format and the associated sys.path changes
> that result for all Python programs running on a system
> - too much magic that is enabled by default and is hard to switch off
> (e.g. http://rhodesmill.org/brandon/2009/eby-magic/)

All of these are really pretty minor issues compared with the main
benefit of not needing to ship everything with everything else. The
killer feature is that developers can specify dependencies and users can
have those dependencies installed automatically in a cross-platform way.
Everything else is complete noise if this use case is not served.

IMO, the second and third things you mention above (use of pth files and
eggs) are actually features when compared against the result of
something like pip, which installs things using
--single-version-externally-managed and then tries to manage the
resulting potentially-intertwined directories.

Eggs are *easier* to manage than potentially overlapping files and
directories installed into some other directory. Either they exist or
they don't. Either they're mentioned in a .pth file or they aren't.
It's not really that hard.

In any case, any tool that tries to manage distribution installation
will need somewhere to keep distribution metadata. It's a minor mystery
to me why people think it could be done much better than in something
very close to egg format.

> System administrators (and developers that think like system
> administrators when it comes to configuration management) *hate* what
> setuptools (and setuptools based installers) can do to their systems.
> It doesn't matter that package developers don't *have* to do those
> things - what matters is that the needs and concerns of system
> administrators simply don't appear to have been anywhere on the radar
> when setuptools was being designed. (If those concerns actually were
> taken into account at some point, it's sure hard to tell from the end
> result and the choices of default behaviour)

I think you mean easy_install here. And I guess you mean managing .pth
files. Note that if you use pip, neither thing needs to happen. And even
easy_install lets you install a distribution that way (with
--single-version-externally-managed). So I think, as you mention, this
is a matter of defaults (tool and/or flag defaults) rather than core
functionality.

> setuptools is a masterful achievement built on shaky foundations that
> will work most of the time. However, when it doesn't work, you're
> probably screwed, and as soon as it's present on a system, you know
> that your assumptions about understanding the Python interpreter's
> startup sequences are probably off.

It's true setuptools is based on shaky foundations. The rest of the
stuff you say above is pretty darn specious, I think.

> The efforts around
> distutils2/packaging have been focused on taking the time to *fix the
> foundations first* rather than accepting the inevitable shortcomings
> of trying to build something in the middle of a swamp.
>
>> If the long-term goal is to draw setuptools users over to packaging, then
>> AFAIK the packaging effort is still missing a few things, like build-time
>> dependencies and alternatives to setuptools' entry points and "extras", as
>> well as the ability to integrate version control for building sdists
>> (without requiring the sdist's recipient to *also* have the version control
>> integration in order to build the package or recreate a new sdist).
>
> Right - clearly enumerating the features that draw people to use
> setuptools over just using distutils should be a key element in any
> PEP for 3.4
>
> I honestly think a big part of why packaging ended up being incomplete
> for 3.3 is that we still don't have a clearly documented answer to two
> critical questions:
> 1. Why do people choose setuptools over distutils?

Because it supports automated installation of dependencies. Almost
everything else is noise (although some of the other things that
setuptools provides, like entry points and console scripts, are
important noise).

> 2. What's wrong with setuptools that meant the idea of including it
> directly in the stdlib was ultimately dropped and eventually replaced
> with the goal of incorporating distutils2?

Because distutils sucks and setuptools is based on distutils. It's
horrible to need to hack on. Setuptools also has documentation which is
effectively deltas to the distutils docs. As a result, it's very painful
to try to follow the setuptools docs. IMO, it's not that the ideas in
setuptools are bad, it's that setuptools requires a *lot* more docs to
be consumable by normal humans, and those docs need to be a lot more
accessible.

> I imagine there are answers to both of those questions embedded in
> past python-dev, distutils-sig, setuptools and distutils2 mailing list
> discussions, but that's no substitute for having them clearly
> documented in a PEP (or PEPs, given the scope of the questions).
>
> We've tried to shortcircuit this process twice now, first with "just
> include setuptools" back around 2.5, and again now with "just include
> distutils2 as packaging" for 3.3. It hasn't worked, so maybe it's time
> to try doing it properly and clearly articulating the desired end
> result. If the end goal is "the bulk of the setuptools feature set
> without the problematic features and default behaviours that make
> system administrators break out the torches and pitchforks", then we
> should *write that down* (and spell out the implications) rather than
> assuming that everyone knows the purpose of the exercise.

There's all kinds of built-in conflict here wrt those pitchforks. Most
of it is stupid. System administrators tend to be stuck in a "one
package to rule them all" model of deployment, and that model *just
can't work* on a system where you need repeatable deployments of
multiple pieces of Python-based software which may require mutually
exclusive different Python and library versions. Trying to pretend it
can work is just plain madness. Telling developers they must work on an
exact replica of the production system in order to develop the software
is also a terrible, unproductive idea. This is a hopeless, 1990s
waterfall model of deployment and development.

This is why packages like virtualenv and buildout are so popular. Using
them gets developers what they need. Developers get repeatable
cross-platform deployments without requiring special privilege, and
this allows for a *reduction* in the system administrator's role in
deployment. Sometimes a certain type of system administrator can be a
hindrance to deployment and maintenance, like sometimes a DBA can be a
hindrance to a developer who just needs to add a damn table.

With the tools available today (Fabric, buildout, salt, virtualenv,
pip), it's a heck of a lot easier to script a cross-platform deployment
that will work simultaneously on Debian, Red Hat, BSD, and Mac OS X than
it is to build system-level packages for multiple platforms or even
*one* platform. And to be honest, if a system administrator can't cope
with the notion that he may need to forsake his system-level package
installer and instead follow the instructions we give to him to type
four or five commands to get a completely working system deployed or
updated, he probably should not be a system administrator. His job is
going to quickly be taken by folks who *can* cope with such deployment
mechanisms like any cloud service: all the existing Python cloud
deployment services handle distutils/setuptools installs just fine and
these tend to be the *only* way you can get Python software installed
into a system on them.

- C

From ncoghlan at gmail.com  Thu Jun 21 10:45:58 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 21 Jun 2012 18:45:58 +1000
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To: <4FE2A6C7.9070207@plope.com>
References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org>
	<20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net>
	<4FE2A6C7.9070207@plope.com>
Message-ID:

On Thu, Jun 21, 2012 at 2:44 PM, Chris McDonough wrote:
> All of these are really pretty minor issues compared with the main benefit
> of not needing to ship everything with everything else. The killer feature
> is that developers can specify dependencies and users can have those
> dependencies installed automatically in a cross-platform way. Everything
> else is complete noise if this use case is not served.

Cool. This is the kind of thing we need recorded in a PEP - there's a
lot of domain knowledge floating around in the heads of packaging
folks that needs to be captured so we can know *what the addition of
packaging to the standard library is intended to fix*.

And, like it or not, setuptools has a serious PR problem due to the
fact it monkeypatches the standard library, uses *.pth files to alter
sys.path for every installed application by default, actually *uses*
the ability to run code in *.pth files and has hard-to-follow
documentation to boot. I *don't* trust that I fully understand the
import system on any machine with setuptools installed, because it is
demonstrably happy to install state to the file system that will
affect *all* Python programs running on the machine.
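To make the *.pth concern concrete: site.py appends each ordinary line
of a *.pth file to sys.path, and *executes* any line that begins with
"import" at every interpreter startup. A minimal sketch of hypothetical
contents (purely illustrative; a real easy_install.pth is more
elaborate):

    # example.pth, sitting in site-packages (illustrative contents only)
    # an ordinary line: appended to sys.path if the path exists
    ./example_pkg-1.0-py2.7.egg
    # a line starting with "import": executed by site.py at startup
    import os; os.environ.setdefault('EXAMPLE_FLAG', '1')

Since every Python process on the machine imports site by default, any
installer that drops such a file is effectively editing global
interpreter configuration.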
A packaging PEP needs to explain:
- what needs to be done to eliminate any need for monkeypatching
- what's involved in making sure that *.pth files are *not* needed by default
- making sure that executable code in implicitly loaded *.pth files
isn't used *at all*

I *think* trying to achieve this is actually the genesis of the
original distribute fork, which subsequently became distutils2 as Tarek
discovered how much of the complexity in setuptools was actually due
to the desire to *not* officially fork distutils (and instead
monkeypatch it, effectively creating a runtime fork).

However, for those of us that weren't directly involved, this is all
still a strange mystery dealt with by other people. I've cribbed
together bits and pieces just from following the fragments of the
discussions that have happened on python-dev and at PyCon US, but if
we want the madness to ever stop, then *the problems with the status
quo* need to be written down so that other core developers can
understand them.

In fact, I just remembered that Tarek *has* written a lot of this
down, just not in PEP form: http://www.aosabook.org/en/packaging.html

Cheers,
Nick.

-- 
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From gaofanv at 126.com  Thu Jun 21 10:43:37 2012
From: gaofanv at 126.com (Van Gao)
Date: Thu, 21 Jun 2012 01:43:37 -0700 (PDT)
Subject: [Python-Dev] Cannot find the main Python library during installing some app.
Message-ID: <1340268217300-4979076.post@n6.nabble.com>

hi,

I got the error below while installing libmobiledevice:

checking consistency of all components of *python development
environment... no*
configure: error:
Could not link test program to Python. Maybe the main Python library has
been installed in some non-standard library path. If so, pass it to
configure, via the LDFLAGS environment variable.
Example: ./configure LDFLAGS="-L/usr/non-standard-path/python/lib"
============================================================================
 ERROR!
 You probably have to install the development version of the Python
 package for your distribution. The exact name of this package varies
 among them.
============================================================================

I have installed python2.7, but I cannot find the lib under
/usr/local/lib/python2.7, *so where can I get the development version for
python*? I downloaded Python-2.7.3.tgz from python.org; is there any
difference between the development version and the tgz file? Thanks.

--
View this message in context: http://python.6.n6.nabble.com/Cannot-find-the-main-Python-library-during-installing-some-app-tp4979076.html
Sent from the Python - python-dev mailing list archive at Nabble.com.

From amauryfa at gmail.com  Thu Jun 21 11:11:39 2012
From: amauryfa at gmail.com (Amaury Forgeot d'Arc)
Date: Thu, 21 Jun 2012 11:11:39 +0200
Subject: [Python-Dev] Cannot find the main Python library during installing some app.
In-Reply-To: <1340268217300-4979076.post@n6.nabble.com>
References: <1340268217300-4979076.post@n6.nabble.com>
Message-ID:

Hi,

This mailing list is for the development *of* Python. Development *with*
Python should be discussed on the python-list mailing list, or the
comp.lang.python usenet group. There will be many people there willing to
answer your question...

2012/6/21 Van Gao

> hi,
>
> I got the error below while installing libmobiledevice:
> checking consistency of all components of *python development
> environment... no*
> configure: error:
> Could not link test program to Python.
Maybe the main Python library has > been > installed in some non-standard library path. If so, pass it to configure, > via the LDFLAGS environment variable. > Example: ./configure LDFLAGS="-L/usr/non-standard-path/python/lib" > > > ============================================================================ > ERROR! > You probably have to install the development version of the Python > package > for your distribution. The exact name of this package varies among them. > > > ============================================================================ > > I have installed the python2.7, but I cannot find the lib under the > /usr/local/lib/python2.7, *so where can I get the development version for > python*? I downloaded the Python-2.7.3.tgz from python.org, is there any > different between the development version with the tgz file? Thanks. > > -- > View this message in context: > http://python.6.n6.nabble.com/Cannot-find-the-main-Python-library-during-installing-some-app-tp4979076.html > Sent from the Python - python-dev mailing list archive at Nabble.com. > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/amauryfa%40gmail.com > -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From phd at phdru.name Thu Jun 21 11:14:20 2012 From: phd at phdru.name (Oleg Broytman) Date: Thu, 21 Jun 2012 13:14:20 +0400 Subject: [Python-Dev] Cannot find the main Python library during installing some app. In-Reply-To: <1340268217300-4979076.post@n6.nabble.com> References: <1340268217300-4979076.post@n6.nabble.com> Message-ID: <20120621091420.GB8706@iskra.aviel.ru> Hello. We are sorry but we cannot help you. This mailing list is to work on developing Python (adding new features to Python itself and fixing bugs); if you're having problems learning, understanding or using Python, please find another forum. Probably python-list/comp.lang.python mailing list/news group is the best place; there are Python developers who participate in it; you may get a faster, and probably more complete, answer there. See http://www.python.org/community/ for other lists/news groups/fora. Thank you for understanding. On Thu, Jun 21, 2012 at 01:43:37AM -0700, Van Gao wrote: > ============================================================================ > ERROR! > You probably have to install the development version of the Python > package > for your distribution. The exact name of this package varies among them. > ============================================================================ This is the key. You have to install the development version of the Python package *for your distribution*, not python from sources. Oleg. -- Oleg Broytman http://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. 
From d.s.seljebotn at astro.uio.no Thu Jun 21 11:08:33 2012 From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn) Date: Thu, 21 Jun 2012 11:08:33 +0200 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> Message-ID: <4FE2E491.8010504@astro.uio.no> On 06/21/2012 05:57 AM, Nick Coghlan wrote: > On Thu, Jun 21, 2012 at 3:29 AM, PJ Eby wrote: >> On Wed, Jun 20, 2012 at 9:02 AM, Nick Coghlan wrote: >>> >>> On Wed, Jun 20, 2012 at 9:46 PM, Antoine Pitrou >>> wrote: >>>> Agreed, especially if the "proven in the wild" criterion is required >>>> (people won't rush to another third-party distutils replacement, IMHO). >>> >>> The existence of setuptools means that "proven in the wild" is never >>> going to fly - a whole lot of people use setuptools and easy_install >>> happily, because they just don't care about the downsides it has in >>> terms of loss of control of a system configuration. >> >> >> Um, this may be a smidge off topic, but what "loss of control" are we >> talking about here? AFAIK, there isn't anything it does that you can't >> override with command line options or the config file. (In most cases, >> standard distutils options or config files.) Do you just mean that most >> people use the defaults and don't care about there being other options? And >> if that's the case, which other options are you referring to? > > No, I mean there are design choices in setuptools that explain why > many people don't like it and are irritated when software they want to > use depends on it without a good reason. Clearly articulating the > reasons that "just include setuptools" is no longer being considered > as an option should be one of the goals of any PEPs associated with > adding packaging back for 3.4. > > The reasons I'm personally aware of: > - it's a unilateral runtime fork of the standard library that bears a > lot of responsibility for the ongoing feature freeze in distutils. > Standard assumptions about the behaviour of site and distutils cease > to be valid once setuptools is installed > - overuse of "*.pth" files and the associated sys.path changes for all > Python programs running on a system. setuptools gleefully encourages > the inclusion of non-trivial code snippets in *.pth files that will be > executed by all programs. > - advocacy for the "egg" format and the associated sys.path changes > that result for all Python programs running on a system > - too much magic that is enabled by default and is hard to switch off > (e.g. http://rhodesmill.org/brandon/2009/eby-magic/) > > System administrators (and developers that think like system > administrators when it comes to configuration management) *hate* what > setuptools (and setuptools based installers) can do to their systems. > It doesn't matter that package developers don't *have* to do those > things - what matters is that the needs and concerns of system > administrators simply don't appear to have been anywhere on the radar > when setuptools was being designed. (If those concerns actually were > taken into account at some point, it's sure hard to tell from the end > result and the choices of default behaviour) David Cournapeau's Bento project takes the opposite approach, everything is explicit and without any magic. http://cournape.github.com/Bento/ It had its 0.1.0 release a week ago. Please, I don't want to reopen any discussions about Bento here -- distutils2 vs. 
Bento discussions have been less than constructive in the past -- I just
wanted to make sure everybody is aware that distutils2 isn't the only
horse in this race. I don't know if there are others too?

-- 
Dag Sverre Seljebotn

From cournape at gmail.com  Thu Jun 21 11:28:17 2012
From: cournape at gmail.com (David Cournapeau)
Date: Thu, 21 Jun 2012 10:28:17 +0100
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To:
References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org>
	<20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net>
Message-ID:

On Thu, Jun 21, 2012 at 9:45 AM, Nick Coghlan wrote:

> On Thu, Jun 21, 2012 at 2:44 PM, Chris McDonough wrote:
> > All of these are really pretty minor issues compared with the main
> > benefit of not needing to ship everything with everything else. The
> > killer feature is that developers can specify dependencies and users
> > can have those dependencies installed automatically in a
> > cross-platform way. Everything else is complete noise if this use
> > case is not served.
>
> Cool. This is the kind of thing we need recorded in a PEP - there's a
> lot of domain knowledge floating around in the heads of packaging
> folks that needs to be captured so we can know *what the addition of
> packaging to the standard library is intended to fix*.
>
> And, like it or not, setuptools has a serious PR problem due to the
> fact it monkeypatches the standard library, uses *.pth files to alter
> sys.path for every installed application by default, actually *uses*
> the ability to run code in *.pth files and has hard-to-follow
> documentation to boot. I *don't* trust that I fully understand the
> import system on any machine with setuptools installed, because it is
> demonstrably happy to install state to the file system that will
> affect *all* Python programs running on the machine.
>
> A packaging PEP needs to explain:
> - what needs to be done to eliminate any need for monkeypatching
> - what's involved in making sure that *.pth files are *not* needed by default
> - making sure that executable code in implicitly loaded *.pth files
> isn't used *at all*
>

It is not a PEP, but here are a few reasons why extending distutils is
difficult (taken from our experience in the scipy community, which has by
far the biggest extension of distutils AFAIK):
http://cournape.github.com/Bento/html/faq.html#why-not-extending-existing-tools-distutils-etc

While I believe setuptools has been a net negative for the scipy community
because of the way it works and for the reason you mentioned, I think it
is fair to say it is not really possible to do any differently if you rely
on distutils.

If specifying install dependencies is the killer feature of setuptools,
why can't we have a very simple module that adds the necessary 3 keywords
to record it, and let 3rd party tools deal with it as they wish? That
would not even require specifying the format, and would leave us more time
to deal with the other, more difficult questions.

David
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From danny at cs.huji.ac.il  Thu Jun 21 12:17:01 2012
From: danny at cs.huji.ac.il (Daniel Braniss)
Date: Thu, 21 Jun 2012 13:17:01 +0300
Subject: [Python-Dev] import too slow on NFS based systems
Message-ID:

Hi,

when lib/python/site-packages/ is accessed via NFS, open/stat/access is
very expensive/slow.
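To make the cost concrete: for every import, each directory on sys.path
is probed for several candidate file names, and over NFS each failed
stat() or open() is a network round trip. A minimal sketch of the kind
of cache being discussed here (purely illustrative, with invented names;
roughly the idea importlib adopts in 3.3, not its actual code):

    import os

    _dir_cache = {}  # directory -> (mtime, frozenset of entry names)

    def cached_listing(directory):
        # re-list the directory only when its mtime changes
        try:
            mtime = os.stat(directory).st_mtime
        except OSError:
            return frozenset()
        cached = _dir_cache.get(directory)
        if cached is None or cached[0] != mtime:
            cached = (mtime, frozenset(os.listdir(directory)))
            _dir_cache[directory] = cached
        return cached[1]

    def has_module(directory, name):
        # one cheap membership test instead of several stat()/open() calls
        listing = cached_listing(directory)
        return any(c in listing for c in (name, name + '.py', name + '.pyc'))

One stat() per directory to check freshness replaces the per-import
probing, which is exactly the trade-off Antoine mentions below for 3.3.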
A simple solution is to use an in-memory directory search/hash, so I was
wondering if this has been considered in the past; if not, and I come up
with a working solution for Unix (at least Linux/FreeBSD), will it be
considered?

thanks,
	danny

From phd at phdru.name  Thu Jun 21 12:30:17 2012
From: phd at phdru.name (Oleg Broytman)
Date: Thu, 21 Jun 2012 14:30:17 +0400
Subject: [Python-Dev] import too slow on NFS based systems
In-Reply-To:
References:
Message-ID: <20120621103017.GA10215@iskra.aviel.ru>

On Thu, Jun 21, 2012 at 01:17:01PM +0300, Daniel Braniss wrote:
> when lib/python/site-packages/ is accessed via NFS, open/stat/access is
> very expensive/slow.
>
> A simple solution is to use an in-memory directory search/hash, so I was
> wondering if this has been considered in the past; if not, and I come up
> with a working solution for Unix (at least Linux/FreeBSD), will it be
> considered?

   I'm sure it'll be considered, providing that the solution doesn't slow
down local FS access.

Oleg.
-- 
     Oleg Broytman            http://phdru.name/           phd at phdru.name
           Programmers don't die, they just GOSUB without RETURN.

From armin.ronacher at active-4.com  Thu Jun 21 12:23:25 2012
From: armin.ronacher at active-4.com (Armin Ronacher)
Date: Thu, 21 Jun 2012 10:23:25 -0000
Subject: [Python-Dev] Add os.path.resolve to simplify the use of os.readlink
Message-ID: <8908029a4e9614c209dff69df4f5a79d.squirrel@mail.active-4.com>

Due to a user error on my part I was not using os.readlink correctly.
Since links can be relative to their location I think it would make sense
to provide an os.path.resolve helper that automatically returns the
absolute path:

    import errno
    import os
    from os.path import abspath, dirname, join, normpath

    def resolve(filename):
        try:
            target = os.readlink(filename)
        except OSError as e:
            if e.errno == errno.EINVAL:
                return abspath(filename)
            raise
        return normpath(join(dirname(filename), target))

The above implementation also does not fail if an entity exists but is
not a link; it just returns the absolute path of the given filename in
that case.

Regards,
Armin

From solipsis at pitrou.net  Thu Jun 21 12:33:37 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Thu, 21 Jun 2012 12:33:37 +0200
Subject: [Python-Dev] import too slow on NFS based systems
References:
Message-ID: <20120621123337.430395c4@pitrou.net>

On Thu, 21 Jun 2012 13:17:01 +0300
Daniel Braniss wrote:
> Hi,
> when lib/python/site-packages/ is accessed via NFS, open/stat/access is
> very expensive/slow.
>
> A simple solution is to use an in-memory directory search/hash, so I was
> wondering if this has been considered in the past; if not, and I come up
> with a working solution for Unix (at least Linux/FreeBSD), will it be
> considered?

There is such a thing in Python 3.3, although some stat() calls are
still necessary to know whether the directory caches are fresh.
Can you give it a try and provide some feedback?

Regards

Antoine.

From lists at cheimes.de  Thu Jun 21 12:52:28 2012
From: lists at cheimes.de (Christian Heimes)
Date: Thu, 21 Jun 2012 12:52:28 +0200
Subject: [Python-Dev] Add os.path.resolve to simplify the use of os.readlink
In-Reply-To: <8908029a4e9614c209dff69df4f5a79d.squirrel@mail.active-4.com>
References: <8908029a4e9614c209dff69df4f5a79d.squirrel@mail.active-4.com>
Message-ID: <4FE2FCEC.30000@cheimes.de>

Am 21.06.2012 12:23, schrieb Armin Ronacher:
> Due to a user error on my part I was not using os.readlink correctly.
> Since links can be relative to their location I think it would make sense
> to provide an os.path.resolve helper that automatically returns the
> absolute path:
>
>     def resolve(filename):
>         try:
>             target = os.readlink(filename)
>         except OSError as e:
>             if e.errno == errno.EINVAL:
>                 return abspath(filename)
>             raise
>         return normpath(join(dirname(filename), target))
>
> The above implementation also does not fail if an entity exists but is
> not a link; it just returns the absolute path of the given filename in
> that case.

+1

Does the code handle a chain of absolute and relative symlinks
correctly, for example a relative symlink that points to another
relative symlink in a different directory that points to a file in a
third directory?

Christian

From armin.ronacher at active-4.com  Thu Jun 21 13:10:44 2012
From: armin.ronacher at active-4.com (Armin Ronacher)
Date: Thu, 21 Jun 2012 11:10:44 -0000
Subject: [Python-Dev] Add os.path.resolve to simplify the use of os.readlink
In-Reply-To: <4FE2FCEC.30000@cheimes.de>
References: <8908029a4e9614c209dff69df4f5a79d.squirrel@mail.active-4.com>
	<4FE2FCEC.30000@cheimes.de>
Message-ID:

Hi,

> Am 21.06.2012 12:23, schrieb Armin Ronacher:
> Does the code handle a chain of absolute and relative symlinks
> correctly, for example a relative symlink that points to another
> relative symlink in a different directory that points to a file in a
> third directory?
No, but that's a good point.  It should attempt to resolve these in a
loop until it either loops too often (would have to check the POSIX spec
for a reasonable value) or until it terminates by finding an actual file
or directory.

Regards,
Armin

From solipsis at pitrou.net  Thu Jun 21 13:18:43 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Thu, 21 Jun 2012 13:18:43 +0200
Subject: [Python-Dev] Add os.path.resolve to simplify the use of os.readlink
References: <8908029a4e9614c209dff69df4f5a79d.squirrel@mail.active-4.com>
	<4FE2FCEC.30000@cheimes.de>
Message-ID: <20120621131843.5318ff0b@pitrou.net>

On Thu, 21 Jun 2012 11:10:44 -0000
"Armin Ronacher" wrote:
> Hi,
>
> > Am 21.06.2012 12:23, schrieb Armin Ronacher:
> > Does the code handle a chain of absolute and relative symlinks
> > correctly, for example a relative symlink that points to another
> > relative symlink in a different directory that points to a file in a
> > third directory?
> No, but that's a good point.  It should attempt to resolve these in a
> loop until it either loops too often (would have to check the POSIX spec
> for a reasonable value) or until it terminates by finding an actual file
> or directory.

You could take a look at the resolve() algorithm in pathlib:
http://pypi.python.org/pypi/pathlib/

Regards

Antoine.

From lists at cheimes.de  Thu Jun 21 13:26:12 2012
From: lists at cheimes.de (Christian Heimes)
Date: Thu, 21 Jun 2012 13:26:12 +0200
Subject: [Python-Dev] Add os.path.resolve to simplify the use of os.readlink
In-Reply-To:
References: <8908029a4e9614c209dff69df4f5a79d.squirrel@mail.active-4.com>
	<4FE2FCEC.30000@cheimes.de>
Message-ID: <4FE304D4.3060002@cheimes.de>

Am 21.06.2012 13:10, schrieb Armin Ronacher:

Hello Armin,

> No, but that's a good point.  It should attempt to resolve these in a
> loop until it either loops too often (would have to check the POSIX spec
> for a reasonable value) or until it terminates by finding an actual file
> or directory.

The specs mention sysconf(SYMLOOP_MAX) / _POSIX_SYMLOOP_MAX for the
maximum count of lookups. The limit is lower than I expected.
On my system it's defined as 8 in /usr/include/x86_64-linux-gnu/bits/posix1_lim.h. The limit would also handle self referencing loops correctly. BTW Is there a better way than raise OSError(errno.ELOOP, os.strerror(errno.ELOOP), filename) to raise a correct OSError with errno, errno message and filename? A classmethod like "OSError.from_errno(errno, filename=None) -> proper subclass auf OSError with sterror() set" would reduce the burden for developers. PEP mentions the a similar idea at http://www.python.org/dev/peps/pep-3151/#implementation but this was never implemented. Christian From phd at phdru.name Thu Jun 21 13:34:10 2012 From: phd at phdru.name (Oleg Broytman) Date: Thu, 21 Jun 2012 15:34:10 +0400 Subject: [Python-Dev] Add os.path.resolve to simplify the use of os.readlink In-Reply-To: References: <8908029a4e9614c209dff69df4f5a79d.squirrel@mail.active-4.com> <4FE2FCEC.30000@cheimes.de> Message-ID: <20120621113410.GB10215@iskra.aviel.ru> On Thu, Jun 21, 2012 at 11:10:44AM -0000, Armin Ronacher wrote: > would have to check the POSIX spec for a > reasonable value POSIX allows 8 links: http://infohost.nmt.edu/~eweiss/222_book/222_book/0201433079/ch02lev1sec5.html _POSIX_SYMLOOP_MAX - number of symbolic links that can be traversed during pathname resolution: 8 The constant _POSIX_SYMLOOP_MAX from unistd.h: #define _POSIX_SYMLOOP_MAX 8 Oleg. -- Oleg Broytman http://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. From ironfroggy at gmail.com Thu Jun 21 13:34:58 2012 From: ironfroggy at gmail.com (Calvin Spealman) Date: Thu, 21 Jun 2012 07:34:58 -0400 Subject: [Python-Dev] Fwd: Re: Add os.path.resolve to simplify the use of os.readlink In-Reply-To: References: <8908029a4e9614c209dff69df4f5a79d.squirrel@mail.active-4.com> Message-ID: ---------- Forwarded message ---------- (whoops from my phone) On Jun 21, 2012 6:32 AM, "Armin Ronacher" wrote: > > Due to an user error on my part I was not using os.readlink correctly. > Since links can be relative to their location I think it would make sense > to provide an os.path.resolve helper that automatically returns the > absolute path: > > def resolve(filename): > try: > target = os.readlink(filename) > except OSError as e: > if e.errno == errno.EINVAL: > return abspath(filename) > raise > return normpath(join(dirname(filename), target)) > > The above implementation also does not fail if an entity exists but is not > a link and just returns the absolute path of the given filename in that > case. > Does it need to be an absolute path, and what if the advantage of that? Can it returned absolute if that's what you gave it, and relative otherwise? > > Regards, > Armin > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/ironfroggy%40gmail.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chrism at plope.com Thu Jun 21 13:48:34 2012 From: chrism at plope.com (Chris McDonough) Date: Thu, 21 Jun 2012 07:48:34 -0400 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE2A6C7.9070207@plope.com> Message-ID: <4FE30A12.8020202@plope.com> On 06/21/2012 04:45 AM, Nick Coghlan wrote: > On Thu, Jun 21, 2012 at 2:44 PM, Chris McDonough wrote: >> All of these are really pretty minor issues compared with the main benefit >> of not needing to ship everything with everything else. The killer feature >> is that developers can specify dependencies and users can have those >> dependencies installed automatically in a cross-platform way. Everything >> else is complete noise if this use case is not served. > > Cool. This is the kind of thing we need recorded in a PEP - there's a > lot of domain knowledge floating around in the heads of packaging > folks that needs to be captured so we can know *what the addition of > packaging to the standard library is intended to fix*. > > And, like it or not, setuptools has a serious PR problem due to the > fact it monkeypatches the standard library, uses *.pth files to alter > sys.path for every installed application by default, actually *uses* > the ability to run code in *.pth files and has hard to follow > documentation to boot. I *don't* trust that I fully understand the > import system on any machine with setuptools installed, because it is > demonstrably happy to install state to the file system that will > affect *all* Python programs running on the machine. I don't know about Red Hat but both Ubuntu and Apple put all kinds of stuff on the default sys.path of the system Python of the box that's related to their software's concerns only. I don't understand why people accept this but get crazy about the fact that installing a setuptools distribution using easy_install changes the default sys.path. Installing a distribution will change behavior whether or not sys.path is changed as a result. That's its purpose. The code that runs in the .pth *file* (there's only one that matters: easy_install.pth) just mutates sys.path. The end result is this: if you understand how sys.path works, you understand how eggs work. Each egg is addded to sys.path. That's all there is to it. It's the same as manually mutating a global PYTHONPATH, except you don't need to do it. And note that this is not "setuptools" in general. It's easy_install in particular. Everything you've brought up so far I think is limited to easy_install. It doesn't happen when you use pip. I think it's a mistake that pip doesn't do it, but I think you have to make more accurate distinctions. > A packaging PEP needs to explain: > - what needs to be done to eliminate any need for monkeypatching > - what's involved in making sure that *.pth are *not* needed by default > - making sure that executable code in implicitly loaded *.pth files > isn't used *at all* I'll note that these goals are completely sideways to any actual functional goal. It'd be a shame to have monkeypatching going on, but the other stuff I don't think are reasonable goals. Instead they represent fears, and those fears just need to be managed. 
> I *think* trying to achieve this is actually the genesis of the
> original distribute fork, that subsequently became distutils2 as Tarek
> discovered how much of the complexity in setuptools was actually due
> to the desire to *not* officially fork distutils (and instead
> monkeypatch it, effectively creating a runtime fork).
>
> However, for those of us that weren't directly involved, this is all
> still a strange mystery dealt with by other people. I've cribbed
> together bits and pieces just from following the fragments of the
> discussions that have happened on python-dev and at PyCon US, but if
> we want the madness to ever stop, then *the problems with the status
> quo* need to be written down so that other core developers can
> understand them.

It'd also be useful if other core developers actually tried to use setuptools in anger. That'd be a good start towards understanding some of its tradeoffs. People can write this stuff down til they're blue in the face, but if core devs don't try the stuff, they'll always fear it.

> In fact, I just remembered that Tarek *has* written a lot of this
> down, just not in PEP form: http://www.aosabook.org/en/packaging.html

Cool.

- C

From tarek at ziade.org Thu Jun 21 13:56:06 2012
From: tarek at ziade.org (=?ISO-8859-1?Q?Tarek_Ziad=E9?=)
Date: Thu, 21 Jun 2012 13:56:06 +0200
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To: <4FE2E491.8010504@astro.uio.no>
References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE2E491.8010504@astro.uio.no>
Message-ID: <4FE30BD6.1060706@ziade.org>

On 6/21/12 11:08 AM, Dag Sverre Seljebotn wrote:
> ...
> David Cournapeau's Bento project takes the opposite approach,
> everything is explicit and without any magic.
>
> http://cournape.github.com/Bento/
>
> It had its 0.1.0 release a week ago.
>
> Please, I don't want to reopen any discussions about Bento here --
> distutils2 vs. Bento discussions have been less than constructive in
> the past -- I just wanted to make sure everybody is aware that
> distutils2 isn't the only horse in this race. I don't know if there
> are others too?
>
That's *exactly* the kind of approach that has made me not want to continue.

People are too focused on implementations, and 'how distutils sucks' 'how setuptools sucks' etc 'I'll do better' etc

Instead of having all the folks involved in packaging sit down together and try to fix the issues together by building PEPs describing what would be a common set of standards, they want to create their own tools from scratch.

That will not work. And I will say here again what I think we should do imho:

1/ take all the packaging PEPs and rework them until everyone is happy (compilation sucks in distutils? write a PEP !!!)

2/ once we have a consensus, write as many tools as you want, if they rely on the same standards => interoperability => win.

But I must be naive because every time I tried to reach people that were building their own tools to ask them to work with us on the PEPs, all I was getting was "distutils sucks!"
It worked with the OS packager guys though: we built a great data files management system in packaging, plus the version scheme (PEP 386).

> --
> Dag Sverre Seljebotn
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/ziade.tarek%40gmail.com

From ncoghlan at gmail.com Thu Jun 21 13:58:51 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 21 Jun 2012 21:58:51 +1000
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To:
References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE2A6C7.9070207@plope.com>
Message-ID:

On Thu, Jun 21, 2012 at 7:28 PM, David Cournapeau wrote:
> If specifying install dependencies is the killer feature of setuptools, why
> can't we have a very simple module that adds the necessary 3 keywords to
> record it, and let 3rd party tools deal with it as they wish? That would
> not even require specifying the format, and would leave us more time to deal
> with the other, more difficult questions.

That low level role is filled by PEP 345 (the latest PyPI metadata format, which adds the new fields), PEP 376 (local installation database) and PEP 386 (version numbering schema).

The corresponding packaging submodules are the ones that were being considered for retention as a reference implementation in 3.3, but are still slated for removal along with the rest of the package (the reference implementations will remain available as part of distutils2 on PyPI).

Whatever UI a Python packaging solution presents to a user, it needs to support those 3 PEPs on the back end for interoperability with other tools (including, eventually, the packaging module in the standard library).

Your feedback on the commands/compilers design sounds valuable, and I would be very interested in seeing a PEP targeting that aspect of the new packaging module (if you look at the start of this thread, the failure to improve the compiler API is one of the reasons for pulling the code from 3.3).

If python-dev ends up playing referee on multiple competing PEPs, that's not necessarily a bad thing. If a consensus solution doesn't meet the needs of key parties that aren't well served by existing approaches (specifically, the scientific community, and enterprise users that want to be able to translate the plethora of language specific packaging systems to a common format for internal use to simplify system administration and configuration management and auditing), then we may as well not bother and let the status quo continue indefinitely.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From tarek at ziade.org Thu Jun 21 14:00:58 2012
From: tarek at ziade.org (=?windows-1252?Q?Tarek_Ziad=E9?=)
Date: Thu, 21 Jun 2012 14:00:58 +0200
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To:
References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620115423.435d7604@pitrou.net> <4FE1A724.4050104@ziade.org> <4FE1B4A6.6030405@ziade.org>
Message-ID: <4FE30CFA.4070106@ziade.org>

On 6/20/12 2:53 PM, Nick Coghlan wrote:
> On Wed, Jun 20, 2012 at 9:31 PM, Tarek Ziadé wrote:
>> Yeah maybe this subset could be left in 3.3
>>
>> and we'd remove packaging-the-installer part (pysetup, commands, compilers)
>>
>> I think it's a good idea !
> OK, to turn this into a concrete suggestion based on the packaging docs. > Declare stable, include in 3.3 > ------------------------------------------ > packaging.version ? Version number classes > packaging.metadata ? Metadata handling > packaging.markers ? Environment markers > packaging.database ? Database of installed distributions I think that's a good subset. +1 on all of the things you said after If you succeed on getting the sci people working on "PEP: Distutils replacement: Compiling Extension Modules" it will be a big win. From oscar.j.benjamin at gmail.com Thu Jun 21 14:07:58 2012 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Thu, 21 Jun 2012 13:07:58 +0100 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <4FE30A12.8020202@plope.com> References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE2A6C7.9070207@plope.com> <4FE30A12.8020202@plope.com> Message-ID: On 21 June 2012 12:48, Chris McDonough wrote: > On 06/21/2012 04:45 AM, Nick Coghlan wrote: > >> On Thu, Jun 21, 2012 at 2:44 PM, Chris McDonough >> wrote: >> >>> All of these are really pretty minor issues compared with the main >>> benefit >>> of not needing to ship everything with everything else. The killer >>> feature >>> is that developers can specify dependencies and users can have those >>> dependencies installed automatically in a cross-platform way. Everything >>> else is complete noise if this use case is not served. >>> >> >> Cool. This is the kind of thing we need recorded in a PEP - there's a >> lot of domain knowledge floating around in the heads of packaging >> folks that needs to be captured so we can know *what the addition of >> packaging to the standard library is intended to fix*. >> >> And, like it or not, setuptools has a serious PR problem due to the >> fact it monkeypatches the standard library, uses *.pth files to alter >> sys.path for every installed application by default, actually *uses* >> the ability to run code in *.pth files and has hard to follow >> documentation to boot. I *don't* trust that I fully understand the >> import system on any machine with setuptools installed, because it is >> demonstrably happy to install state to the file system that will >> affect *all* Python programs running on the machine. >> > > I don't know about Red Hat but both Ubuntu and Apple put all kinds of > stuff on the default sys.path of the system Python of the box that's > related to their software's concerns only. I don't understand why people > accept this but get crazy about the fact that installing a setuptools > distribution using easy_install changes the default sys.path. > I don't like the particular way that easy_install modifies sys.path so that it can no longer be overridden by PYTHONPATH. For a discussion, see: http://stackoverflow.com/questions/5984523/eggs-in-path-before-pythonpath-environment-variable The fact that ubuntu does this for some system ubuntu packages has never bothered me, but the fact that it happens for packages that I install with easy_install has. The typical scenario would be that I: 1) Install some package X with easy_install. 2) Find a bug or some aspect of X that I want to change and checkout the latest version from e.g. github. 3) Try to use PYTHONPATH to test the checked out version and find that easy_install's path modification prevents me from doing so. 
4) Run the quickfix script in the stackoverflow question above and consider not using easy_install for X in future.

Oscar

From cournape at gmail.com Thu Jun 21 14:19:53 2012
From: cournape at gmail.com (David Cournapeau)
Date: Thu, 21 Jun 2012 13:19:53 +0100
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To:
References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE2A6C7.9070207@plope.com>
Message-ID:

On Thu, Jun 21, 2012 at 12:58 PM, Nick Coghlan wrote:
> On Thu, Jun 21, 2012 at 7:28 PM, David Cournapeau wrote:
> > If specifying install dependencies is the killer feature of setuptools, why
> > can't we have a very simple module that adds the necessary 3 keywords to
> > record it, and let 3rd party tools deal with it as they wish? That would
> > not even require specifying the format, and would leave us more time to deal
> > with the other, more difficult questions.
>
> That low level role is filled by PEP 345 (the latest PyPI metadata
> format, which adds the new fields), PEP 376 (local installation
> database) and PEP 386 (version numbering schema).
>
> The corresponding packaging submodules are the ones that were being
> considered for retention as a reference implementation in 3.3, but are
> still slated for removal along with the rest of the package (the
> reference implementations will remain available as part of distutils2
> on PyPI).

I understand the code is already implemented, but I meant that it may be a good idea to have a simple, self-contained module that does just provide the necessary bits for the "setuptools killer feature", and let competing tools deal with it as they please.

> Whatever UI a Python packaging solution presents to a user, it needs
> to support those 3 PEPs on the back end for interoperability with
> other tools (including, eventually, the packaging module in the
> standard library).
>
> Your feedback on the commands/compilers design sounds valuable, and I
> would be very interested in seeing a PEP targeting that aspect of the
> new packaging module (if you look at the start of this thread, the
> failure to improve the compiler API is one of the reasons for pulling
> the code from 3.3).

The problem with compilation is not just the way the compiler classes work. It is how they interact with commands and the like, which ends up being most of the original distutils code. What's wrong with distutils is the whole underlying model, if one can call it that. No PEP will fix the issue if the premise is to work within that model.

There are similar kinds of arguments around the extensibility of distutils: it is not just about monkey-patching, but what kind of API you offer to allow for extensibility, and I think the only way to design this sensibly is to work on real packages and iterate, not writing a PEP as a first step.

David
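[To make the "necessary bits" concrete: from a consumer's point of view, the interoperability layer those PEPs define is small. A sketch using the distutils2 reference implementation; the module path and class name are taken from the PEP 386 reference code and may differ in other tools:]

    # PEP 345-style metadata is a flat key/value file any tool can emit
    # or parse, for example:
    #
    #     Metadata-Version: 1.2
    #     Name: example-dist
    #     Version: 1.0.post1
    #     Requires-Dist: somelib (>=1.0, <2.0)
    #
    # and PEP 386 pins down how such version strings compare:

    from distutils2.version import NormalizedVersion

    assert NormalizedVersion('1.0a1') < NormalizedVersion('1.0')
    assert NormalizedVersion('1.0') < NormalizedVersion('1.0.post1')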
From ncoghlan at gmail.com Thu Jun 21 14:21:55 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 21 Jun 2012 22:21:55 +1000
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To: <4FE30A12.8020202@plope.com>
References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE2A6C7.9070207@plope.com> <4FE30A12.8020202@plope.com>
Message-ID:

On Thu, Jun 21, 2012 at 9:48 PM, Chris McDonough wrote:
> On 06/21/2012 04:45 AM, Nick Coghlan wrote:
>> And, like it or not, setuptools has a serious PR problem due to the
>> fact it monkeypatches the standard library, uses *.pth files to alter
>> sys.path for every installed application by default, actually *uses*
>> the ability to run code in *.pth files and has hard to follow
>> documentation to boot. I *don't* trust that I fully understand the
>> import system on any machine with setuptools installed, because it is
>> demonstrably happy to install state to the file system that will
>> affect *all* Python programs running on the machine.
>
> I don't know about Red Hat but both Ubuntu and Apple put all kinds of stuff
> on the default sys.path of the system Python of the box that's related to
> their software's concerns only. I don't understand why people accept this
> but get crazy about the fact that installing a setuptools distribution using
> easy_install changes the default sys.path.

Because the vendor gets to decide what goes into the base install of the OS. If I'm using the system Python, then I expect sys.path to contain the system paths, just as I expect gcc to be able to see the system include paths. If I don't want that, I'll use virtualenv or a completely separate Python installation.

However, when I install a new Python package into site-packages it *should* just sit there and have zero impact on other Python applications that don't import that package. As soon as someone installs a *.pth file, however, that's *no longer the case* - every Python application on that machine will now be scanning additional paths for modules whether it wants to or not. It's unnecessary coupling between components that *should* be completely independent of each other.

Now, *.pth support in the interpreter certainly cannot be blamed on setuptools, but encouraging use of a packaging format that effectively requires them certainly can be. It's similar to the reason why monkeypatching and global environment variable modifications (including PYTHONPATH) are a problem: as soon as you start doing that kind of thing, you're introducing coupling that *shouldn't exist*. If there is no better solution, then sure, do it as a near term workaround, but that isn't the same as accepting it as the long term answer.

> Installing a distribution will change behavior whether or not sys.path is
> changed as a result. That's its purpose.

No it won't. An ordinary package will only change the behaviour of Python applications that import a package by that name. Other Python applications will be completely unaffected (as it should be).

> The code that runs in the .pth *file* (there's only one that matters:
> easy_install.pth) just mutates sys.path. The end result is this: if you
> understand how sys.path works, you understand how eggs work. Each egg is
> added to sys.path. That's all there is to it. It's the same as manually
> mutating a global PYTHONPATH, except you don't need to do it.

Yes, it's the same as mutating PYTHONPATH. That's a similarly bad system global change.
Individual libraries do not have the right to change the sys.path seen on initialisation by every other Python application on that system. > And note that this is not "setuptools" in general. ?It's easy_install in > particular. ?Everything you've brought up so far I think is limited to > easy_install. ?It doesn't happen when you use pip. ?I think it's a mistake > that pip doesn't do it, but I think you have to make more accurate > distinctions. What part of "PR problem" was unclear? setuptools and easy_install are inextricably linked in everyone's minds, just like pip and distribute. >> A packaging PEP needs to explain: >> - what needs to be done to eliminate any need for monkeypatching >> - what's involved in making sure that *.pth are *not* needed by default >> - making sure that executable code in implicitly loaded *.pth files >> isn't used *at all* > > I'll note that these goals are completely sideways to any actual functional > goal. ?It'd be a shame to have monkeypatching going on, but the other stuff > I don't think are reasonable goals. ?Instead they represent fears, and those > fears just need to be managed. No, they reflect the mindset of someone with configuration management and auditing responsibilities for shared systems with multiple applications installed which may be written in a variety of languages, not just Python. You may not care about those people, but I do. > It'd also be useful if other core developers actually tried to use > setuptools in anger. ?That'd be a good start towards understanding some of > its tradeoffs. ?People can write this stuff down til they're blue in the > face, but if core devs ?don't try the stuff, they'll always fear it. setuptools (or, perhaps, easy_install, although I've seen enough posts about eggs being uploaded to PyPI to suspect otherwise), encourages the deployment of system configuration changes that alter the runtime environment of every single Python application executed on the system. That's simply not cool. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From ncoghlan at gmail.com Thu Jun 21 14:31:32 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 21 Jun 2012 22:31:32 +1000 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE2A6C7.9070207@plope.com> Message-ID: On Thu, Jun 21, 2012 at 10:19 PM, David Cournapeau wrote: > > > On Thu, Jun 21, 2012 at 12:58 PM, Nick Coghlan wrote: >> >> On Thu, Jun 21, 2012 at 7:28 PM, David Cournapeau >> wrote: >> > If specifying install dependencies is the killer feature of setuptools, >> > why >> > can't we have a very simple module that adds the necessary 3 keywords to >> > record it, and let 3rd party tools deal with it as they wish ? That >> > would >> > not even require speciying the format, and would let us more time to >> > deal >> > with the other, more difficult questions. >> >> That low level role is filled by PEP 345 (the latest PyPI metadata >> format, which adds the new fields), PEP 376 (local installation >> database) and PEP 386 (version numbering schema). >> >> The corresponding packaging submodules are the ones that were being >> considered for retention as a reference implementation in 3.3, but are >> still slated for removal along with the rest of the package (the >> reference implementations will remain available as part of distutils2 >> on PyPI). 
> > > I understand the code is already implemented, but I meant that it may be a > good idea to have a simple, self-contained module that does just provide the > necessary bits for the "setuptools killer feature", and let competing tools > deal with it as they please. If you're genuinely interested in that prospect, I suggest collaborating with the distutils2 team to extract the four identified modules (and any necessary support code) as a "distmeta" project on PyPI: distmeta.version ? Version number classes distmeta.metadata ? Metadata handling distmeta.markers ? Environment markers distmeta.database ? Database of installed distributions That will allow faster iteration on the core interoperability standards prior to reincorporation in 3.4, and explicitly decouple them from the higher level (more contentious) features. >> Whatever UI a Python packaging solution presents to a user, it needs >> to support those 3 PEPs on the back end for interoperability with >> other tools (including, eventually, the packaging module in the >> standard library). >> >> Your feedback on the commands/compilers design sounds valuable, and I >> would be very interested in seeing a PEP targeting that aspect of the >> new packaging module (if you look at the start of this thread, the >> failure to improve the compiler API is one of the reasons for pulling >> the code from 3.3). > > > The problem with compilation is not just the way the compiler classes work. > It it how they interact with commands and the likes, which ends up being > most of the original distutils code. What's wrong with ?distutils is the > whole underlying model, if one can call that. No PEP will fix the issue if > the premise is to work within that model. I don't accept the premise that the 3.4 packaging solution must be restricted to the distutils semantic model. However, no alternative strategy has been formally presented to python-dev. Regards, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From d.s.seljebotn at astro.uio.no Thu Jun 21 14:45:23 2012 From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn) Date: Thu, 21 Jun 2012 14:45:23 +0200 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <4FE30BD6.1060706@ziade.org> References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE2E491.8010504@astro.uio.no> <4FE30BD6.1060706@ziade.org> Message-ID: <4FE31763.1020609@astro.uio.no> On 06/21/2012 01:56 PM, Tarek Ziad? wrote: > On 6/21/12 11:08 AM, Dag Sverre Seljebotn wrote: >> ... >> David Cournapeau's Bento project takes the opposite approach, >> everything is explicit and without any magic. >> >> http://cournape.github.com/Bento/ >> >> It had its 0.1.0 release a week ago. >> >> Please, I don't want to reopen any discussions about Bento here -- >> distutils2 vs. Bento discussions have been less than constructive in >> the past -- I just wanted to make sure everybody is aware that >> distutils2 isn't the only horse in this race. I don't know if there >> are others too? >> > That's *exactly* the kind of approach that has made me not want to > continue. > > People are too focused on implementations, and 'how distutils sucks' > 'how setuptools sucks' etc 'I'll do better' etc > > Instead of having all the folks involved in packaging sit down together > and try to fix the issues together by building PEPs describing what > would be a common set of standards, they want to create their own tools > from scratch. 
Guido was asked about build issues and scientific software at PyData this spring, and his take was that "if scientific users have concerns that are that special, perhaps you just need to go and do your own thing". Which is what David is doing.

Trailing Q&A session here: http://www.youtube.com/watch?v=QjXJLVINsSA

Generalizing a bit I think it's "web developers" and "scientists" typically completely failing to see each others' use cases. I don't know if that bridge can be crossed through mailing list discussion alone. I know that David tried but came to a point where he just had to unsubscribe from distutils-sig.

Sometimes design by committee is just what you want, and sometimes design by committee doesn't work. ZeroMQ, for instance, is a great piece of software resulting from dropping out of the AMQP committee.

> That will not work. And I will say here again what I think we should do
> imho:
>
> 1/ take all the packaging PEPs and rework them until everyone is happy
> (compilation sucks in distutils? write a PEP !!!)

I think the only way of making scientists happy is to make the build tool choice arbitrary (and allow the use of waf, scons, cmake, jam, ant, etc. for the build). After all, many projects contain more C++ and Fortran code than Python code. (Of course, one could make a PEP saying that.)

Right now things are so horribly broken for the scientific community that I'm not sure if one *can* sanely specify PEPs. It's more a question of playing around and throwing things at the wall and seeing what sticks -- 5 years from now one is perhaps in a position where the problem is really understood and one can write PEPs.

Perhaps the "web developers" are at the PEP-ing stage already. Great for you. But the use cases are really different.

Anyway: I really don't want to start a flame-war here. So let's accept up front that we likely won't agree here; I just wanted to clarify my position.

(Some context: I might have funding to work 2 months full-time on distributing Python software on HPC clusters this autumn. It's not really related to Bento (or distutils) though, more of a client tool using those libraries.)

Dag Sverre Seljebotn

From chrism at plope.com Thu Jun 21 14:51:04 2012
From: chrism at plope.com (Chris McDonough)
Date: Thu, 21 Jun 2012 08:51:04 -0400
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To:
References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE2A6C7.9070207@plope.com> <4FE30A12.8020202@plope.com>
Message-ID: <4FE318B8.3010005@plope.com>

On 06/21/2012 08:21 AM, Nick Coghlan wrote:
>
>> Installing a distribution will change behavior whether or not sys.path is
>> changed as a result. That's its purpose.
>
> No it won't. An ordinary package will only change the behaviour of
> Python applications that import a package by that name. Other Python
> applications will be completely unaffected (as it should be).

If a Python application is affected by a change to sys.path which doesn't impact modules it uses, then that Python application is plain broken, because the developer of that application cannot make assumptions about what a user does to sys.path unrelated to the modules it requires. This is completely independent of easy_install.

Any Python application is going to be affected by the installation of a distribution that does impact modules it imports, whether sys.path is used to change the working set of modules or not.

So what concrete situation are we actually talking about here?
>> The code that runs in the .pth *file* (there's only one that matters:
>> easy_install.pth) just mutates sys.path. The end result is this: if you
>> understand how sys.path works, you understand how eggs work. Each egg is
>> added to sys.path. That's all there is to it. It's the same as manually
>> mutating a global PYTHONPATH, except you don't need to do it.
>
> Yes, it's the same as mutating PYTHONPATH. That's a similarly bad
> system global change. Individual libraries do not have the right to
> change the sys.path seen on initialisation by every other Python
> application on that system.

Is it reasonable to even assume there is only one-sys.path-to-rule-them-all? And that users install "the set of libraries they need" into a common place? This quickly turns into failure, because Python is used for many, many tasks, and those tasks sometimes *require conflicting versions of libraries*. This is the root cause of why virtualenv exists and is popular.

The reason it's disappointing to see OS vendors mutating the default sys.path is because they put *very old versions of very common non-stdlib packages* (e.g. zope.interface, lxml) on sys.path by default. The path is tainted out of the box for anyone who wants to use the system Python for development of newer software. So at some point they invariably punt to virtualenv or a virtualenv-like system where the OS-vendor-provided path is not present.

If Python supported the installation of multiple versions of the same module and versioned imports, both PYTHONPATH and virtualenv would be much less important. But given lack of enthusiasm for that, I don't think it's reasonable to assume there is only one sys.path on every system.

I sympathize, however, with Oscar's report that PYTHONPATH can't override the setuptools-derived path. That's indeed a mistake that a future tool should not make.

>> And note that this is not "setuptools" in general. It's easy_install in
>> particular. Everything you've brought up so far I think is limited to
>> easy_install. It doesn't happen when you use pip. I think it's a mistake
>> that pip doesn't do it, but I think you have to make more accurate
>> distinctions.
>
> What part of "PR problem" was unclear? setuptools and easy_install are
> inextricably linked in everyone's minds, just like pip and distribute.

Hopefully for the purposes of the discussion, folks here can make the mental separation between setuptools and easy_install. We can't help what other folks think in the meantime, certainly not solely by making technological compromises anyway.

>>> A packaging PEP needs to explain:
>>> - what needs to be done to eliminate any need for monkeypatching
>>> - what's involved in making sure that *.pth are *not* needed by default
>>> - making sure that executable code in implicitly loaded *.pth files
>>> isn't used *at all*
>>
>> I'll note that these goals are completely sideways to any actual functional
>> goal. It'd be a shame to have monkeypatching going on, but the other stuff
>> I don't think are reasonable goals. Instead they represent fears, and those
>> fears just need to be managed.
>
> No, they reflect the mindset of someone with configuration management
> and auditing responsibilities for shared systems with multiple
> applications installed which may be written in a variety of languages,
> not just Python. You may not care about those people, but I do.

I care about deploying Python-based applications to many platforms. You care about deploying multilanguage-based applications to a single platform.
There's going to be conflict there. My only comment on that is this: Since this is a problem related to the installation of Python distributions, it should deal with the problems that Python developers have more forcefully than non-Python developers and non-programmers. >> It'd also be useful if other core developers actually tried to use >> setuptools in anger. That'd be a good start towards understanding some of >> its tradeoffs. People can write this stuff down til they're blue in the >> face, but if core devs don't try the stuff, they'll always fear it. > > setuptools (or, perhaps, easy_install, although I've seen enough posts > about eggs being uploaded to PyPI to suspect otherwise), encourages > the deployment of system configuration changes that alter the runtime > environment of every single Python application executed on the system. > That's simply not cool. Again, it would help if you tried it in anger. What's the worst that could happen? You might like it! ;-) - C From ncoghlan at gmail.com Thu Jun 21 14:55:09 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 21 Jun 2012 22:55:09 +1000 Subject: [Python-Dev] Add os.path.resolve to simplify the use of os.readlink In-Reply-To: <4FE304D4.3060002@cheimes.de> References: <8908029a4e9614c209dff69df4f5a79d.squirrel@mail.active-4.com> <4FE2FCEC.30000@cheimes.de> <4FE304D4.3060002@cheimes.de> Message-ID: On Thu, Jun 21, 2012 at 9:26 PM, Christian Heimes wrote: > BTW Is there a better way than raise OSError(errno.ELOOP, > os.strerror(errno.ELOOP), filename) to raise a correct OSError with > errno, errno message and filename? A classmethod like > "OSError.from_errno(errno, filename=None) -> proper subclass auf OSError > with sterror() set" would reduce the burden for developers. PEP mentions > the a similar idea at > http://www.python.org/dev/peps/pep-3151/#implementation but this was > never implemented. According to the C code, it should be working at least for recognised errno values: http://hg.python.org/cpython/file/009ac63759e9/Objects/exceptions.c#l890 I can't get it to trigger properly in my local build, though :( Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From lists at cheimes.de Thu Jun 21 15:04:17 2012 From: lists at cheimes.de (Christian Heimes) Date: Thu, 21 Jun 2012 15:04:17 +0200 Subject: [Python-Dev] Add os.path.resolve to simplify the use of os.readlink In-Reply-To: References: <8908029a4e9614c209dff69df4f5a79d.squirrel@mail.active-4.com> <4FE2FCEC.30000@cheimes.de> <4FE304D4.3060002@cheimes.de> Message-ID: <4FE31BD1.7020403@cheimes.de> Am 21.06.2012 14:55, schrieb Nick Coghlan: > On Thu, Jun 21, 2012 at 9:26 PM, Christian Heimes wrote: >> BTW Is there a better way than raise OSError(errno.ELOOP, >> os.strerror(errno.ELOOP), filename) to raise a correct OSError with >> errno, errno message and filename? A classmethod like >> "OSError.from_errno(errno, filename=None) -> proper subclass auf OSError >> with sterror() set" would reduce the burden for developers. PEP mentions >> the a similar idea at >> http://www.python.org/dev/peps/pep-3151/#implementation but this was >> never implemented. 
>
> According to the C code, it should be working at least for recognised
> errno values:
>
> http://hg.python.org/cpython/file/009ac63759e9/Objects/exceptions.c#l890
>
> I can't get it to trigger properly in my local build, though :(

Me neither with the one argument variant:

Python 3.3.0a4+ (default:c3616595dada+, Jun 19 2012, 23:12:25)
[GCC 4.6.3] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import errno
[73872 refs]
>>> type(OSError(errno.ENOENT))
<class 'OSError'>
[73877 refs]

It works with two arguments but it doesn't set strerror and filename correctly:

>>> exc = OSError(errno.ENOENT, "filename")
[73948 refs]
>>> exc
FileNotFoundError(2, 'filename')
[73914 refs]
>>> exc.strerror
'filename'
[73914 refs]
>>> exc.filename
[73914 refs]

OSError doesn't accept keyword args:

>>> OSError(errno.ENOENT, filename="filename")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: OSError does not take keyword arguments

How about adding keyword support to OSError and derive the strerror from errno if the second argument is not given?

Christian

From solipsis at pitrou.net Thu Jun 21 15:16:26 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Thu, 21 Jun 2012 15:16:26 +0200
Subject: [Python-Dev] Add os.path.resolve to simplify the use of os.readlink
In-Reply-To: <4FE31BD1.7020403@cheimes.de>
References: <8908029a4e9614c209dff69df4f5a79d.squirrel@mail.active-4.com> <4FE2FCEC.30000@cheimes.de> <4FE304D4.3060002@cheimes.de> <4FE31BD1.7020403@cheimes.de>
Message-ID: <20120621151626.42767527@pitrou.net>

On Thu, 21 Jun 2012 15:04:17 +0200
Christian Heimes wrote:
>
> How about adding keyword support to OSError and derive the strerror from
> errno if the second argument is not given?

That's not the original behaviour:

Python 3.2.2+ (3.2:9ef20fbd340f, Oct 15 2011, 21:22:07)
[GCC 4.5.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> e = OSError(5)
>>> e.errno
>>> e.strerror
>>> str(e)
'5'

I don't mind making this particular compatibility-breaking change, though.

Regards

Antoine.

From tarek at ziade.org Thu Jun 21 15:23:26 2012
From: tarek at ziade.org (=?ISO-8859-1?Q?Tarek_Ziad=E9?=)
Date: Thu, 21 Jun 2012 15:23:26 +0200
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To: <4FE31763.1020609@astro.uio.no>
References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE2E491.8010504@astro.uio.no> <4FE30BD6.1060706@ziade.org> <4FE31763.1020609@astro.uio.no>
Message-ID: <4FE3204E.3060301@ziade.org>

On 6/21/12 2:45 PM, Dag Sverre Seljebotn wrote:
>
> Guido was asked about build issues and scientific software at PyData
> this spring, and his take was that "if scientific users have concerns
> that are that special, perhaps you just need to go and do your own
> thing". Which is what David is doing.
>
> Trailing Q&A session here: http://www.youtube.com/watch?v=QjXJLVINsSA

if you know what you want and have a tool that does it, why bother using distutils? But then, what will your community do with the guy who creates packages with distutils? Just tell him he sucks?

The whole idea is *interoperability*, not the tool used.

> Generalizing a bit I think it's "web developers" and "scientists"
> typically completely failing to see each others' use cases. I don't
> know if that bridge can be crossed through mailing list discussion
> alone. I know that David tried but came to a point where he just had
> to unsubscribe from distutils-sig.
I was there, and sorry to be blunt, but he came to tell us we had to drop distutils because it sucked, and left because we did not follow that path.

> Sometimes design by committee is just what you want, and sometimes
> design by committee doesn't work. ZeroMQ, for instance, is a great
> piece of software resulting from dropping out of the AMQP committee.
>
>> That will not work. And I will say here again what I think we should do
>> imho:
>>
>> 1/ take all the packaging PEPs and rework them until everyone is happy
>> (compilation sucks in distutils? write a PEP !!!)
>
> I think the only way of making scientists happy is to make the build
> tool choice arbitrary (and allow the use of waf, scons, cmake, jam,
> ant, etc. for the build). After all, many projects contain more C++
> and Fortran code than Python code. (Of course, one could make a PEP
> saying that.)
>
> Right now things are so horribly broken for the scientific community
> that I'm not sure if one *can* sanely specify PEPs. It's more a
> question of playing around and throwing things at the wall and seeing
> what sticks -- 5 years from now one is perhaps in a position where the
> problem is really understood and one can write PEPs.
>
> Perhaps the "web developers" are at the PEP-ing stage already. Great
> for you. But the use cases are really different.

If you sit down and ask yourself: "what information should a Python project give me so I can compile its extensions?" I think this has nothing to do with the tools/implementations. And if we're able to write this down in a PEP, e.g. the information a compiler is looking for to do its job, then any tool out there (waf, scons, cmake, jam, ant, etc.) can do the job, no?

> Anyway: I really don't want to start a flame-war here. So let's accept
> up front that we likely won't agree here; I just wanted to clarify my
> position.

After 4 years I still don't understand what "we won't agree" means in this context. *NO ONE* ever came and told me: here's what I want a Python project to describe for its extensions. Just "we won't agree" or "distutils sucks" :)

Gosh I hope we will overcome this lock one day, and move forward :D

From ncoghlan at gmail.com Thu Jun 21 15:29:23 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 21 Jun 2012 23:29:23 +1000
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To: <4FE318B8.3010005@plope.com>
References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE2A6C7.9070207@plope.com> <4FE30A12.8020202@plope.com> <4FE318B8.3010005@plope.com>
Message-ID:

On Thu, Jun 21, 2012 at 10:51 PM, Chris McDonough wrote:
> Is it reasonable to even assume there is only one-sys.path-to-rule-them-all?
> And that users install "the set of libraries they need" into a common place?
> This quickly turns into failure, because Python is used for many, many
> tasks, and those tasks sometimes *require conflicting versions of
> libraries*. This is the root cause of why virtualenv exists and is popular.

And why I'm very happy to see pyvenv make its way into the standard library :)

> I care about deploying Python-based applications to many platforms. You
> care about deploying multilanguage-based applications to a single platform.
> There's going to be conflict there.
> > My only comment on that is this: Since this is a problem related to the > installation of Python distributions, it should deal with the problems that > Python developers have more forcefully than non-Python developers and > non-programmers. Thanks to venv, there's an alternative available that may be able to keep both of us happy: split the defaults. For system installs, adopt a vendor-centric, multi-language, easy-to-translate-to-language-neutral-packaging mindset (e.g. avoiding *.pth files by unpacking eggs to the file system). For venv installs, do whatever is most convenient for pure Python developers (e.g. leaving eggs packed and using *.pth files to extend sys.path within the venv). One of Python's great virtues is its role as a glue language, and part of being an effective glue language is playing well with others. That should apply to packaging & distribution as well, not just to runtime bindings to tools written in other languages. When we add the scientific users into the mix, we're actually getting to a *third* audience: multi-language developers that want to use *Python's* packaging utilities for their source and binary distribution formats. The Python community covers a broad spectrum of use cases, and I suspect that's one of the big reasons packaging can get so contentious - the goals end up being in direct conflict. Currently, I've identified at least half a dozen significant communities with very different needs (the names aren't meant to be all encompassing, just good representatives of each category, and many individuals will span multiple categories depending on which hat they're wearing at the time): Library authors: just want to quickly and easily publish their work on the Python package index in a way that is discoverable by others and allows feedback to reach them at their development site Web developers: creators of Python applications, relying primarily on other Python software and underlying OS provided functionality, potentially with some native extensions, that may need to run on multiple platforms, but can require installation using a language specific mechanism by technical staff Rich client developers: creators of Python applications relying primarily on other Python software and underlying OS provided functionality, potentially with native extensions, that need to run on multiple platforms, but must be installed using standard system utilities for the benefit of non-technical end users Enterprise developers: creators of Python or mixed language applications that need to integrate with corporate system administration policies (including packaging, auditing and configuration management) Scientists: creators of Python data analysis and modelling applications, with complex dependencies on software written in a variety of other languages and using various build systems Python embedders: developers that embed a Python runtime inside a larger application >> setuptools (or, perhaps, easy_install, although I've seen enough posts >> about eggs being uploaded to PyPI to suspect otherwise), encourages >> the deployment of system configuration changes that alter the runtime >> environment of every single Python application executed on the system. >> That's simply not cool. > > Again, it would help if you tried it in anger. ?What's the worst that could > happen? ?You might like it! 
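[A quick illustration of the venv-based split suggested above, using the PEP 405 support new in 3.3; the target path is invented for the example:]

    # create an isolated environment whose sys.path is independent of the
    # system site-packages (the same thing the 3.3 pyvenv script does)
    import venv
    venv.create('/srv/myapp/env', system_site_packages=False)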
;-) Oh, believe me, if I ever had distribution needs that required the power and flexibility of setuptools, I would reach for it in a heartbeat (in fact, I already use it today, albeit for tasks that ordinary distutils could probably handle). That said, I do get to cheat though - since I don't need to worry about cross-platform deployment, I can just use the relevant RPM hooks directly :) You're right that most of my ire should be directed at the default behaviour of easy_install rather than at setuptools itself, though. I shall moderate my expressed opinions accordingly. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From pje at telecommunity.com Thu Jun 21 15:31:24 2012 From: pje at telecommunity.com (PJ Eby) Date: Thu, 21 Jun 2012 09:31:24 -0400 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> Message-ID: On Wed, Jun 20, 2012 at 11:57 PM, Nick Coghlan wrote: > > Right - clearly enumerating the features that draw people to use > setuptools over just using distutils should be a key element in any > PEP for 3.4 > > I honestly think a big part of why packaging ended up being incomplete > for 3.3 is that we still don't have a clearly documented answer to two > critical questions: > 1. Why do people choose setuptools over distutils? Some of the reasons: * Dependencies * Namespace packages * Less boilerplate in setup.py (revision control, data files support, find_packages(), etc.) * Entry points system for creating extensible applications and frameworks that need runtime plugin discovery * Command-line script wrappers * Binary plugin installation system for apps (i.e. dump eggs in a directory and let pkg_resources figure out what to put on sys.path) * "Test" command * Easy distribution of (and runtime access to) static data resources Of these, automatic dependency resolution with as close to 100% backward compatibility for installing other projects on PyPI was almost certainly the #1 factor driving setuptools' initial adoption. The 20% that drives the 80%, as it were. The rest are the 80% that brings in the remaining 20%. > > 2. What's wrong with setuptools that meant the idea of including it > directly in the stdlib was ultimately dropped and eventually replaced > with the goal of incorporating distutils2? Based on the feedback from Python-Dev, I withdrew setuptools from 2.5 because of what I considered valid concerns raised regarding: 1. Lack of available persons besides myself familiar with the code base and design 2. Lack of design documents to remedy #1 3. Lack of unified end-user documentation And there was no time for me to fix all of that before 2.5 came out, although I did throw together the EggFormats documentation. After that, the time window where I was being paid (by OSAF) for setuptools improvements came to an end, and other projects started taking precedence. Since then, setuptools *itself* has become stable legacy code in much the same way that the distutils has: pip, buildout, and virtualenv all built on top of it, as it built on top of the distutils. Problem #3 remains, but at least now there are other people working on the codebase. 
> If the end goal is "the bulk of the setuptools feature set > without the problematic features and default behaviours that make > system administrators break out the torches and pitchforks", then we > should *write that down* (and spell out the implications) rather than > assuming that everyone knows the purpose of the exercise. That's why I brought this up. ISTM that far too much of the knowledge of what those use cases and implications are, has been either buried in my head or spread out among diverse user communities in the past. Luckily, a lot of people from those communities are now getting considerably more involved in this effort. At the time of, say, the 2.5 setuptools question, there wasn't anybody around but me who was able to argue the "why eggs are good and useful" side of the discussion, for example. (If you look back to the early days of setuptools, I often asked on distutils-sig for people who could help assemble specs for various things... which I ended up just deciding for myself, because nobody was there to comment on them. It took *years* of setuptools actually being in the field and used before enough people knew enough to *want* to take part in the design discussions. The versioning and metadata PEPs were things I asked about many years prior, but nobody knew what they wanted yet, or even knew yet why they should care.) Similarly, in the years since then, MvL -- who originally argued against all things setuptools at 2.5 time -- actually proposed the original namespace package PEP. So I don't think it's unfair to say that, seven years ago, the ideas in setuptools were still a few years ahead of their "time". Today, console script generation, virtual environments, namespace packages, entry point discovery, setup.py-driven testing tools, static file inclusion, etc. are closer to "of course we should have that/everybody uses that" features, rather than esoteric oddities. That being said, setuptools *itself* is not such a good thing. It was originally a *private* add-on to distutils (like numpy's distutils extensions) and a prototyping sandbox for additions to the distutils. (E.g. setuptools features were added to distutils in 2.4 and 2.5.) I honestly didn't think at the time that I was writing those features (or even the egg stuff), that the *long term* goal would be for those things to be maintained in a separate package. Instead, I (rather optimistically) assumed that the value of the approaches would be self-evident, and copied the way the other setuptools features were. (To this day, there are an odd variety of other little experimental "future distutils enhancements" still living in the setuptools code base, like support for building shared libraries to be used in common between multiple C extensions.) By the way, for an overview of setuptools' components and use cases, and what happened with 2.5, see here: http://mail.python.org/pipermail/python-dev/2006-April/064145.html The plan I proposed was to phase out setuptools and merge its functionality into distutils for 2.6, but as I mentioned above, my available bandwidth to work on the project essentially vanished shortly after the above post; setuptools was pretty much "good enough" for OSAF's needs at the time, and they had other development priorities for my time. So, if we are to draw any lesson from the past, it would seem to be, "make sure that the people who'll be doing the work are actually going to be available through to the next Python version". 
After all, if they are not, it may not much matter whether the code is in the stdlib or not. ;-) -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinay_sajip at yahoo.co.uk Thu Jun 21 15:45:16 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Thu, 21 Jun 2012 13:45:16 +0000 (UTC) Subject: [Python-Dev] Status of packaging in 3.3 References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE2A6C7.9070207@plope.com> <4FE30A12.8020202@plope.com> Message-ID: Chris McDonough plope.com> writes: > On 06/21/2012 04:45 AM, Nick Coghlan wrote: > > A packaging PEP needs to explain: > > - what needs to be done to eliminate any need for monkeypatching > > - what's involved in making sure that *.pth are *not* needed by default > > - making sure that executable code in implicitly loaded *.pth files > > isn't used *at all* > > I'll note that these goals are completely sideways to any actual > functional goal. It'd be a shame to have monkeypatching going on, but > the other stuff I don't think are reasonable goals. Instead they > represent fears, and those fears just need to be managed. Managed how? Whose functional goals? It's good to have something that works here and now, but surely there's more to it. Presumably distutils worked for some value of "worked" up until the point where it didn't, and setuptools needed to improve on it. Oscar's example shows how setuptools is broken for some use cases. Nor does it consider, for example, the goals of OS distro packagers in the same way that packaging has tried to. You're encouraging core devs to use setuptools, but as most seem to agree that distutils is (quick-)sand and setuptools is built on sand, it's hard to see setuptools as anything other than a stopgap, the best we have until something better can be devised. The command-class based design of distutils and hence setuptools doesn't seem to be something to bet the future on. As an infrastructure concern, this area of functionality definitely needs to be supported in the stdlib, even if it's a painful process getting there. The barriers seem more social than technical, but hopefully the divide-and-conquer-with-multiple-PEPs approach will prevail. Regards, Vinay Sajip From barry at python.org Thu Jun 21 15:57:18 2012 From: barry at python.org (Barry Warsaw) Date: Thu, 21 Jun 2012 09:57:18 -0400 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <4FE30A12.8020202@plope.com> References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE2A6C7.9070207@plope.com> <4FE30A12.8020202@plope.com> Message-ID: <20120621095718.52fec892@limelight.wooz.org> On Jun 21, 2012, at 07:48 AM, Chris McDonough wrote: >I don't know about Red Hat but both Ubuntu and Apple put all kinds of stuff >on the default sys.path of the system Python of the box that's related to >their software's concerns only. I don't understand why people accept this >but get crazy about the fact that installing a setuptools distribution using >easy_install changes the default sys.path. Frankly, I've long thought that distros like Debian/Ubuntu which rely so much on Python for essential system functions should basically have two Python stacks. One would be used for just those system functions and the other would be for application deployment. 
OTOH, I often hear from application developers on Ubuntu that they basically have to build up their own stack *anyway* if they want to ensure they've got the right suite of dependencies. This is where tools like virtualenv and buildout on the lower end and chef/puppet/juju on the higher end come into play. -Barry From ncoghlan at gmail.com Thu Jun 21 16:03:51 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 22 Jun 2012 00:03:51 +1000 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> Message-ID: On Thu, Jun 21, 2012 at 11:31 PM, PJ Eby wrote: > So, if we are to draw any lesson from the past, it would seem to be, "make > sure that the people who'll be doing the work are actually going to be > available through to the next Python version". Thanks for that write-up - I learned quite a few things I didn't know, even though I was actually around for 2.5 development (the fact I had less of a vested interest in packaging issues then probably made a big difference, too). > After all, if they are not, it may not much matter whether the code is in > the stdlib or not.? ;-) Yeah, I think Tarek had the right idea with working through the slow painful process of reaching consensus from the bottom up, feature by feature - we just got impatient and tried to skip to the end without working through the rest of the list. It's worth reflecting on the progress we've made so far, and looking ahead to see what else remains In the standard library for 3.3: - native namespace packages (PEP 420) - native venv support (PEP 405) Packaging tool interoperability standards as Accepted PEPs (may still require further tweaks): - updated PyPI metadata standard (PEP 345) - PyPI enforced orderable dist versioning standard (PEP 386) - common dist installation database format (PEP 376) As I noted earlier in the thread, it would be good to see the components of distutils2/packaging aimed at this interoperability level split out as a separate utility library that can more easily be shared between projects (distmeta was my suggested name for such a PyPI project) Other components where python-dev has a role to play as an interoperability clearing house: - improved command and compiler extension API Other components where python-dev has a role to play in smoothing the entry of beginners into the Python ecosystem: - a package installer shipped with Python to reduce bootstrapping issues - a pypi client for the standard library - dependency graph builder - reduced boilerplate in package definition (setup.cfg should help there) Other components where standard library inclusion is a "nice-to-have" but not critical: - most of the other convenience features in setuptools Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From danny at cs.huji.ac.il Thu Jun 21 16:08:09 2012 From: danny at cs.huji.ac.il (Daniel Braniss) Date: Thu, 21 Jun 2012 17:08:09 +0300 Subject: [Python-Dev] import too slow on NFS based systems In-Reply-To: <20120621123337.430395c4@pitrou.net> References: <20120621123337.430395c4@pitrou.net> Message-ID: > On Thu, 21 Jun 2012 13:17:01 +0300 > Daniel Braniss wrote: > > Hi, > > when lib/python/site-packages/ is accessed via NFS, open/stat/access is very > > expensive/slow. 
> > A simple solution is to use an in-memory directory search/hash, so I was
> > wondering if this has been considered in the past; if not, and I come
> > with a working solution for Unix (at least Linux/FreeBSD), will it be
> > considered?
>
> There is such a thing in Python 3.3, although some stat() calls are
> still necessary to know whether the directory caches are fresh.
> Can you give it a try and provide some feedback?

WOW! With a sample python program:

           stats    open
    2.7     2736    9037
    3.3      288      57

now I have to fix my 2.7 to work with 3.3 :-)

any chance that this can be backported to 2.7?

cheers,
	danny
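[For readers unfamiliar with the 3.3 behaviour Antoine mentions: the rough idea is to cache each directory's listing keyed by its mtime, so probing for a module costs one stat() per directory instead of a stat()/open() per candidate filename. A minimal sketch of that idea follows - an illustration only, not the actual importlib code:

    import os

    _dir_cache = {}  # directory -> (mtime, set of entry names)

    def cached_listdir(path):
        """Return the names in *path*, re-reading the directory only
        when its mtime says the cached listing is stale."""
        try:
            mtime = os.stat(path).st_mtime  # the one stat() per lookup
        except OSError:
            return set()
        cached = _dir_cache.get(path)
        if cached is None or cached[0] != mtime:
            _dir_cache[path] = (mtime, set(os.listdir(path)))
        return _dir_cache[path][1]

    def find_module_file(name, search_path):
        # Membership tests hit the cache, so there are no
        # per-candidate stat()/open() round-trips over NFS.
        for directory in search_path:
            entries = cached_listdir(directory)
            for suffix in ('.py', '.pyc'):
                if name + suffix in entries:
                    return os.path.join(directory, name + suffix)
        return None

On a cold cache this does one listdir() per directory; afterwards only the freshness stat() remains, which is consistent with the 2.7-vs-3.3 numbers Daniel reports.]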
From barry at python.org Thu Jun 21 16:08:16 2012
From: barry at python.org (Barry Warsaw)
Date: Thu, 21 Jun 2012 10:08:16 -0400
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To: <4FE318B8.3010005@plope.com>
References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE2A6C7.9070207@plope.com> <4FE30A12.8020202@plope.com> <4FE318B8.3010005@plope.com>
Message-ID: <20120621100816.5785540b@limelight.wooz.org>

On Jun 21, 2012, at 08:51 AM, Chris McDonough wrote:

>The reason it's disappointing to see OS vendors mutating the default sys.path
>is because they put *very old versions of very common non-stdlib packages*
>(e.g. zope.interface, lxml) on sys.path by default. The path is tainted out
>of the box for anyone who wants to use the system Python for development of
>newer software. So at some point they invariably punt to virtualenv or a
>virtualenv-like system where the OS-vendor-provided path is not present.
>
>If Python supported the installation of multiple versions of the same module
>and versioned imports, both PYTHONPATH and virtualenv would be much less
>important. But given the lack of enthusiasm for that, I don't think it's
>reasonable to assume there is only one sys.path on every system.

This is really the key insight that should be driving us IMO. From the system vendor point of view, my job is to ensure the *system* works right, and that everything written in Python that provides system functionality is compatible with whatever versions of third party Python packages I provide in a particular OS version. That's already a hard enough problem; frankly, any illusion that I can also provide useful versions for the higher level applications that people will deploy on my OS is just madness. This is why I get lots of people requesting versioned imports, or simply resorting to venv/buildout/chef/puppet/juju to deploy *their* applications on the OS. There's just no other sane way to do it.

I do think Python could do better, but obviously it's a difficult problem. I suspect that having venv support out of the box in 3.3 will go a long way to solving some class of these problems. I don't know if that will be the *only* answer.

-Barry

From ncoghlan at gmail.com Thu Jun 21 16:09:06 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Fri, 22 Jun 2012 00:09:06 +1000
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To: <20120621095718.52fec892@limelight.wooz.org>
References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE2A6C7.9070207@plope.com> <4FE30A12.8020202@plope.com> <20120621095718.52fec892@limelight.wooz.org>
Message-ID:

On Thu, Jun 21, 2012 at 11:57 PM, Barry Warsaw wrote:
> On Jun 21, 2012, at 07:48 AM, Chris McDonough wrote:
>
>>I don't know about Red Hat, but both Ubuntu and Apple put all kinds of stuff
>>on the default sys.path of the system Python out of the box that's related to
>>their software's concerns only. I don't understand why people accept this
>>but get crazy about the fact that installing a setuptools distribution using
>>easy_install changes the default sys.path.
>
> Frankly, I've long thought that distros like Debian/Ubuntu which rely so much
> on Python for essential system functions should basically have two Python
> stacks. One would be used for just those system functions and the other would
> be for application deployment. OTOH, I often hear from application developers
> on Ubuntu that they basically have to build up their own stack *anyway* if
> they want to ensure they've got the right suite of dependencies. This is
> where tools like virtualenv and buildout on the lower end and chef/puppet/juju
> on the higher end come into play.

Yeah, I liked Hynek's method for blending a Python-centric application development approach with a system-packaging-centric configuration management approach: take an entire virtualenv and package *that* as a single system package.

Another strategy that can work is application-specific system package repos, but you have to be very committed to a particular OS and packaging system for that approach to make a lot of sense :)

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From aclark at aclark.net Thu Jun 21 15:56:41 2012
From: aclark at aclark.net (Alex Clark)
Date: Thu, 21 Jun 2012 09:56:41 -0400
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To: <4FE30BD6.1060706@ziade.org>
References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE2E491.8010504@astro.uio.no> <4FE30BD6.1060706@ziade.org>
Message-ID:

Hi,

On 6/21/12 7:56 AM, Tarek Ziadé wrote:
> On 6/21/12 11:08 AM, Dag Sverre Seljebotn wrote:
>> ...
>> David Cournapeau's Bento project takes the opposite approach,
>> everything is explicit and without any magic.
>>
>> http://cournape.github.com/Bento/
>>
>> It had its 0.1.0 release a week ago.
>>
>> Please, I don't want to reopen any discussions about Bento here --
>> distutils2 vs. Bento discussions have been less than constructive in
>> the past -- I just wanted to make sure everybody is aware that
>> distutils2 isn't the only horse in this race. I don't know if there
>> are others too?
>>
> That's *exactly* the kind of approach that has made me not want to
> continue.
>
> People are too focused on implementations, and 'how distutils sucks',
> 'how setuptools sucks', etc., 'I'll do better', etc.
>
> Instead of having all the folks involved in packaging sit down together
> and try to fix the issues together by building PEPs describing what
> would be a common set of standards, they want to create their own tools
> from scratch.
> That will not work.

But you can't tell someone or some group of folks that, and expect them to listen. Most times NIH is pejorative[1], but sometimes something positive comes out of it.

> And I will say here again what I think we should do
> imho:
>
> 1/ take all the packaging PEPs and rework them until everyone is happy
> (compilation sucks in distutils? write a PEP !!!)
>
> 2/ once we have a consensus, write as many tools as you want, if they
> rely on the same standards => interoperability => win.
>
> But I must be naive because every time I tried to reach people that were
> building their own tools to ask them to work with us on the PEPs, all I
> was getting was "distutils sucks!"

And that's the best you can do: give your opinion. I understand the frustration, but we have to let people succeed and/or fail on their own[2].

> It worked with the OS packagers guys though, we have built a great data
> files management system in packaging + the versions (386)

Are you referring to "the" packaging/distutils2 or something else?

Alex

[1] http://en.wikipedia.org/wiki/Not_invented_here
[2] http://docs.pythonpackages.com/en/latest/advanced.html#buildout-easy-install-vs-virtualenv-pip

>> --
>> Dag Sverre Seljebotn

--
Alex Clark - http://pythonpackages.com

From chrism at plope.com Thu Jun 21 16:12:06 2012
From: chrism at plope.com (Chris McDonough)
Date: Thu, 21 Jun 2012 10:12:06 -0400
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To:
References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE2A6C7.9070207@plope.com> <4FE30A12.8020202@plope.com> <4FE318B8.3010005@plope.com>
Message-ID: <4FE32BB6.6090601@plope.com>

On 06/21/2012 09:29 AM, Nick Coghlan wrote:
>> My only comment on that is this: Since this is a problem related to the
>> installation of Python distributions, it should deal with the problems that
>> Python developers have more forcefully than non-Python developers and
>> non-programmers.
>
> Thanks to venv, there's an alternative available that may be able to
> keep both of us happy: split the defaults. For system installs, adopt
> a vendor-centric, multi-language,
> easy-to-translate-to-language-neutral-packaging mindset (e.g. avoiding
> *.pth files by unpacking eggs to the file system). For venv installs,
> do whatever is most convenient for pure Python developers (e.g.
> leaving eggs packed and using *.pth files to extend sys.path within
> the venv).

I'd like to agree with this, but I think there's a distinction that needs to be made here that's maybe not obvious to everyone.

A tool to generate an OS-specific system package from a Python library project should be unrelated to a Python distribution *installer*. Instead, you'd use related tools that understood how to unpack the distribution packaging format to build one or more package structures. The resulting structures will be processed and then eventually installed by native OS install tools. But the Python distribution installer (e.g. easy_install, pip, or some future similar tool) would just never come into play to create those structures.
The Python distribution installer and the OS-specific build tool might share code to introspect and unpack files from the packaging format, but they'd otherwise have nothing to do with one another.

This seems like the most reasonable separation of concerns to me anyway, and I'd be willing to work on the code that would be shared by both the Python-level installer and by OS-level packaging tools.

> One of Python's great virtues is its role as a glue language, and part
> of being an effective glue language is playing well with others. That
> should apply to packaging & distribution as well, not just to runtime
> bindings to tools written in other languages.
>
> When we add the scientific users into the mix, we're actually getting
> to a *third* audience: multi-language developers that want to use
> *Python's* packaging utilities for their source and binary
> distribution formats.
>
> The Python community covers a broad spectrum of use cases, and I
> suspect that's one of the big reasons packaging can get so contentious
> - the goals end up being in direct conflict. Currently, I've
> identified at least half a dozen significant communities with very
> different needs (the names aren't meant to be all-encompassing, just
> good representatives of each category, and many individuals will span
> multiple categories depending on which hat they're wearing at the
> time):
>
> Library authors: just want to quickly and easily publish their work on
> the Python package index in a way that is discoverable by others and
> allows feedback to reach them at their development site
>
> Web developers: creators of Python applications, relying primarily on
> other Python software and underlying OS-provided functionality,
> potentially with some native extensions, that may need to run on
> multiple platforms, but can require installation using a
> language-specific mechanism by technical staff
>
> Rich client developers: creators of Python applications relying
> primarily on other Python software and underlying OS-provided
> functionality, potentially with native extensions, that need to run on
> multiple platforms, but must be installed using standard system
> utilities for the benefit of non-technical end users
>
> Enterprise developers: creators of Python or mixed-language
> applications that need to integrate with corporate system
> administration policies (including packaging, auditing and
> configuration management)
>
> Scientists: creators of Python data analysis and modelling
> applications, with complex dependencies on software written in a
> variety of other languages and using various build systems
>
> Python embedders: developers that embed a Python runtime inside a
> larger application

I think we'll also need to put some limits on the goal independent of the union of everything all the audiences require. Here's some scope suggestions that I believe could be shared by all of the audiences you list above except for embedders; I think that use case is pretty much separate. It might also leave "rich client developers" wanting, but no more than they're already wanting.

- Install code that can *later be imported*. This could be pure Python code or C code which requires compilation. But it's not for the purpose of compiling and installing completely arbitrary C code to arbitrary locations; it's just written for the purpose of compiling C code which then *lives in the installed distribution* to provide an importable Python module that lives in the same distribution as the Python logic.

- Install "console scripts", which are shell-scripts/batch-files that cause some logic written in Python to get run as a result. These console scripts are written to sys.prefix + '/{bin/Scripts}' depending on the platform.

- Install "package resources", which are non-Python source files that happen to live in package directories.

IOW, an installer should be about installing Python libraries and supporting files to a well-known location defined by the interpreter or venv that runs it, not full applications-that-require-persistent-state which just happen to be written in Python and which require deployment to arbitrary locations. You shouldn't expect the Python packaging tools to install an instance of an application on a system; you should expect them to install enough code that would allow you to *generate* an instance of such an application. Most tools make that possible by installing a console script which can generate a sandbox that can be used to keep application state. Hopefully this is preaching to the choir.

>>> setuptools (or, perhaps, easy_install, although I've seen enough posts
>>> about eggs being uploaded to PyPI to suspect otherwise), encourages
>>> the deployment of system configuration changes that alter the runtime
>>> environment of every single Python application executed on the system.
>>> That's simply not cool.
>>
>> Again, it would help if you tried it in anger. What's the worst that could
>> happen? You might like it! ;-)
>
> Oh, believe me, if I ever had distribution needs that required the
> power and flexibility of setuptools, I would reach for it in a
> heartbeat (in fact, I already use it today, albeit for tasks that
> ordinary distutils could probably handle). That said, I do get to
> cheat though - since I don't need to worry about cross-platform
> deployment, I can just use the relevant RPM hooks directly :)

Ideally this is all you'd ever need to care deeply about in an ideal world, too, given the separation of installer vs. system-packaging-support-tools outlined above.

> You're right that most of my ire should be directed at the default
> behaviour of easy_install rather than at setuptools itself, though. I
> shall moderate my expressed opinions accordingly.

Woot!

- C
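[Chris's "console scripts" item is the mechanism setuptools exposes through entry points: the installer generates the platform-appropriate wrapper under sys.prefix. A minimal sketch - the project name and module are invented for illustration, but the entry_points syntax is standard setuptools:

    # setup.py
    from setuptools import setup, find_packages

    setup(
        name='exampletool',            # hypothetical project
        version='0.1',
        packages=find_packages(),
        entry_points={
            'console_scripts': [
                # Installs <sys.prefix>/bin/exampletool (or
                # Scripts\exampletool.exe on Windows), which simply
                # imports exampletool.cli and calls main().
                'exampletool = exampletool.cli:main',
            ],
        },
    )

The generated wrapper holds no application state; it just locates the installed code and invokes it, which matches the "generate an instance of the application" split Chris describes.]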
- Install "console scripts" which are shell-scripts/batch-files that cause some logic written in Python to get run as a result. These console scripts are written to sys.prefix + '/{bin/Scripts}' depending on the platform. - Install "package resources", which are non-Python source files that happen to live in package directories. IOW, an installer should be about installing Python libraries and supporting files to a well-known location defined by the interpreter or venv that runs it, not full applications-that-require-persistent-state which just happen to be written in Python and which require deployment to arbitrary locations. You shouldn't expect the Python packaging tools to install an instance of an application on a system, you should expect them to install enough code that would allow you to *generate* an instance of such an application. Most tools make that possible by installing a console script which can generate a sandbox that can be used to keep application state. Hopefully this is preaching to the choir. >>> setuptools (or, perhaps, easy_install, although I've seen enough posts >>> about eggs being uploaded to PyPI to suspect otherwise), encourages >>> the deployment of system configuration changes that alter the runtime >>> environment of every single Python application executed on the system. >>> That's simply not cool. >> >> Again, it would help if you tried it in anger. What's the worst that could >> happen? You might like it! ;-) > > Oh, believe me, if I ever had distribution needs that required the > power and flexibility of setuptools, I would reach for it in a > heartbeat (in fact, I already use it today, albeit for tasks that > ordinary distutils could probably handle). That said, I do get to > cheat though - since I don't need to worry about cross-platform > deployment, I can just use the relevant RPM hooks directly :) Ideally this is all you'd ever need to care deeply about in an ideal world, too, given the separation of installer vs. system-packaging-support-tools outlined above. > You're right that most of my ire should be directed at the default > behaviour of easy_install rather than at setuptools itself, though. I > shall moderate my expressed opinions accordingly. Woot! - C From ncoghlan at gmail.com Thu Jun 21 16:12:20 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 22 Jun 2012 00:12:20 +1000 Subject: [Python-Dev] Add os.path.resolve to simplify the use of os.readlink In-Reply-To: <20120621151626.42767527@pitrou.net> References: <8908029a4e9614c209dff69df4f5a79d.squirrel@mail.active-4.com> <4FE2FCEC.30000@cheimes.de> <4FE304D4.3060002@cheimes.de> <4FE31BD1.7020403@cheimes.de> <20120621151626.42767527@pitrou.net> Message-ID: On Thu, Jun 21, 2012 at 11:16 PM, Antoine Pitrou wrote: > On Thu, 21 Jun 2012 15:04:17 +0200 > Christian Heimes wrote: >> >> How about adding keyword support to OSError and derive the strerror from >> errno if the second argument is not given? > > That's not the original behaviour: > > Python 3.2.2+ (3.2:9ef20fbd340f, Oct 15 2011, 21:22:07) > [GCC 4.5.2] on linux2 > Type "help", "copyright", "credits" or "license" for more information. >>>> e = OSError(5) >>>> e.errno >>>> e.strerror >>>> str(e) > '5' > > > I don't mind making this particular compatibility-breaking change, > though. +1 from me. Existing code that just passes errno will now get strerror set automatically, and existing code *can't* just be passing the errno and filename, since OSError doesn't yet support keyword arguments. Cheers, Nick. -- Nick Coghlan?? |?? 
From solipsis at pitrou.net Thu Jun 21 16:15:03 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Thu, 21 Jun 2012 16:15:03 +0200
Subject: [Python-Dev] import too slow on NFS based systems
In-Reply-To:
References: <20120621123337.430395c4@pitrou.net>
Message-ID: <20120621161503.703b6075@pitrou.net>

On Thu, 21 Jun 2012 17:08:09 +0300 Daniel Braniss wrote:
> > There is such a thing in Python 3.3, although some stat() calls are
> > still necessary to know whether the directory caches are fresh.
> > Can you give it a try and provide some feedback?
>
> WOW!
> with a sample python program:
>
> in 2.7 there are:
>   stats   open
>    2736   9037
> in 3.3
>     288     57
>
> now I have to fix my 2.7 to work with 3.3 :-)
>
> any chance that this can be backported to 2.7?

Not a chance. It is all based on using importlib as the default import mechanism, and that's a gory piece of work that we wouldn't port in a bugfix release.

Regards

Antoine.

From d.s.seljebotn at astro.uio.no Thu Jun 21 16:26:27 2012
From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn)
Date: Thu, 21 Jun 2012 16:26:27 +0200
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To: <4FE3204E.3060301@ziade.org>
References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE2E491.8010504@astro.uio.no> <4FE30BD6.1060706@ziade.org> <4FE31763.1020609@astro.uio.no> <4FE3204E.3060301@ziade.org>
Message-ID: <4FE32F13.7030102@astro.uio.no>

On 06/21/2012 03:23 PM, Tarek Ziadé wrote:
> On 6/21/12 2:45 PM, Dag Sverre Seljebotn wrote:
>>
>> Guido was asked about build issues and scientific software at PyData
>> this spring, and his take was that "if scientific users have concerns
>> that are that special, perhaps you just need to go and do your own
>> thing". Which is what David is doing.
>>
>> Trailing Q&A session here: http://www.youtube.com/watch?v=QjXJLVINsSA
>
> if you know what you want and have a tool that does it, why bother using
> distutils?
>
> But then, what will your community do with the guy that creates packages
> with distutils? Just tell him he sucks?
>
> The whole idea is *interoperability*, not the tool used.
>
>>
>> Generalizing a bit, I think it's "web developers" and "scientists"
>> typically completely failing to see each other's use cases. I don't
>> know if that bridge can be crossed through mailing list discussion
>> alone. I know that David tried, but came to a point where he just had
>> to unsubscribe from distutils-sig.
>
> I was there, and sorry to be blunt, but he came to tell us we had to
> drop distutils because it sucked, and left because we did not follow
> that path.
>
>>
>> Sometimes design by committee is just what you want, and sometimes
>> design by committee doesn't work. ZeroMQ, for instance, is a great
>> piece of software resulting from dropping out of the AMQP committee.
>>
>>>
>>> That will not work. And I will say here again what I think we should do
>>> imho:
>>>
>>> 1/ take all the packaging PEPs and rework them until everyone is happy
>>> (compilation sucks in distutils? write a PEP !!!)
>>
>> I think the only way of making scientists happy is to make the build
>> tool choice arbitrary (and allow the use of waf, scons, cmake, jam,
>> ant, etc. for the build). After all, many projects contain more C++
>> and Fortran code than Python code. (Of course, one could make a PEP
>> saying that.)
>> Right now things are so horribly broken for the scientific community
>> that I'm not sure if one *can* sanely specify PEPs. It's more a
>> question of playing around, throwing things at the wall, and seeing
>> what sticks -- 5 years from now one is perhaps in a position where the
>> problem is really understood and one can write PEPs.
>>
>> Perhaps the "web developers" are at the PEP-ing stage already. Great
>> for you. But the use cases are really different.
>
> If you sit down and ask yourself: "what are the information a python
> project should give me so I can compile its extensions ?" I think this
> has nothing to do with the tools/implementations.

I'm not sure if I understand. A project can't "give the information needed to build it". The build system is an integrated piece of the code and package itself. Making the build of library X work on some ugly HPC setup Y is part of the development of X.

To my mind a solution looks something like (and Bento is close to this):

Step 1) "Some standard" to do configuration of a package (--prefix and other what-goes-where options, what libraries to link with, what compilers to use...)

Step 2) Launch the package's custom build system (may be a Unix shell script or makefile in some cases (sometimes portability is not a goal), may be a waf build)

Step 3) "Some standard" to be able to cleanly install/uninstall/upgrade the product of step 2)

An attempt to do Step 2) in a major way in the packaging framework itself, and have the package just "declare" its C extensions, would not work. It's fine to have a way in the packaging framework that works for trivial cases, but it's impossible to create something that works for every case.

> And if we're able to write down in a PEP this, e.g. the information a
> compiler is looking for to do its job, then any tool out there waf,
> scons, cmake, jam, ant, etc, can do the job, no ?
>
>> Anyway: I really don't want to start a flame-war here. So let's accept
>> up front that we likely won't agree here; I just wanted to clarify my
>> position.
>
> After 4 years I still don't understand what "we won't agree" means in
> this context. *NO ONE* ever came and told me: here's what I want a
> Python project to describe for its extensions.

That's unfortunate. To be honest, it's probably partly because it's easier to say what won't work than to come up with a constructive suggestion. A lot of people (me included) just use waf/cmake/autotools, and forget about making the code installable through PyPI or any of the standard Python tools -- just because that works *now* for us, while we don't have any good ideas for how to make this into something that works on a wider scale.

I think David is one of the few who has really dug into the matter and tried to find something that can both do builds and work through standard install mechanisms. I can't answer for why you haven't been able to understand one another. It may also be an issue with how much one can constructively do on mailing lists. Perhaps the only route forward is to bring people together in person and walk distutils2 people through some hairy scientific HPC builds (and vice versa).

> Just "we won't agree" or "distutils sucks" :)
>
> Gosh I hope we will overcome this lock one day, and move forward :D

Well, me too.

Dag
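[One way to picture Dag's three steps is as a thin driver that owns configuration and installation but delegates the build itself to whatever the package ships with. Every name in the sketch below is hypothetical - it illustrates the *shape* of such an interface, not any written proposal:

    import subprocess

    def build_dist(config):
        """Hypothetical driver for the three steps above."""
        # Step 1: standardized configuration (prefix, compilers, libs)
        opts = ['--prefix=%s' % config['prefix'],
                '--cc=%s' % config.get('cc', 'cc')]

        # Step 2: the package's own build system does the real work;
        # waf here, but it could just as well be make, scons, cmake...
        subprocess.check_call(['./waf', 'configure'] + opts)
        subprocess.check_call(['./waf', 'build'])

        # Step 3: the build emits a manifest of built files, which a
        # standard installer could then install/uninstall/upgrade.
        with open('build/manifest.txt') as f:
            return [line.strip() for line in f]

The packaging framework would only need to standardize the option names in step 1 and the manifest format in step 3; step 2 stays entirely the package's business.]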
From ncoghlan at gmail.com Thu Jun 21 16:30:04 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Fri, 22 Jun 2012 00:30:04 +1000
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To: <4FE32BB6.6090601@plope.com>
References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE2A6C7.9070207@plope.com> <4FE30A12.8020202@plope.com> <4FE318B8.3010005@plope.com> <4FE32BB6.6090601@plope.com>
Message-ID:

On Fri, Jun 22, 2012 at 12:12 AM, Chris McDonough wrote:
> On 06/21/2012 09:29 AM, Nick Coghlan wrote:
>>>
>>> My only comment on that is this: Since this is a problem related to the
>>> installation of Python distributions, it should deal with the problems
>>> that Python developers have more forcefully than non-Python developers
>>> and non-programmers.
>>
>> Thanks to venv, there's an alternative available that may be able to
>> keep both of us happy: split the defaults. For system installs, adopt
>> a vendor-centric, multi-language,
>> easy-to-translate-to-language-neutral-packaging mindset (e.g. avoiding
>> *.pth files by unpacking eggs to the file system). For venv installs,
>> do whatever is most convenient for pure Python developers (e.g.
>> leaving eggs packed and using *.pth files to extend sys.path within
>> the venv).
>
> I'd like to agree with this, but I think there's a distinction that needs to
> be made here that's maybe not obvious to everyone.
>
> A tool to generate an OS-specific system package from a Python library
> project should be unrelated to a Python distribution *installer*. Instead,
> you'd use related tools that understood how to unpack the distribution
> packaging format to build one or more package structures. The resulting
> structures will be processed and then eventually installed by native OS
> install tools. But the Python distribution installer (e.g. easy_install,
> pip, or some future similar tool) would just never come into play to create
> those structures. The Python distribution installer and the OS-specific
> build tool might share code to introspect and unpack files from the
> packaging format, but they'd otherwise have nothing to do with one another.
>
> This seems like the most reasonable separation of concerns to me anyway, and
> I'd be willing to work on the code that would be shared by both the
> Python-level installer and by OS-level packaging tools.

Right, but if the standard library grows a dist installer (and I think it eventually should), we're going to need to define how it should behave when executed with the *system* Python.

That will give at least 3 mechanisms for Python code to get onto a system:

1. Python dist -> converter -> system package -> system Python path

2. Python dist -> system Python installer -> system Python path

3. Python dist -> venv Python installer -> venv Python path

While I agree that path 2 should be discouraged for production systems, I don't think it should be prevented altogether (since it can be very convenient on personal systems).

As far as the scope of the packaging utilities and what they can install goes, I think the distutils2 folks have done a pretty good job of defining that with their static metadata format: http://alexis.notmyidea.org/distutils2/setupcfg.html#files

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
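[For context, the static metadata format Nick links to is declarative: the installable payload is described in setup.cfg rather than computed by executable setup.py code. A rough from-memory sketch of the style - field names are close to, but not guaranteed to match, the distutils2 spec:

    # setup.cfg (illustrative only)
    [metadata]
    name = example
    version = 0.1
    summary = An example distribution
    requires-dist = requests

    [files]
    packages = example
    scripts = scripts/example-tool

Because the description is static, both a Python-level installer and an OS-package converter can read the same file without running arbitrary code - which is what makes the converter path (mechanism 1) tractable.]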
From vandry at TZoNE.ORG Thu Jun 21 15:46:12 2012
From: vandry at TZoNE.ORG (Phil Vandry)
Date: Thu, 21 Jun 2012 09:46:12 -0400
Subject: [Python-Dev] Add os.path.resolve to simplify the use of os.readlink
In-Reply-To: <8908029a4e9614c209dff69df4f5a79d.squirrel@mail.active-4.com>
References: <8908029a4e9614c209dff69df4f5a79d.squirrel@mail.active-4.com>
Message-ID: <4FE325A4.4090609@TZoNE.ORG>

On 2012-06-21 06:23, Armin Ronacher wrote:
> Due to a user error on my part I was not using os.readlink correctly.
> Since links can be relative to their location I think it would make sense
> to provide an os.path.resolve helper that automatically returns the
> absolute path:
>
>     def resolve(filename):
>         try:
>             target = os.readlink(filename)
>         except OSError as e:
>             if e.errno == errno.EINVAL:
>                 return abspath(filename)
>             raise
>         return normpath(join(dirname(filename), target))
>
> The above implementation also does not fail if an entity exists but is not
> a link and just returns the absolute path of the given filename in that
> case.

It's expensive (not to mention racy) to do this correctly, when any component of the pathname (not just the component after the last slash) might be a symlink. For example:

    mkdir -p foo1/foo2
    touch bar
    ln -s ../../bar foo1/foo2/symlink
    ln -s foo1/foo2 foo

Now try to resolve "foo/symlink" using your function. It produces "../bar", which doesn't exist. Why not just work with the pathname you're given and let the kernel worry about resolving it?

-Phil

From solipsis at pitrou.net Thu Jun 21 16:48:00 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Thu, 21 Jun 2012 16:48:00 +0200
Subject: [Python-Dev] Add os.path.resolve to simplify the use of os.readlink
References: <8908029a4e9614c209dff69df4f5a79d.squirrel@mail.active-4.com>
Message-ID: <20120621164800.0ae44247@pitrou.net>

On Thu, 21 Jun 2012 10:23:25 -0000 "Armin Ronacher" wrote:
> Due to a user error on my part I was not using os.readlink correctly.
> Since links can be relative to their location I think it would make sense
> to provide an os.path.resolve helper that automatically returns the
> absolute path:
>
>     def resolve(filename):
>         try:
>             target = os.readlink(filename)
>         except OSError as e:
>             if e.errno == errno.EINVAL:
>                 return abspath(filename)
>             raise
>         return normpath(join(dirname(filename), target))

Note that abspath() is buggy in the face of symlinks, for example it will happily collapse /etc/foo/../bar into /etc/bar, even though /etc/foo might be a link to /usr/lib/foo

The only safe way to collapse ".." elements is to resolve symlinks.

Regards

Antoine.
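[Worth noting alongside Phil's and Antoine's points: the stdlib already provides os.path.realpath(), which resolves symlinks in *every* path component and so handles exactly Phil's layout. A quick check - the absolute path shown is illustrative:

    import os

    # With Phil's layout: foo -> foo1/foo2, foo/symlink -> ../../bar
    os.readlink('foo/symlink')       # '../../bar' (relative, misleading)
    os.path.realpath('foo/symlink')  # e.g. '/home/user/bar' -- every
                                     # component resolved, result exists

realpath() also avoids the abspath() ".."-collapsing bug Antoine describes, since it resolves the links before normalizing.]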
From chrism at plope.com Thu Jun 21 16:59:31 2012
From: chrism at plope.com (Chris McDonough)
Date: Thu, 21 Jun 2012 10:59:31 -0400
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To:
References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE2A6C7.9070207@plope.com> <4FE30A12.8020202@plope.com> <4FE318B8.3010005@plope.com> <4FE32BB6.6090601@plope.com>
Message-ID: <4FE336D3.1090406@plope.com>

On 06/21/2012 10:30 AM, Nick Coghlan wrote:
>> A tool to generate an OS-specific system package from a Python library
>> project should be unrelated to a Python distribution *installer*. Instead,
>> you'd use related tools that understood how to unpack the distribution
>> packaging format to build one or more package structures. The resulting
>> structures will be processed and then eventually installed by native OS
>> install tools. But the Python distribution installer (e.g. easy_install,
>> pip, or some future similar tool) would just never come into play to create
>> those structures. The Python distribution installer and the OS-specific
>> build tool might share code to introspect and unpack files from the
>> packaging format, but they'd otherwise have nothing to do with one another.
>>
>> This seems like the most reasonable separation of concerns to me anyway, and
>> I'd be willing to work on the code that would be shared by both the
>> Python-level installer and by OS-level packaging tools.
>
> Right, but if the standard library grows a dist installer (and I think
> it eventually should), we're going to need to define how it should
> behave when executed with the *system* Python.
>
> That will give at least 3 mechanisms for Python code to get onto a system:
>
> 1. Python dist -> converter -> system package -> system Python path
>
> 2. Python dist -> system Python installer -> system Python path
>
> 3. Python dist -> venv Python installer -> venv Python path
>
> While I agree that path 2 should be discouraged for production
> systems, I don't think it should be prevented altogether (since it can
> be very convenient on personal systems).

I'm not sure under what circumstance 2 and 3 wouldn't do the same thing. Do you have a concrete idea?

> As far as the scope of the packaging utilities and what they can
> install goes, I think the distutils2 folks have done a pretty good job
> of defining that with their static metadata format:
> http://alexis.notmyidea.org/distutils2/setupcfg.html#files

Yeah, definitely a good start.

- C

From zooko at zooko.com Thu Jun 21 17:02:58 2012
From: zooko at zooko.com (Zooko Wilcox-O'Hearn)
Date: Thu, 21 Jun 2012 12:02:58 -0300
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To:
References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net>
Message-ID:

On Thu, Jun 21, 2012 at 12:57 AM, Nick Coghlan wrote:
>
> Standard assumptions about the behaviour of site and distutils cease to be valid once setuptools is installed ...
> - advocacy for the "egg" format and the associated sys.path changes that result for all Python programs running on a system ...
> System administrators (and developers that think like system administrators when it comes to configuration management) *hate* what setuptools (and setuptools based installers) can do to their systems.

I have extensive experience with this, including quite a few bug reports and a few patches in setuptools and distribute, plus maintaining my own fork of setuptools to build and deploy my own projects, plus interviewing quite a few Python developers about why they hated setuptools, plus supporting one of them who hates setuptools even though he and I use it in a build system (https://tahoe-lafs.org).

I believe that 80% to 90% of the hatred alluded to above is due to a single issue: the fact that setuptools causes your Python interpreter to disrespect PYTHONPATH, in violation of the documentation in http://docs.python.org/release/2.7.2/install/index.html#inst-search-path , which says:

"""
The PYTHONPATH variable can be set to a list of paths that will be added to the beginning of sys.path. For example, if PYTHONPATH is set to /www/python:/opt/py, the search path will begin with ['/www/python', '/opt/py']. (Note that directories must exist in order to be added to sys.path; the site module removes paths that don't exist.)
""" Fortunately, this issue is fixable! I opened a bug report and I and a others have provided patches that makes setuptools stop doing this behavior. This makes the above documentation true again. The negative impact on features or backwards-compatibility doesn't seem to be great. http://bugs.python.org/setuptools/issue53 Philip J. Eby provisionally approved of one of the patches, except for some specific requirement that I didn't really understand how to fix and that now I don't exactly remember: http://mail.python.org/pipermail/distutils-sig/2009-January/010880.html Regards, Zooko From solipsis at pitrou.net Thu Jun 21 17:10:21 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 21 Jun 2012 17:10:21 +0200 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> Message-ID: <20120621171021.7d5de727@pitrou.net> On Thu, 21 Jun 2012 12:02:58 -0300 "Zooko Wilcox-O'Hearn" wrote: > > Fortunately, this issue is fixable! I opened a bug report and I and a > others have provided patches that makes setuptools stop doing this > behavior. This makes the above documentation true again. The negative > impact on features or backwards-compatibility doesn't seem to be > great. > > http://bugs.python.org/setuptools/issue53 > > Philip J. Eby provisionally approved of one of the patches, except for > some specific requirement that I didn't really understand how to fix > and that now I don't exactly remember: > > http://mail.python.org/pipermail/distutils-sig/2009-January/010880.html These days, I think you should really target distribute, not setuptools. Regards Antoine. From chris at kateandchris.net Thu Jun 21 17:17:24 2012 From: chris at kateandchris.net (Chris Lambacher) Date: Thu, 21 Jun 2012 15:17:24 +0000 (UTC) Subject: [Python-Dev] Status of packaging in 3.3 References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE2A6C7.9070207@plope.com> <4FE30A12.8020202@plope.com> <4FE318B8.3010005@plope.com> Message-ID: Nick Coghlan gmail.com> writes: > > The Python community covers a broad spectrum of use cases, and I > suspect that's one of the big reasons packaging can get so contentious > - the goals end up being in direct conflict. Currently, I've > identified at least half a dozen significant communities with very > different needs (the names aren't meant to be all encompassing, just > good representatives of each category, and many individuals will span > multiple categories depending on which hat they're wearing at the > time): > One set of users not covered by your list is people who need to Cross-Compile Python to another CPU architecture (i.e. x86 to ARM/PowerPC) for use with embedded computers. Distutils does not handle this very well. If you want a recent overview of what these users go through you should see my talk from PyCon 2012: http://pyvideo.org/video/682/cross-compiling-python-c-extensions-for-embedde -Chris From ericsnowcurrently at gmail.com Thu Jun 21 17:34:10 2012 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Thu, 21 Jun 2012 09:34:10 -0600 Subject: [Python-Dev] [Python-checkins] peps: The latest changes from Yury Selivanov. I can almost taste the acceptance! In-Reply-To: References: Message-ID: On Thu, Jun 21, 2012 at 2:44 AM, larry.hastings wrote: > http://hg.python.org/peps/rev/1edf1cecae7d > changeset: ? 
4472:1edf1cecae7d > user: ? ? ? ?Larry Hastings > date: ? ? ? ?Thu Jun 21 01:44:15 2012 -0700 > summary: > ?The latest changes from Yury Selivanov. ?I can almost taste the acceptance! > > files: > ?pep-0362.txt | ?159 +++++++++++++++++++++++++++++++------- > ?1 files changed, 128 insertions(+), 31 deletions(-) > > > diff --git a/pep-0362.txt b/pep-0362.txt > --- a/pep-0362.txt > +++ b/pep-0362.txt > @@ -42,23 +42,58 @@ > ?A Signature object has the following public attributes and methods: > > ?* return_annotation : object > - ? ?The annotation for the return type of the function if specified. > - ? ?If the function has no annotation for its return type, this > - ? ?attribute is not set. > + ? ?The "return" annotation for the function. If the function > + ? ?has no "return" annotation, this attribute is not set. > + > ?* parameters : OrderedDict > ? ? An ordered mapping of parameters' names to the corresponding > - ? ?Parameter objects (keyword-only arguments are in the same order > - ? ?as listed in ``code.co_varnames``). > + ? ?Parameter objects. > + > ?* bind(\*args, \*\*kwargs) -> BoundArguments > ? ? Creates a mapping from positional and keyword arguments to > ? ? parameters. ?Raises a ``TypeError`` if the passed arguments do > ? ? not match the signature. > + > ?* bind_partial(\*args, \*\*kwargs) -> BoundArguments > ? ? Works the same way as ``bind()``, but allows the omission > ? ? of some required arguments (mimics ``functools.partial`` > ? ? behavior.) ?Raises a ``TypeError`` if the passed arguments do > ? ? not match the signature. > > +* replace(parameters, \*, return_annotation) -> Signature Shouldn't it be something like this: * replace_(*parameters, [return_annotation]) -> Signature Or is parameters supposed to be a dict/OrderedDict of replacements/additions? > + ? ?Creates a new Signature instance based on the instance > + ? ?``replace`` was invoked on. ?It is possible to pass different > + ? ?``parameters`` and/or ``return_annotation`` to override the > + ? ?corresponding properties of the base signature. ?To remove > + ? ?``return_annotation`` from the copied ``Signature``, pass in > + ? ?``Signature.empty``. Can you likewise remove parameters this way? > + > +Signature objects are immutable. ?Use ``Signature.replace()`` to > +make a modified copy: > +:: > + > + ? ?>>> sig = signature(foo) > + ? ?>>> new_sig = sig.replace(return_annotation="new return annotation") > + ? ?>>> new_sig is not sig > + ? ?True > + ? ?>>> new_sig.return_annotation == sig.return_annotation > + ? ?True Should be False here, right? > + ? ?>>> new_sig.parameters == sig.parameters > + ? ?True An example of replacing parameters would also be good here. > + > +There are two ways to instantiate a Signature class: > + > +* Signature(parameters, *, return_annotation) Same here as with Signature.replace(). > + ? ?Default Signature constructor. ?Accepts an optional sequence > + ? ?of ``Parameter`` objects, and an optional ``return_annotation``. > + ? ?Parameters sequence is validated to check that there are no > + ? ?parameters with duplicate names, and that the parameters > + ? ?are in the right order, i.e. positional-only first, then > + ? ?positional-or-keyword, etc. > +* Signature.from_function(function) > + ? ?Returns a Signature object reflecting the signature of the > + ? ?function passed in. > + > ?It's possible to test Signatures for equality. 
?Two signatures are > ?equal when their parameters are equal, their positional and > ?positional-only parameters appear in the same order, and they > @@ -67,9 +102,14 @@ > ?Changes to the Signature object, or to any of its data members, > ?do not affect the function itself. > > -Signature also implements ``__str__`` and ``__copy__`` methods. > -The latter creates a shallow copy of Signature, with all Parameter > -objects copied as well. > +Signature also implements ``__str__``: > +:: > + > + ? ?>>> str(Signature.from_function((lambda *args: None))) > + ? ?'(*args)' > + > + ? ?>>> str(Signature()) > + ? ?'()' > > > ?Parameter Object > @@ -80,20 +120,22 @@ > ?propose a rich Parameter object designed to represent any possible > ?function parameter. > > -The structure of the Parameter object is: > +A Parameter object has the following public attributes and methods: > > ?* name : str > - ? ?The name of the parameter as a string. > + ? ?The name of the parameter as a string. ?Must be a valid > + ? ?python identifier name (with the exception of ``POSITIONAL_ONLY`` > + ? ?parameters, which can have it set to ``None``.) > > ?* default : object > - ? ?The default value for the parameter, if specified. ?If the > - ? ?parameter has no default value, this attribute is not set. > + ? ?The default value for the parameter. ?If the parameter has no > + ? ?default value, this attribute is not set. > > ?* annotation : object > - ? ?The annotation for the parameter if specified. ?If the > - ? ?parameter has no annotation, this attribute is not set. > + ? ?The annotation for the parameter. ?If the parameter has no > + ? ?annotation, this attribute is not set. > > -* kind : str > +* kind > ? ? Describes how argument values are bound to the parameter. > ? ? Possible values: > > @@ -101,7 +143,7 @@ > ? ? ? ? ?as a positional argument. > > ? ? ? ? ?Python has no explicit syntax for defining positional-only > - ? ? ? ? parameters, but many builtin and extension module functions > + ? ? ? ? parameters, but many built-in and extension module functions > ? ? ? ? ?(especially those that accept only one or two parameters) > ? ? ? ? ?accept them. > > @@ -124,9 +166,30 @@ > ? ? ? ? ?that aren't bound to any other parameter. This corresponds > ? ? ? ? ?to a "\*\*kwds" parameter in a Python function definition. > > +* replace(\*, name, kind, default, annotation) -> Parameter > + ? ?Creates a new Parameter instance based on the instance > + ? ?``replaced`` was invoked on. ?To override a Parameter > + ? ?attribute, pass the corresponding argument. ?To remove > + ? ?an attribute from a ``Parameter``, pass ``Parameter.empty``. > + > + > ?Two parameters are equal when they have equal names, kinds, defaults, > ?and annotations. > > +Parameter objects are immutable. ?Instead of modifying a Parameter object, > +you can use ``Parameter.replace()`` to create a modified copy like so: > +:: > + > + ? ?>>> param = Parameter('foo', Parameter.KEYWORD_ONLY, default=42) > + ? ?>>> str(param) > + ? ?'foo=42' > + > + ? ?>>> str(param.replace()) > + ? ?'foo=42' > + > + ? ?>>> str(param.replace(default=Parameter.empty, annotation='spam')) > + ? ?"foo:'spam'" > + > > ?BoundArguments Object > ?===================== > @@ -138,7 +201,8 @@ > > ?* arguments : OrderedDict > ? ? An ordered, mutable mapping of parameters' names to arguments' values. > - ? ?Does not contain arguments' default values. > + ? ?Contains only explicitly bound arguments. ?Arguments for > + ? ?which ``bind()`` relied on a default value are skipped. > ?* args : tuple > ? ? 
Tuple of positional arguments values. ?Dynamically computed from > ? ? the 'arguments' attribute. > @@ -159,6 +223,23 @@ > ? ? ba = sig.bind(10, b=20) > ? ? test(*ba.args, **ba.kwargs) > > +Arguments which could be passed as part of either ``*args`` or ``**kwargs`` > +will be included only in the ``BoundArguments.args`` attribute. ?Consider the Why wouldn't the kwargs go into BoundArguments.kwargs? > +following example: > +:: > + > + ? ?def test(a=1, b=2, c=3): > + ? ? ? ?pass > + > + ? ?sig = signature(test) > + ? ?ba = sig.bind(a=10, c=13) > + > + ? ?>>> ba.args > + ? ?(10,) > + > + ? ?>>> ba.kwargs: > + ? ?{'c': 13} > + > > ?Implementation > ?============== > @@ -172,7 +253,7 @@ > ? ? - If the object is not callable - raise a TypeError > > ? ? - If the object has a ``__signature__`` attribute and if it > - ? ? ?is not ``None`` - return a shallow copy of it > + ? ? ?is not ``None`` - return it > > ? ? - If it has a ``__wrapped__`` attribute, return > ? ? ? ``signature(object.__wrapped__)`` > @@ -180,12 +261,9 @@ > ? ? - If the object is a an instance of ``FunctionType`` construct s/``FunctionType`` construct/``FunctionType``, construct/ > ? ? ? and return a new ``Signature`` for it > > - ? ?- If the object is a method or a classmethod, construct and return > - ? ? ?a new ``Signature`` object, with its first parameter (usually > - ? ? ?``self`` or ``cls``) removed > - > - ? ?- If the object is a staticmethod, construct and return > - ? ? ?a new ``Signature`` object > + ? ?- If the object is a method, construct and return a new ``Signature`` > + ? ? ?object, with its first parameter (usually ``self`` or ``cls``) > + ? ? ?removed It may be worth explicitly clarify that it refers to bound methods (and classmethods) here. Also, inspect.getfullargspec() doesn't strip out the self/cls. Would it be okay to store that implicit first argument (self/cls) on the Signature object somehow? Explicit is better than implicit. It's certainly a very special case: the only implicit (and unavoidable) arguments of any kind of callable. If the self were stored on the Signature, I'd also expect that Signature.replace() would leave it out (as would any other copy mechanism). > > ? ? - If the object is an instance of ``functools.partial``, construct > ? ? ? a new ``Signature`` from its ``partial.func`` attribute, and > @@ -196,15 +274,15 @@ > ? ? ? ? - If the object's type has a ``__call__`` method defined in > ? ? ? ? ? its MRO, return a Signature for it > > - ? ? ? ?- If the object has a ``__new__`` method defined in its class, > + ? ? ? ?- If the object has a ``__new__`` method defined in its MRO, > ? ? ? ? ? return a Signature object for it > > - ? ? ? ?- If the object has a ``__init__`` method defined in its class, > + ? ? ? ?- If the object has a ``__init__`` method defined in its MRO, > ? ? ? ? ? return a Signature object for it > > ? ? - Return ``signature(object.__call__)`` > > -Note, that the ``Signature`` object is created in a lazy manner, and > +Note that the ``Signature`` object is created in a lazy manner, and > ?is not automatically cached. ?If, however, the Signature object was > ?explicitly cached by the user, ``signature()`` returns a new shallow copy > ?of it on each invocation. > @@ -236,11 +314,21 @@ > ?---------------------------------------- > > ?Some functions may not be introspectable in certain implementations of > -Python. ?For example, in CPython, builtin functions defined in C provide > +Python. 
?For example, in CPython, built-in functions defined in C provide > ?no metadata about their arguments. ?Adding support for them is out of > ?scope for this PEP. > > > +Signature and Parameter equivalence > +----------------------------------- > + > +We assume that parameter names have semantic significance--two > +signatures are equal only when their corresponding parameters have > +the exact same names. ?Users who want looser equivalence tests, perhaps > +ignoring names of VAR_KEYWORD or VAR_POSITIONAL parameters, will > +need to implement those themselves. > + > + > ?Examples > ?======== > > @@ -270,6 +358,10 @@ > ? ? ? ? def __call__(self, a, b, *, c) -> tuple: > ? ? ? ? ? ? return a, b, c > > + ? ? ? ?@classmethod > + ? ? ? ?def spam(cls, a): > + ? ? ? ? ? ?return a > + > > ? ? def shared_vars(*shared_args): > ? ? ? ? """Decorator factory that defines shared variables that are > @@ -280,10 +372,12 @@ > ? ? ? ? ? ? def wrapper(*args, **kwds): > ? ? ? ? ? ? ? ? full_args = shared_args + args > ? ? ? ? ? ? ? ? return f(*full_args, **kwds) > + > ? ? ? ? ? ? # Override signature > - ? ? ? ? ? ?sig = wrapper.__signature__ = signature(f) > - ? ? ? ? ? ?for __ in shared_args: > - ? ? ? ? ? ? ? ?sig.parameters.popitem(last=False) > + ? ? ? ? ? ?sig = signature(f) > + ? ? ? ? ? ?sig = sig.replace(tuple(sig.parameters.values())[1:]) > + ? ? ? ? ? ?wrapper.__signature__ = sig > + > ? ? ? ? ? ? return wrapper > ? ? ? ? return decorator > > @@ -313,6 +407,9 @@ > ? ? >>> format_signature(Foo().__call__) > ? ? '(a, b, *, c) -> tuple' > > + ? ?>>> format_signature(Foo.spam) > + ? ?'(a)' > + > ? ? >>> format_signature(partial(Foo().__call__, 1, c=3)) > ? ? '(b, *, c=3) -> tuple' I'm really impressed by the great work on this and how well the PEP process has been working here. This is a great addition to Python! -eric From pje at telecommunity.com Thu Jun 21 17:37:36 2012 From: pje at telecommunity.com (PJ Eby) Date: Thu, 21 Jun 2012 11:37:36 -0400 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> Message-ID: On Jun 21, 2012 11:02 AM, "Zooko Wilcox-O'Hearn" wrote: > > Philip J. Eby provisionally approved of one of the patches, except for > some specific requirement that I didn't really understand how to fix > and that now I don't exactly remember: > > http://mail.python.org/pipermail/distutils-sig/2009-January/010880.html > I don't remember either; I just reviewed the patch and discussion, and I'm not finding what the holdup was, exactly. Looking at it now, it looks to me like a good idea... oh wait, *now* I remember the problem, or at least, what needs reviewing. Basically, the challenge is that it doesn't allow an .egg in a PYTHONPATH directory to take precedence over that *specific* PYTHONPATH directory. With the perspective of hindsight, this was purely a transitional concern, since it only *really* mattered for site-packages; anyplace else you could just delete the legacy package if it was a problem. (And your patch works fine for that case.) However, for setuptools as it was when you proposed this, it was a potential backwards-compatibility problem. My best guess is that I was considering the approach for 0.7... which never got any serious development time. (It may be too late to fix the issue, in more than one sense. 
Even if the problem ceased to be a problem today, nobody's going to re-evaluate their position on setuptools, especially if their position wasn't even based on a personal experience with the issue.) -------------- next part -------------- An HTML attachment was scrubbed... URL: From pje at telecommunity.com Thu Jun 21 17:45:46 2012 From: pje at telecommunity.com (PJ Eby) Date: Thu, 21 Jun 2012 11:45:46 -0400 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <4FE32BB6.6090601@plope.com> References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE2A6C7.9070207@plope.com> <4FE30A12.8020202@plope.com> <4FE318B8.3010005@plope.com> <4FE32BB6.6090601@plope.com> Message-ID: On Jun 21, 2012 10:12 AM, "Chris McDonough" wrote: > - Install "package resources", which are non-Python source files that > happen to live in package directories. I love this phrasing, by the way ("non-Python source files"). A pet peeve of mine is the insistence by some people that such files are "data" and don't belong in package directories, despite the fact that if you gave them a .py extension and added data="""...""" around them, they'd be considered part of the code. A file's name and internal format aren't what distinguishes code from data; it's the way it's *used* that matters. I think "packaging" has swung the wrong way on this particular point, and that resources and data files should be distinguished in setup.cfg, with sysadmins *not* being given the option to muck about with resources -- especially not to install them in locations where they might be mistaken for something editable. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Thu Jun 21 17:48:24 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 22 Jun 2012 01:48:24 +1000 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <4FE336D3.1090406@plope.com> References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE2A6C7.9070207@plope.com> <4FE30A12.8020202@plope.com> <4FE318B8.3010005@plope.com> <4FE32BB6.6090601@plope.com> <4FE336D3.1090406@plope.com> Message-ID: On Fri, Jun 22, 2012 at 12:59 AM, Chris McDonough wrote: > On 06/21/2012 10:30 AM, Nick Coghlan wrote: >> That will give at least 3 mechanisms for Python code to get onto a system: >> >> 1. Python dist -> ?converter -> ?system package -> ?system Python path >> >> 2. Python dist -> ?system Python installer -> ?system Python path >> >> 3. Python dist -> ?venv Python installer -> ?venv Python path >> >> While I agree that path 2 should be discouraged for production >> systems, I don't think it should be prevented altogether (since it can >> be very convenient on personal systems). > > > I'm not sure under what circumstance 2 and 3 wouldn't do the same thing. ?Do > you have a concrete idea? Yep, this is what I was talking about in terms of objecting to installation of *.pth files: I think automatically installing *.pth files into the system Python path is *wrong* (just like globally editing PYTHONPATH), and that includes any *.pth files needed for egg installation. In a venv however, I assume the entire thing is application specific, so using *.pth files and eggs for ease of management makes a lot of sense and I would be fine with using that style of installation by default. 
From ncoghlan at gmail.com Thu Jun 21 17:48:24 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Fri, 22 Jun 2012 01:48:24 +1000
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To: <4FE336D3.1090406@plope.com>
References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE2A6C7.9070207@plope.com> <4FE30A12.8020202@plope.com> <4FE318B8.3010005@plope.com> <4FE32BB6.6090601@plope.com> <4FE336D3.1090406@plope.com>
Message-ID:

On Fri, Jun 22, 2012 at 12:59 AM, Chris McDonough wrote:
> On 06/21/2012 10:30 AM, Nick Coghlan wrote:
>> That will give at least 3 mechanisms for Python code to get onto a system:
>>
>> 1. Python dist -> converter -> system package -> system Python path
>>
>> 2. Python dist -> system Python installer -> system Python path
>>
>> 3. Python dist -> venv Python installer -> venv Python path
>>
>> While I agree that path 2 should be discouraged for production
>> systems, I don't think it should be prevented altogether (since it can
>> be very convenient on personal systems).
>
> I'm not sure under what circumstance 2 and 3 wouldn't do the same thing. Do
> you have a concrete idea?

Yep, this is what I was talking about in terms of objecting to installation of *.pth files: I think automatically installing *.pth files into the system Python path is *wrong* (just like globally editing PYTHONPATH), and that includes any *.pth files needed for egg installation.

In a venv, however, I assume the entire thing is application specific, so using *.pth files and eggs for ease of management makes a lot of sense, and I would be fine with using that style of installation by default.

If the *same* default was going to be used in both places, my preference would be to avoid *.pth files by default and require them to be explicitly requested regardless of the nature of the target environment. I really just wanted to be clear that I don't mind *.pth files at all in the venv case, because they're not affecting the runtime state of other applications.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
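[For anyone who hasn't looked inside one: a *.pth file is just a list of extra sys.path entries that the site module processes at every interpreter startup, and any line beginning with "import" is *executed*, which is the hook easy_install uses to re-order sys.path. A minimal illustration - the paths and egg name are made up:

    # site-packages/example.pth
    # A plain line is appended to sys.path at startup:
    /usr/lib/python2.7/site-packages/Example-1.0-py2.7.egg
    # A line starting with "import" is executed by site.py:
    import sys; sys.path.insert(0, '/opt/example/lib')

Because site.py reads these for *every* Python process on the machine, a system-wide .pth file changes the runtime environment of unrelated applications - which is exactly Nick's objection to installing them outside a venv.]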
> A pet peeve of mine is the insistence by some people that such files are
> "data" and don't belong in package directories, despite the fact that if
> you gave them a .py extension and added data="""...""" around them,
> they'd be considered part of the code. A file's name and internal
> format aren't what distinguishes code from data; it's the way it's
> *used* that matters.
>
> I think "packaging" has swung the wrong way on this particular point,
> and that resources and data files should be distinguished in setup.cfg,
> with sysadmins *not* being given the option to muck about with resources
> -- especially not to install them in locations where they might be
> mistaken for something editable.

+1.

A good number of the "package resource" files we deploy are not data files at all. In particular, a lot of them are files which represent HTML templates. These templates are exclusively the domain of the software being installed, and considering them explicitly "more editable" than the Python source they sit next to in the package structure is a grave mistake. They have exactly the same editability candidacy as the Python source files they are mixed in with.

- C

From tarek at ziade.org  Thu Jun 21 18:03:06 2012
From: tarek at ziade.org (Tarek Ziadé)
Date: Thu, 21 Jun 2012 18:03:06 +0200
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To: <4FE342C9.10004@plope.com>
References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org>
	<20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net>
	<4FE342C9.10004@plope.com>
Message-ID: <4FE345BA.90400@ziade.org>

On 6/21/12 5:50 PM, Chris McDonough wrote:
> A minor backwards incompat here to fix that issue would be
> appropriate, if only to be able to say "hey, that issue no longer
> exists" to folks who condemn the entire ecosystem based on that bug.
> At least, that is, if there will be another release of setuptools. Is
> that likely?

or simply do that fix in distribute since it's Python 3 compatible -- and have setuptools officially discontinued for the sake of clarity.

From pje at telecommunity.com  Thu Jun 21 18:26:25 2012
From: pje at telecommunity.com (PJ Eby)
Date: Thu, 21 Jun 2012 12:26:25 -0400
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To: <4FE342C9.10004@plope.com>
References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org>
	<20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net>
	<4FE342C9.10004@plope.com>
Message-ID:

On Thu, Jun 21, 2012 at 11:50 AM, Chris McDonough wrote:
> On 06/21/2012 11:37 AM, PJ Eby wrote:
>> On Jun 21, 2012 11:02 AM, "Zooko Wilcox-O'Hearn" wrote:
>> >
>> > Philip J. Eby provisionally approved of one of the patches, except for
>> > some specific requirement that I didn't really understand how to fix
>> > and that now I don't exactly remember:
>> >
>> > http://mail.python.org/pipermail/distutils-sig/2009-January/010880.html
>>
>> I don't remember either; I just reviewed the patch and discussion, and
>> I'm not finding what the holdup was, exactly. Looking at it now, it
>> looks to me like a good idea... oh wait, *now* I remember the problem,
>> or at least, what needs reviewing.
>>
>> Basically, the challenge is that it doesn't allow an .egg in a
>> PYTHONPATH directory to take precedence over that *specific* PYTHONPATH
>> directory.
>>
>> With the perspective of hindsight, this was purely a transitional
>> concern, since it only *really* mattered for site-packages; anyplace
>> else you could just delete the legacy package if it was a problem. (And
>> your patch works fine for that case.)
>>
>> However, for setuptools as it was when you proposed this, it was a
>> potential backwards-compatibility problem. My best guess is that I was
>> considering the approach for 0.7... which never got any serious
>> development time.
>>
>> (It may be too late to fix the issue, in more than one sense. Even if
>> the problem ceased to be a problem today, nobody's going to re-evaluate
>> their position on setuptools, especially if their position wasn't even
>> based on a personal experience with the issue.)
>
> A minor backwards incompat here to fix that issue would be appropriate,
> if only to be able to say "hey, that issue no longer exists" to folks
> who condemn the entire ecosystem based on that bug. At least, that is,
> if there will be another release of setuptools. Is that likely?

Yes. At the very least, there will be updated development snapshots (which are what buildout uses anyway).

(Official releases are in a bit of a weird holding pattern. distribute's versioning scheme leads to potential confusion: if I release e.g. 0.6.1, then it sounds like it's a lesser version than whatever distribute is up to now. OTOH, releasing a later version number than distribute implies that I'm supporting their feature enhancements, and I really don't want to add new features to 0.6... but don't have time right now to clean up all the stuff I started in the 0.7 line either, since I've been *hoping* that the work on packaging would make 0.7 unnecessary. And let's not even get started on the part where system-installed copies of distribute can prevent people from downloading or installing setuptools in the first place.)

Anyway, changing this in a snapshot release shouldn't be a big concern; the main user of snapshots is buildout, and buildout doesn't use .pth files anyway, it just writes scripts that do sys.path manipulation. (A better approach, for everything except having stuff importable from the standard interpreter.) Of course, the flip side is that it means there won't be many people testing the fix.

From chrism at plope.com  Thu Jun 21 18:44:01 2012
From: chrism at plope.com (Chris McDonough)
Date: Thu, 21 Jun 2012 12:44:01 -0400
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To:
References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org>
	<20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net>
	<4FE342C9.10004@plope.com>
Message-ID: <4FE34F51.3010201@plope.com>

On 06/21/2012 12:26 PM, PJ Eby wrote:
> On Thu, Jun 21, 2012 at 11:50 AM, Chris McDonough wrote:
>> On 06/21/2012 11:37 AM, PJ Eby wrote:
>>> On Jun 21, 2012 11:02 AM, "Zooko Wilcox-O'Hearn" wrote:
>>> >
>>> > Philip J. Eby provisionally approved of one of the patches, except
>>> > for some specific requirement that I didn't really understand how
>>> > to fix and that now I don't exactly remember:
>>> >
>>> > http://mail.python.org/pipermail/distutils-sig/2009-January/010880.html
>>>
>>> I don't remember either; I just reviewed the patch and discussion, and
>>> I'm not finding what the holdup was, exactly. Looking at it now, it
>>> looks to me like a good idea... oh wait, *now* I remember the problem,
>>> or at least, what needs reviewing.
>>>
>>> Basically, the challenge is that it doesn't allow an .egg in a
>>> PYTHONPATH directory to take precedence over that *specific*
>>> PYTHONPATH directory.
>>>
>>> With the perspective of hindsight, this was purely a transitional
>>> concern, since it only *really* mattered for site-packages; anyplace
>>> else you could just delete the legacy package if it was a problem.
>>> (And your patch works fine for that case.)
>>>
>>> However, for setuptools as it was when you proposed this, it was a
>>> potential backwards-compatibility problem. My best guess is that I was
>>> considering the approach for 0.7... which never got any serious
>>> development time.
>>>
>>> (It may be too late to fix the issue, in more than one sense. Even if
>>> the problem ceased to be a problem today, nobody's going to re-evaluate
>>> their position on setuptools, especially if their position wasn't even
>>> based on a personal experience with the issue.)
>>
>> A minor backwards incompat here to fix that issue would be appropriate,
>> if only to be able to say "hey, that issue no longer exists" to folks
>> who condemn the entire ecosystem based on that bug. At least, that is,
>> if there will be another release of setuptools. Is that likely?
>
> Yes. At the very least, there will be updated development snapshots
> (which are what buildout uses anyway).
>
> (Official releases are in a bit of a weird holding pattern. distribute's
> versioning scheme leads to potential confusion: if I release e.g. 0.6.1,
> then it sounds like it's a lesser version than whatever distribute is up
> to now. OTOH, releasing a later version number than distribute implies
> that I'm supporting their feature enhancements, and I really don't want
> to add new features to 0.6... but don't have time right now to clean up
> all the stuff I started in the 0.7 line either, since I've been *hoping*
> that the work on packaging would make 0.7 unnecessary. And let's not
> even get started on the part where system-installed copies of distribute
> can prevent people from downloading or installing setuptools in the
> first place.)

Welp, I don't want to get in the middle of that whole mess. But maybe the distribute folks would be kind enough to do a major version bump in their next release; e.g. 1.67 instead of 0.67. That said, I don't think anyone would be confused by overlapping version numbers between the two projects. It's known that they have been diverging for a while.

- C

From tarek at ziade.org  Thu Jun 21 19:20:24 2012
From: tarek at ziade.org (Tarek Ziadé)
Date: Thu, 21 Jun 2012 19:20:24 +0200
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To: <4FE34F51.3010201@plope.com>
References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org>
	<20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net>
	<4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com>
Message-ID: <4FE357D8.1090006@ziade.org>

On 6/21/12 6:44 PM, Chris McDonough wrote:
>> Yes. At the very least, there will be updated development snapshots
>> (which are what buildout uses anyway).
>>
>> (Official releases are in a bit of a weird holding pattern.
>> distribute's versioning scheme leads to potential confusion: if I
>> release e.g. 0.6.1, then it sounds like it's a lesser version than
>> whatever distribute is up to now. OTOH, releasing a later version
>> number than distribute implies that I'm supporting their feature
>> enhancements, and I really don't want to add new features to 0.6...
>> but don't have time right now to clean up all the stuff I started in
>> the 0.7 line either, since I've been *hoping* that the work on
>> packaging would make 0.7 unnecessary. And let's not even get started
>> on the part where system-installed copies of distribute can prevent
>> people from downloading or installing setuptools in the first place.)
>
> Welp, I don't want to get in the middle of that whole mess. But maybe
> the distribute folks would be kind enough to do a major version bump
> in their next release; e.g. 1.67 instead of 0.67. That said, I don't
> think anyone would be confused by overlapping version numbers between
> the two projects.

Oh yeah no problem, if Philip backports all the things we've done like Py3 compat, and blesses more people to maintain setuptools, we can even discontinue distribute !

If not, I think you are just joking here -- we don't want to go back into the lock situation we've suffered for many years, where PJE is the only maintainer, then suddenly disappears for a year, telling us no one that is willing to maintain setuptools is able to do so. (according to him)

> It's known that they have been diverging for a while.

Yeah, the biggest difference is Py3 compat; other than that, afaik I don't think any API has been removed or modified.

In my opinion, distribute is the only project that should go forward since it's actively maintained and does not suffer from the bus factor.

From yselivanov.ml at gmail.com  Thu Jun 21 19:21:00 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Thu, 21 Jun 2012 13:21:00 -0400
Subject: [Python-Dev] PEP 362 6th edition
Message-ID: <3CD502A7-4F26-4C4B-9B99-EACA2D9C6886@gmail.com>

Hello,

The new revision of PEP 362 has been posted: http://www.python.org/dev/peps/pep-0362/

Summary:

1. Signature & Parameter objects are now immutable

2. Signature.replace() and Parameter.replace()

3. Signature has a new default constructor, which accepts a parameters list and a return_annotation. The parameters list is checked for the correct order (i.e. keyword-only before var-keyword, not vice-versa). The second way to instantiate Signatures is to use 'from_function', which creates a Signature object for the passed function.

4. Parameter.__str__

5. Positional-only arguments are rendered in '<>'

6. PEP was updated to include new documentation & small examples.

The implementation is updated and 100% test covered. Please see the issue: http://bugs.python.org/issue15008

Open questions: Just one - Should we rename 'replace()' to 'new()'? I like 'new()' a bit better - it suggests that we'll get a new object.

- Yury


PEP: 362
Title: Function Signature Object
Version: $Revision$
Last-Modified: $Date$
Author: Brett Cannon, Jiwon Seo, Yury Selivanov, Larry Hastings
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 21-Aug-2006
Python-Version: 3.3
Post-History: 04-Jun-2012


Abstract
========

Python has always supported powerful introspection capabilities, including introspecting functions and methods (for the rest of this PEP, "function" refers to both functions and methods). By examining a function object you can fully reconstruct the function's signature. Unfortunately this information is stored in an inconvenient manner, and is spread across a half-dozen deeply nested attributes.

This PEP proposes a new representation for function signatures. The new representation contains all necessary information about a function and its parameters, and makes introspection easy and straightforward.
However, this object does not replace the existing function metadata, which is used by Python itself to execute those functions. The new metadata object is intended solely to make function introspection easier for Python programmers.


Signature Object
================

A Signature object represents the call signature of a function and its return annotation. For each parameter accepted by the function it stores a `Parameter object`_ in its ``parameters`` collection.

A Signature object has the following public attributes and methods:

* return_annotation : object
    The "return" annotation for the function. If the function has no
    "return" annotation, this attribute is not set.

* parameters : OrderedDict
    An ordered mapping of parameters' names to the corresponding
    Parameter objects.

* bind(\*args, \*\*kwargs) -> BoundArguments
    Creates a mapping from positional and keyword arguments to
    parameters. Raises a ``TypeError`` if the passed arguments do
    not match the signature.

* bind_partial(\*args, \*\*kwargs) -> BoundArguments
    Works the same way as ``bind()``, but allows the omission
    of some required arguments (mimics ``functools.partial`` behavior.)
    Raises a ``TypeError`` if the passed arguments do not match the
    signature.

* replace(parameters, \*, return_annotation) -> Signature
    Creates a new Signature instance based on the instance ``replace``
    was invoked on. It is possible to pass different ``parameters``
    and/or ``return_annotation`` to override the corresponding
    properties of the base signature. To remove ``return_annotation``
    from the copied ``Signature``, pass in ``Signature.empty``.

Signature objects are immutable. Use ``Signature.replace()`` to make a modified copy:
::

    >>> def foo() -> None:
    ...     pass

    >>> sig = signature(foo)

    >>> new_sig = sig.replace(return_annotation="new return annotation")
    >>> new_sig is not sig
    True
    >>> new_sig.return_annotation != sig.return_annotation
    True
    >>> new_sig.parameters == sig.parameters
    True

    >>> new_sig = new_sig.replace(return_annotation=new_sig.empty)
    >>> hasattr(new_sig, "return_annotation")
    False

There are two ways to instantiate a Signature class:

* Signature(parameters, \*, return_annotation)
    Default Signature constructor. Accepts an optional sequence of
    ``Parameter`` objects, and an optional ``return_annotation``.
    Parameters sequence is validated to check that there are no
    parameters with duplicate names, and that the parameters are in
    the right order, i.e. positional-only first, then
    positional-or-keyword, etc.

* Signature.from_function(function)
    Returns a Signature object reflecting the signature of the
    function passed in.

It's possible to test Signatures for equality. Two signatures are equal when their parameters are equal, their positional and positional-only parameters appear in the same order, and they have equal return annotations.

Changes to the Signature object, or to any of its data members, do not affect the function itself.

Signature also implements ``__str__``:
::

    >>> str(Signature.from_function((lambda *args: None)))
    '(*args)'

    >>> str(Signature())
    '()'


Parameter Object
================

Python's expressive syntax means functions can accept many different kinds of parameters with many subtle semantic differences. We propose a rich Parameter object designed to represent any possible function parameter.

A Parameter object has the following public attributes and methods:

* name : str
    The name of the parameter as a string.
    Must be a valid python identifier name (with the exception of
    ``POSITIONAL_ONLY`` parameters, which can have it set to ``None``.)

* default : object
    The default value for the parameter. If the parameter has no
    default value, this attribute is not set.

* annotation : object
    The annotation for the parameter. If the parameter has no
    annotation, this attribute is not set.

* kind
    Describes how argument values are bound to the parameter.
    Possible values:

    * ``Parameter.POSITIONAL_ONLY`` - value must be supplied as a
      positional argument.

      Python has no explicit syntax for defining positional-only
      parameters, but many built-in and extension module functions
      (especially those that accept only one or two parameters)
      accept them.

    * ``Parameter.POSITIONAL_OR_KEYWORD`` - value may be supplied as
      either a keyword or positional argument (this is the standard
      binding behaviour for functions implemented in Python.)

    * ``Parameter.KEYWORD_ONLY`` - value must be supplied as a keyword
      argument. Keyword only parameters are those which appear after
      a "*" or "\*args" entry in a Python function definition.

    * ``Parameter.VAR_POSITIONAL`` - a tuple of positional arguments
      that aren't bound to any other parameter. This corresponds to
      a "\*args" parameter in a Python function definition.

    * ``Parameter.VAR_KEYWORD`` - a dict of keyword arguments that
      aren't bound to any other parameter. This corresponds to a
      "\*\*kwds" parameter in a Python function definition.

    Always use ``Parameter.*`` constants for setting and checking
    value of the ``kind`` attribute.

* replace(\*, name, kind, default, annotation) -> Parameter
    Creates a new Parameter instance based on the instance ``replace``
    was invoked on. To override a Parameter attribute, pass the
    corresponding argument. To remove an attribute from a
    ``Parameter``, pass ``Parameter.empty``.

Parameter constructor:

* Parameter(name, kind, \*, annotation, default)
    Instantiates a Parameter object. ``name`` and ``kind`` are
    required, while ``annotation`` and ``default`` are optional.

Two parameters are equal when they have equal names, kinds, defaults, and annotations.

Parameter objects are immutable. Instead of modifying a Parameter object, you can use ``Parameter.replace()`` to create a modified copy like so:
::

    >>> param = Parameter('foo', Parameter.KEYWORD_ONLY, default=42)
    >>> str(param)
    'foo=42'

    >>> str(param.replace())
    'foo=42'

    >>> str(param.replace(default=Parameter.empty, annotation='spam'))
    "foo:'spam'"


BoundArguments Object
=====================

Result of a ``Signature.bind`` call. Holds the mapping of arguments to the function's parameters.

Has the following public attributes:

* arguments : OrderedDict
    An ordered, mutable mapping of parameters' names to arguments'
    values. Contains only explicitly bound arguments. Arguments for
    which ``bind()`` relied on a default value are skipped.

* args : tuple
    Tuple of positional arguments values. Dynamically computed from
    the 'arguments' attribute.

* kwargs : dict
    Dict of keyword arguments values. Dynamically computed from the
    'arguments' attribute.

The ``arguments`` attribute should be used in conjunction with ``Signature.parameters`` for any arguments processing purposes.

``args`` and ``kwargs`` properties can be used to invoke functions:
::

    def test(a, *, b):
        ...

    sig = signature(test)
    ba = sig.bind(10, b=20)
    test(*ba.args, **ba.kwargs)

Arguments which could be passed as part of either ``*args`` or ``**kwargs`` will be included only in the ``BoundArguments.args`` attribute.
Consider the following example:
::

    def test(a=1, b=2, c=3):
        pass

    sig = signature(test)
    ba = sig.bind(a=10, c=13)

    >>> ba.args
    (10,)

    >>> ba.kwargs
    {'c': 13}


Implementation
==============

The implementation adds a new function ``signature()`` to the ``inspect`` module. The function is the preferred way of getting a ``Signature`` for a callable object.

The function implements the following algorithm:

- If the object is not callable - raise a TypeError

- If the object has a ``__signature__`` attribute and if it
  is not ``None`` - return it

- If it has a ``__wrapped__`` attribute, return
  ``signature(object.__wrapped__)``

- If the object is an instance of ``FunctionType``, construct
  and return a new ``Signature`` for it

- If the object is a method, construct and return a new ``Signature``
  object, with its first parameter (usually ``self`` or ``cls``)
  removed

- If the object is an instance of ``functools.partial``, construct
  a new ``Signature`` from its ``partial.func`` attribute, and
  account for already bound ``partial.args`` and ``partial.kwargs``

- If the object is a class or metaclass:

  - If the object's type has a ``__call__`` method defined in
    its MRO, return a Signature for it

  - If the object has a ``__new__`` method defined in its MRO,
    return a Signature object for it

  - If the object has a ``__init__`` method defined in its MRO,
    return a Signature object for it

- Return ``signature(object.__call__)``

Note that the ``Signature`` object is created in a lazy manner, and is not automatically cached. However, the user can manually cache a Signature by storing it in the ``__signature__`` attribute.

An implementation for Python 3.3 can be found at [#impl]_. The python issue tracking the patch is [#issue]_.


Design Considerations
=====================

No implicit caching of Signature objects
----------------------------------------

The first PEP design had a provision for implicit caching of ``Signature`` objects in the ``inspect.signature()`` function. However, this has the following downsides:

* If the ``Signature`` object is cached then any changes to the
  function it describes will not be reflected in it. However, if
  caching is needed, it can always be done manually and explicitly

* It is better to reserve the ``__signature__`` attribute for the
  cases when there is a need to explicitly set a ``Signature``
  object that is different from the actual one

Some functions may not be introspectable
----------------------------------------

Some functions may not be introspectable in certain implementations of Python. For example, in CPython, built-in functions defined in C provide no metadata about their arguments. Adding support for them is out of scope for this PEP.

Signature and Parameter equivalence
-----------------------------------

We assume that parameter names have semantic significance--two signatures are equal only when their corresponding parameters are equal and have the exact same names. Users who want looser equivalence tests, perhaps ignoring names of VAR_KEYWORD or VAR_POSITIONAL parameters, will need to implement those themselves.
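[As an aside for readers: a minimal sketch of such a looser comparison, assuming the ``inspect.signature()`` API described above. This snippet is illustrative only and is not part of the PEP:]

    from inspect import signature

    def loose_equal(f, g):
        # Compare two callables' signatures, ignoring the *names* of
        # VAR_POSITIONAL (*args) and VAR_KEYWORD (**kwargs) parameters.
        pa = list(signature(f).parameters.values())
        pb = list(signature(g).parameters.values())
        if len(pa) != len(pb):
            return False
        for a, b in zip(pa, pb):
            if a.kind != b.kind:
                return False
            # Outside the VAR_* kinds, fall back to full Parameter
            # equality (name, kind, default, annotation).
            if a.kind not in (a.VAR_POSITIONAL, a.VAR_KEYWORD) and a != b:
                return False
        return True

    def f(a, b=1, *args, **kw): pass
    def g(a, b=1, *rest, **options): pass

    assert loose_equal(f, g)          # *args/**kwargs names are ignored
    assert not loose_equal(f, lambda a, c=1, *args, **kw: None)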
Examples
========

Visualizing Callable Objects' Signature
---------------------------------------

Let's define some classes and functions:

::

    from inspect import signature
    from functools import partial, wraps

    class FooMeta(type):
        def __new__(mcls, name, bases, dct, *, bar:bool=False):
            return super().__new__(mcls, name, bases, dct)

        def __init__(cls, name, bases, dct, **kwargs):
            return super().__init__(name, bases, dct)

    class Foo(metaclass=FooMeta):
        def __init__(self, spam:int=42):
            self.spam = spam

        def __call__(self, a, b, *, c) -> tuple:
            return a, b, c

        @classmethod
        def spam(cls, a):
            return a

    def shared_vars(*shared_args):
        """Decorator factory that defines shared variables that are
        passed to every invocation of the function"""

        def decorator(f):
            @wraps(f)
            def wrapper(*args, **kwds):
                full_args = shared_args + args
                return f(*full_args, **kwds)

            # Override signature
            sig = signature(f)
            sig = sig.replace(tuple(sig.parameters.values())[1:])
            wrapper.__signature__ = sig
            return wrapper
        return decorator

    @shared_vars({})
    def example(_state, a, b, c):
        return _state, a, b, c

    def format_signature(obj):
        return str(signature(obj))

Now, in the python REPL:

::

    >>> format_signature(FooMeta)
    '(name, bases, dct, *, bar:bool=False)'

    >>> format_signature(Foo)
    '(spam:int=42)'

    >>> format_signature(Foo.__call__)
    '(self, a, b, *, c) -> tuple'

    >>> format_signature(Foo().__call__)
    '(a, b, *, c) -> tuple'

    >>> format_signature(Foo.spam)
    '(a)'

    >>> format_signature(partial(Foo().__call__, 1, c=3))
    '(b, *, c=3) -> tuple'

    >>> format_signature(partial(partial(Foo().__call__, 1, c=3), 2, c=20))
    '(*, c=20) -> tuple'

    >>> format_signature(example)
    '(a, b, c)'

    >>> format_signature(partial(example, 1, 2))
    '(c)'

    >>> format_signature(partial(partial(example, 1, b=2), c=3))
    '(b=2, c=3)'


Annotation Checker
------------------

::

    import inspect
    import functools

    def checktypes(func):
        '''Decorator to verify arguments and return types

        Example:

            >>> @checktypes
            ... def test(a:int, b:str) -> int:
            ...     return int(a * b)

            >>> test(10, '1')
            1111111111

            >>> test(10, 1)
            Traceback (most recent call last):
              ...
            ValueError: foo: wrong type of 'b' argument, 'str' expected, got 'int'
        '''

        sig = inspect.signature(func)

        types = {}
        for param in sig.parameters.values():
            # Iterate through function's parameters and build the list of
            # arguments types
            try:
                type_ = param.annotation
            except AttributeError:
                continue
            else:
                if not inspect.isclass(type_):
                    # Not a type, skip it
                    continue

                types[param.name] = type_

                # If the argument has a type specified, let's check that its
                # default value (if present) conforms with the type.
                try:
                    default = param.default
                except AttributeError:
                    continue
                else:
                    if not isinstance(default, type_):
                        raise ValueError("{func}: wrong type of a default value for {arg!r}". \
                                         format(func=func.__qualname__, arg=param.name))

        def check_type(sig, arg_name, arg_type, arg_value):
            # Internal function that encapsulates arguments type checking
            if not isinstance(arg_value, arg_type):
                raise ValueError("{func}: wrong type of {arg!r} argument, " \
                                 "{exp!r} expected, got {got!r}". \
                                 format(func=func.__qualname__, arg=arg_name,
                                        exp=arg_type.__name__,
                                        got=type(arg_value).__name__))

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # Let's bind the arguments
            ba = sig.bind(*args, **kwargs)
            for arg_name, arg in ba.arguments.items():
                # And iterate through the bound arguments
                try:
                    type_ = types[arg_name]
                except KeyError:
                    continue
                else:
                    # OK, we have a type for the argument, lets get the
                    # corresponding parameter description from the
                    # signature object
                    param = sig.parameters[arg_name]
                    if param.kind == param.VAR_POSITIONAL:
                        # If this parameter is a variable-argument
                        # parameter, then we need to check each of its
                        # values
                        for value in arg:
                            check_type(sig, arg_name, type_, value)
                    elif param.kind == param.VAR_KEYWORD:
                        # If this parameter is a variable-keyword-argument
                        # parameter:
                        for subname, value in arg.items():
                            check_type(sig, arg_name + ':' + subname, type_, value)
                    else:
                        # And, finally, if this parameter is a regular one:
                        check_type(sig, arg_name, type_, arg)

            result = func(*ba.args, **ba.kwargs)

            # The last bit - let's check that the result is correct
            try:
                return_type = sig.return_annotation
            except AttributeError:
                # Looks like we don't have any restriction on the
                # return type
                pass
            else:
                if isinstance(return_type, type) and not isinstance(result, return_type):
                    raise ValueError('{func}: wrong return type, {exp} expected, got {got}'. \
                                     format(func=func.__qualname__,
                                            exp=return_type.__name__,
                                            got=type(result).__name__))

            return result

        return wrapper


References
==========

.. [#impl] pep362 branch (https://bitbucket.org/1st1/cpython/overview)
.. [#issue] issue 15008 (http://bugs.python.org/issue15008)


Copyright
=========

This document has been placed in the public domain.

..
   Local Variables:
   mode: indented-text
   indent-tabs-mode: nil
   sentence-end-double-space: t
   fill-column: 70
   coding: utf-8
   End:

From yselivanov.ml at gmail.com  Thu Jun 21 19:31:13 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Thu, 21 Jun 2012 13:31:13 -0400
Subject: [Python-Dev] [Python-checkins] peps: The latest changes from Yury Selivanov. I can almost taste the acceptance!
In-Reply-To:
References:
Message-ID: <8CB4F276-1F49-472E-92E6-25D48502052C@gmail.com>

On 2012-06-21, at 11:34 AM, Eric Snow wrote:
> On Thu, Jun 21, 2012 at 2:44 AM, larry.hastings wrote:
>> http://hg.python.org/peps/rev/1edf1cecae7d
>> changeset:   4472:1edf1cecae7d
>> user:        Larry Hastings
>> date:        Thu Jun 21 01:44:15 2012 -0700
>> summary:
>>   The latest changes from Yury Selivanov. I can almost taste the acceptance!
>>
>> files:
>>   pep-0362.txt |  159 +++++++++++++++++++++++++++++++-------
>>   1 files changed, 128 insertions(+), 31 deletions(-)
>>
>> diff --git a/pep-0362.txt b/pep-0362.txt
>> --- a/pep-0362.txt
>> +++ b/pep-0362.txt
>> @@ -42,23 +42,58 @@
>>  A Signature object has the following public attributes and methods:
>>
>>  * return_annotation : object
>> -    The annotation for the return type of the function if specified.
>> -    If the function has no annotation for its return type, this
>> -    attribute is not set.
>> +    The "return" annotation for the function. If the function
>> +    has no "return" annotation, this attribute is not set.
>> +
>>  * parameters : OrderedDict
>>     An ordered mapping of parameters' names to the corresponding
>> -    Parameter objects (keyword-only arguments are in the same order
>> -    as listed in ``code.co_varnames``).
>> +    Parameter objects.
>> +
>>  * bind(\*args, \*\*kwargs) -> BoundArguments
>>     Creates a mapping from positional and keyword arguments to
>>     parameters. Raises a ``TypeError`` if the passed arguments do
>>     not match the signature.
>> +
>>  * bind_partial(\*args, \*\*kwargs) -> BoundArguments
>>     Works the same way as ``bind()``, but allows the omission
>>     of some required arguments (mimics ``functools.partial``
>>     behavior.) Raises a ``TypeError`` if the passed arguments do
>>     not match the signature.
>>
>> +* replace(parameters, \*, return_annotation) -> Signature

> Shouldn't it be something like this:
>
>     * replace_(*parameters, [return_annotation]) -> Signature
>
> Or is parameters supposed to be a dict/OrderedDict of replacements/additions?

No, it's a regular list. I'd keep the 'parameters' argument plain due to the usual use-cases (see the tests).

>> +    Creates a new Signature instance based on the instance
>> +    ``replace`` was invoked on. It is possible to pass different
>> +    ``parameters`` and/or ``return_annotation`` to override the
>> +    corresponding properties of the base signature. To remove
>> +    ``return_annotation`` from the copied ``Signature``, pass in
>> +    ``Signature.empty``.

> Can you likewise remove parameters this way?

No, you have to create a new list (keep it simple?)

>> +
>> +Signature objects are immutable. Use ``Signature.replace()`` to
>> +make a modified copy:
>> +::
>> +
>> +    >>> sig = signature(foo)
>> +    >>> new_sig = sig.replace(return_annotation="new return annotation")
>> +    >>> new_sig is not sig
>> +    True
>> +    >>> new_sig.return_annotation == sig.return_annotation
>> +    True

> Should be False here, right?

There is a new version of PEP checked-in where this is fixed.

>> +    >>> new_sig.parameters == sig.parameters
>> +    True

> An example of replacing parameters would also be good here.
>
>> +
>> +There are two ways to instantiate a Signature class:
>> +
>> +* Signature(parameters, *, return_annotation)

> Same here as with Signature.replace().

>> +    Default Signature constructor. Accepts an optional sequence
>> +    of ``Parameter`` objects, and an optional ``return_annotation``.
>> +    Parameters sequence is validated to check that there are no
>> +    parameters with duplicate names, and that the parameters
>> +    are in the right order, i.e. positional-only first, then
>> +    positional-or-keyword, etc.
>> +* Signature.from_function(function)
>> +    Returns a Signature object reflecting the signature of the
>> +    function passed in.
>> +
>>  It's possible to test Signatures for equality. Two signatures are
>>  equal when their parameters are equal, their positional and
>>  positional-only parameters appear in the same order, and they
>> @@ -67,9 +102,14 @@
>>  Changes to the Signature object, or to any of its data members,
>>  do not affect the function itself.
>>
>> -Signature also implements ``__str__`` and ``__copy__`` methods.
>> -The latter creates a shallow copy of Signature, with all Parameter
>> -objects copied as well.
>> +Signature also implements ``__str__``:
>> +::
>> +
>> +    >>> str(Signature.from_function((lambda *args: None)))
>> +    '(*args)'
>> +
>> +    >>> str(Signature())
>> +    '()'
>>
>>
>>  Parameter Object
>>  ================
>> @@ -80,20 +120,22 @@
>>  propose a rich Parameter object designed to represent any possible
>>  function parameter.
>>
>> -The structure of the Parameter object is:
>> +A Parameter object has the following public attributes and methods:
>>
>>  * name : str
>> -    The name of the parameter as a string.
>> +    The name of the parameter as a string. Must be a valid
>> +    python identifier name (with the exception of ``POSITIONAL_ONLY``
>> +    parameters, which can have it set to ``None``.)
>>
>>  * default : object
>> -    The default value for the parameter, if specified. If the
>> -    parameter has no default value, this attribute is not set.
>> +    The default value for the parameter. If the parameter has no
>> +    default value, this attribute is not set.
>>
>>  * annotation : object
>> -    The annotation for the parameter if specified. If the
>> -    parameter has no annotation, this attribute is not set.
>> +    The annotation for the parameter. If the parameter has no
>> +    annotation, this attribute is not set.
>>
>> -* kind : str
>> +* kind
>>     Describes how argument values are bound to the parameter.
>>     Possible values:
>>
>> @@ -101,7 +143,7 @@
>>        as a positional argument.
>>
>>        Python has no explicit syntax for defining positional-only
>> -      parameters, but many builtin and extension module functions
>> +      parameters, but many built-in and extension module functions
>>        (especially those that accept only one or two parameters)
>>        accept them.
>>
>> @@ -124,9 +166,30 @@
>>        that aren't bound to any other parameter. This corresponds
>>        to a "\*\*kwds" parameter in a Python function definition.
>>
>> +* replace(\*, name, kind, default, annotation) -> Parameter
>> +    Creates a new Parameter instance based on the instance
>> +    ``replaced`` was invoked on. To override a Parameter
>> +    attribute, pass the corresponding argument. To remove
>> +    an attribute from a ``Parameter``, pass ``Parameter.empty``.
>> +
>> +
>>  Two parameters are equal when they have equal names, kinds, defaults,
>>  and annotations.
>>
>> +Parameter objects are immutable. Instead of modifying a Parameter object,
>> +you can use ``Parameter.replace()`` to create a modified copy like so:
>> +::
>> +
>> +    >>> param = Parameter('foo', Parameter.KEYWORD_ONLY, default=42)
>> +    >>> str(param)
>> +    'foo=42'
>> +
>> +    >>> str(param.replace())
>> +    'foo=42'
>> +
>> +    >>> str(param.replace(default=Parameter.empty, annotation='spam'))
>> +    "foo:'spam'"
>>
>>  BoundArguments Object
>>  =====================
>> @@ -138,7 +201,8 @@
>>
>>  * arguments : OrderedDict
>>     An ordered, mutable mapping of parameters' names to arguments' values.
>> -    Does not contain arguments' default values.
>> +    Contains only explicitly bound arguments. Arguments for
>> +    which ``bind()`` relied on a default value are skipped.
>>  * args : tuple
>>     Tuple of positional arguments values. Dynamically computed from
>>     the 'arguments' attribute.
>> @@ -159,6 +223,23 @@
>>     ba = sig.bind(10, b=20)
>>     test(*ba.args, **ba.kwargs)
>>
>> +Arguments which could be passed as part of either ``*args`` or ``**kwargs``
>> +will be included only in the ``BoundArguments.args`` attribute. Consider the

> Why wouldn't the kwargs go into BoundArguments.kwargs?

'BoundArguments.kwargs' is computed dynamically. You can alter 'BoundArguments.arguments' and you'll get a new 'BoundArguments.kwargs'. There is no point in sticking to the original 'kwargs' with which 'bind()' was invoked.
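[A small illustration of that point -- not from the thread; it assumes the bind() semantics described in the PEP:]

    from inspect import signature

    def test(a=1, b=2, c=3):
        pass

    ba = signature(test).bind(a=10, c=13)
    print(ba.args, ba.kwargs)   # (10,) {'c': 13} -- 'b' was never bound

    # 'arguments' is the source of truth; args/kwargs are recomputed views.
    ba.arguments['b'] = 99
    print(ba.args, ba.kwargs)   # (10, 99, 13) {} -- all three values can
                                # now be passed positionally, so kwargs
                                # comes back empty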
>> +following example:
>> +::
>> +
>> +    def test(a=1, b=2, c=3):
>> +        pass
>> +
>> +    sig = signature(test)
>> +    ba = sig.bind(a=10, c=13)
>> +
>> +    >>> ba.args
>> +    (10,)
>> +
>> +    >>> ba.kwargs:
>> +    {'c': 13}
>> +
>>
>>  Implementation
>>  ==============
>> @@ -172,7 +253,7 @@
>>  - If the object is not callable - raise a TypeError
>>
>>  - If the object has a ``__signature__`` attribute and if it
>> -   is not ``None`` - return a shallow copy of it
>> +   is not ``None`` - return it
>>
>>  - If it has a ``__wrapped__`` attribute, return
>>    ``signature(object.__wrapped__)``
>> @@ -180,12 +261,9 @@
>>  - If the object is a an instance of ``FunctionType`` construct

> s/``FunctionType`` construct/``FunctionType``, construct/

>>    and return a new ``Signature`` for it
>>
>> -  - If the object is a method or a classmethod, construct and return
>> -    a new ``Signature`` object, with its first parameter (usually
>> -    ``self`` or ``cls``) removed
>> -
>> -  - If the object is a staticmethod, construct and return
>> -    a new ``Signature`` object
>> +  - If the object is a method, construct and return a new ``Signature``
>> +    object, with its first parameter (usually ``self`` or ``cls``)
>> +    removed

> It may be worth explicitly clarifying that it refers to bound methods
> (and classmethods) here.

OK. classmethods are descriptors, and all they do is just expose the BoundMethodType (that is bound to the class dynamically). So, the 'methods' -> 'bound methods' change should be OK?

> Also, inspect.getfullargspec() doesn't strip out the self/cls. Would
> it be okay to store that implicit first argument (self/cls) on the
> Signature object somehow? Explicit is better than implicit. It's
> certainly a very special case: the only implicit (and unavoidable)
> arguments of any kind of callable. If the self were stored on the
> Signature, I'd also expect that Signature.replace() would leave it out
> (as would any other copy mechanism).

Signature represents the calling signature, i.e. you can look at it and see what arguments are necessary to call that thing. 'self' for methods simply doesn't exist:

    class Foo:
        def foo(self, a):
            pass

The Signature for Foo.foo will have the 'self' argument first, whereas the Signature for Foo().foo will have just 'a'. And that's the beauty of the new API; that's why we can have signatures for partials etc.

>> -  If the object is an instance of ``functools.partial``, construct
>>    a new ``Signature`` from its ``partial.func`` attribute, and
>> @@ -196,15 +274,15 @@
>>    - If the object's type has a ``__call__`` method defined in
>>      its MRO, return a Signature for it
>>
>> -  - If the object has a ``__new__`` method defined in its class,
>> +  - If the object has a ``__new__`` method defined in its MRO,
>>      return a Signature object for it
>>
>> -  - If the object has a ``__init__`` method defined in its class,
>> +  - If the object has a ``__init__`` method defined in its MRO,
>>      return a Signature object for it
>>
>>  - Return ``signature(object.__call__)``
>>
>> -Note, that the ``Signature`` object is created in a lazy manner, and
>> +Note that the ``Signature`` object is created in a lazy manner, and
>>  is not automatically cached. If, however, the Signature object was
>>  explicitly cached by the user, ``signature()`` returns a new shallow copy
>>  of it on each invocation.
>> @@ -236,11 +314,21 @@
>>  ----------------------------------------
>>
>>  Some functions may not be introspectable in certain implementations of
>> -Python. For example, in CPython, built-in functions defined in C provide
>>  no metadata about their arguments. Adding support for them is out of
>>  scope for this PEP.
>>
>> +Signature and Parameter equivalence
>> +-----------------------------------
>> +
>> +We assume that parameter names have semantic significance--two
>> +signatures are equal only when their corresponding parameters have
>> +the exact same names. Users who want looser equivalence tests, perhaps
>> +ignoring names of VAR_KEYWORD or VAR_POSITIONAL parameters, will
>> +need to implement those themselves.
>> +
>> +
>>  Examples
>>  ========
>>
>> @@ -270,6 +358,10 @@
>>      def __call__(self, a, b, *, c) -> tuple:
>>          return a, b, c
>>
>> +    @classmethod
>> +    def spam(cls, a):
>> +        return a
>> +
>>
>>  def shared_vars(*shared_args):
>>      """Decorator factory that defines shared variables that are
>> @@ -280,10 +372,12 @@
>>          def wrapper(*args, **kwds):
>>              full_args = shared_args + args
>>              return f(*full_args, **kwds)
>> +
>>          # Override signature
>> -        sig = wrapper.__signature__ = signature(f)
>> -        for __ in shared_args:
>> -            sig.parameters.popitem(last=False)
>> +        sig = signature(f)
>> +        sig = sig.replace(tuple(sig.parameters.values())[1:])
>> +        wrapper.__signature__ = sig
>> +
>>          return wrapper
>>      return decorator
>>
>> @@ -313,6 +407,9 @@
>>  >>> format_signature(Foo().__call__)
>>  '(a, b, *, c) -> tuple'
>>
>> +>>> format_signature(Foo.spam)
>> +'(a)'
>> +
>>  >>> format_signature(partial(Foo().__call__, 1, c=3))
>>  '(b, *, c=3) -> tuple'

> I'm really impressed by the great work on this and how well the PEP
> process has been working here. This is a great addition to Python!

Thank you!

- Yury

From aclark at aclark.net  Thu Jun 21 19:43:40 2012
From: aclark at aclark.net (Alex Clark)
Date: Thu, 21 Jun 2012 13:43:40 -0400
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To: <4FE357D8.1090006@ziade.org>
References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org>
	<20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net>
	<4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com>
	<4FE357D8.1090006@ziade.org>
Message-ID:

Hi,

On 6/21/12 1:20 PM, Tarek Ziadé wrote:
> On 6/21/12 6:44 PM, Chris McDonough wrote:
>>> Yes. At the very least, there will be updated development snapshots
>>> (which are what buildout uses anyway).
>>>
>>> (Official releases are in a bit of a weird holding pattern.
>>> distribute's versioning scheme leads to potential confusion: if I
>>> release e.g. 0.6.1, then it sounds like it's a lesser version than
>>> whatever distribute is up to now. OTOH, releasing a later version
>>> number than distribute implies that I'm supporting their feature
>>> enhancements, and I really don't want to add new features to 0.6... but
>>> don't have time right now to clean up all the stuff I started in the 0.7
>>> line either, since I've been *hoping* that the work on packaging would
>>> make 0.7 unnecessary. And let's not even get started on the part where
>>> system-installed copies of distribute can prevent people from
>>> downloading or installing setuptools in the first place.)
>>
>> Welp, I don't want to get in the middle of that whole mess. But maybe
>> the distribute folks would be kind enough to do a major version bump
>> in their next release; e.g. 1.67 instead of 0.67. That said, I don't
>> think anyone would be confused by overlapping version numbers between
>> the two projects.
> Oh yeah no problem, if Philip backports all the things we've done like
> Py3 compat, and blesses more people to maintain setuptools, we can even
> discontinue distribute !
>
> If not, I think you are just joking here -- we don't want to go
> back into the lock situation we've suffered for many years, where PJE is
> the only maintainer, then suddenly disappears for a year, telling us no
> one that is willing to maintain setuptools is able to do so. (according
> to him)
>
>> It's known that they have been diverging for a while.
>
> Yeah, the biggest difference is Py3 compat; other than that, afaik I
> don't think any API has been removed or modified.
>
> In my opinion, distribute is the only project that should go forward
> since it's actively maintained and does not suffer from the bus factor.

+1.

I can't help but cringe when I read this (sorry, PJ Eby!): "Official releases are in a bit of a weird holding pattern." due to distribute. Weren't they in a bit of a weird holding pattern before distribute? Haven't they always been in a bit of a weird holding pattern?

Let's let setuptools be setuptools and distribute be distribute, i.e. as long as distribute exists, I don't care at all about setuptools' release schedule (c.f. PIL/Pillow), and I like it that way :-). If one day setuptools or packaging/distutils2 comes along and fixes everything, then distribute can cease to exist.

Alex

--
Alex Clark · http://pythonpackages.com

From pje at telecommunity.com  Thu Jun 21 19:49:02 2012
From: pje at telecommunity.com (PJ Eby)
Date: Thu, 21 Jun 2012 13:49:02 -0400
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To: <4FE357D8.1090006@ziade.org>
References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org>
	<20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net>
	<4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com>
	<4FE357D8.1090006@ziade.org>
Message-ID:

On Thu, Jun 21, 2012 at 1:20 PM, Tarek Ziadé wrote:
> telling us no one that is willing to maintain setuptools is able to do so.
> (according to him)

Perhaps there is some confusion or language barrier here: what I said at that time was that the only people who I already *knew* to be capable of taking on full responsibility for *continued development* of setuptools, were not available/interested in the job, to my knowledge.
And I expect that we would at this point agree that future *development* of setuptools is not something either of us are seeking. Rather, we should be seeking to develop tools that can properly supersede it. This is why I participated in Distutils-SIG discussion of the various packaging PEPs, and hope to see more of them there. -------------- next part -------------- An HTML attachment was scrubbed... URL: From chrism at plope.com Thu Jun 21 19:56:27 2012 From: chrism at plope.com (Chris McDonough) Date: Thu, 21 Jun 2012 13:56:27 -0400 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <4FE357D8.1090006@ziade.org> References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> Message-ID: <4FE3604B.3050606@plope.com> On 06/21/2012 01:20 PM, Tarek Ziad? wrote: > On 6/21/12 6:44 PM, Chris McDonough wrote: >> >>> >>> Yes. At the very least, there will be updated development snapshots >>> (which are what buildout uses anyway). >>> >>> (Official releases are in a bit of a weird holding pattern. >>> distribute's versioning scheme leads to potential confusion: if I >>> release e.g. 0.6.1, then it sounds like it's a lesser version than >>> whatever distribute is up to now. OTOH, releasing a later version >>> number than distribute implies that I'm supporting their feature >>> enhancements, and I really don't want to add new features to 0.6... but >>> don't have time right now to clean up all the stuff I started in the 0.7 >>> line either, since I've been *hoping* that the work on packaging would >>> make 0.7 unnecessary. And let's not even get started on the part where >>> system-installed copies of distribute can prevent people from >>> downloading or installing setuptools in the first place.) >> >> >> Welp, I don't want to get in the middle of that whole mess. But maybe >> the distribute folks would be kind enough to do a major version bump >> in their next release; e.g. 1.67 instead of 0.67. That said, I don't >> think anyone would be confused by overlapping version numbers between >> the two projects. > Oh yeah no problem, if Philip backports all the things we've done like > Py3 compat, and bless more people to maintain setuptools, we can even > discontinue distribute ! > > If not, I think you are just being joking here -- we don't want to go > back into the lock situation we've suffered for many years were PJE is > the only maintainer then suddenly disappears for a year, telling us no > one that is willing to maintain setuptools is able to do so. (according > to him) > > >> It's known that they have been diverging for a while. > Yeah the biggest difference is Py3 compat, other than that afaik I don't > think any API has been removed or modified. > > > In my opinion, distribute is the only project that should go forward > since it's actively maintained and does not suffer from the bus factor. I'm not too interested in the drama/history of the fork situation so I don't care whether setuptools has the fix or distribute has it or both have it, but being able to point at some package which doesn't prevent folks from overriding sys.path ordering using PYTHONPATH would be a good thing. 
- C

From pje at telecommunity.com  Thu Jun 21 20:02:06 2012
From: pje at telecommunity.com (PJ Eby)
Date: Thu, 21 Jun 2012 14:02:06 -0400
Subject: [Python-Dev] import too slow on NFS based systems
In-Reply-To:
References: <20120621123337.430395c4@pitrou.net>
Message-ID:

On Thu, Jun 21, 2012 at 10:08 AM, Daniel Braniss wrote:
>> On Thu, 21 Jun 2012 13:17:01 +0300
>> Daniel Braniss wrote:
>>> Hi,
>>> when lib/python/site-packages/ is accessed via NFS, open/stat/access is
>>> very expensive/slow.
>>>
>>> A simple solution is to use an in-memory directory search/hash, so I was
>>> wondering if this has been considered in the past; if not, and I come
>>> with a working solution for Unix (at least Linux/FreeBSD), will it be
>>> considered?
>>
>> There is such a thing in Python 3.3, although some stat() calls are
>> still necessary to know whether the directory caches are fresh.
>> Can you give it a try and provide some feedback?
>
> WOW!
> with a sample python program:
>
>              stats    open
>    in 2.7:    2736    9037
>    in 3.3:     288      57
>
> now I have to fix my 2.7 to work with 3.3 :-)
>
> any chance that this can be backported to 2.7?

As Antoine says, not in the official release. You can, however, speed things up substantially in 2.x by zipping the standard library and placing it in the location given in the default sys.path, e.g.:

    # python2.7
    Python 2.7 (r27:82500, May 5 2011, 11:50:25)
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import sys
    >>> [p for p in sys.path if p.endswith('.zip')]
    ['/usr/lib/python27.zip']

If you include a compiled 'sitecustomize.py' in this zipfile, you would also be able to implement a caching importer based on the default one in pkgutil, to take up the rest of the slack. I've previously posted sketches of such importers; they're not that complicated to implement. It's just that if you don't *also* zip up the standard library, your raw interpreter start time won't get much benefit.

(To be clear, creating the zipfile will only speed up stdlib imports, nothing else; you'll need to implement a caching importer to get any benefit for site-packages imports.)

From tarek at ziade.org  Thu Jun 21 20:49:31 2012
From: tarek at ziade.org (Tarek Ziadé)
Date: Thu, 21 Jun 2012 20:49:31 +0200
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To:
References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org>
	<20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net>
	<4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com>
	<4FE357D8.1090006@ziade.org>
Message-ID: <4FE36CBB.2070205@ziade.org>

On 6/21/12 7:49 PM, PJ Eby wrote:
> On Thu, Jun 21, 2012 at 1:20 PM, Tarek Ziadé wrote:
>> telling us no one that is willing to maintain setuptools is able
>> to do so. (according to him)
>
> Perhaps there is some confusion or language barrier here: what I said
> at that time was that the only people who I already *knew* to be
> capable of taking on full responsibility for *continued development*
> of setuptools, were not available/interested in the job, to my knowledge.
>
> Specifically, the main people I had in mind were Ian Bicking and/or
> Jim Fulton, both of whom had developed extensions to or significant
> chunks of setuptools' functionality themselves, during which they
> demonstrated exemplary levels of understanding both of the code base
> and the wide variety of scenarios in which that code base had to
> operate. They also both demonstrated conservative, user-oriented
> design choices, that made me feel comfortable that they would not do
> anything to disrupt the existing user base, and that if they made any
> compatibility-breaking changes, they would do so in a way that avoided
> disruption. (I believe I also gave Philip Jenvey as an example of
> someone who, while not yet proven at that level, was someone I
> considered a good potential candidate as well.)
>
> This was not a commentary on anyone *else's* ability, only on my
> then-present *knowledge* of clearly-suitable persons and their
> availability, or lack thereof.

Yes, so I double-checked my sentence; I think we are in agreement: you would not let folks that *wanted* to maintain it back then do it. Sorry if this was not clear to you. But let's forget about this -- old story, I guess.

> I would guess that the pool of qualified persons is even larger now,
> but the point is moot: my issue was never about who would "maintain"
> setuptools, but who would *develop* it.
>
> And I expect that we would at this point agree that future
> *development* of setuptools is not something either of us is seeking.
> Rather, we should be seeking to develop tools that can properly
> supersede it.
>
> This is why I participated in the Distutils-SIG discussion of the
> various packaging PEPs, and hope to see more of them there.

I definitely agree, and I think your feedback on the various PEPs was very important. My point is just that we could (and, in my opinion, *should*) merge back setuptools and distribute, just to have a py3-enabled setuptools that is in maintenance mode, and work on the new stuff in packaging besides it.

The merged setuptools/distribute project could also be the place where we start to do the work to be compatible with the new standards.

That's my proposal.

Tarek

From tarek at ziade.org  Thu Jun 21 21:05:07 2012
From: tarek at ziade.org (Tarek Ziadé)
Date: Thu, 21 Jun 2012 21:05:07 +0200
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To: <4FE32F13.7030102@astro.uio.no>
References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org>
	<20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net>
	<4FE2E491.8010504@astro.uio.no> <4FE30BD6.1060706@ziade.org>
	<4FE31763.1020609@astro.uio.no> <4FE3204E.3060301@ziade.org>
	<4FE32F13.7030102@astro.uio.no>
Message-ID: <4FE37063.2030705@ziade.org>

On 6/21/12 4:26 PM, Dag Sverre Seljebotn wrote:
>> If you sit down and ask yourself: "what are the information a python
>> project should give me so I can compile its extensions ?" I think this
>> has nothing to do with the tools/implementations.
>
> I'm not sure if I understand. A project can't "give the information
> needed to build it". The build system is an integrated piece of the
> code and package itself. Making the build of library X work on some
> ugly HPC setup Y is part of the development of X.
> > To my mind a solution looks something like (and Bento is close to this): > > Step 1) "Some standard" to do configuration of a package (--prefix > and other what-goes-where options, what libraries to link with, what > compilers to use...) > > Step 2) Launch the package's custom build system (may be Unix shell > script or makefile in some cases (sometimes portability is not a > goal), may be a waf build) > > Step 3) "Some standard" to be able to cleanly > install/uninstall/upgrade the product of step 2) > > An attempt to do Step 2) in a major way in the packaging framework > itself, and have the package just "declare" its C extensions, would > not work. It's fine to have a way in the packaging framework that > works for trivial cases, but it's impossible to create something that > works for every case. I think we should, as you proposed, list a few projects w/ compilation needs -- from the simplest to the most complex, then see how a standard *description* could be used by any tool > >> >> And if we're able to write down in a PEP this, e.g. the information a >> compiler is looking for to do its job, then any tool out there waf, >> scons, cmake, jam, ant, etc, can do the job, no ? >> >>> >>> Anyway: I really don't want to start a flame-war here. So let's accept >>> up front that we likely won't agree here; I just wanted to clarify my >>> position. >> After 4 years I still don't understand what "we won't agree" means in >> this context. *NO ONE* ever ever came and told me: here's what I want a >> Python project to describe for its extensions. > > That's unfortunate. To be honest, it's probably partly because it's > easier to say what won't work than come with a constructive > suggestion. A lot of people (me included) just use > waf/cmake/autotools, and forget about making the code installable > through PyPI or any of the standard Python tools. Just because that > works *now* for us, but we don't have any good ideas for how to make > this into something that works on a wider scale. > > I think David is one of the few who has really dug into the matter and > tried to find something that can both do builds and work through > standard install mechanisms. I can't answer for why you haven't been > able to understand one another. > > It may also be an issue with how much one can constructively do on > mailing lists. Perhaps the only route forward is to bring people > together in person and walk distutils2 people through some hairy > scientific HPC builds (and vice versa). Like version schemes, I think it's fine if you guys have a more complex system to build software. But there should be a way to share a common standard for compilation, even if people who use distutils2 or xxx are just doing the dumbest things, like simple C lib compilation. > >> Just "we won't agree" or "distutils sucks" :) >> >> >> Gosh I hope we will overcome this lock one day, and move forward :D > > Well, me too. The other thing is, the folks in distutils2 and myself have zero knowledge about compilers. That's why we got very frustrated not to see people with that knowledge come and help us in this area.
So, I reiterate my proposal, and it could also be expressed like this: 1/ David writes a PEP where he describes how Bento interacts with a project -- metadata, description files, etc 2/ Someone from distutils2 completes the PEP by describing how setup.cfg works wrt Extensions 3/ we see if we can have a common standard even if it's a subset of bento capabilities > > Dag > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/ziade.tarek%40gmail.com From tarek at ziade.org Thu Jun 21 21:17:01 2012 From: tarek at ziade.org (=?ISO-8859-1?Q?Tarek_Ziad=E9?=) Date: Thu, 21 Jun 2012 21:17:01 +0200 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <4FE3604B.3050606@plope.com> References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> Message-ID: <4FE3732D.9090401@ziade.org> On 6/21/12 7:56 PM, Chris McDonough wrote: > ... >> In my opinion, distribute is the only project that should go forward >> since it's actively maintained and does not suffer from the bus factor. > Yeah the biggest difference is Py3 compat, other than that afaik I don't > think any API has been removed or modified. > > I'm not too interested in the drama/history of the fork situation You are the one currently adding drama by asking for a new setuptools release and saying distribute is diverging. > so I don't care whether setuptools has the fix or distribute has it or > both have it, but being able to point at some package which doesn't > prevent folks from overriding sys.path ordering using PYTHONPATH would > be a good thing. > It has to be in Distribute if we want it in most major Linux distros. And as I proposed to PJE, I think the best thing would be to have a single project code base, working with Py3 and receiving maintenance fixes, with several maintainers. Since it's clear we're not going to add features in any of the projects, I think we can safely trust a larger list of maintainers, and just keep the project working until the replacement is used. > - C > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/ziade.tarek%40gmail.com From p.f.moore at gmail.com Thu Jun 21 22:01:26 2012 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 21 Jun 2012 21:01:26 +0100 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <4FE3732D.9090401@ziade.org> References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> Message-ID: Can I take a step back and make a somewhat different point? Developer requirements are very relevant, sure. But the most important requirements are those of the end user. The person who simply wants to *use* a distribution couldn't care less how it was built, whether it uses setuptools, or whatever. End users should not need packaging tools on their machines.
At the moment, to install from source requires the tools the developer chooses to use for his convenience (distribute/setuptools, distutils2/packaging, bento) to be installed on the target machine. And binary installers are only normally available for code that needs a C extension, and in that case the developer's choice is still visible in terms of the binary format provided. I would argue that we should only put *end user* tools in the stdlib. - A unified package format, suitable for binaries, but also for pure Python code that wants to ship that way. - Installation management tools (download, install, remove, list, and dependency management) that handle the above package format - Maybe support in the package format and/or installation tools for managing "wrapper executables" for executable scripts in distributions Development tools like distutils2, distribute/setuptools, bento would *only* be needed on developer machines, and would be purely developer choice. They would all interact with end users via the stdlib-supported standard formats. They could live outside the stdlib, and developers could use whichever tool suited them. This is a radical idea in that it does not cater for the "zipped up development directory as a distribution format" mental model that current Python uses. That model could still work, but only if all the tools generated a stdlib-supported build definition (which could simply be a Python script that runs the various compile/copy commands, plus some compiler support classes in the stdlib) in the same way that lex/yacc generate C, and projects often distribute the generated C along with the grammar files. Legacy support in the form of distutils, converters from bdist_xxx formats to the new binary format, and maybe pip-style "hide the madness under a unified interface" tools could support this, either in the stdlib or as 3rd party tools. I realise this is probably too radical to happen, but at least, it might put the debate into context if people try to remember that end users, as well as package developers, are affected by this (and there are a lot more end users than package developers...). Paul. PS I know that setuptools includes some end-user aspects - multi-versioning, entry points and optional dependencies, for example. Maybe these are needed - personally, I have never had a need for any of these, so I'm not the best person to comment. From pje at telecommunity.com Thu Jun 21 22:34:45 2012 From: pje at telecommunity.com (PJ Eby) Date: Thu, 21 Jun 2012 16:34:45 -0400 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> Message-ID: On Thu, Jun 21, 2012 at 4:01 PM, Paul Moore wrote: > End users should not need packaging tools on their machines. > Well, unless they're developers. ;-) Sometimes, the "end user" is a developer making use of a library. Development tools like distutils2, distribute/setuptools, bento would > *only* be needed on developer machines, and would be purely developer > choice. They would all interact with end users via the > stdlib-supported standard formats. They could live outside the stdlib, > and developers could use whichever tool suited them. > AFAIK, this was the goal behind setup.cfg in packaging, and it's a goal I agree with. 
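To make the setup.cfg idea concrete, here is a rough sketch of the hook mechanism described below; the section names, the setup_hooks option, and the hook signature are recalled from the distutils2 documentation, and the project name is invented, so treat this as illustrative rather than authoritative:

    # Hypothetical distutils2-style setup hook. The wiring in setup.cfg
    # would look roughly like:
    #
    #   [global]
    #   setup_hooks = myproject.hooks.finalize
    #
    #   [metadata]
    #   name = myproject
    #   version = 0.1
    #
    # The hook receives the parsed configuration and may adjust it
    # before any command runs (e.g. to register generated files).
    def finalize(config):
        metadata = config['metadata']
        metadata['summary'] = 'Summary computed at build time'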
> This is a radical idea in that it does not cater for the "zipped up > development directory as a distribution format" mental model that > current Python uses. That model could still work, but only if all the > tools generated a stdlib-supported build definition Again, packaging's setup.cfg is, or should be, this. I think there are some technical challenges with the current state of setup.cfg, but AFAIK they aren't anything insurmountable. (Background: the general idea is that setup.cfg contains "hooks", which name Python callables to be invoked at various stages of the process. These hooks can dynamically add to the setup.cfg data, e.g. to list newly-built files, binaries, etc., as well as to do any actual building.) PS I know that setuptools includes some end-user aspects - > multi-versioning, entry points and optional dependencies, for example. > Maybe these are needed - personally, I have never had a need for any > of these, so I'm not the best person to comment. > Entry points are a developer tool, and cross-project co-ordination facility. They allow packages to advertise classes, modules, functions, etc. that other projects may wish to import and use in a programmatic way. For example, a web framework may say, "if you want to provide a page template file format, register an entry point under this naming convention, and we will automatically use it when a template has a matching file extension." So entry points are not really consumed by end users; libraries and frameworks use them as ways to dynamically co-ordinate with other installed libraries, plugins, etc. Optional dependencies ("extras"), OTOH, are for end-user convenience: they allow an author to suggest configurations that might be of interest. Without them, people have to do things like this: http://pypi.python.org/pypi/celery-with-couchdb in order to advertise what else should be installed. If Celery were instead to list its couchdb and SQLAlchemy requirements as "extras" in setup.py, then one could "easy_install celery[couchdb]" or "easy_install celery[sqla]" instead of needing to register separate project names on PyPI for each of these scenarios. As it happens, however, two of the most popular setuptools add-ons (pip and buildout) either did not or still do not support "extras", because they were not frequently used. Unfortunately, this meant that projects had to do things like setup dummy projects on PyPI, because the popular tools didn't support the scenario. In short, nobody's likely to mourn the passing of extras to any great degree. They're a nice idea, but hard to bootstrap into use due to the chicken-and-egg problem. If you don't know what they're for, you won't use them, and without common naming conventions (like mypackage[c_speedups] or mypackage[test_support]), nobody will get used to asking for them. I think at some point we will end up reinventing them, but essentially the challenge is that they are a generalized solution to a variety of small problems that are not individually very motivating to anybody. They were only motivating to me in the aggregate because I saw lots of individual people being bothered by their particular variation on the theme of auxiliary dependencies or recommended options. As for multi-versioning, it's pretty clearly a dead duck, a proof-of-concept that was very quickly obsoleted by buildout and virtualenv. Buildout is a better implementation of multi-versioning for actual scripts, and virtualenvs work fine for people who haven't yet discovered the joys of buildout. 
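For readers who have not used the two mechanisms PJ Eby describes above, here is a minimal setuptools sketch; the project, group, and entry-point names are invented for illustration:

    # setup.py -- illustrative names only
    from setuptools import setup

    setup(
        name='myframework-mako',
        version='0.1',
        py_modules=['myframework_mako'],
        # Advertise an object for other projects to discover at runtime:
        entry_points={
            'myframework.template_engines': [
                'mako = myframework_mako:MakoEngine',
            ],
        },
        # An "extra": installable on demand as myframework-mako[speedups]
        extras_require={
            'speedups': ['markupsafe'],
        },
    )

A consuming framework then iterates over whatever compatible plugins happen to be installed:

    from pkg_resources import iter_entry_points

    for entry_point in iter_entry_points('myframework.template_engines'):
        engine_class = entry_point.load()  # imports and returns MakoEngine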
(I'm a recent buildout convert, in case you can't tell. ;-) ) -------------- next part -------------- An HTML attachment was scrubbed... URL: From d.s.seljebotn at astro.uio.no Thu Jun 21 22:46:58 2012 From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn) Date: Thu, 21 Jun 2012 22:46:58 +0200 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <4FE37063.2030705@ziade.org> References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE2E491.8010504@astro.uio.no> <4FE30BD6.1060706@ziade.org> <4FE31763.1020609@astro.uio.no> <4FE3204E.3060301@ziade.org> <4FE32F13.7030102@astro.uio.no> <4FE37063.2030705@ziade.org> Message-ID: <4FE38842.4090308@astro.uio.no> On 06/21/2012 09:05 PM, Tarek Ziad? wrote: > On 6/21/12 4:26 PM, Dag Sverre Seljebotn wrote: >> >>> project should give me so I can compile its extensions ?" I think this >>> has nothing to do with the tools/implementations. >> If you sit down and ask your self: "what are the information a python >> >> I'm not sure if I understand. A project can't "give the information >> needed to build it". The build system is an integrated piece of the >> code and package itself. Making the build of library X work on some >> ugly HPC setup Y is part of the development of X. >> >> To my mind a solution looks something like (and Bento is close to this): >> >> Step 1) "Some standard" to do configuration of a package (--prefix and >> other what-goes-where options, what libraries to link with, what >> compilers to use...) >> >> Step 2) Launch the package's custom build system (may be Unix shell >> script or makefile in some cases (sometimes portability is not a >> goal), may be a waf build) >> >> Step 3) "Some standard" to be able to cleanly >> install/uninstall/upgrade the product of step 2) >> >> An attempt to do Step 2) in a major way in the packaging framework >> itself, and have the package just "declare" its C extensions, would >> not work. It's fine to have a way in the packaging framework that >> works for trivial cases, but it's impossible to create something that >> works for every case. > > I think we should, as you proposed, list a few projects w/ compilation > needs -- from the simplest to the more complex, then see how a standard > *description* could be used by any tool It's not clear to me what you mean by description. Package metadata, install information or description of what/how to build? I hope you don't mean the latter, that would be insane...it would effectively amount to creating a build tool that's both more elegant and more powerful than any option that's currently already out there. Assuming you mean the former, that's what David did to create Bento. Reading and understanding Bento and the design decisions going into it would be a better use of time than redoing a discussion, and would at least be a very good starting point. But anyway, some project types from simple to advanced: - Simple library using Cython + NumPy C API - Wrappers around HPC codes like mpi4py, petsc4py - NumPy - SciPy (uses Fortran compilers too) - Library using code generation, Cython, NumPy C API, Fortran 90 code, some performance tuning with CPU characteristics (instruction set, cache size, optimal loop structure) decided compile-time >>> And if we're able to write down in a PEP this, e.g. the information a >>> compiler is looking for to do its job, then any tool out there waf, >>> scons, cmake, jam, ant, etc, can do the job, no ? 
>>> >>> >>>> >>>> Anyway: I really don't want to start a flame-war here. So let's accept >>>> up front that we likely won't agree here; I just wanted to clarify my >>>> position. >>> After 4 years I still don't understand what "we won't agree" means in >>> this context. *NO ONE* ever ever came and told me : here's what I want a >>> Python project to describe for its extensions. >> >> That's unfortunate. To be honest, it's probably partly because it's >> easier to say what won't work than come with a constructive >> suggestion. A lot of people (me included) just use >> waf/cmake/autotools, and forget about making the code installable >> through PyPI or any of the standard Python tools. Just because that >> works *now* for us, but we don't have any good ideas for how to make >> this into something that works on a wider scale. >> >> I think David is one of the few who has really dug into the matter and >> tried to find something that can both do builds and work through >> standard install mechanisms. I can't answer for why you haven't been >> able to understand one another. >> >> It may also be an issue with how much one can constructively do on >> mailing lists. Perhaps the only route forward is to to bring people >> together in person and walk distutils2 people through some hairy >> scientific HPC builds (and vice versa). > > Like versions scheme, I think it's fine if you guys have a more complex > system to build software. But there should be a way to share a common > standard for complation, even if people that uses distutils2 or xxx, are > just doing the dumbest things, like simple C libs compilation. > >> >>> Just "we won't agree" or "distutils sucks" :) >>> >>> >>> Gosh I hope we will overcome this lock one day, and move forward :D >> >> Well, me too. > The other thing is, the folks in distutils2 and myself, have zero > knowledge about compilers. That's why we got very frustrated not to see > people with that knowledge come and help us in this area. Here's the flip side: If you have zero knowledge about compilers, it's going to be almost impossible to have a meaningful discussion about a compilation PEP. It's very hard to discuss standards unless everybody involved have the necessary prerequisite knowledge. You don't go discussing details of the Linux kernel without some solid C experience either. The necessary prerequisites in this case is not merely "knowledge of compilers". To avoid repeating mistakes of the past, the prerequisites for a meaningful discussion is years of hard-worn experience building software in various languages, on different platforms, using different build tools. Look, these problems are really hard to deal with. Myself I have experience with building 2-3 languages using 2-3 build tools on 2 platforms, and I consider myself a complete novice and usually decide to trust David's instincts over trying to make up an opinion of my own -- simply because I know he's got a lot more experience than I have. Theoretically it is possible to separate and isolate concerns so that one set of people discuss build integration and another set of people discuss installation. Problem is that all the problems tangle -- in particular when the starting point is distutils! That's why *sometimes*, not always, design by committee is the wrong approach, and one-man-shows is what brings technology forwards. 
> So, I reiterate my proposal, and it could also be expressed like this: > > 1/ David writes a PEP where he describes how Bento interacts with a > project -- metadata, description files, etc > 2/ Someone from distutils2 completes the PEP by describing how setup.cfg > works wrt Extensions > 3/ we see if we can have a common standard even if it's a subset of > bento capabilities bento isn't a build tool, it's a packaging tool, competing directly with distutils2. It can deal with simple distutils-like builds using a bundled build tool, and currently has integration with waf for complicated builds; integration with other build systems will presumably be added later as people need it (the main point is that bento is designed for it). Dag From tarek at ziade.org Thu Jun 21 23:04:37 2012 From: tarek at ziade.org (=?ISO-8859-1?Q?Tarek_Ziad=E9?=) Date: Thu, 21 Jun 2012 23:04:37 +0200 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <4FE38842.4090308@astro.uio.no> References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE2E491.8010504@astro.uio.no> <4FE30BD6.1060706@ziade.org> <4FE31763.1020609@astro.uio.no> <4FE3204E.3060301@ziade.org> <4FE32F13.7030102@astro.uio.no> <4FE37063.2030705@ziade.org> <4FE38842.4090308@astro.uio.no> Message-ID: <4FE38C65.7050905@ziade.org> On 6/21/12 10:46 PM, Dag Sverre Seljebotn wrote: ... >> I think we should, as you proposed, list a few projects w/ compilation >> needs -- from the simplest to the most complex, then see how a standard >> *description* could be used by any tool > > It's not clear to me what you mean by description. Package metadata, > install information or description of what/how to build? > > I hope you don't mean the latter, that would be insane...it would > effectively amount to creating a build tool that's both more elegant > and more powerful than any option that's currently already out there. > > Assuming you mean the former, that's what David did to create Bento. > Reading and understanding Bento and the design decisions going into it > would be a better use of time than redoing a discussion, and would at > least be a very good starting point. What I mean is: what would it take to use Bento (or another tool) as the compiler in a distutils-based project, without having to change the distutils metadata. > > But anyway, some project types from simple to advanced: > > - Simple library using Cython + NumPy C API > - Wrappers around HPC codes like mpi4py, petsc4py > - NumPy > - SciPy (uses Fortran compilers too) > - Library using code generation, Cython, NumPy C API, Fortran 90 > code, some performance tuning with CPU characteristics (instruction > set, cache size, optimal loop structure) decided compile-time I'd add: - A Distutils project with a few Extensions >> The other thing is, the folks in distutils2 and myself have zero >> knowledge about compilers. That's why we got very frustrated not to see >> people with that knowledge come and help us in this area. > > Here's the flip side: If you have zero knowledge about compilers, it's > going to be almost impossible to have a meaningful discussion about a > compilation PEP. It's very hard to discuss standards unless everybody > involved have the necessary prerequisite knowledge. You don't go > discussing details of the Linux kernel without some solid C experience > either. Consider me as the end user who wants to have his 2 C modules compiled in his Python project.
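For reference, the trivial case Tarek describes here is what plain distutils already expresses in a handful of lines (module and file names invented); the open question in the thread is what a tool-neutral description of exactly this information should look like:

    # setup.py
    from distutils.core import setup, Extension

    setup(
        name='example',
        version='1.0',
        ext_modules=[
            Extension('spam', sources=['src/spam.c']),
            Extension('eggs', sources=['src/eggs.c']),
        ],
    )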
> > The necessary prerequisites in this case is not merely "knowledge of > compilers". To avoid repeating mistakes of the past, the prerequisites > for a meaningful discussion is years of hard-worn experience building > software in various languages, on different platforms, using different > build tools. > > Look, these problems are really hard to deal with. Myself I have > experience with building 2-3 languages using 2-3 build tools on 2 > platforms, and I consider myself a complete novice and usually decide > to trust David's instincts over trying to make up an opinion of my own > -- simply because I know he's got a lot more experience than I have. > > Theoretically it is possible to separate and isolate concerns so that > one set of people discuss build integration and another set of people > discuss installation. Problem is that all the problems tangle -- in > particular when the starting point is distutils! > > That's why *sometimes*, not always, design by committee is the wrong > approach, and one-man-shows is what brings technology forwards. I am not saying this should be designed by a committee, but rather that if such a tool can be made compatible with a simple Distutils project, the guy behind this tool can probably help on a PEP with feedback from a larger audience than the sci community. What bugs me is to say that we live in two separate worlds and cannot build common pieces. This is not true. > >> So, I reiterate my proposal, and it could also be expressed like this: >> >> 1/ David writes a PEP where he describes how Bento interacts with a >> project -- metadata, description files, etc >> 2/ Someone from distutils2 completes the PEP by describing how setup.cfg >> works wrt Extensions >> 3/ we see if we can have a common standard even if it's a subset of >> bento capabilities > > bento isn't a build tool, it's a packaging tool, competing directly > with distutils2. It can deal with simple distutils-like builds using a > bundled build tool, and currently has integration with waf for > complicated builds; integration with other build systems will > presumably be added later as people need it (the main point is that > bento is designed for it). I am not interested in Bento-the-tool. I am interested in what such a tool needs from a project to use it => "It can deal with simple distutils-like builds using a bundled build tool" => If I understand this correctly, does that mean that Bento can build a distutils project with the distutils metadata? If this is the case, it means that there is a piece of functionality that translates Distutils metadata into something Bento deals with. That's the part I am interested in for interoperability.
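A very rough sketch of the translation Tarek is asking about might look like the following; the bento.info field layout is approximated from the Bento documentation, and the helper itself is hypothetical, not part of any released tool:

    # Hypothetical converter: distutils-style metadata -> bento.info text.
    def to_bento_info(metadata):
        return (
            'Name: %s\n'
            'Version: %s\n'
            '\n'
            'Library:\n'
            '    Packages: %s\n'
        ) % (metadata['name'], metadata['version'],
             ', '.join(metadata['packages']))

    print(to_bento_info({'name': 'example', 'version': '1.0',
                         'packages': ['example']}))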
> > Dag > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/ziade.tarek%40gmail.com From donald.stufft at gmail.com Thu Jun 21 23:38:26 2012 From: donald.stufft at gmail.com (Donald Stufft) Date: Thu, 21 Jun 2012 17:38:26 -0400 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> Message-ID: <8359F922B4E045BD819F7811725306D6@gmail.com> On Thursday, June 21, 2012 at 4:01 PM, Paul Moore wrote: > End users should not need packaging tools on their machines. > Sort of riffing on this idea, I cannot seem to find a specification for what a Python package actually is. Maybe the first effort should focus on this instead of arguing one implementation or another. As a packager: I should not (in general) care what tool (pip, pysetup, easy_install, buildout, whatever) is used to install my package; my package should just describe what to do to install itself. As an end user: I should not (in general) care what tool was used to create a package (setuptools, bento, distutils, whatever). My tool of choice should look at the package and perform the operations that the package says are needed for install. Ideally the package could have some basic primitives that are enough to tell the package installer tool what to do to install it; these primitives should be enough to cover the common cases (pure python modules at the very least, maybe additionally some C modules). Now as others have remarked it would be insane to attempt to do this in every case, as it would involve writing a build system that is more advanced than anything else existing, so a required primitive would be something that allows calling out to a specific, package-decided build system (waf, make, whatever) to handle the build configuration. The eventual end goal here is to turn a package from something that varies from implementation to implementation into a standardized format that any number of tools can build on top of. It would likely include some things defining where metadata MUST be defined. For instance, if metadata in setuptools was "compiled" down to a static file, and easy_install, pip et al. used that static file to install from instead of executing setup.py, then the end user would not have needed setuptools installed, and instead any number of tools could have been created that utilized that data. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Thu Jun 21 23:55:37 2012 From: cournape at gmail.com (David Cournapeau) Date: Thu, 21 Jun 2012 22:55:37 +0100 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <4FE38C65.7050905@ziade.org> References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE2E491.8010504@astro.uio.no> <4FE30BD6.1060706@ziade.org> <4FE31763.1020609@astro.uio.no> <4FE3204E.3060301@ziade.org> <4FE32F13.7030102@astro.uio.no> <4FE37063.2030705@ziade.org> <4FE38842.4090308@astro.uio.no> <4FE38C65.7050905@ziade.org> Message-ID: On Thu, Jun 21, 2012 at 10:04 PM, Tarek Ziadé
wrote: > On 6/21/12 10:46 PM, Dag Sverre Seljebotn wrote: > ... > > I think we should, as you proposed, list a few projects w/ compilation >>> needs -- from the simplest to the most complex, then see how a standard >>> *description* could be used by any tool >>> >> >> It's not clear to me what you mean by description. Package metadata, >> install information or description of what/how to build? >> >> I hope you don't mean the latter, that would be insane...it would >> effectively amount to creating a build tool that's both more elegant and >> more powerful than any option that's currently already out there. >> >> Assuming you mean the former, that's what David did to create Bento. >> Reading and understanding Bento and the design decisions going into it >> would be a better use of time than redoing a discussion, and would at least >> be a very good starting point. >> > > What I mean is: what would it take to use Bento (or another tool) as the > compiler in a distutils-based project, without having to change the > distutils metadata. I think there is a misunderstanding of what bento is: bento is not a compiler or anything like that. It is a set of libraries that work together to configure, build and install a python project. Concretely, in bento, there is: - a part that builds a package description (Distribution-like in distutils parlance) from a bento.info (a bit like setup.cfg) - a set of commands around this package description - a set of "backends" to e.g. use waf to build C extensions with full and automatic dependency analysis (rebuild this if this other thing is out of date), parallel builds and configuration. Bento scripts build numpy more efficiently and reliably while being 50 % shorter than our setup.py. - a small library to build a distutils-compatible Distribution so that you can write a 3-line setup.py that takes all its info from bento.info and allows pip to work. Now, you could produce a similar package description from the setup.cfg to be fed to bento, but I don't really see the point since, AFAIK, bento.info is strictly more powerful as a format than setup.cfg. Another key point is that the commands around this package description are almost entirely decoupled from each other: this is the hard part, and something that is not really possible to do with the current distutils design in an incremental way. - Commands don't know about each other, and dependencies between commands are *external* to commands. You say command "build" depends on command "configure", and those dependencies are resolved at runtime. This allows 3rd parties to insert new commands without interfering with each other. - options are registered and handled outside commands as well: each command can query any other command's options. I believe something similar is now available in distutils2, though. Bento allows adding arbitrary configure options to customize library directories (a la autoconf). - bento internally has an explicit "database" of built files, with associated categories, and the build command produces a build "manifest". The build manifest + the build tree completely define the input for the install and installer commands. The different binary installers use the same build manifest, and the build manifest is actually designed to allow lossless conversion between different installers (e.g. wininst <-> msi, egg <-> mpkg on mac, etc.). This is what allows in principle to use make, gyp, etc.
to produce this build manifest. > > "It can deal with simple distutils-like builds using a bundled build tool" > => If I understand this correctly, does that mean that Bento can build a > distutils project with the distutils metadata? > I think Dag meant that bento has a system where you can basically do:

    # setup.py
    from distutils.core import setup
    import bento.distutils
    bento.distutils.monkey_patch()
    setup()

and this setup.py will automatically build a distutils Distribution populated from bento.info. This allows a bento package to be installable with pip or anything that expects a setup.py. This allows for interoperability without having to depend on all the distutils issues. David -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Fri Jun 22 00:00:39 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 22 Jun 2012 00:00:39 +0200 Subject: [Python-Dev] Status of packaging in 3.3 References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE2E491.8010504@astro.uio.no> <4FE30BD6.1060706@ziade.org> <4FE31763.1020609@astro.uio.no> <4FE3204E.3060301@ziade.org> <4FE32F13.7030102@astro.uio.no> <4FE37063.2030705@ziade.org> <4FE38842.4090308@astro.uio.no> Message-ID: <20120622000039.3b8bff71@pitrou.net> On Thu, 21 Jun 2012 22:46:58 +0200 Dag Sverre Seljebotn wrote: > > The other thing is, the folks in distutils2 and myself have zero > > knowledge about compilers. That's why we got very frustrated not to see > > people with that knowledge come and help us in this area. > > Here's the flip side: If you have zero knowledge about compilers, it's > going to be almost impossible to have a meaningful discussion about a > compilation PEP. If a PEP is being discussed, even a packaging PEP, it involves all of python-dev, so Tarek and Éric not being knowledgeable in compilers is not a big problem. > The necessary prerequisites in this case is not merely "knowledge of > compilers". To avoid repeating mistakes of the past, the prerequisites > for a meaningful discussion is years of hard-worn experience building > software in various languages, on different platforms, using different > build tools. This is precisely the kind of knowledge that a PEP is aimed at distilling. Regards Antoine. From d.s.seljebotn at astro.uio.no Fri Jun 22 00:05:48 2012 From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn) Date: Fri, 22 Jun 2012 00:05:48 +0200 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <4FE38C65.7050905@ziade.org> References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE2E491.8010504@astro.uio.no> <4FE30BD6.1060706@ziade.org> <4FE31763.1020609@astro.uio.no> <4FE3204E.3060301@ziade.org> <4FE32F13.7030102@astro.uio.no> <4FE37063.2030705@ziade.org> <4FE38842.4090308@astro.uio.no> <4FE38C65.7050905@ziade.org> Message-ID: <4FE39ABC.2000103@astro.uio.no> On 06/21/2012 11:04 PM, Tarek Ziadé wrote: > On 6/21/12 10:46 PM, Dag Sverre Seljebotn wrote: > ... >>> I think we should, as you proposed, list a few projects w/ compilation >>> needs -- from the simplest to the most complex, then see how a standard >>> *description* could be used by any tool >> >> It's not clear to me what you mean by description. Package metadata, >> install information or description of what/how to build?
>> >> I hope you don't mean the latter, that would be insane...it would >> effectively amount to creating a build tool that's both more elegant >> and more powerful than any option that's currently already out there. >> >> Assuming you mean the former, that's what David did to create Bento. >> Reading and understanding Bento and the design decisions going into it >> would be a better use of time than redoing a discussion, and would at >> least be a very good starting point. > > What I mean is : what would it take to use Bento (or another tool) as > the compiler in a distutils-based project, without having to change the > distutils metadata. As for current distutils/setuptools/distribute metadata, the idea is you run the bento conversion utility to convert it to Bento metadata, then use Bento. Please read http://cournape.github.com/Bento/ There may be packages where this doesn't work and you'd need to tweak the results yourself though. >> Here's the flip side: If you have zero knowledge about compilers, it's >> going to be almost impossible to have a meaningful discussion about a >> compilation PEP. It's very hard to discuss standards unless everybody >> involved have the necessary prerequisite knowledge. You don't go >> discussing details of the Linux kernel without some solid C experience >> either. > Consider me as the end user that want to have his 2 C modules compiled > in their Python project. OK, so can I propose that you kill off distutils2 and use bento wholesale instead? Obviously not. So you're not just an end-user. That illusion would wear rather thin very quickly. >> >> The necessary prerequisites in this case is not merely "knowledge of >> compilers". To avoid repeating mistakes of the past, the prerequisites >> for a meaningful discussion is years of hard-worn experience building >> software in various languages, on different platforms, using different >> build tools. >> >> Look, these problems are really hard to deal with. Myself I have >> experience with building 2-3 languages using 2-3 build tools on 2 >> platforms, and I consider myself a complete novice and usually decide >> to trust David's instincts over trying to make up an opinion of my own >> -- simply because I know he's got a lot more experience than I have. >> >> Theoretically it is possible to separate and isolate concerns so that >> one set of people discuss build integration and another set of people >> discuss installation. Problem is that all the problems tangle -- in >> particular when the starting point is distutils! >> >> That's why *sometimes*, not always, design by committee is the wrong >> approach, and one-man-shows is what brings technology forwards. > > I am not saying this should be designed by a commitee, but rather - if > such a tool can be made compatible with simple Distutils project, the > guy behind this tool can probably help on a PEP with feedback from a > larger audience than the sci community. > > What bugs me is to say that we live in two separate worlds and cannot > build common pieces. This is not True. I'm not saying it's *impossible* to build common pieces, I'm suggesting that it's not cost-effective in terms of man-hours going into it. And the problem isn't technical as much as social and the mix of people and skill sets involved. But David really made that decision for me when he left distutils-sig, I'm not going to spend my own time and energy trying to get decent builds shoehorned into distutils2 when he is busy working on a solution. 
(David already spent loads of time on trying to integrate scons with distutils (the numscons project) and maintained numpy.distutils and scipy builds for years; I trust his judgement above pretty much anybody else's.) >>> So, I reiterate my proposal, and it could also be expressed like this: >>> >>> 1/ David writes a PEP where he describes how Bento interact with a >>> project -- metadata, description files, etc >>> 2/ Someone from distutils2 completes the PEP by describing how setup.cfg >>> works wrt Extensions >>> 3/ we see if we can have a common standard even if it's a subset of >>> bento capabilities >> >> bento isn't a build tool, it's a packaging tool, competing directly >> with distutils2. It can deal with simple distutils-like builds using a >> bundled build tool, and currently has integration with waf for >> complicated builds; integration with other build systems will >> presumably be added later as people need it (the main point is that >> bento is designed for it). > I am not interested in Bento-the-tool. I am interested in what such a > tool needs from a project to use it => Again, you should read the elevator pitch at http://cournape.github.com/Bento/ + the Bento documentation. > "It can deal with simple distutils-like builds using a bundled build > tool" => If I understand this correctly, does that mean that Bento can > build a distutils project with the distutils Metadata ? Sorry, what I meant with "distutils-like builds" is "two simple C extensions", i.e. the trivial build case. Dag From brian at python.org Fri Jun 22 00:14:46 2012 From: brian at python.org (Brian Curtin) Date: Thu, 21 Jun 2012 17:14:46 -0500 Subject: [Python-Dev] Accepting PEP 397 In-Reply-To: References: Message-ID: On Wed, Jun 20, 2012 at 11:54 AM, Brian Curtin wrote: > As the PEP czar for 397, after Martin's final updates, I hereby > pronounce this PEP "accepted"! > > Thanks to Mark Hammond for kicking it off, Vinay Sajip for writing up > the code, Martin von Loewis for recent updates, and everyone in the > community who contributed to the discussions. > > I will begin integration work this evening. It's in. http://hg.python.org/cpython/rev/a7ecbb2ad967 Thanks all! From d.s.seljebotn at astro.uio.no Fri Jun 22 00:25:06 2012 From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn) Date: Fri, 22 Jun 2012 00:25:06 +0200 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <4FE39ABC.2000103@astro.uio.no> References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE2E491.8010504@astro.uio.no> <4FE30BD6.1060706@ziade.org> <4FE31763.1020609@astro.uio.no> <4FE3204E.3060301@ziade.org> <4FE32F13.7030102@astro.uio.no> <4FE37063.2030705@ziade.org> <4FE38842.4090308@astro.uio.no> <4FE38C65.7050905@ziade.org> <4FE39ABC.2000103@astro.uio.no> Message-ID: <4FE39F42.3080800@astro.uio.no> On 06/22/2012 12:05 AM, Dag Sverre Seljebotn wrote: > On 06/21/2012 11:04 PM, Tarek Ziad? wrote: >> On 6/21/12 10:46 PM, Dag Sverre Seljebotn wrote: >> ... >>>> I think we should, as you proposed, list a few projects w/ compilation >>>> needs -- from the simplest to the more complex, then see how a standard >>>> *description* could be used by any tool >>> >>> It's not clear to me what you mean by description. Package metadata, >>> install information or description of what/how to build? 
>>> >>> I hope you don't mean the latter, that would be insane...it would >>> effectively amount to creating a build tool that's both more elegant >>> and more powerful than any option that's currently already out there. >>> >>> Assuming you mean the former, that's what David did to create Bento. >>> Reading and understanding Bento and the design decisions going into it >>> would be a better use of time than redoing a discussion, and would at >>> least be a very good starting point. >> >> What I mean is : what would it take to use Bento (or another tool) as >> the compiler in a distutils-based project, without having to change the >> distutils metadata. > > As for current distutils/setuptools/distribute metadata, the idea is you > run the bento conversion utility to convert it to Bento metadata, then > use Bento. > > Please read > > http://cournape.github.com/Bento/ > > There may be packages where this doesn't work and you'd need to tweak > the results yourself though. > >>> Here's the flip side: If you have zero knowledge about compilers, it's >>> going to be almost impossible to have a meaningful discussion about a >>> compilation PEP. It's very hard to discuss standards unless everybody >>> involved have the necessary prerequisite knowledge. You don't go >>> discussing details of the Linux kernel without some solid C experience >>> either. >> Consider me as the end user that want to have his 2 C modules compiled >> in their Python project. > > OK, so can I propose that you kill off distutils2 and use bento > wholesale instead? > > Obviously not. So you're not just an end-user. That illusion would wear > rather thin very quickly. I regret this comment, it's not helpful to the discussion. Trying again: David's numscons project was a large effort and it tried to integrate a proper build system (scons) with distutils. That effort didn't in the end go anywhere. But I think it did show that everything is coupled to everything, and that build system integration (and other "special" needs of the scipy community) affects everything in the package system. It's definitely not as simple as having somebody with compiler experience chime in on the isolated topic of how to build extensions. It's something that needs to drive the entire design process. Which is perhaps why it is difficult to have a package system designed by people who don't know compilers to be usable by people who need to use them in non-trivial ways. Dag > >>> >>> The necessary prerequisites in this case is not merely "knowledge of >>> compilers". To avoid repeating mistakes of the past, the prerequisites >>> for a meaningful discussion is years of hard-worn experience building >>> software in various languages, on different platforms, using different >>> build tools. >>> >>> Look, these problems are really hard to deal with. Myself I have >>> experience with building 2-3 languages using 2-3 build tools on 2 >>> platforms, and I consider myself a complete novice and usually decide >>> to trust David's instincts over trying to make up an opinion of my own >>> -- simply because I know he's got a lot more experience than I have. >>> >>> Theoretically it is possible to separate and isolate concerns so that >>> one set of people discuss build integration and another set of people >>> discuss installation. Problem is that all the problems tangle -- in >>> particular when the starting point is distutils! 
>>> >>> That's why *sometimes*, not always, design by committee is the wrong >>> approach, and one-man-shows is what brings technology forwards. >> >> I am not saying this should be designed by a commitee, but rather - if >> such a tool can be made compatible with simple Distutils project, the >> guy behind this tool can probably help on a PEP with feedback from a >> larger audience than the sci community. >> >> What bugs me is to say that we live in two separate worlds and cannot >> build common pieces. This is not True. > > I'm not saying it's *impossible* to build common pieces, I'm suggesting > that it's not cost-effective in terms of man-hours going into it. And > the problem isn't technical as much as social and the mix of people and > skill sets involved. > > But David really made that decision for me when he left distutils-sig, > I'm not going to spend my own time and energy trying to get decent > builds shoehorned into distutils2 when he is busy working on a solution. > > (David already spent loads of time on trying to integrate scons with > distutils (the numscons project) and maintained numpy.distutils and > scipy builds for years; I trust his judgement above pretty much anybody > else's.) > >>>> So, I reiterate my proposal, and it could also be expressed like this: >>>> >>>> 1/ David writes a PEP where he describes how Bento interact with a >>>> project -- metadata, description files, etc >>>> 2/ Someone from distutils2 completes the PEP by describing how >>>> setup.cfg >>>> works wrt Extensions >>>> 3/ we see if we can have a common standard even if it's a subset of >>>> bento capabilities >>> >>> bento isn't a build tool, it's a packaging tool, competing directly >>> with distutils2. It can deal with simple distutils-like builds using a >>> bundled build tool, and currently has integration with waf for >>> complicated builds; integration with other build systems will >>> presumably be added later as people need it (the main point is that >>> bento is designed for it). >> I am not interested in Bento-the-tool. I am interested in what such a >> tool needs from a project to use it => > > Again, you should read the elevator pitch at > http://cournape.github.com/Bento/ + the Bento documentation. > >> "It can deal with simple distutils-like builds using a bundled build >> tool" => If I understand this correctly, does that mean that Bento can >> build a distutils project with the distutils Metadata ? > > Sorry, what I meant with "distutils-like builds" is "two simple C > extensions", i.e. the trivial build case. > > Dag From cournape at gmail.com Fri Jun 22 00:32:45 2012 From: cournape at gmail.com (David Cournapeau) Date: Thu, 21 Jun 2012 23:32:45 +0100 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <20120622000039.3b8bff71@pitrou.net> References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE2E491.8010504@astro.uio.no> <4FE30BD6.1060706@ziade.org> <4FE31763.1020609@astro.uio.no> <4FE3204E.3060301@ziade.org> <4FE32F13.7030102@astro.uio.no> <4FE37063.2030705@ziade.org> <4FE38842.4090308@astro.uio.no> <20120622000039.3b8bff71@pitrou.net> Message-ID: On Thu, Jun 21, 2012 at 11:00 PM, Antoine Pitrou wrote: > On Thu, 21 Jun 2012 22:46:58 +0200 > Dag Sverre Seljebotn wrote: > > > The other thing is, the folks in distutils2 and myself, have zero > > > knowledge about compilers. That's why we got very frustrated not to see > > > people with that knowledge come and help us in this area. 
> > > Here's the flip side: If you have zero knowledge about compilers, it's > > going to be almost impossible to have a meaningful discussion about a > > compilation PEP. > > If a PEP is being discussed, even a packaging PEP, it involves all of > python-dev, so Tarek and Éric not being knowledgeable in compilers is > not a big problem. > > > The necessary prerequisites in this case is not merely "knowledge of > > compilers". To avoid repeating mistakes of the past, the prerequisites > > for a meaningful discussion is years of hard-worn experience building > > software in various languages, on different platforms, using different > > build tools. > > This is precisely the kind of knowledge that a PEP is aimed at > distilling. > What would you imagine such a PEP would contain? If you don't need to customize the compilation, then I would say refactoring what's in distutils is good enough. If you need customization, then I am convinced one should just use one of the existing build tools (waf, fbuild, scons, etc.). Python has more than enough of them already. By refactoring, I mean extracting it completely from commands, and having an API similar to e.g. fbuild ( https://github.com/felix-lang/fbuild/blob/master/examples/c/fbuildroot.py), i.e. you basically have a class PythonBuilder.build_extension(name, sources, options). The key point is to remove any dependency on commands. If fbuild were not python3-specific, I would say just use that. It would cover most use cases. Actually, > Regards > > Antoine. > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/cournape%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tarek at ziade.org Fri Jun 22 00:42:42 2012 From: tarek at ziade.org (=?UTF-8?B?VGFyZWsgWmlhZMOp?=) Date: Fri, 22 Jun 2012 00:42:42 +0200 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE2E491.8010504@astro.uio.no> <4FE30BD6.1060706@ziade.org> <4FE31763.1020609@astro.uio.no> <4FE3204E.3060301@ziade.org> <4FE32F13.7030102@astro.uio.no> <4FE37063.2030705@ziade.org> <4FE38842.4090308@astro.uio.no> <4FE38C65.7050905@ziade.org> Message-ID: <4FE3A362.4080008@ziade.org> On 6/21/12 11:55 PM, David Cournapeau wrote: > > I think there is a misunderstanding of what bento is: bento is not a > compiler or anything like that. It is a set of libraries that work > together to configure, build and install a python project. > > Concretely, in bento, there is > - a part that builds a package description (Distribution-like in > distutils parlance) from a bento.info (a bit like > setup.cfg) > - a set of commands around this package description. > - a set of "backends" to e.g. use waf to build C extensions with full > and automatic dependency analysis (rebuild this if this other thing is > out of date), parallel builds and configuration. Bento scripts build > numpy more efficiently and reliably while being 50 % shorter than our > setup.py. > - a small library to build a distutils-compatible Distribution so > that you can write a 3-line setup.py that takes all its info from > bento.info and allows pip to work.
> > Now, you could produce a similar package description from the > setup.cfg to be fed to bento, but I don't really see the point since > AFAIK, bento.info is strictly more powerful as a > format than setup.cfg. > So that means that *today*, Bento can consume Distutils2 projects and compile them, just by reading their setup.cfg, right? And the code you have to convert setup.cfg into bento.info is what I was talking about. It means that I can create a project without a setup.py file, just a setup.cfg, and have it working with distutils2 *or* bento. That's *exactly* what I was talking about. The setup.cfg is the *common* standard, and is planned to be published at PyPI statically. Let people out there use the tool of their choice to install a project defined by a setup.cfg. So, 2 questions: 1/ does Bento install things following PEP 376? 2/ how do the setup.cfg hooks work wrt Bento? And one last proposal: how would a PEP sound that defines a setup.cfg standard that is Bento-friendly, but still distutils2-friendly? -------------- next part -------------- An HTML attachment was scrubbed... URL: From aclark at aclark.net Fri Jun 22 01:34:37 2012 From: aclark at aclark.net (Alex Clark) Date: Thu, 21 Jun 2012 19:34:37 -0400 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <8359F922B4E045BD819F7811725306D6@gmail.com> References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <8359F922B4E045BD819F7811725306D6@gmail.com> Message-ID: Hi, On 6/21/12 5:38 PM, Donald Stufft wrote: > On Thursday, June 21, 2012 at 4:01 PM, Paul Moore wrote: >> End users should not need packaging tools on their machines. >> > Sort of riffing on this idea, I cannot seem to find a specification for > what a Python > package actually is. FWIW according to distutils[1], a package is: a module or modules inside another module[2]. So e.g.::

    foo.py is a module

and:

    foo/__init__.py
    foo/foo.py

is a simple package containing the following modules:

    import foo, foo.foo

Alex [1] http://docs.python.org/distutils/introduction.html#general-python-terminology [2] And a distribution is a compressed archive of a package, in case that's not clear. > Maybe the first effort should focus on this instead > of arguing one > implementation or another. > > As a packager: > I should not (in general) care what tool (pip, pysetup, > easy_install, buildout, whatever) is used > to install my package; my package should just describe what to do > to install itself. > > As an end user: > I should not (in general) care what tool was used to create a > package (setuptools, bento, distutils, > whatever). My tool of choice should look at the package and perform > the operations that the package > says are needed for install. > > Ideally the package could have some basic primitives that are enough to > tell the package installer > tool what to do to install it; these primitives should be enough to > cover the common cases (pure python > modules at the very least, maybe additionally some C modules).
> Now as others have remarked, it would be insane to attempt to do this
> in every case, as it would involve writing a build system that is
> more advanced than anything else existing, so a required primitive
> would be something that allows calling out to a specific,
> package-chosen build system (waf, make, whatever) to handle the build
> configuration.
>
> The eventual end goal here being to take a package from something
> that varies from implementation to implementation to a standardized
> format that any number of tools can build on top of. It would likely
> include some things defining where metadata MUST be defined.
>
> For instance, if metadata in setuptools was "compiled" down to a
> static file, and easy_install, pip et al. used that static file to
> install from instead of executing setup.py, then the end user would
> not need setuptools installed, and instead any number of tools could
> have been created that utilized that data.
>

--
Alex Clark · http://pythonpackages.com

From donald.stufft at gmail.com Fri Jun 22 02:01:02 2012
From: donald.stufft at gmail.com (Donald Stufft)
Date: Thu, 21 Jun 2012 20:01:02 -0400
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To:
References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org>
	<20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net>
	<4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com>
	<4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com>
	<4FE3732D.9090401@ziade.org>
	<8359F922B4E045BD819F7811725306D6@gmail.com>
Message-ID: <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com>

On Thursday, June 21, 2012 at 7:34 PM, Alex Clark wrote:
> Hi,
>
> On 6/21/12 5:38 PM, Donald Stufft wrote:
> > On Thursday, June 21, 2012 at 4:01 PM, Paul Moore wrote:
> > > End users should not need packaging tools on their machines.
> > >
> > Sort of riffing on this idea, I cannot seem to find a specification
> > for what a Python package actually is.
>
> FWIW according to distutils[1], a package is: a module or modules
> inside another module[2]. So e.g.::
>
>   foo.py
>
> is a module, and:
>
>   foo/__init__.py
>   foo/foo.py
>
> is a simple package containing the following modules:
>
>   import foo, foo.foo
>
> Alex
>
> [1]
> http://docs.python.org/distutils/introduction.html#general-python-terminology
>
> [2] And a distribution is a compressed archive of a package, in case
> that's not clear.
>

Right, I'm actually talking about distributions (as is everyone else
in this thread), and a definition is not a specification. What I'm
trying to get at is a standard package format where all the metadata
can be read without the packaging lib (distutils/setuptools cannot get
at metadata without using distutils or setuptools). It would need to
be required that this serves as the one true source of metadata, and
other tools could add certain types of metadata to this format.

If, say, distutils2 wrote a package that adhered to a certain
standard, and wrote all the information distutils2 knows about how to
install said package (what files, names, versions, dependencies, etc.)
to a file (say PKG-INFO) that contained only "common" standard
information, then another tool (say bento) could take that package and
install it.

The idea I'm hoping for is to stop worrying about one implementation
over another and hoping to create a common format that all the tools
can agree upon and create/install.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From skippy.hammond at gmail.com Fri Jun 22 03:22:00 2012 From: skippy.hammond at gmail.com (Mark Hammond) Date: Fri, 22 Jun 2012 11:22:00 +1000 Subject: [Python-Dev] Accepting PEP 397 In-Reply-To: References: Message-ID: <4FE3C8B8.1030402@gmail.com> On 22/06/2012 8:14 AM, Brian Curtin wrote: > On Wed, Jun 20, 2012 at 11:54 AM, Brian Curtin wrote: >> As the PEP czar for 397, after Martin's final updates, I hereby >> pronounce this PEP "accepted"! >> >> Thanks to Mark Hammond for kicking it off, Vinay Sajip for writing up >> the code, Martin von Loewis for recent updates, and everyone in the >> community who contributed to the discussions. >> >> I will begin integration work this evening. > > It's in. http://hg.python.org/cpython/rev/a7ecbb2ad967 > > Thanks all! Awesome - thank you! Mark From yselivanov.ml at gmail.com Fri Jun 22 03:56:15 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Thu, 21 Jun 2012 21:56:15 -0400 Subject: [Python-Dev] PEP 362 - request for pronouncement Message-ID: Hello, On behalf of Brett, Larry, and myself, I'm requesting for PEP 362 pronouncement. The PEP, has been updated with all feedback from python-dev list discussions. I'm posting the latest version of it with this message. The PEP is also available here: http://www.python.org/dev/peps/pep-0362/ The python issue tracking the patch: http://bugs.python.org/issue15008 The reception of the PEP was very positive, the API is minimalistic and future-proof; the implementation is stable, well tested, reviewed and should be ready to merge. Thank you, Yury PEP: 362 Title: Function Signature Object Version: $Revision$ Last-Modified: $Date$ Author: Brett Cannon , Jiwon Seo , Yury Selivanov , Larry Hastings Status: Draft Type: Standards Track Content-Type: text/x-rst Created: 21-Aug-2006 Python-Version: 3.3 Post-History: 04-Jun-2012 Abstract ======== Python has always supported powerful introspection capabilities, including introspecting functions and methods (for the rest of this PEP, "function" refers to both functions and methods). By examining a function object you can fully reconstruct the function's signature. Unfortunately this information is stored in an inconvenient manner, and is spread across a half-dozen deeply nested attributes. This PEP proposes a new representation for function signatures. The new representation contains all necessary information about a function and its parameters, and makes introspection easy and straightforward. However, this object does not replace the existing function metadata, which is used by Python itself to execute those functions. The new metadata object is intended solely to make function introspection easier for Python programmers. Signature Object ================ A Signature object represents the call signature of a function and its return annotation. For each parameter accepted by the function it stores a `Parameter object`_ in its ``parameters`` collection. A Signature object has the following public attributes and methods: * return_annotation : object The "return" annotation for the function. If the function has no "return" annotation, this attribute is not set. * parameters : OrderedDict An ordered mapping of parameters' names to the corresponding Parameter objects. * bind(\*args, \*\*kwargs) -> BoundArguments Creates a mapping from positional and keyword arguments to parameters. Raises a ``TypeError`` if the passed arguments do not match the signature. 
* bind_partial(\*args, \*\*kwargs) -> BoundArguments Works the same way as ``bind()``, but allows the omission of some required arguments (mimics ``functools.partial`` behavior.) Raises a ``TypeError`` if the passed arguments do not match the signature. * replace(parameters, \*, return_annotation) -> Signature Creates a new Signature instance based on the instance ``replace`` was invoked on. It is possible to pass different ``parameters`` and/or ``return_annotation`` to override the corresponding properties of the base signature. To remove ``return_annotation`` from the copied ``Signature``, pass in ``Signature.empty``. Signature objects are immutable. Use ``Signature.replace()`` to make a modified copy: :: >>> def foo() -> None: ... pass >>> sig = signature(foo) >>> new_sig = sig.replace(return_annotation="new return annotation") >>> new_sig is not sig True >>> new_sig.return_annotation != sig.return_annotation True >>> new_sig.parameters == sig.parameters True >>> new_sig = new_sig.replace(return_annotation=new_sig.empty) >>> hasattr(new_sig, "return_annotation") False There are two ways to instantiate a Signature class: * Signature(parameters, \*, return_annotation) Default Signature constructor. Accepts an optional sequence of ``Parameter`` objects, and an optional ``return_annotation``. Parameters sequence is validated to check that there are no parameters with duplicate names, and that the parameters are in the right order, i.e. positional-only first, then positional-or-keyword, etc. * Signature.from_function(function) Returns a Signature object reflecting the signature of the function passed in. It's possible to test Signatures for equality. Two signatures are equal when their parameters are equal, their positional and positional-only parameters appear in the same order, and they have equal return annotations. Changes to the Signature object, or to any of its data members, do not affect the function itself. Signature also implements ``__str__``: :: >>> str(Signature.from_function((lambda *args: None))) '(*args)' >>> str(Signature()) '()' Parameter Object ================ Python's expressive syntax means functions can accept many different kinds of parameters with many subtle semantic differences. We propose a rich Parameter object designed to represent any possible function parameter. A Parameter object has the following public attributes and methods: * name : str The name of the parameter as a string. Must be a valid python identifier name (with the exception of ``POSITIONAL_ONLY`` parameters, which can have it set to ``None``.) * default : object The default value for the parameter. If the parameter has no default value, this attribute is not set. * annotation : object The annotation for the parameter. If the parameter has no annotation, this attribute is not set. * kind Describes how argument values are bound to the parameter. Possible values: * ``Parameter.POSITIONAL_ONLY`` - value must be supplied as a positional argument. Python has no explicit syntax for defining positional-only parameters, but many built-in and extension module functions (especially those that accept only one or two parameters) accept them. * ``Parameter.POSITIONAL_OR_KEYWORD`` - value may be supplied as either a keyword or positional argument (this is the standard binding behaviour for functions implemented in Python.) * ``Parameter.KEYWORD_ONLY`` - value must be supplied as a keyword argument. Keyword only parameters are those which appear after a "*" or "\*args" entry in a Python function definition. 
* ``Parameter.VAR_POSITIONAL`` - a tuple of positional arguments that aren't bound to any other parameter. This corresponds to a "\*args" parameter in a Python function definition. * ``Parameter.VAR_KEYWORD`` - a dict of keyword arguments that aren't bound to any other parameter. This corresponds to a "\*\*kwds" parameter in a Python function definition. Always use ``Parameter.*`` constants for setting and checking value of the ``kind`` attribute. * replace(\*, name, kind, default, annotation) -> Parameter Creates a new Parameter instance based on the instance ``replaced`` was invoked on. To override a Parameter attribute, pass the corresponding argument. To remove an attribute from a ``Parameter``, pass ``Parameter.empty``. Parameter constructor: * Parameter(name, kind, \*, annotation, default) Instantiates a Parameter object. ``name`` and ``kind`` are required, while ``annotation`` and ``default`` are optional. Two parameters are equal when they have equal names, kinds, defaults, and annotations. Parameter objects are immutable. Instead of modifying a Parameter object, you can use ``Parameter.replace()`` to create a modified copy like so: :: >>> param = Parameter('foo', Parameter.KEYWORD_ONLY, default=42) >>> str(param) 'foo=42' >>> str(param.replace()) 'foo=42' >>> str(param.replace(default=Parameter.empty, annotation='spam')) "foo:'spam'" BoundArguments Object ===================== Result of a ``Signature.bind`` call. Holds the mapping of arguments to the function's parameters. Has the following public attributes: * arguments : OrderedDict An ordered, mutable mapping of parameters' names to arguments' values. Contains only explicitly bound arguments. Arguments for which ``bind()`` relied on a default value are skipped. * args : tuple Tuple of positional arguments values. Dynamically computed from the 'arguments' attribute. * kwargs : dict Dict of keyword arguments values. Dynamically computed from the 'arguments' attribute. The ``arguments`` attribute should be used in conjunction with ``Signature.parameters`` for any arguments processing purposes. ``args`` and ``kwargs`` properties can be used to invoke functions: :: def test(a, *, b): ... sig = signature(test) ba = sig.bind(10, b=20) test(*ba.args, **ba.kwargs) Arguments which could be passed as part of either ``*args`` or ``**kwargs`` will be included only in the ``BoundArguments.args`` attribute. Consider the following example: :: def test(a=1, b=2, c=3): pass sig = signature(test) ba = sig.bind(a=10, c=13) >>> ba.args (10,) >>> ba.kwargs: {'c': 13} Implementation ============== The implementation adds a new function ``signature()`` to the ``inspect`` module. The function is the preferred way of getting a ``Signature`` for a callable object. 
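A short usage sketch (the output shown assumes the reference
implementation):

::

    >>> from inspect import signature, Parameter
    >>> def greet(name, *, greeting='hello') -> str:
    ...     return '{}, {}!'.format(greeting, name)
    >>> sig = signature(greet)
    >>> str(sig)
    "(name, *, greeting='hello') -> str"
    >>> list(sig.parameters)
    ['name', 'greeting']
    >>> sig.parameters['greeting'].kind == Parameter.KEYWORD_ONLY
    True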
The function implements the following algorithm:

    - If the object is not callable - raise a TypeError

    - If the object has a ``__signature__`` attribute and if it
      is not ``None`` - return it

    - If it has a ``__wrapped__`` attribute, return
      ``signature(object.__wrapped__)``

    - If the object is an instance of ``FunctionType``, construct
      and return a new ``Signature`` for it

    - If the object is a bound method, construct and return a new
      ``Signature`` object, with its first parameter (usually ``self``
      or ``cls``) removed

    - If the object is an instance of ``functools.partial``, construct
      a new ``Signature`` from its ``partial.func`` attribute, and
      account for already bound ``partial.args`` and ``partial.kwargs``

    - If the object is a class or metaclass:

        - If the object's type has a ``__call__`` method defined in
          its MRO, return a Signature for it

        - If the object has a ``__new__`` method defined in its MRO,
          return a Signature object for it

        - If the object has an ``__init__`` method defined in its MRO,
          return a Signature object for it

    - Return ``signature(object.__call__)``

Note that the ``Signature`` object is created in a lazy manner, and
is not automatically cached.  However, the user can manually cache a
Signature by storing it in the ``__signature__`` attribute.

An implementation for Python 3.3 can be found at [#impl]_.
The Python issue tracking the patch is [#issue]_.


Design Considerations
=====================

No implicit caching of Signature objects
----------------------------------------

The first PEP design had a provision for implicit caching of
``Signature`` objects in the ``inspect.signature()`` function.
However, this has the following downsides:

    * If the ``Signature`` object is cached, then any changes to the
      function it describes will not be reflected in it.  However, if
      the caching is needed, it can always be done manually and
      explicitly

    * It is better to reserve the ``__signature__`` attribute for the
      cases when there is a need to explicitly set a ``Signature``
      object that is different from the actual one

Some functions may not be introspectable
----------------------------------------

Some functions may not be introspectable in certain implementations of
Python.  For example, in CPython, built-in functions defined in C
provide no metadata about their arguments.  Adding support for them is
out of scope for this PEP.

Signature and Parameter equivalence
-----------------------------------

We assume that parameter names have semantic significance--two
signatures are equal only when their corresponding parameters are
equal and have the exact same names.  Users who want looser
equivalence tests, perhaps ignoring names of VAR_KEYWORD or
VAR_POSITIONAL parameters, will need to implement those themselves.
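For instance, a looser comparison that ignores the names of variadic
parameters can be built on the public API (a sketch, not part of this
PEP's API; ``getattr`` is used because ``default`` and ``annotation``
may be unset):

::

    from inspect import signature, Parameter

    _VARIADIC = (Parameter.VAR_POSITIONAL, Parameter.VAR_KEYWORD)

    def loosely_equal(func_a, func_b):
        # Reduce each signature to a comparable key, replacing the
        # names of *args/**kwargs parameters with None.
        def key(func):
            return [(None if p.kind in _VARIADIC else p.name,
                     p.kind,
                     getattr(p, 'default', None),
                     getattr(p, 'annotation', None))
                    for p in signature(func).parameters.values()]
        return key(func_a) == key(func_b)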
Examples ======== Visualizing Callable Objects' Signature --------------------------------------- Let's define some classes and functions: :: from inspect import signature from functools import partial, wraps class FooMeta(type): def __new__(mcls, name, bases, dct, *, bar:bool=False): return super().__new__(mcls, name, bases, dct) def __init__(cls, name, bases, dct, **kwargs): return super().__init__(name, bases, dct) class Foo(metaclass=FooMeta): def __init__(self, spam:int=42): self.spam = spam def __call__(self, a, b, *, c) -> tuple: return a, b, c @classmethod def spam(cls, a): return a def shared_vars(*shared_args): """Decorator factory that defines shared variables that are passed to every invocation of the function""" def decorator(f): @wraps(f) def wrapper(*args, **kwds): full_args = shared_args + args return f(*full_args, **kwds) # Override signature sig = signature(f) sig = sig.replace(tuple(sig.parameters.values())[1:]) wrapper.__signature__ = sig return wrapper return decorator @shared_vars({}) def example(_state, a, b, c): return _state, a, b, c def format_signature(obj): return str(signature(obj)) Now, in the python REPL: :: >>> format_signature(FooMeta) '(name, bases, dct, *, bar:bool=False)' >>> format_signature(Foo) '(spam:int=42)' >>> format_signature(Foo.__call__) '(self, a, b, *, c) -> tuple' >>> format_signature(Foo().__call__) '(a, b, *, c) -> tuple' >>> format_signature(Foo.spam) '(a)' >>> format_signature(partial(Foo().__call__, 1, c=3)) '(b, *, c=3) -> tuple' >>> format_signature(partial(partial(Foo().__call__, 1, c=3), 2, c=20)) '(*, c=20) -> tuple' >>> format_signature(example) '(a, b, c)' >>> format_signature(partial(example, 1, 2)) '(c)' >>> format_signature(partial(partial(example, 1, b=2), c=3)) '(b=2, c=3)' Annotation Checker ------------------ :: import inspect import functools def checktypes(func): '''Decorator to verify arguments and return types Example: >>> @checktypes ... def test(a:int, b:str) -> int: ... return int(a * b) >>> test(10, '1') 1111111111 >>> test(10, 1) Traceback (most recent call last): ... ValueError: foo: wrong type of 'b' argument, 'str' expected, got 'int' ''' sig = inspect.signature(func) types = {} for param in sig.parameters.values(): # Iterate through function's parameters and build the list of # arguments types try: type_ = param.annotation except AttributeError: continue else: if not inspect.isclass(type_): # Not a type, skip it continue types[param.name] = type_ # If the argument has a type specified, let's check that its # default value (if present) conforms with the type. try: default = param.default except AttributeError: continue else: if not isinstance(default, type_): raise ValueError("{func}: wrong type of a default value for {arg!r}". \ format(func=func.__qualname__, arg=param.name)) def check_type(sig, arg_name, arg_type, arg_value): # Internal function that encapsulates arguments type checking if not isinstance(arg_value, arg_type): raise ValueError("{func}: wrong type of {arg!r} argument, " \ "{exp!r} expected, got {got!r}". 
\ format(func=func.__qualname__, arg=arg_name, exp=arg_type.__name__, got=type(arg_value).__name__)) @functools.wraps(func) def wrapper(*args, **kwargs): # Let's bind the arguments ba = sig.bind(*args, **kwargs) for arg_name, arg in ba.arguments.items(): # And iterate through the bound arguments try: type_ = types[arg_name] except KeyError: continue else: # OK, we have a type for the argument, lets get the corresponding # parameter description from the signature object param = sig.parameters[arg_name] if param.kind == param.VAR_POSITIONAL: # If this parameter is a variable-argument parameter, # then we need to check each of its values for value in arg: check_type(sig, arg_name, type_, value) elif param.kind == param.VAR_KEYWORD: # If this parameter is a variable-keyword-argument parameter: for subname, value in arg.items(): check_type(sig, arg_name + ':' + subname, type_, value) else: # And, finally, if this parameter a regular one: check_type(sig, arg_name, type_, arg) result = func(*ba.args, **ba.kwargs) # The last bit - let's check that the result is correct try: return_type = sig.return_annotation except AttributeError: # Looks like we don't have any restriction on the return type pass else: if isinstance(return_type, type) and not isinstance(result, return_type): raise ValueError('{func}: wrong return type, {exp} expected, got {got}'. \ format(func=func.__qualname__, exp=return_type.__name__, got=type(result).__name__)) return result return wrapper References ========== .. [#impl] pep362 branch (https://bitbucket.org/1st1/cpython/overview) .. [#issue] issue 15008 (http://bugs.python.org/issue15008) Copyright ========= This document has been placed in the public domain. .. Local Variables: mode: indented-text indent-tabs-mode: nil sentence-end-double-space: t fill-column: 70 coding: utf-8 End: From ncoghlan at gmail.com Fri Jun 22 07:05:08 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 22 Jun 2012 15:05:08 +1000 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <8359F922B4E045BD819F7811725306D6@gmail.com> <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> Message-ID: On Fri, Jun 22, 2012 at 10:01 AM, Donald Stufft wrote: > The idea i'm hoping for is to stop worrying about one implementation over > another and > hoping to create a common format that all the tools can agree upon and > create/install. Right, and this is where it encouraged me to see in the Bento docs that David had cribbed from RPM in this regard (although I don't believe he has cribbed *enough*). A packaging system really needs to cope with two very different levels of packaging: 1. Source distributions (e.g. SRPMs). To get from this to useful software requires developer tools. 2. "Binary" distributions (e.g. RPMs). To get from this to useful software mainly requires a "file copy" utility (well, that and an archive decompressor). An SRPM is *just* a SPEC file and source tarball. That's it. To get from that to an installed product, you have a bunch of additional "BuildRequires" dependencies, along with %build and %install scripts and a %files definition that define what will be packaged up and included in the binary RPM. 
The exact nature of the metadata format doesn't really matter, what matters is that it's a documented standard that multiple tools can read. An RPM includes files that actually get installed on the target system. An RPM can be arch specific (if they include built binary bits) or "noarch" if they're platform neutral. distutils really only plays at the SRPM level - there is no defined OS neutral RPM equivalent. That's why I brought up the bdist_simple discussion earlier in the thread - if we can agree on a standard bdist_simple format, then we can more cleanly decouple the "build" step from the "install" step. I think one of the key things to learn from the SPEC file format is the configuration language it used for the various build phases: sh (technically, any shell on the system, but almost everyone just uses the default system shell) This is why you can integrate whatever build system you like with it: so long as you can invoke the build from the shell, then you can use it to make your RPM. Now, there's an obvious problem with this: it's completely useless from a *cross-platform* building point of view. Isn't it a shame there's no language we could use that would let us invoke build systems in a cross platform way? Oh, wait... So here's some sheer pie-in-the-sky speculation. If people like elements of this idea enough to run with it, great. If not... oh well: - I believe the "egg" term has way too much negative baggage (courtesy of easy_install), and find the full term Distribution to be too easily confused with "Linux distribution". However, "Python dist" is unambiguous (since the more typical abbreviation for an aggregate distribution is "distro"). Thus, I attempt to systematically refer to the objects used to distribute Python software from developers to users as "dists". In practice, this terminology is already used in many places (distutils, sdist, bdist_msi, bdist_rpm, the .dist-info format in PEP 376 etc). Thus, Python software is distributed as dists (either sdists or bdists), which may in turn be converted to distro packages (e.g. SRPMs and RPMs) for deployment to particular environments. - I reject setup.cfg, as I believe ini-style configuration files are not appropriate for a metadata format that needs to include file listings and code fragments - I reject bento.info, as I think if we accept yet-another-custom-configuration-file-format into the standard library instead of just using YAML, we're even crazier than is already apparent - I shall use "dist.yaml" as my proposed name for my "I wish I could define packages like this" format (and yes, that means adding yaml support to the standard library is part of the wish) - many of the details below will be flawed, but I want to give a clear idea for how a concept like this might work in practice - we need to define a clear set of build phases, and then design the dist metadata format accordingly. 
For example: - source - uses a "source" section in dist.yaml - "source/install" maps source files directly to desired install locations - essentially what the setup.cfg Resources section tries to do - used for pure Python code, documentation, etc - See below for example - "source/files" defines a list of extra files to be included - "source/exclude" defines the list of files to be excluded - "source/run" defines a Python fragment to be executed - serves a similar purpose to the "files" section in setup.cfg - creates a temporary directory (and sets it as the working directory) - dist.yaml is copied to the temporary directory - all files to be installed are copied to the temporary directory - all extra files are copied to the temporary directory - the Python fragment in "source/run" is executed (which can thus easily add more files) - if sdist archive creation is requested, entire contents of temporary directory are included - build - uses a "build" section in dist.yaml - "build/install" maps built files to desired install locations - like source/install, but for build artifacts - compiled C extensions, .pyc and .pyo files, etc would all go here - "build/run" defines a Python fragment to be executed - "build/files" defines the list of files to be included - "build/exclude" defines the list of files to be excluded - "build/requires" defines extra dependencies not needed at runtime - starting environment is a source directory that is either: - preexisting (e.g. to allow building in-place in the source tree) - created by running source first - created by unpacking an sdist archive - the Python fragment in "build/run" is executed to trigger the build - if the build succeeds (i.e. doesn't throw an exception) - create a temporary directory - copy dist.yaml - copy all specified files - this is the easiest way to exclude build artifacts from the distribution, while still keeping them around to enable incremental builds - if bdist_simple archive creation is requested, entire contents of temporary directory are included - other bdist formats (such as bdist_rpm) will have their own rules for getting from the bdist_simple format to the platform specific format - install - uses an "install" section in dist.yaml - "install/pre" defines a Python fragment to be executed before copying files - "install/post" defines a Python fragment to be executed after copying files - starting environment is a bdist_simple directory that is either: - preexisting (e.g. to allow creation by system packaging tools) - created by running build first - created by unpacking a bdist_simple archive - end result is a fully installed and usable piece of software - test - uses a "test" section in dist.yaml - "test/run" defines a Python fragment to be executed to start the tests - "test/requires" defines extra dependencies needed to run the test suite - Example "source/install" based on http://alexis.notmyidea.org/distutils2/setupcfg.html#complete-example (my YAML may be a bit dodgy). - With this scheme, module installation is just another install category. - A solution for easily installing entire subtrees is desirable. I propose the recursive glob ** syntax for that purpose. - Unlike setup.cfg, every category would have an "-excluded" counterpart to filter unwanted files. Explicit is better than implicit. 
source: install: modules: example.py example_pkg/*.py example_pkg/**/*.py example_pkg/resource.txt doc: README doc/* doc-excluded: doc/man man: doc/man scripts: # Directory details are stripped automatically scripts/LAUNCH scripts/*.{sh,bat} # But subdirectories can be made explicit extras/: scripts/extras/*.{sh,bat} - the goal of a dist.yaml syntax would be to be *explicit* and *comprehensive*. If this gets too verbose, then the solution would be dist.yaml generators that are less expressive, but also reduce the necessary boilerplate. - a typical "sdist" will now just be an archive consisting of: - the project's dist.yaml file - all files created by the "source" phase - the "bdist_simple" format will just be an archive consisting of: - the project's dist.yaml file - all files created by the "build" phase - the source and build run hooks and install pre and post hooks become the way you integrate with arbitrary build systems. No fancy command or compiler system or anything like that, you just import whatever you need and call it with the appropriate arguments. To other tools, they will just be opaque chunks of text, but to the build system, they're executable pieces of Python code, just as RPM includes executable scripts. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From ncoghlan at gmail.com Fri Jun 22 07:07:02 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 22 Jun 2012 15:07:02 +1000 Subject: [Python-Dev] PEP 362 - request for pronouncement In-Reply-To: References: Message-ID: On Fri, Jun 22, 2012 at 11:56 AM, Yury Selivanov wrote: > Hello, > > On behalf of Brett, Larry, and myself, I'm requesting for PEP 362 > pronouncement. > > The PEP, has been updated with all feedback from python-dev list > discussions. I'm posting the latest version of it with this message. > The PEP is also available here: http://www.python.org/dev/peps/pep-0362/ > The python issue tracking the patch: http://bugs.python.org/issue15008 > > The reception of the PEP was very positive, the API is minimalistic and > future-proof; the implementation is stable, well tested, reviewed and should > be ready to merge. I'll also note that I've explicitly recused myself from being the BDFL-Delegate for this one, as I had too much of a hand in the API design. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? 
Brisbane, Australia From donald.stufft at gmail.com Fri Jun 22 07:20:31 2012 From: donald.stufft at gmail.com (Donald Stufft) Date: Fri, 22 Jun 2012 01:20:31 -0400 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <8359F922B4E045BD819F7811725306D6@gmail.com> <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> Message-ID: <46FAF51F25294E0081F05605365F4EA6@gmail.com> On Friday, June 22, 2012 at 1:05 AM, Nick Coghlan wrote: > > - I reject setup.cfg, as I believe ini-style configuration files are > not appropriate for a metadata format that needs to include file > listings and code fragments > > - I reject bento.info (http://bento.info), as I think if we accept > yet-another-custom-configuration-file-format into the standard library > instead of just using YAML, we're even crazier than is already > apparent > > - I shall use "dist.yaml" as my proposed name for my "I wish I could > define packages like this" format (and yes, that means adding yaml > support to the standard library is part of the wish) > > - many of the details below will be flawed, but I want to give a clear > idea for how a concept like this might work in practice > > - we need to define a clear set of build phases, and then design the > dist metadata format accordingly. For example: > - source > - uses a "source" section in dist.yaml > - "source/install" maps source files directly to desired > install locations > - essentially what the setup.cfg Resources section tries to do > - used for pure Python code, documentation, etc > - See below for example > - "source/files" defines a list of extra files to be included > - "source/exclude" defines the list of files to be excluded > - "source/run" defines a Python fragment to be executed > - serves a similar purpose to the "files" section in setup.cfg > - creates a temporary directory (and sets it as the working directory) > - dist.yaml is copied to the temporary directory > - all files to be installed are copied to the temporary directory > - all extra files are copied to the temporary directory > - the Python fragment in "source/run" is executed (which can > thus easily add more files) > - if sdist archive creation is requested, entire contents of > temporary directory are included > - build > - uses a "build" section in dist.yaml > - "build/install" maps built files to desired install locations > - like source/install, but for build artifacts > - compiled C extensions, .pyc and .pyo files, etc would all go here > - "build/run" defines a Python fragment to be executed > - "build/files" defines the list of files to be included > - "build/exclude" defines the list of files to be excluded > - "build/requires" defines extra dependencies not needed at runtime > - starting environment is a source directory that is either: > - preexisting (e.g. to allow building in-place in the source tree) > - created by running source first > - created by unpacking an sdist archive > - the Python fragment in "build/run" is executed to trigger the build > - if the build succeeds (i.e. 
doesn't throw an exception) > - create a temporary directory > - copy dist.yaml > - copy all specified files > - this is the easiest way to exclude build artifacts from > the distribution, while still keeping them around to enable > incremental builds > - if bdist_simple archive creation is requested, entire > contents of temporary directory are included > - other bdist formats (such as bdist_rpm) will have their own > rules for getting from the bdist_simple format to the platform > specific format > - install > - uses an "install" section in dist.yaml > - "install/pre" defines a Python fragment to be executed > before copying files > - "install/post" defines a Python fragment to be executed > after copying files > - starting environment is a bdist_simple directory that is either: > - preexisting (e.g. to allow creation by system packaging tools) > - created by running build first > - created by unpacking a bdist_simple archive > - end result is a fully installed and usable piece of software > - test > - uses a "test" section in dist.yaml > - "test/run" defines a Python fragment to be executed to start the tests > - "test/requires" defines extra dependencies needed to run the > test suite > I dislike some of the (implementation) details, but in general I think this is a good direction to go in. Less trying to force tools to work together by hijacking setup.py or something and more "this is a package, it contains the data you need to install, and how to install it, you installation tool can use this data however it pleases to make sure it is installed." I feel like this is (one of?) the missing piece of the puzzle to define a set of standards that _any_ package creation, or installation tool can implement and gain interoperability. I don't want to argue over implementation details as I think that is premature right now, so this concept has a big +1 from me. RPM, deb, etc has a long history and a lot of shared knowledge so looking at them and adapting it to work cross platform is likely to be huge win. -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephen at xemacs.org Fri Jun 22 08:25:44 2012 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Fri, 22 Jun 2012 15:25:44 +0900 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> Message-ID: <87d34rammv.fsf@uwakimon.sk.tsukuba.ac.jp> Paul Moore writes: > End users should not need packaging tools on their machines. I think this desideratum is close to obsolete these days, with webapps in "the cloud" downloading resources (including, but not limited to, code) on an as-needed basis. If you're *not* obtaining resources as-needed, but instead installing an everything-you-could-ever-need SUMO, I don't see the problem with including packaging tools as well. Not to mention that "end user" isn't a permanent property of a person, but rather a role that they can change at will and sometimes may be forced to. What is desirable is that such tools be kept in the back of a closet where people currently in the "end user" role don't need to see them at all, but developers can get them immediately when needed. 
From ncoghlan at gmail.com Fri Jun 22 08:29:55 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 22 Jun 2012 16:29:55 +1000 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <46FAF51F25294E0081F05605365F4EA6@gmail.com> References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <8359F922B4E045BD819F7811725306D6@gmail.com> <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> <46FAF51F25294E0081F05605365F4EA6@gmail.com> Message-ID: On Fri, Jun 22, 2012 at 3:20 PM, Donald Stufft wrote: > I don't want to argue over implementation details as I think that is > premature right now, so this > concept has a big +1 from me. RPM, deb, etc has a long history and a lot of > shared knowledge > so looking at them and adapting it to work cross platform is likely to be > huge win. Right, much of what I wrote in that email should be taken as "this is one way I think it *could* work", rather than "this is the way I think it *should* work". In particular, any realistic attempt should also look at what Debian based systems do differently from RPM based systems. I think the key elements are recognising that: - an "sdist" contains three kinds of file: - package metadata - files to be installed directly on the target system - files needed to build other files - a "bdist" also contains three kinds of file: - package metadata - files to be installed directly on the target system - files needed to correctly install and update other files That means the key transformations to be defined are: - source checkout -> sdist - need to define contents of sdist - need to define where any directly installed files are going to end up - sdist -> bdist - need to define contents of bdist - need to define how to create the build artifacts - need to define where any installed build artifacts are going to end up - bdist -> installed software - need to allow application developers to customise the installation process - need to allow system packages to customise where certain kinds of file end up The one *anti-pattern* I think we really want to avoid is a complex registration system where customisation isn't as simple as saying either: - run this inline piece of code; or - invoke this named function or class that implements the appropriate interface The other main consideration is that we want the format to be easy to read with general purpose tools, and that means something based on a configuration file standard. YAML is the obvious choice at that point. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From ncoghlan at gmail.com Fri Jun 22 08:30:59 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 22 Jun 2012 16:30:59 +1000 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <87d34rammv.fsf@uwakimon.sk.tsukuba.ac.jp> References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <87d34rammv.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: On Fri, Jun 22, 2012 at 4:25 PM, Stephen J. Turnbull wrote: > Paul Moore writes: > > ?> End users should not need packaging tools on their machines. 
> > I think this desideratum is close to obsolete these days, with webapps > in "the cloud" downloading resources (including, but not limited to, > code) on an as-needed basis. There's still a lot more to the software world than what happens on the public internet. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From tarek at ziade.org Fri Jun 22 08:42:43 2012 From: tarek at ziade.org (=?ISO-8859-1?Q?Tarek_Ziad=E9?=) Date: Fri, 22 Jun 2012 08:42:43 +0200 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <20120620134612.3a9f7cfe@pitrou.net> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <8359F922B4E045BD819F7811725306D6@gmail.com> <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> Message-ID: <4FE413E3.6050602@ziade.org> On 6/22/12 7:05 AM, Nick Coghlan wrote: > .. > > - I reject setup.cfg, as I believe ini-style configuration files are > not appropriate for a metadata format that needs to include file > listings and code fragments I don't understand what's the problem is with ini-style files, as they are suitable for multi-line variables etc. (see zc.buildout) yaml vs ini vs xxx seems to be an implementation detail, and my take on this is that we have ConfigParser in the stdlib From ncoghlan at gmail.com Fri Jun 22 09:11:11 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 22 Jun 2012 17:11:11 +1000 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <4FE413E3.6050602@ziade.org> References: <4FE0F336.7030709@netwok.org> <20120620134612.3a9f7cfe@pitrou.net> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <8359F922B4E045BD819F7811725306D6@gmail.com> <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> <4FE413E3.6050602@ziade.org> Message-ID: On Fri, Jun 22, 2012 at 4:42 PM, Tarek Ziad? wrote: > On 6/22/12 7:05 AM, Nick Coghlan wrote: > I don't understand what's the problem is with ini-style files, as they are > suitable for multi-line variables etc. (see zc.buildout) > > yaml vs ini vs xxx seems to be an implementation detail, and my take on this > is that we have ConfigParser in the stdlib You can't do more than one layer of nested data structures cleanly with an ini-style solution, and some aspects of packaging are just crying out for metadata that nests more deeply than that. The setup.cfg format for specifying installation layouts doesn't even come *close* to being intuitively readable - using a format with better nesting support has some hope of fixing that, since filesystem layouts are naturally hierarchical. A JSON based format would also be acceptable to me from a functional point of view, although in that case, asking people to edit it directly would be cruel - you would want to transform it to YAML in order to actually read it or write it. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? 
Brisbane, Australia From tarek at ziade.org Fri Jun 22 09:24:59 2012 From: tarek at ziade.org (=?ISO-8859-1?Q?Tarek_Ziad=E9?=) Date: Fri, 22 Jun 2012 09:24:59 +0200 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <20120620134612.3a9f7cfe@pitrou.net> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <8359F922B4E045BD819F7811725306D6@gmail.com> <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> <4FE413E3.6050602@ziade.org> Message-ID: <4FE41DCB.8000001@ziade.org> On 6/22/12 9:11 AM, Nick Coghlan wrote: > On Fri, Jun 22, 2012 at 4:42 PM, Tarek Ziad? wrote: >> On 6/22/12 7:05 AM, Nick Coghlan wrote: >> I don't understand what's the problem is with ini-style files, as they are >> suitable for multi-line variables etc. (see zc.buildout) >> >> yaml vs ini vs xxx seems to be an implementation detail, and my take on this >> is that we have ConfigParser in the stdlib > You can't do more than one layer of nested data structures cleanly > with an ini-style solution, and some aspects of packaging are just > crying out for metadata that nests more deeply than that. The > setup.cfg format for specifying installation layouts doesn't even come > *close* to being intuitively readable - using a format with better > nesting support has some hope of fixing that, since filesystem layouts > are naturally hierarchical. > > A JSON based format would also be acceptable to me from a functional > point of view, although in that case, asking people to edit it > directly would be cruel - you would want to transform it to YAML in > order to actually read it or write it. I still think this is an implementation detail, and that ini can work here, as they have proven to work with buildout and look very clean to me. But I guess that's not important -- looking forward for you changes proposals on packaging. I am now wondering why we don't have a yaml module in the stdlib btw :) > > Cheers, > Nick. > From ncoghlan at gmail.com Fri Jun 22 09:29:31 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 22 Jun 2012 17:29:31 +1000 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <4FE41DCB.8000001@ziade.org> References: <4FE0F336.7030709@netwok.org> <20120620134612.3a9f7cfe@pitrou.net> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <8359F922B4E045BD819F7811725306D6@gmail.com> <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> <4FE413E3.6050602@ziade.org> <4FE41DCB.8000001@ziade.org> Message-ID: On Fri, Jun 22, 2012 at 5:24 PM, Tarek Ziad? wrote: > On 6/22/12 9:11 AM, Nick Coghlan wrote: >> >> On Fri, Jun 22, 2012 at 4:42 PM, Tarek Ziad? ?wrote: >>> >>> On 6/22/12 7:05 AM, Nick Coghlan wrote: >>> I don't understand what's the problem is with ini-style files, as they >>> are >>> suitable for multi-line variables etc. (see zc.buildout) >>> >>> yaml vs ini vs xxx seems to be an implementation detail, and my take on >>> this >>> is that we have ConfigParser in the stdlib >> >> You can't do more than one layer of nested data structures cleanly >> with an ini-style solution, and some aspects of packaging are just >> crying out for metadata that nests more deeply than that. 
The >> setup.cfg format for specifying installation layouts doesn't even come >> *close* to being intuitively readable - using a format with better >> nesting support has some hope of fixing that, since filesystem layouts >> are naturally hierarchical. >> >> A JSON based format would also be acceptable to me from a functional >> point of view, although in that case, asking people to edit it >> directly would be cruel - you would want to transform it to YAML in >> order to actually read it or write it. > > > I still think this is an implementation detail, and that ini can work here, > as they have proven to work with buildout and look very clean to me. Yeah, and I later realised that RPM also uses a flat format. I think nested is potentially cleaner, but that's the kind of thing a PEP can thrash out. > I am now wondering why we don't have a yaml module in the stdlib btw :) ini-style is often good enough, and failing that there's json. Or, you just depend on PyYAML :) Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From vinay_sajip at yahoo.co.uk Fri Jun 22 09:49:31 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Fri, 22 Jun 2012 07:49:31 +0000 (UTC) Subject: [Python-Dev] Status of packaging in 3.3 References: <4FE0F336.7030709@netwok.org> <20120620134612.3a9f7cfe@pitrou.net> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <8359F922B4E045BD819F7811725306D6@gmail.com> <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> <4FE413E3.6050602@ziade.org> Message-ID: Nick Coghlan gmail.com> writes: > > On Fri, Jun 22, 2012 at 4:42 PM, Tarek Ziad? ziade.org> wrote: > > On 6/22/12 7:05 AM, Nick Coghlan wrote: > > I don't understand what's the problem is with ini-style files, as they are > > suitable for multi-line variables etc. (see zc.buildout) > > > > yaml vs ini vs xxx seems to be an implementation detail, and my take on this > > is that we have ConfigParser in the stdlib > > You can't do more than one layer of nested data structures cleanly > with an ini-style solution, and some aspects of packaging are just > crying out for metadata that nests more deeply than that. The > setup.cfg format for specifying installation layouts doesn't even come > *close* to being intuitively readable - using a format with better > nesting support has some hope of fixing that, since filesystem layouts > are naturally hierarchical. > > A JSON based format would also be acceptable to me from a functional > point of view, although in that case, asking people to edit it > directly would be cruel - you would want to transform it to YAML in > order to actually read it or write it. The format-neutral alternative I used for logging configuration was a dictionary schema - JSON, YAML and Python code can all be mapped to that. Perhaps the relevant APIs can work at the dict layer. I agree that YAML is the human-friendliest "one obvious" format for review/edit, though. +1 to the overall approach suggested, it makes a lot of sense. 
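As a sketch of what working at the dict layer could look like (the
YAML branch assumes third-party PyYAML; only the dict contract would
matter to the APIs):

    import json

    def load_dist_metadata(path):
        # Whatever the on-disk format, downstream code only ever
        # sees a plain dict.
        with open(path) as f:
            if path.endswith(('.yaml', '.yml')):
                import yaml  # third-party PyYAML
                return yaml.safe_load(f)
            return json.load(f)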
Simple is better than complex, and all that :-)

Regards,

Vinay Sajip

From vinay_sajip at yahoo.co.uk Fri Jun 22 09:56:09 2012
From: vinay_sajip at yahoo.co.uk (Vinay Sajip)
Date: Fri, 22 Jun 2012 07:56:09 +0000 (UTC)
Subject: [Python-Dev] Status of packaging in 3.3
References: <4FE0F336.7030709@netwok.org> <20120620134612.3a9f7cfe@pitrou.net>
	<4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com>
	<4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com>
	<4FE3732D.9090401@ziade.org> <8359F922B4E045BD819F7811725306D6@gmail.com>
	<4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> <4FE413E3.6050602@ziade.org>
	<4FE41DCB.8000001@ziade.org>
Message-ID:

Nick Coghlan gmail.com> writes:

> ini-style is often good enough, and failing that there's json. Or, you
> just depend on PyYAML :)

Except when PyYAML is packaged and distributed using dist.yaml :-)

Regards,

Vinay Sajip

From donald.stufft at gmail.com Fri Jun 22 10:05:39 2012
From: donald.stufft at gmail.com (Donald Stufft)
Date: Fri, 22 Jun 2012 04:05:39 -0400
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To:
References: <4FE0F336.7030709@netwok.org> <20120620134612.3a9f7cfe@pitrou.net>
	<4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com>
	<4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com>
	<4FE3732D.9090401@ziade.org> <8359F922B4E045BD819F7811725306D6@gmail.com>
	<4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> <4FE413E3.6050602@ziade.org>
	<4FE41DCB.8000001@ziade.org>
Message-ID: <910456EE8BF24A9C8D205A80E79AA26D@gmail.com>

I think JSON probably makes the most sense: it's already part of the
stdlib for 2.6+, and while it has some issues with human editability,
there's no reason why this JSON file couldn't be auto-generated from
another data structure by the "package creation tool" that exists
outside of the stdlib (or inside, but outside the scope of this
proposal).

Which is really part of what I like a lot about this proposal: how you
arrive at the final product doesn't matter (distutils, bento, a
yet-uncreated tool, manually crafting tarballs and files). You could
describe your data in YAML or Python or, going towards the more
magical end of things, it could be automatically generated from your
filesystem. It doesn't matter; all that matters is that you create
your final archive with the agreed-upon structure and the agreed-upon
dist.(yml|json|ini), and any compliant installer should be able to
install it.

On Friday, June 22, 2012 at 3:56 AM, Vinay Sajip wrote:
> Nick Coghlan gmail.com> writes:
>
> > ini-style is often good enough, and failing that there's json. Or, you
> > just depend on PyYAML :)
>
> Except when PyYAML is packaged and distributed using dist.yaml :-)
>
> Regards,
>
> Vinay Sajip
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From cournape at gmail.com Fri Jun 22 10:32:06 2012 From: cournape at gmail.com (David Cournapeau) Date: Fri, 22 Jun 2012 09:32:06 +0100 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <8359F922B4E045BD819F7811725306D6@gmail.com> <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> Message-ID: On Fri, Jun 22, 2012 at 6:05 AM, Nick Coghlan wrote: > On Fri, Jun 22, 2012 at 10:01 AM, Donald Stufft > wrote: > > The idea i'm hoping for is to stop worrying about one implementation over > > another and > > hoping to create a common format that all the tools can agree upon and > > create/install. > > Right, and this is where it encouraged me to see in the Bento docs > that David had cribbed from RPM in this regard (although I don't > believe he has cribbed *enough*). > > A packaging system really needs to cope with two very different levels > of packaging: > 1. Source distributions (e.g. SRPMs). To get from this to useful > software requires developer tools. > 2. "Binary" distributions (e.g. RPMs). To get from this to useful > software mainly requires a "file copy" utility (well, that and an > archive decompressor). > > An SRPM is *just* a SPEC file and source tarball. That's it. To get > from that to an installed product, you have a bunch of additional > "BuildRequires" dependencies, along with %build and %install scripts > and a %files definition that define what will be packaged up and > included in the binary RPM. The exact nature of the metadata format > doesn't really matter, what matters is that it's a documented standard > that multiple tools can read. > > An RPM includes files that actually get installed on the target > system. An RPM can be arch specific (if they include built binary > bits) or "noarch" if they're platform neutral. > > distutils really only plays at the SRPM level - there is no defined OS > neutral RPM equivalent. That's why I brought up the bdist_simple > discussion earlier in the thread - if we can agree on a standard > bdist_simple format, then we can more cleanly decouple the "build" > step from the "install" step. > > I think one of the key things to learn from the SPEC file format is > the configuration language it used for the various build phases: sh > (technically, any shell on the system, but almost everyone just uses > the default system shell) > > This is why you can integrate whatever build system you like with it: > so long as you can invoke the build from the shell, then you can use > it to make your RPM. > > Now, there's an obvious problem with this: it's completely useless > from a *cross-platform* building point of view. Isn't it a shame > there's no language we could use that would let us invoke build > systems in a cross platform way? Oh, wait... > > So here's some sheer pie-in-the-sky speculation. If people like > elements of this idea enough to run with it, great. If not... oh well: > > - I believe the "egg" term has way too much negative baggage (courtesy > of easy_install), and find the full term Distribution to be too easily > confused with "Linux distribution". However, "Python dist" is > unambiguous (since the more typical abbreviation for an aggregate > distribution is "distro"). 
Thus, I attempt to systematically refer to > the objects used to distribute Python software from developers to > users as "dists". In practice, this terminology is already used in > many places (distutils, sdist, bdist_msi, bdist_rpm, the .dist-info > format in PEP 376 etc). Thus, Python software is distributed as dists > (either sdists or bdists), which may in turn be converted to distro > packages (e.g. SRPMs and RPMs) for deployment to particular > environments. > > - I reject setup.cfg, as I believe ini-style configuration files are > not appropriate for a metadata format that needs to include file > listings and code fragments > > - I reject bento.info, as I think if we accept > yet-another-custom-configuration-file-format into the standard library > instead of just using YAML, we're even crazier than is already > apparent >

I agree having yet another format is a bit crazy, and am actually considering changing bento.info to be YAML. I initially went toward a cabal-like syntax instead for the following reasons:
- lack of conditionals (a must IMO; they are even more useful for cross-platform stuff than they are for RPM alone)
- YAML becomes quite a bit verbose for some cases

I find JSON to be inappropriate because, beyond the above issues, it does not support comments and it is significantly more verbose. That being said, that's just syntax, and what matters more is the features we allow:
- I like the idea of categorizing the way you did better than how it works in bento, but I think one needs to be able to create one's own categories as well. A category is just a mapping from a name to an install directory (see http://cournape.github.com/Bento/html/tutorial.html#installed-data-files-datafiles-section, but we could find another syntax of course).
- I don't find the distinction between source and build very useful in the yet-to-be-implemented description. Or maybe that's just a naming issue, and it is the same distinction as extra files vs installed files that I made in bento? See next point.
- regarding build, I don't think we want to force people to implement target locations there. I also don't see how you want to make it work for built files (you don't know the name yet). Can you give an example of how it would work for, say, an extension and built docs?
- regarding hooks: I think it is simpler to have a single file which contains all the hooks, if only to allow easy communication and code reuse between hooks. I don't see any drawback to using only one file?
- Besides containing the file bits + metadata, I wonder if one should allow additional fields that may be tool-specific. In bento, there are a couple of such additional fields that may not be very useful to others.
- do we want to allow for recursive dist.yaml? This numpy.distutils feature is used quite a bit, and I believe Twisted has something similar.

David
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From p.f.moore at gmail.com Fri Jun 22 10:40:47 2012 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 22 Jun 2012 09:40:47 +0100 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <8359F922B4E045BD819F7811725306D6@gmail.com> <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> Message-ID: On 22 June 2012 06:05, Nick Coghlan wrote: > distutils really only plays at the SRPM level - there is no defined OS > neutral RPM equivalent. That's why I brought up the bdist_simple > discussion earlier in the thread - if we can agree on a standard > bdist_simple format, then we can more cleanly decouple the "build" > step from the "install" step. That was essentially the key insight I was trying to communicate in my "think about the end users" comment. Thanks, Nick! Comments on the rest of your email to follow (if needed) when I've digested it... Paul
From tochansky at tochlab.net Fri Jun 22 10:09:54 2012 From: tochansky at tochlab.net (Dmitriy Tochansky) Date: Fri, 22 Jun 2012 08:09:54 +0000 Subject: [Python-Dev] Checking if unsigned int less than zero. Message-ID: <302e463f840221a066fb659e4963abb6@tochlab.net> Hello! Playing with the CPython source, I found some strange lines in socketmodule.c:

---
if (flowinfo < 0 || flowinfo > 0xfffff) {
    PyErr_SetString(
        PyExc_OverflowError,
        "getsockaddrarg: flowinfo must be 0-1048575.");
    return 0;
}
---

---
if (flowinfo < 0 || flowinfo > 0xfffff) {
    PyErr_SetString(PyExc_OverflowError,
                    "getsockaddrarg: flowinfo must be 0-1048575.");
    return NULL;
}
---

The flowinfo variable is declared a few lines above as unsigned int. Is there any practical sense in this check? It seems gcc just removes the < 0 test. I think any compiler will generate code that compares as unsigned -- for example, on x86 that's JAE rather than JGE. Maybe this code is for "bad" compilers or an exotic arch? -- Dmitriy
From vinay_sajip at yahoo.co.uk Fri Jun 22 11:11:20 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Fri, 22 Jun 2012 09:11:20 +0000 (UTC) Subject: [Python-Dev] Status of packaging in 3.3 References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <8359F922B4E045BD819F7811725306D6@gmail.com> <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> Message-ID: David Cournapeau gmail.com> writes: > I agree having yet another format is a bit crazy, and am actually considering changing bento.info to be YAML. I initially went toward a cabal-like syntax instead for the following reasons: > - lack of conditionals (a must IMO; they are even more useful for cross-platform stuff than they are for RPM alone) Conditionals could perhaps be handled in different ways, e.g. 1. Markers as used in distutils2/packaging (where the condition is platform or version related) 2. A scheme to resolve variables, such as is used in PEP 391 (dictionary-based configuration for logging). If conditionals are much more involved than this, there's a possibility of introducing too much program logic - the setup.py situation.
> - regarding hooks: I think it is simpler to have a single file which contains all the hooks, if only to allow easy communication and code reuse between hooks. I don't see any drawback to using only one file? I was assuming that the dist.yaml file would just have callable references here; I suppose having (sizable) Python fragments in dist.yaml might become unwieldy. Regards, Vinay Sajip
From d.s.seljebotn at astro.uio.no Fri Jun 22 11:22:14 2012 From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn) Date: Fri, 22 Jun 2012 11:22:14 +0200 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <20120620134612.3a9f7cfe@pitrou.net> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <8359F922B4E045BD819F7811725306D6@gmail.com> <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> Message-ID: <4FE43946.2050505@astro.uio.no> On 06/22/2012 10:40 AM, Paul Moore wrote: > On 22 June 2012 06:05, Nick Coghlan wrote: >> distutils really only plays at the SRPM level - there is no defined OS >> neutral RPM equivalent. That's why I brought up the bdist_simple >> discussion earlier in the thread - if we can agree on a standard >> bdist_simple format, then we can more cleanly decouple the "build" >> step from the "install" step. > > That was essentially the key insight I was trying to communicate in my > "think about the end users" comment. Thanks, Nick! The subtlety here is that there's no way to know before building the package what files should be installed. (For simple extensions, and perhaps documentation, you could get away with ad-hoc rules or special support for Sphinx and what-not, but there's no general solution that works in all cases.) What Bento does is have one metadata file for the source-package, and another metadata file (manifest) for the built-package. The latter is normally generated by the build process (but follows a standard nevertheless). Then that manifest is used for installation (through several available methods). Dag
From donald.stufft at gmail.com Fri Jun 22 11:38:58 2012 From: donald.stufft at gmail.com (Donald Stufft) Date: Fri, 22 Jun 2012 05:38:58 -0400 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <4FE43946.2050505@astro.uio.no> References: <4FE0F336.7030709@netwok.org> <20120620134612.3a9f7cfe@pitrou.net> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <8359F922B4E045BD819F7811725306D6@gmail.com> <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> <4FE43946.2050505@astro.uio.no> Message-ID: On Friday, June 22, 2012 at 5:22 AM, Dag Sverre Seljebotn wrote: > > What Bento does is have one metadata file for the source-package, and > another metadata file (manifest) for the built-package. The latter is > normally generated by the build process (but follows a standard > nevertheless). Then that manifest is used for installation (through > several available methods). From what I understand, this dist.(yml|json|ini) would be replacing the manifest, not the bento.info, then. When bento builds a package compatible with the proposed format, instead of generating its own manifest it would generate the dist.(yml|json|ini).
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From d.s.seljebotn at astro.uio.no Fri Jun 22 11:52:44 2012 From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn) Date: Fri, 22 Jun 2012 11:52:44 +0200 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <8359F922B4E045BD819F7811725306D6@gmail.com> <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> <4FE43946.2050505@astro.uio.no> Message-ID: <4FE4406C.5040009@astro.uio.no> On 06/22/2012 11:38 AM, Donald Stufft wrote: > On Friday, June 22, 2012 at 5:22 AM, Dag Sverre Seljebotn wrote: >> >> What Bento does is have one metadata file for the source-package, and >> another metadata file (manifest) for the built-package. The latter is >> normally generated by the build process (but follows a standard >> nevertheless). Then that manifest is used for installation (through >> several available methods). > From what I understand, this dist.(yml|json|ini) would be replacing the > manifest, not the bento.info, then. When bento builds a package compatible > with the proposed format, instead of generating its own manifest it would > generate the dist.(yml|json|ini). Well, but I think you need to care about the whole process here. Focusing only on the "end-user case" and binary installers has the flip side that smuggling in a back door is incredibly easy in compiled binaries. You simply upload a binary that doesn't match the source. The reason PyPI isn't one big security risk is that packages are built from source, and so you can have some confidence that backdoors would be noticed and highlighted by somebody. Having a common standard for the binary installation phase would be great, sure, but security-minded users would still need to build from source in every case (or trust a 3rd-party build farm that builds from source). The reason you can trust RPMs at all is because they're built from SRPMs. Dag
From __peter__ at web.de Fri Jun 22 11:55:16 2012 From: __peter__ at web.de (Peter Otten) Date: Fri, 22 Jun 2012 11:55:16 +0200 Subject: [Python-Dev] Checking if unsigned int less than zero. References: <302e463f840221a066fb659e4963abb6@tochlab.net> Message-ID: Dmitriy Tochansky wrote: > Playing with the CPython source, I found some strange lines in
> socketmodule.c:
>
> ---
> if (flowinfo < 0 || flowinfo > 0xfffff) {
>     PyErr_SetString(
>         PyExc_OverflowError,
>         "getsockaddrarg: flowinfo must be 0-1048575.");
>     return 0;
> }
> ---
>
> ---
> if (flowinfo < 0 || flowinfo > 0xfffff) {
>     PyErr_SetString(PyExc_OverflowError,
>                     "getsockaddrarg: flowinfo must be 0-1048575.");
>     return NULL;
> }
> ---
>
> The flowinfo variable is declared a few lines above as unsigned int. Is > there any practical sense in this check? It seems gcc just removes the > < 0 test. I think any compiler will generate code that compares as > unsigned -- for example, on x86 that's JAE rather than JGE. Maybe this > code is for "bad" compilers or an exotic arch? I think you are right, the < 0 check is redundant. The developers probably forgot to remove it when http://bugs.python.org/issue9975 was fixed.
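For reference, the range check in question is observable from the Python side; a quick sketch (assuming an IPv6-enabled host):

    import socket

    s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    try:
        # flowinfo is the third element of the AF_INET6 address 4-tuple;
        # 0x100000 is one past the documented 0-1048575 range, so
        # getsockaddrarg rejects it.
        s.bind(('::1', 0, 0x100000, 0))
    except OverflowError as exc:
        print(exc)  # getsockaddrarg: flowinfo must be 0-1048575.
    finally:
        s.close()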
From solipsis at pitrou.net Fri Jun 22 11:55:51 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 22 Jun 2012 11:55:51 +0200 Subject: [Python-Dev] Status of packaging in 3.3 References: <4FE0F336.7030709@netwok.org> <20120620134612.3a9f7cfe@pitrou.net> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <8359F922B4E045BD819F7811725306D6@gmail.com> <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> Message-ID: <20120622115551.049fdb43@pitrou.net> On Fri, 22 Jun 2012 15:05:08 +1000 Nick Coghlan wrote: > > So here's some sheer pie-in-the-sky speculation. If people like > elements of this idea enough to run with it, great. If not... oh well: Could this kind of discussion perhaps go on python-ideas? Thanks Antoine.
From vinay_sajip at yahoo.co.uk Fri Jun 22 12:09:56 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Fri, 22 Jun 2012 10:09:56 +0000 (UTC) Subject: [Python-Dev] Status of packaging in 3.3 References: <4FE0F336.7030709@netwok.org> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <8359F922B4E045BD819F7811725306D6@gmail.com> <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> <4FE43946.2050505@astro.uio.no> <4FE4406C.5040009@astro.uio.no> Message-ID: Dag Sverre Seljebotn astro.uio.no> writes: > Well, but I think you need to care about the whole process here. > > Focusing only on the "end-user case" and binary installers has the flip > side that smuggling in a back door is incredibly easy in compiled > binaries. You simply upload a binary that doesn't match the source. > > The reason PyPI isn't one big security risk is that packages are built > from source, and so you can have some confidence that backdoors would be > noticed and highlighted by somebody. > > Having a common standard for the binary installation phase would be great, > sure, but security-minded users would still need to build from source in > every case (or trust a 3rd-party build farm that builds from source). > The reason you can trust RPMs at all is because they're built from SRPMs. Easy enough on POSIX platforms, perhaps, but what about Windows? One can't expect a C compiler to be installed everywhere. Perhaps security against backdoors could also be provided through other mechanisms, such as signing of binary installers. Regards, Vinay Sajip
From cournape at gmail.com Fri Jun 22 12:20:37 2012 From: cournape at gmail.com (David Cournapeau) Date: Fri, 22 Jun 2012 11:20:37 +0100 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <20120620134612.3a9f7cfe@pitrou.net> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <8359F922B4E045BD819F7811725306D6@gmail.com> <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> <4FE43946.2050505@astro.uio.no> Message-ID: On Fri, Jun 22, 2012 at 10:38 AM, Donald Stufft wrote: > On Friday, June 22, 2012 at 5:22 AM, Dag Sverre Seljebotn wrote: > > > What Bento does is have one metadata file for the source-package, and > another metadata file (manifest) for the built-package. The latter is > normally generated by the build process (but follows a standard > nevertheless). Then that manifest is used for installation (through > several available methods). > > From what I understand, this dist.(yml|json|ini) would be replacing the > manifest, not the bento.info, then.
> When bento builds a package compatible with the proposed format, instead of > generating its own manifest it would generate the dist.(yml|json|ini). If by manifest you mean the build manifest, then that's not desirable: the manifest contains the explicit filenames, and those are platform/environment specific. You don't want this to be user-facing. The way it should work is:
- package description (dist.yaml, setup.cfg, bento.info, whatever)
- use this as input to the build process
- the build process produces a build manifest that is platform specific. It should be extremely simple, no conditionals or anything, and should ideally be consumable by both Python and non-Python programs.
- the build manifest is then the sole input to the process building installers (besides the actual build tree, of course).

Conceptually, after the build, you can do:

    manifest = BuildManifest.from_file("build_manifest.json")
    # update_path allows the path scheme to be changed depending on the
    # installer format
    manifest.update_path(path_configuration)
    for category, source, target in manifest.iter_files():
        # the simple case is copying source to target, potentially using
        # the category label for category-specific handling
        ...

This was enough for me to do straight install, eggs, .exe and .msi Windows installers, and .mpkg from that with a relatively simple API. Bonus point: if you include this file inside the installers, you can actually losslessly convert from one to the other. David
From d.s.seljebotn at astro.uio.no Fri Jun 22 12:28:30 2012 From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn) Date: Fri, 22 Jun 2012 12:28:30 +0200 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <8359F922B4E045BD819F7811725306D6@gmail.com> <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> <4FE43946.2050505@astro.uio.no> Message-ID: <4FE448CE.7040909@astro.uio.no> On 06/22/2012 12:20 PM, David Cournapeau wrote: > On Fri, Jun 22, 2012 at 10:38 AM, Donald Stufft wrote: >> On Friday, June 22, 2012 at 5:22 AM, Dag Sverre Seljebotn wrote: >> >> What Bento does is have one metadata file for the source-package, and >> another metadata file (manifest) for the built-package. The latter is >> normally generated by the build process (but follows a standard >> nevertheless). Then that manifest is used for installation (through >> several available methods). >> >> From what I understand, this dist.(yml|json|ini) would be replacing the >> manifest, not the bento.info, then. When bento builds a package compatible >> with the proposed format, instead of generating its own manifest it would >> generate the dist.(yml|json|ini). > If by manifest you mean the build manifest, then that's not desirable: > the manifest contains the explicit filenames, and those are > platform/environment specific. You don't want this to be user-facing. > The way it should work is: > - package description (dist.yaml, setup.cfg, bento.info, whatever) > - use this as input to the build process > - the build process produces a build manifest that is platform > specific. It should be extremely simple, no conditionals or anything, > and should ideally be consumable by both Python and non-Python programs. > - the build manifest is then the sole input to the process building > installers (besides the actual build tree, of course).
>
> Conceptually, after the build, you can do:
>
>     manifest = BuildManifest.from_file("build_manifest.json")
>     # update_path allows the path scheme to be changed depending on the
>     # installer format
>     manifest.update_path(path_configuration)
>     for category, source, target in manifest.iter_files():
>         # the simple case is copying source to target, potentially using
>         # the category label for category-specific handling
>         ...
>
> This was enough for me to do straight install, eggs, .exe and .msi
> Windows installers, and .mpkg from that with a relatively simple API.
> Bonus point: if you include this file inside the installers, you can
> actually losslessly convert from one to the other.

I think Donald's suggestion can be phrased like this: during build, copy the dist metadata (name, version, dependencies...) to the build manifest as well. Then allow uploading only the built versions for different platforms to PyPI etc., and allow relative anarchy to reign in how you create the built dists. And I'm saying that would encourage a culture that's very dangerous from a security perspective. Even if many use binaries, it is important to encourage a culture where it is always trivial (well, as trivial as we can possibly make it, in the case of Windows) to build from source for those who wish to. Making the user-facing entry point of the dist metadata be in the source package rather than the binary package seems like a necessary (but not sufficient) condition for such a culture. Dag
From donald.stufft at gmail.com Fri Jun 22 12:35:27 2012 From: donald.stufft at gmail.com (Donald Stufft) Date: Fri, 22 Jun 2012 06:35:27 -0400 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <4FE4406C.5040009@astro.uio.no> References: <4FE0F336.7030709@netwok.org> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <8359F922B4E045BD819F7811725306D6@gmail.com> <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> <4FE43946.2050505@astro.uio.no> <4FE4406C.5040009@astro.uio.no> Message-ID: <82D3119BA84A4CB4B46656385FFC4A5A@gmail.com> On Friday, June 22, 2012 at 5:52 AM, Dag Sverre Seljebotn wrote: > > The reason PyPI isn't one big security risk is that packages are built > from source, and so you can have some confidence that backdoors would be > noticed and highlighted by somebody. > > Having a common standard for the binary installation phase would be great, > sure, but security-minded users would still need to build from source in > every case (or trust a 3rd-party build farm that builds from source). > The reason you can trust RPMs at all is because they're built from SRPMs. > > Dag The reason you trust RPMs is not because they are built from SRPMs, but because you trust the people running the repositories. In the case of PyPI you can't make a global call to implicitly trust all packages, because there is no gatekeeper as in an RPM system, so it falls to the individual to decide for him or herself which authors they trust and which they do not. But this proposal alludes to both source dists and built dists, either of which may be published and installed from. In the case of a source dist the package format would include all the metadata of the package. Included in that is a Python script that knows how to build this particular package (if special steps are required). This script could simply call out to an already existing build system, or if simple enough work on its own. Source dists would also obviously contain the source.
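A minimal sketch of the kind of per-package build script being described (all names and conventions here are hypothetical, and the delegation to setup.py is just one possibility):

    # build.py -- hypothetical build hook shipped inside a source dist
    import subprocess
    import sys

    def build(source_dir, build_dir):
        # Delegate to whatever build system the author already uses;
        # a pure-Python package could simply copy files into build_dir.
        subprocess.check_call(
            [sys.executable, 'setup.py', 'build', '--build-base', build_dir],
            cwd=source_dir)

    if __name__ == '__main__':
        build(sys.argv[1], sys.argv[2])

An installer would only need to know to run the declared script; everything behind it stays the package author's business.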
In the case of a binary dist the package format would include all the metadata of the package, plus the binary files.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From donald.stufft at gmail.com Fri Jun 22 12:38:49 2012 From: donald.stufft at gmail.com (Donald Stufft) Date: Fri, 22 Jun 2012 06:38:49 -0400 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <20120620134612.3a9f7cfe@pitrou.net> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <8359F922B4E045BD819F7811725306D6@gmail.com> <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> <4FE43946.2050505@astro.uio.no> Message-ID: <445D9600A91C43D29CB74F162260FD11@gmail.com> On Friday, June 22, 2012 at 6:20 AM, David Cournapeau wrote: > If by manifest you mean the build manifest, then that's not desirable: > the manifest contains the explicit filenames, and those are > platform/environment specific. You don't want this to be user-facing. It appears I misunderstood the files that bento uses then ;) It is late (well, early now) and I have not used bento extensively. What I suggest mirrors RPMs, except that the build step (when there is indeed a build step) is handled by a Python script included in the package by the author of said package.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From p.f.moore at gmail.com Fri Jun 22 13:27:19 2012 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 22 Jun 2012 12:27:19 +0100 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <4FE448CE.7040909@astro.uio.no> References: <4FE0F336.7030709@netwok.org> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <8359F922B4E045BD819F7811725306D6@gmail.com> <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> <4FE43946.2050505@astro.uio.no> <4FE448CE.7040909@astro.uio.no> Message-ID: On 22 June 2012 11:28, Dag Sverre Seljebotn wrote: > And I'm saying that would encourage a culture that's very dangerous from a > security perspective. Even if many use binaries, it is important to > encourage a culture where it is always trivial (well, as trivial as we can > possibly make it, in the case of Windows) to build from source for those who > wish to. And what I am trying to say is that no matter how much effort gets put into trying to make build from source easy, it'll pretty much always not be even remotely trivial on Windows. There has been a lot of work done to try to achieve this, but as far as I've seen, it's always failed. One external dependency, and you're in a mess. Unless you're proposing some means of Python's packaging solution encapsulating URLs for binary libraries of external packages which will be automatically downloaded - and then all the security holes open again. You have to remember that not only do many Windows users not have a compiler, but also getting a compiler is non-trivial (not hard, just download and install VS Express, but still a pain to do just to get (say) lxml installed). And there is no standard location for external libraries on Windows, so you also need the end user to specify where everything is (or guess, or mandate a directory structure). The only easy-to-use solution that has ever really worked on Windows in my experience is downloadable binaries.
Blame whoever you like, point out that it's not good practice if you must, but don't provide binaries and you lose a major part of your user base. (You may choose not to care about losing that group, that's a different question). Signed binaries may be a solution. My experience with signed binaries has not been exactly positive, but it's an option. Presumably PyPI would be the trusted authority? Would PyPI and the downloaders need to use SSL? Would developers need to have signing keys to use PyPI? And more to the point, do the people designing the packaging solutions have experience with this sort of stuff (I sure don't :-))? Paul. From vinay_sajip at yahoo.co.uk Fri Jun 22 14:09:04 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Fri, 22 Jun 2012 12:09:04 +0000 (UTC) Subject: [Python-Dev] Status of packaging in 3.3 References: <4FE0F336.7030709@netwok.org> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <8359F922B4E045BD819F7811725306D6@gmail.com> <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> <4FE43946.2050505@astro.uio.no> <4FE448CE.7040909@astro.uio.no> Message-ID: Paul Moore gmail.com> writes: > Signed binaries may be a solution. My experience with signed binaries > has not been exactly positive, but it's an option. Presumably PyPI > would be the trusted authority? Would PyPI and the downloaders need to > use SSL? Would developers need to have signing keys to use PyPI? And > more to the point, do the people designing the packaging solutions > have experience with this sort of stuff (I sure don't )? I'm curious - what problems have you had with signed binaries? I dipped my toes in this particular pool with the Python launcher installers - I got a code signing certificate and signed my MSIs with it. The process was fairly painless. As far as I know, all signing does is to indicate that the binary package hasn't been tampered with and allows the downloader to decide whether they trust the signer not to have allowed backdoors, etc. I don't see that it mandates use of SSL, or even signing, by anyone. At least some people will require that an installer be invokable with an option that causes it to bail if any part of what's being installed can't be verified (for some value of "verified"). Regards, Vinay Sajip From barry at python.org Fri Jun 22 14:14:04 2012 From: barry at python.org (Barry Warsaw) Date: Fri, 22 Jun 2012 08:14:04 -0400 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <8359F922B4E045BD819F7811725306D6@gmail.com> <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> <4FE43946.2050505@astro.uio.no> <4FE448CE.7040909@astro.uio.no> Message-ID: <20120622081404.659d8a65@resist.wooz.org> On Jun 22, 2012, at 12:27 PM, Paul Moore wrote: >And what I am trying to say is that no matter how much effort gets put >into trying to make build from source easy, it'll pretty much always >not be even remotely trivial on Windows. It seems to me that a "Windows build service" is something the Python infrastructure could support. This would be analogous to the types of binary build services Linux distros provide, e.g. 
the normal Ubuntu workflow of uploading a source package to a build daemon, which churns away for a while, and results in platform-specific binary packages which can be directly installed on an end-user system. -Barry
From barry at python.org Fri Jun 22 14:20:20 2012 From: barry at python.org (Barry Warsaw) Date: Fri, 22 Jun 2012 08:20:20 -0400 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <8359F922B4E045BD819F7811725306D6@gmail.com> <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> <4FE413E3.6050602@ziade.org> Message-ID: <20120622082020.1400a483@resist.wooz.org> On Jun 22, 2012, at 07:49 AM, Vinay Sajip wrote: >The format-neutral alternative I used for logging configuration was a >dictionary schema - JSON, YAML and Python code can all be mapped to >that. Perhaps the relevant APIs can work at the dict layer. I don't much care whether it's ini, json, or yaml, but I do think it needs to be declarative and language neutral. I don't want to lock up all that metadata into Python data structures. There are valid use cases for being able to access the data from outside of Python. And please give some thought to test declarations. We need a standard way to declare how a package's tests should be run, and what the test dependencies are, so that we can improve the quality of all distro/binary packages by running the test suite at build time. Having to guess whether it's `python setup.py test` or `python -m unittest discover`, or whether nose or py.test is required, etc., is no good. -Barry
From mail at timgolden.me.uk Fri Jun 22 14:23:00 2012 From: mail at timgolden.me.uk (Tim Golden) Date: Fri, 22 Jun 2012 13:23:00 +0100 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <20120622081404.659d8a65@resist.wooz.org> References: <4FE0F336.7030709@netwok.org> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <8359F922B4E045BD819F7811725306D6@gmail.com> <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> <4FE43946.2050505@astro.uio.no> <4FE448CE.7040909@astro.uio.no> <20120622081404.659d8a65@resist.wooz.org> Message-ID: <4FE463A4.7070108@timgolden.me.uk> On 22/06/2012 13:14, Barry Warsaw wrote: > On Jun 22, 2012, at 12:27 PM, Paul Moore wrote: > >> And what I am trying to say is that no matter how much effort gets put >> into trying to make build from source easy, it'll pretty much always >> not be even remotely trivial on Windows. > > It seems to me that a "Windows build service" is something the Python > infrastructure could support. This would be analogous to the types of binary > build services Linux distros provide, e.g. the normal Ubuntu workflow of > uploading a source package to a build daemon, which churns away for a while, > and results in platform-specific binary packages which can be directly > installed on an end-user system. The devil would be in the details. As Paul Moore pointed out earlier, building *any* extension which relies on some 3rd-party library on Windows (mysql, libxml, sdl, whatever) can be an exercise in iterative frustration as you discover build requirements on build requirements. This isn't just down to Python: try building TortoiseSvn by yourself, for example. That's not to say that this is insurmountable.
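To picture the dependency chasing described here, a toy sketch -- the metadata shape is invented purely for illustration, not any proposed format:

    # Hypothetical: walk declared external build requirements depth-first.
    def resolve_build_requires(name, index, seen=None):
        # index maps a library name to its metadata dict, e.g.
        # {'lxml': {'build-requires': ['libxml2', 'libxslt']}, ...}
        if seen is None:
            seen = set()
        for dep in index.get(name, {}).get('build-requires', []):
            if dep not in seen:
                seen.add(dep)
                resolve_build_requires(dep, index, seen)
        return seen

    # "Build requirements on build requirements": the closure grows fast.
    index = {'lxml': {'build-requires': ['libxml2', 'libxslt']},
             'libxml2': {'build-requires': ['zlib', 'iconv']},
             'libxslt': {'build-requires': ['libxml2']}}
    print(sorted(resolve_build_requires('lxml', index)))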
Christoph Gohlke has for a long while maintained an unofficial binary store at his site: http://www.lfd.uci.edu/~gohlke/pythonlibs/ but I've no idea how much work he's had to put in to get all the dependencies built. Someone who just turned up with a new build: "Here's a Python interface for ToastRack -- the new card-printing service" would need a way to provide the proposed build infrastructure with what was needed to build the library behind the Python extension. Little fleas have smaller fleas... and so on. TJG
From stephen at xemacs.org Fri Jun 22 14:39:18 2012 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Fri, 22 Jun 2012 21:39:18 +0900 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <87d34rammv.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: <878vffa5c9.fsf@uwakimon.sk.tsukuba.ac.jp> Nick Coghlan writes: > On Fri, Jun 22, 2012 at 4:25 PM, Stephen J. Turnbull wrote: > > Paul Moore writes: > > > > > End users should not need packaging tools on their machines. > > > > I think this desideratum is close to obsolete these days, with webapps > > in "the cloud" downloading resources (including, but not limited to, > > code) on an as-needed basis. > > There's still a lot more to the software world than what happens on > the public internet. That's taking just one extreme out of context. The other extreme I mentioned is a whole (virtual) Python environment to go with your app. And I don't really see a middle ground, unless you're delivering a non-standard stdlib anyway, with all the stuff that end users don't need stripped out of it. They'll get the debugger and the profiler with Python; should we excise them from the stdlib just because end users don't need them? How about packaging diagnostic tools, especially in the early days of the new module? I agreed that end users should not need to download the packaging tools separately or in advance. But that's rather different from having a *requirement* that the tools not be included, or that installers should have no dependencies on the toolset outside of a minimal and opaque runtime module.
From aclark at aclark.net Fri Jun 22 15:13:18 2012 From: aclark at aclark.net (Alex Clark) Date: Fri, 22 Jun 2012 09:13:18 -0400 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <20120620134612.3a9f7cfe@pitrou.net> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <8359F922B4E045BD819F7811725306D6@gmail.com> <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> Message-ID: Hi, On 6/22/12 1:05 AM, Nick Coghlan wrote: > On Fri, Jun 22, 2012 at 10:01 AM, Donald Stufft wrote: >> The idea i'm hoping for is to stop worrying about one implementation over >> another and >> hoping to create a common format that all the tools can agree upon and >> create/install. > > Right, and this is where it encouraged me to see in the Bento docs > that David had cribbed from RPM in this regard (although I don't > believe he has cribbed *enough*). > > A packaging system really needs to cope with two very different levels > of packaging: > 1. Source distributions (e.g. SRPMs). To get from this to useful > software requires developer tools.
> 2. "Binary" distributions (e.g. RPMs). To get from this to useful > software mainly requires a "file copy" utility (well, that and an > archive decompressor). > > An SRPM is *just* a SPEC file and source tarball. That's it. To get > from that to an installed product, you have a bunch of additional > "BuildRequires" dependencies, along with %build and %install scripts > and a %files definition that define what will be packaged up and > included in the binary RPM. The exact nature of the metadata format > doesn't really matter, what matters is that it's a documented standard > that multiple tools can read. > > An RPM includes files that actually get installed on the target > system. An RPM can be arch specific (if they include built binary > bits) or "noarch" if they're platform neutral. > > distutils really only plays at the SRPM level - there is no defined OS > neutral RPM equivalent. That's why I brought up the bdist_simple > discussion earlier in the thread - if we can agree on a standard > bdist_simple format, then we can more cleanly decouple the "build" > step from the "install" step. > > I think one of the key things to learn from the SPEC file format is > the configuration language it used for the various build phases: sh > (technically, any shell on the system, but almost everyone just uses > the default system shell) > > This is why you can integrate whatever build system you like with it: > so long as you can invoke the build from the shell, then you can use > it to make your RPM. > > Now, there's an obvious problem with this: it's completely useless > from a *cross-platform* building point of view. Isn't it a shame > there's no language we could use that would let us invoke build > systems in a cross platform way? Oh, wait... > > So here's some sheer pie-in-the-sky speculation. If people like > elements of this idea enough to run with it, great. If not... oh well: > > - I believe the "egg" term has way too much negative baggage (courtesy > of easy_install), and find the full term Distribution to be too easily > confused with "Linux distribution". However, "Python dist" is > unambiguous (since the more typical abbreviation for an aggregate > distribution is "distro"). Thus, I attempt to systematically refer to > the objects used to distribute Python software from developers to > users as "dists". In practice, this terminology is already used in > many places (distutils, sdist, bdist_msi, bdist_rpm, the .dist-info > format in PEP 376 etc). Thus, Python software is distributed as dists > (either sdists or bdists), which may in turn be converted to distro > packages (e.g. SRPMs and RPMs) for deployment to particular > environments. +0.5. There is definitely a problem with the term "egg", but I don't think negative baggage is it. Rather, I think "egg" is just plain too confusing, and perhaps too "cutsie", too. A blurb from the internet[1]: "An egg is a bundle that contains all the package data. In the ideal case, an egg is a zip-compressed file with all the necessary package files. But in some cases, setuptools decides (or is told by switches) that a package should not be zip-compressed. In those cases, an egg is simply an uncompressed subdirectory, but with the same contents. The single file version is handy for transporting, and saves a little bit of disk space, but an egg directory is functionally and organizationally identical." 
Compared to the definitions of package and distribution I posted earlier in this thread, the confusion is: - A package is one or more modules inside another module, a distribution is a compressed archive of those modules, but an egg is either or both. - The blurb author uses the term "package data" presumably to refer to package modules, package data (i.e. resources like templates, etc), and package metadata. So to avoid this confusion I've personally stopped using the term "egg" in favor of "package". (Outside a computer context, everyone knows a package is something "with stuff in it") But as Donald said, what we are all talking about is technically called a "distribution". ("Honey, a distribution arrived for you in the mail today!" :-)) I love that Nick is thinking "outside the box" re: terminology, but I'm not 100% convinced the new term should be "dist". Rather I propose: - Change the definition of package to: a module (or modules) plus package data and package metadata inside another module. - Refer to source dists as "source packages" i.e. packages containing source code. - Refer to binary dists as "binary packages" i.e. packages containing byte code and executables. I believe this is the most "human" thing we can do[2]. Alex [1] http://www.ibm.com/developerworks/linux/library/l-cppeak3/index.html [2] http://python-for-humans.heroku.com > > - I reject setup.cfg, as I believe ini-style configuration files are > not appropriate for a metadata format that needs to include file > listings and code fragments > > - I reject bento.info, as I think if we accept > yet-another-custom-configuration-file-format into the standard library > instead of just using YAML, we're even crazier than is already > apparent > > - I shall use "dist.yaml" as my proposed name for my "I wish I could > define packages like this" format (and yes, that means adding yaml > support to the standard library is part of the wish) > > - many of the details below will be flawed, but I want to give a clear > idea for how a concept like this might work in practice > > - we need to define a clear set of build phases, and then design the > dist metadata format accordingly. 
For example: > - source > - uses a "source" section in dist.yaml > - "source/install" maps source files directly to desired > install locations > - essentially what the setup.cfg Resources section tries to do > - used for pure Python code, documentation, etc > - See below for example > - "source/files" defines a list of extra files to be included > - "source/exclude" defines the list of files to be excluded > - "source/run" defines a Python fragment to be executed > - serves a similar purpose to the "files" section in setup.cfg > - creates a temporary directory (and sets it as the working directory) > - dist.yaml is copied to the temporary directory > - all files to be installed are copied to the temporary directory > - all extra files are copied to the temporary directory > - the Python fragment in "source/run" is executed (which can > thus easily add more files) > - if sdist archive creation is requested, entire contents of > temporary directory are included > - build > - uses a "build" section in dist.yaml > - "build/install" maps built files to desired install locations > - like source/install, but for build artifacts > - compiled C extensions, .pyc and .pyo files, etc would all go here > - "build/run" defines a Python fragment to be executed > - "build/files" defines the list of files to be included > - "build/exclude" defines the list of files to be excluded > - "build/requires" defines extra dependencies not needed at runtime > - starting environment is a source directory that is either: > - preexisting (e.g. to allow building in-place in the source tree) > - created by running source first > - created by unpacking an sdist archive > - the Python fragment in "build/run" is executed to trigger the build > - if the build succeeds (i.e. doesn't throw an exception) > - create a temporary directory > - copy dist.yaml > - copy all specified files > - this is the easiest way to exclude build artifacts from > the distribution, while still keeping them around to enable > incremental builds > - if bdist_simple archive creation is requested, entire > contents of temporary directory are included > - other bdist formats (such as bdist_rpm) will have their own > rules for getting from the bdist_simple format to the platform > specific format > - install > - uses an "install" section in dist.yaml > - "install/pre" defines a Python fragment to be executed > before copying files > - "install/post" defines a Python fragment to be executed > after copying files > - starting environment is a bdist_simple directory that is either: > - preexisting (e.g. to allow creation by system packaging tools) > - created by running build first > - created by unpacking a bdist_simple archive > - end result is a fully installed and usable piece of software > - test > - uses a "test" section in dist.yaml > - "test/run" defines a Python fragment to be executed to start the tests > - "test/requires" defines extra dependencies needed to run the > test suite > > - Example "source/install" based on > http://alexis.notmyidea.org/distutils2/setupcfg.html#complete-example > (my YAML may be a bit dodgy). > - With this scheme, module installation is just another install category. > - A solution for easily installing entire subtrees is desirable. I > propose the recursive glob ** syntax for that purpose. > - Unlike setup.cfg, every category would have an "-excluded" > counterpart to filter unwanted files. Explicit is better than > implicit. 
>
> source:
>   install:
>     modules:
>       example.py
>       example_pkg/*.py
>       example_pkg/**/*.py
>       example_pkg/resource.txt
>     doc:
>       README
>       doc/*
>     doc-excluded:
>       doc/man
>     man:
>       doc/man
>     scripts:
>       # Directory details are stripped automatically
>       scripts/LAUNCH
>       scripts/*.{sh,bat}
>       # But subdirectories can be made explicit
>       extras/:
>         scripts/extras/*.{sh,bat}
>
> - the goal of a dist.yaml syntax would be to be *explicit* and *comprehensive*. If this gets too verbose, then the solution would be dist.yaml generators that are less expressive, but also reduce the necessary boilerplate.
>
> - a typical "sdist" will now just be an archive consisting of: > - the project's dist.yaml file > - all files created by the "source" phase
>
> - the "bdist_simple" format will just be an archive consisting of: > - the project's dist.yaml file > - all files created by the "build" phase
>
> - the source and build run hooks and install pre and post hooks become > the way you integrate with arbitrary build systems. No fancy command > or compiler system or anything like that, you just import whatever you > need and call it with the appropriate arguments. To other tools, they > will just be opaque chunks of text, but to the build system, they're > executable pieces of Python code, just as RPM includes executable > scripts.
>
> Cheers,
> Nick.

-- Alex Clark · http://pythonpackages.com
From p.f.moore at gmail.com Fri Jun 22 15:24:19 2012 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 22 Jun 2012 14:24:19 +0100 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <878vffa5c9.fsf@uwakimon.sk.tsukuba.ac.jp> References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <87d34rammv.fsf@uwakimon.sk.tsukuba.ac.jp> <878vffa5c9.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: On 22 June 2012 13:39, Stephen J. Turnbull wrote: > Nick Coghlan writes: > > On Fri, Jun 22, 2012 at 4:25 PM, Stephen J. Turnbull wrote: > > > Paul Moore writes: > > > > > > > End users should not need packaging tools on their machines. > > > > > > I think this desideratum is close to obsolete these days, with webapps > > > in "the cloud" downloading resources (including, but not limited to, > > > code) on an as-needed basis. > > > > There's still a lot more to the software world than what happens on > > the public internet. > > That's taking just one extreme out of context. The other extreme I > mentioned is a whole (virtual) Python environment to go with your app. > > And I don't really see a middle ground, unless you're delivering a > non-standard stdlib anyway, with all the stuff that end users don't > need stripped out of it. They'll get the debugger and the profiler > with Python; should we excise them from the stdlib just because end > users don't need them? How about packaging diagnostic tools, > especially in the early days of the new module? > > I agreed that end users should not need to download the packaging > tools separately or in advance. But that's rather different from > having a *requirement* that the tools not be included, or that > installers should have no dependencies on the toolset outside of a > minimal and opaque runtime module.
I suppose if you're saying that "pip install lxml" should download and install for me Visual Studio, libxml2 sources and any dependencies, and run all the builds, then you're right. But I assume you're not. So why should I need to install Visual Studio just to *use* lxml? On the other hand, I concede that there are some grey areas between the 2 extremes. I don't know enough to do a proper review of the various cases. But I do think that there's a risk that the discussion, because it is necessarily driven by developers, forgets that "end users" really don't have some tools that a developer would consider "trivial" to have. Paul.
From cournape at gmail.com Fri Jun 22 15:47:27 2012 From: cournape at gmail.com (David Cournapeau) Date: Fri, 22 Jun 2012 14:47:27 +0100 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <87d34rammv.fsf@uwakimon.sk.tsukuba.ac.jp> <878vffa5c9.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: On Fri, Jun 22, 2012 at 2:24 PM, Paul Moore wrote: > > I suppose if you're saying that "pip install lxml" should download and > install for me Visual Studio, libxml2 sources and any dependencies, > and run all the builds, then you're right. But I assume you're not. So > why should I need to install Visual Studio just to *use* lxml? > > On the other hand, I concede that there are some grey areas between > the 2 extremes. I don't know enough to do a proper review of the > various cases. But I do think that there's a risk that the discussion, > because it is necessarily driven by developers, forgets that "end > users" really don't have some tools that a developer would consider > "trivial" to have. Binary installers are important: if you think lxml is hard on Windows, think about what it means to build Fortran libraries and link them with Visual Studio for scipy :) That's one of the reasons virtualenv + pip is not that useful for numpy/scipy end users. Bento has code to build basic binary installers in all the formats supported by distutils except for RPM, and the code is by design mostly independent of the rest. I would be happy to clean up that code to make it more reusable (most of it is extracted from distutils/setuptools anyway). But it should be completely orthogonal to the issue of package description: if there is one thing that distutils got horribly wrong, it's tying everything together. The uncoupling is the key, because otherwise one keeps discussing all the issues together, which is part of what makes the discussion so hard. Different people have different needs. David
From p.f.moore at gmail.com Fri Jun 22 15:48:58 2012 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 22 Jun 2012 14:48:58 +0100 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <8359F922B4E045BD819F7811725306D6@gmail.com> <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> <4FE43946.2050505@astro.uio.no> <4FE448CE.7040909@astro.uio.no> Message-ID: On 22 June 2012 13:09, Vinay Sajip wrote: > Paul Moore gmail.com> writes: > >> Signed binaries may be a solution.
My experience with signed binaries >> has not been exactly positive, but it's an option. Presumably PyPI >> would be the trusted authority? Would PyPI and the downloaders need to >> use SSL? Would developers need to have signing keys to use PyPI? And >> more to the point, do the people designing the packaging solutions >> have experience with this sort of stuff (I sure don't )? > > I'm curious - what problems have you had with signed binaries? As a user, I guess not that much. I may be misremembering bad experiences with different things. We've had annoyances with self-signed jars, and websites. It's generally more about annoying "can't confirm this should be trusted, please verify" messages which people end up just saying "yes" to (and so ruining any value from the check). But I don't know how often I have used them, to the extent that the only time I'm aware of them is when they don't work silently (e.g., I get a prompt asking if I want to trust this publisher - this is essentially a failure, as I always say "yes" simply because I have no idea how I would go about deciding that I do trust them, beyond what I've already done in locating and downloading the software from them!) > I dipped my toes > in this particular pool with the Python launcher installers - I got a code > signing certificate and signed my MSIs with it. The process was fairly painless. OK, that's a good example, I didn't even realise those installers were signed, making it an excellent example of how easy it can be when it works. But you say "I got a code signing certificate". How? When I dabbled with signing, the only option I could find that didn't involve paying and/or having a registered domain of my own was a self-signed certificate, which from a UI point of view seems of little use "Paul Moore says you should trust him. Do you? Yes/No"... If signed binaries is the way we go, then we should be aware that we exclude people who don't have certificates from uploading to PyPI. Maybe that's OK, but without some sort of check I don't know how many current developers that would exclude, let alone how many potential developers would be put off. A Python-supported build farm, which signed code on behalf of developers, might alleviate this. But then we need to protect against malicious code being submitted to the build farm, etc. > As far as I know, all signing does is to indicate that the binary package hasn't > been tampered with and allows the downloader to decide whether they trust the > signer not to have allowed backdoors, etc. I don't see that it mandates use of > SSL, or even signing, by anyone. At least some people will require that an > installer be invokable with an option that causes it to bail if any part of > what's being installed can't be verified (for some value of "verified"). Fair enough. I don't object to offering the option to verify signatures (I think I said something like that in an earlier message). I do have concerns about making signed code mandatory. (Not least over whether it'd let me install my own unsigned code!) 
Paul
From solipsis at pitrou.net Fri Jun 22 16:19:10 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 22 Jun 2012 16:19:10 +0200 Subject: [Python-Dev] Signed packages References: <4FE0F336.7030709@netwok.org> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <8359F922B4E045BD819F7811725306D6@gmail.com> <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> <4FE43946.2050505@astro.uio.no> <4FE448CE.7040909@astro.uio.no> Message-ID: <20120622161910.5baf0584@pitrou.net> On Fri, 22 Jun 2012 12:27:19 +0100 Paul Moore wrote: > > Signed binaries may be a solution. My experience with signed binaries > has not been exactly positive, but it's an option. Presumably PyPI > would be the trusted authority? Would PyPI and the downloaders need to > use SSL? Would developers need to have signing keys to use PyPI? And > more to the point, do the people designing the packaging solutions > have experience with this sort of stuff (I sure don't :-))? The ones signing the binaries would have to be the packagers, not PyPI. Also, if packages are signed, you arguably don't need to use SSL when downloading them (but SSL can still be useful for other purposes, e.g. navigating in the catalog). PyPI-signing of packages would not achieve anything, since PyPI cannot vouch for the quality and non-maliciousness of uploaded files. It would only serve as a replacement for SSL downloads. Regards Antoine.
From martin at v.loewis.de Fri Jun 22 17:24:43 2012 From: martin at v.loewis.de (martin at v.loewis.de) Date: Fri, 22 Jun 2012 17:24:43 +0200 Subject: [Python-Dev] Signed packages In-Reply-To: <20120622161910.5baf0584@pitrou.net> References: <4FE0F336.7030709@netwok.org> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <8359F922B4E045BD819F7811725306D6@gmail.com> <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> <4FE43946.2050505@astro.uio.no> <4FE448CE.7040909@astro.uio.no> <20120622161910.5baf0584@pitrou.net> Message-ID: <20120622172443.Horde.096CMqGZi1VP5I477AO0JhA@webmail.df.eu> Quoting Antoine Pitrou : > On Fri, 22 Jun 2012 12:27:19 +0100 > Paul Moore wrote: >> >> Signed binaries may be a solution. My experience with signed binaries >> has not been exactly positive, but it's an option. Presumably PyPI >> would be the trusted authority? Would PyPI and the downloaders need to >> use SSL? Would developers need to have signing keys to use PyPI? And >> more to the point, do the people designing the packaging solutions >> have experience with this sort of stuff (I sure don't :-))? > > The ones signing the binaries would have to be the packagers, not PyPI. It depends. PyPI already signs all binaries (essentially) as part of the mirror protocol. What this proves is that the mirror has not modified the data compared to the copy on PyPI. If PyPI can be trusted not to modify the binaries, then this also proves that the binaries are the same as originally uploaded. What this doesn't prove is that the upload was really made by the declared author of the package (which could be addressed by having the original author sign the packages); it also doesn't prove that the binaries are free of malicious code (which no amount of signing can prove). > PyPI-signing of packages would not achieve anything, since PyPI cannot > vouch for the quality and non-maliciousness of uploaded files. That's just not true.
It can prove that the files have not been modified by mirrors, caches, and the like, of which there are plenty in practice. > It would only serve as a replacement for SSL downloads. See above. Also notice that such signing is already implemented, as part of PEP 381. Regards, Martin From vinay_sajip at yahoo.co.uk Fri Jun 22 17:36:47 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Fri, 22 Jun 2012 15:36:47 +0000 (UTC) Subject: [Python-Dev] Status of packaging in 3.3 References: <4FE0F336.7030709@netwok.org> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <8359F922B4E045BD819F7811725306D6@gmail.com> <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> <4FE43946.2050505@astro.uio.no> <4FE448CE.7040909@astro.uio.no> Message-ID: Paul Moore gmail.com> writes: > As a user, I guess not that much. I may be misremembering bad > experiences with different things. We've had annoyances with > self-signed jars, and websites. It's generally more about annoying > "can't confirm this should be trusted, please verify" messages which > people end up just saying "yes" to (and so ruining any value from the > check). Like those pesky EULAs ;-) > But you say "I got a code signing certificate". How? When I dabbled > with signing, the only option I could find that didn't involve paying > and/or having a registered domain of my own was a self-signed > certificate, which from a UI point of view seems of little use "Paul > Moore says you should trust him. Do you? Yes/No"... I got mine from Certum (certum.pl) - they offer (or at least did offer, last year) free code signing certificates for Open Source developers (you have to have "Open Source Developer" in what's being certified). See: http://www.certum.eu/certum/cert,offer_en_open_source_cs.xml > If signed binaries is the way we go, then we should be aware that we > exclude people who don't have certificates from uploading to PyPI. I don't think that any exclusion would occur. It just means that there's a mechanism for people who are picky about such things to have a slightly larger comfort zone. > Maybe that's OK, but without some sort of check I don't know how many > current developers that would exclude, let alone how many potential > developers would be put off. I don't think any packager need be excluded. It would be up to individual packagers and package consumers as to whether they sign packages / stick to only using signed packages. For almost everyone, life should go on as before. > A Python-supported build farm, which signed code on behalf of > developers, might alleviate this. But then we need to protect against > malicious code being submitted to the build farm, etc. There is IMO neither the will nor the resource to do any sort of policing. Caveat emptor (or caveat user, rather). Let's not forget, all of this software is without warranty of any kind. > Fair enough. I don't object to offering the option to verify > signatures (I think I said something like that in an earlier message). > I do have concerns about making signed code mandatory. (Not least over > whether it'd let me install my own unsigned code!) Any workable mechanism would need to be optional (the user doing the installing would be the decider as to whether to go ahead and install, with signature, or lack thereof, in mind). 
Regards, Vinay Sajip From vinay_sajip at yahoo.co.uk Fri Jun 22 17:48:17 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Fri, 22 Jun 2012 15:48:17 +0000 (UTC) Subject: [Python-Dev] Signed packages References: <4FE0F336.7030709@netwok.org> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <8359F922B4E045BD819F7811725306D6@gmail.com> <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> <4FE43946.2050505@astro.uio.no> <4FE448CE.7040909@astro.uio.no> <20120622161910.5baf0584@pitrou.net> <20120622172443.Horde.096CMqGZi1VP5I477AO0JhA@webmail.df.eu> Message-ID: v.loewis.de> writes: > > See above. Also notice that such signing is already implemented, as part > of PEP 381. > BTW, I notice that the certificate for https://pypi.python.org/ expired a week ago ... Regards, Vinay Sajip From status at bugs.python.org Fri Jun 22 18:06:57 2012 From: status at bugs.python.org (Python tracker) Date: Fri, 22 Jun 2012 18:06:57 +0200 (CEST) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20120622160657.A92871C7DB@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2012-06-15 - 2012-06-22) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. Issues counts and deltas: open 3482 (+10) closed 23435 (+51) total 26917 (+61) Open issues with patches: 1460 Issues opened (46) ================== #15079: pickle: Possibly misplaced test http://bugs.python.org/issue15079 opened by mstefanro #15080: Cookie library doesn't parse date properly http://bugs.python.org/issue15080 opened by jgillick #15082: [httplib] httplib.BadStatusLine on any HTTPS connection in cer http://bugs.python.org/issue15082 opened by jtr51 #15083: Rewrite ElementTree tests in a cleaner and safer way http://bugs.python.org/issue15083 opened by eli.bendersky #15088: PyGen_NeedsFinalizing is public, but undocumented http://bugs.python.org/issue15088 opened by ncoghlan #15090: Add etag support to urllib.request.urlretrieve() http://bugs.python.org/issue15090 opened by rhettinger #15091: ImportError when package is symlinked on Unix http://bugs.python.org/issue15091 opened by jason.coombs #15092: Using enum PyUnicode_Kind http://bugs.python.org/issue15092 opened by storchaka #15093: ntpath.isdir returns False for directory symlinks http://bugs.python.org/issue15093 opened by jason.coombs #15094: Incorrectly placed #endif in _tkinter.c. 
http://bugs.python.org/issue15094 opened by storchaka #15095: test_imaplib problem - intermittent skips and LOGINDISABLED no http://bugs.python.org/issue15095 opened by ncoghlan #15097: Improving wording on the thread-safeness of import http://bugs.python.org/issue15097 opened by valhallasw #15099: exec of function doesn't call __getitem__ or __missing__ on un http://bugs.python.org/issue15099 opened by johnf #15100: Race conditions in shutil.copy, shutil.copy2 and shutil.copyfi http://bugs.python.org/issue15100 opened by radoslaw.zarzynski #15101: multiprocessing pool finalizer can fail if triggered in backgr http://bugs.python.org/issue15101 opened by sbt #15102: Fix 64-bit building for buildbot scripts http://bugs.python.org/issue15102 opened by jkloth #15104: Unclear language in __main__ description http://bugs.python.org/issue15104 opened by techtonik #15105: curses: wrong indentation http://bugs.python.org/issue15105 opened by vjp #15106: Potential Bug in errors.c http://bugs.python.org/issue15106 opened by Ken.Cheung #15108: ERROR: SystemError: ./../Objects/tupleobject.c:118: bad argume http://bugs.python.org/issue15108 opened by pxd #15109: sqlite3.Connection.iterdump() dies with encoding exception http://bugs.python.org/issue15109 opened by ekontsevoy #15110: strange Tracebacks with importlib http://bugs.python.org/issue15110 opened by amaury.forgeotdarc #15111: Wrong ImportError message with importlib http://bugs.python.org/issue15111 opened by amaury.forgeotdarc #15112: argparse: nargs='*' positional argument doesn't accept any ite http://bugs.python.org/issue15112 opened by waltermundt #15114: Deprecate strict mode of HTMLParser http://bugs.python.org/issue15114 opened by ezio.melotti #15115: Duplicated Content-Transfer-Encoding header when applying emai http://bugs.python.org/issue15115 opened by cancel #15116: remove out-of-date Mac application scripting documentation http://bugs.python.org/issue15116 opened by hhas #15117: Please document top-level sqlite3 module variables http://bugs.python.org/issue15117 opened by wchlm #15118: uname and other os functions should return a struct sequence i http://bugs.python.org/issue15118 opened by larry #15119: ctypes mixed-types bitfield layout nonsensical; doesn't match http://bugs.python.org/issue15119 opened by mark.dickinson #15121: devguide doesn't document all bug tracker components http://bugs.python.org/issue15121 opened by petri.lehtinen #15122: Add an option to always rewrite single-file mailboxes in-place http://bugs.python.org/issue15122 opened by petri.lehtinen #15124: _thread.LockType: Optimize lock deletion, acquisition of uncon http://bugs.python.org/issue15124 opened by kristjan.jonsson #15125: argparse: positional arguments containing - in name not handle http://bugs.python.org/issue15125 opened by nstiurca #15127: Supressing warnings with -w "whether gcc supports ParseTuple" http://bugs.python.org/issue15127 opened by samueljohn #15128: inspect raises exception when frames are misleading about sour http://bugs.python.org/issue15128 opened by acapnotic #15130: remove redundant paragraph in socket howto http://bugs.python.org/issue15130 opened by tshepang #15131: Document py/pyw launchers http://bugs.python.org/issue15131 opened by brian.curtin #15132: Let unittest.TestProgram()'s defaultTest argument be a list http://bugs.python.org/issue15132 opened by cjerdonek #15133: tkinter.BooleanVar.get() docstring is wrong http://bugs.python.org/issue15133 opened by mark #15134: urllib.request.thishost() fails on OSX 10.7 
http://bugs.python.org/issue15134 opened by ronaldoussoren #15135: HOWTOs doesn't link to "Idioms and Anti-Idioms" article http://bugs.python.org/issue15135 opened by fossilet #15136: Decimal accepting Fraction http://bugs.python.org/issue15136 opened by joncle #15137: Cleaned source of `cmd` module http://bugs.python.org/issue15137 opened by zearin #15138: base64.urlsafe_b64**code are too slow http://bugs.python.org/issue15138 opened by gvanrossum #15139: Speed up threading.Condition wakeup http://bugs.python.org/issue15139 opened by kristjan.jonsson Most recent 15 issues with no replies (15) ========================================== #15135: HOWTOs doesn't link to "Idioms and Anti-Idioms" article http://bugs.python.org/issue15135 #15134: urllib.request.thishost() fails on OSX 10.7 http://bugs.python.org/issue15134 #15133: tkinter.BooleanVar.get() docstring is wrong http://bugs.python.org/issue15133 #15132: Let unittest.TestProgram()'s defaultTest argument be a list http://bugs.python.org/issue15132 #15131: Document py/pyw launchers http://bugs.python.org/issue15131 #15130: remove redundant paragraph in socket howto http://bugs.python.org/issue15130 #15127: Supressing warnings with -w "whether gcc supports ParseTuple" http://bugs.python.org/issue15127 #15117: Please document top-level sqlite3 module variables http://bugs.python.org/issue15117 #15112: argparse: nargs='*' positional argument doesn't accept any ite http://bugs.python.org/issue15112 #15106: Potential Bug in errors.c http://bugs.python.org/issue15106 #15105: curses: wrong indentation http://bugs.python.org/issue15105 #15094: Incorrectly placed #endif in _tkinter.c. http://bugs.python.org/issue15094 #15092: Using enum PyUnicode_Kind http://bugs.python.org/issue15092 #15088: PyGen_NeedsFinalizing is public, but undocumented http://bugs.python.org/issue15088 #15083: Rewrite ElementTree tests in a cleaner and safer way http://bugs.python.org/issue15083 Most recent 15 issues waiting for review (15) ============================================= #15139: Speed up threading.Condition wakeup http://bugs.python.org/issue15139 #15130: remove redundant paragraph in socket howto http://bugs.python.org/issue15130 #15128: inspect raises exception when frames are misleading about sour http://bugs.python.org/issue15128 #15124: _thread.LockType: Optimize lock deletion, acquisition of uncon http://bugs.python.org/issue15124 #15119: ctypes mixed-types bitfield layout nonsensical; doesn't match http://bugs.python.org/issue15119 #15118: uname and other os functions should return a struct sequence i http://bugs.python.org/issue15118 #15114: Deprecate strict mode of HTMLParser http://bugs.python.org/issue15114 #15102: Fix 64-bit building for buildbot scripts http://bugs.python.org/issue15102 #15094: Incorrectly placed #endif in _tkinter.c. 
http://bugs.python.org/issue15094 #15092: Using enum PyUnicode_Kind http://bugs.python.org/issue15092 #15079: pickle: Possibly misplaced test http://bugs.python.org/issue15079 #15068: fileinput requires two EOF when reading stdin http://bugs.python.org/issue15068 #15063: Source code links for JSON documentation http://bugs.python.org/issue15063 #15061: hmac.secure_compare() leaks information about length of string http://bugs.python.org/issue15061 #15056: Have imp.cache_from_source() raise NotImplementedError when ca http://bugs.python.org/issue15056 Top 10 most discussed issues (10) ================================= #15038: Optimize python Locks on Windows http://bugs.python.org/issue15038 27 msgs #15061: hmac.secure_compare() leaks information about length of string http://bugs.python.org/issue15061 24 msgs #15104: Unclear language in __main__ description http://bugs.python.org/issue15104 9 msgs #14814: Implement PEP 3144 (the ipaddress module) http://bugs.python.org/issue14814 8 msgs #15008: PEP 362 "Signature Objects" reference implementation http://bugs.python.org/issue15008 8 msgs #15109: sqlite3.Connection.iterdump() dies with encoding exception http://bugs.python.org/issue15109 8 msgs #6727: ImportError when package is symlinked on Windows http://bugs.python.org/issue6727 7 msgs #15124: _thread.LockType: Optimize lock deletion, acquisition of uncon http://bugs.python.org/issue15124 7 msgs #13590: extension module builds fail with python.org OS X installers o http://bugs.python.org/issue13590 6 msgs #15052: Outdated comments in build_ssl.py http://bugs.python.org/issue15052 6 msgs Issues closed (46) ================== #7359: mailbox cannot modify mailboxes in system mail spool http://bugs.python.org/issue7359 closed by petri.lehtinen #7584: datetime.rfcformat() for Date and Time on the Internet http://bugs.python.org/issue7584 closed by belopolsky #10053: Don't close fd when FileIO.__init__ fails http://bugs.python.org/issue10053 closed by hynek #12046: Windows build identification incomplete http://bugs.python.org/issue12046 closed by georg.brandl #13455: Reorganize tracker docs in the devguide http://bugs.python.org/issue13455 closed by ezio.melotti #13783: Clean up PEP 380 C API additions http://bugs.python.org/issue13783 closed by ncoghlan #13825: Datetime failing while reading active directory time attribute http://bugs.python.org/issue13825 closed by belopolsky #14055: Implement __sizeof__ for etree Element http://bugs.python.org/issue14055 closed by loewis #14059: Implement multiprocessing.Barrier http://bugs.python.org/issue14059 closed by sbt #14225: _cursesmodule compile error in OS X 32-bit-only installer buil http://bugs.python.org/issue14225 closed by ned.deily #14653: Improve mktime_tz to use calendar.timegm instead of time.mktim http://bugs.python.org/issue14653 closed by belopolsky #14657: Avoid two importlib copies http://bugs.python.org/issue14657 closed by pitrou #14684: zlib set dictionary support inflateSetDictionary http://bugs.python.org/issue14684 closed by nadeem.vawda #14769: Add test to automatically detect missing format units in skipi http://bugs.python.org/issue14769 closed by larry #14772: Return destination values in some shutil functions http://bugs.python.org/issue14772 closed by brian.curtin #14840: Tutorial: Add a bit on the difference between tuples and lists http://bugs.python.org/issue14840 closed by ezio.melotti #14874: Faster charmap decoding http://bugs.python.org/issue14874 closed by pitrou #14919: what disables one from adding self to
the "nosy" list http://bugs.python.org/issue14919 closed by ezio.melotti #14928: Fix importlib bootstrapping issues http://bugs.python.org/issue14928 closed by pitrou #14933: Misleading documentation about weakrefs http://bugs.python.org/issue14933 closed by pitrou #14982: pkgutil.walk_packages seems to not work properly on Python 3.3 http://bugs.python.org/issue14982 closed by brett.cannon #15006: Allow equality comparison between naive and aware datetime obj http://bugs.python.org/issue15006 closed by belopolsky #15026: Faster UTF-16 encoding http://bugs.python.org/issue15026 closed by pitrou #15036: mailbox.mbox fails to pop two items in a row, flushing in betw http://bugs.python.org/issue15036 closed by petri.lehtinen #15043: test_gdb is disallowed by default security settings in Fedora http://bugs.python.org/issue15043 closed by ncoghlan #15054: bytes literals erroneously tokenized http://bugs.python.org/issue15054 closed by meador.inge #15064: Use context manager protocol for more multiprocessing types http://bugs.python.org/issue15064 closed by sbt #15074: Strange behaviour of python cmd module. (Ignores slash) http://bugs.python.org/issue15074 closed by ned.deily #15075: XincludeTest failure in test_xml_etree http://bugs.python.org/issue15075 closed by python-dev #15081: No documentation for PyState_FindModule() http://bugs.python.org/issue15081 closed by christian.heimes #15084: Add option to os.mkdir to not raise an exception for existing http://bugs.python.org/issue15084 closed by loewis #15085: set.union accepts not set iterables for all but the first argu http://bugs.python.org/issue15085 closed by r.david.murray #15086: Ubuntu bot: error while loading shared libraries http://bugs.python.org/issue15086 closed by pitrou #15087: Add gzip function to read gzip'd strings http://bugs.python.org/issue15087 closed by nadeem.vawda #15089: Add gzip support to urllib.request.retrieve() http://bugs.python.org/issue15089 closed by pitrou #15096: Drop support for the "ur" string prefix http://bugs.python.org/issue15096 closed by christian.heimes #15098: "TypeError" can give a misleading message http://bugs.python.org/issue15098 closed by r.david.murray #15103: Solaris compiler chokes on importlib.h http://bugs.python.org/issue15103 closed by pitrou #15107: Potential Bug in mpdecimal.c http://bugs.python.org/issue15107 closed by mark.dickinson #15113: IDLE Shell: delattr(__builtins__,"getattr") causes shell to st http://bugs.python.org/issue15113 closed by r.david.murray #15120: Different behavior of html.parser.HTMLParser http://bugs.python.org/issue15120 closed by hansokumake #15123: float().__format__() disregards given field width http://bugs.python.org/issue15123 closed by luismsgomes #15126: Theading isAlive() missing version note http://bugs.python.org/issue15126 closed by georg.brandl #15129: file.readline() cannot read weird ascii character in file http://bugs.python.org/issue15129 closed by amaury.forgeotdarc #1229239: optionally allow mutable builtin types http://bugs.python.org/issue1229239 closed by terry.reedy #730473: Add Py_AtInit() startup hook for extenders http://bugs.python.org/issue730473 closed by patmiller From jnoller at gmail.com Fri Jun 22 18:12:12 2012 From: jnoller at gmail.com (Jesse Noller) Date: Fri, 22 Jun 2012 12:12:12 -0400 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <4FE336D3.1090406@plope.com> References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> 
<4FE2A6C7.9070207@plope.com> <4FE30A12.8020202@plope.com>
<4FE318B8.3010005@plope.com> <4FE32BB6.6090601@plope.com>
<4FE336D3.1090406@plope.com>
Message-ID: <45F9152254024DC3AC0E2033C478FC0F@gmail.com>

More fuel; fire: http://lucumr.pocoo.org/2012/6/22/hate-hate-hate-everywhere/

From donald.stufft at gmail.com  Fri Jun 22 18:35:28 2012
From: donald.stufft at gmail.com (Donald Stufft)
Date: Fri, 22 Jun 2012 12:35:28 -0400
Subject: [Python-Dev] Signed packages
In-Reply-To:
References: <4FE0F336.7030709@netwok.org> <4FE342C9.10004@plope.com>
 <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org>
 <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org>
 <8359F922B4E045BD819F7811725306D6@gmail.com>
 <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com>
 <4FE43946.2050505@astro.uio.no> <4FE448CE.7040909@astro.uio.no>
 <20120622161910.5baf0584@pitrou.net>
 <20120622172443.Horde.096CMqGZi1VP5I477AO0JhA@webmail.df.eu>
Message-ID: <2D51471ABFFE4F0585166A44AAE8CCC7@gmail.com>

Ideally authors will be signing their packages (using gpg keys). Of course
how to distribute keys is an exercise left to the reader.

On Friday, June 22, 2012 at 11:48 AM, Vinay Sajip wrote:
> v.loewis.de> writes:
> >
> > See above. Also notice that such signing is already implemented, as part
> > of PEP 381.
> >
> BTW, I notice that the certificate for https://pypi.python.org/ expired a week
> ago ...
>
> Regards,
>
> Vinay Sajip
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From alexandre.zani at gmail.com  Fri Jun 22 18:54:28 2012
From: alexandre.zani at gmail.com (Alexandre Zani)
Date: Fri, 22 Jun 2012 09:54:28 -0700
Subject: [Python-Dev] Signed packages
In-Reply-To: <2D51471ABFFE4F0585166A44AAE8CCC7@gmail.com>
References: <4FE0F336.7030709@netwok.org> <4FE342C9.10004@plope.com>
 <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org>
 <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org>
 <8359F922B4E045BD819F7811725306D6@gmail.com>
 <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com>
 <4FE43946.2050505@astro.uio.no> <4FE448CE.7040909@astro.uio.no>
 <20120622161910.5baf0584@pitrou.net>
 <20120622172443.Horde.096CMqGZi1VP5I477AO0JhA@webmail.df.eu>
 <2D51471ABFFE4F0585166A44AAE8CCC7@gmail.com>
Message-ID:

On Fri, Jun 22, 2012 at 9:35 AM, Donald Stufft wrote:
> Ideally authors will be signing their packages (using gpg keys). Of course
> how to distribute keys is an exercise left to the reader.

Key distribution is the real issue though. If there isn't a key
distribution infrastructure in place, we might as well not bother with
signatures. PyPI could issue x509 certs to packagers. You wouldn't be
able to verify that the name given is accurate, but you would be able
to verify that all packages with the same listed author are actually
by that author.

> On Friday, June 22, 2012 at 11:48 AM, Vinay Sajip wrote:
> > v.loewis.de> writes:
> >
> > See above. Also notice that such signing is already implemented, as part
> > of PEP 381.
> >
> > BTW, I notice that the certificate for https://pypi.python.org/ expired a
> > week
> > ago ...
> > Regards,
> > Vinay Sajip

From donald.stufft at gmail.com  Fri Jun 22 18:56:41 2012
From: donald.stufft at gmail.com (Donald Stufft)
Date: Fri, 22 Jun 2012 12:56:41 -0400
Subject: [Python-Dev] Signed packages
In-Reply-To:
References: <4FE0F336.7030709@netwok.org> <4FE342C9.10004@plope.com>
 <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org>
 <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org>
 <8359F922B4E045BD819F7811725306D6@gmail.com>
 <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com>
 <4FE43946.2050505@astro.uio.no> <4FE448CE.7040909@astro.uio.no>
 <20120622161910.5baf0584@pitrou.net>
 <20120622172443.Horde.096CMqGZi1VP5I477AO0JhA@webmail.df.eu>
 <2D51471ABFFE4F0585166A44AAE8CCC7@gmail.com>
Message-ID:

On Friday, June 22, 2012 at 12:54 PM, Alexandre Zani wrote:
>
> Key distribution is the real issue though. If there isn't a key
> distribution infrastructure in place, we might as well not bother with
> signatures. PyPI could issue x509 certs to packagers. You wouldn't be
> able to verify that the name given is accurate, but you would be able
> to verify that all packages with the same listed author are actually
> by that author.

I've been sketching out ideas for key distribution, but it's very much
a chicken and egg problem, very few people sign their packages (because
nothing uses it currently), and nobody is motivated to work on
infrastructure or tooling because no one signs their packages.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From alexandre.zani at gmail.com  Fri Jun 22 19:09:22 2012
From: alexandre.zani at gmail.com (Alexandre Zani)
Date: Fri, 22 Jun 2012 10:09:22 -0700
Subject: [Python-Dev] Signed packages
In-Reply-To:
References: <4FE0F336.7030709@netwok.org> <4FE342C9.10004@plope.com>
 <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org>
 <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org>
 <8359F922B4E045BD819F7811725306D6@gmail.com>
 <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com>
 <4FE43946.2050505@astro.uio.no> <4FE448CE.7040909@astro.uio.no>
 <20120622161910.5baf0584@pitrou.net>
 <20120622172443.Horde.096CMqGZi1VP5I477AO0JhA@webmail.df.eu>
 <2D51471ABFFE4F0585166A44AAE8CCC7@gmail.com>
Message-ID:

On Fri, Jun 22, 2012 at 9:56 AM, Donald Stufft wrote:
> On Friday, June 22, 2012 at 12:54 PM, Alexandre Zani wrote:
>
> Key distribution is the real issue though. If there isn't a key
> distribution infrastructure in place, we might as well not bother with
> signatures. PyPI could issue x509 certs to packagers. You wouldn't be
> able to verify that the name given is accurate, but you would be able
> to verify that all packages with the same listed author are actually
> by that author.
>
> I've been sketching out ideas for key distribution, but it's very much
> a chicken and egg problem, very few people sign their packages (because
> nothing uses it currently), and nobody is motivated to work on
> infrastructure
> or tooling because no one signs their packages.

Are those ideas available publicly? I would love to chip in.
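For illustration, the kind of detached-signature check being discussed
could look roughly like the minimal sketch below. It assumes GnuPG is
installed, that the packager published foo-1.0.tar.gz together with a
detached signature foo-1.0.tar.gz.asc (both file names are hypothetical),
and that the signer's public key has already been imported and trusted
locally - which is exactly the key-distribution problem left open in this
thread.

    import subprocess

    def verify_sdist(archive, signature):
        # gpg --verify checks a detached signature against the archive;
        # it exits non-zero if the signature is bad or the key is unknown.
        rc = subprocess.call(["gpg", "--verify", signature, archive])
        return rc == 0

    if verify_sdist("foo-1.0.tar.gz", "foo-1.0.tar.gz.asc"):
        print("signature OK - proceeding with install")
    else:
        print("cannot verify - bail out, or ask the user")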
From donald.stufft at gmail.com Fri Jun 22 19:11:34 2012 From: donald.stufft at gmail.com (Donald Stufft) Date: Fri, 22 Jun 2012 13:11:34 -0400 Subject: [Python-Dev] Signed packages In-Reply-To: References: <4FE0F336.7030709@netwok.org> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <8359F922B4E045BD819F7811725306D6@gmail.com> <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> <4FE43946.2050505@astro.uio.no> <4FE448CE.7040909@astro.uio.no> <20120622161910.5baf0584@pitrou.net> <20120622172443.Horde.096CMqGZi1VP5I477AO0JhA@webmail.df.eu> <2D51471ABFFE4F0585166A44AAE8CCC7@gmail.com> Message-ID: Not at the moment, but I could gather them up and make them public later today. They are very rough draft at the moment. On Friday, June 22, 2012 at 1:09 PM, Alexandre Zani wrote: > On Fri, Jun 22, 2012 at 9:56 AM, Donald Stufft wrote: > > On Friday, June 22, 2012 at 12:54 PM, Alexandre Zani wrote: > > > > > > Key distribution is the real issue though. If there isn't a key > > distribution infrastructure in place, we might as well not bother with > > signatures. PyPI could issue x509 certs to packagers. You wouldn't be > > able to verify that the name given is accurate, but you would be able > > to verify that all packages with the same listed author are actually > > by that author. > > > > I've been sketching out ideas for key distribution, but it's very much > > a chicken and egg problem, very few people sign their packages (because > > nothing uses it currently), and nobody is motivated to work on > > infrastructure > > or tooling because no one signs their packages. > > > > > Are those ideas available publicly? I would love to chip in. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjreedy at udel.edu Fri Jun 22 18:39:42 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Fri, 22 Jun 2012 12:39:42 -0400 Subject: [Python-Dev] [Python-checkins] cpython: Issue #14769: test_capi now has SkipitemTest, which cleverly checks In-Reply-To: References: Message-ID: <4FE49FCE.8040209@udel.edu> On 6/22/2012 6:57 AM, larry.hastings wrote: > http://hg.python.org/cpython/rev/ace45d23628a > changeset: 77567:ace45d23628a > user: Larry Hastings > date: Fri Jun 22 03:56:29 2012 -0700 > summary: > Issue #14769: test_capi now has SkipitemTest, which cleverly checks > for "parity" between PyArg_ParseTuple() and the Python/getargs.c static > function skipitem() for all possible "format units". You sensibly only test printable ascii chars, which are in the contiguous range 32 to 127 inclusive. So it makes no sense to claim otherwise and then deny the wrong claim, or to enlarge the range and then shrink it again. > + This function brute-force tests all** ASCII characters (1 to 127 > + inclusive) as format units, checking to see that With a few exceptions**, test all printable ASCII characters (32 to 127 inclusive) as... > + > + ** Okay, it actually skips some ASCII characters. Some characters ** Some characters ... > + have special funny semantics, and it would be difficult to > + accomodate them here. 
> + for i in range(1, 128): for i in range(32, 128): > + if (not c.isprintable()) or (c in '()e|$'): if c in '()e|$': tjr From python at mrabarnett.plus.com Fri Jun 22 20:21:25 2012 From: python at mrabarnett.plus.com (MRAB) Date: Fri, 22 Jun 2012 19:21:25 +0100 Subject: [Python-Dev] [Python-checkins] cpython: Issue #14769: test_capi now has SkipitemTest, which cleverly checks In-Reply-To: <4FE49FCE.8040209@udel.edu> References: <4FE49FCE.8040209@udel.edu> Message-ID: <4FE4B7A5.9030004@mrabarnett.plus.com> On 22/06/2012 17:39, Terry Reedy wrote: > On 6/22/2012 6:57 AM, larry.hastings wrote: >> http://hg.python.org/cpython/rev/ace45d23628a >> changeset: 77567:ace45d23628a >> user: Larry Hastings >> date: Fri Jun 22 03:56:29 2012 -0700 >> summary: >> Issue #14769: test_capi now has SkipitemTest, which cleverly checks >> for "parity" between PyArg_ParseTuple() and the Python/getargs.c static >> function skipitem() for all possible "format units". > > You sensibly only test printable ascii chars, which are in the > contiguous range 32 to 127 inclusive. So it makes no sense to claim > otherwise and then deny the wrong claim, or to enlarge the range and > then shrink it again. > ASCII character 127 is a control character, not a printable character. >> + This function brute-force tests all** ASCII characters (1 to 127 >> + inclusive) as format units, checking to see that > There are 128 ASCII characters (0 to 127 inclusive). > With a few exceptions**, test all printable ASCII characters (32 to 127 > inclusive) as... > >> + >> + ** Okay, it actually skips some ASCII characters. Some characters > > ** Some characters ... >> + have special funny semantics, and it would be difficult to >> + accomodate them here. > >> + for i in range(1, 128): > > for i in range(32, 128): > > >> + if (not c.isprintable()) or (c in '()e|$'): > > if c in '()e|$': > From larry at hastings.org Fri Jun 22 20:31:39 2012 From: larry at hastings.org (Larry Hastings) Date: Fri, 22 Jun 2012 11:31:39 -0700 Subject: [Python-Dev] A Desperate Plea For Introspection (aka: BDFAP Needed) Message-ID: <4FE4BA0B.5070800@hastings.org> Here's PEP 362: http://www.python.org/dev/peps/pep-0362/ It adds easy introspection abilities to Python callables. After a whirlwind of activity over the past several weeks we think it's ready. All it needs now is an official pronouncement from some seasoned veteran of the Python community. But that's where it's hit an impasse. Nick Coghlan has recused himself because he was heavily involved in shaping it. And obviously the authors (including myself, in a small way) are ineligible. Nobody else has stepped forward. Yet the feature freeze for 3.3 fast approaches--ominous, unstoppable, like a big round boulder headed straight for Indiana Jones. Time is running out. If you're BDFAP material, why not spend an enjoyable hour today perusing the fine work of these capable folks? Then naturally all you'd need do is haul out your rubber stamp. Mere moments later you'd be on your way, whistling a happy tune, a new spring in your step, knowing in your heart you'd made the world a better place. The reference implementation for CPython trunk is here: https://bitbucket.org/1st1/cpython/changesets/tip/branch(%22pep362%22) And here's the bug tracker issue: http://bugs.python.org/issue15008 So shines a good deed in a weary world, //arry/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From guido at python.org  Fri Jun 22 20:36:26 2012
From: guido at python.org (Guido van Rossum)
Date: Fri, 22 Jun 2012 11:36:26 -0700
Subject: [Python-Dev] A Desperate Plea For Introspection (aka: BDFAP Needed)
In-Reply-To: <4FE4BA0B.5070800@hastings.org>
References: <4FE4BA0B.5070800@hastings.org>
Message-ID:

I'll review it right now.

On Fri, Jun 22, 2012 at 11:31 AM, Larry Hastings wrote:
>
> Here's PEP 362:
>
> http://www.python.org/dev/peps/pep-0362/
>
> It adds easy introspection abilities to Python callables. After a whirlwind
> of activity over the past several weeks we think it's ready.
>
> All it needs now is an official pronouncement from some seasoned veteran of
> the Python community. But that's where it's hit an impasse. Nick Coghlan
> has recused himself because he was heavily involved in shaping it. And
> obviously the authors (including myself, in a small way) are ineligible.
> Nobody else has stepped forward. Yet the feature freeze for 3.3 fast
> approaches--ominous, unstoppable, like a big round boulder headed straight
> for Indiana Jones. Time is running out.
>
> If you're BDFAP material, why not spend an enjoyable hour today perusing the
> fine work of these capable folks? Then naturally all you'd need do is haul
> out your rubber stamp. Mere moments later you'd be on your way, whistling a
> happy tune, a new spring in your step, knowing in your heart you'd made the
> world a better place.
>
> The reference implementation for CPython trunk is here:
>
> https://bitbucket.org/1st1/cpython/changesets/tip/branch(%22pep362%22)
>
> And here's the bug tracker issue:
>
> http://bugs.python.org/issue15008
>
> So shines a good deed in a weary world,
>
> /arry

--
--Guido van Rossum (python.org/~guido)

From guido at python.org  Fri Jun 22 20:52:03 2012
From: guido at python.org (Guido van Rossum)
Date: Fri, 22 Jun 2012 11:52:03 -0700
Subject: [Python-Dev] A Desperate Plea For Introspection (aka: BDFAP Needed)
In-Reply-To:
References: <4FE4BA0B.5070800@hastings.org>
Message-ID:

This looks great, much better than the version I reviewed half a year
ago! Thank you and others (especially Yury) for all your efforts in
guiding the discussion and implementing as the discussion went along;
also thanks to Nick for participating so intensely.

Quick review notes:

(1) I don't like APIs that require you to use hasattr(), which you use
for return_annotation. Since you have Signature.empty to remove it,
why not set it to Signature.empty when not set? Ditto for other values
that are currently not set when they don't apply (Parameter.empty I
think.)

(2) Could use an example on how to remove and add parameters using replace().

(3) You are using name(arg1, *, arg2) a lot. I think in most cases you
mean for arg2 to be an optional keyword arg, but this notation doesn't
convey that it is optional. Can you clarify?

(4) "If the object is a method" -- shouldn't that be "bound method"?
(Unbound methods are undetectable.) Or is there some wider definition
of method? What does it do for static or class methods?

(5) Too bad there's no proposal for adding signatures to builtin
functions/methods, but understood.

Of these, only (1) is a blocker for PEP acceptance -- I'd either like
to see this defended vigorously (maybe it was discussed?
then please
quote, I can't go back and read the threads) or changed.

Otherwise it looks great!

--Guido

On Fri, Jun 22, 2012 at 11:36 AM, Guido van Rossum wrote:
> I'll review it right now.
>
> On Fri, Jun 22, 2012 at 11:31 AM, Larry Hastings wrote:
>>
>> Here's PEP 362:
>>
>> http://www.python.org/dev/peps/pep-0362/
>>
>> It adds easy introspection abilities to Python callables. After a whirlwind
>> of activity over the past several weeks we think it's ready.
>>
>> All it needs now is an official pronouncement from some seasoned veteran of
>> the Python community. But that's where it's hit an impasse. Nick Coghlan
>> has recused himself because he was heavily involved in shaping it. And
>> obviously the authors (including myself, in a small way) are ineligible.
>> Nobody else has stepped forward. Yet the feature freeze for 3.3 fast
>> approaches--ominous, unstoppable, like a big round boulder headed straight
>> for Indiana Jones. Time is running out.
>>
>> If you're BDFAP material, why not spend an enjoyable hour today perusing the
>> fine work of these capable folks? Then naturally all you'd need do is haul
>> out your rubber stamp. Mere moments later you'd be on your way, whistling a
>> happy tune, a new spring in your step, knowing in your heart you'd made the
>> world a better place.
>>
>> The reference implementation for CPython trunk is here:
>>
>> https://bitbucket.org/1st1/cpython/changesets/tip/branch(%22pep362%22)
>>
>> And here's the bug tracker issue:
>>
>> http://bugs.python.org/issue15008
>>
>> So shines a good deed in a weary world,
>>
>> /arry
>
> --
> --Guido van Rossum (python.org/~guido)

--
--Guido van Rossum (python.org/~guido)

From ethan at stoneleaf.us  Fri Jun 22 21:00:26 2012
From: ethan at stoneleaf.us (Ethan Furman)
Date: Fri, 22 Jun 2012 12:00:26 -0700
Subject: [Python-Dev] Feature Freeze
Message-ID: <4FE4C0CA.9040203@stoneleaf.us>

Does the feature freeze affect documentation enhancements? If it does,
can somebody review (and commit! :) issues:

http://bugs.python.org/issue14954
http://bugs.python.org/issue14617

Thanks! (And if not necessary before the feature freeze, sorry for the
noise.)

~Ethan~

From martin at v.loewis.de  Fri Jun 22 21:01:32 2012
From: martin at v.loewis.de (martin at v.loewis.de)
Date: Fri, 22 Jun 2012 21:01:32 +0200
Subject: [Python-Dev] Feature Freeze
In-Reply-To: <4FE4C0CA.9040203@stoneleaf.us>
References: <4FE4C0CA.9040203@stoneleaf.us>
Message-ID: <20120622210132.Horde.im6KBbuWis5P5MEMV7qCp7A@webmail.df.eu>

Zitat von Ethan Furman :

> Does the feature freeze affect documentation enhancements?

No. Incorrect/missing documentation is always a bug (unless there is a
debate whether something is an implementation detail or a language
feature); bugs can be fixed at any time, and documentation bugs in
particular in an ongoing manner even after the release.

> Thanks! (And if not necessary before the feature freeze, sorry for
> the noise.)

Pushing issues is certainly on-topic for the list.
Regards,
Martin

From yselivanov.ml at gmail.com  Fri Jun 22 21:10:24 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Fri, 22 Jun 2012 15:10:24 -0400
Subject: [Python-Dev] A Desperate Plea For Introspection (aka: BDFAP Needed)
In-Reply-To:
References: <4FE4BA0B.5070800@hastings.org>
Message-ID: <65E4D23A-4977-41DD-B65F-2EF8419086A0@gmail.com>

Guido,

On 2012-06-22, at 2:52 PM, Guido van Rossum wrote:
> This looks great, much better than the version I reviewed half a year
> ago! Thank you and others (especially Yury) for all your efforts in
> guiding the discussion and implementing as the discussion went along;
> also thanks to Nick for participating so intensely.
>
> Quick review notes:
>
> (1) I don't like APIs that require you to use hasattr(), which you use
> for return_annotation. Since you have Signature.empty to remove it,
> why not set it to Signature.empty when not set? Ditto for other values
> that are currently not set when they don't apply (Parameter.empty I
> think.)

I think that if a function lacks an annotation, that should be reflected
in the same way for its signature.

Currently:

    if hasattr(signature, 'return_annotation'):

If we use Signature.empty:

    if signature.return_annotation is not signature.empty:

So (in my humble opinion) it doesn't simplify things too much.
And also you can use 'try .. except AttributeError .. else' blocks,
which make code even more readable.

All in all, I don't have a very strong opinion on this topic.

'empty' will also work. When python-dev collectively decided to
go with missing attributes, 'empty' didn't yet exist (we added
it with 'replace()' methods).

If you think that using 'empty' is better, we can add that to the PEP.

> (2) Could use an example on how to remove and add parameters using replace().

You have to build a new list of parameters and then pass it to 'replace()'.
Example (from the actual signature() function implementation):

    if isinstance(obj, types.MethodType):
        # In this case we skip the first parameter of the underlying
        # function (usually `self` or `cls`).
        sig = signature(obj.__func__)
        return sig.replace(parameters=tuple(sig.parameters.values())[1:])

> (3) You are using name(arg1, *, arg2) a lot. I think in most cases you
> mean for arg2 to be an optional keyword arg, but this notation doesn't
> convey that it is optional. Can you clarify?

Yes, I meant optional. Would 'name(arg1, *, [arg2])' be better?

> (4) "If the object is a method" -- shouldn't that be "bound method"?
> (Unbound methods are undetectable.) Or is there some wider definition
> of method? What does it do for static or class methods?

Yes, it should be "If the object is a bound method". We'll fix this
shortly.

classmethod as a descriptor returns a BoundMethod (bound to the class),
staticmethod returns the original unmodified function, so both of
them are supported automatically.

> (5) Too bad there's no proposal for adding signatures to builtin
> functions/methods, but understood.
>
> Of these, only (1) is a blocker for PEP acceptance -- I'd either like
> to see this defended vigorously (maybe it was discussed? then please
> quote, I can't go back and read the threads) or changed.
>
> Otherwise it looks great!
>
> --Guido

Thanks!

-
Yury
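For illustration, here is what the empty-sentinel style being argued for
here looks like at a call site, as it eventually shipped in Python 3.3's
inspect module - a minimal sketch, assuming (as Guido requests) that
return_annotation is always present and defaults to the empty sentinel:

    from inspect import signature

    def describe(func):
        sig = signature(func)
        # No hasattr() dance: the attribute always exists, and the
        # sentinel marks "no annotation given".
        if sig.return_annotation is sig.empty:
            print(func.__name__, '-> no return annotation')
        else:
            print(func.__name__, '->', sig.return_annotation)

    def f(x) -> int:
        return x

    def g(x):
        return x

    describe(f)   # f -> <class 'int'>
    describe(g)   # g -> no return annotation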
From guido at python.org  Fri Jun 22 21:18:17 2012
From: guido at python.org (Guido van Rossum)
Date: Fri, 22 Jun 2012 12:18:17 -0700
Subject: [Python-Dev] A Desperate Plea For Introspection (aka: BDFAP Needed)
In-Reply-To: <65E4D23A-4977-41DD-B65F-2EF8419086A0@gmail.com>
References: <4FE4BA0B.5070800@hastings.org> <65E4D23A-4977-41DD-B65F-2EF8419086A0@gmail.com>
Message-ID:

On Fri, Jun 22, 2012 at 12:10 PM, Yury Selivanov wrote:
> Guido,
>
> On 2012-06-22, at 2:52 PM, Guido van Rossum wrote:
>
>> This looks great, much better than the version I reviewed half a year
>> ago! Thank you and others (especially Yury) for all your efforts in
>> guiding the discussion and implementing as the discussion went along;
>> also thanks to Nick for participating so intensely.
>>
>> Quick review notes:
>>
>> (1) I don't like APIs that require you to use hasattr(), which you use
>> for return_annotation. Since you have Signature.empty to remove it,
>> why not set it to Signature.empty when not set? Ditto for other values
>> that are currently not set when they don't apply (Parameter.empty I
>> think.)
>
> I think that if a function lacks an annotation, that should be reflected
> in the same way for its signature.
>
> Currently:
>
>     if hasattr(signature, 'return_annotation'):
>
> If we use Signature.empty:
>
>     if signature.return_annotation is not signature.empty:
>
> So (in my humble opinion) it doesn't simplify things too much.

But it is much more of a pain if someone just wants to pass the value
on or print it. Instead of

print(sig.return_annotation)

you'd have to write

if hasattr(sig, 'return_annotation'):
    print(sig.return_annotation)
else:
    print('empty')

In general hasattr() (or getattr() with a 3rd arg) has a code smell;
making it a required part of an API truly stinks IMO.

> And also you can use 'try .. except AttributeError .. else' blocks,
> which make code even more readable.

No, try/except blocks make code *less* readable.

> All in all, I don't have a very strong opinion on this topic.

But I do. :-)

> 'empty' will also work. When python-dev collectively decided to
> go with missing attributes, 'empty' didn't yet exist (we added
> it with 'replace()' methods).
>
> If you think that using 'empty' is better, we can add that to the PEP.

Yes, please do.

>> (2) Could use an example on how to remove and add parameters using replace().
>
> You have to build a new list of parameters and then pass it to 'replace()'.
> Example (from the actual signature() function implementation):
>
>     if isinstance(obj, types.MethodType):
>         # In this case we skip the first parameter of the underlying
>         # function (usually `self` or `cls`).
>         sig = signature(obj.__func__)
>         return sig.replace(parameters=tuple(sig.parameters.values())[1:])
>
>> (3) You are using name(arg1, *, arg2) a lot. I think in most cases you
>> mean for arg2 to be an optional keyword arg, but this notation doesn't
>> convey that it is optional. Can you clarify?
>
> Yes, I meant optional. Would 'name(arg1, *, [arg2])' be better?

Hardly, because that's not valid syntax. I'd write name(arg1, *,
arg2=<optional>).

>> (4) "If the object is a method" -- shouldn't that be "bound method"?
>> (Unbound methods are undetectable.) Or is there some wider definition
>> of method? What does it do for static or class methods?
>
> Yes, it should be "If the object is a bound method". We'll fix this
> shortly.

Great.
> classmethod as a descriptor returns a BoundMethod (bound to the class),
> staticmethod returns the original unmodified function, so both of
> them are supported automatically.

Oh, great. IIRC it wasn't always like that. Maybe just add this to the
PEP as a note?

>> (5) Too bad there's no proposal for adding signatures to builtin
>> functions/methods, but understood.
>>
>> Of these, only (1) is a blocker for PEP acceptance -- I'd either like
>> to see this defended vigorously (maybe it was discussed? then please
>> quote, I can't go back and read the threads) or changed.
>>
>> Otherwise it looks great!
>>
>> --Guido
>
> Thanks!

Thank *you* for your hard work!

--
--Guido van Rossum (python.org/~guido)

From lists at cheimes.de  Fri Jun 22 21:21:14 2012
From: lists at cheimes.de (Christian Heimes)
Date: Fri, 22 Jun 2012 21:21:14 +0200
Subject: [Python-Dev] A Desperate Plea For Introspection (aka: BDFAP Needed)
In-Reply-To:
References: <4FE4BA0B.5070800@hastings.org>
Message-ID: <4FE4C5AA.7040909@cheimes.de>

Am 22.06.2012 20:52, schrieb Guido van Rossum:
> (5) Too bad there's no proposal for adding signatures to builtin
> functions/methods, but understood.

Larry et al. did an experiment with a mutable __signature__ attribute to
PyCFunction. He immediately backed out and removed the attribute as I
explained that it breaks isolation between subinterpreter instances.

The PEP is already complex enough and went through several incarnations.
It was a wise decision to focus on the features that could be implemented
before the first beta is released. Kudos for pulling it off, Larry!

Signatures for builtin functions should be handled by a new PEP. We need
a way to extract or define the signatures (perhaps parse the C code and
parse PyArg_* signatures) and a secure way to store the signature
(perhaps implement the signature class in C?). That's a LOT of work.

Christian

From yselivanov.ml at gmail.com  Fri Jun 22 21:24:41 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Fri, 22 Jun 2012 15:24:41 -0400
Subject: [Python-Dev] A Desperate Plea For Introspection (aka: BDFAP Needed)
In-Reply-To:
References: <4FE4BA0B.5070800@hastings.org> <65E4D23A-4977-41DD-B65F-2EF8419086A0@gmail.com>
Message-ID:

On 2012-06-22, at 3:18 PM, Guido van Rossum wrote:
> On Fri, Jun 22, 2012 at 12:10 PM, Yury Selivanov
> wrote:
>> Guido,
>>
>> On 2012-06-22, at 2:52 PM, Guido van Rossum wrote:
> ...
>> 'empty' will also work. When python-dev collectively decided to
>> go with missing attributes, 'empty' didn't yet exist (we added
>> it with 'replace()' methods).
>>
>> If you think that using 'empty' is better, we can add that to the PEP.
>
> Yes, please do.

OK

>>> (2) Could use an example on how to remove and add parameters using replace().
>>
>> You have to build a new list of parameters and then pass it to 'replace()'.
>> Example (from the actual signature() function implementation):
>>
>>     if isinstance(obj, types.MethodType):
>>         # In this case we skip the first parameter of the underlying
>>         # function (usually `self` or `cls`).
>>         sig = signature(obj.__func__)
>>         return sig.replace(parameters=tuple(sig.parameters.values())[1:])
>>
>>> (3) You are using name(arg1, *, arg2) a lot. I think in most cases you
>>> mean for arg2 to be an optional keyword arg, but this notation doesn't
>>> convey that it is optional. Can you clarify?
>>
>> Yes, I meant optional. Would 'name(arg1, *, [arg2])' be better?
>
> Hardly, because that's not valid syntax. I'd write name(arg1, *,
> arg2=<optional>).
Like

    replace(*, name=<optional>, kind=<optional>, default=<optional>,
                                annotation=<optional>) -> Parameter

or

    replace(*, name=<unchanged>, kind=<unchanged>, default=<unchanged>,
                                annotation=<unchanged>) -> Parameter

>>> (4) "If the object is a method" -- shouldn't that be "bound method"?
>>> (Unbound methods are undetectable.) Or is there some wider definition
>>> of method? What does it do for static or class methods?
>>
>> Yes, it should be "If the object is a bound method". We'll fix this
>> shortly.
>
> Great.
>
>> classmethod as a descriptor returns a BoundMethod (bound to the class),
>> staticmethod returns the original unmodified function, so both of
>> them are supported automatically.
>
> Oh, great. IIRC it wasn't always like that. Maybe just add this to the
> PEP as a note?

OK. I'll clarify that.

-
Yury

From guido at python.org  Fri Jun 22 21:24:43 2012
From: guido at python.org (Guido van Rossum)
Date: Fri, 22 Jun 2012 12:24:43 -0700
Subject: [Python-Dev] A Desperate Plea For Introspection (aka: BDFAP Needed)
In-Reply-To: <4FE4C5AA.7040909@cheimes.de>
References: <4FE4BA0B.5070800@hastings.org> <4FE4C5AA.7040909@cheimes.de>
Message-ID:

On Fri, Jun 22, 2012 at 12:21 PM, Christian Heimes wrote:
> Am 22.06.2012 20:52, schrieb Guido van Rossum:
>> (5) Too bad there's no proposal for adding signatures to builtin
>> functions/methods, but understood.
>
> Larry et al. did an experiment with a mutable __signature__ attribute to
> PyCFunction. He immediately backed out and removed the attribute as I
> explained that it breaks isolation between subinterpreter instances.

Good point. Maybe the PEP could explain this (remember that a good PEP
also mentions some rejected ideas and the reason why they were rejected).

> The PEP is already complex enough and went through several incarnations.
> It was a wise decision to focus on the features that could be implemented
> before the first beta is released. Kudos for pulling it off, Larry!

Indeed, limiting the scope in this way was very wise.

> Signatures for builtin functions should be handled by a new PEP. We need
> a way to extract or define the signatures (perhaps parse the C code and
> parse PyArg_* signatures) and a secure way to store the signature
> (perhaps implement the signature class in C?). That's a LOT of work.

Agreed it's an open problem. I just hope someone will tackle it next.

--
--Guido van Rossum (python.org/~guido)

From lists at cheimes.de  Fri Jun 22 21:25:14 2012
From: lists at cheimes.de (Christian Heimes)
Date: Fri, 22 Jun 2012 21:25:14 +0200
Subject: [Python-Dev] A Desperate Plea For Introspection (aka: BDFAP Needed)
In-Reply-To: <65E4D23A-4977-41DD-B65F-2EF8419086A0@gmail.com>
References: <4FE4BA0B.5070800@hastings.org> <65E4D23A-4977-41DD-B65F-2EF8419086A0@gmail.com>
Message-ID:

Am 22.06.2012 21:10, schrieb Yury Selivanov:
> I think that if a function lacks an annotation, that should be reflected
> in the same way for its signature.
>
> Currently:
>
>     if hasattr(signature, 'return_annotation'):
>
> If we use Signature.empty:
>
>     if signature.return_annotation is not signature.empty:
>
> So (in my humble opinion) it doesn't simplify things too much.
> And also you can use 'try .. except AttributeError .. else' blocks,
> which make code even more readable.

The second form has two benefits:

* you get a sensible error message when you mistype the name of the
attribute. hasattr(signature, 'return_annotatoin') is clearly an error,
hard to notice with the naked eye and passes silently.

* modern Python IDEs have code completion. "signature.re is not
signature.em" saves keystrokes.
Christian

From solipsis at pitrou.net  Fri Jun 22 21:23:10 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Fri, 22 Jun 2012 21:23:10 +0200
Subject: [Python-Dev] A reference leak with PyType_FromSpec
Message-ID: <20120622212310.29833b29@pitrou.net>

Hi,

As mentioned in the commit message for 96513d71e650, creating a type
using PyType_FromSpec seems to leak references when the type is
instantiated. This happens with SSLError:

>>> e = ssl.SSLError()
>>> sys.getrefcount(ssl.SSLError)
35
>>> e = ssl.SSLError()
>>> sys.getrefcount(ssl.SSLError)
36
>>> e = ssl.SSLError()
>>> sys.getrefcount(ssl.SSLError)
37

(the SSLError definition is quite simple; it only uses the Py_tp_base,
Py_tp_doc and Py_tp_str slots)

The SSLError subclasses, e.g. SSLWantReadError, which are created using
PyErr_NewExceptionWithDoc, are not affected:

>>> e = ssl.SSLWantReadError()
>>> sys.getrefcount(ssl.SSLWantReadError)
8
>>> e = ssl.SSLWantReadError()
>>> sys.getrefcount(ssl.SSLWantReadError)
8
>>> e = ssl.SSLWantReadError()
>>> sys.getrefcount(ssl.SSLWantReadError)
8

Regards

Antoine.
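For illustration, the probe Antoine is applying above in a reusable
form - a minimal sketch, not part of any patch, which assumes the class
can be instantiated with no arguments:

    import sys

    def leaks_type_refs(cls, n=100):
        # If each instantiation leaks a reference to the type itself,
        # the type's refcount grows by about n; otherwise it stays flat.
        before = sys.getrefcount(cls)
        for _ in range(n):
            cls()
        after = sys.getrefcount(cls)
        return (after - before) >= n

    # e.g. leaks_type_refs(ssl.SSLError) would return True while the
    # bug described above is present.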
From guido at python.org  Fri Jun 22 21:26:48 2012
From: guido at python.org (Guido van Rossum)
Date: Fri, 22 Jun 2012 12:26:48 -0700
Subject: [Python-Dev] A Desperate Plea For Introspection (aka: BDFAP Needed)
In-Reply-To:
References: <4FE4BA0B.5070800@hastings.org> <65E4D23A-4977-41DD-B65F-2EF8419086A0@gmail.com>
Message-ID:

On Fri, Jun 22, 2012 at 12:24 PM, Yury Selivanov wrote:
> On 2012-06-22, at 3:18 PM, Guido van Rossum wrote:
>
>> On Fri, Jun 22, 2012 at 12:10 PM, Yury Selivanov
>> wrote:
>>> Guido,
>>>
>>> On 2012-06-22, at 2:52 PM, Guido van Rossum wrote:
> ...
>>> 'empty' will also work. When python-dev collectively decided to
>>> go with missing attributes, 'empty' didn't yet exist (we added
>>> it with 'replace()' methods).
>>>
>>> If you think that using 'empty' is better, we can add that to the PEP.
>>
>> Yes, please do.
>
> OK
>
>>>> (2) Could use an example on how to remove and add parameters using replace().
>>>
>>> You have to build a new list of parameters and then pass it to 'replace()'.
>>> Example (from the actual signature() function implementation):
>>>
>>>     if isinstance(obj, types.MethodType):
>>>         # In this case we skip the first parameter of the underlying
>>>         # function (usually `self` or `cls`).
>>>         sig = signature(obj.__func__)
>>>         return sig.replace(parameters=tuple(sig.parameters.values())[1:])
>>>
>>>> (3) You are using name(arg1, *, arg2) a lot. I think in most cases you
>>>> mean for arg2 to be an optional keyword arg, but this notation doesn't
>>>> convey that it is optional. Can you clarify?
>>>
>>> Yes, I meant optional. Would 'name(arg1, *, [arg2])' be better?
>>
>> Hardly, because that's not valid syntax. I'd write name(arg1, *,
>> arg2=<optional>).
>
> Like
>
>     replace(*, name=<optional>, kind=<optional>, default=<optional>,
>                                 annotation=<optional>) -> Parameter
>
> or
>
>     replace(*, name=<unchanged>, kind=<unchanged>, default=<unchanged>,
>                                 annotation=<unchanged>) -> Parameter

Either one's an improvement, but you'll have to explain at the top of
the PEP what you intend this notation to mean. I'd go with <unchanged>
since the key thing here seems to be that various keywords, when not
specified, mean that nothing changes. OTOH in some places you can
probably write "foo=Signature.empty" (etc.).

>>>> (4) "If the object is a method" -- shouldn't that be "bound method"?
>>>> (Unbound methods are undetectable.) Or is there some wider definition
>>>> of method? What does it do for static or class methods?
>>>
>>> Yes, it should be "If the object is a bound method". We'll fix this
>>> shortly.
>>
>> Great.
>>
>>> classmethod as a descriptor returns a BoundMethod (bound to the class),
>>> staticmethod returns the original unmodified function, so both of
>>> them are supported automatically.
>>
>> Oh, great. IIRC it wasn't always like that. Maybe just add this to the
>> PEP as a note?
>
> OK. I'll clarify that.
>
> -
> Yury

--
--Guido van Rossum (python.org/~guido)

From larry at hastings.org  Fri Jun 22 21:32:34 2012
From: larry at hastings.org (Larry Hastings)
Date: Fri, 22 Jun 2012 12:32:34 -0700
Subject: [Python-Dev] A Desperate Plea For Introspection (aka: BDFAP Needed)
In-Reply-To: <4FE4C5AA.7040909@cheimes.de>
References: <4FE4BA0B.5070800@hastings.org> <4FE4C5AA.7040909@cheimes.de>
Message-ID: <4FE4C852.9090402@hastings.org>

On 06/22/2012 12:21 PM, Christian Heimes wrote:
> The PEP is already complex enough and went through several incarnations.
> It was a wise decision to focus on the features that could be implemented
> before the first beta is released. Kudos for pulling it off, Larry!

Guys, guys! I have done next-to-nothing on this PEP. Yury has
absolutely been the one driving it since it rose from the dead this
year. All the kudos should be addressed to him.

The only feature I contributed is the one everyone savagely *hated*
(the "is_implemented" attribute). That was taken out and shot last
week. ;-)

> Signatures for builtin functions should be handled by a new PEP. We need
> a way to extract or define the signatures (perhaps parse the C code and
> parse PyArg_* signatures) and a secure way to store the signature
> (perhaps implement the signature class in C?). That's a LOT of work.

I'm growing something for 3.4; I hope to preview it in the next month
or two. Does it really need a new PEP, though? It's an implementation
detail for CPython; it shouldn't affect Python-level interfaces one jot.
It's essentially a replacement for PyArg_ParseTupleAndKeywords()... I
didn't think that merited a PEP.

Cheers,

//arry/
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From larry at hastings.org  Fri Jun 22 21:36:39 2012
From: larry at hastings.org (Larry Hastings)
Date: Fri, 22 Jun 2012 12:36:39 -0700
Subject: [Python-Dev] [Python-checkins] cpython: Issue #14769: test_capi now has SkipitemTest, which cleverly checks
In-Reply-To: <4FE4B7A5.9030004@mrabarnett.plus.com>
References: <4FE49FCE.8040209@udel.edu> <4FE4B7A5.9030004@mrabarnett.plus.com>
Message-ID: <4FE4C947.4030106@hastings.org>

On 06/22/2012 11:21 AM, MRAB wrote:
> On 22/06/2012 17:39, Terry Reedy wrote:
>> You sensibly only test printable ascii chars, which are in the
>> contiguous range 32 to 127 inclusive. So it makes no sense to claim
>> otherwise and then deny the wrong claim, or to enlarge the range and
>> then shrink it again.
> ASCII character 127 is a control character, not a printable character.
>
>>> + This function brute-force tests all** ASCII characters (1 to 127
>>> + inclusive) as format units, checking to see that
>
> There are 128 ASCII characters (0 to 127 inclusive).

Okay, message received. I'll test from 32 to 126 inclusive. I'm going
to be obnoxious and code those values straight in--which I concede will
be a maintenance nightmare should the ASCII standard change.

//arry/
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
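For illustration, the boundary values under discussion are easy to
confirm from Python itself: str.isprintable() is true for exactly the
ASCII code points 32 through 126 (space through '~'), with 127 (DEL)
being a control character.

    # Printable ASCII is exactly the code points 32..126.
    assert all(chr(i).isprintable() for i in range(32, 127))
    assert not any(chr(i).isprintable() for i in range(0, 32))
    assert not chr(127).isprintable()  # DEL is a control character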
URL: From ethan at stoneleaf.us Fri Jun 22 21:47:47 2012 From: ethan at stoneleaf.us (Ethan Furman) Date: Fri, 22 Jun 2012 12:47:47 -0700 Subject: [Python-Dev] A Desperate Plea For Introspection (aka: BDFAP Needed) In-Reply-To: References: <4FE4BA0B.5070800@hastings.org> <65E4D23A-4977-41DD-B65F-2EF8419086A0@gmail.com> Message-ID: <4FE4CBE3.8080506@stoneleaf.us> Guido van Rossum wrote: > On Fri, Jun 22, 2012 at 12:24 PM, Yury Selivanov > wrote: >> On 2012-06-22, at 3:18 PM, Guido van Rossum wrote: >> >>> On Fri, Jun 22, 2012 at 12:10 PM, Yury Selivanov >>> wrote: >>>> Yes, I meant optional. Would 'name(arg1, *, [arg2])' be better? >>> Hardly, because that's not valid syntax. I'd write name(arg1, *, >>> arg2=<default>). >> Like >> >> replace(*, name=<new name>, kind=<new kind>, default=<new default>, >> annotation=<new annotation>) -> Parameter >> >> or >> >> replace(*, name=<optional>, kind=<optional>, default=<optional>, >> annotation=<optional>) -> Parameter > > Either one's an improvement, but you'll have to explain at the top of > the PEP what you intend this notation to mean. I'd go with <optional> > since the key thing here seems to be that various keywords, when not > specified, mean that nothing changes. OTOH in some places you can > probably write "foo=Signature.empty" (etc.). Parameter names that follow '*' in the signature are not optional (unless that has changed since 3.2). In other words, the above signature requires that name, kind, default, and annotation be specified by name *and* be given values when replace is called. ~Ethan~ From lists at cheimes.de Fri Jun 22 21:49:38 2012 From: lists at cheimes.de (Christian Heimes) Date: Fri, 22 Jun 2012 21:49:38 +0200 Subject: [Python-Dev] A Desperate Plea For Introspection (aka: BDFAP Needed) In-Reply-To: <4FE4C852.9090402@hastings.org> References: <4FE4BA0B.5070800@hastings.org> <4FE4C5AA.7040909@cheimes.de> <4FE4C852.9090402@hastings.org> Message-ID: <4FE4CC52.3040508@cheimes.de> Am 22.06.2012 21:32, schrieb Larry Hastings: > > On 06/22/2012 12:21 PM, Christian Heimes wrote: >> The PEP is already complex enough and went to several incarnations. It >> was a wise decision to focus on the features that could be implemented >> before the first beta is released. Kudos for pulling it off, Larry! > > Guys, guys! I have done next-to-nothing on this PEP. Yury has > absolutely been the one driving it since it rose from the dead this > year. All the kudos should be addressed to him. No kudos for you then and all kudos to Yury. Sorry for the misconception, Yury! :) Larry, you shouldn't underrate your role for the PEP on the Python dev list. You did a fine job managing the 'political' side over the past days and weeks. Christian From yselivanov.ml at gmail.com Fri Jun 22 21:52:55 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Fri, 22 Jun 2012 15:52:55 -0400 Subject: [Python-Dev] A Desperate Plea For Introspection (aka: BDFAP Needed) In-Reply-To: <4FE4CBE3.8080506@stoneleaf.us> References: <4FE4BA0B.5070800@hastings.org> <65E4D23A-4977-41DD-B65F-2EF8419086A0@gmail.com> <4FE4CBE3.8080506@stoneleaf.us> Message-ID: <4EDD9709-21F0-4DC0-BA34-E72BEA51169F@gmail.com> On 2012-06-22, at 3:47 PM, Ethan Furman wrote: > Guido van Rossum wrote: >> On Fri, Jun 22, 2012 at 12:24 PM, Yury Selivanov >> wrote: >>> On 2012-06-22, at 3:18 PM, Guido van Rossum wrote: >>> >>>> On Fri, Jun 22, 2012 at 12:10 PM, Yury Selivanov >>>> wrote: >>>>> Yes, I meant optional. Would 'name(arg1, *, [arg2])' be better? >>>> Hardly, because that's not valid syntax. I'd write name(arg1, *, >>>> arg2=<default>).
>>> Like >>> >>> replace(*, name=<new name>, kind=<new kind>, default=<new default>, >>> annotation=<new annotation>) -> Parameter >>> >>> or >>> >>> replace(*, name=<optional>, kind=<optional>, default=<optional>, >>> annotation=<optional>) -> Parameter >> Either one's an improvement, but you'll have to explain at the top of >> the PEP what you intend this notation to mean. I'd go with <optional> >> since the key thing here seems to be that various keywords, when not >> specified, mean that nothing changes. OTOH in some places you can >> probably write "foo=Signature.empty" (etc.). > > Parameter names that follow '*' in the signature are not optional (unless that has changed since 3.2). In other words, the above signature requires that name, kind, default, and annotation be specified by name *and* be given values when replace is called. I know. Those are optional keyword-only arguments. In the code: def replace(self, *, name=_void, kind=_void, annotation=_void, default=_void): We just need some clear convention for the PEP - and the <optional> mark should work. - Yury From solipsis at pitrou.net Fri Jun 22 21:56:04 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 22 Jun 2012 21:56:04 +0200 Subject: [Python-Dev] A reference leak with PyType_FromSpec References: <20120622212310.29833b29@pitrou.net> Message-ID: <20120622215604.0ae3e6df@pitrou.net> On Fri, 22 Jun 2012 21:23:10 +0200 Antoine Pitrou wrote: > > Hi, > > As mentioned in the commit message for 96513d71e650, creating a type > using PyType_FromSpec seems to leak references when the type is > instantiated. This happens with SSLError: The patch in http://bugs.python.org/issue15142 seems to fix it. Feedback welcome from typeobject experts :) Regards Antoine. From tjreedy at udel.edu Fri Jun 22 21:59:39 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Fri, 22 Jun 2012 15:59:39 -0400 Subject: [Python-Dev] Feature Freeze In-Reply-To: <4FE4C0CA.9040203@stoneleaf.us> References: <4FE4C0CA.9040203@stoneleaf.us> Message-ID: On 6/22/2012 3:00 PM, Ethan Furman wrote: > can somebody review (and commit! :) issues: > > http://bugs.python.org/issue14954 About weakref, no response yet. Beyond my knowledge. > http://bugs.python.org/issue14617 About __hash__, a short response from Éric Araujo. I might look at this. -- Terry Jan Reedy From tjreedy at udel.edu Fri Jun 22 22:08:24 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Fri, 22 Jun 2012 16:08:24 -0400 Subject: [Python-Dev] ssh://hg@hg.python.org/cpython unstable? Message-ID: 90% of the way through recloning cpython on Win7, I got the PuTTY error: Network error: software caused connection abort TortoiseHg said abort: stream ended unexpectedly (got 53602 bytes, expected 55236) Two retries gave the same PuTTY error almost immediately, with the hg message no suitable response from remote hg A third retry started the download, but it failed again after 10 minutes with a similar message from hg. Is ssh://hg at hg.python.org/cpython unstable (intermittently down)? Or is something strange happening on my system? (I have previously downloaded multi-gigabyte files, though not today and with perhaps more error-tolerant downloaders.)
-- Terry Jan Reedy From pje at telecommunity.com Fri Jun 22 22:11:17 2012 From: pje at telecommunity.com (PJ Eby) Date: Fri, 22 Jun 2012 16:11:17 -0400 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <4FE43946.2050505@astro.uio.no> References: <4FE0F336.7030709@netwok.org> <20120620134612.3a9f7cfe@pitrou.net> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <8359F922B4E045BD819F7811725306D6@gmail.com> <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> <4FE43946.2050505@astro.uio.no> Message-ID: On Fri, Jun 22, 2012 at 5:22 AM, Dag Sverre Seljebotn < d.s.seljebotn at astro.uio.no> wrote: > On 06/22/2012 10:40 AM, Paul Moore wrote: > >> On 22 June 2012 06:05, Nick Coghlan wrote: >> >>> distutils really only plays at the SRPM level - there is no defined OS >>> neutral RPM equivalent. That's why I brought up the bdist_simple >>> discussion earlier in the thread - if we can agree on a standard >>> bdist_simple format, then we can more cleanly decouple the "build" >>> step from the "install" step. >>> >> >> That was essentially the key insight I was trying to communicate in my >> "think about the end users" comment. Thanks, Nick! >> > > The subtlety here is that there's no way to know before building the > package what files should be installed. (For simple extensions, and perhaps > documentation, you could get away with ad-hoc rules or special support for > Sphinx and what-not, but there's no general solution that works in all > cases.) > > What Bento does is have one metadata file for the source-package, and > another metadata file (manifest) for the built-package. The latter is > normally generated by the build process (but follows a standard > nevertheless). Then that manifest is used for installation (through several > available methods). > This is the right thing to do, IMO. Also, I think rather than bikeshedding the One Serialization To Rule Them All, it should only be the *built* manifest that is standardized for tool consumption, and leave source descriptions to end-user tools. setup.cfg, bento.info, or whatever... that part should NOT be the first thing designed, and should not be the part that's frozen in a spec, since it otherwise locks out the ability to enhance that format. There's also been a huge amount of previous discussion regarding setup.cfg, which anyone proposing to alter it should probably read. setup.cfg allows hooks to external systems, so IIUC, you should be able to write a setup.cfg file that contains little besides your publication metadata (name, version, dependencies) and a hook to invoke whatever build tools you want, as long as you're willing to write a Python hook. This means that bikeshedding the build process is totally beside the point. If people want to use distutils, bento, SCons, ... it really doesn't matter, as long as they're willing to write a hook. This is a killer app for "packaging", as it frees up the stdlib from having to do every bloody thing itself and create One Build Process To Rule Them All. I didn't invent setup.cfg or write the "packaging" code, but I support this design approach wholeheartedly. I have only the smallest of quibbles and questions with it, but they aren't showstoppers. I've already had some discussion about these points on Distutils-SIG, and I think that should be continued. If there *is* to be any major discussion about switching directions in packaging, the place to start should be *use cases* rather than file formats. 
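For a rough illustration of the split described above, a project's setup.cfg could reduce to publication metadata plus a hook that hands the build off to an external tool. This is a sketch only: the section and option names are recalled from the distutils2 documentation and may not be exact, and the hook module is hypothetical.

    [metadata]
    name = example
    version = 0.1

    [global]
    setup_hooks = examplehooks.setup_hook

    # examplehooks.py -- a hypothetical hook module; the hook receives the
    # parsed configuration and may drive any external build tool before
    # the normal commands run.
    def setup_hook(config):
        run_external_build(config)  # placeholder for an SCons/waf/make driver

The point of such a design is that the installer never needs to know which build tool ran; it only consumes the resulting metadata and the files the build produced.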
-------------- next part -------------- An HTML attachment was scrubbed... URL: From yselivanov.ml at gmail.com Fri Jun 22 22:18:22 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Fri, 22 Jun 2012 16:18:22 -0400 Subject: [Python-Dev] A Desperate Plea For Introspection (aka: BDFAP Needed) In-Reply-To: References: <4FE4BA0B.5070800@hastings.org> <65E4D23A-4977-41DD-B65F-2EF8419086A0@gmail.com> Message-ID: <4D244E1E-950E-4C8A-ADC9-C08391544C87@gmail.com> On 2012-06-22, at 3:18 PM, Guido van Rossum wrote: > On Fri, Jun 22, 2012 at 12:10 PM, Yury Selivanov > wrote: >> Guido, >> >> On 2012-06-22, at 2:52 PM, Guido van Rossum wrote: >>> Of these, only (1) is a blocker for PEP acceptance -- I'd either like >>> to see this defended vigorously (maybe it was discussed? then please >>> quote, I can't go back and read the threads) or changed. >>> >>> Otherwise it looks great! OK, we've updated the PEP: http://www.python.org/dev/peps/pep-0362/ ( and the implementation: http://bugs.python.org/issue15008 ) Please take a look. - Yury From tjreedy at udel.edu Fri Jun 22 22:20:52 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Fri, 22 Jun 2012 16:20:52 -0400 Subject: [Python-Dev] A Desperate Plea For Introspection (aka: BDFAP Needed) In-Reply-To: References: <4FE4BA0B.5070800@hastings.org> <65E4D23A-4977-41DD-B65F-2EF8419086A0@gmail.com> Message-ID: On 6/22/2012 3:24 PM, Yury Selivanov wrote: > On 2012-06-22, at 3:18 PM, Guido van Rossum wrote: >> Hardly, because that's not valid syntax. I'd write name(arg1, *, >> arg2=<default>). > > Like > > replace(*, name=<new name>, kind=<new kind>, default=<new default>, > annotation=<new annotation>) -> Parameter > > or > > replace(*, name=<optional>, kind=<optional>, default=<optional>, > annotation=<optional>) -> Parameter I do not understand the 'or'. I hope you mean default argument expressions in the standard manner rather than '<new name>' or '<optional>'. -- Terry Jan Reedy From tjreedy at udel.edu Fri Jun 22 22:29:07 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Fri, 22 Jun 2012 16:29:07 -0400 Subject: [Python-Dev] [Python-checkins] cpython: Issue #14769: test_capi now has SkipitemTest, which cleverly checks In-Reply-To: <4FE4B7A5.9030004@mrabarnett.plus.com> References: <4FE49FCE.8040209@udel.edu> <4FE4B7A5.9030004@mrabarnett.plus.com> Message-ID: On 6/22/2012 2:21 PM, MRAB wrote: > On 22/06/2012 17:39, Terry Reedy wrote: >> On 6/22/2012 6:57 AM, larry.hastings wrote: >>> http://hg.python.org/cpython/rev/ace45d23628a >>> changeset: 77567:ace45d23628a >>> user: Larry Hastings >>> date: Fri Jun 22 03:56:29 2012 -0700 >>> summary: >>> Issue #14769: test_capi now has SkipitemTest, which cleverly checks >>> for "parity" between PyArg_ParseTuple() and the Python/getargs.c static >>> function skipitem() for all possible "format units". >> >> You sensibly only test printable ascii chars, which are in the >> contiguous range 32 to 127 inclusive. So it makes no sense to claim >> otherwise and then deny the wrong claim, or to enlarge the range and >> then shrink it again. >> > ASCII character 127 is a control character, not a printable character. Of course. And character 32 is also not usable and perhaps not worth testing. >>> + This function brute-force tests all** ASCII characters (1 to >>> 127 >>> + inclusive) as format units, checking to see that >> > There are 128 ASCII characters (0 to 127 inclusive). > >> With a few exceptions**, test all printable ASCII characters (32 to 127 >> inclusive) as... >> >>> + >>> + ** Okay, it actually skips some ASCII characters. Some >>> characters
>>> + have special funny semantics, and it would be difficult to >>> + accommodate them here. >> >>> + for i in range(1, 128): >> >> for i in range(32, 128): so, for i in range(32, 127): or for i in range(33, 127): >>> + if (not c.isprintable()) or (c in '()e|$'): >> >> if c in '()e|$': >> -- Terry Jan Reedy From cournape at gmail.com Fri Jun 22 22:35:09 2012 From: cournape at gmail.com (David Cournapeau) Date: Fri, 22 Jun 2012 21:35:09 +0100 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <20120620134612.3a9f7cfe@pitrou.net> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <8359F922B4E045BD819F7811725306D6@gmail.com> <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> <4FE43946.2050505@astro.uio.no> Message-ID: On Fri, Jun 22, 2012 at 9:11 PM, PJ Eby wrote: > On Fri, Jun 22, 2012 at 5:22 AM, Dag Sverre Seljebotn > wrote: >> On 06/22/2012 10:40 AM, Paul Moore wrote: >>> On 22 June 2012 06:05, Nick Coghlan wrote: >>>> distutils really only plays at the SRPM level - there is no defined OS >>>> neutral RPM equivalent. That's why I brought up the bdist_simple >>>> discussion earlier in the thread - if we can agree on a standard >>>> bdist_simple format, then we can more cleanly decouple the "build" >>>> step from the "install" step. >>> >>> That was essentially the key insight I was trying to communicate in my >>> "think about the end users" comment. Thanks, Nick! >> >> The subtlety here is that there's no way to know before building the >> package what files should be installed. (For simple extensions, and perhaps >> documentation, you could get away with ad-hoc rules or special support for >> Sphinx and what-not, but there's no general solution that works in all >> cases.) >> >> What Bento does is have one metadata file for the source-package, and >> another metadata file (manifest) for the built-package. The latter is >> normally generated by the build process (but follows a standard >> nevertheless). Then that manifest is used for installation (through several >> available methods). > > This is the right thing to do, IMO. > > Also, I think rather than bikeshedding the One Serialization To Rule Them > All, it should only be the *built* manifest that is standardized for tool > consumption, and leave source descriptions to end-user tools. setup.cfg, > bento.info, or whatever... that part should NOT be the first thing > designed, and should not be the part that's frozen in a spec, since it > otherwise locks out the ability to enhance that format. Agreed. I may not have been very clear before, but the bento.info format is really peripheral to what bento is about (it just happens that what would become bento was started as a two-hour proof of concept for another packaging discussion three years ago :) ). As for the build manifest, I have a few, very outdated notes here: http://cournape.github.com/Bento/html/hacking.html#build-manifest-and-building-installers I will try to update them this weekend. I do have code to install, and to produce eggs, msi, .exe and .mpkg installers, from this format. The API is kind of crappy/inconsistent, but the features are there, and there are even some tests around it. I don't think it would be very difficult to hack distutils2 to produce this build manifest.
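To make the idea concrete for readers who skip the link: a built-package manifest records the concrete files a build produced, grouped by abstract install categories that are resolved to real paths at install time. The sketch below is illustrative only -- the field names are invented for this example and are not Bento's actual schema.

    manifest = {
        "name": "example",
        "version": "0.1",
        # abstract categories, mapped to concrete directories at install time
        "install_paths": {"purelib": "$sitedir", "platlib": "$sitedir", "scripts": "$bindir"},
        # concrete build products, grouped by category
        "files": {
            "purelib": ["example/__init__.py"],
            "platlib": ["example/_speedups.so"],
            "scripts": ["scripts/example-tool"],
        },
    }

An installer, or an egg/msi/mpkg generator, then only copies each listed file to wherever its category points on the target system; no build logic has to run at install time.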
David From solipsis at pitrou.net Fri Jun 22 22:40:29 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 22 Jun 2012 22:40:29 +0200 Subject: [Python-Dev] removing packaging References: <4FE0F336.7030709@netwok.org> Message-ID: <20120622224029.3cec31c5@pitrou.net> On Tue, 19 Jun 2012 17:46:30 -0400 Éric Araujo wrote: > > With beta coming, a way to deal with that unfortunate situation needs > to be found. We could (a) grant an exception to packaging to allow > changes after beta1; (b) keep packaging as it is now under a provisional > status, with due warnings that many things are expected to change; (c) > remove the unstable parts and deliver a subset that works (proposed by > Tarek to the Pyramid author on distutils-sig); (d) not release packaging > as part of Python 3.3 (I think that was also suggested on distutils-sig > last month). > > I don't think (a) would give us enough time; we really want a few > months (and releases) to hash out the API (most notably with the pip and > buildout developers) and clean the bdist situation. Likewise (c) would > require developer (my) time that is currently in short supply. (b) also > requires time and would make development harder, not to mention probable > user pain. This leaves (d), after long reflection, as my preferred > choice, even though I disliked the idea at first (and I fully expect > Tarek to feel the same way). So if you want to remove packaging from 3.3, it should be before the beta, which will be out in a few days. Regards Antoine. From yselivanov.ml at gmail.com Fri Jun 22 22:44:39 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Fri, 22 Jun 2012 16:44:39 -0400 Subject: [Python-Dev] A Desperate Plea For Introspection (aka: BDFAP Needed) In-Reply-To: References: <4FE4BA0B.5070800@hastings.org> <65E4D23A-4977-41DD-B65F-2EF8419086A0@gmail.com> Message-ID: <85A96578-0B5B-4776-AAA2-8EA735C1B400@gmail.com> On 2012-06-22, at 4:20 PM, Terry Reedy wrote: > On 6/22/2012 3:24 PM, Yury Selivanov wrote: >> On 2012-06-22, at 3:18 PM, Guido van Rossum wrote: > >>> Hardly, because that's not valid syntax. I'd write name(arg1, *, >>> arg2=<default>). >> >> Like >> >> replace(*, name=<new name>, kind=<new kind>, default=<new default>, >> annotation=<new annotation>) -> Parameter >> >> or >> >> replace(*, name=<optional>, kind=<optional>, default=<optional>, >> annotation=<optional>) -> Parameter > > I do not understand the 'or'. I hope you mean default argument expressions in the standard manner rather than '<new name>' or '<optional>'. It's now in the updated PEP. We use the '<optional>' notation just to specify that the parameter is optional, i.e. it can (or in some cases should) be omitted. - Yury From larry at hastings.org Fri Jun 22 22:45:20 2012 From: larry at hastings.org (Larry Hastings) Date: Fri, 22 Jun 2012 13:45:20 -0700 Subject: [Python-Dev] [Python-checkins] cpython: Issue #14769: test_capi now has SkipitemTest, which cleverly checks In-Reply-To: References: <4FE49FCE.8040209@udel.edu> <4FE4B7A5.9030004@mrabarnett.plus.com> Message-ID: <4FE4D960.3060606@hastings.org> On 06/22/2012 01:29 PM, Terry Reedy wrote: > Of course. And character 32 is also not usable and perhaps not > worth testing. Au contraire! I grant you, it's hard to imagine how using it would be a good idea. But strictly speaking it is *usable*. (And what is this thread about if not wanton pedantry!) Therefore I'm leaving it in. Feel free to go behind my back and fix it if you feel strongly about this. //arry/ -------------- next part -------------- An HTML attachment was scrubbed...
URL: From yselivanov.ml at gmail.com Fri Jun 22 22:48:59 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Fri, 22 Jun 2012 16:48:59 -0400 Subject: [Python-Dev] A Desperate Plea For Introspection (aka: BDFAP Needed) In-Reply-To: References: <4FE4BA0B.5070800@hastings.org> <65E4D23A-4977-41DD-B65F-2EF8419086A0@gmail.com> Message-ID: <657D6CE5-8396-4B06-82B7-EC5B3410EBC9@gmail.com> On 2012-06-22, at 3:25 PM, Christian Heimes wrote: > Am 22.06.2012 21:10, schrieb Yury Selivanov: >> I think that if a function lacks an annotation, that should be reflected >> in the same way for its signature. >> >> Currently: >> >> if hasattr(signature, 'return_annotation'): >> >> If we use Signature.empty: >> >> if signature.return_annotation is not signature.empty: >> >> So (in my humble opinion) it doesn't simplify things too much. >> And also you can use 'try .. except AttributeError .. else' blocks, >> which make code even more readable. > > The second form has two benefits: > > * you get a sensible error message when you mistype the name of the > attribute. hasattr(signature, 'return_annotatoin') is clearly an error, > hard to notice with the naked eye, and passes silently. > > * modern Python IDEs have code completion. "signature.re is not > signature.em" saves keystrokes. Agree on both. This change also cut 20 lines from the implementation. So I guess it is a good decision after all ;) - Yury From tjreedy at udel.edu Fri Jun 22 22:55:17 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Fri, 22 Jun 2012 16:55:17 -0400 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <8359F922B4E045BD819F7811725306D6@gmail.com> <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> <4FE43946.2050505@astro.uio.no> <4FE4406C.5040009@astro.uio.no> Message-ID: On 6/22/2012 6:09 AM, Vinay Sajip wrote: > Easy enough on Posix platforms, perhaps, but what about Windows? Every time Windows users download and install a binary, they are taking a chance. I try to use a bit more sense than some people, but I know it is not risk free. There *is* a third-party site that builds installers, but should I trust it? I would prefer that (except perhaps for known and trusted authors) PyPI compile binaries, perhaps after running code through a security checker, followed by running it through one or more virus checkers. > One can't expect a C compiler to be installed everywhere. Just having 'a C compiler' is not enough to compile on Windows. -- Terry Jan Reedy From ethan at stoneleaf.us Fri Jun 22 22:58:40 2012 From: ethan at stoneleaf.us (Ethan Furman) Date: Fri, 22 Jun 2012 13:58:40 -0700 Subject: [Python-Dev] A Desperate Plea For Introspection (aka: BDFAP Needed) In-Reply-To: <4EDD9709-21F0-4DC0-BA34-E72BEA51169F@gmail.com> References: <4FE4BA0B.5070800@hastings.org> <65E4D23A-4977-41DD-B65F-2EF8419086A0@gmail.com> <4FE4CBE3.8080506@stoneleaf.us> <4EDD9709-21F0-4DC0-BA34-E72BEA51169F@gmail.com> Message-ID: <4FE4DC80.6020106@stoneleaf.us> Yury Selivanov wrote: > On 2012-06-22, at 3:47 PM, Ethan Furman wrote: > >> Guido van Rossum wrote: >>> On Fri, Jun 22, 2012 at 12:24 PM, Yury Selivanov >>> wrote: >>>> On 2012-06-22, at 3:18 PM, Guido van Rossum wrote: >>>> >>>>> On Fri, Jun 22, 2012 at 12:10 PM, Yury Selivanov >>>>> wrote: >>>>>> Yes, I meant optional. Would 'name(arg1, *, [arg2])' be better? >>>>> Hardly, because that's not valid syntax.
I'd write name(arg1, *, >>>>> arg2=<default>). >>>> Like >>>> >>>> replace(*, name=<new name>, kind=<new kind>, default=<new default>, >>>> annotation=<new annotation>) -> Parameter >>>> >>>> or >>>> >>>> replace(*, name=<optional>, kind=<optional>, default=<optional>, >>>> annotation=<optional>) -> Parameter >>> Either one's an improvement, but you'll have to explain at the top of >>> the PEP what you intend this notation to mean. I'd go with <optional> >>> since the key thing here seems to be that various keywords, when not >>> specified, mean that nothing changes. OTOH in some places you can >>> probably write "foo=Signature.empty" (etc.). >> Parameter names that follow '*' in the signature are not optional (unless that has changed since 3.2). In other words, the above signature requires that name, kind, default, and annotation be specified by name *and* be given values when replace is called. > > I know. Those are optional keyword-only arguments. > > In the code: > > def replace(self, *, name=_void, kind=_void, annotation=_void, > default=_void): > > We just need some clear convention for the PEP - and the > <optional> mark should work. That looks strange to me -- I suggest putting brackets around each one, like: replace(*, [name=<optional>,] [kind=<optional>,] [default=<optional>,] [annotation=<optional>]) -> Parameter which is still a bit noisy. At the risk of raising the ire of those who don't condone the use of '...' :) how about: replace(*, [name=...,] [kind=...,] [default=...,] [annotation=...] ) -> Parameter ~Ethan~ From yselivanov.ml at gmail.com Fri Jun 22 23:04:03 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Fri, 22 Jun 2012 17:04:03 -0400 Subject: [Python-Dev] A Desperate Plea For Introspection (aka: BDFAP Needed) In-Reply-To: <4FE4DC80.6020106@stoneleaf.us> References: <4FE4BA0B.5070800@hastings.org> <65E4D23A-4977-41DD-B65F-2EF8419086A0@gmail.com> <4FE4CBE3.8080506@stoneleaf.us> <4EDD9709-21F0-4DC0-BA34-E72BEA51169F@gmail.com> <4FE4DC80.6020106@stoneleaf.us> Message-ID: On 2012-06-22, at 4:58 PM, Ethan Furman wrote: > That looks strange to me -- I suggest putting brackets around each one, like: > > replace(*, [name=<optional>,] [kind=<optional>,] [default=<optional>,] [annotation=<optional>]) -> Parameter Isn't it too much? The PEP clearly indicates '=<optional>' is just a notation for an optional parameter. If it's that much of an issue, we can use '=_void' instead, as it is in the implementation, and describe how it works. But that's just noise that will distract the reader from the PEP. > which is still a bit noisy. At the risk of raising the ire of those who don't condone the use of '...' :) how about: > > replace(*, [name=...,] [kind=...,] [default=...,] [annotation=...] > ) -> Parameter That may confuse someone, as ... - Ellipsis - is a legitimate object to be used as a default value or annotation: def foo(bar:...=...)->...: pass I'd keep it simple ;) - Yury From donald.stufft at gmail.com Fri Jun 22 23:06:06 2012 From: donald.stufft at gmail.com (Donald Stufft) Date: Fri, 22 Jun 2012 17:06:06 -0400 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <8359F922B4E045BD819F7811725306D6@gmail.com> <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> <4FE43946.2050505@astro.uio.no> <4FE4406C.5040009@astro.uio.no> Message-ID: <79CB331AF07C420AAE05C5F7824C5472@gmail.com> On Friday, June 22, 2012 at 4:55 PM, Terry Reedy wrote: > > Every time Windows users download and install a binary, they are taking > a chance.
I try to use a bit more sense than some people, but I know it > is not risk free. There *is* a third-party site that builds installers, > but should I trust it? I would prefer that (except perhaps for known and > trusted authors) PyPI compile binaries, perhaps after running code > through a security checker, followed by running it through one or more > virus checkers. > I think you overestimate the abilities of "security checkers" and antivirus. Installing from PyPI is a risk, whether you use source or binaries. There is currently not a very good security story for installing Python packages from PyPI (not all of this falls on PyPI), but even if we get to a point where there is, PyPI can never be as safe as installing from RPMs or DEBs, and somewhat more so in the case of binaries. You _have_ to make a case-by-case choice whether you trust the authors/maintainers of a particular package. -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Fri Jun 22 23:26:38 2012 From: guido at python.org (Guido van Rossum) Date: Fri, 22 Jun 2012 14:26:38 -0700 Subject: [Python-Dev] A Desperate Plea For Introspection (aka: BDFAP Needed) In-Reply-To: References: <4FE4BA0B.5070800@hastings.org> <65E4D23A-4977-41DD-B65F-2EF8419086A0@gmail.com> <4FE4CBE3.8080506@stoneleaf.us> <4EDD9709-21F0-4DC0-BA34-E72BEA51169F@gmail.com> <4FE4DC80.6020106@stoneleaf.us> Message-ID: I am accepting the PEP. Congrats Yury! (And others who feel they deserve it. :-) On Fri, Jun 22, 2012 at 2:04 PM, Yury Selivanov wrote: > On 2012-06-22, at 4:58 PM, Ethan Furman wrote: >> That looks strange to me -- I suggest putting brackets around each one, like: >> >> replace(*, [name=<optional>,] [kind=<optional>,] [default=<optional>,] [annotation=<optional>]) -> Parameter > > Isn't it too much? The PEP clearly indicates '=<optional>' is just > a notation for an optional parameter. > > If it's that much of an issue, we can use '=_void' instead, as it is > in the implementation, and describe how it works. But that's just noise > that will distract the reader from the PEP. > >> which is still a bit noisy. At the risk of raising the ire of those who don't condone the use of '...' :) how about: >> >> replace(*, [name=...,] [kind=...,] [default=...,] [annotation=...] >> ) -> Parameter > > That may confuse someone, as ... - Ellipsis - is a legitimate object > to be used as a default value or annotation: > > def foo(bar:...=...)->...: > pass > > I'd keep it simple ;) Please leave in. -- --Guido van Rossum (python.org/~guido) From yselivanov.ml at gmail.com Fri Jun 22 23:31:33 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Fri, 22 Jun 2012 17:31:33 -0400 Subject: [Python-Dev] A Desperate Plea For Introspection (aka: BDFAP Needed) In-Reply-To: References: <4FE4BA0B.5070800@hastings.org> <65E4D23A-4977-41DD-B65F-2EF8419086A0@gmail.com> <4FE4CBE3.8080506@stoneleaf.us> <4EDD9709-21F0-4DC0-BA34-E72BEA51169F@gmail.com> <4FE4DC80.6020106@stoneleaf.us> Message-ID: On 2012-06-22, at 5:26 PM, Guido van Rossum wrote: > I am accepting the PEP. Congrats Yury! (And others who feel they deserve it. :-) Great! Larry will merge the implementation then.
Larry, Brett and I worked on the PEP together (~200 emails in private discussions), so everybody deserves ;) - Yury From python at mrabarnett.plus.com Sat Jun 23 00:48:19 2012 From: python at mrabarnett.plus.com (MRAB) Date: Fri, 22 Jun 2012 23:48:19 +0100 Subject: [Python-Dev] [Python-checkins] cpython: Issue #14769: test_capi now has SkipitemTest, which cleverly checks In-Reply-To: <4FE4D960.3060606@hastings.org> References: <4FE49FCE.8040209@udel.edu> <4FE4B7A5.9030004@mrabarnett.plus.com> <4FE4D960.3060606@hastings.org> Message-ID: <4FE4F633.3020801@mrabarnett.plus.com> On 22/06/2012 21:45, Larry Hastings wrote: > On 06/22/2012 01:29 PM, Terry Reedy wrote: >> Of course. And character 32 is also not usable and perhaps not >> worth testing. > > Au contraire! I grant you, it's hard to imagine how using it would be a > good idea. But strictly speaking it is *usable*. (And what is this > thread about if not wanton pedantry!) > > Therefore I'm leaving it in. Feel free to go behind my back and fix it > if you feel strongly about this. > It _is_ a printable character after all. From larry at hastings.org Sat Jun 23 01:15:39 2012 From: larry at hastings.org (Larry Hastings) Date: Fri, 22 Jun 2012 16:15:39 -0700 Subject: [Python-Dev] [Python-checkins] cpython: Issue #14769: test_capi now has SkipitemTest, which cleverly checks In-Reply-To: <4FE4F633.3020801@mrabarnett.plus.com> References: <4FE49FCE.8040209@udel.edu> <4FE4B7A5.9030004@mrabarnett.plus.com> <4FE4D960.3060606@hastings.org> <4FE4F633.3020801@mrabarnett.plus.com> Message-ID: <4FE4FC9B.6010405@hastings.org> On 06/22/2012 03:48 PM, MRAB wrote: > On 22/06/2012 21:45, Larry Hastings wrote: >> On 06/22/2012 01:29 PM, Terry Reedy wrote: >>> Of course. And character 32 is also not usable and perhaps not >>> worth testing. >> >> Au contraire! I grant you, it's hard to imagine how using it would be a >> good idea. But strictly speaking it is *usable*. (And what is this >> thread about if not wanton pedantry!) >> >> Therefore I'm leaving it in. Feel free to go behind my back and fix it >> if you feel strongly about this. >> > It _is_ a printable character after all. Well, if we wanted to get *really* pedantic--and, who am I kidding, of course we do, this is python-dev--any byte besides 0 would *work* as a format unit, printable or not. But who among us would dare? //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjreedy at udel.edu Sat Jun 23 06:30:52 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Sat, 23 Jun 2012 00:30:52 -0400 Subject: [Python-Dev] ssh://hg@hg.python.org/cpython unstable? In-Reply-To: References: Message-ID: Since worked ok. -- Terry Jan Reedy From martin at v.loewis.de Sat Jun 23 09:29:33 2012 From: martin at v.loewis.de (martin at v.loewis.de) Date: Sat, 23 Jun 2012 09:29:33 +0200 Subject: [Python-Dev] ssh://hg@hg.python.org/cpython unstable? In-Reply-To: References: Message-ID: <20120623092933.Horde.TyI4H1NNcXdP5XBdBBS2zCA@webmail.df.eu> Yes. The network link on dinsdale is maxed out, since it hosts too many services (and since there was a massive increase in traffic over the last year). As a consequence, connections may time out. We are working on migrating services to a new machine. Unfortunately, the current hold-up is that OSU/OSL is slow in assigning IP addresses, which means that we cannot create new VMs as quickly as we wish. Regards, Martin From stephen at xemacs.org Sat Jun 23 10:02:18 2012 From: stephen at xemacs.org (Stephen J. 
Turnbull) Date: Sat, 23 Jun 2012 17:02:18 +0900 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <87d34rammv.fsf@uwakimon.sk.tsukuba.ac.jp> <878vffa5c9.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: <8762aia22d.fsf@uwakimon.sk.tsukuba.ac.jp> Paul Moore writes: > I suppose if you're saying that "pip install lxml" should download and > install for me Visual Studio, libxml2 sources and any dependencies, > and run all the builds, then you're right. But I assume you're not. Indeed, if only a source package is available, it should. That's precisely what happens for source builds that depend on a non-system compiler on say Gentoo or MacPorts. What I'm saying is that the packaging system should *always be prepared* to offer that service (perhaps via plugins to OS distros' PMSes). Whether a particular package does, or not, presumably is up to the package's maintainer. Even if a binary package is available, it may only be partial. Indeed, by some definitions it alway will be (it will depend on an OS being installed!) Such a package should *also* offer the ability to fix up an incomplete system, by building from source if needed. Why can I say "should" here? Because if the packaging standard is decent and appropriate tools provided, there will be a source package because it's the easiest way to create a distribution! Such tools will surely be able to search the system for a preinstalled dependency, or a binary package cache for an installable package. A "complete" binary package would just provide the package cache on its distribution medium. > So why should I need to install Visual Studio just to *use* lxml? Because the packaging standard cannot mandate "high quality" packages from any given user's perspective, it can only provide the necessary features to implement them. If the lxml maintainer chooses to depend on a pre-installed libxml2, AFAICS you're SOL -- you need to go elsewhere for the library. VS + libxml2 source is just the most reliable way to go elsewhere in some sense (prebuilt binaries have a habit of showing up late or built with incompatible compiler options or the like). > But I do think that there's a risk that the discussion, because it > is necessarily driven by developers, forgets that "end users" > really don't have some tools that a developer would consider > "trivial" to have. I don't understand the problem. As long as binary packages have a default spec to depend on nothing but a browser to download the MSI, all you need is a buildbot that has a minimal Windows installation, and it won't be forgotten. The developers may forget to check, but the bot will remember! I certainly agree that the *default* spec should be that if you can get your hands on binary installer, that's all you need -- it will do *all* the work needed. Heck, it's not clear to me what else the default spec might be for a binary package. OTOH, if the developers make a conscious decision to depend on a given library, and/or a compiler, being pre-installed on the target system, what can Python or the packaging standard do about that? 
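Stephen's "always be prepared to build" position amounts to a fallback chain that an installer could implement. A minimal sketch follows, with every helper name hypothetical:

    def obtain(requirement):
        if find_installed(requirement):           # 1. already satisfied on this system
            return
        pkg = lookup_binary_cache(requirement)    # 2. a prebuilt binary, if one was published
        if pkg is None:                           # 3. otherwise fetch the source package
            pkg = build(download_sdist(requirement))  #    and build it locally
        install(pkg)

As long as a source package always exists (because it is the easiest thing for the packaging tools to produce), step 3 remains available even when no binary was ever uploaded.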
From cf.natali at gmail.com Sat Jun 23 10:28:12 2012 From: cf.natali at gmail.com (=?ISO-8859-1?Q?Charles=2DFran=E7ois_Natali?=) Date: Sat, 23 Jun 2012 10:28:12 +0200 Subject: [Python-Dev] Checking if unsigned int less then zero. In-Reply-To: <302e463f840221a066fb659e4963abb6@tochlab.net> References: <302e463f840221a066fb659e4963abb6@tochlab.net> Message-ID: > Playing with cpython source, I found some strange strings in > socketmodule.c: > > --- > if (flowinfo < 0 || flowinfo > 0xfffff) { > PyErr_SetString( > PyExc_OverflowError, > "getsockaddrarg: flowinfo must be 0-1048575."); > return 0; > } > --- > > --- > if (flowinfo < 0 || flowinfo > 0xfffff) { > PyErr_SetString(PyExc_OverflowError, > "getsockaddrarg: flowinfo must be 0-1048575."); > return NULL; > } > --- > > The flowinfo variable declared few strings above as unsgined int. Is > there any practical sense in this check? Seems like gcc just removes > this check. I think any compiler will generate code that checks as > unsigned, for example in x86 its JAE/JGE. May be this code is for "bad" > compilers or exotic arch? Removed. Thanks, cf From mstefanro at gmail.com Sat Jun 23 12:19:05 2012 From: mstefanro at gmail.com (M Stefan) Date: Sat, 23 Jun 2012 13:19:05 +0300 Subject: [Python-Dev] On a new version of pickle [PEP 3154]: self-referential frozensets Message-ID: <4FE59819.8030304@gmail.com> Hello, I'm one of this year's Google Summer of Code students working on improving pickle by creating a new version. My name is Stefan and my mentor is Alexandre Vassalotti. If you're interested, you can monitor the progress in the dedicated blog at [2] and the bitbucket repository at [3]. One of the goals for picklev4 is to add native opcodes for pickling of sets and frozensets. Currently these 4 opcodes were added: * EMPTY_SET, EMPTY_FROZENSET: push an empty set/frozenset in the stack * UPDATE_SET: update the set in the stack with the top stack slice stack before: ... pyset mark stackslice stack after : ... pyset effect: pyset.update(stackslice) # inplace union * UNION_FROZENSET: like UPDATE_SET, but create a new frozenset stack before: ... pyfrozenset mark stackslice stack after : ... pyfrozenset.union(stackslice) While this design allows pickling of self-referential sets, self-referential frozensets are still problematic. For instance, trying to pickle `fs': a=A(); fs=frozenset([a]); a.fs = fs (when unpickling, the object a has to be initialized before it is added to the frozenset) The only way I can think of to make this work is to postpone the initialization of all the objects inside the frozenset until after UNION_FROZENSET. I believe this is doable, but there might be memory penalties if the approach is to simply store all the initialization opcodes in memory until pickling the frozenset is finished. 
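To make the ordering constraint concrete, here is the creation order that the postponed-initialization scheme implies at unpickling time, as a short runnable sketch (A, a and fs match the example above; BUILD is shown as a plain __dict__ update, though it would call __setstate__ if one were defined):

    class A:
        pass

    a = A.__new__(A)         # NEWOBJ: create the object, state still unset
    fs = frozenset([a])      # UNION_FROZENSET: a is hashable even without its state
    a.__dict__['fs'] = fs    # postponed BUILD: only now can the cycle be closed
    assert next(iter(fs)).fs is fs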
Currently, pickle.dumps(fs,4) generates: EMPTY_FROZENSET BINPUT 0 MARK BINGLOBAL_COMMON '0 A' # same as GLOBAL '__main__ A' in v3 EMPTY_TUPLE NEWOBJ EMPTY_DICT SHORT_BINUNICODE 'fs' BINGET 0 # retrieves the frozenset which is empty at this point, and it # will never be filled because it's immutable SETITEM BUILD # a.__setstate__({'fs' : frozenset()}) UNION_FROZENSET By postponing the initialization of a, it should instead generate: EMPTY_FROZENSET BINPUT 0 MARK BINGLOBAL_COMMON '0 A' # same as GLOBAL '__main__ A' in v3 EMPTY_TUPLE NEWOBJ # create the object but don't initialize its state yet BINPUT 1 UNION_FROZENSET BINGET 1 EMPTY_DICT SHORT_BINUNICODE 'fs' BINGET 0 SETITEM BUILD POP While self-referential frozensets are uncommon, a far more problematic situation arises with self-referential objects created with REDUCE. While pickle uses the idea of creating empty collections and then filling them, reduce typically creates already-filled objects. For instance: cnt = collections.Counter(); cnt[a]=3; a.cnt=cnt; cnt.__reduce__() (<class 'collections.Counter'>, ({<__main__.A object at 0x0286E8F8>: 3},)) where the A object contains a reference to the counter. Unpickling an object pickled with this reduce function is not possible, because the reduce function, which "explains" how to create the object, is asking for the object to exist before being created. The fix here would be to pass Counter's dictionary in the state argument, as opposed to the "constructor parameters" one, as follows: (<class 'collections.Counter'>, (), {<__main__.A object at 0x0286E8F8>: 3}) When unpickling this, an empty Counter will be created first, and then __setstate__ will be called to fill it, at which point self-references are allowed. I assume this modification has to be made in the implementations of the data structures rather than in pickle itself. Pickle could try to fix this by detecting when reduce returns a class type as the first tuple arg and moving the dict ctor parameter to the state, but this may not always be intended. It's also a bit strange that __getstate__ is never used anywhere in pickle directly. I'm looking forward to hearing your suggestions and opinions in this matter. Regards, Stefan [1] http://www.python.org/dev/peps/pep-3154/ [2] http://pypickle4.wordpress.com/ [3] http://bitbucket.org/mstefanro/pickle4 From peck at us.ibm.com Sat Jun 23 12:01:53 2012 From: peck at us.ibm.com (Jon K Peck) Date: Sat, 23 Jun 2012 04:01:53 -0600 Subject: [Python-Dev] AUTO: Jon K Peck is out of the office (returning 06/30/2012) Message-ID: I am out of the office until 06/30/2012. I will be out of the office the week of June 25. I expect to have some email access but will likely be delayed in responding. Note: This is an automated response to your message "Python-Dev Digest, Vol 107, Issue 104" sent on 06/23/2012 2:02:24. This is the only notification you will receive while this person is away. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From regebro at gmail.com Sat Jun 23 12:37:30 2012 From: regebro at gmail.com (Lennart Regebro) Date: Sat, 23 Jun 2012 12:37:30 +0200 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: <8762aia22d.fsf@uwakimon.sk.tsukuba.ac.jp> References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <87d34rammv.fsf@uwakimon.sk.tsukuba.ac.jp> <878vffa5c9.fsf@uwakimon.sk.tsukuba.ac.jp> <8762aia22d.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: Why do I get the feeling that most people who hate distutils and want to replace it have transferred those feelings to distutils2/packaging, mainly because of the name? In the end, I think this discussion is very similar to all previous packaging/building/installing discussions: there are a lot of emotions, and a lot of willingness to declare that "X sucks" but very little concrete explanation of *why* X sucks and why it can't be fixed. //Lennart From flub at devork.be Sat Jun 23 12:52:42 2012 From: flub at devork.be (Floris Bruynooghe) Date: Sat, 23 Jun 2012 11:52:42 +0100 Subject: [Python-Dev] Signed packages In-Reply-To: References: <4FE0F336.7030709@netwok.org> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <8359F922B4E045BD819F7811725306D6@gmail.com> <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> <4FE43946.2050505@astro.uio.no> <4FE448CE.7040909@astro.uio.no> <20120622161910.5baf0584@pitrou.net> <20120622172443.Horde.096CMqGZi1VP5I477AO0JhA@webmail.df.eu> <2D51471ABFFE4F0585166A44AAE8CCC7@gmail.com> Message-ID: On 22 June 2012 17:56, Donald Stufft wrote: > On Friday, June 22, 2012 at 12:54 PM, Alexandre Zani wrote: > > Key distribution is the real issue though. If there isn't a key > distribution infrastructure in place, we might as well not bother with > signatures. PyPI could issue x509 certs to packagers. You wouldn't be > able to verify that the name given is accurate, but you would be able > to verify that all packages with the same listed author are actually > by that author. > > I've been sketching out ideas for key distribution, but it's very much > a chicken and egg problem, very few people sign their packages (because > nothing uses it currently), and nobody is motivated to work on > infrastructure > or tooling because no one signs their packages. I'm surprised gpg hasn't been mentioned here. I think these are all solved problems; most free software that is signed is signed with the gpg key of the author. In that case all that is needed is for the cheeseshop to allow the uploading of the signature. As for key distribution, the keyservers take care of that just fine, and we'd probably see more and better-attended signing parties at Python conferences. Regards, Floris From g.brandl-nospam at gmx.net Sat Jun 23 12:54:46 2012 From: g.brandl-nospam at gmx.net (g.brandl-nospam at gmx.net) Date: Sat, 23 Jun 2012 12:54:46 +0200 Subject: [Python-Dev] 3.3 release plans Message-ID: <20120623105446.251580@gmx.net> Hi all, now that the last PEP scheduled for 3.3 is final, we're entering the next round of the 3.3 cycle. I've decided to make Tuesday 26th the big release day. That means: - Saturday: last feature-level changes that should be done before beta, e.g.
removal of packaging - Sunday: final feature freeze, bug fixing - Monday: focus on stability of the buildbots, even unstable ones - Tuesday: forking of the 3.3.0b1 release clone, tagging, start of binary building cheers, Georg -- NEW: FreePhone triple flat rate with a free smartphone! Find out more: http://mobile.1und1.de/?ac=OM.PW.PW003K20328T7073a From kristjan at ccpgames.com Sat Jun 23 13:00:34 2012 From: kristjan at ccpgames.com (Kristján Valur Jónsson) Date: Sat, 23 Jun 2012 11:00:34 +0000 Subject: [Python-Dev] 3.3 release plans In-Reply-To: <20120623105446.251580@gmx.net> References: <20120623105446.251580@gmx.net> Message-ID: I realize it is late, but any chance to get http://bugs.python.org/issue15139 in today? ________________________________________ From: python-dev-bounces+kristjan=ccpgames.com at python.org [python-dev-bounces+kristjan=ccpgames.com at python.org] on behalf of g.brandl-nospam at gmx.net [g.brandl-nospam at gmx.net] Sent: 23 June 2012 10:54 To: python-dev at python.org Subject: [Python-Dev] 3.3 release plans Hi all, now that the last PEP scheduled for 3.3 is final, we're entering the next round of the 3.3 cycle. I've decided to make Tuesday 26th the big release day. That means: - Saturday: last feature-level changes that should be done before beta, e.g. removal of packaging - Sunday: final feature freeze, bug fixing - Monday: focus on stability of the buildbots, even unstable ones - Tuesday: forking of the 3.3.0b1 release clone, tagging, start of binary building cheers, Georg -- NEW: FreePhone triple flat rate with a free smartphone! Find out more: http://mobile.1und1.de/?ac=OM.PW.PW003K20328T7073a _______________________________________________ Python-Dev mailing list Python-Dev at python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/kristjan%40ccpgames.com From d.s.seljebotn at astro.uio.no Sat Jun 23 13:13:23 2012 From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn) Date: Sat, 23 Jun 2012 13:13:23 +0200 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <87d34rammv.fsf@uwakimon.sk.tsukuba.ac.jp> <878vffa5c9.fsf@uwakimon.sk.tsukuba.ac.jp> <8762aia22d.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: <4FE5A4D3.90905@astro.uio.no> On 06/23/2012 12:37 PM, Lennart Regebro wrote: > Why do I get the feeling that most people who hate distutils and want > to replace it have transferred those feelings to distutils2/packaging, > mainly because of the name? > > In the end, I think this discussion is very similar to all previous > packaging/building/installing discussions: there are a lot of emotions, > and a lot of willingness to declare that "X sucks" but very little > concrete explanation of *why* X sucks and why it can't be fixed. I think David has been pretty concrete in a lot of his criticism too (though he has refrained from repeating himself too much in this thread). Some of the criticism is spelled out here: http://bento.readthedocs.org/en/latest/faq.html This blog post is even more concrete (but perhaps outdated?): http://cournape.wordpress.com/2010/10/13/271/ As for me, I believe I've been rather blunt and direct in my criticism in this thread: It's been said by Tarek that the distutils2 authors don't know anything about compilers.
Therefore it's almost inconceivable to me that much good can come from distutils2 *for my needs*. Even if packaging and building aren't the same thing, the two issues do tangle at a fundamental level, *and* most existing solutions already out there (RPM, MSI...) distribute compiled software, and therefore one needs a solid understanding of build processes to also understand these tools fully and draw on their experiences and avoid reinventing the wheel. Somebody with a deep understanding of 3-4 existing build systems and long experience in cross-platform builds and cross-architecture builds would need to be on board for me to take it seriously (even the packaging parts). As per Tarek's comments, I'm therefore pessimistic about the distutils2 efforts. (You can always tell me that I shouldn't criticise unless I'm willing to join and do something about it. That's fair. I'm just saying that my unwillingness to cheer for distutils2 is NOT based on the name only!) Dag From solipsis at pitrou.net Sat Jun 23 13:12:19 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 23 Jun 2012 13:12:19 +0200 Subject: [Python-Dev] 3.3 release plans References: <20120623105446.251580@gmx.net> Message-ID: <20120623131219.36c5fc35@pitrou.net> On Sat, 23 Jun 2012 11:00:34 +0000 Kristján Valur Jónsson wrote: > I realize it is late, but any chance to get http://bugs.python.org/issue15139 in today? -1. Regards Antoine. From ncoghlan at gmail.com Sat Jun 23 13:25:16 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 23 Jun 2012 21:25:16 +1000 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <87d34rammv.fsf@uwakimon.sk.tsukuba.ac.jp> <878vffa5c9.fsf@uwakimon.sk.tsukuba.ac.jp> <8762aia22d.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: On Sat, Jun 23, 2012 at 8:37 PM, Lennart Regebro wrote: > In the end, I think this discussion is very similar to all previous > packaging/building/installing discussions: there are a lot of emotions, > and a lot of willingness to declare that "X sucks" but very little > concrete explanation of *why* X sucks and why it can't be fixed. If you think that, you haven't read the whole thread. Thanks to this discussion, I now have a *much* clearer idea of what's broken, and a few ideas on what can be done to fix it. However, distutils-sig and python-ideas will be the place to post about those. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From flub at devork.be Sat Jun 23 13:32:54 2012 From: flub at devork.be (Floris Bruynooghe) Date: Sat, 23 Jun 2012 12:32:54 +0100 Subject: [Python-Dev] Signed packages In-Reply-To: References: <4FE0F336.7030709@netwok.org> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <8359F922B4E045BD819F7811725306D6@gmail.com> <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> <4FE43946.2050505@astro.uio.no> <4FE448CE.7040909@astro.uio.no> <20120622161910.5baf0584@pitrou.net> <20120622172443.Horde.096CMqGZi1VP5I477AO0JhA@webmail.df.eu> <2D51471ABFFE4F0585166A44AAE8CCC7@gmail.com> Message-ID: Oh sorry, having read the thread this spawned from I see you're talking about MS Windows signed binaries.
Something I know next to nothing about, so ignore my babbling. On 23 June 2012 11:52, Floris Bruynooghe wrote: > On 22 June 2012 17:56, Donald Stufft wrote: >> On Friday, June 22, 2012 at 12:54 PM, Alexandre Zani wrote: >> >> Key distribution is the real issue though. If there isn't a key >> distribution infrastructure in place, we might as well not bother with >> signatures. PyPI could issue x509 certs to packagers. You wouldn't be >> able to verify that the name given is accurate, but you would be able >> to verify that all packages with the same listed author are actually >> by that author. >> >> I've been sketching out ideas for key distribution, but it's very much >> a chicken and egg problem, very few people sign their packages (because >> nothing uses it currently), and nobody is motivated to work on >> infrastructure >> or tooling because no one signs their packages. > > > I'm surprised gpg hasn't been mentioned here. ?I think these are all > solved problems, most free software that is signed signs it with the > gpg key of the author. ?In that case all that is needed is that the > cheeseshop allows the uploading of the signature. ?As for key > distribution, the keyservers take care of that just fine and we'd > probably see more and better attended signing parties at python > conferences. > > Regards, > Floris -- Debian GNU/Linux -- The Power of Freedom www.debian.org | www.gnu.org | www.kernel.org From cournape at gmail.com Sat Jun 23 13:53:53 2012 From: cournape at gmail.com (David Cournapeau) Date: Sat, 23 Jun 2012 12:53:53 +0100 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <87d34rammv.fsf@uwakimon.sk.tsukuba.ac.jp> <878vffa5c9.fsf@uwakimon.sk.tsukuba.ac.jp> <8762aia22d.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: On Sat, Jun 23, 2012 at 12:25 PM, Nick Coghlan wrote: > On Sat, Jun 23, 2012 at 8:37 PM, Lennart Regebro wrote: >> In the end, I think this discussion is very similar to all previous >> packaging/building/installing discussions: There is a lot of emotions, >> and a lot of willingness to declare that "X sucks" but very little >> concrete explanation of *why* X sucks and why it can't be fixed. > > If you think that, you haven't read the whole thread. Thanks to this > discussion, I now have a *much* clearer idea of what's broken, and a > few ideas on what can be done to fix it. > > However, distutils-sig and python-ideas will be the place to post about those. Nick, I am unfamiliar with python-ideas rules: should we continue discussion in distutils-sig entirely, or are there some specific topics that are more appropriate for python-ideas ? 
David From regebro at gmail.com Sat Jun 23 13:55:26 2012 From: regebro at gmail.com (Lennart Regebro) Date: Sat, 23 Jun 2012 13:55:26 +0200 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <87d34rammv.fsf@uwakimon.sk.tsukuba.ac.jp> <878vffa5c9.fsf@uwakimon.sk.tsukuba.ac.jp> <8762aia22d.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: On Sat, Jun 23, 2012 at 1:25 PM, Nick Coghlan wrote: > If you think that, you haven't read the whole thread. This is true, I kinda gave up early yesterday. It's good that it became better. //Lennart From ncoghlan at gmail.com Sat Jun 23 13:57:51 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 23 Jun 2012 21:57:51 +1000 Subject: [Python-Dev] Status of packaging in 3.3 In-Reply-To: References: <4FE0F336.7030709@netwok.org> <4FE18FE6.3030703@ziade.org> <20120620111227.1058a864@pitrou.net> <20120620134612.3a9f7cfe@pitrou.net> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <87d34rammv.fsf@uwakimon.sk.tsukuba.ac.jp> <878vffa5c9.fsf@uwakimon.sk.tsukuba.ac.jp> <8762aia22d.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: On Sat, Jun 23, 2012 at 9:53 PM, David Cournapeau wrote: > Nick, I am unfamiliar with python-ideas rules: should we continue > discussion in distutils-sig entirely, or are there some specific > topics that are more appropriate for python-ideas ? No, I think I just need to join distutils-sig. python-ideas is more for ideas that don't have an appropriate SIG list. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From solipsis at pitrou.net Sat Jun 23 13:54:32 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 23 Jun 2012 13:54:32 +0200 Subject: [Python-Dev] 3.3 release plans References: <20120623105446.251580@gmx.net> <20120623131219.36c5fc35@pitrou.net> Message-ID: <20120623135432.38fd0ba4@pitrou.net> On Sat, 23 Jun 2012 13:12:19 +0200 Antoine Pitrou wrote: > On Sat, 23 Jun 2012 11:00:34 +0000 > Kristj?n Valur J?nsson wrote: > > I realize it is late, but any chance to get http://bugs.python.org/issue15139 in today? > > -1. Let me elaborate: the patch hasn't been reviewed, and it's a very minor improvement (assuming it's an improvement at all) in a rather delicate area. Regards Antoine. From martin at v.loewis.de Sat Jun 23 14:03:10 2012 From: martin at v.loewis.de (martin at v.loewis.de) Date: Sat, 23 Jun 2012 14:03:10 +0200 Subject: [Python-Dev] Signed packages In-Reply-To: References: <4FE0F336.7030709@netwok.org> <4FE342C9.10004@plope.com> <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org> <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org> <8359F922B4E045BD819F7811725306D6@gmail.com> <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> <4FE43946.2050505@astro.uio.no> <4FE448CE.7040909@astro.uio.no> <20120622161910.5baf0584@pitrou.net> <20120622172443.Horde.096CMqGZi1VP5I477AO0JhA@webmail.df.eu> <2D51471ABFFE4F0585166A44AAE8CCC7@gmail.com> Message-ID: <20120623140310.Horde.G7dnY8L8999P5bB_K3XH9vA@webmail.df.eu> > I'm surprised gpg hasn't been mentioned here. I think these are all > solved problems, most free software that is signed signs it with the > gpg key of the author. 
> In that case all that is needed is that the
> cheeseshop allows the uploading of the signature.

For the record, the cheeseshop has been supporting PGP signatures for
about ten years now. Several projects have been using that for quite a
while in their releases.

Regards,
Martin

From vinay_sajip at yahoo.co.uk  Sat Jun 23 14:27:52 2012
From: vinay_sajip at yahoo.co.uk (Vinay Sajip)
Date: Sat, 23 Jun 2012 12:27:52 +0000 (UTC)
Subject: [Python-Dev] Status of packaging in 3.3
References: <4FE0F336.7030709@netwok.org> <4FE342C9.10004@plope.com>
 <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org>
 <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org>
 <87d34rammv.fsf@uwakimon.sk.tsukuba.ac.jp>
 <878vffa5c9.fsf@uwakimon.sk.tsukuba.ac.jp>
 <8762aia22d.fsf@uwakimon.sk.tsukuba.ac.jp> <4FE5A4D3.90905@astro.uio.no>
Message-ID:

Dag Sverre Seljebotn <d.s.seljebotn at astro.uio.no> writes:

> As for me, I believe I've been rather blunt and direct in my criticism
> in this thread: it's been said by Tarek that the distutils2 authors
> don't know anything about compilers. Therefore it's almost
> inconceivable to me that much good can come from distutils2 *for my
> needs*. Even if packaging and building aren't the same, the two issues do
> tangle at a fundamental level, *and* most existing solutions already out
> there (RPM, MSI..) distribute compiled software, and therefore one needs
> a solid understanding of build processes to also understand these tools
> fully and draw on their experiences and avoid reinventing the wheel.

But packaging/distutils2 contains functionality for hooks, which can be
used to implement custom builds using tools that packaging/distutils2
doesn't need to know or care about (a hook will do that). One can imagine
that a set of commonly used templates would become available over time, so
that some problems wouldn't need to have solutions re-invented.

> Somebody with a deep understanding of 3-4 existing build systems and
> long experience in cross-platform builds and cross-architecture builds
> would need to be on board for me to take it seriously (even the
> packaging parts). As per Tarek's comments, I'm therefore pessimistic
> about the distutils2 efforts.

This deep understanding is not essential in the packaging/distutils2 team,
AFAICT. They just need to make sure that the hook APIs are sufficiently
flexible, that the hooks are invoked at the appropriate time, and that
they are adequately documented with appropriate examples.

For me, the bigger problem with the present distutils2/packaging
implementation is that it propagates the command-class style of design
which IMO caused so much pain in extending distutils. Perhaps some of the
dafter limitations have been removed, and no doubt the rationale was to
get to something usable more quickly, but it seems a bit like papering
over cracks. The basic low-level building blocks like versioning, metadata
and markers should be fine, but I'm not convinced that the command-class
paradigm is appropriate in this case. The whole intricate
"initialize_options"/"finalize_options"/"set_undefined_options"/
"get_finalized_command"/"reinitialize_command" dance just makes me say,
"Seriously?".
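(For anyone who hasn't had the pleasure, here is a minimal sketch of the
dance I mean -- the method and helper names are the real distutils ones,
but the command itself is invented for illustration:)

from distutils.cmd import Command

class build_frobnicate(Command):
    description = "invented example of the command-class protocol"
    user_options = [('build-dir=', None, "directory to build into")]

    def initialize_options(self):
        # Step 1: every option must first be set to a dummy value...
        self.build_dir = None

    def finalize_options(self):
        # Step 2: ...and is only resolved here, typically by copying it
        # from another, already-finalized command.
        self.set_undefined_options('build', ('build_base', 'build_dir'))

    def run(self):
        self.announce("frobnicating into %s" % self.build_dir)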
Regards,

Vinay Sajip

From kristjan at ccpgames.com  Sat Jun 23 14:32:37 2012
From: kristjan at ccpgames.com (Kristján Valur Jónsson)
Date: Sat, 23 Jun 2012 12:32:37 +0000
Subject: [Python-Dev] 3.3 release plans
In-Reply-To: <20120623135432.38fd0ba4@pitrou.net>
References: <20120623105446.251580@gmx.net>
 <20120623131219.36c5fc35@pitrou.net>, <20120623135432.38fd0ba4@pitrou.net>
Message-ID:

Au contraire, it is actually a very major improvement, the result of
pretty extensive profiling; see
http://blog.ccpgames.com/kristjan/2012/05/25/optimizing-python-condition-variables-with-telemetry/
The proposed patch reduces signaling latency in busy applications, as
demonstrated by the example program, from tens of milliseconds to about
one on my 64-bit Windows box.
This matters very much for applications using threading.Condition to
dispatch work to threads. This includes those using queue.Queue().

K

________________________________________
From: python-dev-bounces+kristjan=ccpgames.com at python.org
[python-dev-bounces+kristjan=ccpgames.com at python.org] on behalf of
Antoine Pitrou [solipsis at pitrou.net]
Sent: 23 June 2012 11:54
To: python-dev at python.org
Subject: Re: [Python-Dev] 3.3 release plans

On Sat, 23 Jun 2012 13:12:19 +0200
Antoine Pitrou wrote:
> On Sat, 23 Jun 2012 11:00:34 +0000
> Kristján Valur Jónsson wrote:
> > I realize it is late, but any chance to get
> > http://bugs.python.org/issue15139 in today?
>
> -1.

Let me elaborate: the patch hasn't been reviewed, and it's a very minor
improvement (assuming it's an improvement at all) in a rather delicate
area.

Regards

Antoine.

From solipsis at pitrou.net  Sat Jun 23 14:35:13 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat, 23 Jun 2012 14:35:13 +0200
Subject: [Python-Dev] Status of packaging in 3.3
References: <4FE0F336.7030709@netwok.org> <4FE342C9.10004@plope.com>
 <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org>
 <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org>
 <87d34rammv.fsf@uwakimon.sk.tsukuba.ac.jp>
 <878vffa5c9.fsf@uwakimon.sk.tsukuba.ac.jp>
 <8762aia22d.fsf@uwakimon.sk.tsukuba.ac.jp> <4FE5A4D3.90905@astro.uio.no>
Message-ID: <20120623143513.1dd1cfdd@pitrou.net>

On Sat, 23 Jun 2012 12:27:52 +0000 (UTC)
Vinay Sajip wrote:
>
> For me, the bigger problem with the present distutils2/packaging
> implementation is that it propagates the command-class style of design
> which IMO caused so much pain in extending distutils. Perhaps some of
> the dafter limitations have been removed, and no doubt the rationale was
> to get to something usable more quickly, but it seems a bit like
> papering over cracks.

Remember that distutils2 was at first distutils. It was only decided to
be forked as a "new" package when some people complained.
This explains a lot of the methodology. Also, forking distutils helped
maintain a strong level of compatibility.

Apparently people now think it's time to redesign it all. That's fine,
but it won't work without a huge amount of man-hours. It's not like you
can write a couple of PEPs and call it done.

Regards

Antoine.

From solipsis at pitrou.net  Sat Jun 23 14:37:43 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat, 23 Jun 2012 14:37:43 +0200
Subject: [Python-Dev] 3.3 release plans
In-Reply-To:
References: <20120623105446.251580@gmx.net> <20120623131219.36c5fc35@pitrou.net>
 <20120623135432.38fd0ba4@pitrou.net>
Message-ID: <20120623143743.239bbe9f@pitrou.net>

On Sat, 23 Jun 2012 12:32:37 +0000
Kristján Valur Jónsson wrote:
> Au contraire, it is actually a very major improvement, the result of
> pretty extensive profiling; see
> http://blog.ccpgames.com/kristjan/2012/05/25/optimizing-python-condition-variables-with-telemetry/

It might be. But to evaluate it, we have to digest a long technical blog
post, then carefully review the patch (and perhaps even re-do some
measurements). You understand that it's not reasonable to do so just one
day before feature freeze.

Regards

Antoine.

From d.s.seljebotn at astro.uio.no  Sat Jun 23 14:48:33 2012
From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn)
Date: Sat, 23 Jun 2012 14:48:33 +0200
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To:
References: <4FE0F336.7030709@netwok.org> <4FE342C9.10004@plope.com>
 <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org>
 <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org>
 <87d34rammv.fsf@uwakimon.sk.tsukuba.ac.jp>
 <878vffa5c9.fsf@uwakimon.sk.tsukuba.ac.jp>
 <8762aia22d.fsf@uwakimon.sk.tsukuba.ac.jp> <4FE5A4D3.90905@astro.uio.no>
Message-ID: <4FE5BB21.1020802@astro.uio.no>

On 06/23/2012 02:27 PM, Vinay Sajip wrote:
> Dag Sverre Seljebotn <d.s.seljebotn at astro.uio.no> writes:
>
>> As for me, I believe I've been rather blunt and direct in my criticism
>> in this thread: it's been said by Tarek that the distutils2 authors
>> don't know anything about compilers. Therefore it's almost
>> inconceivable to me that much good can come from distutils2 *for my
>> needs*. Even if packaging and building aren't the same, the two issues do
>> tangle at a fundamental level, *and* most existing solutions already out
>> there (RPM, MSI..) distribute compiled software, and therefore one needs
>> a solid understanding of build processes to also understand these tools
>> fully and draw on their experiences and avoid reinventing the wheel.
>
> But packaging/distutils2 contains functionality for hooks, which can be
> used to implement custom builds using tools that packaging/distutils2
> doesn't need to know or care about (a hook will do that). One can imagine
> that a set of commonly used templates would become available over time,
> so that some problems wouldn't need to have solutions re-invented.

Of course you can always do anything, as numpy.distutils is living proof
of. Question is if it is good design. Can I be confident that the hooks
are well-designed for my purposes? I think Bento's hook concept was
redesigned 2 or 3 times to make sure it fit well...

>> Somebody with a deep understanding of 3-4 existing build systems and
>> long experience in cross-platform builds and cross-architecture builds
>> would need to be on board for me to take it seriously (even the
>> packaging parts). As per Tarek's comments, I'm therefore pessimistic
>> about the distutils2 efforts.
>
> This deep understanding is not essential in the packaging/distutils2
> team, AFAICT. They just need to make sure that the hook APIs are
> sufficiently flexible, that the hooks are invoked at the appropriate
> time, and that they are adequately documented with appropriate examples.
>
> For me, the bigger problem with the present distutils2/packaging
> implementation is that it propagates the command-class style of design
> which IMO caused so much pain in extending distutils. Perhaps some of
> the dafter limitations have been removed, and no doubt the rationale was
> to get to something usable more quickly, but it seems a bit like
> papering over cracks. The basic low-level building blocks like
> versioning, metadata and markers should be fine, but I'm not convinced
> that the command-class paradigm is appropriate in this case. The whole
> intricate
> "initialize_options"/"finalize_options"/"set_undefined_options"/
> "get_finalized_command"/"reinitialize_command" dance just makes me say,
> "Seriously?".

And of course, propagating compilation options/configuration, and
auto-detecting configuration options, is one of the most important parts
of a complex build. Thus this seems to contradict what you say above.

(Sorry all, I felt like I should answer a direct challenge. This will be
my last post in this thread; I've subscribed to distutils-sig.)

Dag

From vinay_sajip at yahoo.co.uk  Sat Jun 23 15:14:42 2012
From: vinay_sajip at yahoo.co.uk (Vinay Sajip)
Date: Sat, 23 Jun 2012 13:14:42 +0000 (UTC)
Subject: [Python-Dev] Status of packaging in 3.3
References: <4FE0F336.7030709@netwok.org> <4FE342C9.10004@plope.com>
 <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org>
 <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org>
 <87d34rammv.fsf@uwakimon.sk.tsukuba.ac.jp>
 <878vffa5c9.fsf@uwakimon.sk.tsukuba.ac.jp>
 <8762aia22d.fsf@uwakimon.sk.tsukuba.ac.jp> <4FE5A4D3.90905@astro.uio.no>
 <20120623143513.1dd1cfdd@pitrou.net>
Message-ID:

Antoine Pitrou <solipsis at pitrou.net> writes:

> Remember that distutils2 was at first distutils. It was only decided to
> be forked as a "new" package when some people complained.
> This explains a lot of the methodology. Also, forking distutils helped
> maintain a strong level of compatibility.

Right, but distutils was hard to extend for a reason, even though designed
with extensibility in mind; hence the rise of setuptools. I understand the
pragmatic nature of the design decisions in packaging, but in this case a
little too much purity was sacrificed for practicality. Compatibility at a
command-line level should be possible to achieve even with a quite
different internal design.

> Apparently people now think it's time to redesign it all. That's fine,
> but it won't work without a huge amount of man-hours. It's not like you
> can write a couple of PEPs and call it done.

Surely. But more than implementation man-hours, it requires that people
are willing to devote some time and expertise to firming up the
requirements, use cases etc. that go into the PEPs. It's classic
chicken-and-egg; no one wants to invest that time until they know a
project's going somewhere and will have widespread backing, but the
project won't go anywhere quickly unless they step up and invest the time
up front. Kudos to Tarek, Éric and others for taking this particular world
on their shoulders and re-energizing the discussion and development work
to date, but it seems the net needs to be spread even wider to ensure that
all constituencies are represented (for example, features needed only on
Windows, such as binary distributions and executable scripts, have lagged
a little bit behind).

Regards,

Vinay Sajip

From vinay_sajip at yahoo.co.uk  Sat Jun 23 15:20:47 2012
From: vinay_sajip at yahoo.co.uk (Vinay Sajip)
Date: Sat, 23 Jun 2012 13:20:47 +0000 (UTC)
Subject: [Python-Dev] Status of packaging in 3.3
References: <4FE0F336.7030709@netwok.org> <4FE342C9.10004@plope.com>
 <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org>
 <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org>
 <87d34rammv.fsf@uwakimon.sk.tsukuba.ac.jp>
 <878vffa5c9.fsf@uwakimon.sk.tsukuba.ac.jp>
 <8762aia22d.fsf@uwakimon.sk.tsukuba.ac.jp> <4FE5A4D3.90905@astro.uio.no>
 <4FE5BB21.1020802@astro.uio.no>
Message-ID:

Dag Sverre Seljebotn <d.s.seljebotn at astro.uio.no> writes:

> Of course you can always do anything, as numpy.distutils is living proof
> of. Question is if it is good design. Can I be confident that the
> hooks are well-designed for my purposes?

Only you can look at the design to determine that.

> And of course, propagating compilation options/configuration, and
> auto-detecting configuration options, is one of the most important parts
> of a complex build. Thus this seems to contradict what you say above.

I'm not talking about those needs being invalid; just that distutils' way
of fulfilling those needs is too fragile - else, why do you need things
like Bento?

Regards,

Vinay Sajip

From solipsis at pitrou.net  Sat Jun 23 15:28:02 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat, 23 Jun 2012 15:28:02 +0200
Subject: [Python-Dev] Status of packaging in 3.3
References: <4FE0F336.7030709@netwok.org> <4FE342C9.10004@plope.com>
 <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org>
 <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org>
 <87d34rammv.fsf@uwakimon.sk.tsukuba.ac.jp>
 <878vffa5c9.fsf@uwakimon.sk.tsukuba.ac.jp>
 <8762aia22d.fsf@uwakimon.sk.tsukuba.ac.jp> <4FE5A4D3.90905@astro.uio.no>
 <20120623143513.1dd1cfdd@pitrou.net>
Message-ID: <20120623152802.46bf6dcb@pitrou.net>

On Sat, 23 Jun 2012 13:14:42 +0000 (UTC)
Vinay Sajip wrote:
> Kudos to Tarek, Éric and others for taking this particular world on
> their shoulders and re-energizing the discussion and development work to
> date, but it seems the net needs to be spread even wider to ensure that
> all constituencies are represented (for example, features needed only on
> Windows, such as binary distributions and executable scripts, have
> lagged a little bit behind).

But what makes you think that redesigning everything would make those
Windows features magically available?

This isn't about "representing" "constituencies". python-dev is not a
bureaucracy, it needs people doing actual work. People could have
proposed patches for these features and they didn't do it (AFAIK).

Like WSGI2 and other similar things, this is the kind of discussion
that will peter out in a few weeks and fall into oblivion.

Regards

Antoine.

From solipsis at pitrou.net  Sat Jun 23 15:29:55 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat, 23 Jun 2012 15:29:55 +0200
Subject: [Python-Dev] cpython: #15114: the strict mode of HTMLParser and the
 HTMLParseError exception are
References:
Message-ID: <20120623152955.3c74e5b8@pitrou.net>

On Sat, 23 Jun 2012 15:28:00 +0200
ezio.melotti wrote:
>
> +   .. deprecated-removed:: 3.3 3.5
> +      The *strict* argument and the strict mode have been deprecated.
> +      The parser is now able to accept and parse invalid markup too.
> +

What if people want to accept only valid markup?

Regards

Antoine.

From d.s.seljebotn at astro.uio.no  Sat Jun 23 15:37:32 2012
From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn)
Date: Sat, 23 Jun 2012 15:37:32 +0200
Subject: [Python-Dev] Status of packaging in 3.3
In-Reply-To:
References: <4FE0F336.7030709@netwok.org> <4FE342C9.10004@plope.com>
 <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org>
 <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org>
 <87d34rammv.fsf@uwakimon.sk.tsukuba.ac.jp>
 <878vffa5c9.fsf@uwakimon.sk.tsukuba.ac.jp>
 <8762aia22d.fsf@uwakimon.sk.tsukuba.ac.jp> <4FE5A4D3.90905@astro.uio.no>
 <4FE5BB21.1020802@astro.uio.no>
Message-ID: <4FE5C69C.8080201@astro.uio.no>

On 06/23/2012 03:20 PM, Vinay Sajip wrote:
> Dag Sverre Seljebotn <d.s.seljebotn at astro.uio.no> writes:
>
>> Of course you can always do anything, as numpy.distutils is living
>> proof of. Question is if it is good design. Can I be confident that the
>> hooks are well-designed for my purposes?
>
> Only you can look at the design to determine that.

But the point is I can't! I don't trust myself to do that. I've many
times wasted days and weeks on starting to use tools that looked quite
nice to me, but suddenly I get bit by needing to do something that turns
out to be almost impossible to do cleanly due to design constraints.

That's why it's so important to me to rely on experts who have *more*
experience than I do (such as David).

(That's of course also why it's so important to copy designs from what
works elsewhere. And have a really deep knowledge of those designs and
their rationale.)

On 06/23/2012 02:27 PM, Vinay Sajip wrote:
> This deep understanding is not essential in the packaging/distutils2
> team, AFAICT. They just need to make sure that the hook APIs are
> sufficiently flexible, that the hooks are invoked at the appropriate
> time, and that they are adequately documented with appropriate examples.

All I'm doing is expressing my doubts that "making the hook API
sufficiently flexible" and "invoked at the appropriate time" (and I'll
add "has the right form") can be achieved at all without having subject
expertise covering all the relevant use cases. Of course I can't
mathematically prove this, it's just my experience as a software
developer.

Dag

From vinay_sajip at yahoo.co.uk  Sat Jun 23 16:14:46 2012
From: vinay_sajip at yahoo.co.uk (Vinay Sajip)
Date: Sat, 23 Jun 2012 14:14:46 +0000 (UTC)
Subject: [Python-Dev] Status of packaging in 3.3
References: <4FE0F336.7030709@netwok.org> <4FE342C9.10004@plope.com>
 <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org>
 <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org>
 <87d34rammv.fsf@uwakimon.sk.tsukuba.ac.jp>
 <878vffa5c9.fsf@uwakimon.sk.tsukuba.ac.jp>
 <8762aia22d.fsf@uwakimon.sk.tsukuba.ac.jp> <4FE5A4D3.90905@astro.uio.no>
 <20120623143513.1dd1cfdd@pitrou.net> <20120623152802.46bf6dcb@pitrou.net>
Message-ID:

Antoine Pitrou <solipsis at pitrou.net> writes:

> But what makes you think that redesigning everything would make those
> Windows features magically available?

Nothing at all.

> This isn't about "representing" "constituencies". python-dev is not a
> bureaucracy, it needs people doing actual work. People could have

Well, for example, PEP 397 was proposed by Mark Hammond to satisfy a
particular constituency (people using multiple Python versions on
Windows). Interested parties added their input. Then it got implemented
and integrated. You can see from some of the Numpy/Scipy developer
comments that some in that constituency feel that their needs
aren't/weren't being addressed.

> proposed patches for these features and they didn't do it (AFAIK).

Sure they did - for example, I implemented Windows executable script
handling as part of my work on testing venv operation with pysetup, and
linked to it on the relevant ticket. It wasn't exactly rejected, though
perhaps it wasn't reviewed because of lack of time, other priorities etc.
and fell between the cracks. But I'm not making any complaint about this;
there were definitely bigger issues to work on. I only bring it up in
response to the "don't just talk about it; code doesn't write itself, you
know" tone I read in your comment.

> Like WSGI2 and other similar things, this is the kind of discussion
> that will peter out in a few weeks and fall into oblivion.

Quite possibly, but that'll be the chicken-and-egg thing I mentioned.
Some projects can be worked on in comparative isolation; other things,
like packaging, need inputs from a wider range of people to gain the
necessary credibility.

Regards,

Vinay Sajip

From solipsis at pitrou.net  Sat Jun 23 16:24:55 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat, 23 Jun 2012 16:24:55 +0200
Subject: [Python-Dev] Status of packaging in 3.3
References: <4FE0F336.7030709@netwok.org> <4FE342C9.10004@plope.com>
 <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org>
 <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org>
 <87d34rammv.fsf@uwakimon.sk.tsukuba.ac.jp>
 <878vffa5c9.fsf@uwakimon.sk.tsukuba.ac.jp>
 <8762aia22d.fsf@uwakimon.sk.tsukuba.ac.jp> <4FE5A4D3.90905@astro.uio.no>
 <20120623143513.1dd1cfdd@pitrou.net> <20120623152802.46bf6dcb@pitrou.net>
Message-ID: <20120623162455.4900ecdb@pitrou.net>

On Sat, 23 Jun 2012 14:14:46 +0000 (UTC)
Vinay Sajip wrote:
>
> Some projects can be worked on in comparative isolation; other things,
> like packaging, need inputs from a wider range of people to gain the
> necessary credibility.

packaging already improves a lot over distutils. I don't see where there
is a credibility problem, except for people who think "distutils is
sh*t".

Regards

Antoine.

From ezio.melotti at gmail.com  Sat Jun 23 17:20:34 2012
From: ezio.melotti at gmail.com (Ezio Melotti)
Date: Sat, 23 Jun 2012 17:20:34 +0200
Subject: [Python-Dev] cpython: #15114: the strict mode of HTMLParser and the
 HTMLParseError exception are
In-Reply-To: <20120623152955.3c74e5b8@pitrou.net>
References: <20120623152955.3c74e5b8@pitrou.net>
Message-ID:

On Sat, Jun 23, 2012 at 3:29 PM, Antoine Pitrou wrote:
> On Sat, 23 Jun 2012 15:28:00 +0200
> ezio.melotti wrote:
>>
>> +   .. deprecated-removed:: 3.3 3.5
>> +      The *strict* argument and the strict mode have been deprecated.
>> +      The parser is now able to accept and parse invalid markup too.
>> +
>
> What if people want to accept only valid markup?

The problem with the "strict" mode is that it is not really strict.
Originally the parser was trying to work around some common errors (e.g.
missing quotes around attribute values), but was giving up when other
markup errors were encountered. When the non-strict mode was introduced,
the old behavior was called "strict" and left unchanged for backward
compatibility, even though it wasn't strict enough to be used for
validation and it was happy to parse some broken markup (but not others).
At the same time, the non-strict mode was able to accept some markup
errors but not others, and sometimes parsing valid markup yielded
different results in strict and non-strict modes.
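A quick sketch of the kind of inconsistency I mean (the markup here is
invented for illustration, and exactly which errors raised an exception
was fairly arbitrary):

from html.parser import HTMLParser

parser = HTMLParser(strict=True)
parser.feed('<a href=foo.html>ok</a>')  # missing quotes: worked around

parser = HTMLParser(strict=True)
parser.feed('<p <span>junk</span>')     # other junk: HTMLParseError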
Then HTML5 was announced, with specific algorithms to parse both valid
and invalid markup, so I improved the non-strict mode to 1) be able to
parse everything; 2) try to be as close to the HTML5 standard as possible
(I don't claim HTML5 conformance though). Now parsing a valid HTML page
should give the same result in strict and non-strict mode, so the strict
mode is now only useful if you want HTMLParseErrors for an arbitrary
subset of markup errors.

As someone already suggested, I should write a blog post explaining all
this, but I'm still working on ironing out the last things in the code,
so the blog post has yet to reach the top of my todo list.

Best Regards,
Ezio Melotti

> Regards
>
> Antoine.

From solipsis at pitrou.net  Sat Jun 23 17:25:34 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat, 23 Jun 2012 17:25:34 +0200
Subject: [Python-Dev] Empty directory is a namespace?
Message-ID: <20120623172534.6eb74ee9@pitrou.net>

Hello,

I've just noticed the following:

$ mkdir foo
$ ./python
Python 3.3.0a4+ (default:837d51ba1aa2+1794308c1ea7+, Jun 23 2012,
14:43:41) [GCC 4.5.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import foo
>>> foo
<module 'foo' (namespace)>

Should even an empty directory be a valid namespace package?

Regards

Antoine.

From guido at python.org  Sat Jun 23 17:38:02 2012
From: guido at python.org (Guido van Rossum)
Date: Sat, 23 Jun 2012 08:38:02 -0700
Subject: [Python-Dev] Empty directory is a namespace?
In-Reply-To: <20120623172534.6eb74ee9@pitrou.net>
References: <20120623172534.6eb74ee9@pitrou.net>
Message-ID:

Yes. Otherwise, where to draw the line? What if it contains a single dot
file? What if it contains no Python files? What if it contains only empty
subdirectories?

On Sat, Jun 23, 2012 at 8:25 AM, Antoine Pitrou wrote:
>
> Hello,
>
> I've just noticed the following:
>
> $ mkdir foo
> $ ./python
> Python 3.3.0a4+ (default:837d51ba1aa2+1794308c1ea7+, Jun 23 2012,
> 14:43:41) [GCC 4.5.2] on linux
> Type "help", "copyright", "credits" or "license" for more information.
>>>> import foo
>>>> foo
> <module 'foo' (namespace)>
>
> Should even an empty directory be a valid namespace package?
>
> Regards
>
> Antoine.

--
--Guido van Rossum (python.org/~guido)

From solipsis at pitrou.net  Sat Jun 23 17:39:41 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat, 23 Jun 2012 17:39:41 +0200
Subject: [Python-Dev] Empty directory is a namespace?
In-Reply-To:
References: <20120623172534.6eb74ee9@pitrou.net>
Message-ID: <20120623173941.38687b15@pitrou.net>

On Sat, 23 Jun 2012 08:38:02 -0700
Guido van Rossum wrote:
> Yes. Otherwise, where to draw the line? What if it contains a single
> dot file? What if it contains no Python files? What if it contains
> only empty subdirectories?

That's true. I would have hoped for it to be recognized only when
there's at least one module or package inside, but it doesn't sound
easy to check for (especially in the recursive namespace packages case
- is that possible?).

Regards

Antoine.

From martin at v.loewis.de  Sat Jun 23 17:47:57 2012
From: martin at v.loewis.de (martin at v.loewis.de)
Date: Sat, 23 Jun 2012 17:47:57 +0200
Subject: [Python-Dev] Empty directory is a namespace?
In-Reply-To: <20120623172534.6eb74ee9@pitrou.net>
References: <20120623172534.6eb74ee9@pitrou.net>
Message-ID: <20120623174757.Horde.2mKWQFNNcXdP5eUtIyrSqPA@webmail.df.eu>

> Should even an empty directory be a valid namespace package?

Yes, that's what the PEP says, by BDFL pronouncement.

Regards,
Martin

From martin at v.loewis.de  Sat Jun 23 17:55:24 2012
From: martin at v.loewis.de (martin at v.loewis.de)
Date: Sat, 23 Jun 2012 17:55:24 +0200
Subject: [Python-Dev] Empty directory is a namespace?
In-Reply-To: <20120623173941.38687b15@pitrou.net>
References: <20120623172534.6eb74ee9@pitrou.net>
 <20120623173941.38687b15@pitrou.net>
Message-ID: <20120623175524.Horde.CiCoP1NNcXdP5ebsMKSiuHA@webmail.df.eu>

> That's true. I would have hoped for it to be recognized only when
> there's at least one module or package inside, but it doesn't sound
> easy to check for (especially in the recursive namespace packages case
> - is that possible?).

Yes - a directory becomes a namespace package by not having an
__init__.py, so the "namespace package" case will likely become the
default, and people will start removing the empty __init__.pys when they
don't need to support 3.2- anymore.

If you wonder whether a nested namespace package may have multiple
portions: that can also happen, e.g. if you have z3c.recipe.ldap,
z3c.recipe.template, z3c.recipe.sphinxdoc. They may all get installed as
separate zip files, each contributing a portion to z3c.recipe.

In the long run, I expect that we will see namespace packages such as
org.openstack, com.canonical, com.ibm, etc. Then, "com" is a namespace
package, com.canonical is a namespace package, and com.canonical.launchpad
might still be a namespace package with multiple portions.

Regards,
Martin

From vinay_sajip at yahoo.co.uk  Sat Jun 23 17:57:45 2012
From: vinay_sajip at yahoo.co.uk (Vinay Sajip)
Date: Sat, 23 Jun 2012 15:57:45 +0000 (UTC)
Subject: [Python-Dev] Status of packaging in 3.3
References: <4FE0F336.7030709@netwok.org> <4FE342C9.10004@plope.com>
 <4FE34F51.3010201@plope.com> <4FE357D8.1090006@ziade.org>
 <4FE3604B.3050606@plope.com> <4FE3732D.9090401@ziade.org>
 <87d34rammv.fsf@uwakimon.sk.tsukuba.ac.jp>
 <878vffa5c9.fsf@uwakimon.sk.tsukuba.ac.jp>
 <8762aia22d.fsf@uwakimon.sk.tsukuba.ac.jp> <4FE5A4D3.90905@astro.uio.no>
 <20120623143513.1dd1cfdd@pitrou.net> <20120623152802.46bf6dcb@pitrou.net>
 <20120623162455.4900ecdb@pitrou.net>
Message-ID:

Antoine Pitrou <solipsis at pitrou.net> writes:

> packaging already improves a lot over distutils. I don't see where

I don't dispute that.

> there is a credibility problem, except for people who think "distutils
> is sh*t".

I don't think you have to take such an extreme position in order to
suggest that there might be problems with its basic design.

Regards,

Vinay Sajip

From solipsis at pitrou.net  Sat Jun 23 17:58:34 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat, 23 Jun 2012 17:58:34 +0200
Subject: [Python-Dev] Empty directory is a namespace?
References: <20120623172534.6eb74ee9@pitrou.net>
 <20120623173941.38687b15@pitrou.net>
 <20120623175524.Horde.CiCoP1NNcXdP5ebsMKSiuHA@webmail.df.eu>
Message-ID: <20120623175834.72e0dcbf@pitrou.net>

On Sat, 23 Jun 2012 17:55:24 +0200
martin at v.loewis.de wrote:
> > That's true. I would have hoped for it to be recognized only when
> > there's at least one module or package inside, but it doesn't sound
> > easy to check for (especially in the recursive namespace packages case
> > - is that possible?).
>
> Yes - a directory becomes a namespace package by not having an
> __init__.py, so the "namespace package" case will likely become the
> default, and people will start removing the empty __init__.pys when
> they don't need to support 3.2- anymore.

Have you tested the performance of namespace packages compared to
normal packages?

> In the long run, I expect that we will see namespace packages such as
> org.openstack, com.canonical, com.ibm, etc. Then, "com" is a namespace
> package, com.canonical is a namespace package, and
> com.canonical.launchpad might still be a namespace package with
> multiple portions.

I hope we are spared such naming schemes.

Regards

Antoine.

From g.brandl-nospam at gmx.net  Sat Jun 23 12:22:12 2012
From: g.brandl-nospam at gmx.net (g.brandl-nospam at gmx.net)
Date: Sat, 23 Jun 2012 12:22:12 +0200
Subject: [Python-Dev] 3.3 release plans
Message-ID: <20120623102212.251600@gmx.net>

Hi all,

I've checked in the (hopefully final) update of PEP 398: all PEP-scale
changes are now final or deferred to 3.4.

I also adjusted the release day to be the 26th of June, which leaves us
with the following rough plan:

- Saturday: last large changes, such as removal of packaging
- Sunday: final feature freeze for 3.3; resolve last blockers from
  bugs.python.org
- Monday: ensure build stability for stable buildbots, and as many
  unstable buildbots as possible
- Tuesday: release clone forked off the main repo; tagging and start of
  binary building

cheers,
Georg

From ncoghlan at gmail.com  Sat Jun 23 18:40:14 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 24 Jun 2012 02:40:14 +1000
Subject: [Python-Dev] [Python-checkins] cpython: #4489: Add a shutil.rmtree
 that isn't susceptible to symlink attacks
In-Reply-To:
References:
Message-ID:

On Sun, Jun 24, 2012 at 2:18 AM, hynek.schlawack wrote:
> http://hg.python.org/cpython/rev/c910af2e3c98
> changeset:   77635:c910af2e3c98
> user:        Hynek Schlawack
> date:        Sat Jun 23 17:58:42 2012 +0200
> summary:
>  #4489: Add a shutil.rmtree that isn't susceptible to symlink attacks
>
> It is used automatically on platforms supporting the necessary
> os.openat() and os.unlinkat() functions. Main code by Martin von Löwis.

Unfortunately, this isn't actually having any effect at the moment
since the os module APIs changed for the beta release.

The "hasattr(os, 'unlinkat')" and "hasattr(os, 'openat')" checks need
to become "os.unlink in os.supports_dir_fd" and "os.open in
os.supports_dir_fd", and the affected calls need to be updated to pass
"dir_fd" as an argument to the normal versions of the functions.

At least we know the graceful fallback to the old behaviour is indeed
graceful, though :)

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From hs at ox.cx  Sat Jun 23 20:12:08 2012
From: hs at ox.cx (Hynek Schlawack)
Date: Sat, 23 Jun 2012 20:12:08 +0200
Subject: [Python-Dev] [Python-checkins] cpython: #4489: Add a shutil.rmtree
 that isn't susceptible to symlink attacks
In-Reply-To:
References:
Message-ID: <4FE606F8.3020205@ox.cx>

>> It is used automatically on platforms supporting the necessary
>> os.openat() and os.unlinkat() functions. Main code by Martin von Löwis.
>
> Unfortunately, this isn't actually having any effect at the moment
> since the os module APIs changed for the beta release.
> > The "hasattr(os, 'unlinkat')" and "hasattr(os, 'openat')" checks need > to become "os.unlink in os.supports_dir_fd" and "os.open in > os.supports_dir_fd", and the affected calls need to be updated to pass > "dir_fd" as an argument to the normal versions of the functions. > > At least we know the graceful fallback to the old behaviour is indeed > graceful, though :) Yeah I've been told on IRC already. I'll commit a fix in a few minutes if my regression tests on OS X and Linux work fine. From lists at cheimes.de Sat Jun 23 21:55:55 2012 From: lists at cheimes.de (Christian Heimes) Date: Sat, 23 Jun 2012 21:55:55 +0200 Subject: [Python-Dev] 3.3 release plans In-Reply-To: <20120623105446.251580@gmx.net> References: <20120623105446.251580@gmx.net> Message-ID: Am 23.06.2012 12:54, schrieb g.brandl-nospam at gmx.net: > Hi all, > > now that the final PEP scheduled for 3.3 is final, we're entering > the next round of the 3.3 cycle. > > I've decided to make Tuesday 26th the big release day. That means: > > - Saturday: last feature-level changes that should be done before beta, > e.g. removal of packaging > - Sunday: final feature freeze, bug fixing > - Monday: focus on stability of the buildbots, even unstable ones > - Tuesday: forking of the 3.3.0b1 release clone, tagging, start > of binary building I'd like to get the C implementation of the timing safe compare_digest into 3.3. http://bugs.python.org/issue15061 The patch went to several incarnations and I implemented input from Antoine, Serhiy and others. The function finally ended up as private function in the operator module because the _hashlib module isn't available without openssl and a new module for a single function is kinda overkill. Christian From g.brandl-nospam at gmx.net Sat Jun 23 22:45:22 2012 From: g.brandl-nospam at gmx.net (g.brandl-nospam at gmx.net) Date: Sat, 23 Jun 2012 22:45:22 +0200 Subject: [Python-Dev] 3.3 release plans In-Reply-To: References: <20120623105446.251580@gmx.net> Message-ID: <20120623204522.251590@gmx.net> -------- Original-Nachricht -------- > Datum: Sat, 23 Jun 2012 21:55:55 +0200 > Von: Christian Heimes > An: python-dev at python.org > Betreff: Re: [Python-Dev] 3.3 release plans > Am 23.06.2012 12:54, schrieb g.brandl-nospam at gmx.net: > > Hi all, > > > > now that the final PEP scheduled for 3.3 is final, we're entering > > the next round of the 3.3 cycle. > > > > I've decided to make Tuesday 26th the big release day. That means: > > > > - Saturday: last feature-level changes that should be done before beta, > > e.g. removal of packaging > > - Sunday: final feature freeze, bug fixing > > - Monday: focus on stability of the buildbots, even unstable ones > > - Tuesday: forking of the 3.3.0b1 release clone, tagging, start > > of binary building > > I'd like to get the C implementation of the timing safe compare_digest > into 3.3. http://bugs.python.org/issue15061 > > The patch went to several incarnations and I implemented input from > Antoine, Serhiy and others. The function finally ended up as private > function in the operator module because the _hashlib module isn't > available without openssl and a new module for a single function is > kinda overkill. Fine with me. You have time until tomorrow to push it. Georg -- NEU: FreePhone 3-fach-Flat mit kostenlosem Smartphone! 
Jetzt informieren: http://mobile.1und1.de/?ac=OM.PW.PW003K20328T7073a From martin at v.loewis.de Sat Jun 23 23:31:07 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 23 Jun 2012 23:31:07 +0200 Subject: [Python-Dev] Restricted API versioning Message-ID: <4FE6359B.1040205@v.loewis.de> I've been thinking about extensions to the stable ABI. On the one hand, introducing new API can cause extension modules not to run on older Python versions. On the other hand, the new API may well be stable in itself, i.e. remain available for all coming 3.x versions. As a compromise, I propose that such API can be added, but extension authors must explicitly opt into using it. To define their desired target Python versions, they need to set Py_LIMITED_API to the hexversion of the first Python release they want to support. Objections? The first use case of this are some glitches in the heap type API that Robin Schreiber detected in his GSoC project. E.g. specifying a heap type whose base also is a heap type was not really possible: the type spec would have to contain a pointer to the base, but that is not constant. In addition, if we use multiple interpreters, the base type should be a different object depending on the current interpreter - something that PyType_FromSpec couldn't support at all. So there is a new API function PyType_FromSpecWithBases which covers this case, and this API will only be available in 3.3+. Regards, Martin From brett at python.org Sat Jun 23 23:41:06 2012 From: brett at python.org (Brett Cannon) Date: Sat, 23 Jun 2012 17:41:06 -0400 Subject: [Python-Dev] Restricted API versioning In-Reply-To: <4FE6359B.1040205@v.loewis.de> References: <4FE6359B.1040205@v.loewis.de> Message-ID: On Sat, Jun 23, 2012 at 5:31 PM, "Martin v. L?wis" wrote: > I've been thinking about extensions to the stable ABI. On the one hand, > introducing new API can cause extension modules not to run on older > Python versions. On the other hand, the new API may well be stable in > itself, i.e. remain available for all coming 3.x versions. > > As a compromise, I propose that such API can be added, but extension > authors must explicitly opt into using it. To define their desired > target Python versions, they need to set Py_LIMITED_API to the > hexversion of the first Python release they want to support. > > Objections? > Nope, it sounds like a good idea to allow for the ABI to slowly grow. -Brett > > The first use case of this are some glitches in the heap type API > that Robin Schreiber detected in his GSoC project. E.g. specifying > a heap type whose base also is a heap type was not really possible: > the type spec would have to contain a pointer to the base, but that > is not constant. In addition, if we use multiple interpreters, the > base type should be a different object depending on the current > interpreter - something that PyType_FromSpec couldn't support at all. > So there is a new API function PyType_FromSpecWithBases which covers > this case, and this API will only be available in 3.3+. > > Regards, > Martin > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg at krypto.org Sat Jun 23 23:41:42 2012 From: greg at krypto.org (Gregory P. 
Smith) Date: Sat, 23 Jun 2012 14:41:42 -0700 Subject: [Python-Dev] Restricted API versioning In-Reply-To: <4FE6359B.1040205@v.loewis.de> References: <4FE6359B.1040205@v.loewis.de> Message-ID: On Sat, Jun 23, 2012 at 2:31 PM, "Martin v. L?wis" wrote: > I've been thinking about extensions to the stable ABI. On the one hand, > introducing new API can cause extension modules not to run on older > Python versions. On the other hand, the new API may well be stable in > itself, i.e. remain available for all coming 3.x versions. > > As a compromise, I propose that such API can be added, but extension > authors must explicitly opt into using it. To define their desired > target Python versions, they need to set Py_LIMITED_API to the > hexversion of the first Python release they want to support. > > Objections? > +1 This sounds reasonable to me. Many other libraries have used this approach in the past. > The first use case of this are some glitches in the heap type API > that Robin Schreiber detected in his GSoC project. E.g. specifying > a heap type whose base also is a heap type was not really possible: > the type spec would have to contain a pointer to the base, but that > is not constant. In addition, if we use multiple interpreters, the > base type should be a different object depending on the current > interpreter - something that PyType_FromSpec couldn't support at all. > So there is a new API function PyType_FromSpecWithBases which covers > this case, and this API will only be available in 3.3+. > > Regards, > Martin > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/greg%40krypto.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Sat Jun 23 23:41:19 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 23 Jun 2012 23:41:19 +0200 Subject: [Python-Dev] Restricted API versioning References: <4FE6359B.1040205@v.loewis.de> Message-ID: <20120623234119.4cd8cf6b@pitrou.net> On Sat, 23 Jun 2012 23:31:07 +0200 "Martin v. L?wis" wrote: > I've been thinking about extensions to the stable ABI. On the one hand, > introducing new API can cause extension modules not to run on older > Python versions. On the other hand, the new API may well be stable in > itself, i.e. remain available for all coming 3.x versions. > > As a compromise, I propose that such API can be added, but extension > authors must explicitly opt into using it. To define their desired > target Python versions, they need to set Py_LIMITED_API to the > hexversion of the first Python release they want to support. Perhaps something more user-friendly than the hexversion? Regards Antoine. From lists at cheimes.de Sun Jun 24 00:06:12 2012 From: lists at cheimes.de (Christian Heimes) Date: Sun, 24 Jun 2012 00:06:12 +0200 Subject: [Python-Dev] Restricted API versioning In-Reply-To: <20120623234119.4cd8cf6b@pitrou.net> References: <4FE6359B.1040205@v.loewis.de> <20120623234119.4cd8cf6b@pitrou.net> Message-ID: <4FE63DD4.3080102@cheimes.de> Am 23.06.2012 23:41, schrieb Antoine Pitrou: > Perhaps something more user-friendly than the hexversion? IMHO 0x03030000 for 3.0.0 is user-friendly enough. A macro like PY_VERSION(3, 0, 0) could be added, too. 
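In an extension module, the opt-in would look roughly like this (a sketch
only; PyType_FromSpecWithBases is the new 3.3 API Martin mentioned):

/* Opt in to the limited API as it stood in 3.3.0.
   Must be defined before including Python.h. */
#define Py_LIMITED_API 0x03030000
#include <Python.h>

#if Py_LIMITED_API+0 >= 0x03030000
/* Safe to rely on API first added in 3.3,
   e.g. PyType_FromSpecWithBases(). */
#endif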
Christian

From martin at v.loewis.de  Sun Jun 24 00:08:35 2012
From: martin at v.loewis.de (Martin v. Löwis)
Date: Sun, 24 Jun 2012 00:08:35 +0200
Subject: [Python-Dev] Restricted API versioning
In-Reply-To: <20120623234119.4cd8cf6b@pitrou.net>
References: <4FE6359B.1040205@v.loewis.de> <20120623234119.4cd8cf6b@pitrou.net>
Message-ID: <4FE63E63.1050306@v.loewis.de>

On 23.06.2012 23:41, Antoine Pitrou wrote:
> On Sat, 23 Jun 2012 23:31:07 +0200
> "Martin v. Löwis" wrote:
>> I've been thinking about extensions to the stable ABI. On the one hand,
>> introducing new API can cause extension modules not to run on older
>> Python versions. On the other hand, the new API may well be stable in
>> itself, i.e. remain available for all coming 3.x versions.
>>
>> As a compromise, I propose that such API can be added, but extension
>> authors must explicitly opt into using it. To define their desired
>> target Python versions, they need to set Py_LIMITED_API to the
>> hexversion of the first Python release they want to support.
>
> Perhaps something more user-friendly than the hexversion?

Please propose something. I think the hexversion *is* user-friendly,
since it allows easy comparisons (Py_LIMITED_API+0 >= 0x03030000).
Users that run into missing symbols will, after inspection of the
header file, easily know what to do.

We could require a second macro, but users will already have to define
Py_LIMITED_API, so not making them define a second macro is also more
friendly.

Plus, with the hexversion, we can add stuff to a bugfix release, such as
annoying omissions (e.g. the omission of the _SizeT functions, which I
missed since I didn't compile the headers with PY_SSIZE_T_CLEAN when
generating the function list).

Regards,
Martin

From larry at hastings.org  Sun Jun 24 01:11:08 2012
From: larry at hastings.org (Larry Hastings)
Date: Sat, 23 Jun 2012 16:11:08 -0700
Subject: [Python-Dev] Restricted API versioning
In-Reply-To: <4FE63E63.1050306@v.loewis.de>
References: <4FE6359B.1040205@v.loewis.de> <20120623234119.4cd8cf6b@pitrou.net>
 <4FE63E63.1050306@v.loewis.de>
Message-ID: <4FE64D0C.5050302@hastings.org>

On 06/23/2012 03:08 PM, "Martin v. Löwis" wrote:
> On 23.06.2012 23:41, Antoine Pitrou wrote:
>> Perhaps something more user-friendly than the hexversion?
> Please propose something. I think the hexversion *is* user-friendly,

+1 to the idea, and specifically to using hexversion here.

(Though what will we do after Python 255.0?)

//arry/

From lists at cheimes.de  Sun Jun 24 01:40:13 2012
From: lists at cheimes.de (Christian Heimes)
Date: Sun, 24 Jun 2012 01:40:13 +0200
Subject: [Python-Dev] Restricted API versioning
In-Reply-To: <4FE64D0C.5050302@hastings.org>
References: <4FE6359B.1040205@v.loewis.de> <20120623234119.4cd8cf6b@pitrou.net>
 <4FE63E63.1050306@v.loewis.de> <4FE64D0C.5050302@hastings.org>
Message-ID: <4FE653DD.6000602@cheimes.de>

On 24.06.2012 01:11, Larry Hastings wrote:
> On 06/23/2012 03:08 PM, "Martin v. Löwis" wrote:
>> On 23.06.2012 23:41, Antoine Pitrou wrote:
>>> Perhaps something more user-friendly than the hexversion?
>> Please propose something. I think the hexversion *is* user-friendly,
>
> +1 to the idea, and specifically to using hexversion here.

+1 for the general idea and for using Py_LIMITED_API. I still like my
idea of a simple macro based on Include/patchlevel.h, for example:

#define Py_API_VERSION(major, minor, micro) \
   (((major) << 24) | ((minor) << 16) | ((micro) << 8))

#if Py_LIMITED_API+0 >= Py_API_VERSION(3, 3, 0)
#endif

> (Though what will we do after Python 255.0?)

Luckily it's going to take another 1500 years or so. Our progeny could
rename Python to Circus ...

From rosuav at gmail.com  Sun Jun 24 01:44:55 2012
From: rosuav at gmail.com (Chris Angelico)
Date: Sun, 24 Jun 2012 09:44:55 +1000
Subject: [Python-Dev] Restricted API versioning
In-Reply-To: <4FE653DD.6000602@cheimes.de>
References: <4FE6359B.1040205@v.loewis.de> <20120623234119.4cd8cf6b@pitrou.net>
 <4FE63E63.1050306@v.loewis.de> <4FE64D0C.5050302@hastings.org>
 <4FE653DD.6000602@cheimes.de>
Message-ID:

On Sun, Jun 24, 2012 at 9:40 AM, Christian Heimes wrote:
> +1 for the general idea and for using Py_LIMITED_API. I still like my
> idea of a simple macro based on Include/patchlevel.h, for example:
>
> #define Py_API_VERSION(major, minor, micro) \
>   (((major) << 24) | ((minor) << 16) | ((micro) << 8))
>
> #if Py_LIMITED_API+0 >= Py_API_VERSION(3, 3, 0)
> #endif

This strikes me as in opposition to the Python-level policy of duck
typing. Would it be more appropriate to, instead of asking if it's
Python 3.3.0, ask if it's a Python that supports PY_FEATURE_FOOBAR? Or
would that result in an unnecessary proliferation of flag macros?

ChrisA

From larry at hastings.org  Sun Jun 24 01:51:52 2012
From: larry at hastings.org (Larry Hastings)
Date: Sat, 23 Jun 2012 16:51:52 -0700
Subject: [Python-Dev] Restricted API versioning
In-Reply-To:
References: <4FE6359B.1040205@v.loewis.de> <20120623234119.4cd8cf6b@pitrou.net>
 <4FE63E63.1050306@v.loewis.de> <4FE64D0C.5050302@hastings.org>
 <4FE653DD.6000602@cheimes.de>
Message-ID: <4FE65698.9040705@hastings.org>

On 06/23/2012 04:44 PM, Chris Angelico wrote:
> On Sun, Jun 24, 2012 at 9:40 AM, Christian Heimes wrote:
>> +1 for the general idea and for using Py_LIMITED_API. I still like my
>> idea of a simple macro based on Include/patchlevel.h, for example:
>>
>> #define Py_API_VERSION(major, minor, micro) \
>>    (((major) << 24) | ((minor) << 16) | ((micro) << 8))
>>
>> #if Py_LIMITED_API+0 >= Py_API_VERSION(3, 3, 0)
>> #endif
> This strikes me as in opposition to the Python-level policy of duck
> typing. Would it be more appropriate to, instead of asking if it's
> Python 3.3.0, ask if it's a Python that supports PY_FEATURE_FOOBAR? Or
> would that result in an unnecessary proliferation of flag macros?

python != c

Or, if you prefer,

python is not c

C lacks niceties like constructors, destructors, and default arguments.
I think C APIs need to be much more precise than Python APIs;
mix-n-match C APIs would be an invitation to heartburn and migraines.

//arry/

From lists at cheimes.de  Sun Jun 24 02:02:08 2012
From: lists at cheimes.de (Christian Heimes)
Date: Sun, 24 Jun 2012 02:02:08 +0200
Subject: [Python-Dev] Restricted API versioning
In-Reply-To:
References: <4FE6359B.1040205@v.loewis.de> <20120623234119.4cd8cf6b@pitrou.net>
 <4FE63E63.1050306@v.loewis.de> <4FE64D0C.5050302@hastings.org>
 <4FE653DD.6000602@cheimes.de>
Message-ID: <4FE65900.3030000@cheimes.de>

On 24.06.2012 01:44, Chris Angelico wrote:
> This strikes me as in opposition to the Python-level policy of duck
> typing. Would it be more appropriate to, instead of asking if it's
> Python 3.3.0, ask if it's a Python that supports PY_FEATURE_FOOBAR? Or
> would that result in an unnecessary proliferation of flag macros?

The version number is a sufficient rule. Flags aren't necessary, as we
can never remove or alter the signature of an API function. We can only
add new features. Otherwise we'd break the API and binary interface
(ABI) for C extensions. C compilers, linkers, dynamic library loaders
and calling conventions are limited and don't support fancy stuff like
OOP.

Christian

From kristjan at ccpgames.com  Sun Jun 24 02:06:07 2012
From: kristjan at ccpgames.com (Kristján Valur Jónsson)
Date: Sun, 24 Jun 2012 00:06:07 +0000
Subject: [Python-Dev] 3.3 release plans
In-Reply-To: <20120623143743.239bbe9f@pitrou.net>
References: <20120623105446.251580@gmx.net>
 <20120623131219.36c5fc35@pitrou.net> <20120623135432.38fd0ba4@pitrou.net>,
 <20120623143743.239bbe9f@pitrou.net>
Message-ID:

As long as we are _before_ feature freeze, I would have thought it fine.
Also, I'm sure we could allow ourselves some flexibility; after all, it
is we who make these schedules, not the other way round.

But no matter. Condition variable wakeup has been sluggish for many
years; I'm sure our users can wait a few more. Meanwhile, the patch is
there for those who want it.

Cheers,

K

________________________________________
From: python-dev-bounces+kristjan=ccpgames.com at python.org
[python-dev-bounces+kristjan=ccpgames.com at python.org] on behalf of
Antoine Pitrou [solipsis at pitrou.net]
Sent: 23 June 2012 12:37
To: python-dev at python.org
Subject: Re: [Python-Dev] 3.3 release plans

On Sat, 23 Jun 2012 12:32:37 +0000
Kristján Valur Jónsson wrote:
> Au contraire, it is actually a very major improvement, the result of
> pretty extensive profiling; see
> http://blog.ccpgames.com/kristjan/2012/05/25/optimizing-python-condition-variables-with-telemetry/

It might be. But to evaluate it, we have to digest a long technical blog
post, then carefully review the patch (and perhaps even re-do some
measurements). You understand that it's not reasonable to do so just one
day before feature freeze.

Regards

Antoine.

From ncoghlan at gmail.com  Sun Jun 24 06:56:30 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 24 Jun 2012 14:56:30 +1000
Subject: [Python-Dev] Restricted API versioning
In-Reply-To: <4FE653DD.6000602@cheimes.de>
References: <4FE6359B.1040205@v.loewis.de> <20120623234119.4cd8cf6b@pitrou.net>
 <4FE63E63.1050306@v.loewis.de> <4FE64D0C.5050302@hastings.org>
 <4FE653DD.6000602@cheimes.de>
Message-ID:

On Sun, Jun 24, 2012 at 9:40 AM, Christian Heimes wrote:
> On 24.06.2012 01:11, Larry Hastings wrote:
>> On 06/23/2012 03:08 PM, "Martin v. Löwis" wrote:
>>> On 23.06.2012 23:41, Antoine Pitrou wrote:
>>>> Perhaps something more user-friendly than the hexversion?
>>> Please propose something. I think the hexversion *is* user-friendly,
>>
>> +1 to the idea, and specifically to using hexversion here.
>
> +1 for the general idea and for using Py_LIMITED_API. I still like my
> idea of a simple macro based on Include/patchlevel.h, for example:
>
> #define Py_API_VERSION(major, minor, micro) \
(((major) << 24) | ((minor) << 16) | ((micro) << 8)) > > #if Py_LIMITED_API+0 >= Py_API_VERSION(3, 3, 0) > #endif +1 to all 3 of those from me (the general idea, using hexversion, and providing a convenience macro to skip having to spell out hexversion manually). Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From martin at v.loewis.de Sun Jun 24 09:00:06 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 24 Jun 2012 09:00:06 +0200 Subject: [Python-Dev] Restricted API versioning In-Reply-To: References: <4FE6359B.1040205@v.loewis.de> <20120623234119.4cd8cf6b@pitrou.net> <4FE63E63.1050306@v.loewis.de> <4FE64D0C.5050302@hastings.org> <4FE653DD.6000602@cheimes.de> Message-ID: <4FE6BAF6.7040808@v.loewis.de> > This strikes me as in opposition to the Python-level policy of duck > typing. Would it be more appropriate to, instead of asking if it's > Python 3.3.0, ask if it's a Python that supports PY_FEATURE_FOOBAR? Or > would that result in an unnecessary proliferation of flag macros? It would, hence I'm -1. I believe it is the motivation for the gcc assertion preprocessor feature, which never caught on. Regards, Martin From ncoghlan at gmail.com Sun Jun 24 09:08:53 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 24 Jun 2012 17:08:53 +1000 Subject: [Python-Dev] Restricted API versioning In-Reply-To: <4FE6BAF6.7040808@v.loewis.de> References: <4FE6359B.1040205@v.loewis.de> <20120623234119.4cd8cf6b@pitrou.net> <4FE63E63.1050306@v.loewis.de> <4FE64D0C.5050302@hastings.org> <4FE653DD.6000602@cheimes.de> <4FE6BAF6.7040808@v.loewis.de> Message-ID: On Sun, Jun 24, 2012 at 5:00 PM, "Martin v. L?wis" wrote: >> This strikes me as in opposition to the Python-level policy of duck >> typing. Would it be more appropriate to, instead of asking if it's >> Python 3.3.0, ask if it's a Python that supports PY_FEATURE_FOOBAR? Or >> would that result in an unnecessary proliferation of flag macros? > > It would, hence I'm -1. I believe it is the motivation for the gcc > assertion preprocessor feature, which never caught on. Right, if someone wants to check for a specific feature rather than just figuring out once the minimum version of the stable ABI that they need, then they can write an autotools macro (or equivalent in other build systems). Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From martin at v.loewis.de Sun Jun 24 09:51:28 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 24 Jun 2012 09:51:28 +0200 Subject: [Python-Dev] Empty directory is a namespace? In-Reply-To: <20120623175834.72e0dcbf@pitrou.net> References: <20120623172534.6eb74ee9@pitrou.net> <20120623173941.38687b15@pitrou.net> <20120623175524.Horde.CiCoP1NNcXdP5ebsMKSiuHA@webmail.df.eu> <20120623175834.72e0dcbf@pitrou.net> Message-ID: <4FE6C700.3080205@v.loewis.de> On 23.06.2012 17:58, Antoine Pitrou wrote: > On Sat, 23 Jun 2012 17:55:24 +0200 > martin at v.loewis.de wrote: >>> That's true. I would have hoped for it to be recognized only when >>> there's at least one module or package inside, but it doesn't sound >>> easy to check for (especially in the recursive namespace packages case >>> - is that possible?). >> >> Yes - a directory becomes a namespace package by not having an __init__.py, >> so the "namespace package" case will likely become the default, and people >> will start removing the empty __init__.pys when they don't need to support >> 3.2- anymore. 
From rosuav at gmail.com  Sun Jun 24 10:18:19 2012
From: rosuav at gmail.com (Chris Angelico)
Date: Sun, 24 Jun 2012 18:18:19 +1000
Subject: [Python-Dev] Restricted API versioning
In-Reply-To:
References: <4FE6359B.1040205@v.loewis.de> <20120623234119.4cd8cf6b@pitrou.net> <4FE63E63.1050306@v.loewis.de> <4FE64D0C.5050302@hastings.org> <4FE653DD.6000602@cheimes.de> <4FE6BAF6.7040808@v.loewis.de>
Message-ID:

On Sun, Jun 24, 2012 at 5:08 PM, Nick Coghlan wrote:
> On Sun, Jun 24, 2012 at 5:00 PM, "Martin v. Löwis" wrote:
>>> This strikes me as in opposition to the Python-level policy of duck
>>> typing. Would it be more appropriate to, instead of asking if it's
>>> Python 3.3.0, ask if it's a Python that supports PY_FEATURE_FOOBAR? Or
>>> would that result in an unnecessary proliferation of flag macros?
>>
>> It would, hence I'm -1. I believe it is the motivation for the gcc
>> assertion preprocessor feature, which never caught on.
>
> Right, if someone wants to check for a specific feature rather than
> just figuring out once the minimum version of the stable ABI that they
> need, then they can write an autotools macro (or equivalent in other
> build systems).

Fair enough. I assume these sorts of things are only ever going to be
added once, and not backported to old versions, so a single version
number is guaranteed to suffice (it's not like "available in 4.5.6 and
4.6.2 and 4.7.4"). Go with the easy option!

ChrisA

From barry at python.org  Sun Jun 24 14:09:05 2012
From: barry at python.org (Barry Warsaw)
Date: Sun, 24 Jun 2012 08:09:05 -0400
Subject: [Python-Dev] Restricted API versioning
In-Reply-To: <4FE63E63.1050306@v.loewis.de>
References: <4FE6359B.1040205@v.loewis.de> <20120623234119.4cd8cf6b@pitrou.net> <4FE63E63.1050306@v.loewis.de>
Message-ID: <20120624080905.16344a83@limelight.wooz.org>

On Jun 24, 2012, at 12:08 AM, Martin v. Löwis wrote:

>Please propose something. I think the hexversion *is* user-friendly,
>since it allows easy comparisons (Py_LIMITED_API+0 >= 0x03030000).
>Users that run into missing symbols will, after inspection of the
>header file, easily know what to do.

+1 for hexversion for the reasons Martin states.

-Barry

From barry at barrys-emacs.org  Sun Jun 24 14:52:04 2012
From: barry at barrys-emacs.org (Barry Scott)
Date: Sun, 24 Jun 2012 13:52:04 +0100
Subject: [Python-Dev] Offer of help: http://bugs.python.org/issue10910
Message-ID: <9ACF1734-D4B9-492D-A706-639A20E2AFB0@barrys-emacs.org>

I see that issue 10910 needs a reviewer for a patch.

I know the Python code and C++ and offer to review any patches to fix
this issue. Having updated Xcode on my Mac I'm having to code
workarounds for this issue.

My understanding is that you cannot define isspace, toupper etc. as
macros in a C++ environment. These are defined as functions in C++.

The minimum patch would #ifdef out the offending lines in byte_methods.h
and pyport.h if compiling for C++.

I'm going to be releasing a PyCXX release to work around this issue.
Barry

From martin at v.loewis.de  Sun Jun 24 15:29:56 2012
From: martin at v.loewis.de ("Martin v. Löwis")
Date: Sun, 24 Jun 2012 15:29:56 +0200
Subject: [Python-Dev] Offer of help: http://bugs.python.org/issue10910
In-Reply-To: <9ACF1734-D4B9-492D-A706-639A20E2AFB0@barrys-emacs.org>
References: <9ACF1734-D4B9-492D-A706-639A20E2AFB0@barrys-emacs.org>
Message-ID: <4FE71654.5000208@v.loewis.de>

On 24.06.2012 14:52, Barry Scott wrote:
> I see that issue 10910 needs a reviewer for a patch.
>
> I know the Python code and C++ and offer to review
> any patches to fix this issue.

Is this even an issue for 3.x? ISTM that the C library macros aren't
used, anyway, so I think this entire section could go from the header
files.

For 2.7, things are more difficult.

Regards,
Martin

From jeremy.kloth at gmail.com  Sun Jun 24 16:12:07 2012
From: jeremy.kloth at gmail.com (Jeremy Kloth)
Date: Sun, 24 Jun 2012 08:12:07 -0600
Subject: [Python-Dev] [Python-checkins] cpython: Issue #15102: find python.exe in OutDir, not SolutionDir.
In-Reply-To:
References:
Message-ID:

> --- a/PCbuild/pyproject.props
> +++ b/PCbuild/pyproject.props
> @@ -2,7 +2,7 @@
>
>
>    python33$(PyDebugExt)
> -  $(SolutionDir)python$(PyDebugExt).exe
> +  $(OutDir)python$(PyDebugExt).exe
>    $(OutDir)kill_python$(PyDebugExt).exe
>    ..\..
>    $(externalsDir)\sqlite-3.7.12

In order for this change to accurately reflect the OutDir in the x64
builds, all imports of x64.props need to be moved to be before the
pyproject.props import statements.

From eric at trueblade.com  Sun Jun 24 17:05:35 2012
From: eric at trueblade.com (Eric V. Smith)
Date: Sun, 24 Jun 2012 11:05:35 -0400
Subject: [Python-Dev] 3.3 release plans
In-Reply-To: <20120623105446.251580@gmx.net>
References: <20120623105446.251580@gmx.net>
Message-ID: <4FE72CBF.5020708@trueblade.com>

On 06/23/2012 06:54 AM, g.brandl-nospam at gmx.net wrote:
> I've decided to make Tuesday 26th the big release day. That means:
>
> - Saturday: last feature-level changes that should be done before beta,
>   e.g. removal of packaging
> - Sunday: final feature freeze, bug fixing

What's your timeframe for bug fixes today? I'd very much like to fix
http://bugs.python.org/issue15039, but it will probably take me another
2 hours or so.

Eric.

From solipsis at pitrou.net  Sun Jun 24 17:27:54 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sun, 24 Jun 2012 17:27:54 +0200
Subject: [Python-Dev] 3.3 release plans
References: <20120623105446.251580@gmx.net> <4FE72CBF.5020708@trueblade.com>
Message-ID: <20120624172754.5b22e8d8@pitrou.net>

On Sun, 24 Jun 2012 11:05:35 -0400
"Eric V. Smith" wrote:
> On 06/23/2012 06:54 AM, g.brandl-nospam at gmx.net wrote:
> > I've decided to make Tuesday 26th the big release day. That means:
> >
> > - Saturday: last feature-level changes that should be done before beta,
> >   e.g. removal of packaging
> > - Sunday: final feature freeze, bug fixing
>
> What's your timeframe for bug fixes today? I'd very much like to fix
> http://bugs.python.org/issue15039, but it will probably take me another
> 2 hours or so.

If it's a bugfix, it's not blocked by the feature freeze at all: you
can commit after beta (but you might want the bugfix to be in the beta).

Regards
Antoine.

From eric at trueblade.com  Sun Jun 24 17:44:32 2012
From: eric at trueblade.com (Eric V. Smith)
Date: Sun, 24 Jun 2012 11:44:32 -0400
Subject: [Python-Dev] 3.3 release plans
In-Reply-To: <20120624172754.5b22e8d8@pitrou.net>
References: <20120623105446.251580@gmx.net> <4FE72CBF.5020708@trueblade.com> <20120624172754.5b22e8d8@pitrou.net>
Message-ID: <4FE735E0.2010300@trueblade.com>

On 6/24/2012 11:27 AM, Antoine Pitrou wrote:
>> What's your timeframe for bug fixes today? I'd very much like to fix
>> http://bugs.python.org/issue15039, but it will probably take me another
>> 2 hours or so.
>
> If it's a bugfix, it's not blocked by the feature freeze at all: you
> can commit after beta (but you might want the bugfix to be in the beta).

Indeed, that's what I'm after: the fix in the beta. I'll do my best to
get it in, but obviously it's not a blocker.

Eric.

From martin at v.loewis.de  Sun Jun 24 18:55:58 2012
From: martin at v.loewis.de ("Martin v. Löwis")
Date: Sun, 24 Jun 2012 18:55:58 +0200
Subject: [Python-Dev] Offer of help: http://bugs.python.org/issue10910
In-Reply-To:
References: <9ACF1734-D4B9-492D-A706-639A20E2AFB0@barrys-emacs.org> <4FE71654.5000208@v.loewis.de>
Message-ID: <4FE7469E.5090301@v.loewis.de>

>> Is this even an issue for 3.x? ISTM that the C library macros aren't
>> used, anyway, so I think this entire section could go from the header
>> files.
>
> $ grep isspace /Library/Frameworks/Python.framework/Versions/3.2/include/python3.2m/*.h
> /Library/Frameworks/Python.framework/Versions/3.2/include/python3.2m/pyport.h:#undef isspace
> /Library/Frameworks/Python.framework/Versions/3.2/include/python3.2m/pyport.h:#define isspace(c) iswspace(btowc(c))
>
> I'm not familiar with pyport.h usage. I do see that it protects the problem lines with:
> #ifdef _PY_PORT_CTYPE_UTF8_ISSUE

I think you missed my point. Python shouldn't be using isspace anymore
at all, so any work-arounds for certain BSD versions should be outdated
and can be removed entirely.

Of course, before implementing that solution, one would have to verify
that this claim (macros not used) is indeed true.

> So long as that is not defined when C++ is in use no problem.

I'm not so much concerned with compiling with C++, but care about a
potential cleanup of the headers.

>> For 2.7, things are more difficult.
>
> This is where a fix is required. Is there going to be another 2.7 release to deliver a fix in?

Yes, there will be more 2.7 bugfix releases. If a fix is too intrusive
or too hacky, it might be that the bug must stay unfixed, though.

Regards,
Martin

From barry at barrys-emacs.org  Sun Jun 24 18:48:50 2012
From: barry at barrys-emacs.org (Barry Scott)
Date: Sun, 24 Jun 2012 17:48:50 +0100
Subject: [Python-Dev] Offer of help: http://bugs.python.org/issue10910
In-Reply-To: <4FE71654.5000208@v.loewis.de>
References: <9ACF1734-D4B9-492D-A706-639A20E2AFB0@barrys-emacs.org> <4FE71654.5000208@v.loewis.de>
Message-ID:

On 24 Jun 2012, at 14:29, Martin v. Löwis wrote:
> On 24.06.2012 14:52, Barry Scott wrote:
>> I see that issue 10910 needs a reviewer for a patch.
>>
>> I know the Python code and C++ and offer to review
>> any patches to fix this issue.
>
> Is this even an issue for 3.x? ISTM that the C library macros aren't
> used, anyway, so I think this entire section could go from the header
> files.
$ grep isspace /Library/Frameworks/Python.framework/Versions/3.2/include/python3.2m/*.h
/Library/Frameworks/Python.framework/Versions/3.2/include/python3.2m/pyport.h:#undef isspace
/Library/Frameworks/Python.framework/Versions/3.2/include/python3.2m/pyport.h:#define isspace(c) iswspace(btowc(c))

I'm not familiar with pyport.h usage. I do see that it protects the problem
lines with:

#ifdef _PY_PORT_CTYPE_UTF8_ISSUE

So long as that is not defined when C++ is in use, no problem.

> For 2.7, things are more difficult.

This is where a fix is required. Is there going to be another 2.7 release
to deliver a fix in?

Barry

From pje at telecommunity.com  Sun Jun 24 19:44:52 2012
From: pje at telecommunity.com (PJ Eby)
Date: Sun, 24 Jun 2012 13:44:52 -0400
Subject: [Python-Dev] Empty directory is a namespace?
In-Reply-To: <4FE6C700.3080205@v.loewis.de>
References: <20120623172534.6eb74ee9@pitrou.net> <20120623173941.38687b15@pitrou.net> <20120623175524.Horde.CiCoP1NNcXdP5ebsMKSiuHA@webmail.df.eu> <20120623175834.72e0dcbf@pitrou.net> <4FE6C700.3080205@v.loewis.de>
Message-ID:

On Sun, Jun 24, 2012 at 3:51 AM, "Martin v. Löwis" wrote:
> On 23.06.2012 17:58, Antoine Pitrou wrote:
> > On Sat, 23 Jun 2012 17:55:24 +0200
> > martin at v.loewis.de wrote:
> >>> That's true. I would have hoped for it to be recognized only when
> >>> there's at least one module or package inside, but it doesn't sound
> >>> easy to check for (especially in the recursive namespace packages case
> >>> - is that possible?).
> >>
> >> Yes - a directory becomes a namespace package by not having an __init__.py,
> >> so the "namespace package" case will likely become the default, and people
> >> will start removing the empty __init__.pys when they don't need to support
> >> 3.2- anymore.
> >
> > Have you tested the performance of namespace packages compared to
> > normal packages?
>
> No, I haven't.

It's probably not worthwhile; any performance cost increase due to looking
at more sys.path entries should be offset by the speedup of any subsequent
imports from later sys.path entries.

Or, to put it another way, almost all the extra I/O cost of namespace
packages is paid only once, for the *first* namespace package imported. In
effect, this means that the amortized cost of using namespace packages
actually *decreases* as namespace packages become more popular.

Also, the total extra overhead equals the cost of a listdir() for each
directory on sys.path that would otherwise not have been checked for an
import. (So, for example, if even one import fails over the life of a
program's execution, or it performs even one import from the last
directory on sys.path, then there is no actual extra overhead.)

Of course, there are still cache validation stat() calls, and they make
the cost of an initial import of a namespace package (vs. a self-contained
package with __init__.py) an extra N stat() calls, where N is the number
of sys.path entries that appear *after* the sys.path directory where the
package is found. (This cost of course must still be compared against the
costs of finding, opening, and running an empty __init__.py[co] file, so
it may actually still be quite competitive in many cases.)

For imports *within* a namespace package, similar considerations apply,
except that N is smaller, and in the simple case of replacing a
self-contained package with a namespace (but not adding any additional
path locations), N will be zero, making imports from inside the namespace
run exactly as quickly as normal imports.
In short, it's not worth worrying about, and definitely nothing that
should cause people to spread an idea that __init__.py somehow speeds
things up. If there's a difference, it'll likely be lost in measurement
noise, due to importlib's new directory caching mechanism.

From solipsis at pitrou.net  Sun Jun 24 19:46:15 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sun, 24 Jun 2012 19:46:15 +0200
Subject: [Python-Dev] Empty directory is a namespace?
In-Reply-To:
References: <20120623172534.6eb74ee9@pitrou.net> <20120623173941.38687b15@pitrou.net> <20120623175524.Horde.CiCoP1NNcXdP5ebsMKSiuHA@webmail.df.eu> <20120623175834.72e0dcbf@pitrou.net> <4FE6C700.3080205@v.loewis.de>
Message-ID: <20120624194615.34299555@pitrou.net>

On Sun, 24 Jun 2012 13:44:52 -0400
PJ Eby wrote:
> On Sun, Jun 24, 2012 at 3:51 AM, "Martin v. Löwis" wrote:
>
> > On 23.06.2012 17:58, Antoine Pitrou wrote:
> > > On Sat, 23 Jun 2012 17:55:24 +0200
> > > martin at v.loewis.de wrote:
> > >>> That's true. I would have hoped for it to be recognized only when
> > >>> there's at least one module or package inside, but it doesn't sound
> > >>> easy to check for (especially in the recursive namespace packages case
> > >>> - is that possible?).
> > >>
> > >> Yes - a directory becomes a namespace package by not having an
> > __init__.py,
> > >> so the "namespace package" case will likely become the default, and
> > people
> > >> will start removing the empty __init__.pys when they don't need to
> > support
> > >> 3.2- anymore.
> > >
> > > Have you tested the performance of namespace packages compared to
> > > normal packages?
> >
> > No, I haven't.
>
> It's probably not worthwhile; any performance cost increase due to looking
> at more sys.path entries should be offset by the speedup of any subsequent
> imports from later sys.path entries.
>
> Or, to put it another way, almost all the extra I/O cost of namespace
> packages is paid only once, for the *first* namespace package imported.

And how about CPU cost?

> In short, it's not worth worrying about, and definitely nothing that
> should cause people to spread an idea that __init__.py somehow speeds
> things up.

The best way to avoid people spreading that idea would be to show hard
measurements.

Regards
Antoine.

From solipsis at pitrou.net  Sun Jun 24 20:24:21 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sun, 24 Jun 2012 20:24:21 +0200
Subject: [Python-Dev] OS X buildbots
Message-ID: <20120624202421.17a5f818@pitrou.net>

Hello,

We only have an x86 Tiger OS X buildbot left. People wanting to see OS X
supported may decide to maintain a buildbot that will help us avoid
regressions. See http://wiki.python.org/moin/BuildBot

Regards
Antoine.

From martin at v.loewis.de  Sun Jun 24 21:27:56 2012
From: martin at v.loewis.de ("Martin v. Löwis")
Date: Sun, 24 Jun 2012 21:27:56 +0200
Subject: [Python-Dev] Empty directory is a namespace?
In-Reply-To: <20120624194615.34299555@pitrou.net>
References: <20120623172534.6eb74ee9@pitrou.net> <20120623173941.38687b15@pitrou.net> <20120623175524.Horde.CiCoP1NNcXdP5ebsMKSiuHA@webmail.df.eu> <20120623175834.72e0dcbf@pitrou.net> <4FE6C700.3080205@v.loewis.de> <20120624194615.34299555@pitrou.net>
Message-ID: <4FE76A3C.9090700@v.loewis.de>

>> In short, it's not worth worrying about, and definitely nothing that
>> should cause people to spread an idea that __init__.py somehow speeds
>> things up.
>
> The best way to avoid people spreading that idea would be to show hard
> measurements.

PJE wants people to spread an idea, not to avoid them doing so.

In any case, hard measurements might help to spread the idea; here are
mine. For the attached project, ec656d79b8ac gives, on my system:

import time for a namespace package: 113 µs (fastest run, hot caches)
import time for a regular package:   128 µs (---- " ------)
first-time import of regular package: 1859 µs (due to pyc generation)
(remove __init__.py and __pycache__ to construct the first setup)

So namespace packages are indeed faster than regular packages, at least
in some cases.

Regards,
Martin

-------------- next part --------------
A non-text attachment was scrubbed...
Name: spacetiming.tgz
Type: application/x-compressed-tar
Size: 386 bytes
Desc: not available

From pje at telecommunity.com  Sun Jun 24 21:51:31 2012
From: pje at telecommunity.com (PJ Eby)
Date: Sun, 24 Jun 2012 15:51:31 -0400
Subject: [Python-Dev] Empty directory is a namespace?
In-Reply-To: <4FE76A3C.9090700@v.loewis.de>
References: <20120623172534.6eb74ee9@pitrou.net> <20120623173941.38687b15@pitrou.net> <20120623175524.Horde.CiCoP1NNcXdP5ebsMKSiuHA@webmail.df.eu> <20120623175834.72e0dcbf@pitrou.net> <4FE6C700.3080205@v.loewis.de> <20120624194615.34299555@pitrou.net> <4FE76A3C.9090700@v.loewis.de>
Message-ID:

On Sun, Jun 24, 2012 at 3:27 PM, "Martin v. Löwis" wrote:
> >> In short, it's not worth worrying about, and definitely nothing that
> >> should cause people to spread an idea that __init__.py somehow speeds
> >> things up.
> >
> > The best way to avoid people spreading that idea would be to show hard
> > measurements.
>
> PJE wants people to spread an idea, not to avoid them doing so.
>
> In any case, hard measurements might help to spread the idea; here are
> mine. For the attached project, ec656d79b8ac gives, on my system:
>
> import time for a namespace package: 113 µs (fastest run, hot caches)
> import time for a regular package:   128 µs (---- " ------)
> first-time import of regular package: 1859 µs (due to pyc generation)
> (remove __init__.py and __pycache__ to construct the first setup)
>
> So namespace packages are indeed faster than regular packages, at least
> in some cases.

I don't really want to spread the idea that they're faster, either: the
exact same benchmark can probably be made to turn out differently if you
have, say, a hundred unzipped eggs on sys.path after the benchmark
directory.

A more realistic benchmark would import more than one module, though...
and then it goes back and forth, dueling benchmarks that can always be
argued against with a different benchmark measuring different things with
other setup conditions.

That's what I meant by "lost in the noise": the outcome of the benchmark
depends on which of many potentially-plausible setups and applications
you choose to use as your basis for measurement, so it's silly to think
that either omitting or including __init__.py should be done for
performance reasons. Do whatever your application needs, because it's not
going to make much difference either way in any realistic program.
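
For anyone who wants to reproduce this kind of measurement themselves, a
minimal sketch along these lines would do (the package name "pkg" and its
presence on sys.path are assumed, not taken from Martin's attachment):

    import sys, timeit

    def import_once():
        # Force a fresh import each run; note importlib's directory
        # caching means these are hot-cache timings.
        sys.modules.pop("pkg", None)
        __import__("pkg")

    print(min(timeit.repeat(import_once, number=1, repeat=1000)))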
From solipsis at pitrou.net  Sun Jun 24 21:51:20 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sun, 24 Jun 2012 21:51:20 +0200
Subject: [Python-Dev] Empty directory is a namespace?
In-Reply-To:
References: <20120623172534.6eb74ee9@pitrou.net> <20120623173941.38687b15@pitrou.net> <20120623175524.Horde.CiCoP1NNcXdP5ebsMKSiuHA@webmail.df.eu> <20120623175834.72e0dcbf@pitrou.net> <4FE6C700.3080205@v.loewis.de> <20120624194615.34299555@pitrou.net> <4FE76A3C.9090700@v.loewis.de>
Message-ID: <1340567480.3390.46.camel@localhost.localdomain>

On Sunday, 24 June 2012 at 15:51 -0400, PJ Eby wrote:
>
> I don't really want to spread the idea that they're faster, either:
> the exact same benchmark can probably be made to turn out differently
> if you have, say, a hundred unzipped eggs on sys.path after the
> benchmark directory.

Yes, the case where sys.path is long (thanks to setuptools) is precisely
what I was thinking about.

> A more realistic benchmark would import more than one module,
> though...

Indeed.

> That's what I meant by "lost in the noise": the outcome of the
> benchmark depends on which of many potentially-plausible setups and
> applications you choose to use as your basis for measurement,

Should we forget to care about performance, just because different
setups might yield different results? That's a rather unconstructive
attitude.

Regards
Antoine.

From g.brandl-nospam at gmx.net  Sun Jun 24 22:12:23 2012
From: g.brandl-nospam at gmx.net (g.brandl-nospam at gmx.net)
Date: Sun, 24 Jun 2012 22:12:23 +0200
Subject: [Python-Dev] 3.3 feature freeze
Message-ID: <20120624201223.118130@gmx.net>

Hi all,

please consider the default branch frozen for new features as of now.
As you know, this also includes changes like large cleanups that cannot
be considered bug fixes.

Contact me directly (IRC or mail) with urgent questions regarding the
release.

I hope that we will see the branch (and the buildbots) calm down and
stabilize a bit tomorrow, so that everything is ready for Tuesday.

cheers,
Georg

From martin at v.loewis.de  Sun Jun 24 22:13:29 2012
From: martin at v.loewis.de ("Martin v. Löwis")
Date: Sun, 24 Jun 2012 22:13:29 +0200
Subject: [Python-Dev] Empty directory is a namespace?
In-Reply-To: <1340567480.3390.46.camel@localhost.localdomain>
References: <20120623172534.6eb74ee9@pitrou.net> <20120623173941.38687b15@pitrou.net> <20120623175524.Horde.CiCoP1NNcXdP5ebsMKSiuHA@webmail.df.eu> <20120623175834.72e0dcbf@pitrou.net> <4FE6C700.3080205@v.loewis.de> <20120624194615.34299555@pitrou.net> <4FE76A3C.9090700@v.loewis.de> <1340567480.3390.46.camel@localhost.localdomain>
Message-ID: <4FE774E9.8060505@v.loewis.de>

> Should we forget to care about performance, just because different
> setups might yield different results?

No, we are not forgetting about performance. You asked for a benchmark,
I presented one. I fail to see your problem. I claim that the
performance of namespace packages is just fine, and presented a
benchmark. PJE claims that the performance of namespace packages is
fine, and provided reasoning. If you want to see two specific scenarios
compared, provide *at least* a description of what these scenarios are.
Better, just do the benchmark yourself.

In general, I think there is a widespread misunderstanding of how new
features impact performance. There are really several cases to be
considered:

1. What is the impact of a feature on existing applications which don't
use it? This is difficult to measure, since you first need to construct
an implementation which doesn't have the feature but is otherwise
identical.
This is often easy to reason about, though.

2. What is the performance of the feature when it is used? This is easy
to measure, but difficult to evaluate. If you measure it and you get some
result - is that good, good enough, or bad?

For 1, it may be tempting to compare the new implementation with the
previous release. However, in the specific case, this is misleading,
since the entire import machinery was replaced. So you really need to
compare with a version of importlib that doesn't have namespace packages.

Regards,
Martin

From eric at trueblade.com  Sun Jun 24 22:14:23 2012
From: eric at trueblade.com (Eric V. Smith)
Date: Sun, 24 Jun 2012 16:14:23 -0400
Subject: [Python-Dev] 3.3 release plans
In-Reply-To: <4FE735E0.2010300@trueblade.com>
References: <20120623105446.251580@gmx.net> <4FE72CBF.5020708@trueblade.com> <20120624172754.5b22e8d8@pitrou.net> <4FE735E0.2010300@trueblade.com>
Message-ID: <4FE7751F.9060709@trueblade.com>

On 6/24/2012 11:44 AM, Eric V. Smith wrote:
> On 6/24/2012 11:27 AM, Antoine Pitrou wrote:
>>> What's your timeframe for bug fixes today? I'd very much like to fix
>>> http://bugs.python.org/issue15039, but it will probably take me another
>>> 2 hours or so.
>>
>> If it's a bugfix, it's not blocked by the feature freeze at all: you
>> can commit after beta (but you might want the bugfix to be in the beta).
>
> Indeed, that's what I'm after: the fix in the beta. I'll do my best to
> get it in, but obviously it's not a blocker.

And, no surprise: it's harder to fix than I thought. It won't make it
into the beta.

Eric.

From ncoghlan at gmail.com  Mon Jun 25 02:32:41 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 25 Jun 2012 10:32:41 +1000
Subject: [Python-Dev] 3.3 release plans
In-Reply-To: <4FE7751F.9060709@trueblade.com>
References: <20120623105446.251580@gmx.net> <4FE72CBF.5020708@trueblade.com> <20120624172754.5b22e8d8@pitrou.net> <4FE735E0.2010300@trueblade.com> <4FE7751F.9060709@trueblade.com>
Message-ID:

On Mon, Jun 25, 2012 at 6:14 AM, Eric V. Smith wrote:
> On 6/24/2012 11:44 AM, Eric V. Smith wrote:
>> On 6/24/2012 11:27 AM, Antoine Pitrou wrote:
>>>> What's your timeframe for bug fixes today? I'd very much like to fix
>>>> http://bugs.python.org/issue15039, but it will probably take me another
>>>> 2 hours or so.
>>>
>>> If it's a bugfix, it's not blocked by the feature freeze at all: you
>>> can commit after beta (but you might want the bugfix to be in the beta).
>>
>> Indeed, that's what I'm after: the fix in the beta. I'll do my best to
>> get it in, but obviously it's not a blocker.
>
> And, no surprise: it's harder to fix than I thought. It won't make it in
> to the beta.

FWIW, the way I'll often handle that kind of change is to temporarily
set it to "release-blocker" (so the RM will at least look at it before
deciding whether or not to proceed with the release), and then drop it
down to "deferred blocker" if I decide it's not going to be ready, but
want to commit to getting it fixed before the *next* release.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From eric at trueblade.com  Mon Jun 25 02:35:37 2012
From: eric at trueblade.com (Eric V. Smith)
Date: Sun, 24 Jun 2012 20:35:37 -0400
Subject: [Python-Dev] 3.3 release plans
In-Reply-To:
References: <20120623105446.251580@gmx.net> <4FE72CBF.5020708@trueblade.com> <20120624172754.5b22e8d8@pitrou.net> <4FE735E0.2010300@trueblade.com> <4FE7751F.9060709@trueblade.com>
Message-ID: <4FE7B259.3040407@trueblade.com>

On 06/24/2012 08:32 PM, Nick Coghlan wrote:
> On Mon, Jun 25, 2012 at 6:14 AM, Eric V. Smith wrote:
>> On 6/24/2012 11:44 AM, Eric V. Smith wrote:
>>> On 6/24/2012 11:27 AM, Antoine Pitrou wrote:
>>>>> What's your timeframe for bug fixes today? I'd very much like to fix
>>>>> http://bugs.python.org/issue15039, but it will probably take me another
>>>>> 2 hours or so.
>>>>
>>>> If it's a bugfix, it's not blocked by the feature freeze at all: you
>>>> can commit after beta (but you might want the bugfix to be in the beta).
>>>
>>> Indeed, that's what I'm after: the fix in the beta. I'll do my best to
>>> get it in, but obviously it's not a blocker.
>>
>> And, no surprise: it's harder to fix than I thought. It won't make it in
>> to the beta.
>
> FWIW, the way I'll often handle that kind of change is to temporarily
> set it to "release-blocker" (so the RM will at least look at it before
> deciding whether or not to proceed with the release), and then drop it
> down to "deferred blocker" if I decide it's not going to be ready, but
> want to commit to getting it fixed before the *next* release.

Thanks. I was going to make it a deferred blocker, but I managed to come
up with a fix in time (I hope!).

Eric.

From eric at trueblade.com  Mon Jun 25 02:30:50 2012
From: eric at trueblade.com (Eric V. Smith)
Date: Sun, 24 Jun 2012 20:30:50 -0400
Subject: [Python-Dev] Error in buildbot link
Message-ID: <4FE7B13A.9020300@trueblade.com>

http://docs.python.org/devguide/buildbots.html contains a link to
http://python.org/dev/buildbot/, which redirects to
http://buildbot.python.org/index.html, which gives a 404.

I think it should point to http://buildbot.python.org/all/waterfall, or
maybe some subset of it.

Eric.

From ncoghlan at gmail.com  Mon Jun 25 04:21:11 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 25 Jun 2012 12:21:11 +1000
Subject: [Python-Dev] [Python-checkins] cpython: Issue #15164: Change return value of platform.uname() from a
In-Reply-To:
References:
Message-ID:

On Mon, Jun 25, 2012 at 7:31 AM, larry.hastings wrote:
> diff --git a/Misc/NEWS b/Misc/NEWS
> --- a/Misc/NEWS
> +++ b/Misc/NEWS
> @@ -59,9 +59,8 @@
>  Library
>  -------
>
> -- Support Mageia Linux in the platform module.
> -
> -- Issue #11678: Support Arch linux in the platform module.
> +- Issue #15164: Change return value of platform.uname() from a
> +  plain tuple to a collections.namedtuple.

Larry, I think this commit accidentally reverted a couple of entries
in Misc/NEWS.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From larry at hastings.org  Mon Jun 25 05:00:56 2012
From: larry at hastings.org (Larry Hastings)
Date: Sun, 24 Jun 2012 20:00:56 -0700
Subject: [Python-Dev] [Python-checkins] cpython: Issue #15164: Change return value of platform.uname() from a
In-Reply-To:
References:
Message-ID: <4FE7D468.80602@hastings.org>

On 06/24/2012 07:21 PM, Nick Coghlan wrote:
> Larry, I think this commit accidentally reverted a couple of entries
> in Misc/NEWS.

It did; I restored them in the subsequent commit.

It's been a hectic couple of days,

//arry/
From merwok at netwok.org  Mon Jun 25 08:02:23 2012
From: merwok at netwok.org (Éric Araujo)
Date: Mon, 25 Jun 2012 02:02:23 -0400
Subject: [Python-Dev] [Python-checkins] peps: Mark PEP 362 as accepted. Huzzah!
In-Reply-To:
References:
Message-ID: <4FE7FEEF.9000905@netwok.org>

Hey Larry,

> http://hg.python.org/peps/rev/5019413bf672
> user: Larry Hastings
> date: Fri Jun 22 15:16:35 2012 -0700
> summary:
> Mark PEP 362 as accepted. Huzzah!

> diff --git a/pep-0362.txt b/pep-0362.txt
> --- a/pep-0362.txt
> +++ b/pep-0362.txt
> @@ -4,7 +4,7 @@
>  Last-Modified: $Date$
>  Author: Brett Cannon , Jiwon Seo ,
>          Yury Selivanov , Larry Hastings
> -Status: Draft
> +Status: Final
>  Type: Standards Track
>  Content-Type: text/x-rst
>  Created: 21-Aug-2006
> @@ -546,12 +546,19 @@
>
>      return wrapper
>
> +Acceptance
> +==========
> +
> +PEP 362 was accepted by Guido, Friday, June 22, 2012 [#accepted]_ .
> +The reference implementation was committed to trunk later that day.

A PEP header is now required for accepted PEPs:
http://www.python.org/dev/peps/pep-0001/#pep-header-preamble . Many PEPs
accepted these last months don't comply with that new rule though.

a-kitten-dies-everytime-you-apply-a-subversion-term-to-mercurial'ly yours

From solipsis at pitrou.net  Mon Jun 25 10:57:11 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Mon, 25 Jun 2012 10:57:11 +0200
Subject: [Python-Dev] Error in buildbot link
References: <4FE7B13A.9020300@trueblade.com>
Message-ID: <20120625105711.7e36176f@pitrou.net>

On Sun, 24 Jun 2012 20:30:50 -0400
"Eric V. Smith" wrote:
>
> http://docs.python.org/devguide/buildbots.html contains a link to
> http://python.org/dev/buildbot/, which redirects to
> http://buildbot.python.org/index.html, which gives a 404.
>
> I think it should point to http://buildbot.python.org/all/waterfall, or
> maybe some subset of it.

Well, there used to be a written text at that place.

Regards
Antoine.

From solipsis at pitrou.net  Mon Jun 25 14:17:21 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Mon, 25 Jun 2012 14:17:21 +0200
Subject: [Python-Dev] cpython: Issue #15177: Added dir_fd parameter to os.fwalk().
References:
Message-ID: <20120625141721.4edd2bf8@pitrou.net>

On Mon, 25 Jun 2012 13:49:14 +0200
larry.hastings wrote:
> http://hg.python.org/cpython/rev/7bebd9870c75
> changeset: 77770:7bebd9870c75
> user: Larry Hastings
> date: Mon Jun 25 04:49:05 2012 -0700
> summary:
> Issue #15177: Added dir_fd parameter to os.fwalk().

Was it really a good idea to rush this? It's not fixing anything, just
adding a new API.

Regards
Antoine.

From barry at python.org  Mon Jun 25 16:01:19 2012
From: barry at python.org (Barry Warsaw)
Date: Mon, 25 Jun 2012 10:01:19 -0400
Subject: [Python-Dev] cpython: Issue #15177: Added dir_fd parameter to os.fwalk().
In-Reply-To: <20120625141721.4edd2bf8@pitrou.net>
References: <20120625141721.4edd2bf8@pitrou.net>
Message-ID: <20120625100119.56fc87d6@limelight.wooz.org>

On Jun 25, 2012, at 02:17 PM, Antoine Pitrou wrote:

>On Mon, 25 Jun 2012 13:49:14 +0200
>larry.hastings wrote:
>> http://hg.python.org/cpython/rev/7bebd9870c75
>> changeset: 77770:7bebd9870c75
>> user: Larry Hastings
>> date: Mon Jun 25 04:49:05 2012 -0700
>> summary:
>> Issue #15177: Added dir_fd parameter to os.fwalk().
>
>Was it really a good idea to rush this? It's not fixing anything, just
>adding a new API.

Wouldn't this be considered a new feature added past the beta feature
freeze? I didn't see any explicit permission from the 3.3 RM in the
tracker issues for this commit.
I don't read Georg's comment in msg163937 as providing that permission.
Please either revert or have Georg approve the patch in the tracker.

-Barry

From g.brandl-nospam at gmx.net  Mon Jun 25 18:13:29 2012
From: g.brandl-nospam at gmx.net (Georg Brandl)
Date: Mon, 25 Jun 2012 18:13:29 +0200
Subject: [Python-Dev] cpython: Issue #15177: Added dir_fd parameter to os.fwalk().
In-Reply-To: <20120625100119.56fc87d6@limelight.wooz.org>
References: <20120625141721.4edd2bf8@pitrou.net> <20120625100119.56fc87d6@limelight.wooz.org>
Message-ID: <20120625161329.156990@gmx.net>

-------- Original Message --------
> Date: Mon, 25 Jun 2012 10:01:19 -0400
> From: Barry Warsaw
> To: python-dev at python.org
> Subject: Re: [Python-Dev] cpython: Issue #15177: Added dir_fd parameter to os.fwalk().
> On Jun 25, 2012, at 02:17 PM, Antoine Pitrou wrote:
>
> >On Mon, 25 Jun 2012 13:49:14 +0200
> >larry.hastings wrote:
> >> http://hg.python.org/cpython/rev/7bebd9870c75
> >> changeset: 77770:7bebd9870c75
> >> user: Larry Hastings
> >> date: Mon Jun 25 04:49:05 2012 -0700
> >> summary:
> >> Issue #15177: Added dir_fd parameter to os.fwalk().
> >
> >Was it really a good idea to rush this? It's not fixing anything, just
> >adding a new API.

How do you know it was rushed? There is plenty of time for testing during
the beta period.

> Wouldn't this be considered a new feature added past the beta feature
> freeze?
> I didn't see any explicit permission from the 3.3 RM in the tracker issues
> for
> this commit. I don't read Georg's comment in msg163937 as providing that
> permission. Please either revert or have Georg approve the patch in the
> tracker.

Relax. It was with my permission. IMO it is quite a logical extension of
fwalk using the new features supporting dir_fd -- the whole point of
fwalk() is working with directory fds instead of names.

cheers,
Georg
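
For context, a minimal sketch of what the new parameter enables (the base
path and subtree name are illustrative):

    import os

    # Walk "subtree" relative to an already-open directory fd, and stat
    # entries relative to the fd that fwalk() yields for each directory.
    dfd = os.open("/some/base", os.O_RDONLY)
    try:
        for dirpath, dirnames, filenames, dirfd in os.fwalk("subtree", dir_fd=dfd):
            for name in filenames:
                st = os.stat(name, dir_fd=dirfd, follow_symlinks=False)
    finally:
        os.close(dfd)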
From merwok at netwok.org  Mon Jun 25 22:34:15 2012
From: merwok at netwok.org (Éric Araujo)
Date: Mon, 25 Jun 2012 16:34:15 -0400
Subject: [Python-Dev] 3.3 feature freeze
In-Reply-To: <20120624201223.118130@gmx.net>
References: <20120624201223.118130@gmx.net>
Message-ID: <4FE8CB47.7000605@netwok.org>

Hi,

> please consider the default branch frozen for new features as of now.
> As you know, this also includes changes like large cleanups that cannot
> be considered bug fixes. [...]
>
> I hope that we will see the branch (and the buildbots) calm down and
> stabilize a bit tomorrow, so that everything is ready for Tuesday.

Can bug fixes be committed as usual or should we wait? Also, when will
the 3.3 branch be started and default be open again to features?

Cheers

From g.brandl at gmx.net  Mon Jun 25 23:03:05 2012
From: g.brandl at gmx.net (Georg Brandl)
Date: Mon, 25 Jun 2012 23:03:05 +0200
Subject: [Python-Dev] 3.3 feature freeze
In-Reply-To: <4FE8CB47.7000605@netwok.org>
References: <20120624201223.118130@gmx.net> <4FE8CB47.7000605@netwok.org>
Message-ID:

On 25.06.2012 22:34, Éric Araujo wrote:
> Hi,
>
>> please consider the default branch frozen for new features as of now.
>> As you know, this also includes changes like large cleanups that cannot
>> be considered bug fixes. [...]
>>
>> I hope that we will see the branch (and the buildbots) calm down and
>> stabilize a bit tomorrow, so that everything is ready for Tuesday.
>
> Can bug fixes be committed as usual or should we wait?

Yes, they can be committed: I'll branch a release clone for tagging.

> Also, when will
> the 3.3 branch be started and default be open again to features?

I think the first rc is a good time for that. Until then, it is a good
thing for everyone to be able to concentrate on bugfixing.

A final 3.2 bugfix release will follow soon after 3.3.0, after which 3.2
will go into security mode, so that the time with three active 3.x
branches will be short.

cheers,
Georg

From barry at barrys-emacs.org  Mon Jun 25 23:28:18 2012
From: barry at barrys-emacs.org (Barry Scott)
Date: Mon, 25 Jun 2012 22:28:18 +0100
Subject: [Python-Dev] Offer of help: http://bugs.python.org/issue10910
In-Reply-To: <4FE7469E.5090301@v.loewis.de>
References: <9ACF1734-D4B9-492D-A706-639A20E2AFB0@barrys-emacs.org> <4FE71654.5000208@v.loewis.de> <4FE7469E.5090301@v.loewis.de>
Message-ID:

On 24 Jun 2012, at 17:55, "Martin v. Löwis" wrote:
>>> Is this even an issue for 3.x? ISTM that the C library macros aren't
>>> used, anyway, so I think this entire section could go from the header
>>> files.
>>
>> $ grep isspace /Library/Frameworks/Python.framework/Versions/3.2/include/python3.2m/*.h
>> /Library/Frameworks/Python.framework/Versions/3.2/include/python3.2m/pyport.h:#undef isspace
>> /Library/Frameworks/Python.framework/Versions/3.2/include/python3.2m/pyport.h:#define isspace(c) iswspace(btowc(c))
>>
>> I'm not familiar with pyport.h usage. I do see that it protects the problem lines with:
>> #ifdef _PY_PORT_CTYPE_UTF8_ISSUE
>
> I think you missed my point. Python shouldn't be using isspace anymore
> at all, so any work-arounds for certain BSD versions should be outdated
> and can be removed entirely.
>
> Of course, before implementing that solution, one would have to verify
> that this claim (macros not used) is indeed true.

Fine so long as the bad code goes.
>> >> I think it should point to http://buildbot.python.org/all/waterfall, or >> maybe some subset of it. > > Well, there used to be a written text at that place. This is now fixed; the new rewrite rule was incorrect. Regards, Martin From roundup-admin at psf.upfronthosting.co.za Tue Jun 26 08:50:12 2012 From: roundup-admin at psf.upfronthosting.co.za (Python tracker) Date: Tue, 26 Jun 2012 06:50:12 +0000 Subject: [Python-Dev] Failed issue tracker submission Message-ID: <20120626065012.906261CC95@psf.upfronthosting.co.za> The node specified by the designator in the subject of your message ("15817") does not exist. Subject was: "[issue15817]" Mail Gateway Help ================= Incoming messages are examined for multiple parts: . In a multipart/mixed message or part, each subpart is extracted and examined. The text/plain subparts are assembled to form the textual body of the message, to be stored in the file associated with a "msg" class node. Any parts of other types are each stored in separate files and given "file" class nodes that are linked to the "msg" node. . In a multipart/alternative message or part, we look for a text/plain subpart and ignore the other parts. . A message/rfc822 is treated similar tomultipart/mixed (except for special handling of the first text part) if unpack_rfc822 is set in the mailgw config section. Summary ------- The "summary" property on message nodes is taken from the first non-quoting section in the message body. The message body is divided into sections by blank lines. Sections where the second and all subsequent lines begin with a ">" or "|" character are considered "quoting sections". The first line of the first non-quoting section becomes the summary of the message. Addresses --------- All of the addresses in the To: and Cc: headers of the incoming message are looked up among the user nodes, and the corresponding users are placed in the "recipients" property on the new "msg" node. The address in the From: header similarly determines the "author" property of the new "msg" node. The default handling for addresses that don't have corresponding users is to create new users with no passwords and a username equal to the address. (The web interface does not permit logins for users with no passwords.) If we prefer to reject mail from outside sources, we can simply register an auditor on the "user" class that prevents the creation of user nodes with no passwords. Actions ------- The subject line of the incoming message is examined to determine whether the message is an attempt to create a new item or to discuss an existing item. A designator enclosed in square brackets is sought as the first thing on the subject line (after skipping any "Fwd:" or "Re:" prefixes). If an item designator (class name and id number) is found there, the newly created "msg" node is added to the "messages" property for that item, and any new "file" nodes are added to the "files" property for the item. If just an item class name is found there, we attempt to create a new item of that class with its "messages" property initialized to contain the new "msg" node and its "files" property initialized to contain any new "file" nodes. Triggers -------- Both cases may trigger detectors (in the first case we are calling the set() method to add the message to the item's spool; in the second case we are calling the create() method to create a new node). If an auditor raises an exception, the original message is bounced back to the sender with the explanatory message given in the exception. 
$Id: mailgw.py,v 1.196 2008-07-23 03:04:44 richard Exp $ -------------- next part -------------- Return-Path: X-Original-To: report at bugs.python.org Delivered-To: roundup+tracker at psf.upfronthosting.co.za Received: from mail.python.org (mail.python.org [82.94.164.166]) by psf.upfronthosting.co.za (Postfix) with ESMTPS id 7F8771CBB4 for ; Tue, 26 Jun 2012 08:50:11 +0200 (CEST) Received: from albatross.python.org (localhost [127.0.0.1]) by mail.python.org (Postfix) with ESMTP id 3WLyYb21g5zPJ4 for ; Tue, 26 Jun 2012 08:50:11 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=python.org; s=200901; t=1340693411; bh=MXGTWyOd852eyn+HLLV5N+r21/D3dyRf6FtHBCuUlYg=; h=Date:Message-Id:Content-Type:MIME-Version: Content-Transfer-Encoding:From:To:Subject; b=j1zqdBGYKFgD7k5LANirZo3rCACM1wxD40cuazDZxto1ONgx0w1wE7a25pHbx2yy7 AKsfTYuANuh77s8HKjmW8e6uZFggjVjANTloxwpTqIg6Iid3YV8qBGlfn9cl7VkPc5 7nKKJmtsUpV921/nU5jnixFcfeBNRbMLBqphj1uk= Received: from localhost (HELO mail.python.org) (127.0.0.1) by albatross.python.org with SMTP; 26 Jun 2012 08:50:11 +0200 Received: from dinsdale.python.org (svn.python.org [IPv6:2001:888:2000:d::a4]) (using TLSv1 with cipher AES256-SHA (256/256 bits)) (No client certificate requested) by mail.python.org (Postfix) with ESMTPS for ; Tue, 26 Jun 2012 08:50:11 +0200 (CEST) Received: from localhost ([127.0.0.1] helo=dinsdale.python.org ident=hg) by dinsdale.python.org with esmtp (Exim 4.72) (envelope-from ) id 1SjPbH-0001gC-32 for report at bugs.python.org; Tue, 26 Jun 2012 08:50:11 +0200 Date: Tue, 26 Jun 2012 08:50:11 +0200 Message-Id: Content-Type: text/plain; charset="utf8" MIME-Version: 1.0 Content-Transfer-Encoding: base64 From: python-dev at python.org To: report at bugs.python.org Subject: [issue15817] TmV3IGNoYW5nZXNldCAxZmE1MGJiY2MyMWYgYnkgTGFycnkgSGFzdGluZ3MgaW4gYnJhbmNoICdk ZWZhdWx0JzoKSXNzdWUgIzE1ODE3OiBCdWdmaXg6IHJlbW92ZSB0ZW1wb3JhcnkgZGlyZWN0b3Jp ZXMgdGVzdF9zaHV0aWwgd2FzIGxlYXZpbmcKaHR0cDovL2hnLnB5dGhvbi5vcmcvY3B5dGhvbi9y ZXYvMWZhNTBiYmNjMjFmCg== From g.brandl at gmx.net Tue Jun 26 10:03:08 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Tue, 26 Jun 2012 10:03:08 +0200 Subject: [Python-Dev] cpython: Added tag v3.3.0b1 for changeset e15c554cd43e In-Reply-To: References: Message-ID: It is done -- the beta is tagged. Thanks to everyone for your hard work, especially to the intrepid devs on IRC. Georg On 26.06.2012 09:43, georg.brandl wrote: > http://hg.python.org/cpython/rev/fadcc985010b > changeset: 77802:fadcc985010b > user: Georg Brandl > date: Tue Jun 26 09:43:46 2012 +0200 > summary: > Added tag v3.3.0b1 for changeset e15c554cd43e > > files: > .hgtags | 1 + > 1 files changed, 1 insertions(+), 0 deletions(-) > > > diff --git a/.hgtags b/.hgtags > --- a/.hgtags > +++ b/.hgtags > @@ -103,3 +103,4 @@ > 2f69db52d6de306cdaef0a0cc00cc823fb350b01 v3.3.0a2 > 0b53b70a40a00013505eb35e3660057b62be77be v3.3.0a3 > 7c51388a3aa7ce76a8541bbbdfc05d2d259a162c v3.3.0a4 > +e15c554cd43eb23bc0a528a4e8741da9bbec9607 v3.3.0b1 From techtonik at gmail.com Tue Jun 26 10:03:06 2012 From: techtonik at gmail.com (anatoly techtonik) Date: Tue, 26 Jun 2012 11:03:06 +0300 Subject: [Python-Dev] itertools.chunks(iterable, size, fill=None) Message-ID: Now that Python 3 is all about iterators (which is a user killer feature for Python according to StackOverflow - http://stackoverflow.com/questions/tagged/python) would it be nice to introduce more first class functions to work with them? One function to be exact to split string into chunks. 
From g.brandl at gmx.net  Tue Jun 26 10:03:08 2012
From: g.brandl at gmx.net (Georg Brandl)
Date: Tue, 26 Jun 2012 10:03:08 +0200
Subject: [Python-Dev] cpython: Added tag v3.3.0b1 for changeset e15c554cd43e
In-Reply-To:
References:
Message-ID:

It is done -- the beta is tagged.

Thanks to everyone for your hard work, especially to the intrepid devs
on IRC.

Georg

On 26.06.2012 09:43, georg.brandl wrote:
> http://hg.python.org/cpython/rev/fadcc985010b
> changeset: 77802:fadcc985010b
> user: Georg Brandl
> date: Tue Jun 26 09:43:46 2012 +0200
> summary:
> Added tag v3.3.0b1 for changeset e15c554cd43e
>
> files:
> .hgtags | 1 +
> 1 files changed, 1 insertions(+), 0 deletions(-)
>
>
> diff --git a/.hgtags b/.hgtags
> --- a/.hgtags
> +++ b/.hgtags
> @@ -103,3 +103,4 @@
> 2f69db52d6de306cdaef0a0cc00cc823fb350b01 v3.3.0a2
> 0b53b70a40a00013505eb35e3660057b62be77be v3.3.0a3
> 7c51388a3aa7ce76a8541bbbdfc05d2d259a162c v3.3.0a4
> +e15c554cd43eb23bc0a528a4e8741da9bbec9607 v3.3.0b1

From techtonik at gmail.com  Tue Jun 26 10:03:06 2012
From: techtonik at gmail.com (anatoly techtonik)
Date: Tue, 26 Jun 2012 11:03:06 +0300
Subject: [Python-Dev] itertools.chunks(iterable, size, fill=None)
Message-ID:

Now that Python 3 is all about iterators (which are a killer feature for
Python users, judging by StackOverflow -
http://stackoverflow.com/questions/tagged/python), wouldn't it be nice to
introduce more first-class functions to work with them? One function, to
be exact, to split a string into chunks:

itertools.chunks(iterable, size, fill=None)

which addresses the 33rd most-voted Python question on SO -
http://stackoverflow.com/questions/312443/how-do-you-split-a-list-into-evenly-sized-chunks-in-python/312464

P.S. CC'ing to python-dev@ to notify about the thread in python-ideas.
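
For reference, the standing answer to this is essentially the "grouper"
recipe from the itertools documentation, which can be written as:

    from itertools import zip_longest

    def chunks(iterable, size, fill=None):
        "chunks('ABCDEFG', 3, 'x') --> ABC DEF Gxx"
        args = [iter(iterable)] * size
        return zip_longest(*args, fillvalue=fill)

    print(["".join(c) for c in chunks("ABCDEFG", 3, "x")])
    # ['ABC', 'DEF', 'Gxx']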
From jeanpierreda at gmail.com  Tue Jun 26 12:51:02 2012
From: jeanpierreda at gmail.com (Devin Jeanpierre)
Date: Tue, 26 Jun 2012 06:51:02 -0400
Subject: [Python-Dev] Poking about issue 1677
Message-ID:

Hi guys,

I just wanted to bring some more attention to issue #1677, because I feel
it's important and misunderstood. See: http://bugs.python.org/issue1677

The issue is that sometimes, if you press ctrl-c on Windows, instead of
raising a KeyboardInterrupt, Python will exit completely. Because of this,
any program that relies on ctrl-c/KeyboardInterrupt is not guaranteed to
work on Windows. Also, working with the interactive interpreter becomes
really annoying for those with the habit of deleting the whole input line
via ctrl-c.

Some people that read the bug report think that this only happens if you
hold down ctrl-c long enough or fast enough or some such thing. That's not
so; it can happen just from pressing ctrl-c once. Whatever race condition
is involved here, it is not related to the timing gaps between presses of
ctrl-c. The "test cases" of "hold down ctrl-c for a bit" are there to
reproduce the problem conveniently, not to describe it.

Hope this was the right place. #python-dev encouraged me to post here, so,
yeah. And thanks for all your hard work making Python a pleasant place to
be. :)

-- Devin

From mail at timgolden.me.uk  Tue Jun 26 12:59:14 2012
From: mail at timgolden.me.uk (Tim Golden)
Date: Tue, 26 Jun 2012 11:59:14 +0100
Subject: [Python-Dev] Poking about issue 1677
In-Reply-To:
References:
Message-ID: <4FE99602.3020603@timgolden.me.uk>

On 26/06/2012 11:51, Devin Jeanpierre wrote:
> Hi guys,
>
> I just wanted to bring some more attention to issue #1677, because I
> feel it's important and misunderstood. See:
> http://bugs.python.org/issue1677
>
> The issue is that sometimes, if you press ctrl-c on Windows, instead
> of raising a KeyboardInterrupt, Python will exit completely. Because
> of this, any program that relies on ctrl-c/KeyboardInterrupt is not
> guaranteed to work on Windows. Also, working with the interactive
> interpreter becomes really annoying for those with the habit of
> deleting the whole input line via ctrl-c.
>
> Some people that read the bug report think that this only happens if
> you hold down ctrl-c long enough or fast enough or some such thing.
> That's not so; it can happen just from pressing ctrl-c once. Whatever
> race condition is involved here, it is not related to the timing gaps
> between presses of ctrl-c. The "test cases" of "hold down ctrl-c for a
> bit" are there to reproduce the problem conveniently, not to describe it.
>
> Hope this was the right place. #python-dev encouraged me to post here,
> so, yeah. And thanks for all your hard work making Python a pleasant
> place to be. :)

Thanks, Devin. Definitely useful info. AFAICT you haven't added that
particular snippet of info to the call (i.e. the fact that even one press
will trigger the issue). Please feel free to add; I notice that you're
the last submitter, some time last year.

Goodness knows if I'll get the time, but the natural thing would be to
hunt down the uses of SetConsoleCtrlHandler to see what we're doing with
them.

TJG

From martin at v.loewis.de  Tue Jun 26 13:21:01 2012
From: martin at v.loewis.de (martin at v.loewis.de)
Date: Tue, 26 Jun 2012 13:21:01 +0200
Subject: [Python-Dev] Poking about issue 1677
In-Reply-To:
References:
Message-ID: <20120626132101.Horde.CAsZaUlCcOxP6ZsdKc-WxdA@webmail.df.eu>

> I just wanted to bring some more attention to issue #1677, because I
> feel it's important and misunderstood.

Please consider working even more on a solution then. If I had time to
work on this, I'd run Python in a debugger, and see what happens. Finding
out in what state Python is when it stops might be enough to create a
solution.

I find this very hard to reproduce. All of the versions reported to crash
work fine for me most of the time, except that a small percentage (1 out
of 5 starts perhaps) actually does crash.

Regards,
Martin

From mail at timgolden.me.uk  Tue Jun 26 13:28:28 2012
From: mail at timgolden.me.uk (Tim Golden)
Date: Tue, 26 Jun 2012 12:28:28 +0100
Subject: [Python-Dev] Poking about issue 1677
In-Reply-To: <4FE99602.3020603@timgolden.me.uk>
References: <4FE99602.3020603@timgolden.me.uk>
Message-ID: <4FE99CDC.2020803@timgolden.me.uk>

On 26/06/2012 11:59, Tim Golden wrote:
> On 26/06/2012 11:51, Devin Jeanpierre wrote:
>> Hi guys,
>>
>> I just wanted to bring some more attention to issue #1677, because I
>> feel it's important and misunderstood. See:
>> http://bugs.python.org/issue1677
>>
>> The issue is that sometimes, if you press ctrl-c on Windows, instead
>> of raising a KeyboardInterrupt, Python will exit completely. Because
>> of this, any program that relies on ctrl-c/KeyboardInterrupt is not
>> guaranteed to work on Windows. Also, working with the interactive
>> interpreter becomes really annoying for those with the habit of
>> deleting the whole input line via ctrl-c.
>>
>> Some people that read the bug report think that this only happens if
>> you hold down ctrl-c long enough or fast enough or some such thing.
>> That's not so; it can happen just from pressing ctrl-c once. Whatever
>> race condition is involved here, it is not related to the timing gaps
>> between presses of ctrl-c. The "test cases" of "hold down ctrl-c for a
>> bit" are there to reproduce the problem conveniently, not to describe it.
>>
>> Hope this was the right place. #python-dev encouraged me to post here,
>> so, yeah. And thanks for all your hard work making Python a pleasant
>> place to be. :)
>
> Thanks, Devin. Definitely useful info. AFAICT you haven't added that
> particular snippet of info to the call (i.e. the fact that even one press
> will trigger the issue). Please feel free to add; I notice that you're
> the last submitter, some time last year.
>
> Goodness knows if I'll get the time, but the
> natural thing would be to hunt down the uses of SetConsoleCtrlHandler to
> see what we're doing with them.

OK. We clearly *don't* set a console handler as I thought we did. Scratch
that idea off the list. As Martin said: need to run this with a debugger
attached to try to catch it in action.

TJG
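
For anyone hunting this down, a minimal sketch of installing a console
handler from Python via ctypes (Windows-only; keep a reference to the
handler object, or it will be garbage-collected while still registered):

    import ctypes
    from ctypes import wintypes

    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
    PHANDLER_ROUTINE = ctypes.WINFUNCTYPE(wintypes.BOOL, wintypes.DWORD)
    CTRL_C_EVENT = 0

    @PHANDLER_ROUTINE
    def handler(ctrl_type):
        if ctrl_type == CTRL_C_EVENT:
            return True   # swallow Ctrl-C instead of letting it kill us
        return False      # let the next handler (or the default) run

    if not kernel32.SetConsoleCtrlHandler(handler, True):
        raise ctypes.WinError(ctypes.get_last_error())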
Also, working with the interactive > interpreter becomes really annoying for those with the habit of > deleting the whole input line via ctrl-c. Idle Shell, 3.3.0a4, Win 7does not seem to have this problem. Still up after 6000 ^Cs. It is better anyway, in multiple ways, than Command Prompt. (That does not help batch-mode programs, though.) That aside, perhaps the way it handles ^C might help. I did get the CP to close four times, each time after a few hundred to maybe a thousand ^Cs. It seems to require more than just one held down key press. I suspect the closures happened after the limited line buffer was filled and it was starting to delete the earliest lines. -- Terry Jan Reedy From g.rodola at gmail.com Wed Jun 27 01:49:34 2012 From: g.rodola at gmail.com (=?ISO-8859-1?Q?Giampaolo_Rodol=E0?=) Date: Wed, 27 Jun 2012 01:49:34 +0200 Subject: [Python-Dev] os.path.exists() / os.path.isdir() inconsistency when dealing with gvfs directories Message-ID: I've just noticed a strange behavior when dealing with gvfs filesystems: giampaolo at ubuntu:~$ python -c "import os; print(os.path.exists('/home/giampaolo/.gvfs'))" True giampaolo at ubuntu:~$ sudo su root at ubuntu:~# python -c "import os; print(os.path.exists('/home/giampaolo/.gvfs'))" False This is due to os.stat() which internally fails with PermissionError (EACCES). The same problem exists with os.path.isdir() which will return True as limited user and False as root. I'm not sure what's best to do here nor I know if there are other cases other than when dealing with gvfs which can produce similar behaviors but here's an idea: - make os.path.exists() return True in case of PermissionError because that's supposed to mean there's an existing path to deny access to - fix isdir(), islink(), isfile() documentation pointing out that in case of EACCES/EPERM or when dealing with exotic paths/fs it may return incorrect results. Comments? --- Giampaolo http://code.google.com/p/pyftpdlib/ http://code.google.com/p/psutil/ http://code.google.com/p/pysendfile/ From cs at zip.com.au Wed Jun 27 02:02:38 2012 From: cs at zip.com.au (Cameron Simpson) Date: Wed, 27 Jun 2012 10:02:38 +1000 Subject: [Python-Dev] os.path.exists() / os.path.isdir() inconsistency when dealing with gvfs directories In-Reply-To: References: Message-ID: <20120627000238.GA22009@cskk.homeip.net> On 27Jun2012 01:49, Giampaolo Rodol? wrote: | I've just noticed a strange behavior when dealing with gvfs filesystems: | | giampaolo at ubuntu:~$ python -c "import os; | print(os.path.exists('/home/giampaolo/.gvfs'))" | True | giampaolo at ubuntu:~$ sudo su | root at ubuntu:~# python -c "import os; | print(os.path.exists('/home/giampaolo/.gvfs'))" | False | | This is due to os.stat() which internally fails with PermissionError (EACCES). | The same problem exists with os.path.isdir() which will return True as | limited user and False as root. | I'm not sure what's best to do here nor I know if there are other | cases other than when dealing with gvfs which can produce similar | behaviors but here's an idea: | | - make os.path.exists() return True in case of PermissionError because | that's supposed to mean there's an existing path to deny access to Definitely not. Firstly, if I ask about "a/b/c" and am denied access to "a/b", then it would be a lie to say "c" exists - it may not. Secondly, that's not at all what the UNIX stat() call does, and these library calls mirror stat() very closely, possibly identically. 
And that is good, because people don't need to keep two models in their
head: one for what the OS actually does and one for what Python's
close-to-the-OS library calls do.

| - fix isdir(), islink(), isfile() documentation pointing out that in
| case of EACCES/EPERM or when dealing with exotic paths/fs it may
| return incorrect results.

I don't think False is incorrect. Arguably the docs should be clearer
that True means it exists and False means it does not, or could not be
accessed. A bit like the empty() tests on Queues etc; one side of the
test is strong (at least at the time of the test) and the other is weak.

So I'd be +0.5 for making the docs more clear that True is reliable and
False may merely mean "could not access". And -1 on changing the
semantics; I think they are correct.

Cheers,
--
Cameron Simpson

Against stupidity....the Gods themselves contend in vain!

From g.rodola at gmail.com  Wed Jun 27 02:19:03 2012
From: g.rodola at gmail.com (=?ISO-8859-1?Q?Giampaolo_Rodol=E0?=)
Date: Wed, 27 Jun 2012 02:19:03 +0200
Subject: [Python-Dev] os.path.exists() / os.path.isdir() inconsistency
	when dealing with gvfs directories
In-Reply-To: <20120627000238.GA22009@cskk.homeip.net>
References: <20120627000238.GA22009@cskk.homeip.net>
Message-ID:

2012/6/27 Cameron Simpson :
> On 27Jun2012 01:49, Giampaolo Rodolà wrote:
> | I've just noticed a strange behavior when dealing with gvfs filesystems:
> |
> | giampaolo at ubuntu:~$ python -c "import os;
> | print(os.path.exists('/home/giampaolo/.gvfs'))"
> | True
> | giampaolo at ubuntu:~$ sudo su
> | root at ubuntu:~# python -c "import os;
> | print(os.path.exists('/home/giampaolo/.gvfs'))"
> | False
> |
> | This is due to os.stat() which internally fails with PermissionError (EACCES).
> | The same problem exists with os.path.isdir() which will return True as
> | limited user and False as root.
> | I'm not sure what's best to do here nor do I know if there are other
> | cases other than when dealing with gvfs which can produce similar
> | behaviors but here's an idea:
> |
> | - make os.path.exists() return True in case of PermissionError because
> | that's supposed to mean there's an existing path to deny access to
>
> Definitely not.
>
> Firstly, if I ask about "a/b/c" and am denied access to "a/b", then it
> would be a lie to say "c" exists - it may not.

Right, I wasn't taking that into account.

> So I'd be +0.5 for making the docs more clear that True is reliable and
> False may merely mean "could not access".

+1. I was about to propose a 'strict' parameter which lets the exception
propagate in case of errno != EACCES/EPERM but a doc fix is probably
just fine.
I'll file a bug report later today.

Regards,

--- Giampaolo
http://code.google.com/p/pyftpdlib/
http://code.google.com/p/psutil/
http://code.google.com/p/pysendfile/

From georg at python.org  Wed Jun 27 08:10:26 2012
From: georg at python.org (Georg Brandl)
Date: Wed, 27 Jun 2012 08:10:26 +0200
Subject: [Python-Dev] [RELEASED] Python 3.3.0 beta 1
Message-ID: <4FEAA3D2.8080007@python.org>

On behalf of the Python development team, I'm happy to announce the
first beta release of Python 3.3.0.

This is a preview release, and its use is not recommended in
production settings.

Python 3.3 includes a range of improvements of the 3.x series, as well
as easier porting between 2.x and 3.x.
Major new features and changes in the 3.3 release series are: * PEP 380, syntax for delegating to a subgenerator ("yield from") * PEP 393, flexible string representation (doing away with the distinction between "wide" and "narrow" Unicode builds) * A C implementation of the "decimal" module, with up to 80x speedup for decimal-heavy applications * The import system (__import__) now based on importlib by default * The new "lzma" module with LZMA/XZ support * PEP 397, a Python launcher for Windows * PEP 405, virtual environment support in core * PEP 420, namespace package support * PEP 3151, reworking the OS and IO exception hierarchy * PEP 3155, qualified name for classes and functions * PEP 409, suppressing exception context * PEP 414, explicit Unicode literals to help with porting * PEP 418, extended platform-independent clocks in the "time" module * PEP 412, a new key-sharing dictionary implementation that significantly saves memory for object-oriented code * PEP 362, the function-signature object * The new "faulthandler" module that helps diagnosing crashes * The new "unittest.mock" module * The new "ipaddress" module * The "sys.implementation" attribute * A policy framework for the email package, with a provisional (see PEP 411) policy that adds much improved unicode support for email header parsing * A "collections.ChainMap" class for linking mappings to a single unit * Wrappers for many more POSIX functions in the "os" and "signal" modules, as well as other useful functions such as "sendfile()" * Hash randomization, introduced in earlier bugfix releases, is now switched on by default In total, almost 500 API items are new or improved in Python 3.3. For a more extensive list of changes in 3.3.0, see http://docs.python.org/3.3/whatsnew/3.3.html (*) To download Python 3.3.0 visit: http://www.python.org/download/releases/3.3.0/ Please consider trying Python 3.3.0 with your code and reporting any bugs you may notice to: http://bugs.python.org/ Enjoy! (*) Please note that this document is usually finalized late in the release cycle and therefore may have stubs and missing entries at this point. -- Georg Brandl, Release Manager georg at python.org (on behalf of the entire python-dev team and 3.3's contributors) From larry at hastings.org Wed Jun 27 08:49:43 2012 From: larry at hastings.org (Larry Hastings) Date: Tue, 26 Jun 2012 23:49:43 -0700 Subject: [Python-Dev] [RELEASED] Python 3.3.0 beta 1 In-Reply-To: <4FEAA3D2.8080007@python.org> References: <4FEAA3D2.8080007@python.org> Message-ID: <4FEAAD07.1010503@hastings.org> On 06/26/2012 11:10 PM, Georg Brandl wrote: > On behalf of the Python development team, I'm happy to announce the > first beta release of Python 3.3.0. I <3 <3.<3 Thanks Georg! And everybody who contributed. Stoked, //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From phd at phdru.name Wed Jun 27 09:32:52 2012 From: phd at phdru.name (Oleg Broytman) Date: Wed, 27 Jun 2012 11:32:52 +0400 Subject: [Python-Dev] os.path.exists() / os.path.isdir() inconsistency when dealing with gvfs directories In-Reply-To: References: Message-ID: <20120627073252.GA6711@iskra.aviel.ru> On Wed, Jun 27, 2012 at 01:49:34AM +0200, Giampaolo Rodol? 
wrote:
> I've just noticed a strange behavior when dealing with gvfs filesystems:
>
> giampaolo at ubuntu:~$ python -c "import os;
> print(os.path.exists('/home/giampaolo/.gvfs'))"
> True
> giampaolo at ubuntu:~$ sudo su
> root at ubuntu:~# python -c "import os;
> print(os.path.exists('/home/giampaolo/.gvfs'))"
> False
>
> This is due to os.stat() which internally fails with PermissionError (EACCES).

BTW, the same is true for FUSE when an FS has been mounted without
something like "-o allow_other" or "-o allow_root":

root at nb # ls /home/phd/mnt/net
ls: cannot access /home/phd/mnt/net: Permission denied

Oleg.
--
Oleg Broytman http://phdru.name/ phd at phdru.name
Programmers don't die, they just GOSUB without RETURN.

From ironfroggy at gmail.com  Wed Jun 27 11:58:51 2012
From: ironfroggy at gmail.com (Calvin Spealman)
Date: Wed, 27 Jun 2012 05:58:51 -0400
Subject: [Python-Dev] [RELEASED] Python 3.3.0 beta 1
In-Reply-To: <4FEAA3D2.8080007@python.org>
References: <4FEAA3D2.8080007@python.org>
Message-ID:

All,

Congratulations. This is a big one!

On Wed, Jun 27, 2012 at 2:10 AM, Georg Brandl wrote:
> On behalf of the Python development team, I'm happy to announce the
> first beta release of Python 3.3.0.
>
> This is a preview release, and its use is not recommended in
> production settings.
>
> Python 3.3 includes a range of improvements of the 3.x series, as well
> as easier porting between 2.x and 3.x.  Major new features and changes
> in the 3.3 release series are:
>
> * PEP 380, syntax for delegating to a subgenerator ("yield from")
> * PEP 393, flexible string representation (doing away with the
>   distinction between "wide" and "narrow" Unicode builds)
> * A C implementation of the "decimal" module, with up to 80x speedup
>   for decimal-heavy applications
> * The import system (__import__) now based on importlib by default
> * The new "lzma" module with LZMA/XZ support
> * PEP 397, a Python launcher for Windows
> * PEP 405, virtual environment support in core
> * PEP 420, namespace package support
> * PEP 3151, reworking the OS and IO exception hierarchy
> * PEP 3155, qualified name for classes and functions
> * PEP 409, suppressing exception context
> * PEP 414, explicit Unicode literals to help with porting
> * PEP 418, extended platform-independent clocks in the "time" module
> * PEP 412, a new key-sharing dictionary implementation that
>   significantly saves memory for object-oriented code
> * PEP 362, the function-signature object
> * The new "faulthandler" module that helps diagnosing crashes
> * The new "unittest.mock" module
> * The new "ipaddress" module
> * The "sys.implementation" attribute
> * A policy framework for the email package, with a provisional (see
>   PEP 411) policy that adds much improved unicode support for email
>   header parsing
> * A "collections.ChainMap" class for linking mappings to a single unit
> * Wrappers for many more POSIX functions in the "os" and "signal"
>   modules, as well as other useful functions such as "sendfile()"
> * Hash randomization, introduced in earlier bugfix releases, is now
>   switched on by default
>
> In total, almost 500 API items are new or improved in Python 3.3.
> For a more extensive list of changes in 3.3.0, see
>
>     http://docs.python.org/3.3/whatsnew/3.3.html (*)
>
> To download Python 3.3.0 visit:
>
>     http://www.python.org/download/releases/3.3.0/
>
> Please consider trying Python 3.3.0 with your code and reporting any bugs
> you may notice to:
>
>     http://bugs.python.org/
>
>
> Enjoy!
> > (*) Please note that this document is usually finalized late in the release
>     cycle and therefore may have stubs and missing entries at this point.
>
> --
> Georg Brandl, Release Manager
> georg at python.org
> (on behalf of the entire python-dev team and 3.3's contributors)
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/ironfroggy%40gmail.com

--
Read my blog! I depend on your acceptance of my opinion! I am interesting!
http://techblog.ironfroggy.com/
Follow me if you're into that sort of thing: http://www.twitter.com/ironfroggy

From benoit at marmelune.net  Wed Jun 27 11:08:45 2012
From: benoit at marmelune.net (=?ISO-8859-1?Q?Beno=EEt_Bryon?=)
Date: Wed, 27 Jun 2012 11:08:45 +0200
Subject: [Python-Dev] PEP 423 : naming conventions and recipes related to
	packaging
Message-ID: <4FEACD9D.8090208@marmelune.net>

Hi,

Here is an informational PEP proposal:
http://hg.python.org/peps/file/52767ab7e140/pep-0423.txt

Could you review it for style, consistency and content?

Additional notes:

* Original discussion posted to distutils-sig at python.org

   * started in May 2012 at
     http://mail.python.org/pipermail/distutils-sig/2012-May/018551.html
   * continues in June 2012 at
     http://mail.python.org/pipermail/distutils-sig/2012-June/018641.html
   * that's why I set the "Discussion-To:" header.

* Original document was edited as a contrib to cpython documentation:

   * http://bugs.python.org/issue14899
   * file history at
     https://bitbucket.org/benoitbryon/cpython/history/Doc/packaging/packagenames.rst
   * but it looked like a PEP, so posted to peps at python.org...

Regards,
Benoit (benoitbb on irc.freenode.net)

From solipsis at pitrou.net  Wed Jun 27 12:50:55 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 27 Jun 2012 12:50:55 +0200
Subject: [Python-Dev] PEP 423 : naming conventions and recipes related to
	packaging
References: <4FEACD9D.8090208@marmelune.net>
Message-ID: <20120627125055.196efe9f@pitrou.net>

Hello,

On Wed, 27 Jun 2012 11:08:45 +0200
Benoît Bryon wrote:
> Hi,
>
> Here is an informational PEP proposal:
> http://hg.python.org/peps/file/52767ab7e140/pep-0423.txt
>
> Could you review it for style, consistency and content?

There is one Zen principle this PEP is missing:

Flat is better than nested.

This PEP seems to promote the practice of having a top-level namespace
denote ownership. I think it should do the reverse: promote
meaningful top-level packages (e.g. "sphinx") as standard practice, and
allow an exception for when a piece of software is part of a larger
organizational body.

(i.e., "Community-owned projects can avoid namespace packages" should
be the first item in the PEP and renamed so that it appears as the
common rule)

I don't think we want a Java-like landscape where everyone operates
behind their closed fences à la org.myorganization.somecommunity and
where package names shout "ownership" rather than "functionality". (*)

Also, do note that "packaging" is ambiguous in Python-land.

(*) (for the record, companies internally can do what they want; this
PEP AFAICT addresses the case of publicly released packages)

Regards

Antoine.
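(To make the two layouts concrete, here is a minimal sketch; the
distribution and package names are made up, and the nested form relies
on the PEP 420 namespace packages new in 3.3:)

    import os, sys, tempfile

    # Nested, ownership-style: two separately distributed "portions"
    # share one top-level namespace.  Note: no acme/__init__.py at all.
    #   dist1/acme/billing/__init__.py
    #   dist2/acme/crm/__init__.py
    tmp = tempfile.mkdtemp()
    for dist, sub in (('dist1', 'billing'), ('dist2', 'crm')):
        pkg = os.path.join(tmp, dist, 'acme', sub)
        os.makedirs(pkg)
        open(os.path.join(pkg, '__init__.py'), 'w').close()
    sys.path[:0] = [os.path.join(tmp, 'dist1'), os.path.join(tmp, 'dist2')]

    import acme.billing, acme.crm   # both portions merge under "acme"
    print(acme.__path__)            # spans both distribution directories

    # Flat, functionality-style: a single ordinary top-level package,
    # e.g. sphinx/__init__.py, imported simply as:
    #   import sphinx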
From ncoghlan at gmail.com Wed Jun 27 13:19:57 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 27 Jun 2012 21:19:57 +1000 Subject: [Python-Dev] PEP 423 : naming conventions and recipes related to packaging In-Reply-To: <20120627125055.196efe9f@pitrou.net> References: <4FEACD9D.8090208@marmelune.net> <20120627125055.196efe9f@pitrou.net> Message-ID: I thought the PEP actually covered it pretty well: - if you don't want to worry about name conflicts for every module, pick *one* short top level namespace for your group and use that - for shared modules, use the top level namespace with PyPI as the name registry It's reasonable advice when coupled with the "avoid more than two levels of nesting - when tempted by this, split out some peer modules" elsewhere in the doc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Wed Jun 27 13:34:53 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 27 Jun 2012 13:34:53 +0200 Subject: [Python-Dev] PEP 423 : naming conventions and recipes related to packaging In-Reply-To: References: <4FEACD9D.8090208@marmelune.net> <20120627125055.196efe9f@pitrou.net> Message-ID: <20120627133453.1766ea15@pitrou.net> On Wed, 27 Jun 2012 21:19:57 +1000 Nick Coghlan wrote: > I thought the PEP actually covered it pretty well: > - if you don't want to worry about name conflicts for every module, pick > *one* short top level namespace for your group and use that > - for shared modules, use the top level namespace with PyPI as the name > registry That's not very clear to me when reading the PEP. For example, one of the items in the "overview" is "use top-level namespace for ownership". I don't think it should be, unless we want to promote such a practice. Similarly, I think the section about private projects ("Private (including closed-source) projects use a namespace") should be removed. It is not our duty to promote naming standards for private (i.e. internal) projects. Also, I don't see what's so important about using your company's name as a top-level namespace. You don't need it for conflict avoidance: you can just as well use distinctive project names. Regards Antoine. From p.f.moore at gmail.com Wed Jun 27 14:20:41 2012 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 27 Jun 2012 13:20:41 +0100 Subject: [Python-Dev] PEP 423 : naming conventions and recipes related to packaging In-Reply-To: <20120627133453.1766ea15@pitrou.net> References: <4FEACD9D.8090208@marmelune.net> <20120627125055.196efe9f@pitrou.net> <20120627133453.1766ea15@pitrou.net> Message-ID: On 27 June 2012 12:34, Antoine Pitrou wrote: > On Wed, 27 Jun 2012 21:19:57 +1000 > Nick Coghlan wrote: >> I thought the PEP actually covered it pretty well: >> - if you don't want to worry about name conflicts for every module, pick >> *one* short top level namespace for your group and use that >> - for shared modules, use the top level namespace with PyPI as the name >> registry > > That's not very clear to me when reading the PEP. > For example, one of the items in the "overview" is "use top-level > namespace for ownership". I don't think it should be, unless we want to > promote such a practice. I agree. I only skimmed the PEP, but even on a skimming, I got the impression that it was promoting the use of namespaces for ownership, in a Java-like way. The part Nick quoted is substantially more reasonable (assuming that's a direct quote, rather than Nick's summarisation) but the principle should be made clear right at the top. 
I'd say that a headline item should be something like:

Using namespaces:

- "Flat is better than nested"
- where possible, use a single top-level name for your package (check
on PyPI that the name you choose isn't in use).
- Where you expect to have multiple packages all relating to the same
top-level functionality, it may make sense to use a single top-level
namespace package, and put your packages underneath that.
- There should never be a need to use more than one level of namespace
(if you think there is, you should probably promote each of the
level-2 namespaces to the top level).
- Namespaces (and package names) should always be based on
functionality, never on ownership[1].

Personal or private code doesn't need to follow these guidelines, but
be aware that personal code often ends up more widely used than
originally envisaged...

[1] Sorry, Barry. There clearly needs to be an exception for the flufl
namespace :-)

Paul.

From lists at cheimes.de  Wed Jun 27 14:56:39 2012
From: lists at cheimes.de (Christian Heimes)
Date: Wed, 27 Jun 2012 14:56:39 +0200
Subject: [Python-Dev] os.path.exists() / os.path.isdir() inconsistency
	when dealing with gvfs directories
In-Reply-To:
References:
Message-ID:

On 27.06.2012 01:49, Giampaolo Rodolà wrote:
> I've just noticed a strange behavior when dealing with gvfs filesystems:
>
> giampaolo at ubuntu:~$ python -c "import os;
> print(os.path.exists('/home/giampaolo/.gvfs'))"
> True
> giampaolo at ubuntu:~$ sudo su
> root at ubuntu:~# python -c "import os;
> print(os.path.exists('/home/giampaolo/.gvfs'))"
> False

The issue isn't isolated to GVFS. It's more that the current user
doesn't have the exec permission on the directory entry
/home/giampaolo. On most systems the home directories have either 0700
or 0750. A user needs the 'x' bit to enter or traverse a directory
(mnemonic: exec -> enter), the 'r' bit to read the content of a
directory (e.g. listdir) and the 'w' bit to write (create or delete
files unless the sticky bit is set).

Christian

From hansmu at xs4all.nl  Wed Jun 27 15:10:33 2012
From: hansmu at xs4all.nl (Hans Mulder)
Date: Wed, 27 Jun 2012 15:10:33 +0200
Subject: [Python-Dev] os.path.exists() / os.path.isdir() inconsistency
	when dealing with gvfs directories
In-Reply-To:
References: <20120627000238.GA22009@cskk.homeip.net>
Message-ID:

On 27/06/12 02:19:03, Giampaolo Rodolà wrote:
> 2012/6/27 Cameron Simpson :
>> So I'd be +0.5 for making the docs more clear that True is reliable and
>> False may merely mean "could not access".
>
> +1

+1

> I was about to propose a 'strict' parameter which lets the exception
> propagate in case of errno != EACCES/EPERM but a doc fix is probably
> just fine.
> I'll file a bug report later today.

A 'strict' parameter that just propagates the exception might be a good
idea. That would allow the user to deal with whatever issues stat()
encounters.

Arbitrarily mapping EPERM to 'False' would be unhelpful, as it leaves
the user in a position where one value can mean two different things.

-- HansM
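(A sketch of what such a hypothetical 'strict' flag might look like -
this is not an actual stdlib API, just an illustration of Hans's
suggestion:)

    import errno
    import os

    def exists(path, strict=False):
        # strict=False: behave like today's os.path.exists().
        # strict=True: only "no such file" maps to False; anything
        # else, e.g. EACCES, propagates to the caller.
        try:
            os.stat(path)
        except OSError as e:
            if strict and e.errno not in (errno.ENOENT, errno.ENOTDIR):
                raise
            return False
        return True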
From mail at timgolden.me.uk  Wed Jun 27 15:35:17 2012
From: mail at timgolden.me.uk (Tim Golden)
Date: Wed, 27 Jun 2012 14:35:17 +0100
Subject: [Python-Dev] Poking about issue 1677
In-Reply-To:
References:
Message-ID: <4FEB0C15.1080305@timgolden.me.uk>

On 26/06/2012 20:02, Terry Reedy wrote:
> On 6/26/2012 6:51 AM, Devin Jeanpierre wrote:
>
>> The issue is that sometimes, if you press ctrl-c on Windows, instead
>> of raising a KeyboardInterrupt, Python will exit completely. Because
>> of this, any program that relies on ctrl-c/KeyboardInterrupt is not
>> guaranteed to work on windows. Also, working with the interactive
>> interpreter becomes really annoying for those with the habit of
>> deleting the whole input line via ctrl-c.
>
> Idle Shell, 3.3.0a4, Win 7 does not seem to have this problem. Still up
> after 6000 ^Cs. It is better anyway, in multiple ways, than Command
> Prompt. (That does not help batch-mode programs, though.)
>
> That aside, perhaps the way it handles ^C might help.
>
> I did get the CP to close four times, each time after a few hundred to
> maybe a thousand ^Cs. It seems to require more than just one held down
> key press. I suspect the closures happened after the limited line buffer
> was filled and it was starting to delete the earliest lines.

I've just updated the call with as much as I had time for just now.

Your point about IDLE made me think; I installed pyreadline and now
I can't get it to fail at all. Seems to point even more to myreadline.c.

More later

TJG

From p.f.moore at gmail.com  Wed Jun 27 16:57:56 2012
From: p.f.moore at gmail.com (Paul Moore)
Date: Wed, 27 Jun 2012 15:57:56 +0100
Subject: [Python-Dev] PEP 423 : naming conventions and recipes related to
	packaging
In-Reply-To:
References: <4FEACD9D.8090208@marmelune.net>
	<20120627125055.196efe9f@pitrou.net>
	<20120627133453.1766ea15@pitrou.net>
Message-ID:

On 27 June 2012 13:20, Paul Moore wrote:
> I agree. I only skimmed the PEP, but even on a skimming, I got the
> impression that it was promoting the use of namespaces for ownership,
> in a Java-like way. The part Nick quoted is substantially more
> reasonable (assuming that's a direct quote, rather than Nick's
> summarisation) but the principle should be made clear right at the
> top.

Reading in a bit more depth, I'd say that I specifically disagree with
this section:

"""
Top-level namespace relates to code ownership
=============================================

This helps avoid clashes between project names.

Ownership could be:

* an individual. [...]
* an organization. [...]
* a group or community. [...]
* a group or community related to another package.
"""

I'd say a top-level namespace should *never* (hello again, Barry!)
relate to an individual. And never to an organisation either, the
Django case notwithstanding. In the case of Django, I see the top-level
namespace as belonging to the *software* Django, not to the
*organisation*, the Django foundation. In fact, with the exception of
the "an individual" case, I'd say all of the others are actually
referring to the software rather than the organisation/group/community
owning that project.

To be honest, I see this whole section as misguided - the top-level
namespace is the project. Simple as that.

Oh, and the terminology is further muddled here, as the "top level
namespace" is usually not a namespace package in the sense of PEP 420.

Generally, the impression I get is that the PEP is recommending more
levels of nesting than I would agree with, but it's hard to be sure,
because the concept of nesting feels a bit overloaded. The key for me
is that generally, I like to be able to type "import X" where X is not
a dotted name, and then refer to X.x1, X.x2, etc. I'd call that no
levels of nesting, to be honest. For complex stuff, subpackages
("import X.Y") might be needed, but that's rare (and even then, key
names should be exposed directly from X).

Paul.

PS Having said all this, I don't maintain any code on PyPI - I'm a
user not a producer. That may affect my perspective...
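(For illustration, the flat layout Paul describes might look like this -
"X", "x1" and "x2" are placeholders, not real project names:)

    # Layout:
    #
    #   X/
    #       __init__.py
    #       _impl.py
    #
    # X/_impl.py holds the implementation:
    #
    #   def x1(): ...
    #   def x2(): ...
    #
    # X/__init__.py re-exports the key names:
    #
    #   from ._impl import x1, x2
    #
    # so client code never needs a dotted import:

    import X
    X.x1()
    X.x2()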
From mail at timgolden.me.uk  Wed Jun 27 20:52:13 2012
From: mail at timgolden.me.uk (Tim Golden)
Date: Wed, 27 Jun 2012 19:52:13 +0100
Subject: [Python-Dev] Poking about issue 1677
In-Reply-To: <4FEB0C15.1080305@timgolden.me.uk>
References: <4FEB0C15.1080305@timgolden.me.uk>
Message-ID: <4FEB565D.2020207@timgolden.me.uk>

I can confirm that there is a race condition between the code in
myreadline.c and the signal_handler. I have a patch in readiness which
basically loops until the signal has been tripped.

But what I don't know is: what to do if the signal *still* doesn't trip
(after 100 millisecond-retries)? At present the code just drops through
(with a comment warning), which is why we're seeing the interpreter
exit. What should happen, though? Raise SystemError?
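(A Python-level sketch of the idea - the real fix lives in C around the
blocking read in Parser/myreadline.c, so this is only an illustration,
and the retry count and delay are made up:)

    import time

    def wait_for_trip(tripped, retries=100, delay=0.001):
        # After the console read is interrupted, poll until the signal
        # handler has actually run, instead of assuming it already has.
        for _ in range(retries):
            if tripped():       # e.g. "has the handler set its flag?"
                return True
            time.sleep(delay)
        return False            # the open question: raise SystemError?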
From yselivanov.ml at gmail.com  Wed Jun 27 22:08:38 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Wed, 27 Jun 2012 16:08:38 -0400
Subject: [Python-Dev] PEP 423 : naming conventions and recipes related to
	packaging
In-Reply-To:
References: <4FEACD9D.8090208@marmelune.net>
	<20120627125055.196efe9f@pitrou.net>
	<20120627133453.1766ea15@pitrou.net>
Message-ID:

On 2012-06-27, at 10:57 AM, Paul Moore wrote:
> Generally, the impression I get is that the PEP is recommending more
> levels of nesting than I would agree with, but it's hard to be sure,
> because the concept of nesting feels a bit overloaded. The key for me
> is that generally, I like to be able to type "import X" where X is not
> a dotted name, and then refer to X.x1, X.x2, etc. I'd call that no
> levels of nesting, to be honest. For complex stuff, subpackages
> ("import X.Y") might be needed, but that's rare (and even then, key
> names should be exposed directly from X).

Why, instead of writing 'import project', do you not want to write
'from acme import project'?

With python adoption (enterprise too) growing, we will inevitably
find out that one single namespace (PyPI) is not enough, and
name collisions will become a frequent headache.

- Yury

From pje at telecommunity.com  Wed Jun 27 22:17:33 2012
From: pje at telecommunity.com (PJ Eby)
Date: Wed, 27 Jun 2012 16:17:33 -0400
Subject: [Python-Dev] PEP 423 : naming conventions and recipes related to
	packaging
In-Reply-To:
References: <4FEACD9D.8090208@marmelune.net>
	<20120627125055.196efe9f@pitrou.net>
	<20120627133453.1766ea15@pitrou.net>
Message-ID:

On Wed, Jun 27, 2012 at 10:57 AM, Paul Moore wrote:

> For complex stuff, subpackages
> ("import X.Y") might be needed, but that's rare (and even then, key
> names should be exposed directly from X).
>
> Paul.
>
> PS Having said all this, I don't maintain any code on PyPI - I'm a
> user not a producer. That may affect my perspective...

That, and if you don't work with web stuff or networking stuff. Things
having lots of subpackages are quite the rule there.

Also, functional naming for top-level modules is actually an
anti-pattern: an invitation to naming conflicts, especially with future
stdlib contents. Suppose two people want to write an "email" package?
Unless you jam the ownership into the name (e.g. joes_email and
bobs_email), what are you supposed to do?

This is why we have popular packages with names like nose and celery
and django and pyramid and lamson: because unique memorable names >
functionally descriptive names.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From alexandre at peadrop.com  Wed Jun 27 23:12:48 2012
From: alexandre at peadrop.com (Alexandre Vassalotti)
Date: Wed, 27 Jun 2012 14:12:48 -0700
Subject: [Python-Dev] On a new version of pickle [PEP 3154]:
	self-referential frozensets
In-Reply-To: <4FE59819.8030304@gmail.com>
References: <4FE59819.8030304@gmail.com>
Message-ID:

On Sat, Jun 23, 2012 at 3:19 AM, M Stefan wrote:

> * UNION_FROZENSET: like UPDATE_SET, but create a new frozenset
> stack before: ... pyfrozenset mark stackslice
> stack after : ... pyfrozenset.union(stackslice)

Since frozensets are immutable, could you explain how adding the
UNION_FROZENSET opcode helps in pickling self-referential frozensets? Or
are you only adding this one to follow the current style used for
pickling dicts and lists in protocols 1 and onward?

> While this design allows pickling of self-referential sets,
> self-referential frozensets are still problematic. For instance,
> trying to pickle `fs':
> a=A(); fs=frozenset([a]); a.fs = fs
> (when unpickling, the object a has to be initialized before it is added to
> the frozenset)
>
> The only way I can think of to make this work is to postpone
> the initialization of all the objects inside the frozenset until after
> UNION_FROZENSET.
> I believe this is doable, but there might be memory penalties if the
> approach is to simply store all the initialization opcodes in memory
> until pickling the frozenset is finished.

I don't think that's the only way. You could also emit a POP opcode to
discard the frozenset from the stack and then emit a GET to fetch it back
from the memo. This is how we currently handle self-referential tuples.
Check out the save_tuple method in pickle.py to see how it is done.
Personally, I would prefer that approach because it is already
well-tested and proven to work.

That said, your approach sounds good too. The memory trade-off could lead
to smaller pickles and more efficient decoding (though these
self-referential objects are rare enough that I don't think that any
improvements there would matter much).

> While self-referential frozensets are uncommon, a far more problematic
> situation is with the self-referential objects created with REDUCE. While
> pickle uses the idea of creating empty collections and then filling them,
> reduce typically creates already-filled objects. For instance:
> cnt = collections.Counter(); cnt[a]=3; a.cnt=cnt; cnt.__reduce__()
> (<class 'collections.Counter'>, ({<__main__.A object at 0x0286E8F8>: 3},))
> where the A object contains a reference to the counter. Unpickling an
> object pickled with this reduce function is not possible, because the
> reduce function, which "explains" how to create the object, is asking
> for the object to exist before being created.

Your example seems to work on Python 3. I am not sure if I follow what you
are trying to say. Can you provide a working example?

$ python3
Python 3.1.2 (r312:79147, Dec  9 2011, 20:47:34)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pickle, collections
>>> c = collections.Counter()
>>> class A: pass
...
>>> a = A()
>>> c[a] = 3
>>> a.cnt = c
>>> b = pickle.loads(pickle.dumps(a))
>>> b in b.cnt
True

> Pickle could try to fix this by detecting when reduce returns a class type
> as the first tuple arg and move the dict ctor parameter to the state, but
> this may not always be intended. It's also a bit strange that __getstate__
> is never used anywhere in pickle directly.

I would advise against any such change. The reduce protocol is already
fairly complex.
Further I don't think change it this way would give us any extra flexibility. The documentation has a good explanation of how __getstate__ works under hood: http://docs.python.org/py3k/library/pickle.html#pickling-class-instances And if you need more, PEP 307 (http://www.python.org/dev/peps/pep-0307/) provides some of the design rationales of the API. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Wed Jun 27 23:31:31 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 28 Jun 2012 07:31:31 +1000 Subject: [Python-Dev] os.path.exists() / os.path.isdir() inconsistency when dealing with gvfs directories In-Reply-To: References: <20120627000238.GA22009@cskk.homeip.net> Message-ID: If someone wants to see the error details, they should use os.stat directly rather than an existence check. -- Sent from my phone, thus the relative brevity :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjreedy at udel.edu Wed Jun 27 22:34:13 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 27 Jun 2012 16:34:13 -0400 Subject: [Python-Dev] [Python-checkins] cpython: Changed importlib tests to use assertIs, assertIsInstance, etc., instead of In-Reply-To: References: Message-ID: <4FEB6E45.9070602@udel.edu> On 6/27/2012 3:26 PM, eric.smith wrote: > http://hg.python.org/cpython/rev/9623c83ba489 > changeset: 77825:9623c83ba489 > user: Eric V. Smith > date: Wed Jun 27 15:26:26 2012 -0400 > summary: > Changed importlib tests to use assertIs, assertIsInstance, etc., instead of just assertTrue. You forgot assertIsNone ;-) or was that intentional? > - self.assertTrue(loader is None) > + self.assertIs(loader, None) self.assertIsNone(loader) tjr From eric at trueblade.com Thu Jun 28 03:35:14 2012 From: eric at trueblade.com (Eric V. Smith) Date: Wed, 27 Jun 2012 21:35:14 -0400 Subject: [Python-Dev] [Python-checkins] cpython: Changed importlib tests to use assertIs, assertIsInstance, etc., instead of In-Reply-To: <4FEB6E45.9070602@udel.edu> References: <4FEB6E45.9070602@udel.edu> Message-ID: <4FEBB4D2.3070902@trueblade.com> On 6/27/2012 4:34 PM, Terry Reedy wrote: > On 6/27/2012 3:26 PM, eric.smith wrote: >> http://hg.python.org/cpython/rev/9623c83ba489 >> changeset: 77825:9623c83ba489 >> user: Eric V. Smith >> date: Wed Jun 27 15:26:26 2012 -0400 >> summary: >> Changed importlib tests to use assertIs, assertIsInstance, etc., >> instead of just assertTrue. > > You forgot assertIsNone ;-) > or was that intentional? Darn it! I can never remember which ones exist. I was hoping for an assertHasAttr, but no such luck. I'll fix it. From martin at v.loewis.de Thu Jun 28 09:13:54 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 28 Jun 2012 09:13:54 +0200 Subject: [Python-Dev] Buildbot master moved Message-ID: <4FEC0432.3040507@v.loewis.de> I have now moved the buildbot master to OSU/OSL, and upgraded the buildbot version in the process. If there are any issues, let me know or Antoine. Slaves should (and apparently do) reconnect if they were changed to use "buildbot.python.org" as the master. Some probably weren't; we'll figure this out. 
Regards, Martin From solipsis at pitrou.net Thu Jun 28 12:06:40 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 28 Jun 2012 12:06:40 +0200 Subject: [Python-Dev] [Infrastructure] Buildbot master moved In-Reply-To: <4FEC0432.3040507@v.loewis.de> References: <4FEC0432.3040507@v.loewis.de> Message-ID: <1340878000.3396.4.camel@localhost.localdomain> Le jeudi 28 juin 2012 ? 09:13 +0200, "Martin v. L?wis" a ?crit : > I have now moved the buildbot master to OSU/OSL, and upgraded the > buildbot version in the process. If there are any issues, let me > know or Antoine. It seems we lost the "force build" button. Judging from the templates, the form is here, it just isn't displayed. I also don't see any custom builders: http://buildbot.python.org/all/waterfall?category=custom.stable&category=custom.unstable Regards Antoine. From solipsis at pitrou.net Thu Jun 28 12:11:32 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 28 Jun 2012 12:11:32 +0200 Subject: [Python-Dev] [Infrastructure] Buildbot master moved In-Reply-To: <1340878000.3396.4.camel@localhost.localdomain> References: <4FEC0432.3040507@v.loewis.de> <1340878000.3396.4.camel@localhost.localdomain> Message-ID: <1340878292.3396.5.camel@localhost.localdomain> Le jeudi 28 juin 2012 ? 12:06 +0200, Antoine Pitrou a ?crit : > Le jeudi 28 juin 2012 ? 09:13 +0200, "Martin v. L?wis" a ?crit : > > I have now moved the buildbot master to OSU/OSL, and upgraded the > > buildbot version in the process. If there are any issues, let me > > know or Antoine. > > It seems we lost the "force build" button. Judging from the templates, > the form is here, it just isn't displayed. > > I also don't see any custom builders: > http://buildbot.python.org/all/waterfall?category=custom.stable&category=custom.unstable Ok, it seems we should migrate to the ForceScheduler API: http://buildbot.net/buildbot/docs/0.8.6/manual/cfg-schedulers.html#forcescheduler-scheduler From benoit at marmelune.net Thu Jun 28 12:36:21 2012 From: benoit at marmelune.net (=?ISO-8859-1?Q?Beno=EEt_Bryon?=) Date: Thu, 28 Jun 2012 12:36:21 +0200 Subject: [Python-Dev] PEP 423 : naming conventions and recipes related to packaging In-Reply-To: <20120627133453.1766ea15@pitrou.net> References: <4FEACD9D.8090208@marmelune.net> <20120627125055.196efe9f@pitrou.net> <20120627133453.1766ea15@pitrou.net> Message-ID: <4FEC33A5.7080108@marmelune.net> Le 27/06/2012 13:34, Antoine Pitrou a ?crit : > Similarly, I think the section about private projects ("Private > (including closed-source) projects use a namespace") should be removed. > It is not our duty to promote naming standards for private (i.e. > internal) projects. The intention in the proposed PEP is to promote standards for general Python usage, which implicitely includes both public and private use. The proposed PEP tries to explain how the conventions apply in most use cases. Public and private scopes are mentioned explicitely because they were identified as valuable use cases. Here are some reasons why the "private code" use case has been identified as valuable: * New Python developers (or more accurately new distribution authors) may wonder "What makes a good name?", even if they are working in a private area. Guidelines covering private code would be welcome. * At work, I already had discussions about naming patterns for closed source projects. These discussions consumed some energy made the team focus on some "less valuable" topics. We searched for an official convention and didn't find one. 
We made choices but none of us was really satisfied about it. An external arbitration from a trusted authority would have been welcome, even if we were making closed-source software. * As Paul said, "personal code often ends up more widely used than originally envisaged". So following the convention from the start may help. * Here, the PEP already covers (or tries to) most public code use cases. It's quite easy to extend it to private code. I feel drawbacks are negligible compared to potential benefits. .. note:: IMHO, main drawback is "read this long document". * Isn't it obvious that, at last, people do what they want to in private code? In fact, they also do in public code. I mean the document is an informational PEP. It recommends to apply conventions but the actual choice is left to developers. That said, would the changes below improve the document? * Keep the parts about private and closed-source code, but add a note to insist on "in private code, you obviously do what you want to" and "be aware that personal code often ends up more widely used than originally envisaged". * At the beginning of the document, add a section like http://www.python.org/dev/peps/pep-0008/#a-foolish-consistency-is-the-hobgoblin-of-little-minds Another option would have been to deal with "general Python code" and don't mention "public" and "private" areas, i.e. implicitely cover both. I haven't followed this way because it is implicit. > Also, I don't see what's so important about using > your company's name as a top-level namespace. You don't need it for > conflict avoidance: you can just as well use distinctive project names. Using company's name as top-level namespace has been proven a good practice: * avoids clashes and confusion with public projects, i.e. don't create a "cms" private project because there could be a "cms" project on PyPI. * makes it easy to release the code as open-source: don't change the project name. * if there is no reason at all for the project to contain the company name (i.e. the project is not specific to the company), why not realeasing it as open source? (with a one-level name) Using company's name is not the only option. But, as far as I know, it fits most use cases, which is enough (and required) for a convention. Another option is to use any arbitrary name as top-level namespace. You can. If an arbitrary name seems obvious to you, feel free to use it. But, in most cases, company's name is an obvious choice. So, would you appreciate a change so that: * company name is recommended as a best practice. * but any arbitrary name can be used. Could be something in: 1. "For private projects, use company name (or any unique arbitrary name) as top-level namespace". 2. "For private projects, use any arbitrary name (company name is generally a good choice) as top-level namespace". 3. "For private projects, use a top-level namespace (company name is generally a good choice, but you can use any unique arbitrary name)." Benoit From benoit at marmelune.net Thu Jun 28 12:35:24 2012 From: benoit at marmelune.net (=?ISO-8859-1?Q?Beno=EEt_Bryon?=) Date: Thu, 28 Jun 2012 12:35:24 +0200 Subject: [Python-Dev] PEP 423 : naming conventions and recipes related to packaging In-Reply-To: <20120627125055.196efe9f@pitrou.net> References: <4FEACD9D.8090208@marmelune.net> <20120627125055.196efe9f@pitrou.net> Message-ID: <4FEC336C.4060809@marmelune.net> Le 27/06/2012 12:50, Antoine Pitrou a ?crit : > There is one Zen principle this PEP is missing: > Flat is better than nested. 
> > This PEP seems to promote the practice of having a top-level namespace > denote ownership. I think it should do the reverse: promote > meaningful top-level packages (e.g. "sphinx") as standard practice, and > allow an exception for when a piece of software is part of a larger > organizational body. The PEP's intention is to tell what Nick described. Maybe the words are not clear... Notes about the order of items in the overview: 1. respect names registered on PyPI. This is a requirement. 2. adopt specific conventions if any, i.e. don't break the rules that already exist in existing projects. This convention makes the PEP backward compatible. It is the first rule because, for project-related contributions, the first action should be to read the documentation of the main project. 3. important thing is to avoid name collisions. This is a requirement for project names, and a strong recommendation for package/module names. 4. use a single name. This rule simplifies the action of choosing a name and brings consistency. If you already know how your package/module is imported in code, you can simply use it as project name. 5. then, we'd better make it easy to find and remember the project. This is a strong advice. 6. then, all the points above being considered, we'd better use flat names. 7. rules about syntax (PEP 8). 8. state about specific conventions, if any. This is the complement of point 2. 9. ask. Would you appreciate a reordering? Could be something like that: 1. adopt specific conventions if any. 2. avoid name collisions. 3. you'd better use flat names. 4. you'd better make it easy to find and remember the project. 5. you'd better use a single name and distribute only one package at a time 6. follow rules about syntax (PEP 8) 7. once you are done, state about specific choices, if any. This is the complement of point 1. 8. if in doubt, ask. > (i.e., "Community-owned projects can avoid namespace packages" should > be the first item in the PEP and renamed so that it appears common rule) I guess we could move "shared projects can use a one-level name" before guidelines related to private projects. Benoit From ncoghlan at gmail.com Thu Jun 28 12:58:27 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 28 Jun 2012 20:58:27 +1000 Subject: [Python-Dev] Creating release blockers for Python 3.4 Message-ID: Just a heads up (primarily for Georg and Larry): I've started marking some issues as release blockers for 3.4 (currently just the bytes-bytes/str-str transform() API and the set_encoding() method discussed recently on python-ideas). These are important gaps identified in the new Unicode handling model, and we should make sure to review and discuss them well in advance of 3.4 being released. Those are the only two I've identified so far where that course of action has seemed appropriate - most of the time for this kind of thing, I'd just assign the issue to myself to take a look at after the feature freeze is over or else bump the priority up to high. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? 
Brisbane, Australia From p.f.moore at gmail.com Thu Jun 28 12:58:30 2012 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 28 Jun 2012 11:58:30 +0100 Subject: [Python-Dev] PEP 423 : naming conventions and recipes related to packaging In-Reply-To: <4FEC33A5.7080108@marmelune.net> References: <4FEACD9D.8090208@marmelune.net> <20120627125055.196efe9f@pitrou.net> <20120627133453.1766ea15@pitrou.net> <4FEC33A5.7080108@marmelune.net> Message-ID: On 28 June 2012 11:36, Beno?t Bryon wrote: >> Also, I don't see what's so important about using >> your company's name as a top-level namespace. You don't need it for >> conflict avoidance: you can just as well use distinctive project names. > > Using company's name as top-level namespace has been proven a > good practice: Not to me. This is what Java does, and whenever I have encountered it, I have found it a major pain. As an individual developer, I have no company name. The "use your domain" option doesn't help either, as I have 3 registered domains to my name, none of which I use consistently enough to want to use as the definitive domain to identify "my" code forever. What if I abandon a project, and someone else picks it up? Do they need to change the name? I have lots of little projects. Do they all have to sit under a single namespace package "paul"? That's a maintenance problem, as there's no standard namespace package facility prior to 3.3 (I don't use setuptools, in general). The concept of using a company/domain/personal name as the top level raises far more questions than it answers for me... Paul. From solipsis at pitrou.net Thu Jun 28 13:07:02 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 28 Jun 2012 13:07:02 +0200 Subject: [Python-Dev] cpython (2.7): #9559: Append data to single-file mailbox files if messages are only added References: Message-ID: <20120628130702.124ac6d3@pitrou.net> On Thu, 28 Jun 2012 12:59:02 +0200 petri.lehtinen wrote: > http://hg.python.org/cpython/rev/c37cb11b546f > changeset: 77832:c37cb11b546f > branch: 2.7 > parent: 77823:73710ae9fedc > user: Petri Lehtinen > date: Thu Jun 28 13:48:17 2012 +0300 > summary: > #9559: Append data to single-file mailbox files if messages are only added > > If messages were only added, a new file is no longer created and > renamed over the old file when flush() is called on an mbox, MMDF or > Babyl mailbox. Why so? Appending is not atomic and, if it fails in the middle, you could get a corrupt mbox file. Furthermore, I disagree that it's a bugfix: IMO it should wait for 3.4. Regards Antoine. From hs at ox.cx Thu Jun 28 13:09:56 2012 From: hs at ox.cx (Hynek Schlawack) Date: Thu, 28 Jun 2012 13:09:56 +0200 Subject: [Python-Dev] Signed packages In-Reply-To: <20120623140310.Horde.G7dnY8L8999P5bB_K3XH9vA@webmail.df.eu> References: <4FE0F336.7030709@netwok.org> <8359F922B4E045BD819F7811725306D6@gmail.com> <4B990DDC346C4C89949FC1DDF29B3D80@gmail.com> <4FE43946.2050505@astro.uio.no> <4FE448CE.7040909@astro.uio.no> <20120622161910.5baf0584@pitrou.net> <20120622172443.Horde.096CMqGZi1VP5I477AO0JhA@webmail.df.eu> <2D51471ABFFE4F0585166A44AAE8CCC7@gmail.com> <20120623140310.Horde.G7dnY8L8999P5bB_K3XH9vA@webmail.df.eu> Message-ID: <4FEC3B84.10804@ox.cx> Am 23.06.12 14:03, schrieb martin at v.loewis.de: >> I'm surprised gpg hasn't been mentioned here. I think these are all >> solved problems, most free software that is signed signs it with the >> gpg key of the author. In that case all that is needed is that the >> cheeseshop allows the uploading of the signature. 
> For the record, the cheeseshop has been supporting pgp signatures > for about ten years now. Several projects have been using that for > quite a while in their releases. Also for the record, it?s broken as of Python 3.2. See http://bugs.python.org/issue10571 From solipsis at pitrou.net Thu Jun 28 13:11:32 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 28 Jun 2012 13:11:32 +0200 Subject: [Python-Dev] PEP 423 : naming conventions and recipes related to packaging References: <4FEACD9D.8090208@marmelune.net> <20120627125055.196efe9f@pitrou.net> <4FEC336C.4060809@marmelune.net> Message-ID: <20120628131132.621599ee@pitrou.net> On Thu, 28 Jun 2012 12:35:24 +0200 Beno?t Bryon wrote: > > Would you appreciate a reordering? > Could be something like that: > > 1. adopt specific conventions if any. > > 2. avoid name collisions. > > 3. you'd better use flat names. > > 4. you'd better make it easy to find and remember the project. > > 5. you'd better use a single name and distribute only one > package at a time > > 6. follow rules about syntax (PEP 8) > > 7. once you are done, state about specific choices, if any. > This is the complement of point 1. > > 8. if in doubt, ask. Yes, that would be better. I think the PEP also needs to spell out the rationale better. Also, reminding the distinction between modules, packages and namespace packages could be useful to the reader, especially if it's supposed to end up in the documentation. Regards Antoine. From benoit at marmelune.net Thu Jun 28 12:53:47 2012 From: benoit at marmelune.net (=?ISO-8859-1?Q?Beno=EEt_Bryon?=) Date: Thu, 28 Jun 2012 12:53:47 +0200 Subject: [Python-Dev] PEP 423 : naming conventions and recipes related to packaging In-Reply-To: References: <4FEACD9D.8090208@marmelune.net> <20120627125055.196efe9f@pitrou.net> <20120627133453.1766ea15@pitrou.net> Message-ID: <4FEC37BB.40406@marmelune.net> Le 27/06/2012 22:08, Yury Selivanov a ?crit : > With python adoption (enterprise too) growing, we will inevitably > find out that one single namespace (PyPI) is not enough, and > name collisions will become a frequent headache. An argument for top-level namespace is related to PyPI as a central place to publish Python code VS code released in popular code repositories: * many developers don't register projects with PyPI, even if open source. As an example, many projects are hosted on code repositories, such as Github.com, and not on PyPI. * one reason (not the only one) is that, as an individual, publishing some "proof of concept" code at PyPI scares me: * it is very personal, at least at the beginning. Not sure it is interesting. * what if the project gets abandoned? It will remain on PyPI and block a name slot. * on Github, people work in a user space. All projects are managed under the user account. Groups and companies can use organization accounts. This scheme seems popular and comfortable. * on Github, people can fork projects. Project names are not unique, but "user/organization+project name" is unique. It seems to work well. * sometimes, forks become more popular than original projects. Sometimes original projects are abandoned and several forks are active. * Notice that distinct projects (i.e. not forks) can have the same name, provided they are owned by distinct users. * Also notice that there is no deep nesting. There are only two levels: one for the user or organization, one for the project. 
* if we consider PyPI as the unique reference and central place to check
  for (public) name availability, then shouldn't we promote registration
  with PyPI?

* there are other reasons why authors should register with PyPI. As an
  example, the ability to ``pip install project`` without using
  complicated pip options.

* if many projects on Github are published on PyPI, then what would
  happen? I bet that, without adequate naming conventions, there will
  be many name collisions.

* so promoting top-level namespaces (including individual ones) can help.

* a risk is that it also becomes difficult to find a project within
  PyPI. But having lots of projects in PyPI is not the problem. The
  problem is more or less related to the search. Meaningful names,
  memorable names and packaging metadata are important for that purpose.
  And if necessary, we will be able to improve the PyPI search engine or
  list/browse views.

Benoit

From martin at v.loewis.de  Thu Jun 28 13:56:30 2012
From: martin at v.loewis.de (martin at v.loewis.de)
Date: Thu, 28 Jun 2012 13:56:30 +0200
Subject: [Python-Dev] Signed packages
In-Reply-To: <4FEC3B84.10804@ox.cx>
References: <4FE0F336.7030709@netwok.org>
	<8359F922B4E045BD819F7811725306D6@gmail.com>
	<4B990DDC346C4C89949FC1DDF29B3D80@gmail.com>
	<4FE43946.2050505@astro.uio.no> <4FE448CE.7040909@astro.uio.no>
	<20120622161910.5baf0584@pitrou.net>
	<20120622172443.Horde.096CMqGZi1VP5I477AO0JhA@webmail.df.eu>
	<2D51471ABFFE4F0585166A44AAE8CCC7@gmail.com>
	<20120623140310.Horde.G7dnY8L8999P5bB_K3XH9vA@webmail.df.eu>
	<4FEC3B84.10804@ox.cx>
Message-ID: <20120628135630.Horde.7kqgVcL8999P7EZuqVBQazA@webmail.df.eu>

Zitat von Hynek Schlawack :

> Am 23.06.12 14:03, schrieb martin at v.loewis.de:
>
>>> I'm surprised gpg hasn't been mentioned here. I think these are all
>>> solved problems, most free software that is signed signs it with the
>>> gpg key of the author. In that case all that is needed is that the
>>> cheeseshop allows the uploading of the signature.
>> For the record, the cheeseshop has been supporting pgp signatures
>> for about ten years now. Several projects have been using that for
>> quite a while in their releases.
>
> Also for the record, it's broken as of Python 3.2. See
> http://bugs.python.org/issue10571

That's different, though: PyPI continues to support it just fine. It's
only distutils which has it broken. If you manually run gpg, and
manually upload through the web interface, it still works.

Regards,
Martin

From chrism at plope.com  Thu Jun 28 14:14:07 2012
From: chrism at plope.com (Chris McDonough)
Date: Thu, 28 Jun 2012 08:14:07 -0400
Subject: [Python-Dev] PEP 423 : naming conventions and recipes related to
	packaging
In-Reply-To: <4FEC33A5.7080108@marmelune.net>
References: <4FEACD9D.8090208@marmelune.net>
	<20120627125055.196efe9f@pitrou.net>
	<20120627133453.1766ea15@pitrou.net> <4FEC33A5.7080108@marmelune.net>
Message-ID: <4FEC4A8F.1010207@plope.com>

On 06/28/2012 06:36 AM, Benoît Bryon wrote:
> Le 27/06/2012 13:34, Antoine Pitrou a écrit :
>> Similarly, I think the section about private projects ("Private
>> (including closed-source) projects use a namespace") should be removed.
>> It is not our duty to promote naming standards for private (i.e.
>> internal) projects.
> The intention in the proposed PEP is to promote standards
> for general Python usage, which implicitly includes both public
> and private use.
>
> The proposed PEP tries to explain how the conventions
> apply in most use cases. Public and private scopes are mentioned
> explicitly because they were identified as valuable use cases.
>
> Here are some reasons why the "private code" use case has
> been identified as valuable:
>
> * New Python developers (or more accurately new distribution
>   authors) may wonder "What makes a good name?", even if they are
>   working in a private area. Guidelines covering private code would
>   be welcome.
>
> * At work, I already had discussions about naming patterns for
>   closed source projects. These discussions consumed some energy and
>   made the team focus on some "less valuable" topics. We searched
>   for an official convention and didn't find one. We made choices
>   but none of us was really satisfied with it. An external
>   arbitration from a trusted authority would have been welcome,
>   even if we were making closed-source software.
>
> * As Paul said, "personal code often ends up more widely used than
>   originally envisaged". So following the convention from the start
>   may help.
>
> * Here, the PEP already covers (or tries to) most public code use
>   cases. It's quite easy to extend it to private code. I feel
>   drawbacks are negligible compared to potential benefits.
>
>   .. note:: IMHO, the main drawback is "read this long document".
>
> * Isn't it obvious that, in the end, people do what they want to in
>   private code? In fact, they also do in public code. I mean the
>   document is an informational PEP. It recommends applying conventions,
>   but the actual choice is left to developers.
>
> That said, would the changes below improve the document?
>
> * Keep the parts about private and closed-source code, but add a
>   note to insist on "in private code, you obviously do what you want
>   to" and "be aware that personal code often ends up more widely used
>   than originally envisaged".
>
> * At the beginning of the document, add a section like
>   http://www.python.org/dev/peps/pep-0008/#a-foolish-consistency-is-the-hobgoblin-of-little-minds
>
> Another option would have been to deal with "general Python code"
> and not mention "public" and "private" areas, i.e. implicitly
> cover both. I didn't go that way because it is implicit.
>
>> Also, I don't see what's so important about using
>> your company's name as a top-level namespace. You don't need it for
>> conflict avoidance: you can just as well use distinctive project names.
>
> Using the company's name as a top-level namespace has proven to be a
> good practice:
>
> * avoids clashes and confusion with public projects, i.e. don't
>   create a "cms" private project because there could be a "cms"
>   project on PyPI.
>
> * makes it easy to release the code as open-source: don't change
>   the project name.
>
> * if there is no reason at all for the project to contain the
>   company name (i.e. the project is not specific to the company),
>   why not release it as open source? (with a one-level name)
>
> Using the company's name is not the only option. But, as far as I
> know, it fits most use cases, which is enough (and required) for a
> convention.
>
> Another option is to use any arbitrary name as top-level namespace.
> You can. If an arbitrary name seems obvious to you, feel free to
> use it. But, in most cases, the company's name is an obvious choice.
>
> So, would you appreciate a change so that:
>
> * company name is recommended as a best practice.
> * but any arbitrary name can be used.
>
> Could be something in:
>
> 1. "For private projects, use the company name (or any unique
>    arbitrary name) as top-level namespace".
>
> 2. "For private projects, use any arbitrary name (company name is
>    generally a good choice) as top-level namespace".
>
> 3. "For private projects, use a top-level namespace (company name is
>    generally a good choice, but you can use any unique arbitrary
>    name)."

It's probably always a reasonable idea to use a brand-prefixed namespace
for *private* packages, but in my experience it's almost always a bad
idea to publish any set of otherwise-unrelated packages that share a
branded namespace prefix to PyPI.

I know this because I've been involved with it at least twice with
"zope." and "repoze." brands/namespaces. The meaning of both of those
namespaces has become soft over time and both now mean basically "this
code was created by a group of people" instead of "this code is useful
under a circumstance or for a purpose". Those namespaces are both the
moral equivalent of a "garbage barge" class in development: code related
to the namespace might do anything ("zope" now means a company and two
completely different application servers; "repoze" never really meant
anything, it was always a pure brand).

People typically look for code on PyPI that solves a problem, and
branding in namespacing there is usually confusing. E.g. there are many
highly-general useful things in both the zope. and repoze. namespace
packages, but people are skittish about the meaning of the namespaces
and tend to look for a "more generic" solution. They are often right to
do so.

Putting a package in a "garbage barge" namespace does make it easy to
avoid conflicts when publishing to PyPI, but it also makes it easy to
avoid doing other release management tasks like creating good docs and
making sure your package doesn't depend inappropriately on other
unrelated same-namespace packages. And even if a particular distribution
from a namespace has great docs and great release management practices,
it's often the case that the distribution is ignored by potential
consumers because it's just too hard to wade through the meaning of the
branding.

So I'd suggest that if the namespace represents a brand, the brand
should be related to a concrete bit of software (e.g. django, pyramid)
rather than a project or a company, to avoid the fate of the
above-mentioned namespaces. At least for *public* releases of software;
for private ones it matters a lot less.

- C

From solipsis at pitrou.net  Thu Jun 28 15:02:53 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Thu, 28 Jun 2012 15:02:53 +0200
Subject: [Python-Dev] [Infrastructure] Buildbot master moved
In-Reply-To: <1340878292.3396.5.camel@localhost.localdomain>
References: <4FEC0432.3040507@v.loewis.de>
	<1340878000.3396.4.camel@localhost.localdomain>
	<1340878292.3396.5.camel@localhost.localdomain>
Message-ID: <1340888573.3396.6.camel@localhost.localdomain>

Le jeudi 28 juin 2012 à 12:11 +0200, Antoine Pitrou a écrit :
> Le jeudi 28 juin 2012 à 12:06 +0200, Antoine Pitrou a écrit :
> > Le jeudi 28 juin 2012 à 09:13 +0200, "Martin v. Löwis" a écrit :
> > > I have now moved the buildbot master to OSU/OSL, and upgraded the
> > > buildbot version in the process. If there are any issues, let me
> > > or Antoine know.
> >
> > It seems we lost the "force build" button. Judging from the templates,
> > the form is here, it just isn't displayed.
> >
> > I also don't see any custom builders:
> > http://buildbot.python.org/all/waterfall?category=custom.stable&category=custom.unstable

Ok, both are fixed.

Note: there are uncommitted changes in the local git repo, I left them
uncommitted.

Regards

Antoine.
From rdmurray at bitdance.com  Thu Jun 28 15:47:23 2012
From: rdmurray at bitdance.com (R. David Murray)
Date: Thu, 28 Jun 2012 09:47:23 -0400
Subject: [Python-Dev] PEP 423 : naming conventions and recipes related to
	packaging
In-Reply-To: <4FEC4A8F.1010207@plope.com>
References: <4FEACD9D.8090208@marmelune.net>
	<20120627125055.196efe9f@pitrou.net>
	<20120627133453.1766ea15@pitrou.net> <4FEC33A5.7080108@marmelune.net>
	<4FEC4A8F.1010207@plope.com>
Message-ID: <20120628134724.9F58C25050E@webabinitio.net>

On Thu, 28 Jun 2012 08:14:07 -0400, Chris McDonough wrote:
> It's probably always a reasonable idea to use a brand-prefixed namespace
> for *private* packages, but in my experience it's almost always a bad
> idea to publish any set of otherwise-unrelated packages that share a
> branded namespace prefix to PyPI.
>
> I know this because I've been involved with it at least twice with
> "zope." and "repoze." brands/namespaces. The meaning of both of those
> namespaces has become soft over time and both now mean basically "this
> code was created by a group of people" instead of "this code is useful
> under a circumstance or for a purpose". Those namespaces are both the
> moral equivalent of a "garbage barge" class in development: code related
> to the namespace might do anything ("zope" now means a company and two
> completely different application servers; "repoze" never really meant
> anything, it was always a pure brand).
>
> People typically look for code on PyPI that solves a problem, and
> branding in namespacing there is usually confusing. E.g. there are many
> highly-general useful things in both the zope. and repoze. namespace
> packages, but people are skittish about the meaning of the namespaces
> and tend to look for a "more generic" solution. They are often right to
> do so.

Looking at Zope mostly from the outside (I was involved with zope
development during an early stage of zope 3, but I haven't been involved
with it or used it for years), this matches my perception as well. The
zope namespace made some sense early on, but as the project got
refactored into more cleanly separated pieces, it ended up just getting
in the way of wider adoption of the most useful pieces.

For what it is worth, notice that perl does not use organization names,
it uses functional names. Which languages other than Java use
organizational names?

--David

From rdmurray at bitdance.com  Thu Jun 28 15:49:42 2012
From: rdmurray at bitdance.com (R. David Murray)
Date: Thu, 28 Jun 2012 09:49:42 -0400
Subject: [Python-Dev] PEP 423 : naming conventions and recipes related to
	packaging
In-Reply-To: <4FEC37BB.40406@marmelune.net>
References: <4FEACD9D.8090208@marmelune.net>
	<20120627125055.196efe9f@pitrou.net>
	<20120627133453.1766ea15@pitrou.net> <4FEC37BB.40406@marmelune.net>
Message-ID: <20120628134943.06F7C2502CF@webabinitio.net>

On Thu, 28 Jun 2012 12:53:47 +0200, =?ISO-8859-1?Q?Beno=EEt_Bryon?= wrote:
> Le 27/06/2012 22:08, Yury Selivanov a écrit :
> > With python adoption (enterprise too) growing, we will inevitably
> > find out that one single namespace (PyPI) is not enough, and
> > name collisions will become a frequent headache.
>
> * on Github, people work in a user space. All projects are
>   managed under the user account. Groups and companies
>   can use organization accounts. This scheme seems popular
>   and comfortable.
>
> * on Github, people can fork projects. Project names are
>   not unique, but "user/organization+project name" is
>   unique. It seems to work well.
>
> * sometimes, forks become more popular than original
>   projects. Sometimes original projects are abandoned
>   and several forks are active.
>
> * Notice that distinct projects (i.e. not forks) can
>   have the same name, provided they are owned by distinct
>   users.

That is completely irrelevant. The top level name in the github case
isolates the forks only. It has nothing to do with the organization of
the *software*, only the *forks*. Within the fork, the software itself
retains the same name...that's the whole point.

--David

From petri at digip.org  Thu Jun 28 15:16:45 2012
From: petri at digip.org (Petri Lehtinen)
Date: Thu, 28 Jun 2012 16:16:45 +0300
Subject: [Python-Dev] cpython (2.7): #9559: Append data to single-file
	mailbox files if messages are only added
In-Reply-To: <20120628130702.124ac6d3@pitrou.net>
References: <20120628130702.124ac6d3@pitrou.net>
Message-ID: <20120628131645.GP3455@p16.foo.com>

Antoine Pitrou wrote:
> > If messages were only added, a new file is no longer created and
> > renamed over the old file when flush() is called on an mbox, MMDF or
> > Babyl mailbox.
>
> Why so? Appending is not atomic and, if it fails in the middle, you
> could get a corrupt mbox file.
> Furthermore, I disagree that it's a bugfix: IMO it should wait for 3.4.

The code previously already appended messages to the end of the file
when calling add(). This patch just changed it to not do a full
rewrite when flush() is called. Having a partially written message in
the end of your mailbox doesn't seem like a fatal corruption to me.

Furthermore, I (and R. David Murray) think this is not so surprising
for users. Most (or all) other implementations always write changes
in-place without renaming, as this makes it possible to find out
whether new mail has arrived.

From hs at ox.cx  Thu Jun 28 16:04:21 2012
From: hs at ox.cx (Hynek Schlawack)
Date: Thu, 28 Jun 2012 16:04:21 +0200
Subject: [Python-Dev] [Infrastructure] Buildbot master moved
In-Reply-To: <1340888573.3396.6.camel@localhost.localdomain>
References: <4FEC0432.3040507@v.loewis.de>
	<1340878000.3396.4.camel@localhost.localdomain>
	<1340878292.3396.5.camel@localhost.localdomain>
	<1340888573.3396.6.camel@localhost.localdomain>
Message-ID: <4FEC6465.1000603@ox.cx>

Hi,

I don't know if it's known, but the bot infrastructure is FUBAR now.
http://buildbot.python.org/all/waterfall is a stacktrace and all tests
fail because of the XML-RPC tests that use our buildbot API.

Regards
Hynek

From rdmurray at bitdance.com  Thu Jun 28 17:08:43 2012
From: rdmurray at bitdance.com (R. David Murray)
Date: Thu, 28 Jun 2012 11:08:43 -0400
Subject: [Python-Dev] cpython (2.7): #9559: Append data to single-file
	mailbox files if messages are only added
In-Reply-To: <20120628131645.GP3455@p16.foo.com>
References: <20120628130702.124ac6d3@pitrou.net>
	<20120628131645.GP3455@p16.foo.com>
Message-ID: <20120628150844.549DE250676@webabinitio.net>

On Thu, 28 Jun 2012 16:16:45 +0300, Petri Lehtinen wrote:
> Antoine Pitrou wrote:
> > > If messages were only added, a new file is no longer created and
> > > renamed over the old file when flush() is called on an mbox, MMDF or
> > > Babyl mailbox.
> >
> > Why so? Appending is not atomic and, if it fails in the middle, you
> > could get a corrupt mbox file.
> > Furthermore, I disagree that it's a bugfix: IMO it should wait for 3.4.
>
> The code previously already appended messages to the end of the file
> when calling add(). This patch just changed it to not do a full
> rewrite when flush() is called. Having a partially written message in
> the end of your mailbox doesn't seem like a fatal corruption to me.
>
> Furthermore, I (and R. David Murray) think this is not so surprising
> for users. Most (or all) other implementations always write changes
> in-place without renaming, as this makes it possible to find out
> whether new mail has arrived.

It is true, however, that Petri found that mutt (I think?) does some
extra gymnastics to provide recovery where the write fails part way
through, and it would be worth adding that as an enhanced bugfix if
someone has the motivation (basically, make a copy of the unmodified
mailbox and mv it back into place if the write fails). Even that fix
won't prevent corruption in the case of a system crash, but, then, not
much will in that case.

--David

From rdmurray at bitdance.com  Thu Jun 28 17:11:04 2012
From: rdmurray at bitdance.com (R. David Murray)
Date: Thu, 28 Jun 2012 11:11:04 -0400
Subject: [Python-Dev] [pydotorg-www] [Infrastructure] Buildbot master moved
In-Reply-To: <4FEC6465.1000603@ox.cx>
References: <4FEC0432.3040507@v.loewis.de>
	<1340878000.3396.4.camel@localhost.localdomain>
	<1340878292.3396.5.camel@localhost.localdomain>
	<1340888573.3396.6.camel@localhost.localdomain>
	<4FEC6465.1000603@ox.cx>
Message-ID: <20120628151104.E323E250676@webabinitio.net>

On Thu, 28 Jun 2012 16:04:21 +0200, Hynek Schlawack wrote:
> I don't know if it's known, but the bot infrastructure is FUBAR now.
> http://buildbot.python.org/all/waterfall is a stacktrace and all tests
> fail because of the XML-RPC tests that use our buildbot API.

Heh, that's somewhat amusing. Last upgrade, /console made a traceback
and /waterfall worked fine, this upgrade it is the reverse.

--David

From solipsis at pitrou.net  Thu Jun 28 18:07:11 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Thu, 28 Jun 2012 18:07:11 +0200
Subject: [Python-Dev] cpython (2.7): #9559: Append data to single-file
	mailbox files if messages are only added
References: <20120628130702.124ac6d3@pitrou.net>
	<20120628131645.GP3455@p16.foo.com>
Message-ID: <20120628180711.19391209@pitrou.net>

On Thu, 28 Jun 2012 16:16:45 +0300
Petri Lehtinen wrote:
> Antoine Pitrou wrote:
> > > If messages were only added, a new file is no longer created and
> > > renamed over the old file when flush() is called on an mbox, MMDF or
> > > Babyl mailbox.
> >
> > Why so? Appending is not atomic and, if it fails in the middle, you
> > could get a corrupt mbox file.
> > Furthermore, I disagree that it's a bugfix: IMO it should wait for 3.4.
>
> The code previously already appended messages to the end of the file
> when calling add(). This patch just changed it to not do a full
> rewrite when flush() is called.

Ok, I agree it sounds good then. Thanks for explaining.

Regards

Antoine.

From solipsis at pitrou.net  Thu Jun 28 18:09:51 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Thu, 28 Jun 2012 18:09:51 +0200
Subject: [Python-Dev] [Infrastructure] Buildbot master moved
In-Reply-To: <4FEC6465.1000603@ox.cx>
References: <4FEC0432.3040507@v.loewis.de>
	<1340878000.3396.4.camel@localhost.localdomain>
	<1340878292.3396.5.camel@localhost.localdomain>
	<1340888573.3396.6.camel@localhost.localdomain>
	<4FEC6465.1000603@ox.cx>
Message-ID: <1340899791.3379.3.camel@localhost.localdomain>

Le jeudi 28 juin 2012 à 16:04 +0200, Hynek Schlawack a écrit :
> Hi,
>
> I don't know if it's known, but the bot infrastructure is FUBAR now.
> http://buildbot.python.org/all/waterfall is a stacktrace and all tests
> fail because of the XML-RPC tests that use our buildbot API.

It works if you reload the page, though. Looks like a weird bug in
buildbot, has anyone reported it upstream?

Regards

Antoine.

From solipsis at pitrou.net  Thu Jun 28 18:31:29 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Thu, 28 Jun 2012 18:31:29 +0200
Subject: [Python-Dev] [Infrastructure] Buildbot master moved
In-Reply-To: <1340899791.3379.3.camel@localhost.localdomain>
References: <4FEC0432.3040507@v.loewis.de>
	<1340878000.3396.4.camel@localhost.localdomain>
	<1340878292.3396.5.camel@localhost.localdomain>
	<1340888573.3396.6.camel@localhost.localdomain>
	<4FEC6465.1000603@ox.cx>
	<1340899791.3379.3.camel@localhost.localdomain>
Message-ID: <1340901089.3379.5.camel@localhost.localdomain>

Le jeudi 28 juin 2012 à 18:09 +0200, Antoine Pitrou a écrit :
> Le jeudi 28 juin 2012 à 16:04 +0200, Hynek Schlawack a écrit :
> > Hi,
> >
> > I don't know if it's known, but the bot infrastructure is FUBAR now.
> > http://buildbot.python.org/all/waterfall is a stacktrace and all tests
> > fail because of the XML-RPC tests that use our buildbot API.
>
> It works if you reload the page, though. Looks like a weird bug in
> buildbot, has anyone reported it upstream?

Ok, apparently it's http://trac.buildbot.net/ticket/2301 and it seems
to have been fixed upstream.

Now someone needs to incorporate these changes into our local git. I'm
a git newbie, and basic commands seem to fail for me:

$ git fetch -v
Permission denied (publickey).
fatal: The remote end hung up unexpectedly

Hynek, do you want to help?

Regards

Antoine.

From tjreedy at udel.edu  Thu Jun 28 18:38:30 2012
From: tjreedy at udel.edu (Terry Reedy)
Date: Thu, 28 Jun 2012 12:38:30 -0400
Subject: [Python-Dev] [Infrastructure] Buildbot master moved
In-Reply-To: <4FEC6465.1000603@ox.cx>
References: <4FEC0432.3040507@v.loewis.de>
	<1340878000.3396.4.camel@localhost.localdomain>
	<1340878292.3396.5.camel@localhost.localdomain>
	<1340888573.3396.6.camel@localhost.localdomain>
	<4FEC6465.1000603@ox.cx>
Message-ID:

On 6/28/2012 10:04 AM, Hynek Schlawack wrote:
> Hi,
>
> I don't know if it's known, but the bot infrastructure is FUBAR now.
> http://buildbot.python.org/all/waterfall is a stacktrace and all tests
> fail because of the XML-RPC tests that use our buildbot API.

Errors seem intermittent. Above just worked for me after failing.
Clicking on links on http://www.python.org/dev/buildbot/ (which
buildbot.python.org redirects to) may give display or error (from FF):

web.Server Traceback (most recent call last):

AttributeError: 'NoneType' object has no attribute 'getStatus'

/usr/lib/python2.7/dist-packages/twisted/web/server.py, line 132 in process
  130  try:
  131      resrc = self.site.getResourceFor(self)
  132      self.render(resrc)
  133  except:

/usr/lib/python2.7/dist-packages/twisted/web/server.py, line 167 in render
  165  """
  166  try:
  167      body = resrc.render(self)
  168  except UnsupportedMethod, e:

/data/buildbot/lib/python/buildbot/status/web/base.py, line 324 in render
  322      return ''
  323
  324      ctx = self.getContext(request)
  325

/data/buildbot/lib/python/buildbot/status/web/base.py, line 196 in getContext
  194  class ContextMixin(AccessorMixin):
  195      def getContext(self, request):
  196          status = self.getStatus(request)
  197          rootpath = path_to_root(request)

/data/buildbot/lib/python/buildbot/status/web/base.py, line 182 in getStatus
  180  class AccessorMixin(object):
  181      def getStatus(self, request):
  182          return request.site.buildbot_service.getStatus()
  183

/data/buildbot/lib/python/buildbot/status/web/baseweb.py, line 498 in getStatus
  496
  497      def getStatus(self):
  498          return self.master.getStatus()
  499

AttributeError: 'NoneType' object has no attribute 'getStatus'

-- 
Terry Jan Reedy

From ethan at stoneleaf.us  Thu Jun 28 18:36:53 2012
From: ethan at stoneleaf.us (Ethan Furman)
Date: Thu, 28 Jun 2012 09:36:53 -0700
Subject: [Python-Dev] PEP 423 : naming conventions and recipes related to
	packaging
In-Reply-To: <4FEC4A8F.1010207@plope.com>
References: <4FEACD9D.8090208@marmelune.net>
	<20120627125055.196efe9f@pitrou.net>
	<20120627133453.1766ea15@pitrou.net> <4FEC33A5.7080108@marmelune.net>
	<4FEC4A8F.1010207@plope.com>
Message-ID: <4FEC8825.5030205@stoneleaf.us>

Chris McDonough wrote:
> People typically look for code on PyPI that solves a problem, and
> branding in namespacing there is usually confusing. E.g. there are many
> highly-general useful things in both the zope

When I search PyPI I ignore anything with django, zope, etc., as I have
zero interest in pulling in a bunch of unrelated packages that I don't
need. If some of these pieces are truly stand-alone it would be nice if
they were presented that way.

~Ethan~

From hs at ox.cx  Thu Jun 28 19:08:44 2012
From: hs at ox.cx (Hynek Schlawack)
Date: Thu, 28 Jun 2012 19:08:44 +0200
Subject: [Python-Dev] [Infrastructure] Buildbot master moved
In-Reply-To: <1340901089.3379.5.camel@localhost.localdomain>
References: <4FEC0432.3040507@v.loewis.de>
	<1340878000.3396.4.camel@localhost.localdomain>
	<1340878292.3396.5.camel@localhost.localdomain>
	<1340888573.3396.6.camel@localhost.localdomain>
	<4FEC6465.1000603@ox.cx>
	<1340899791.3379.3.camel@localhost.localdomain>
	<1340901089.3379.5.camel@localhost.localdomain>
Message-ID:

> Now someone needs to incorporate these changes into our local git. I'm a
> git newbie, and basic commands seem to fail for me:
>
> $ git fetch -v
> Permission denied (publickey).
> fatal: The remote end hung up unexpectedly
>
> Hynek, do you want to help?

I'd love to, but I'm afk for the rest of the (CEST) day. :( If it's
still broken tomorrow, you know where to find me. :)

Cheers,
Hynek

From martin at v.loewis.de  Thu Jun 28 19:28:02 2012
From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=)
Date: Thu, 28 Jun 2012 19:28:02 +0200
Subject: [Python-Dev] [Infrastructure] Buildbot master moved
In-Reply-To: <4FEC6465.1000603@ox.cx>
References: <4FEC0432.3040507@v.loewis.de>
	<1340878000.3396.4.camel@localhost.localdomain>
	<1340878292.3396.5.camel@localhost.localdomain>
	<1340888573.3396.6.camel@localhost.localdomain>
	<4FEC6465.1000603@ox.cx>
Message-ID: <4FEC9422.4000508@v.loewis.de>

> I don't know if it's known, but the bot infrastructure is FUBAR now.

I'm quite certain it can be repaired.

Regards,
Martin

From v+python at g.nevcal.com  Thu Jun 28 19:37:18 2012
From: v+python at g.nevcal.com (Glenn Linderman)
Date: Thu, 28 Jun 2012 10:37:18 -0700
Subject: [Python-Dev] PEP 423 : naming conventions and recipes related to
	packaging
In-Reply-To: <4FEC8825.5030205@stoneleaf.us>
References: <4FEACD9D.8090208@marmelune.net>
	<20120627125055.196efe9f@pitrou.net>
	<20120627133453.1766ea15@pitrou.net> <4FEC33A5.7080108@marmelune.net>
	<4FEC4A8F.1010207@plope.com> <4FEC8825.5030205@stoneleaf.us>
Message-ID: <4FEC964E.8070906@g.nevcal.com>

On 6/28/2012 9:36 AM, Ethan Furman wrote:
> When I search PyPI I ignore anything with django, zope, etc., as I
> have zero interest in pulling in a bunch of unrelated packages that I
> don't need. If some of these pieces are truly stand-alone it would be
> nice if they were presented that way.

+1 Precisely.

If a user wishes to install packages into "who_made_it" or
"where_did_it_come_from" namespaces, and has a version of Python that
supports namespaces, it would probably be a good idea for installers to
easily permit that. Then, if there are naming conflicts among packages
from different sources, and they wish to use more than one
"mypackage"... one from PyPI and one (or 3) from github, they can decide
to name them as

mypackage (hah, this one is from PyPI and they used it first)
github.mypackage (first one from github)
github_fred.mypackage (the one by fred from github)
github_mary.mypackage (the one by mary from github)

They may choose to rename mypackage as PyPI.mypackage and
github.mypackage as github_billy.mypackage to help discriminate, now
that they have a need to discriminate, but that should be their choice,
change versus compatibility.

Happily, it is possible to do either of the following to keep code
names shorter:

import github_fred.mypackage as mypackage
import github_fred.mypackage as fredpackage

Reading Java code with lots of names fully qualified by organization is
thoroughly confusing... it is unambiguous where the code came from, but
it is also distracting when trying to understand the code, with all
these long names that are irrelevant to the logic of the code.

And, of course, "mypackage" may be a set of alternative implementations
of similar functionality, or it may be completely different
functionalities that happen to use the same name (just as in English,
when one word can have multiple definitions).
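(As an aside: assuming PEP 420 namespace packages, which are new in
Python 3.3, and treating the github_* directories as purely hypothetical
names, the installed tree for the scheme sketched above could look like:

    site-packages/
        mypackage/           # the one from PyPI
        github_fred/         # no __init__.py: a PEP 420 namespace
            mypackage/
        github_mary/
            mypackage/

With that layout, ``import github_fred.mypackage as fredpackage`` works
as shown above.)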
Maybe the PEP would be more useful by having it define package metadata
standards such as

mypackage.__organization__ = 'The company or group or individual that
created this package'
mypackage.__obtained_from__ = 'The location from which this package was
obtained'

The former would be supplied by the package author; the latter would be
filled in by the repository publisher (which may or may not be the same
as the __organization__), or be a requirement to upload to the
repository, or something along that line, so you'd know where to start
a search for an updated version.

From mail at timgolden.me.uk  Thu Jun 28 19:42:46 2012
From: mail at timgolden.me.uk (Tim Golden)
Date: Thu, 28 Jun 2012 18:42:46 +0100
Subject: [Python-Dev] Bitbucket mirror?
Message-ID: <4FEC9796.7060300@timgolden.me.uk>

Just recently I'm sure I saw a post saying that the main Python repo
was mirrored on bitbucket.org for the convenience of developers who
could then fork to their own accounts.

For the life of me I can't find it now. Can someone confirm and/or
nudge me in the right direction, please?

TJG

From brian at python.org  Thu Jun 28 19:44:57 2012
From: brian at python.org (Brian Curtin)
Date: Thu, 28 Jun 2012 12:44:57 -0500
Subject: [Python-Dev] Bitbucket mirror?
In-Reply-To: <4FEC9796.7060300@timgolden.me.uk>
References: <4FEC9796.7060300@timgolden.me.uk>
Message-ID:

On Thu, Jun 28, 2012 at 12:42 PM, Tim Golden wrote:
> Just recently I'm sure I saw a post saying that the main Python repo was
> mirrored on bitbucket.org for the convenience of developers who could then
> fork to their own accounts.
>
> For the life of me I can't find it now. Can someone confirm and/or nudge me
> in the right direction, please?

https://bitbucket.org/python_mirrors

From phd at phdru.name  Thu Jun 28 19:47:07 2012
From: phd at phdru.name (Oleg Broytman)
Date: Thu, 28 Jun 2012 21:47:07 +0400
Subject: [Python-Dev] Bitbucket mirror?
In-Reply-To: <4FEC9796.7060300@timgolden.me.uk>
References: <4FEC9796.7060300@timgolden.me.uk>
Message-ID: <20120628174707.GA19436@iskra.aviel.ru>

On Thu, Jun 28, 2012 at 06:42:46PM +0100, Tim Golden wrote:
> Just recently I'm sure I saw a post saying that the main Python repo
> was mirrored on bitbucket.org for the convenience of developers who
> could then fork to their own accounts.
>
> For the life of me I can't find it now. Can someone confirm and/or
> nudge me in the right direction, please?

This one? https://bitbucket.org/mirror/python-py3k

Oleg.
-- 
Oleg Broytman http://phdru.name/ phd at phdru.name
Programmers don't die, they just GOSUB without RETURN.

From vinay_sajip at yahoo.co.uk  Thu Jun 28 20:08:48 2012
From: vinay_sajip at yahoo.co.uk (Vinay Sajip)
Date: Thu, 28 Jun 2012 18:08:48 +0000 (UTC)
Subject: [Python-Dev] Bitbucket mirror?
References: <4FEC9796.7060300@timgolden.me.uk>
Message-ID:

Tim Golden <mail at timgolden.me.uk> writes:

> Just recently I'm sure I saw a post saying that the main Python repo was
> mirrored on bitbucket.org for the convenience of developers who could
> then fork to their own accounts.
>
> For the life of me I can't find it now. Can someone confirm and/or nudge
> me in the right direction, please?

I don't know how official it is, but I've used this mirror when I wanted
to make use of BitBucket side-by-side comparisons:

https://bitbucket.org/mirror/cpython/

It seems fairly up-to-date.
Much of my early work on the venv code was done using a clone from this
mirror, and I pulled and merged from it lots of times without any
issues.

Regards,

Vinay Sajip

From petri at digip.org  Thu Jun 28 20:09:51 2012
From: petri at digip.org (Petri Lehtinen)
Date: Thu, 28 Jun 2012 21:09:51 +0300
Subject: [Python-Dev] cpython (2.7): #9559: Append data to single-file
	mailbox files if messages are only added
In-Reply-To: <20120628150844.549DE250676@webabinitio.net>
References: <20120628130702.124ac6d3@pitrou.net>
	<20120628131645.GP3455@p16.foo.com>
	<20120628150844.549DE250676@webabinitio.net>
Message-ID: <20120628180951.GA29699@chang>

R. David Murray wrote:
> It is true, however, that Petri found that mutt (I think?) does some extra
> gymnastics to provide recovery where the write fails part way through,
> and it would be worth adding that as an enhanced bugfix if someone
> has the motivation (basically, make a copy of the unmodified mailbox
> and mv it back into place if the write fails).

This is not what mutt does. It just writes the modified part of the
mailbox to a temporary file, and then copies the data from the
temporary file to the mailbox file. If this last step fails, the
temporary file is left behind for recovery.

Copying the whole mailbox before making modifications might be clever,
though. It's just quite a lot of writing, especially for big mailboxes.
OTOH, the whole file is rewritten by the current code, too.

From brian at python.org  Thu Jun 28 20:10:48 2012
From: brian at python.org (Brian Curtin)
Date: Thu, 28 Jun 2012 13:10:48 -0500
Subject: [Python-Dev] Bitbucket mirror?
In-Reply-To:
References: <4FEC9796.7060300@timgolden.me.uk>
Message-ID:

On Thu, Jun 28, 2012 at 1:08 PM, Vinay Sajip wrote:
> Tim Golden <mail at timgolden.me.uk> writes:
>
>> Just recently I'm sure I saw a post saying that the main Python repo was
>> mirrored on bitbucket.org for the convenience of developers who could
>> then fork to their own accounts.
>>
>> For the life of me I can't find it now. Can someone confirm and/or nudge
>> me in the right direction, please?
>
> I don't know how official it is, but I've used this mirror when I wanted
> to make use of BitBucket side-by-side comparisons:
>
> https://bitbucket.org/mirror/cpython/
>
> It seems fairly up-to-date. Much of my early work on the venv code was
> done using a clone from this mirror, and I pulled and merged from it
> lots of times without any issues.

Atlassian set up https://bitbucket.org/python_mirrors to mirror the
entire hg.python.org setup.

http://blog.python.org/2012/06/mercurial-mirrors-provided-by-atlassian.html

From mail at timgolden.me.uk  Thu Jun 28 20:24:55 2012
From: mail at timgolden.me.uk (Tim Golden)
Date: Thu, 28 Jun 2012 19:24:55 +0100
Subject: [Python-Dev] Bitbucket mirror?
In-Reply-To:
References: <4FEC9796.7060300@timgolden.me.uk>
Message-ID: <4FECA177.9070202@timgolden.me.uk>

On 28/06/2012 19:10, Brian Curtin wrote:
> Atlassian set up https://bitbucket.org/python_mirrors to mirror the
> entire hg.python.org setup.
>
> http://blog.python.org/2012/06/mercurial-mirrors-provided-by-atlassian.html

Thanks, Brian. That's obviously where I read about it, too.

TJG

From martin at v.loewis.de  Thu Jun 28 20:36:01 2012
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Thu, 28 Jun 2012 20:36:01 +0200
Subject: [Python-Dev] [pydotorg-www] [Infrastructure] Buildbot master moved
In-Reply-To: <20120628193012.0218c957@pitrou.net>
References: <4FEC0432.3040507@v.loewis.de>
	<1340878000.3396.4.camel@localhost.localdomain>
	<1340878292.3396.5.camel@localhost.localdomain>
	<1340888573.3396.6.camel@localhost.localdomain>
	<4FEC6465.1000603@ox.cx>
	<1340899791.3379.3.camel@localhost.localdomain>
	<1340901089.3379.5.camel@localhost.localdomain>
	<20120628193012.0218c957@pitrou.net>
Message-ID: <4FECA411.5080108@v.loewis.de>

> Ok, I've applied the patches by hand in the local repo, without
> committing them. It seems to fix the issue AFAICT.

I tried merging the change with git, but that would merge a lot of
other changes as well, and produce conflicts. So I just committed and
pushed your changes.

Feel free to commit anything you change right away; I can then push the
changes. Or you push them yourself. Just leaving them uncommitted is
fine as well.

Regards,
Martin

From benoit at marmelune.net  Thu Jun 28 21:32:59 2012
From: benoit at marmelune.net (=?ISO-8859-1?Q?Beno=EEt_Bryon?=)
Date: Thu, 28 Jun 2012 21:32:59 +0200
Subject: [Python-Dev] PEP 423 : naming conventions and recipes related to
	packaging
In-Reply-To: <4FEC4A8F.1010207@plope.com>
References: <4FEACD9D.8090208@marmelune.net>
	<20120627125055.196efe9f@pitrou.net>
	<20120627133453.1766ea15@pitrou.net> <4FEC33A5.7080108@marmelune.net>
	<4FEC4A8F.1010207@plope.com>
Message-ID: <4FECB16B.5070809@marmelune.net>

Let's try to summarize answers about top-level namespaces with use
cases and examples... I hope I understood them well...

About "yes" or "no" meaning:

yes
  It fits the (work-in-progress) convention. You would recommend it.

no
  You wouldn't recommend the naming pattern for *new* projects (we
  can't require existing projects to be renamed).

=====

Project is standalone (doesn't mean "has no dependencies"), released
on PyPI:

* only a one-level name is recommended, no namespace package
* yes: sphinx, flask, lettuce
* no: zc.rst2 (brand name is superfluous)

=====

Project is made of several subprojects which are not standalone,
released on PyPI:

* a namespace package is recommended
* the top-level namespace is functional: projects in the namespace
  make a bigger project. They are not designed to work as standalone
  components.
* yes: ? Do you have examples of such a use case?
* no: plone.app.* (too many levels)

=====

Project is related to another one (i.e. kind of contrib), released on
PyPI:

* note: there is a difference between "related to another project" and
  "depends on another project". As an example, Fabric depends on ssh,
  but is not a contrib of it.
* choice depends on conventions of the related project
* if there is no specific convention, a namespace package is
  recommended
* the top-level namespace is functional: projects in the namespace
  have a common characteristic, they are specific to something,
  usually another project.
* yes: collective.castle
* yes because of an explicit specific convention: sphinxcontrib-feed,
  Flask-Admin
* no: castle, feed, admin, Plone.recipe.command (not specific to
  Plone, in fact related to zc.buildout)
* Use of additional metadata is highly recommended (keywords,
  topic::framework)

=====

Project is standalone, but really experimental (i.e. name could
change, not sure to publish version 0.2), want to make it public:

* I want to share code, but I am really not sure it will live long.
  I don't want to "block" a name slot.
* use a one-level name, as any standalone public project
* publish it on gitorious/github/bitbucket accounts
* don't register it with PyPI until it becomes a bit mature? i.e.
  start with code repositories only?
* not valuable enough to be mentioned in PEP 423? Maybe not in the
  scope of PEP 423. I mean it is more about "what kind of projects we
  register with PyPI" than "which name to choose".

=====

Project is standalone, but specific to my own usage, i.e. I use it as
personal software. It's not private because I want to share the code
(maybe someone will like it).

* use a one-level name, as any standalone public project
* publish it on code repositories.
* register it with PyPI? only if ready to maintain and document it?
* not valuable enough to be mentioned in PEP 423? Maybe not in the
  scope of PEP 423. I mean it is more about "what kind of projects we
  register with PyPI" than "which name to choose".

=====

.. note:: conventions for private projects are provided as
   informational guidelines.

Project is private, made of only one component:

* a namespace package is recommended (not sure about this rule, could
  be a one-level project name)
* the top-level name can be any unique arbitrary value. The company
  name could be a good choice.
* the top-level namespace is not functional, but represents an
  ownership: the project is specific to the customer/product. It
  contains closed-source parts.
* yes: mycustomer.website
* no: mycustomerwebsite, website

=====

Project is private, made of several components:

* a namespace package is recommended
* the top-level name can be any unique arbitrary value. The company
  name could be a good choice.
* the top-level namespace is not functional, but represents a common
  background: as an example, all components are specific to the
  customer/product.
* yes: mywebsite.blog, mywebsite.auth, mywebsite.calendar (where these
  components could be standalone WSGI applications, but contain
  specific stuff related to the customer/product).
* no: blog, auth, calendar

=====

Do you prefer the examples above to the "top-level namespace relates
to code ownership" rule? Do you see other use cases?

Benoit

From g.rodola at gmail.com  Fri Jun 29 02:21:08 2012
From: g.rodola at gmail.com (=?ISO-8859-1?Q?Giampaolo_Rodol=E0?=)
Date: Fri, 29 Jun 2012 02:21:08 +0200
Subject: [Python-Dev] os.path.exists() / os.path.isdir() inconsistency when
	dealing with gvfs directories
In-Reply-To:
References: <20120627000238.GA22009@cskk.homeip.net>
Message-ID:

2012/6/27 Nick Coghlan :
> If someone wants to see the error details, they should use os.stat directly
> rather than an existence check.

This is now tracked at http://bugs.python.org/issue15221

Regards,

--- Giampaolo
http://code.google.com/p/pyftpdlib/
http://code.google.com/p/psutil/
http://code.google.com/p/pysendfile/

From eliben at gmail.com  Fri Jun 29 05:58:42 2012
From: eliben at gmail.com (Eli Bendersky)
Date: Fri, 29 Jun 2012 06:58:42 +0300
Subject: [Python-Dev] Bitbucket mirror?
In-Reply-To:
References: <4FEC9796.7060300@timgolden.me.uk>
Message-ID:

> > I don't know how official it is, but I've used this mirror when I
> > wanted to make use of BitBucket side-by-side comparisons:
> >
> > https://bitbucket.org/mirror/cpython/
> >
> > It seems fairly up-to-date. Much of my early work on the venv code was
> > done using a clone from this mirror, and I pulled and merged from it
> > lots of times without any issues.
>
> Atlassian set up https://bitbucket.org/python_mirrors to mirror the
> entire hg.python.org setup.
>
> http://blog.python.org/2012/06/mercurial-mirrors-provided-by-atlassian.html

The devguide (http://docs.python.org/devguide/committing.html) says:

Bitbucket also maintain an up to date clone of the main cpython
repository that can be used as the basis for a new clone or patch queue.
[the link goes to https://bitbucket.org/mirror/cpython/overview]

Eli

From mail at timgolden.me.uk  Fri Jun 29 17:16:50 2012
From: mail at timgolden.me.uk (Tim Golden)
Date: Fri, 29 Jun 2012 16:16:50 +0100
Subject: [Python-Dev] Issue 1677 - please advise
Message-ID: <4FEDC6E2.30108@timgolden.me.uk>

I've been working on issue 1677 which concerns a race condition in the
interactive interpreter on Windows where a Ctrl-C can in some
circumstances cause the interpreter to exit as though a Ctrl-Z had been
pressed.

I've added patches to the issue for 2.7, 3.2 & default. I can't see any
realistic way to add a test for this. Unsurprisingly there don't appear
to be any tests in the test suite for the interactive interpreter and
even if there were, this is an inconsistent race condition I'm fixing.

So... should I go ahead and push anyway, or is there anything else I
should be doing as part of the change?

Thanks

TJG

From status at bugs.python.org  Fri Jun 29 18:07:13 2012
From: status at bugs.python.org (Python tracker)
Date: Fri, 29 Jun 2012 18:07:13 +0200 (CEST)
Subject: [Python-Dev] Summary of Python tracker Issues
Message-ID: <20120629160713.3980F1C9C9@psf.upfronthosting.co.za>

ACTIVITY SUMMARY (2012-06-22 - 2012-06-29)
Python tracker at http://bugs.python.org/

To view or respond to any of the issues listed below, click on the
issue. Do NOT respond to this message.
Issues counts and deltas: open 3485 ( +3) closed 23515 (+80) total 27000 (+83) Open issues with patches: 1456 Issues opened (54) ================== #11728: mbox parser incorrect behaviour http://bugs.python.org/issue11728 reopened by petri.lehtinen #15034: Devguide should document best practices for stdlib exceptions http://bugs.python.org/issue15034 reopened by r.david.murray #15140: PEP 384 inconsistent with implementation http://bugs.python.org/issue15140 opened by pitrou #15141: IDLE horizontal scroll bar missing (Win-XPsp3) http://bugs.python.org/issue15141 opened by NyteHawk #15144: Possible integer overflow in operations with addresses and siz http://bugs.python.org/issue15144 opened by storchaka #15145: Faster *_find_max_char http://bugs.python.org/issue15145 opened by storchaka #15147: Remove packaging from the stdlib http://bugs.python.org/issue15147 opened by pitrou #15148: shutil.which() docstring could be clearer http://bugs.python.org/issue15148 opened by tshepang #15151: Documentation for Signature, Parameter and signature in inspec http://bugs.python.org/issue15151 opened by ncoghlan #15152: test_subprocess failures on awfully slow buildbots http://bugs.python.org/issue15152 opened by neologix #15158: Add support for multi-character delimiters in csv http://bugs.python.org/issue15158 opened by ramchandra.apte #15163: pydoc displays __loader__ as module data http://bugs.python.org/issue15163 opened by pitrou #15165: test_email: failure on Windows http://bugs.python.org/issue15165 opened by skrah #15166: Implement imp.get_tag() using sys.implementation http://bugs.python.org/issue15166 opened by brett.cannon #15167: Re-implement imp.get_magic() in pure Python http://bugs.python.org/issue15167 opened by brett.cannon #15168: Move importlib.test to test.importlib http://bugs.python.org/issue15168 opened by brett.cannon #15169: Clear C code under PyImport_ExecCodeModuleObject() http://bugs.python.org/issue15169 opened by brett.cannon #15170: Fix 64-bit building for buildbot scripts (2.7) http://bugs.python.org/issue15170 opened by skrah #15171: Fix 64-bit building for buildbot scripts (3.2) http://bugs.python.org/issue15171 opened by skrah #15172: Document nasm-2.10.01 as required version for openssl http://bugs.python.org/issue15172 opened by skrah #15174: amd64\python_d.exe -m test fails http://bugs.python.org/issue15174 opened by skrah #15175: pydoc -k zip throws segmentation fault http://bugs.python.org/issue15175 opened by shank #15178: Doctest should handle situations when test files are not reada http://bugs.python.org/issue15178 opened by bkabrda #15180: Cryptic traceback from os.path.join when mixing str & bytes http://bugs.python.org/issue15180 opened by ncoghlan #15182: find_library_file() should try to link http://bugs.python.org/issue15182 opened by jdemeyer #15183: it should be made clear that the statement in the --setup opti http://bugs.python.org/issue15183 opened by tshepang #15184: Test failure in test_sysconfig_module http://bugs.python.org/issue15184 opened by georg.brandl #15185: Validate callbacks in 'contextlib.ExitStack.callback()' http://bugs.python.org/issue15185 opened by Yury.Selivanov #15186: Support os.walk(dir_fd=) http://bugs.python.org/issue15186 opened by larry #15188: test_ldshared_value failure on OS X using python.org Pythons http://bugs.python.org/issue15188 opened by ned.deily #15189: tkinter.messagebox does not use the application's icon http://bugs.python.org/issue15189 opened by mark #15191: tkinter convenience dialogs don't use themed
widgets http://bugs.python.org/issue15191 opened by mark #15192: test_bufio failures on Win64 buildbot http://bugs.python.org/issue15192 opened by pitrou #15194: libffi-3.0.11 update http://bugs.python.org/issue15194 opened by doko #15195: test_distutils fails when ARCHFLAGS is set on a Mac http://bugs.python.org/issue15195 opened by Marc.Abramowitz #15197: test_gettext failure on Win64 buildbot http://bugs.python.org/issue15197 opened by pitrou #15198: multiprocessing Pipe send of non-picklable objects doesn't rai http://bugs.python.org/issue15198 opened by Ian.Bell #15199: Default mimetype for javascript should be application/javascri http://bugs.python.org/issue15199 opened by bkabrda #15200: Faster os.walk http://bugs.python.org/issue15200 opened by storchaka #15201: C argument errors and Python arguments error are different http://bugs.python.org/issue15201 opened by ramchandra.apte #15202: followlinks/follow_symlinks/symlinks flags unification http://bugs.python.org/issue15202 opened by storchaka #15204: Deprecate the 'U' open mode http://bugs.python.org/issue15204 opened by storchaka #15205: distutils dereferences symlinks on Mac OS X but not on Linux http://bugs.python.org/issue15205 opened by olliewalsh #15206: uuid module falls back to unsuitable RNG http://bugs.python.org/issue15206 opened by christian.heimes #15207: mimetypes.read_windows_registry() uses the wrong regkey, creat http://bugs.python.org/issue15207 opened by dlchambers #15209: Re-raising exceptions from an expression http://bugs.python.org/issue15209 opened by Tyler.Crompton #15210: importlib.__init__ checks for the wrong exception when looking http://bugs.python.org/issue15210 opened by brett.cannon #15212: Rename SC_GLOBAL_EXPLICT to SC_GLOBAL_EXPLICIT in compiler mod http://bugs.python.org/issue15212 opened by Arfrever #15213: _PyOS_URandom documentation http://bugs.python.org/issue15213 opened by christian.heimes #15216: Support setting the encoding on a text stream after creation http://bugs.python.org/issue15216 opened by ncoghlan #15220: Reduce parsing overhead in email.feedparser.BufferedSubFile http://bugs.python.org/issue15220 opened by r.david.murray #15221: os.path.is*() may return False if path can't be accessed http://bugs.python.org/issue15221 opened by giampaolo.rodola #15222: mailbox.mbox writes without end-of-line at the file end. 
http://bugs.python.org/issue15222 opened by lilydjwg #665194: datetime-RFC2822 roundtripping http://bugs.python.org/issue665194 reopened by belopolsky Most recent 15 issues with no replies (15) ========================================== #15212: Rename SC_GLOBAL_EXPLICT to SC_GLOBAL_EXPLICIT in compiler mod http://bugs.python.org/issue15212 #15210: importlib.__init__ checks for the wrong exception when looking http://bugs.python.org/issue15210 #15205: distutils dereferences symlinks on Mac OS X but not on Linux http://bugs.python.org/issue15205 #15201: C argument errors and Python arguments error are different http://bugs.python.org/issue15201 #15199: Default mimetype for javascript should be application/javascri http://bugs.python.org/issue15199 #15198: multiprocessing Pipe send of non-picklable objects doesn't rai http://bugs.python.org/issue15198 #15195: test_distutils fails when ARCHFLAGS is set on a Mac http://bugs.python.org/issue15195 #15191: tkinter convenience dialogs don't use themed widgets http://bugs.python.org/issue15191 #15189: tkinter.messagebox does not use the application's icon http://bugs.python.org/issue15189 #15188: test_ldshared_value failure on OS X using python.org Pythons http://bugs.python.org/issue15188 #15182: find_library_file() should try to link http://bugs.python.org/issue15182 #15174: amd64\python_d.exe -m test fails http://bugs.python.org/issue15174 #15168: Move importlib.test to test.importlib http://bugs.python.org/issue15168 #15167: Re-implement imp.get_magic() in pure Python http://bugs.python.org/issue15167 #15163: pydoc displays __loader__ as module data http://bugs.python.org/issue15163 Most recent 15 issues waiting for review (15) ============================================= #15220: Reduce parsing overhead in email.feedparser.BufferedSubFile http://bugs.python.org/issue15220 #15212: Rename SC_GLOBAL_EXPLICT to SC_GLOBAL_EXPLICIT in compiler mod http://bugs.python.org/issue15212 #15209: Re-raising exceptions from an expression http://bugs.python.org/issue15209 #15207: mimetypes.read_windows_registry() uses the wrong regkey, creat http://bugs.python.org/issue15207 #15206: uuid module falls back to unsuitable RNG http://bugs.python.org/issue15206 #15204: Deprecate the 'U' open mode http://bugs.python.org/issue15204 #15202: followlinks/follow_symlinks/symlinks flags unification http://bugs.python.org/issue15202 #15200: Faster os.walk http://bugs.python.org/issue15200 #15199: Default mimetype for javascript should be application/javascri http://bugs.python.org/issue15199 #15194: libffi-3.0.11 update http://bugs.python.org/issue15194 #15186: Support os.walk(dir_fd=) http://bugs.python.org/issue15186 #15185: Validate callbacks in 'contextlib.ExitStack.callback()' http://bugs.python.org/issue15185 #15184: Test failure in test_sysconfig_module http://bugs.python.org/issue15184 #15180: Cryptic traceback from os.path.join when mixing str & bytes http://bugs.python.org/issue15180 #15178: Doctest should handle situations when test files are not reada http://bugs.python.org/issue15178 Top 10 most discussed issues (10) ================================= #1677: Ctrl-C will exit out of Python interpreter in Windows http://bugs.python.org/issue1677 18 msgs #10142: Support for SEEK_HOLE/SEEK_DATA http://bugs.python.org/issue10142 16 msgs #15030: PyPycLoader can't read cached .pyc files http://bugs.python.org/issue15030 16 msgs #444582: Finding programs in PATH, adding shutil.which http://bugs.python.org/issue444582 16 msgs #15147: Remove packaging from the stdlib 
http://bugs.python.org/issue15147 12 msgs #15209: Re-raising exceptions from an expression http://bugs.python.org/issue15209 12 msgs #15202: followlinks/follow_symlinks/symlinks flags unification http://bugs.python.org/issue15202 11 msgs #15206: uuid module falls back to unsuitable RNG http://bugs.python.org/issue15206 11 msgs #15133: tkinter.BooleanVar.get() behavior and docstring disagree http://bugs.python.org/issue15133 10 msgs #15139: Speed up threading.Condition wakeup http://bugs.python.org/issue15139 10 msgs Issues closed (80) ================== #3665: Support \u and \U escapes in regexes http://bugs.python.org/issue3665 closed by pitrou #4489: shutil.rmtree is vulnerable to a symlink attack http://bugs.python.org/issue4489 closed by hynek #5067: Error msg from using wrong quotes in JSON is unhelpful http://bugs.python.org/issue5067 closed by pitrou #5346: mailbox._singlefileMailbox.flush doesn't preserve file rights http://bugs.python.org/issue5346 closed by petri.lehtinen #5441: Convenience API for timeit.main http://bugs.python.org/issue5441 closed by ncoghlan #7360: [mailbox] race: mbox may lose data with concurrent access http://bugs.python.org/issue7360 closed by petri.lehtinen #7582: Use ISO timestamp in diff.py http://bugs.python.org/issue7582 closed by belopolsky #8916: Move PEP 362 (function signature objects) into inspect http://bugs.python.org/issue8916 closed by eric.araujo #9527: Add aware local time support to datetime module http://bugs.python.org/issue9527 closed by belopolsky #9559: mailbox.mbox creates new file when adding message to mbox http://bugs.python.org/issue9559 closed by petri.lehtinen #10376: ZipFile unzip is unbuffered http://bugs.python.org/issue10376 closed by pitrou #10571: "setup.py upload --sign" broken: TypeError: 'str' does not sup http://bugs.python.org/issue10571 closed by pitrou #11113: html.entities mapping dicts need updating? 
http://bugs.python.org/issue11113 closed by ezio.melotti #11626: Py_LIMITED_API on windows: unresolved symbol __imp___PyArg_Par http://bugs.python.org/issue11626 closed by loewis #11678: Add support for Arch Linux to platform.linux_distributions() http://bugs.python.org/issue11678 closed by python-dev #12559: gzip.open() needs an optional encoding argument http://bugs.python.org/issue12559 closed by nadeem.vawda #13062: Introspection generator and function closure state http://bugs.python.org/issue13062 closed by python-dev #13556: When tzinfo.utcoffset is out-of-bounds, the exception message http://bugs.python.org/issue13556 closed by belopolsky #13666: datetime documentation typos http://bugs.python.org/issue13666 closed by orsenthil #13685: argparse update help msg for % signs http://bugs.python.org/issue13685 closed by orsenthil #14127: add st_*time_ns fields to os.stat(), add ns keyword to os.*uti http://bugs.python.org/issue14127 closed by larry #14226: Expose dict_proxy type from descrobject.c http://bugs.python.org/issue14226 closed by eric.snow #14286: xxlimited.obj: unresolved external symbol __imp__PyObject_New http://bugs.python.org/issue14286 closed by loewis #14327: replace use of uname in the configury with macros set by AC_CA http://bugs.python.org/issue14327 closed by doko #14469: Python 3 documentation links http://bugs.python.org/issue14469 closed by orsenthil #14626: os module: use keyword-only arguments for dir_fd and nofollow http://bugs.python.org/issue14626 closed by georg.brandl #14698: test_posix failures - getpwduid()/initgroups()/getgroups() http://bugs.python.org/issue14698 closed by neologix #14742: test_tools very slow http://bugs.python.org/issue14742 closed by mark.dickinson #14785: Add sys._debugmallocstats() http://bugs.python.org/issue14785 closed by dmalcolm #14815: random_seed uses only 32-bits of hash on Win64 http://bugs.python.org/issue14815 closed by larry #14837: Better SSL errors http://bugs.python.org/issue14837 closed by pitrou #14906: rotatingHandler WindowsError http://bugs.python.org/issue14906 closed by vinay.sajip #14917: Make os.symlink on Win32 detect if target is directory http://bugs.python.org/issue14917 closed by larry #14923: Even faster UTF-8 decoding http://bugs.python.org/issue14923 closed by mark.dickinson #15008: PEP 362 "Signature Objects" reference implementation http://bugs.python.org/issue15008 closed by Yury.Selivanov #15042: Implemented PyState_AddModule, PyState_RemoveModule http://bugs.python.org/issue15042 closed by loewis #15055: dictnotes.txt is out of date http://bugs.python.org/issue15055 closed by pitrou #15061: hmac.secure_compare() leaks information about length of string http://bugs.python.org/issue15061 closed by christian.heimes #15079: pickle: Possibly misplaced test http://bugs.python.org/issue15079 closed by pitrou #15080: Cookie library doesn't parse date properly http://bugs.python.org/issue15080 closed by terry.reedy #15102: Fix 64-bit building for buildbot scripts (3.3) http://bugs.python.org/issue15102 closed by skrah #15118: uname and other os functions should return a struct sequence i http://bugs.python.org/issue15118 closed by larry #15124: _thread.LockType: Optimize lock deletion, acquisition of uncon http://bugs.python.org/issue15124 closed by pitrou #15135: HOWTOs doesn't link to "Idioms and Anti-Idioms" article http://bugs.python.org/issue15135 closed by terry.reedy #15137: Cleaned source of `cmd` module http://bugs.python.org/issue15137 closed by terry.reedy #15138: base64.urlsafe_b64**code 
are too slow http://bugs.python.org/issue15138 closed by gvanrossum #15142: Fix reference leak with types created using PyType_FromSpec http://bugs.python.org/issue15142 closed by pitrou #15143: Windows compile errors http://bugs.python.org/issue15143 closed by georg.brandl #15146: Implemented PyType_FromSpecWithBases http://bugs.python.org/issue15146 closed by loewis #15149: Release Schedule needs updating http://bugs.python.org/issue15149 closed by georg.brandl #15150: Windows build does not link http://bugs.python.org/issue15150 closed by loewis #15153: Add inspect.getgeneratorlocals http://bugs.python.org/issue15153 closed by python-dev #15154: remove "rmdir" argument from os.unlink, add "dir_fd" to os.rmd http://bugs.python.org/issue15154 closed by larry #15155: sporadic failure in RecvmsgSCTPStreamTest http://bugs.python.org/issue15155 closed by neologix #15156: Refactor HTMLParser.unescape to use html.entities.html5 http://bugs.python.org/issue15156 closed by ezio.melotti #15157: venvs should include pydoc http://bugs.python.org/issue15157 closed by python-dev #15159: Add failover for follow_symlinks and effective_ids where possi http://bugs.python.org/issue15159 closed by larry #15160: Add support for MIME header parsing to the new provisional ema http://bugs.python.org/issue15160 closed by r.david.murray #15161: add new-style os API to two missing functions http://bugs.python.org/issue15161 closed by python-dev #15162: help() says "This is the online help utility." even though it http://bugs.python.org/issue15162 closed by python-dev #15164: add platform.uname() namedtuple interface? http://bugs.python.org/issue15164 closed by larry #15173: Copyright/licensing statement in new venv module http://bugs.python.org/issue15173 closed by python-dev #15176: Clarify the behavior of listdir(fd) in both code and documenta http://bugs.python.org/issue15176 closed by larry #15177: Support os.fwalk(dir_fd=) http://bugs.python.org/issue15177 closed by larry #15179: An infinite loop happens when we use SysLogHandler with eventl http://bugs.python.org/issue15179 closed by python-dev #15181: importlib.h: suncc warnings http://bugs.python.org/issue15181 closed by pitrou #15187: test_shutil does not clean up after itself http://bugs.python.org/issue15187 closed by larry #15190: Allow whitespace and comments after line continuation characte http://bugs.python.org/issue15190 closed by eric.smith #15193: Exception AttributeError: "'NoneType' object has no attribute http://bugs.python.org/issue15193 closed by mark.dickinson #15196: os.path.realpath gets confused when symlinks include '..' 
http://bugs.python.org/issue15196 closed by o11c #15203: Accepting of os functions of (path, dir_fd) pair as argument http://bugs.python.org/issue15203 closed by ncoghlan #15208: Uparrow doesn't show previously typed variable or character http://bugs.python.org/issue15208 closed by ned.deily #15211: Test http://bugs.python.org/issue15211 closed by r.david.murray #15214: list.startswith() and list.remove() fails to catch consecutive http://bugs.python.org/issue15214 closed by petri.lehtinen #15215: socket module setblocking and settimeout problem http://bugs.python.org/issue15215 closed by pitrou #15217: os.listdir is missing in os.supports_dir_fd http://bugs.python.org/issue15217 closed by larry #15218: Check for all necessary dir_fd and follow_symlinks functions http://bugs.python.org/issue15218 closed by python-dev #15219: Leak in "_hashlib.new()" if argument is not a string http://bugs.python.org/issue15219 closed by amaury.forgeotdarc #415492: Compiler generates relative filenames http://bugs.python.org/issue415492 closed by brett.cannon #1644987: ./configure --prefix=/ breaks, won't build C modules http://bugs.python.org/issue1644987 closed by jcea

From martin at v.loewis.de  Fri Jun 29 19:04:26 2012
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Fri, 29 Jun 2012 19:04:26 +0200
Subject: [Python-Dev] Issue 1677 - please advise
In-Reply-To: <4FEDC6E2.30108@timgolden.me.uk>
References: <4FEDC6E2.30108@timgolden.me.uk>
Message-ID: <4FEDE01A.4030602@v.loewis.de>

> So... should I go ahead and push anyway, or is there anything else
> I should be doing as part of the change?

Go ahead!

Martin

From solipsis at pitrou.net  Sat Jun 30 23:17:42 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat, 30 Jun 2012 23:17:42 +0200
Subject: [Python-Dev] cross-compiling patches
Message-ID: <20120630231742.2c896682@pitrou.net>

Hello,

I think these patches are premature (they break compilation on OS X,
and they break ctypes configure on my Linux box). Furthermore, they
were committed post-beta, which means they should probably have waited
until after the 3.3 release.

So I propose that these commits be reverted.

(to be clear, I'm talking about all configure / Makefile / setup.py /
libffi changes since and including
http://hg.python.org/cpython/rev/e6e99d449bdc876fa57111e7e534c44ecbc3bcbd )

Regards

Antoine.
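For anyone wanting to experiment with the proposed revert locally: a
minimal sketch of backing out a single changeset with Mercurial, using
the revision hash Antoine cites above. Note that reverting the whole
series would mean one backout per changeset (or a manual revert), which
the thread leaves open:

    $ hg backout --merge -r e6e99d449bdc
    $ hg commit -m "Back out cross-compilation changes."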