From martin at v.loewis.de Thu Mar 1 00:02:59 2012
From: martin at v.loewis.de ("Martin v. Löwis")
Date: Thu, 01 Mar 2012 00:02:59 +0100
Subject: [Python-Dev] Backporting PEP 414
In-Reply-To: References: <4F4D38B6.4020103@stoneleaf.us> <20120229011313.Horde.zLfORVNNcXdPTW2ZumqDWGA@webmail.df.eu>
Message-ID: <4F4EAEA3.40002@v.loewis.de>

>> There is a really simple litmus test for whether something is a bug:
>> does it deviate from the specification?
>>
>> In this case, the specification is the grammar, and the implementation
>> certainly doesn't deviate from it. So it can't be a bug.
>
> I don't think anyone can assert that the specification itself is immune
> to having "bugs".

I can assert that - a specification inherently cannot be "incorrect". It can only be "unintentional". There are certainly documentation errors. They occur when the documentation deviates from the implementation *and* from the intent. They are easy to fix in a bug fix release (assuming the implementation correctly reflects the intent).

But then, this isn't the case here, either: the *intent* of the current grammar is that there is no u prefix in the Python 3 language. So the specification clearly corresponds to the intent also.

Regards, Martin

From rosuav at gmail.com Thu Mar 1 00:13:01 2012
From: rosuav at gmail.com (Chris Angelico)
Date: Thu, 1 Mar 2012 10:13:01 +1100
Subject: [Python-Dev] Add a frozendict builtin type
In-Reply-To: References: <61817B63-B0D8-46FA-8284-45F88266516D@gmail.com>
Message-ID:

On Thu, Mar 1, 2012 at 8:08 AM, Paul Moore wrote:
> It would (apparently) help Victor to fix issues in his pysandbox
> project. I don't know if a secure Python sandbox is an important
> enough concept to warrant core changes to make it possible.

If a secure Python sandbox had been available last year, we would probably still be using Python at work for end-user scripting, instead of having had to switch to Javascript.
At least, that would be the case if this sandbox is what I think it is (we embed a scripting language in our C++ main engine, and allow end users to customize and partly drive our code). But features enabling that needn't be core; I wouldn't object to having to get some third-party add-ons to make it all work. Chris Angelico From raymond.hettinger at gmail.com Thu Mar 1 00:25:43 2012 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Wed, 29 Feb 2012 15:25:43 -0800 Subject: [Python-Dev] Add a frozendict builtin type In-Reply-To: References: <61817B63-B0D8-46FA-8284-45F88266516D@gmail.com> Message-ID: <6841B075-0292-4151-BE3C-7A982048C44F@gmail.com> On Feb 29, 2012, at 1:08 PM, Paul Moore wrote: > As it stands, I don't find the PEP compelling. The hardening use case > might be significant but Victor needs to spell it out if it's to make > a difference. If his sandboxing project needs it, the type need not be public. It can join dictproxy and structseq in our toolkit of internal types. Adding frozendict() as a new public type is unnecessary and undesirable -- a proliferation of types makes it harder to decide which tool is the most appropriate for a given problem. The itertools module ran into the issue early. Adding a new itertool tends to make the whole module harder to figure-out. Raymond P.S ISTM that lately Python is growing fatter without growing more powerful or expressive. Generators, context managers, and decorators were honking good ideas -- we need more of those rather than minor variations on things we already have. Plz forgive the typos -- I'm typing with one hand -- the other is holding a squiggling baby :-) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From victor.stinner at haypocalc.com Thu Mar 1 00:52:48 2012 From: victor.stinner at haypocalc.com (Victor Stinner) Date: Thu, 1 Mar 2012 00:52:48 +0100 Subject: [Python-Dev] Add a frozendict builtin type In-Reply-To: References: <61817B63-B0D8-46FA-8284-45F88266516D@gmail.com> Message-ID: > It would (apparently) help Victor to fix issues in his pysandbox > project. I don't know if a secure Python sandbox is an important > enough concept to warrant core changes to make it possible. Ok, let's talk about sandboxing and security. The main idea of pysandbox is to reuse most of CPython but hide "dangerous" functions and run untrusted code in a separated namespace. The problem is to create the sandbox and ensure that it is not possible to escape from this sandbox. pysandbox is still a proof-of-concept, even if it works pretty well for short dummy scripts. But pysandbox is not ready for real world programs. pysandbox uses various "tricks" and "hacks" to create a sandbox. But there is a major issue: the __builtins__ dict (or module) is available and used everywhere (in module globals, in frames, in functions globals, etc.), and can be modified. A read-only __builtins__ dict is required to protect the sandbox. If the untrusted can modify __builtins__, it can replace core functions like isinstance(), len(), ... and so modify code outside the sandbox. To implement a frozendict in Python, pysandbox uses the blacklist approach: a class inheriting from dict and override some methods to raise an error. The whitelist approach cannot be used for a type implemented in Python, because the __builtins__ type must inherit from dict: ceval.c expects a type compatible with PyDict_GetItem and PyDict_SetItem. Problem: if you implement a frozendict type inheriting from dict in Python, it is still possible to call dict methods (e.g. dict.__setitem__()). To fix this issue, pysandbox removes all dict methods modifying the dict: __setitem__, __delitem__, pop, etc. 
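[Editor's note: the dict-subclass blacklist approach and the bypass problem described here can be sketched as follows. This is an illustrative sketch only, not pysandbox's actual code; the class and method names are ours.]

```python
class frozendict(dict):
    """A dict subclass whose mutating methods raise (blacklist approach)."""

    def _readonly(self, *args, **kwargs):
        raise TypeError("frozendict is read-only")

    # Blacklist every method that mutates the mapping.
    __setitem__ = _readonly
    __delitem__ = _readonly
    pop = _readonly
    popitem = _readonly
    clear = _readonly
    update = _readonly
    setdefault = _readonly

fd = frozendict(a=1)
try:
    fd["b"] = 2                  # blocked by the override
except TypeError:
    pass

# ...but the unbound base-class method still bypasses the blacklist:
dict.__setitem__(fd, "b", 2)
print(fd)                        # the "frozen" dict was mutated anyway
```

The last two lines show the hole: since the type inherits from dict, `dict.__setitem__` can still mutate the instance, which is why pysandbox resorts to stripping those methods from dict itself.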
This is a problem because untrusted code cannot use these methods on valid dicts created in the sandbox.

> However,
> if Victor was saying that implementing this PEP was all that is needed
> to implement a secure sandbox, then that would be a very different
> claim, and likely much more compelling (to some, at least - I have no
> personal need for a secure sandbox).

A builtin frozendict type "compatible" with the PyDict C API is very convenient for pysandbox because using this type for core features like builtins requires very few modifications. For example, using frozendict for __builtins__ only requires modifying 3 lines in frameobject.c.

I don't see how to solve the pysandbox issue (read-only __builtins__ issue, need to remove dict.__setitem__ & friends) without modifying CPython (so without adding a frozendict type).

> As it stands, I don't find the PEP compelling. The hardening use case
> might be significant but Victor needs to spell it out if it's to make
> a difference.

I don't know if hardening Python is a compelling argument to add a new builtin type.

Victor

From rdmurray at bitdance.com Thu Mar 1 01:02:26 2012
From: rdmurray at bitdance.com (R. David Murray)
Date: Wed, 29 Feb 2012 19:02:26 -0500
Subject: [Python-Dev] Add a frozendict builtin type
In-Reply-To: References: <61817B63-B0D8-46FA-8284-45F88266516D@gmail.com>
Message-ID: <20120301000227.BB9A82500E5@webabinitio.net>

On Thu, 01 Mar 2012 10:13:01 +1100, Chris Angelico wrote:
> On Thu, Mar 1, 2012 at 8:08 AM, Paul Moore wrote:
> > It would (apparently) help Victor to fix issues in his pysandbox
> > project. I don't know if a secure Python sandbox is an important
> > enough concept to warrant core changes to make it possible.
>
> If a secure Python sandbox had been available last year, we would
> probably be still using Python at work for end-user scripting, instead
> of having had to switch to Javascript.
At least, that would be the > case if this sandbox is what I think it is (we embed a scripting > language in our C++ main engine, and allow end users to customize and > partly drive our code). But features enabling that needn't be core; I > wouldn't object to having to get some third-party add-ons to make it > all work. I likewise am aware of a project where the availability of sandboxing might be make-or-break for continuing to use Python. In this case the idea would be sandboxing plugins called from a Python main program. I *think* that Victor's project would enable that, but I haven't looked at it closely. --David From raymond.hettinger at gmail.com Thu Mar 1 01:11:58 2012 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Wed, 29 Feb 2012 16:11:58 -0800 Subject: [Python-Dev] Add a frozendict builtin type In-Reply-To: References: <61817B63-B0D8-46FA-8284-45F88266516D@gmail.com> Message-ID: On Feb 29, 2012, at 3:52 PM, Victor Stinner wrote: > I don't know if hardening Python is a compelling argument to add a new > builtin type. It isn't. Builtins are for general purpose use. It is not something most people should use; however, if it is a builtin, people will be drawn to frozendicts like moths to a flame. The tuple-as-frozenlist anti-pattern shows what we're up against. Another thought: if pypy is successful at providing sandboxing, the need for sandboxing in CPython is substantially abated. Raymond -------------- next part -------------- An HTML attachment was scrubbed... URL: From victor.stinner at haypocalc.com Thu Mar 1 01:23:07 2012 From: victor.stinner at haypocalc.com (Victor Stinner) Date: Thu, 1 Mar 2012 01:23:07 +0100 Subject: [Python-Dev] Add a frozendict builtin type In-Reply-To: <61817B63-B0D8-46FA-8284-45F88266516D@gmail.com> References: <61817B63-B0D8-46FA-8284-45F88266516D@gmail.com> Message-ID: >> A frozendict type is a common request from users and there are various >> implementations. 
> > ISTM, this request is never from someone who has a use case.

One of my colleagues recently implemented his own frozendict class (with the "frozendict" name ;-)). He tried to implement something like PEP 351, not a generic freeze() function but a specialized function for his use case (only supporting list/tuple and dict/frozendict, if I remember correctly). It reminds me of the question: why does Python not provide a frozendict type? Even if it is not possible to write a perfect freeze() function, it looks like some developers need this sort of function, and I hope that frozendict would be a first step in the right direction.

Ruby has a freeze method. On a dict, it provides the same behaviour as frozendict: the mapping cannot be modified anymore, but values are still mutable. http://ruby-doc.org/core-1.9.3/Object.html#method-i-freeze

> Many experienced Python users simply forget
> that we have a frozenset type. We don't get bug reports or
> feature requests about the type.

I used it in my previous job to declare the access control list (ACL) on services provided by an XML-RPC object. To be honest, set could also be used, but I chose frozenset to ensure that my colleagues don't try to modify it without understanding the consequences of such a change. It was not a protection against evil hackers from the Internet, but from my colleagues :-)

Sorry, I didn't find any bug in frozenset :-) My usage was just to declare a frozenset and then check if an item is in the set, and it works pretty well!

> P.S. The one advantage I can see for frozensets and frozendicts
> is that we have an opportunity to optimize them once they are built
> (optimizing insertion order to minimize collisions, increasing or
> decreasing density, eliminating dummy entries, etc). That being
> said, the same could be accomplished for regular sets and dicts
> by the addition of an optimize() method.
You can also implement more optimizations in the Python peephole optimizer or the PyPy JIT because the mapping is constant, and so you can do the lookup at compile time instead of doing it at runtime. Dummy example:

---
config = frozendict(debug=False)
if config['debug']:
    enable_debug()
---

config['debug'] is always False, and so you can just drop the call to enable_debug() while compiling this code. It would avoid the need for a preprocessor in some cases (especially conditional code, like the C #ifdef).

Victor

From steve at pearwood.info Thu Mar 1 01:36:15 2012
From: steve at pearwood.info (Steven D'Aprano)
Date: Thu, 01 Mar 2012 11:36:15 +1100
Subject: [Python-Dev] Add a frozendict builtin type
In-Reply-To: References: <61817B63-B0D8-46FA-8284-45F88266516D@gmail.com>
Message-ID: <4F4EC47F.4020200@pearwood.info>

Raymond Hettinger wrote:
> On Feb 29, 2012, at 3:52 PM, Victor Stinner wrote:
>
>> I don't know if hardening Python is a compelling argument to add a new
>> builtin type.
>
> It isn't.
>
> Builtins are for general purpose use.
> It is not something most people should use;
> however, if it is a builtin, people will be drawn
> to frozendicts like moths to a flame.
> The tuple-as-frozenlist anti-pattern shows
> what we're up against.

Perhaps I'm a little slow today, but I don't get this. Could you elaborate on the tuple-as-frozenlist anti-pattern please? i.e. what it is, why it is an anti-pattern, and examples of it in real life?
-- Steven From tjreedy at udel.edu Thu Mar 1 02:10:12 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 29 Feb 2012 20:10:12 -0500 Subject: [Python-Dev] State of PEP-3118 (memoryview part) In-Reply-To: <20120229193449.GA32607@sleipnir.bytereef.org> References: <20120226132721.GA1422@sleipnir.bytereef.org> <4F4AA2E9.1070901@canterbury.ac.nz> <20120229193449.GA32607@sleipnir.bytereef.org> Message-ID: On 2/29/2012 2:34 PM, Stefan Krah wrote: > Greg Ewing wrote: >>> Options 2) and 3) would ideally entail one backwards incompatible >>> bugfix: In 2.7 and 3.2 assignment to a memoryview with format 'B' >>> rejects integers but accepts byte objects, but according to the >>> struct syntax mandated by the PEP it should be the other way round. >> >> Maybe a compromise could be made to accept both in the >> backport? That would avoid breaking old code while allowing >> code that does the right thing to work. This *almost* sounds like a feature addition. > > This could definitely be done. But backporting is beginning to look unlikely, > since we currently have three +1 for "too complex to backport". > > > I'm not strongly in favor of backporting myself. The main reason for me > would be to prevent having additional 2->3 or 3->2 porting obstacles. > > > Stefan Krah > > > > > -- Terry Jan Reedy From tjreedy at udel.edu Thu Mar 1 02:21:47 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 29 Feb 2012 20:21:47 -0500 Subject: [Python-Dev] Backporting PEP 414 In-Reply-To: <4F4EAEA3.40002@v.loewis.de> References: <4F4D38B6.4020103@stoneleaf.us> <20120229011313.Horde.zLfORVNNcXdPTW2ZumqDWGA@webmail.df.eu> <4F4EAEA3.40002@v.loewis.de> Message-ID: Armin filed and argued for the addition in a PEP, a Python *Enhancement* Proposal. He did not file a bugfix behavior issue on the tracker. Let us leave it as that. x.y is a specified language. We continuously improve the x.y docs that describe and explain the specification. 
We also improve the implementation of x.y and periodically release the improvements as x.y.z. -- Terry Jan Reedy From steve at pearwood.info Thu Mar 1 02:28:41 2012 From: steve at pearwood.info (Steven D'Aprano) Date: Thu, 01 Mar 2012 12:28:41 +1100 Subject: [Python-Dev] Add a frozendict builtin type In-Reply-To: <61817B63-B0D8-46FA-8284-45F88266516D@gmail.com> References: <61817B63-B0D8-46FA-8284-45F88266516D@gmail.com> Message-ID: <4F4ED0C9.4020908@pearwood.info> Raymond Hettinger wrote: > On Feb 27, 2012, at 10:53 AM, Victor Stinner wrote: > >> A frozendict type is a common request from users and there are various >> implementations. > > ISTM, this request is never from someone who has a use case. > Instead, it almost always comes from "completers", people > who see that we have a frozenset type and think the core devs > missed the ObviousThingToDo(tm). Frozendicts are trivial to > implement, so that is why there are various implementations > (i.e. the implementations are more fun to write than they are to use). They might be trivial for *you*, but the fact that people keep asking for help writing a frozendict, or stating that their implementation sucks, demonstrates that for the average Python coder they are not trivial at all. And the implementations I've seen don't seem to be so much fun as *tedious*. E.g. google on "python frozendict" and the second link is from somebody who had tried for "a couple of days" and is still not happy: http://python.6.n6.nabble.com/frozendict-td4377791.html You may dismiss him as a "completer", but what is asserted without evidence can be rejected without evidence, and so we may just as well declare that he has a brilliantly compelling use-case, if only we knew what it was... I see one implementation on ActiveState that has at least one serious problem, reported by you: http://code.activestate.com/recipes/414283-frozen-dictionaries/ So I don't think we can dismiss frozendict as "trivial". 
-- Steven From raymond.hettinger at gmail.com Thu Mar 1 02:45:06 2012 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Wed, 29 Feb 2012 17:45:06 -0800 Subject: [Python-Dev] Add a frozendict builtin type In-Reply-To: References: <61817B63-B0D8-46FA-8284-45F88266516D@gmail.com> Message-ID: <873CAE07-45B6-4F19-903E-7FF516D9EEF1@gmail.com> On Feb 29, 2012, at 4:23 PM, Victor Stinner wrote: > One of my colleagues implemented recently its own frozendict class > (which the "frozendict" name ;-) I write new collection classes all the time. That doesn't mean they warrant inclusion in the library or builtins. There is a use case for ListenableSets and ListenableDicts -- do we need them in the library? I think not. How about case insensitive variants? I think not. There are tons of recipes on ASPN and on PyPI. That doesn't make them worth adding in to the core group of types. As core developers, we need to place some value on language compactness and learnability. The language has already gotten unnecessarily fat -- it is the rare Python programmer who knows set operations on dict views, new-style formatting, abstract base classes, contextlib/functools/itertools, how the with-statement works, how super() works, what properties/staticmethods/classmethods are for, differences between new and old-style classes, Exception versus BaseException, weakreferences, __slots__, chained exceptions, etc. If we were to add another collections type, it would need to be something that powerfully adds to the expressivity of the language. Minor variants on what we already have just makes that language harder to learn and remember but not providing much of a payoff in return. Raymond -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From tjreedy at udel.edu Thu Mar 1 02:48:37 2012
From: tjreedy at udel.edu (Terry Reedy)
Date: Wed, 29 Feb 2012 20:48:37 -0500
Subject: [Python-Dev] State of PEP-3118 (memoryview part)
In-Reply-To: <20120229193449.GA32607@sleipnir.bytereef.org> References: <20120226132721.GA1422@sleipnir.bytereef.org> <4F4AA2E9.1070901@canterbury.ac.nz> <20120229193449.GA32607@sleipnir.bytereef.org>
Message-ID:

[erroneously hit send button before instead of edit menu above it]

On 2/29/2012 2:34 PM, Stefan Krah wrote:
> Greg Ewing wrote:
>>> Options 2) and 3) would ideally entail one backwards
>>> incompatible bugfix: In 2.7 and 3.2 assignment to a memoryview
>>> with format 'B' rejects integers but accepts byte objects, but
>>> according to the struct syntax mandated by the PEP it should be
>>> the other way round.

If implementation and PEP conflict, the normal question is 'what does the doc say?' as doc takes precedence over PEP. However, in this case the 'MemoryView objects' section under 'Concrete objects' says nothing about the above that I could see and refers to Buffer Protocol in the Abstract Objects Layer. I did not see anything there either, but could have missed it.

>> Maybe a compromise could be made to accept both in the backport?
>> That would avoid breaking old code while allowing code that does
>> the right thing to work.

This looks a bit like an enhancement ;-)

> This could definitely be done. But backporting is beginning to look
> unlikely, since we currently have three +1 for "too complex to
> backport".

My comment was more 'unnecessary to backport'. This is based on the following thoughts (which could have mistakes).

* I do not see enough benefit that I could wish you to write or anyone else to review a bugfix patch. I would in no way stop you if this continues to itch you ;-).

* Sorting out bugfix changes from feature changes looks complex and possibly contentious and might take some time to discuss.
* 3.2.3 is, I presume, less than a month away, and if that is missed, the next and last bugfix will be 3.2.4 at about the same time as 3.3.0. At that time, the full new memoryview version would be a better target. * As for porting, my impression is that the PEP directly affects only C code and Python code using ctypes and only some fraction of those. If the bugfix-only patch is significantly different from complete patch, porting to 3.2 would be significantly different from porting to 3.3. So I can foresee a temptation to just port to 3.3 anyway. > I'm not strongly in favor of backporting myself. The main reason for > me would be to prevent having additional 2->3 or 3->2 porting > obstacles. -- Terry Jan Reedy From ncoghlan at gmail.com Thu Mar 1 03:25:12 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 1 Mar 2012 12:25:12 +1000 Subject: [Python-Dev] State of PEP-3118 (memoryview part) In-Reply-To: References: <20120226132721.GA1422@sleipnir.bytereef.org> <4F4AA2E9.1070901@canterbury.ac.nz> <20120229193449.GA32607@sleipnir.bytereef.org> Message-ID: On Thu, Mar 1, 2012 at 11:48 AM, Terry Reedy wrote: > * As for porting, my impression is that the PEP directly affects only C code > and Python code using ctypes and only some fraction of those. If the > bugfix-only patch is significantly different from complete patch, porting to > 3.2 would be significantly different from porting to 3.3. So I can foresee a > temptation to just port to 3.3 anyway. memoryview as it exists in 2.7 and 3.2 misbehaves when used with certain buffer exporters - while Antoine bashed it into shape (mostly) for 1D views into 1D objects, it's rather temperamental if you try to go beyond that. So it affects the Python level as well, in terms of what objects are likely to upset memoryview. 
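[Editor's note: for reference, the 'B'-format assignment quirk quoted from Stefan earlier in the thread looks like this under the struct-conformant rules as eventually implemented in Python 3.3+ — a quick sketch; the behaviour in 2.7/3.2 is the reverse, as he notes.]

```python
buf = bytearray(b"abc")
m = memoryview(buf)     # 1-D view, format 'B' (unsigned bytes)

m[0] = 65               # struct syntax for 'B' takes an integer
assert buf == bytearray(b"Abc")

try:
    m[1] = b"Z"         # a bytes object is rejected under struct rules
except (TypeError, ValueError):
    pass                # exact exception type varies across versions
```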
Still, I think backporting would be a lot of work for relatively small benefit, so it ends up in my "with infinite resources, sure, but with limited resources, there are more fruitful things to be doing" pile.

Cheers, Nick.

-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From guido at python.org Thu Mar 1 04:05:18 2012
From: guido at python.org (Guido van Rossum)
Date: Wed, 29 Feb 2012 19:05:18 -0800
Subject: [Python-Dev] Add a frozendict builtin type
In-Reply-To: References: <61817B63-B0D8-46FA-8284-45F88266516D@gmail.com>
Message-ID:

On Wed, Feb 29, 2012 at 3:52 PM, Victor Stinner wrote:
>> It would (apparently) help Victor to fix issues in his pysandbox
>> project. I don't know if a secure Python sandbox is an important
>> enough concept to warrant core changes to make it possible.
>
> Ok, let's talk about sandboxing and security.
>
> The main idea of pysandbox is to reuse most of CPython but hide
> "dangerous" functions and run untrusted code in a separated namespace.
> The problem is to create the sandbox and ensure that it is not
> possible to escape from this sandbox. pysandbox is still a
> proof-of-concept, even if it works pretty well for short dummy
> scripts. But pysandbox is not ready for real world programs.

I hope you have studied (recent) history. Sandboxes in Python traditionally have not been secure. Read the archives for details.

> pysandbox uses various "tricks" and "hacks" to create a sandbox. But
> there is a major issue: the __builtins__ dict (or module) is available
> and used everywhere (in module globals, in frames, in functions
> globals, etc.), and can be modified. A read-only __builtins__ dict is
> required to protect the sandbox. If the untrusted can modify
> __builtins__, it can replace core functions like isinstance(), len(),
> ... and so modify code outside the sandbox.
> > To implement a frozendict in Python, pysandbox uses the blacklist > approach: a class inheriting from dict and override some methods to > raise an error. The whitelist approach cannot be used ?for a type > implemented in Python, because the __builtins__ type must inherit from > dict: ceval.c expects a type compatible with PyDict_GetItem and > PyDict_SetItem. > > Problem: if you implement a frozendict type inheriting from dict in > Python, it is still possible to call dict methods (e.g. > dict.__setitem__()). To fix this issue, pysandbox removes all dict > methods modifying the dict: __setitem__, __delitem__, pop, etc. This > is a problem because untrusted code cannot use these methods on valid > dict created in the sandbox. > >> However, >> if Victor was saying that implementing this PEP was all that is needed >> to implement a secure sandbox, then that would be a very different >> claim, and likely much more compelling (to some, at least - I have no >> personal need for a secure sandbox). > > A builtin frozendict type "compatible" with the PyDict C API is very > convinient for pysandbox because using this type for core features > like builtins requires very few modification. For example, use > frozendict for __builtins__ only requires to modify 3 lines in > frameobject.c. > > I don't see how to solve the pysandbox issue (read-only __builtins__ > issue, need to remove dict.__setitem__ & friends) without modifying > CPython (so without adding a frozendict type). > >> As it stands, I don't find the PEP compelling. The hardening use case >> might be significant but Victor needs to spell it out if it's to make >> a difference. > > I don't know if hardening Python is a compelling argument to add a new > builtin type. 
> Victor
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org

-- --Guido van Rossum (python.org/~guido)

From jimjjewett at gmail.com Thu Mar 1 05:33:21 2012
From: jimjjewett at gmail.com (Jim J. Jewett)
Date: Wed, 29 Feb 2012 20:33:21 -0800 (PST)
Subject: [Python-Dev] PEP 416: Add a frozendict builtin type
In-Reply-To: Message-ID: <4f4efc11.a77dec0a.531b.2e55@mx.google.com>

In http://mail.python.org/pipermail/python-dev/2012-February/117113.html Victor Stinner posted:

> An immutable mapping can be implemented using frozendict::
>
>     class immutabledict(frozendict):
>         def __new__(cls, *args, **kw):
>             # ensure that all values are immutable
>             for key, value in itertools.chain(args, kw.items()):
>                 if not isinstance(value, (int, float, complex, str, bytes)):
>                     hash(value)
>             # frozendict ensures that all keys are immutable
>             return frozendict.__new__(cls, *args, **kw)

What is the purpose of this? Is it just a hashable frozendict?

If it is for security (as some other messages suggest), then I don't think it really helps.

    class Proxy:
        def __eq__(self, other):
            return self.value == other
        def __hash__(self):
            return hash(self.value)

An instance of Proxy is hashable, and the hash is not object.hash, but it is still mutable. You're welcome to call that buggy, but a secure sandbox will have to deal with much worse.

-jJ

-- If there are still threading problems with my replies, please email me with details, so that I can try to resolve them. -jJ

From pmoody at google.com Thu Mar 1 06:13:33 2012
From: pmoody at google.com (Peter Moody)
Date: Wed, 29 Feb 2012 21:13:33 -0800
Subject: [Python-Dev] PEP czar for PEP 3144?
In-Reply-To: References: Message-ID:

Just checking in:

On Mon, Feb 20, 2012 at 5:48 PM, Nick Coghlan wrote:
> At the very least:
> - the IP Interface API needs to move to a point where it more clearly
> *is* an IP Address and *has* an associated IP Network (rather than
> being the other way around)

This is done [1]. There's cleanup that needs to happen here, but the interface classes are now subclasses of the respective address classes. Now I need to apply some consistency and then move on to the remaining points:

> - IP Network needs to behave more like an ordered set of sequential IP
> Addresses (without sometimes behaving like an Address in its own
> right)
> - iterable APIs should consistently produce iterators (leaving users
> free to wrap list() around the calls if they want the concrete
> realisation)

Cheers, peter

[1] http://code.google.com/p/ipaddress-py/source/detail?r=10dd6a68139fb99116219865afcd1c183777e8cc (the date is munged b/c I rebased to my original commit before submitting).

-- Peter Moody    Google    1.650.253.7306 Security Engineer  pgp:0xC3410038

From g.brandl at gmx.net Thu Mar 1 07:52:20 2012
From: g.brandl at gmx.net (Georg Brandl)
Date: Thu, 01 Mar 2012 07:52:20 +0100
Subject: [Python-Dev] Add a frozendict builtin type
In-Reply-To: <873CAE07-45B6-4F19-903E-7FF516D9EEF1@gmail.com> References: <61817B63-B0D8-46FA-8284-45F88266516D@gmail.com> <873CAE07-45B6-4F19-903E-7FF516D9EEF1@gmail.com>
Message-ID:

On 01.03.2012 02:45, Raymond Hettinger wrote:
>
> On Feb 29, 2012, at 4:23 PM, Victor Stinner wrote:
>
>> One of my colleagues implemented recently its own frozendict class
>> (which the "frozendict" name ;-)
>
> I write new collection classes all the time.
> That doesn't mean they warrant inclusion in the library or builtins.
> There is a use case for ListenableSets and ListenableDicts -- do we
> need them in the library? I think not. How about case insensitive variants?
> I think not. There are tons of recipes on ASPN and on PyPI.
> That doesn't make them worth adding in to the core group of types.

+1.

Georg

From ncoghlan at gmail.com Thu Mar 1 07:53:31 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 1 Mar 2012 16:53:31 +1000
Subject: [Python-Dev] PEP czar for PEP 3144?
In-Reply-To: References: Message-ID:

On Thu, Mar 1, 2012 at 3:13 PM, Peter Moody wrote:
> Just checking in:
>
> On Mon, Feb 20, 2012 at 5:48 PM, Nick Coghlan wrote:
>> At the very least:
>> - the IP Interface API needs to move to a point where it more clearly
>> *is* an IP Address and *has* an associated IP Network (rather than
>> being the other way around)
>
> This is done [1]. There's cleanup that needs to happen here, but the
> interface classes are now subclasses of the respective address
> classes.

Thanks for the update!

I'll be moving house this month, which may disrupt things a bit, but I'll still be trying to keep up with email, etc.

Cheers, Nick.

-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From storchaka at gmail.com Thu Mar 1 08:43:13 2012
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Thu, 01 Mar 2012 09:43:13 +0200
Subject: [Python-Dev] Add a frozendict builtin type
In-Reply-To: References: <61817B63-B0D8-46FA-8284-45F88266516D@gmail.com>
Message-ID:

01.03.12 01:52, Victor Stinner wrote:
> Problem: if you implement a frozendict type inheriting from dict in
> Python, it is still possible to call dict methods (e.g.
> dict.__setitem__()). To fix this issue, pysandbox removes all dict
> methods modifying the dict: __setitem__, __delitem__, pop, etc.

You can redefine dict.__setitem__.

oldsetitem = dict.__setitem__
def newsetitem(self, key, value):
    # check that self is not a frozendict
    ...
    oldsetitem(self, key, value)
....
dict.__setitem__ = newsetitem

From victor.stinner at haypocalc.com Thu Mar 1 10:11:03 2012
From: victor.stinner at haypocalc.com (Victor Stinner)
Date: Thu, 1 Mar 2012 10:11:03 +0100
Subject: [Python-Dev] Add a frozendict builtin type
In-Reply-To: References: <61817B63-B0D8-46FA-8284-45F88266516D@gmail.com>
Message-ID:

>> Problem: if you implement a frozendict type inheriting from dict in
>> Python, it is still possible to call dict methods (e.g.
>> dict.__setitem__()). To fix this issue, pysandbox removes all dict
>> methods modifying the dict: __setitem__, __delitem__, pop, etc. This
>> is a problem because untrusted code cannot use these methods on valid
>> dict created in the sandbox.
>
> You can redefine dict.__setitem__.

Ah? It doesn't work here.

>>> dict.__setitem__=lambda key, value: None
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: can't set attributes of built-in/extension type 'dict'

Victor

From g.rodola at gmail.com Thu Mar 1 10:28:46 2012
From: g.rodola at gmail.com (Giampaolo Rodolà)
Date: Thu, 1 Mar 2012 10:28:46 +0100
Subject: [Python-Dev] Add a frozendict builtin type
In-Reply-To: <873CAE07-45B6-4F19-903E-7FF516D9EEF1@gmail.com> References: <61817B63-B0D8-46FA-8284-45F88266516D@gmail.com> <873CAE07-45B6-4F19-903E-7FF516D9EEF1@gmail.com>
Message-ID:

On 1 March 2012 at 02:45, Raymond Hettinger wrote:
>
> On Feb 29, 2012, at 4:23 PM, Victor Stinner wrote:
>
> One of my colleagues implemented recently its own frozendict class
> (which the "frozendict" name ;-)
>
> I write new collection classes all the time.
> That doesn't mean they warrant inclusion in the library or builtins.
> There is a use case for ListenableSets and ListenableDicts -- do we
> need them in the library? I think not. How about case insensitive
> variants?
> I think not. There are tons of recipes on ASPN and on PyPI.
> That doesn't make them worth adding in to the core group of types.
> As core developers, we need to place some value on language
> compactness and learnability. The language has already gotten
> unnecessarily fat -- it is the rare Python programmer who knows
> set operations on dict views, new-style formatting, abstract base classes,
> contextlib/functools/itertools, how the with-statement works,
> how super() works, what properties/staticmethods/classmethods are for,
> differences between new and old-style classes, Exception versus
> BaseException,
> weakreferences, __slots__, chained exceptions, etc.
>
> If we were to add another collections type, it would need to be something
> that powerfully adds to the expressivity of the language. Minor variants
> on what we already have just makes that language harder to learn and
> remember
> but not providing much of a payoff in return.
>
> Raymond
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/g.rodola%40gmail.com

+1

---
Giampaolo
http://code.google.com/p/pyftpdlib/
http://code.google.com/p/psutil/
http://code.google.com/p/pysendfile/

From nd at perlig.de Thu Mar 1 10:29:32 2012
From: nd at perlig.de (André Malo)
Date: Thu, 1 Mar 2012 10:29:32 +0100
Subject: [Python-Dev] Add a frozendict builtin type
In-Reply-To: <61817B63-B0D8-46FA-8284-45F88266516D@gmail.com>
References: <61817B63-B0D8-46FA-8284-45F88266516D@gmail.com>
Message-ID: <201203011029.32559.nd@perlig.de>

On Wednesday 29 February 2012 20:17:05 Raymond Hettinger wrote:
> On Feb 27, 2012, at 10:53 AM, Victor Stinner wrote:
> > A frozendict type is a common request from users and there are various
> > implementations.
>
> ISTM, this request is never from someone who has a use case.
> Instead, it almost always comes from "completers", people
> who see that we have a frozenset type and think the core devs
> missed the ObviousThingToDo(tm). Frozendicts are trivial to
> implement, so that is why there are various implementations
> (i.e. the implementations are more fun to write than they are to use).
>
> The frozenset type covers a niche case that is nice-to-have but
> *rarely* used. Many experienced Python users simply forget
> that we have a frozenset type. We don't get bug reports or
> feature requests about the type. When I do Python consulting
> work, I never see it in a client's codebase. It does occasionally
> get discussed in questions on StackOverflow but rarely gets
> offered as an answer (typically on variants of the "how do you
> make a set-of-sets" question). If Google's codesearch were still
> alive, we could add another datapoint showing how infrequently
> this type is used.

Here are my real-world use cases. Not for security, but for safety and
performance reasons (I've built my own RODict and ROList modeled after
dictproxy):

- Global, but immutable containers, e.g. as class members

- Caching. My data container objects (say, resultsets from a db or something)
  usually inherit from list or dict (sometimes also set) and are cached
  heavily. In order to ensure that they are not modified (accidentally), I
  have two choices: deepcopy or immutability. deepcopy is so expensive that
  it's often cheaper to just leave out the cache. So I use immutability. (oh
  well, the objects are further restricted with __slots__)

I agree, these are not general purpose issues, but they are not *rare*,
I'd think.
nd

From victor.stinner at haypocalc.com Thu Mar 1 11:01:07 2012
From: victor.stinner at haypocalc.com (Victor Stinner)
Date: Thu, 1 Mar 2012 11:01:07 +0100
Subject: [Python-Dev] Add a frozendict builtin type
In-Reply-To: 
References: <61817B63-B0D8-46FA-8284-45F88266516D@gmail.com>
Message-ID: 

>> The main idea of pysandbox is to reuse most of CPython but hide
>> "dangerous" functions and run untrusted code in a separate namespace.
>> The problem is to create the sandbox and ensure that it is not
>> possible to escape from this sandbox. pysandbox is still a
>> proof-of-concept, even if it works pretty well for short dummy
>> scripts. But pysandbox is not ready for real world programs.
>
> I hope you have studied (recent) history. Sandboxes in Python
> traditionally have not been secure. Read the archives for details.

The design of pysandbox makes it difficult to implement securely: it is
mostly based on a blacklist, so any omission would lead to a
vulnerability.

I have read the recent history of sandboxes and looked at the other
security modules for Python, and I don't understand your reference to
"Sandboxes in Python traditionally have not been secure." There is no
known vulnerability in pysandbox; did I miss something? (There is only a
limitation on the dict API because of the lack of frozendict.) Are you
talking about rexec/Bastion? (Those cannot be qualified as "recent" :-))

pysandbox limitations are documented in its README file:
<<
pysandbox is a sandbox for the Python namespace, not a sandbox between
Python and the operating system. It doesn't protect your system against
Python security vulnerabilities: vulnerabilities in modules/functions
available in your sandbox (depending on your sandbox configuration). By
default, only a few functions are exposed to the sandbox namespace, which
limits the attack surface.

pysandbox is unable to limit the memory of the sandbox process: you have
to use your own protection.
>>

Hum, I am also not sure that pysandbox "works" with threads :-) I mean
that enabling pysandbox impacts all running threads, not only one thread,
which can cause issues. That should also be mentioned.

The PyPy sandbox has a different design: it uses a process with no
privileges, and all syscalls are redirected to another process which
applies security checks to each syscall.
http://doc.pypy.org/en/latest/sandbox.html

See also the seccomp-nurse project, a generic sandbox using Linux SECCOMP:
http://chdir.org/~nico/seccomp-nurse/

See also the pysandbox README for a list of other Python security modules.

Victor

From stefan at bytereef.org Thu Mar 1 12:03:29 2012
From: stefan at bytereef.org (Stefan Krah)
Date: Thu, 1 Mar 2012 12:03:29 +0100
Subject: [Python-Dev] State of PEP-3118 (memoryview part)
In-Reply-To: 
References: <20120226132721.GA1422@sleipnir.bytereef.org> <4F4AA2E9.1070901@canterbury.ac.nz> <20120229193449.GA32607@sleipnir.bytereef.org>
Message-ID: <20120301110329.GA4720@sleipnir.bytereef.org>

Terry Reedy wrote:
>>>> Options 2) and 3) would ideally entail one backwards
>>>> incompatible bugfix: In 2.7 and 3.2 assignment to a memoryview
>>>> with format 'B' rejects integers but accepts byte objects, but
>>>> according to the struct syntax mandated by the PEP it should be
>>>> the other way round.
>
> If implementation and PEP conflict, the normal question is 'what does
> the doc say?' as doc takes precedence over PEP. However, in this case the
> 'MemoryView objects' section under 'Concrete objects' says nothing about
> the above that I could see and refers to Buffer Protocol in Abstract
> Objects Layer. I did not see anything there either, but could have
> missed it.

For the C-API, it's here:

http://docs.python.org/py3k/c-api/buffer.html#the-buffer-structure

const char *format
    A NULL terminated string in struct module style syntax giving
    the contents of the elements available through the buffer. If this
    is NULL, "B" (unsigned bytes) is assumed.
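In struct module syntax, format 'B' items are unsigned bytes exposed as
small integers. A short sketch of the semantics the PEP mandates (this is
how an interpreter behaves once the bugfix described above is in; in 2.7
and 3.2 the two assignments below behave the other way round):

```python
import struct

# struct exposes format 'B' items as integers:
assert struct.unpack('B', b'z') == (122,)

data = bytearray(b'abcefg')
v = memoryview(data)

# PEP-mandated semantics: integers are accepted for 'B' items...
v[0] = 122
assert data == bytearray(b'zbcefg')

# ...while byte objects are rejected, mirroring the struct module.
try:
    v[1] = b'y'
except TypeError:
    pass
else:
    raise AssertionError("expected TypeError for bytes assignment")
```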
Unfortunately, the memoryview documentation itself gives examples like these: http://docs.python.org/py3k/library/stdtypes.html#typememoryview data = bytearray(b'abcefg') v = memoryview(data) v[0] = b'z' # That's where the struct module would throw an error. > * 3.2.3 is, I presume, less than a month away, and if that is missed, > the next and last bugfix will be 3.2.4 at about the same time as 3.3.0. That would be too soon indeed. > * As for porting, my impression is that the PEP directly affects only C > code and Python code using ctypes and only some fraction of those. If > the bugfix-only patch is significantly different from complete patch, > porting to 3.2 would be significantly different from porting to 3.3. So > I can foresee a temptation to just port to 3.3 anyway. The general problem is this: If someone supports 2 and 3 and uses the single codebase approach, it's unlikely that new features will ever be used. Even if a new 3.3 project is started that needs to be backwards compatible, I can imagine that people will shun anything that 3to2 isn't able to handle (out of the box). This would be less of an issue if the officially sanctioned way of porting were the "separate branches with 2to3 (or 3to2) for the initial conversion" approach. Stefan Krah From stefan at bytereef.org Thu Mar 1 12:11:36 2012 From: stefan at bytereef.org (Stefan Krah) Date: Thu, 1 Mar 2012 12:11:36 +0100 Subject: [Python-Dev] Spreading the Python 3 religion In-Reply-To: References: <20120228134113.B0E9F2500E4@webabinitio.net> <20120228095357.2b9fde87@resist.wooz.org> <20120228175056.Horde.KfPofklCcOxPTQXw0KqW1nA@webmail.df.eu> Message-ID: <20120301111136.GB4720@sleipnir.bytereef.org> Brett Cannon wrote: > Changes to http://docs.python.org/howto/pyporting.html are welcome. I tried to > make sure it exposed all possibilities with tips on how to support as far back > as Python 2.5. I'd like to add a section that highlights the advantages of separate branches. 
Starting perhaps with:

Advantages of separate branches:

1) The two code bases are cleaner.
2) Neither version is a second class citizen.
3) New Python 3 features can be adopted without worrying about conversion
   tools.
4) For the developer: psychologically, the py3k version slowly becomes
   the master branch (as it should).
5) For the user: running 2to3 on install sends the signal that version 2
   is the real version. This is not the case if there are, say, src2/ and
   src3/ directories in the distribution.

Stefan Krah

From victor.stinner at gmail.com Thu Mar 1 13:08:19 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Thu, 1 Mar 2012 13:08:19 +0100
Subject: [Python-Dev] PEP 416: Add a frozendict builtin type
In-Reply-To: <1330541549.7844.69.camel@surprise>
References: <1330541549.7844.69.camel@surprise>
Message-ID: 

>> Rationale
>> =========
>>
>> A frozendict mapping cannot be changed, but its values can be mutable
>> (not hashable). A frozendict is hashable and so immutable if all
>> values are hashable (immutable).

> The wording of the above seems very unclear to me.
>
> Do you mean "A frozendict has a constant set of keys, and for every key,
> d[key] has a specific value for the lifetime of the frozendict.
> However, these values *may* be mutable. The frozendict is hashable iff
> all of the values are hashable." ? (or somesuch)

New try:

"A frozendict is a read-only mapping: a key cannot be added nor
removed, and a key is always mapped to the same value. However,
frozendict values can be mutable (not hashable). A frozendict is
hashable and so immutable if and only if all values are hashable
(immutable)."

>>  * Register frozendict has a collections.abc.Mapping

> s/has/as/ ?

Oops, fixed.

>> If frozendict is used to harden Python (security purpose), it must be
>> implemented in C. A type implemented in C is also faster.
>
> You mention security purposes here, but this isn't mentioned in the
> Rationale or Use Cases

I added two use cases: security sandbox and cache.

> Hope this is helpful

Yes, thanks.

Victor

From valhallasw at arctus.nl Thu Mar 1 13:10:20 2012
From: valhallasw at arctus.nl (Merlijn van Deen)
Date: Thu, 1 Mar 2012 13:10:20 +0100
Subject: [Python-Dev] Spreading the Python 3 religion
In-Reply-To: <20120301111136.GB4720@sleipnir.bytereef.org>
References: <20120228134113.B0E9F2500E4@webabinitio.net> <20120228095357.2b9fde87@resist.wooz.org> <20120228175056.Horde.KfPofklCcOxPTQXw0KqW1nA@webmail.df.eu> <20120301111136.GB4720@sleipnir.bytereef.org>
Message-ID: 

On 1 March 2012 12:11, Stefan Krah wrote:
> Advantages of separate branches:

Even though I agree on most of your points, I disagree with

2) Neither version is a second class citizen.

In my experience, this is only true if you have a very strict discipline,
or if both branches are used a lot. If there are two branches (say: py2
and py3), and one is used much less (say: py3), that one will always be
the second class citizen - the py2 branch, which is used by 'most people',
gets more feature requests and bug reports. People will implement the
features and bug fixes in the py2 branch, and sometimes forget to port
them to the py3 branch, which means the branches start diverging. This
divergence makes applying newer changes even more difficult, leading to
further divergence.

Another cause for this is the painful merging in most version control
systems. I'm guessing you all know the pain of 'svn merge' - and there
are a *lot* of projects still using SVN or even CVS.

As such, you need to impose the discipline to always apply changes to
both branches. This is a reasonable thing for larger projects, but it is
generally harder to implement for smaller projects, as you're already
lucky if people are actually contributing.

Best,
Merlijn
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From victor.stinner at haypocalc.com Thu Mar 1 13:22:59 2012
From: victor.stinner at haypocalc.com (Victor Stinner)
Date: Thu, 1 Mar 2012 13:22:59 +0100
Subject: [Python-Dev] PEP 416: Add a frozendict builtin type
In-Reply-To: <4f4efc11.a77dec0a.531b.2e55@mx.google.com>
References: <4f4efc11.a77dec0a.531b.2e55@mx.google.com>
Message-ID: 

>> An immutable mapping can be implemented using frozendict::
>>
>>     class immutabledict(frozendict):
>>         def __new__(cls, *args, **kw):
>>             # ensure that all values are immutable
>>             for key, value in itertools.chain(args, kw.items()):
>>                 if not isinstance(value, (int, float, complex, str, bytes)):
>>                     hash(value)
>>             # frozendict ensures that all keys are immutable
>>             return frozendict.__new__(cls, *args, **kw)
>
> What is the purpose of this? Is it just a hashable frozendict?

It's a hashable frozendict, or a "really frozen dict", or just "an
immutable dict". It helps to detect errors earlier when you need a
hashable frozendict. It is faster than hash(frozendict) because it avoids
hashing known immutable types.

If the recipe is confusing, it can be removed. Or it may be added to
collections or somewhere else.

> If it is for security (as some other messages suggest), then I don't
> think it really helps.
>
>     class Proxy:
>         def __eq__(self, other): return self.value == other
>         def __hash__(self): return hash(self.value)
>
> An instance of Proxy is hashable, and the hash is not object.hash,
> but it is still mutable. You're welcome to call that buggy, but a
> secure sandbox will have to deal with much worse.

Your example looks to be incomplete: where does value come from? Is it
supposed to be a read-only view of an object?

Such a Proxy class doesn't help to implement a sandbox because
Proxy.value can be modified. I use closures to implement proxies in
pysandbox.
Dummy example:

def createLengthProxy(secret):
    class Proxy:
        def __len__(self):
            return len(secret)
    return Proxy()

Such a proxy is not safe because it is possible to retrieve the secret:

secret = "abc"
value = createLengthProxy(secret).__len__.__closure__[0].cell_contents
assert value is secret

pysandbox implements other protections to block access to __closure__.
Victor

From yselivanov.ml at gmail.com Thu Mar 1 13:37:28 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Thu, 1 Mar 2012 07:37:28 -0500
Subject: [Python-Dev] Add a frozendict builtin type
In-Reply-To: <873CAE07-45B6-4F19-903E-7FF516D9EEF1@gmail.com>
References: <61817B63-B0D8-46FA-8284-45F88266516D@gmail.com> <873CAE07-45B6-4F19-903E-7FF516D9EEF1@gmail.com>
Message-ID: <70E61790-B54E-46A7-ADDD-A8FE2B90F991@gmail.com>

Actually, I find the frozendict concept quite useful. We also have an
implementation in our framework, and we use it, for instance, in the
HTTP request object, for parsed arguments and parsed forms, whose values
shouldn't ever be modified once parsed.

Of course everybody can live without it, but given how easy it is to
implement, I think it's OK to have it.

+1.

On 2012-02-29, at 8:45 PM, Raymond Hettinger wrote:

> On Feb 29, 2012, at 4:23 PM, Victor Stinner wrote:
>
>> One of my colleagues recently implemented his own frozendict class
>> (with the "frozendict" name ;-)
>
> I write new collection classes all the time.
> That doesn't mean they warrant inclusion in the library or builtins.
> There is a use case for ListenableSets and ListenableDicts -- do we
> need them in the library? I think not. How about case insensitive variants?
> I think not. There are tons of recipes on ASPN and on PyPI.
> That doesn't make them worth adding in to the core group of types.
>
> As core developers, we need to place some value on language
> compactness and learnability. The language has already gotten
> unnecessarily fat -- it is the rare Python programmer who knows
> set operations on dict views, new-style formatting, abstract base classes,
> contextlib/functools/itertools, how the with-statement works,
> how super() works, what properties/staticmethods/classmethods are for,
> differences between new and old-style classes, Exception versus BaseException,
> > If we were to add another collections type, it would need to be something > that powerfully adds to the expressivity of the language. Minor variants > on what we already have just makes that language harder to learn and remember > but not providing much of a payoff in return. > > > Raymond > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/yselivanov.ml%40gmail.com From regebro at gmail.com Thu Mar 1 13:42:49 2012 From: regebro at gmail.com (Lennart Regebro) Date: Thu, 1 Mar 2012 13:42:49 +0100 Subject: [Python-Dev] Spreading the Python 3 religion In-Reply-To: References: <20120228134113.B0E9F2500E4@webabinitio.net> <20120228095357.2b9fde87@resist.wooz.org> <20120228175056.Horde.KfPofklCcOxPTQXw0KqW1nA@webmail.df.eu> <20120301111136.GB4720@sleipnir.bytereef.org> Message-ID: I also don't agree with the claim that a py3 version using 2to3 is a "second class citizen". You need to adopt the Python 2 code to Python 3 in that case too, and none of the overrules the other. //Lennart From victor.stinner at gmail.com Thu Mar 1 14:00:38 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 1 Mar 2012 14:00:38 +0100 Subject: [Python-Dev] Add a frozendict builtin type In-Reply-To: References: <61817B63-B0D8-46FA-8284-45F88266516D@gmail.com> Message-ID: > A builtin frozendict type "compatible" with the PyDict C API is very > convinient for pysandbox because using this type for core features > like builtins requires very few modification. For example, use > frozendict for __builtins__ only requires to modify 3 lines in > frameobject.c. See the frozendict_builtins.patch attached to the issue #14162. 
Last version:
http://bugs.python.org/file24690/frozendict_builtins.patch

Victor

From victor.stinner at gmail.com Thu Mar 1 14:07:10 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Thu, 1 Mar 2012 14:07:10 +0100
Subject: [Python-Dev] Add a frozendict builtin type
In-Reply-To: <201203011029.32559.nd@perlig.de>
References: <61817B63-B0D8-46FA-8284-45F88266516D@gmail.com> <201203011029.32559.nd@perlig.de>
Message-ID: 

> Here are my real-world use cases. Not for security, but for safety and
> performance reasons (I've built my own RODict and ROList modeled after
> dictproxy):
>
> - Global, but immutable containers, e.g. as class members

I attached type_final.patch to the issue #14162 to demonstrate how
frozendict can be used to implement a "read-only" type. Last version:
http://bugs.python.org/file24696/type_final.patch

Example:

>>> class FinalizedType:
...     __final__ = True
...     attr = 10
...     def hello(self):
...         print("hello")
...
>>> FinalizedType.attr = 12
TypeError: 'frozendict' object does not support item assignment
>>> FinalizedType.hello = print
TypeError: 'frozendict' object does not support item assignment

(instances do still have a mutable dict)

My patch checks for the __final__ class attribute, but the conversion
from dict to frozendict may be done by a function or a type method.
Creating a read-only type is a different issue; it's just another example
of frozendict usage.

Victor

From nd at perlig.de Thu Mar 1 14:26:46 2012
From: nd at perlig.de (André Malo)
Date: Thu, 1 Mar 2012 14:26:46 +0100
Subject: [Python-Dev] Add a frozendict builtin type
In-Reply-To: 
References: <201203011029.32559.nd@perlig.de>
Message-ID: <201203011426.46371.nd@perlig.de>

On Thursday 01 March 2012 14:07:10 Victor Stinner wrote:
> > Here are my real-world use cases. Not for security, but for safety and
> > performance reasons (I've built my own RODict and ROList modeled after
> > dictproxy):
> >
> > - Global, but immutable containers, e.g.
as class members
>
> I attached type_final.patch to the issue #14162 to demonstrate how
> frozendict can be used to implement a "read-only" type. Last version:
> http://bugs.python.org/file24696/type_final.patch

Oh, hmm. I rather meant something like this:

"""
class Foo:
    some_mapping = frozendict(
        blah=1, blub=2
    )

or as a variant:

def zonk(some_default=frozendict(...)):
    ...

or simply a global object:

baz = frozendict(some_immutable_mapping)
"""

I'm not sure about your final types. I'm using __slots__ = () for such
things (?)

nd

From ncoghlan at gmail.com Thu Mar 1 14:34:56 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 1 Mar 2012 23:34:56 +1000
Subject: [Python-Dev] Add a frozendict builtin type
In-Reply-To: <201203011029.32559.nd@perlig.de>
References: <61817B63-B0D8-46FA-8284-45F88266516D@gmail.com> <201203011029.32559.nd@perlig.de>
Message-ID: 

On Thu, Mar 1, 2012 at 7:29 PM, André Malo wrote:
> - Caching. My data container objects (say, resultsets from a db or something)
>   usually inherit from list or dict (sometimes also set) and are cached
>   heavily. In order to ensure that they are not modified (accidentally), I
>   have two choices: deepcopy or immutability. deepcopy is so expensive that
>   it's often cheaper to just leave out the cache. So I use immutability. (oh
>   well, the objects are further restricted with __slots__)

Speaking of caching - functools.lru_cache currently has to do a fair bit
of work in order to correctly cache keyword arguments. It's obviously a
*solvable* problem even without frozendict in the collections module (it
just stores the dict contents as a sorted tuple of 2-tuples), but it
would still be interesting to compare the readability, speed and memory
consumption differences of a version of lru_cache that used frozendict
to cache the keyword arguments instead.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com |
Brisbane, Australia

From storchaka at gmail.com Thu Mar 1 14:44:18 2012
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Thu, 01 Mar 2012 15:44:18 +0200
Subject: [Python-Dev] Add a frozendict builtin type
In-Reply-To: 
References: <61817B63-B0D8-46FA-8284-45F88266516D@gmail.com>
Message-ID: 

01.03.12 11:11, Victor Stinner wrote:
>> You can redefine dict.__setitem__.
> Ah? It doesn't work here.
>
>>>> dict.__setitem__=lambda key, value: None
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
> TypeError: can't set attributes of built-in/extension type 'dict'

Hmm, yes, it's true. It was too presumptuous of me to believe that you
had not considered such a simple approach.

But I will try to suggest another approach. `frozendict` inherits from
`dict`, but the data is stored not in the parent but in an internal
dictionary. Even if dict.__setitem__ is used, it will have no visible
effect.

class frozendict(dict):
    def __init__(self, values={}):
        self._values = dict(values)
    def __getitem__(self, key):
        return self._values[key]
    def __setitem__(self, key, value):
        raise TypeError("expect dict, got frozendict")
    ...
>>> a = frozendict({1: 2, 3: 4})
>>> a[1]
2
>>> a[5]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 5, in __getitem__
KeyError: 5
>>> a[5] = 6
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 7, in __setitem__
TypeError: expect dict, got frozendict
>>> dict.__setitem__(a, 5, 6)
>>> a[5]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 5, in __getitem__
KeyError: 5

From p.f.moore at gmail.com Thu Mar 1 14:49:41 2012
From: p.f.moore at gmail.com (Paul Moore)
Date: Thu, 1 Mar 2012 13:49:41 +0000
Subject: [Python-Dev] PEP 416: Add a frozendict builtin type
In-Reply-To: 
References: <1330541549.7844.69.camel@surprise>
Message-ID: 

On 1 March 2012 12:08, Victor Stinner wrote:
> New try:
>
> "A frozendict is a read-only mapping: a key cannot be added nor
> removed, and a key is always mapped to the same value. However,
> frozendict values can be mutable (not hashable). A frozendict is
> hashable and so immutable if and only if all values are hashable
> (immutable)."

I'd suggest you don't link immutability and non-hashability so tightly.
Misbehaved objects can be mutable but hashable:

>>> class A:
...     def __init__(self, a):
...         self.a = a
...     def __hash__(self):
...         return 12
...
>>> a = A(1)
>>> hash(a)
12
>>> a.a = 19
>>> hash(a)
12

Just avoid using the term "immutable" at all:

"A frozendict is a read-only mapping: a key cannot be added nor removed,
and a key is always mapped to the same value. However, frozendict values
can be mutable. A frozendict is hashable if and only if all values are
hashable."

I realise this is a weaker statement than you'd like to give
(immutability seems to be what people *really* think they want when they
talk about frozen objects), but don't promise immutability if that's not
what you're providing.
More specifically, I'd hate to think that someone for whom security is an
issue would see your original description and think they could use a
frozendict and get safety, only to find their system breached because of
a class like A above. The same could happen to people who want to handle
thread safety via immutable objects, who could also end up with errors if
misbehaving classes found their way into an application.

Paul.

From p.f.moore at gmail.com Thu Mar 1 15:08:18 2012
From: p.f.moore at gmail.com (Paul Moore)
Date: Thu, 1 Mar 2012 14:08:18 +0000
Subject: [Python-Dev] Add a frozendict builtin type
In-Reply-To: <70E61790-B54E-46A7-ADDD-A8FE2B90F991@gmail.com>
References: <61817B63-B0D8-46FA-8284-45F88266516D@gmail.com> <873CAE07-45B6-4F19-903E-7FF516D9EEF1@gmail.com> <70E61790-B54E-46A7-ADDD-A8FE2B90F991@gmail.com>
Message-ID: 

On 1 March 2012 12:37, Yury Selivanov wrote:
> Actually I find the frozendict concept quite useful. We also have an
> implementation in our framework, and we use it, for instance, in the
> HTTP request object, for parsed arguments and parsed forms, whose
> values shouldn't ever be modified once parsed.

The question isn't so much whether it's useful, as whether it's of
sufficiently general use to warrant putting it into the core language
(not even the stdlib, but the C core!). The fact that you have an
implementation of your own actually indicates that not having it in the
core didn't cause you any real problems.

Remember - the bar for core acceptance is higher than just "it is
useful". I'm not even sure I see a strong enough case for frozendict
being in the standard library yet, let alone in the core.

Paul.
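Paul's distinction is easy to see with a minimal pure-Python frozendict
sketch (along the lines of the recipes discussed in this thread - not the
C type the PEP proposes; `collections.abc` is the Python 3.3 spelling, the
ABC lives directly in `collections` on 3.2): the mapping itself is
read-only, while hash() succeeds or fails depending on the values it
happens to hold.

```python
from collections.abc import Mapping

class frozendict(Mapping):
    """Minimal read-only mapping sketch (not the proposed C type)."""
    def __init__(self, *args, **kwargs):
        self._d = dict(*args, **kwargs)   # private copy; no mutating API exposed
    def __getitem__(self, key):
        return self._d[key]
    def __iter__(self):
        return iter(self._d)
    def __len__(self):
        return len(self._d)
    def __hash__(self):
        # Hashable if and only if every value is hashable.
        return hash(frozenset(self._d.items()))

hash(frozendict(a=1))         # fine: all values hashable
try:
    hash(frozendict(a=[1]))   # list value -> TypeError
except TypeError:
    pass
```

As Paul notes, "hashable" is still not "immutable": a value with a
misbehaving __hash__, like his class A, would hash fine here while
remaining mutable.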
From storchaka at gmail.com Thu Mar 1 15:17:35 2012
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Thu, 01 Mar 2012 16:17:35 +0200
Subject: [Python-Dev] Add a frozendict builtin type
In-Reply-To: <201203011029.32559.nd@perlig.de>
References: <61817B63-B0D8-46FA-8284-45F88266516D@gmail.com> <201203011029.32559.nd@perlig.de>
Message-ID: 

01.03.12 11:29, André Malo wrote:
> - Caching. My data container objects (say, resultsets from a db or something)
>   usually inherit from list or dict (sometimes also set) and are cached
>   heavily. In order to ensure that they are not modified (accidentally), I
>   have two choices: deepcopy or immutability. deepcopy is so expensive that
>   it's often cheaper to just leave out the cache. So I use immutability. (oh
>   well, the objects are further restricted with __slots__)

This is the first rational use of frozendict that I see. However, a deep
copy is still necessary to create the frozendict. For this case, I
believe, it would be better to "freeze" the dict in place and then
copy-on-write it.

From nd at perlig.de Thu Mar 1 15:47:08 2012
From: nd at perlig.de (André Malo)
Date: Thu, 1 Mar 2012 15:47:08 +0100
Subject: [Python-Dev] Add a frozendict builtin type
In-Reply-To: 
References: <201203011029.32559.nd@perlig.de>
Message-ID: <201203011547.08603.nd@perlig.de>

On Thursday 01 March 2012 15:17:35 Serhiy Storchaka wrote:
> 01.03.12 11:29, André Malo wrote:
> > - Caching. My data container objects (say, resultsets from a db or
> > something) usually inherit from list or dict (sometimes also set) and are
> > cached heavily. In order to ensure that they are not modified
> > (accidentally), I have two choices: deepcopy or immutability. deepcopy is
> > so expensive that it's often cheaper to just leave out the cache. So I
> > use immutability. (oh well, the objects are further restricted with
> > __slots__)
>
> This is the first rational use of frozendict that I see. However, a deep
> copy is still necessary to create the frozendict. For this case, I
> believe, it would be better to "freeze" the dict in place and then
> copy-on-write it.

In my case it's actually a half one. The data mostly comes from
memcache ;) I'm populating the object and then I'm done with it. People
wanting to modify it need to copy it, yes. OTOH usually a shallow copy
is enough (here).

Funnily, my ROList actually provides a "sorted" method instead of "sort"
in order to create a sorted copy of the list.

nd

From victor.stinner at gmail.com Thu Mar 1 15:49:29 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Thu, 1 Mar 2012 15:49:29 +0100
Subject: [Python-Dev] Add a frozendict builtin type
In-Reply-To: 
References: <61817B63-B0D8-46FA-8284-45F88266516D@gmail.com>
Message-ID: 

> But I will try to suggest another approach. `frozendict` inherits from
> `dict`, but the data is stored not in the parent but in an internal
> dictionary. Even if dict.__setitem__ is used, it will have no visible
> effect.
>
> class frozendict(dict):
>     def __init__(self, values={}):
>         self._values = dict(values)
>     def __getitem__(self, key):
>         return self._values[key]
>     def __setitem__(self, key, value):
>         raise TypeError("expect dict, got frozendict")
>     ...

I would like to implement frozendict in C to be able to pass it to
PyDict_GetItem(), PyDict_SetItem() and PyDict_DelItem(). Using such a
Python implementation, you would get a surprising result:

d = frozendict()
dict.__setitem__(d, 'x', 1)   # this is what Python does internally
                              # when it expects a dict (e.g. in ceval.c
                              # for __builtins__)
'x' in d
=> False

(Python is not supposed to use the PyDict API if the object is a dict
subclass, but PyObject_Get/SetItem.)
Victor From victor.stinner at gmail.com Thu Mar 1 15:54:01 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 1 Mar 2012 15:54:01 +0100 Subject: [Python-Dev] Add a frozendict builtin type In-Reply-To: <201203011426.46371.nd@perlig.de> References: <201203011029.32559.nd@perlig.de> <201203011426.46371.nd@perlig.de> Message-ID: >> > Here are my real-world use cases. Not for security, but for safety and >> > performance reasons (I've built my own RODict and ROList modeled after >> > dictproxy): >> > >> > - Global, but immutable containers, e.g. as class members >> >> I attached type_final.patch to the issue #14162 to demonstrate how >> frozendict can be used to implement a "read-only" type. Last version: >> http://bugs.python.org/file24696/type_final.patch > > Oh, hmm. I rather meant something like that: > > """ > class Foo: > some_mapping = frozendict( > blah=1, blub=2 > ) > or as a variant: > > def zonk(some_default=frozendict(...)): > ... > or simply a global object: > > baz = frozendict(some_immutable_mapping) > """ Ah yes, frozendict is useful for such cases. > I'm not sure about your final types. I'm using __slots__ = () for such things You can still replace an attribute value if a class defines __slots__: >>> class A: ... __slots__=('x',) ... x = 1 ... >>> A.x=2 >>> A.x 2 Victor From nd at perlig.de Thu Mar 1 16:05:27 2012 From: nd at perlig.de (=?utf-8?q?Andr=C3=A9_Malo?=) Date: Thu, 1 Mar 2012 16:05:27 +0100 Subject: [Python-Dev] Add a frozendict builtin type In-Reply-To: References: <201203011426.46371.nd@perlig.de> Message-ID: <201203011605.27381.nd@perlig.de> On Thursday 01 March 2012 15:54:01 Victor Stinner wrote: > > I'm not sure about your final types. I'm using __slots__ = () for such > > things > > You can still replace an attribute value if a class defines __slots__: > >>> class A: > > ... __slots__=('x',) > ... x = 1 > ... > > >>> A.x=2 > >>> A.x > > 2 Ah, ok, I missed that. It should be fixable with a metaclass.
Not very nicely, though. nd From fpallanti at develer.com Thu Mar 1 15:15:07 2012 From: fpallanti at develer.com (Francesco Pallanti) Date: Thu, 01 Mar 2012 15:15:07 +0100 Subject: [Python-Dev] EuroPython 2012: Call for Proposal is Open! [Please spread the word] Message-ID: <1330611307.15056.23.camel@bmlab-palla> Hi guys, I'm Francesco and I am writing on behalf of EuroPython Staff (www.europython.eu). We are happy to announce that the Call for Proposals is now officially open! DEADLINE FOR PROPOSALS: MARCH 18TH, 23:59:59 CET For those who have never been at EuroPython (or similar conferences) before, the Call for Proposals is the period in which the organizers ask the community to submit proposals for talks to be held at the conference. Further details about the Call for Proposals are online here: http://ep2012.europython.eu/call-for-proposals/ EuroPython is a conference run by the community for the community: the vast majority of talks that are presented at the conference will be proposed, prepared and given by members of the Python community itself. And not only that: the process that selects the best talks among all the proposals will also be public and fully driven by the community: it's called Community Voting, and will begin right after the Call for Proposals ends. CFP: Talks, Hands-On Trainings and Posters ------------------------------------------ We're looking for proposals on every aspect of Python: programming from novice to advanced levels, applications and frameworks, or how you have been involved in introducing Python into your organisation. There are three different kinds of contribution that you can present at EuroPython: - Regular talk. These are standard "talk with slides", allocated in slots of 45, 60 or 90 minutes, depending on your preference and scheduling constraints. A Q&A session is held at the end of the talk. - Hands-on training.
These are advanced training sessions for a smaller audience (10-20 people), to dive into the subject with all details. These sessions are 4 hours long, and the audience will be strongly encouraged to bring a laptop to experiment. They should be prepared with fewer slides and more source code. - Posters. Posters are a graphical way to describe a project or a technology, printed in large format; posters are exhibited at the conference, can be read at any time by participants, and can be discussed face to face with their authors during the poster session. We will take care of printing the posters too, so don't worry about logistics. More details about the Call for Proposals are online here: http://ep2012.europython.eu/call-for-proposals/ Don't wait for the last day --------------------------- If possible, please avoid submitting your proposals on the last day. It might sound like a strange request, but last year about 80% of the proposals were submitted in the last 72 hours. This creates a few problems for organizers because we can't have a good picture of the size of the conference until that day. Remember that proposals are fully editable at any time, even after the Call for Proposals ends. You just need to log in on the website, go to the proposal page (linked from your profile page), and click the Edit button. First-time speakers are especially welcome; EuroPython is a community conference and we are eager to hear about your experience. If you have friends or colleagues who have something valuable to contribute, twist their arms to tell us about it! We are a conference run by the community for the community. Please help to spread the word by distributing this announcement to colleagues, mailing lists, your blog, Web site, and through your social networking connections. All the best, -- Francesco Pallanti - fpallanti at develer.com Develer S.r.l.
- http://www.develer.com/ .software .hardware .innovation Tel.: +39 055 3984627 - ext.: 215 From stefan at bytereef.org Thu Mar 1 16:31:14 2012 From: stefan at bytereef.org (Stefan Krah) Date: Thu, 1 Mar 2012 16:31:14 +0100 Subject: [Python-Dev] Spreading the Python 3 religion In-Reply-To: References: <20120228134113.B0E9F2500E4@webabinitio.net> <20120228095357.2b9fde87@resist.wooz.org> <20120228175056.Horde.KfPofklCcOxPTQXw0KqW1nA@webmail.df.eu> <20120301111136.GB4720@sleipnir.bytereef.org> Message-ID: <20120301153114.GA6307@sleipnir.bytereef.org> Merlijn van Deen wrote: > Another cause for this is the painful merging in most version control systems. > I'm guessing you all know the pain of 'svn merge' - and there are a lot of > projects still using SVN or even CVS. > > As such, you need to impose the discipline to always apply changes to both > branches. This is a reasonable thing for larger projects, but it is generally > harder to implement it for smaller projects, as you're already lucky people are > actually contributing. What you say is all true, but I wonder if the additional work is really that much of a problem. Several people have said here that applying changes to both versions becomes second nature, and this is also my experience. While mercurial may be nicer, svnmerge.py isn't that bad. Projects have different needs and priorities. From my own experience with cdecimal I can positively say that the amount of work required to keep two branches [1] in sync is completely dwarfed by first figuring out what to write and then implementing it correctly. After doing all that, the actual synchronization work feels like a vacation. Another aspect, which may be again cdecimal-specific, is that keeping 2.5 compatibility is *at least* as bothersome as supporting 2.6/2.7 and 3.x. 
As an example for a pretty large project, it looks like Antoine is making good progress with Twisted: https://bitbucket.org/pitrou/t3k/wiki/Home I certainly can't say what's possible or best for other projects. I do think though that choosing the separate branches strategy will pay off eventually (at the very latest when Python-2.7 reaches the status that Python-1.5 currently has). Stefan Krah [1] I don't even use two branches but 2.c/3.c and 2.py/3.py file name patterns. From stefan at bytereef.org Thu Mar 1 16:45:25 2012 From: stefan at bytereef.org (Stefan Krah) Date: Thu, 1 Mar 2012 16:45:25 +0100 Subject: [Python-Dev] Spreading the Python 3 religion In-Reply-To: References: <20120228134113.B0E9F2500E4@webabinitio.net> <20120228095357.2b9fde87@resist.wooz.org> <20120228175056.Horde.KfPofklCcOxPTQXw0KqW1nA@webmail.df.eu> <20120301111136.GB4720@sleipnir.bytereef.org> Message-ID: <20120301154525.GB6307@sleipnir.bytereef.org> Lennart Regebro wrote: > I also don't agree with the claim that a py3 version using 2to3 is a > "second class citizen". You need to adapt the Python 2 code to Python > 3 in that case too, and neither of them overrules the other. That's a fair point. Then of course *both* versions do not use their full potential, but that is strongly related to the "using all (new) features" item in the list.
Stefan Krah From solipsis at pitrou.net Thu Mar 1 16:42:26 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 1 Mar 2012 16:42:26 +0100 Subject: [Python-Dev] Spreading the Python 3 religion References: <20120228134113.B0E9F2500E4@webabinitio.net> <20120228095357.2b9fde87@resist.wooz.org> <20120228175056.Horde.KfPofklCcOxPTQXw0KqW1nA@webmail.df.eu> <20120301111136.GB4720@sleipnir.bytereef.org> <20120301153114.GA6307@sleipnir.bytereef.org> Message-ID: <20120301164226.3ff68c4f@pitrou.net> On Thu, 1 Mar 2012 16:31:14 +0100 Stefan Krah wrote: > > As an example for a pretty large project, it looks like Antoine is making > good progress with Twisted: > > https://bitbucket.org/pitrou/t3k/wiki/Home Well, to be honest, "making good progress" currently means "bored and not progressing at all" :-) But that's not due to the strategy I adopted, only to the sheer amount of small changes needed, and lack of immediate motivation to continue this work. However, merging actually ended up easier than I expected. The last time, merging one month's worth of upstream changes took me around one hour (including fixing additional tests and regressions). Regards Antoine. 
From barry at python.org Thu Mar 1 17:24:19 2012 From: barry at python.org (Barry Warsaw) Date: Thu, 1 Mar 2012 11:24:19 -0500 Subject: [Python-Dev] Spreading the Python 3 religion In-Reply-To: <20120301164226.3ff68c4f@pitrou.net> References: <20120228134113.B0E9F2500E4@webabinitio.net> <20120228095357.2b9fde87@resist.wooz.org> <20120228175056.Horde.KfPofklCcOxPTQXw0KqW1nA@webmail.df.eu> <20120301111136.GB4720@sleipnir.bytereef.org> <20120301153114.GA6307@sleipnir.bytereef.org> <20120301164226.3ff68c4f@pitrou.net> Message-ID: <20120301112419.77687197@limelight.wooz.org> On Mar 01, 2012, at 04:42 PM, Antoine Pitrou wrote: >Well, to be honest, "making good progress" currently means "bored and >not progressing at all" :-) But that's not due to the strategy I >adopted, only to the sheer amount of small changes needed, and lack of >immediate motivation to continue this work. For any porting strategy, the best thing to do is to get as many changes into upstream as possible that prepares the way for Python 3 support. For example, when I did the dbus-python port, upstream (rightly so) rejected my big all-together-now patch. Instead, we took a number of smaller steps, many of which were incorporated before the Python 3 support landed. These included: - Agreeing to Python 2.6 as a minimum base - #include and global PyString_* -> PyBytes_* conversion - (yes) adding future imports for unicode_literals, unadorning unicodes and adding b'' prefixes where necessary - fixing except clauses to use 'as' - removing L suffix on integer literals - lots of other little syntactic nits You could add to that things like print functions (although IIRC dbus-python had few if any of these), etc. So really, it was the same strategy as any porting process, but the key was breaking these up into reviewable chunks that could be applied while still keeping the code base Python 2 only. 
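To make the list above concrete, here is a small sketch (hypothetical code, not taken from dbus-python) that already runs unchanged on Python 2.6+ and 3.x after those preparatory changes:

```python
# Runs unchanged on Python 2.6+ and 3.x: future imports, b'' prefixes,
# 'except ... as' syntax, and no L suffix on integer literals.
from __future__ import print_function, unicode_literals

data = b"raw bytes"        # explicit bytes literal instead of bare str
text = "already unicode"   # unadorned literal is unicode on both

try:
    num = int("not a number")
except ValueError as exc:  # 'as' works on 2.6+, required on 3.x
    num = 0

big = 10 ** 20             # no L suffix; 2.x promotes to long automatically
print(text, num, big)
```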
I really do think that to the extent that you can do that kind of thing, you may end up with essentially Python 3 support without even realizing it. :) Cheers, -Barry From solipsis at pitrou.net Thu Mar 1 17:24:31 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 1 Mar 2012 17:24:31 +0100 Subject: [Python-Dev] Spreading the Python 3 religion References: <20120228134113.B0E9F2500E4@webabinitio.net> <20120228095357.2b9fde87@resist.wooz.org> <20120228175056.Horde.KfPofklCcOxPTQXw0KqW1nA@webmail.df.eu> <20120301111136.GB4720@sleipnir.bytereef.org> <20120301153114.GA6307@sleipnir.bytereef.org> <20120301164226.3ff68c4f@pitrou.net> <20120301112419.77687197@limelight.wooz.org> Message-ID: <20120301172431.1c358aba@pitrou.net> On Thu, 1 Mar 2012 11:24:19 -0500 Barry Warsaw wrote: > > I really do think that to the extent that you can do that kind of thing, you > may end up with essentially Python 3 support without even realizing it. :) That's unlikely. Twisted processes bytes data a lot, and the bytes indexing behaviour of 3.x is a chore for porting. Regards Antoine. From guido at python.org Thu Mar 1 18:00:07 2012 From: guido at python.org (Guido van Rossum) Date: Thu, 1 Mar 2012 09:00:07 -0800 Subject: [Python-Dev] Add a frozendict builtin type In-Reply-To: References: <61817B63-B0D8-46FA-8284-45F88266516D@gmail.com> Message-ID: On Thu, Mar 1, 2012 at 2:01 AM, Victor Stinner wrote: >>> The main idea of pysandbox is to reuse most of CPython but hide >>> "dangerous" functions and run untrusted code in a separated namespace. >>> The problem is to create the sandbox and ensure that it is not >>> possible to escape from this sandbox. pysandbox is still a >>> proof-of-concept, even if it works pretty well for short dummy >>> scripts. But pysandbox is not ready for real world programs. >> >> I hope you have studied (recent) history. Sandboxes in Python >> traditionally have not been secure. Read the archives for details. 
> > The design of pysandbox makes it difficult to implement. It is mostly > based on blacklist, so any omission would lead to a vulnerability. I > read the recent history of sandboxes and see other security modules > for Python, and I don't understand your reference to "Sandboxes in > Python traditionally have not been secure." There is no known > vulnerability in pysandbox, did I miss something? (there is only a > limitation on the dict API because of the lack of frozendict.) > > Are you talking about rexec/Bastion? (which cannot be qualified as "recent" :-)) > > pysandbox limitations are documented in its README file: > > << pysandbox is a sandbox for the Python namespace, not a sandbox between Python > and the operating system. It doesn't protect your system against Python > security vulnerabilities: vulnerabilities in modules/functions available in > your sandbox (depend on your sandbox configuration). By default, only few > functions are exposed to the sandbox namespace which limits the attack surface. > > pysandbox is unable to limit the memory of the sandbox process: you have to use > your own protection. >> > > Hum, I am also not sure that pysandbox "works" with threads :-) I mean > that enabling pysandbox impacts all running threads, not only one > thread, which can cause issues. It should also be mentioned. > > PyPy sandbox has a different design: it uses a process with no > privilege, all syscalls are redirected to another process which applies > security checks to each syscall. > http://doc.pypy.org/en/latest/sandbox.html > > See also the seccomp-nurse project, a generic sandbox using Linux SECCOMP: > http://chdir.org/~nico/seccomp-nurse/ > > See also pysandbox README for a list of other Python security modules. Hm. I can't tell what the purpose of a sandbox is from what you quote from your own README here (and my cellphone tethering is slow enough that clicking on the links doesn't work right now). The sandboxes I'm familiar with (e.g.
Google App Engine) are intended to allow untrusted third parties to execute (more or less) arbitrary code while strictly controlling which resources they can access. In App Engine's case, an attacker who broke out of the sandbox would have access to the inside of Google's datacenter, which would obviously be bad -- that's why Google has developed its own sandboxing technologies. I do know that I don't feel comfortable having a sandbox in the Python standard library or even recommending a 3rd party sandboxing solution -- if someone uses the sandbox to protect a critical resource, and a hacker breaks out of the sandbox, the author of the sandbox may be held responsible for more than they bargained for when they made it open source. (Doesn't an open source license limit your responsibility? Who knows. AFAIK this question has not gotten to court yet. I wouldn't want to have to go to court over it.) I wasn't just referring to rexec/Bastion (though that definitely shaped my thinking about this issue; much more recently someone (Tal, I think was his name?) tried to come up with a sandbox and every time he believed he had a perfect solution, somebody found a loophole. (Hm..., you may have been involved that time yourself. :-) -- --Guido van Rossum (python.org/~guido) From rdmurray at bitdance.com Thu Mar 1 18:16:06 2012 From: rdmurray at bitdance.com (R.
David Murray) Date: Thu, 01 Mar 2012 12:16:06 -0500 Subject: [Python-Dev] Spreading the Python 3 religion In-Reply-To: <20120301172431.1c358aba@pitrou.net> References: <20120228134113.B0E9F2500E4@webabinitio.net> <20120228095357.2b9fde87@resist.wooz.org> <20120228175056.Horde.KfPofklCcOxPTQXw0KqW1nA@webmail.df.eu> <20120301111136.GB4720@sleipnir.bytereef.org> <20120301153114.GA6307@sleipnir.bytereef.org> <20120301164226.3ff68c4f@pitrou.net> <20120301112419.77687197@limelight.wooz.org> <20120301172431.1c358aba@pitrou.net> Message-ID: <20120301171607.053FA2500E5@webabinitio.net> On Thu, 01 Mar 2012 17:24:31 +0100, Antoine Pitrou wrote: > On Thu, 1 Mar 2012 11:24:19 -0500 > Barry Warsaw wrote: > > > > I really do think that to the extent that you can do that kind of thing, you > > may end up with essentially Python 3 support without even realizing it. :) > > That's unlikely. Twisted processes bytes data a lot, and the bytes > indexing behaviour of 3.x is a chore for porting. The dodges you have to use work fine in python2 as well, though, so I think Barry's point stands, even if it does make the python2 code a bit uglier...but not as bad as the 2.5 exception hacks. Still, I'll grant that it would be a harder sell to upstream than the changes Barry mentioned. On the other hand, it's not like the code will get *prettier* once you drop Python2 support :(. 
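The bytes-indexing chore mentioned above, and the usual dodge for it, fit in a couple of lines (a sketch: on 3.x indexing a bytes object yields an int, on 2.x a one-byte str, while slicing yields bytes on both):

```python
data = b"GET / HTTP/1.0"

# On Python 2, data[0] is the one-byte string 'G'; on Python 3 it is
# the integer 71, so data[0] == b"G" is always False on 3.x.
# The dodge that works on both versions is a length-1 slice:
first = data[0:1]
assert first == b"G"

print(data[0:1] == b"G")   # True on 2.x and 3.x alike
```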
--David From guido at python.org Thu Mar 1 18:42:56 2012 From: guido at python.org (Guido van Rossum) Date: Thu, 1 Mar 2012 09:42:56 -0800 Subject: [Python-Dev] PEP 414 In-Reply-To: <20120229092856.2aeb9256@limelight.wooz.org> References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <20120229092856.2aeb9256@limelight.wooz.org> Message-ID: I noticed there were some complaints about unnecessarily offensive language in PEP 414. Have those passages been edited to everyone's satisfaction? -- --Guido van Rossum (python.org/~guido) From victor.stinner at gmail.com Thu Mar 1 18:44:53 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 1 Mar 2012 18:44:53 +0100 Subject: [Python-Dev] Add a frozendict builtin type In-Reply-To: References: <61817B63-B0D8-46FA-8284-45F88266516D@gmail.com> Message-ID: > In App Engine's case, an attacker who broke out of the sandbox would have > access to the inside of Google's datacenter, which would obviously be > bad -- that's why Google has developed its own sandboxing > technologies. This is not specific to Google: if an attacker breaks a sandbox, he/she has access to everything. Depending on how the sandbox is implemented, you have more or less code to audit. pysandbox disables introspection in Python and creates an empty namespace to reduce the attack surface as much as possible. You have to be very careful when you add a new feature/function and it is complex.
> I do know that I don't feel comfortable having a sandbox in the Python > standard library or even recommending a 3rd party sandboxing solution frozendict would help pysandbox, but also any other Python security module; and not only security, but also (many) other use cases ;-) > I wasn't just referring to rexec/Bastion (though that definitely > shaped my thinking about this issue; much more recently someone (Tal, > I think was his name?) tried to come up with a sandbox and every time > he believed he had a perfect solution, somebody found a loophole. > (Hm..., you may have been involved that time yourself. :-) pysandbox is based on tav's approach, but it is more complete and implements more protections. It is also more functional (you have more available functions and features). I challenge anyone to try to break pysandbox! Victor From storchaka at gmail.com Thu Mar 1 18:56:38 2012 From: storchaka at gmail.com (Serhiy Storchaka) Date: Thu, 01 Mar 2012 19:56:38 +0200 Subject: [Python-Dev] Add a frozendict builtin type In-Reply-To: <201203011547.08603.nd@perlig.de> References: <201203011029.32559.nd@perlig.de> <201203011547.08603.nd@perlig.de> Message-ID: 01.03.12 16:47, André Malo wrote: > On Thursday 01 March 2012 15:17:35 Serhiy Storchaka wrote: >> This is the first rational use of frozendict that I see. However, a deep >> copy is still necessary to create the frozendict. For this case, I >> believe, it would be better to "freeze" the dict in place and then copy-on-write >> it. > In my case it's actually a half one. The data mostly comes from memcache ;) > I'm populating the object and then I'm done with it. People wanting to modify > it, need to copy it, yes. OTOH usually a shallow copy is enough (here). What if people modify dicts in depth? a = frozendict({1: {2: 3}}) b = a.copy() c = a.copy() assert b[1][2] == 3 c[1][2] = 4 assert b[1][2] == 4 You need to copy the incoming dict in depth.
def frozencopy(value): if isinstance(value, list): return tuple(frozencopy(x) for x in value) if isinstance(value, dict): return frozendict((frozencopy(k), frozencopy(v)) for k, v in value.items()) return value # I'm lucky And when a client wants to modify the result in depth, it should call "unfrozencopy". Using frozendict is profitable only when multiple clients are reading the result but not modifying it. Copy-on-write would help in all cases and would simplify the code. But this is a topic for python-ideas, sorry. From p.f.moore at gmail.com Thu Mar 1 19:06:14 2012 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 1 Mar 2012 18:06:14 +0000 Subject: [Python-Dev] Add a frozendict builtin type In-Reply-To: References: <61817B63-B0D8-46FA-8284-45F88266516D@gmail.com> Message-ID: On 1 March 2012 17:44, Victor Stinner wrote: > I challenge anyone to try to break pysandbox! Can you explain precisely how a frozendict will help pysandbox? Then I'll be able to beat this challenge :-) Paul. From guido at python.org Thu Mar 1 19:07:20 2012 From: guido at python.org (Guido van Rossum) Date: Thu, 1 Mar 2012 10:07:20 -0800 Subject: [Python-Dev] Add a frozendict builtin type In-Reply-To: References: <61817B63-B0D8-46FA-8284-45F88266516D@gmail.com> Message-ID: On Thu, Mar 1, 2012 at 9:44 AM, Victor Stinner wrote: > frozendict would help pysandbox, but also any other Python security module; > and not only security, but also (many) other use cases ;-) Well, let's focus on the other use cases, because to me the sandbox use case is too controversial (never mind how confident you are :-). I like thinking through the cache use case a bit more, since this is a common pattern. But I think it would be sufficient there to prevent accidental modification, so it should be sufficient to have a dict subclass that overrides the various mutating methods: __setitem__, __delitem__, pop(), popitem(), clear(), setdefault(), update().
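A minimal sketch of such a subclass, enough to stop accidental mutation but deliberately not a security barrier:

```python
# Guards against *accidental* modification only; dict.__setitem__(d, k, v)
# still mutates it, so this is not a security measure.
class ReadOnlyDict(dict):
    def _readonly(self, *args, **kwargs):
        raise TypeError("read-only dict")

    __setitem__ = __delitem__ = _readonly
    pop = popitem = clear = setdefault = update = _readonly

d = ReadOnlyDict({"a": 1})
print(d["a"])      # reading works as usual
try:
    d["b"] = 2     # every mutating method raises TypeError
except TypeError:
    pass
print("b" in d)    # False: the store was rejected
```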
Technically also __init__() -- although calling __init__() on an existing object can hardly be called an accident. As was pointed out this is easy to circumvent, but (together with a reminder in the documentation) should be sufficient to avoid mistakes. I imagine someone who actively wants to mess with the cache can probably also reach into the cache implementation directly. Also don't underestimate the speed of a shallow dict copy. What other use cases are there? (I have to agree with the folks pushing back hard. Even demonstrated repeated requests for a certain feature do not prove a need -- it's quite common for people who are trying to deal with some problem to go down the wrong rabbit hole in their quest for a solution, and end up thinking they need a certain feature while completely overlooking a much simpler solution.) -- --Guido van Rossum (python.org/~guido) From victor.stinner at gmail.com Thu Mar 1 19:23:47 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 1 Mar 2012 19:23:47 +0100 Subject: [Python-Dev] Sandboxing Python Message-ID: Hi, The frozendict discussion switched to sandboxing at some point, and so I prefer to start a new thread. There are various ways to implement a sandbox, but I would like to expose here how I implemented pysandbox to have your opinion. pysandbox is written to quickly execute a short untrusted function in a sandbox and then continue the normal execution of the program. It is possible to "enable" the sandbox, but also later to "disable" it. It is written for Python using only one thread and one process. To create a sandbox, pysandbox uses various protections. The main idea is to create an empty namespace and ensure that it is not possible to use objects added into the namespace for escaping from the sandbox. pysandbox only uses one thread and one process and so it doesn't replace the existing trusted namespace, but creates a new one. The security of pysandbox depends on the sandbox namespace sealing.
I don't want to integrate pysandbox in CPython because I am not yet convinced that the approach is secure by design. I am trying to patch Python to help the implementation of Python security modules and of read-only proxies. You can find below the list of protections implemented in pysandbox. Some of them are implemented in C. I challenge anyone to break pysandbox! I would be happy if anyone breaks it because it would make it stronger. https://github.com/haypo/pysandbox/ http://pypi.python.org/pypi/pysandbox Namespace ========= * Make builtins read only * Remove function attributes: * frame.f_locals * function.func_closure/__closure__ * function.func_defaults/__defaults__ * function.func_globals/__globals__ * type.__subclasses__ * builtin_function.__self__ * Work around the lack of frozendict, remove dict attributes: * __init__ * clear * __delitem__ * pop * popitem * setdefault * __setitem__ * update * Create a proxy for objects injected to the sandbox namespace and for the result of functions (the result of callable objects is also proxified) Generic ======= Remove all builtin symbols not in the whitelist. Features ======== import ------ * Replace __import__ function to use an import whitelist Filesystem ---------- * Replace open and file functions to deny access to the filesystem Exit ---- * Replace exit function * Remove SystemExit builtin exception Standard input/output --------------------- * Replace sys.stdin, sys.stdout and sys.stderr Bytecode ======== Executing arbitrary bytecode may crash Python, or lead to execution of arbitrary (machine) code.
* Patch code.__new__() * Remove attributes: * function.func_code/__code__ * frame.f_code * generator.gi_code Victor From victor.stinner at gmail.com Thu Mar 1 19:29:14 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 1 Mar 2012 19:29:14 +0100 Subject: [Python-Dev] Add a frozendict builtin type In-Reply-To: References: <61817B63-B0D8-46FA-8284-45F88266516D@gmail.com> Message-ID: >> I challenge anyone to try to break pysandbox! > > Can you explain precisely how a frozendict will help pysandbox? Then > I'll be able to beat this challenge :-) See this email: http://mail.python.org/pipermail/python-dev/2012-February/117011.html The issue #14162 has also two patches: one to make it possible to use frozendict for __builtins__, and another one to create read-only types (which is more a proof-of-concept). http://bugs.python.org/issue14162 Victor From barry at python.org Thu Mar 1 19:46:53 2012 From: barry at python.org (Barry Warsaw) Date: Thu, 1 Mar 2012 13:46:53 -0500 Subject: [Python-Dev] PEP 414 In-Reply-To: References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <20120229092856.2aeb9256@limelight.wooz.org> Message-ID: <20120301134653.6a2c5ef6@resist.wooz.org> On Mar 01, 2012, at 09:42 AM, Guido van Rossum wrote: >I noticed there were some complaints about unnecessarily offensive >language in PEP 414. Have those passages been edited to everyone's >satisfaction? Not yet, but I believe Nick volunteered to do a rewrite. 
-Barry From albl500 at york.ac.uk Thu Mar 1 19:39:19 2012 From: albl500 at york.ac.uk (Alex Leach) Date: Thu, 01 Mar 2012 18:39:19 +0000 Subject: [Python-Dev] Compiling Python on Linux with Intel's icc Message-ID: <1998737.Z68k5Lus0U@metabuntu> Dear Python Devs, I've been attempting to compile a fully functional version of Python 2.7 using Intel's C compiler, having built supposedly optimal versions of numpy and scipy, using Intel Composer XE and Intel's Math Kernel Library. I can build a working Python binary, but I'd really appreciate it if someone could check my compile options, and perhaps suggest ways I could further optimise the build. *** COMPILE FAILURE - ffi64.c *** I've managed to compile everything in the python distribution except for Modules/_ctypes/libffi/src/x86/ffi64.c. So to get the compilation to actually work, I've had to use the config option '--with-system-ffi'. If someone could suggest a patch for ffi64.c, I'd happily test it, as I've been unable to fix the code myself! The problem is with register_args, which uses GCC's __int128_t, but this doesn't exist when using icc. The include guard to use could be:- #ifdef __INTEL_COMPILER ... #else ... #endif I've tried using this guard around the register_args struct, at the top of ffi64.c, and where I see register_args used, around lines 592-616, according to the suggestion at http://software.intel.com/en-us/forums/showthread.php?t=56652, but have been unable to get a working solution... A patch would be appreciated! *** Tests *** After compilation, there's a few tests that are consistently failing, mainly involving floating point precision: test_cmath, test_math and test_float. Also, I wrote a very short script to test the time of for loop execution and integer multiplication. This script (below) has nearly always completed faster using the default Ubuntu Python rather than my own build.
Obviously, I was hoping to get a faster python, but the size of the final binary is almost twice the size of the default Ubuntu version (5.2MB cf. 2.7MB), which I thought might cause a startup overhead that leads to slower execution times when running such a basic script. *** TEST SCRIPT *** $ cat ~/bin/timetest.py RANGE = 10000 print "running {0}^2 = {1} for loop iterations".format( RANGE,RANGE**2 ) for i in xrange(RANGE): for j in xrange(RANGE): i * j *** TIMES *** ## ICC-compiled python ## $ time ./python ~/bin/timetest.py running 10000^2 = 100000000 for loop iterations real 0m2.767s user 0m2.720s sys 0m0.008s ## System python ## $ time python ~/bin/timetest.py running 10000^2 = 100000000 for loop iterations real 0m2.781s user 0m2.776s sys 0m0.000s Oh... My python appears to run faster than gcc's now - checked this a few times now, mine's staying faster... :) I've compiled and re-compiled python dozens of times now, but it's still failing some tests... *** Build Environment *** Ubuntu 10.10 server kernel (`uname -r`=3.0.0-16-server) with KDE 4.7.4 $ tail ~/.bashrc #### Custom Commands export PATH=$PATH:/usr/local/cuda/bin:$HOME/bin export PYTHONPATH=$HOME/bin:/usr/lib/pymodules/python2.7 export PYTHONSTARTUP=$HOME/.pystartup export LD_LIBRARY_PATH=/lib64:/usr/lib64:/usr/local/lib:/usr/local/cuda/lib64:/usr/local/cuda/lib # Load Intel compiler and library variables. 
source /usr/intel/bin/compilervars.sh intel64
source /usr/intel/impi/4.0.3/bin/mpivars.sh intel64
source /usr/intel/tbb/bin/tbbvars.sh intel64

$ env | grep 'PATH\|FLAGS'
MANPATH=/usr/intel/impi/4.0.3.008/man:/usr/intel/composer_xe_2011_sp1.9.293/man/en_US:/usr/intel/composer_xe_2011_sp1.9.293/man/en_US:/usr/intel/impi/4.0.3.008/man:/usr/intel/composer_xe_2011_sp1.9.293/man/en_US:/usr/intel/composer_xe_2011_sp1.9.293/man/en_US:/usr/intel/impi/4.0.3.008/man:/usr/intel/composer_xe_2011_sp1.9.293/man/en_US:/usr/intel/composer_xe_2011_sp1.9.293/man/en_US:/usr/local/man:/usr/local/share/man:/usr/share/man:/usr/intel/man:::
LIBRARY_PATH=/usr/intel/composer_xe_2011_sp1.9.293/tbb/lib/intel64//cc4.1.0_libc2.4_kernel2.6.16.21:/usr/intel/composer_xe_2011_sp1.9.293/compiler/lib/intel64:/usr/intel/composer_xe_2011_sp1.9.293/ipp/../compiler/lib/intel64:/usr/intel/composer_xe_2011_sp1.9.293/ipp/lib/intel64:/usr/intel/composer_xe_2011_sp1.9.293/compiler/lib/intel64:/usr/intel/composer_xe_2011_sp1.9.293/mkl/lib/intel64:/usr/intel/composer_xe_2011_sp1.9.293/tbb/lib/intel64//cc4.1.0_libc2.4_kernel2.6.16.21
FPATH=/usr/intel/composer_xe_2011_sp1.9.293/mkl/include:/usr/intel/composer_xe_2011_sp1.9.293/mkl/include
LD_LIBRARY_PATH=/usr/intel/composer_xe_2011_sp1.9.293/tbb/lib/intel64//cc4.1.0_libc2.4_kernel2.6.16.21:/usr/intel/impi/4.0.3.008/ia32/lib:/usr/intel/composer_xe_2011_sp1.9.293/compiler/lib/intel64:/usr/intel/composer_xe_2011_sp1.9.293/ipp/../compiler/lib/intel64:/usr/intel/composer_xe_2011_sp1.9.293/ipp/lib/intel64:/usr/intel/composer_xe_2011_sp1.9.293/compiler/lib/intel64:/usr/intel/composer_xe_2011_sp1.9.293/mkl/lib/intel64:/usr/intel/composer_xe_2011_sp1.9.293/tbb/lib/intel64//cc4.1.0_libc2.4_kernel2.6.16.21:/biol/arb/lib:/lib64:/usr/lib64:/usr/local/lib:/usr/local/cuda/lib64:/usr/local/cuda/lib:/usr/intel/composer_xe_2011_sp1.9.293/debugger/lib/intel64:/usr/intel/composer_xe_2011_sp1.9.293/mpirt/lib/intel64
CPATH=/usr/intel/composer_xe_2011_sp1.9.293/tbb/include:/usr/intel/composer_xe_2011_sp1.9.293/mkl/include:/usr/intel/composer_xe_2011_sp1.9.293/tbb/include
NLSPATH=/usr/intel/composer_xe_2011_sp1.9.293/compiler/lib/intel64/locale/%l_%t/%N:/usr/intel/composer_xe_2011_sp1.9.293/ipp/lib/intel64/locale/%l_%t/%N:/usr/intel/composer_xe_2011_sp1.9.293/mkl/lib/intel64/locale/%l_%t/%N:/usr/intel/composer_xe_2011_sp1.9.293/debugger/intel64/locale/%l_%t/%N
PATH=/usr/intel/impi/4.0.3.008/ia32/bin:/usr/intel/composer_xe_2011_sp1.9.293/bin/intel64:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/intel/bin:/usr/local/cuda/bin:/usr/local/cuda/bin:/usr/intel/composer_xe_2011_sp1.9.293/mpirt/bin/intel64
PYTHONPATH=/usr/lib/pymodules/python2.7/
WINDOWPATH=7
QT_PLUGIN_PATH=$HOME/.kde/lib/kde4/plugins/:/usr/lib/kde4/plugins/

*** Download, configure and Build instructions ***

$ hg clone -r 2.7 http://hg.python.org/cpython
Since...
$ hg update -r 2.7

*** Generate Profile-Guided Optimisation stuff with first build ***

$ make distclean && mkdir PGO
$ CC=icc AR=xiar LD=xild CXX=icpc \
  CPPFLAGS+="-I/usr/include \
    -I/usr/include/x86_86-linux-gnu \
    -I/usr/src/linux-headers-3.0.0-16-server/include/" \
  CFLAGS+="-O3 \
    -fomit-frame-pointer \
    -shared-intel \
    -fpic \
    -prof-gen \
    -prof-dir $PWD/PGO \
    -fp-model precise \
    -fp-model source \
    -xHost \
    -ftz" \
  ./configure --with-system-ffi --with-libc="-lirc" --with-libm="-limf"
$ make -j9

*** Use the PGO-generated information in new build ***

$ make clean
$ CC=icc AR=xiar LD=xild CXX=icpc \
  CPPFLAGS+="-I/usr/include \
    -I/usr/include/x86_86-linux-gnu \
    -I/usr/src/linux-headers-3.0.0-16-server/include/" \
  CFLAGS+="-O3 \
    -fomit-frame-pointer \
    -shared-intel \
    -fpic \
    -prof-use \
    -prof-dir $PWD/PGO \
    -fp-model precise \
    -fp-model source \
    -xHost \
    -ftz \
    -fomit-frame-pointer" \
  ./configure --with-system-ffi --with-libc="-lirc" --with-libm="-limf"
$ make -j9
...
$ make test
building dbm using gdbm

Python build finished, but the necessary bits to build these modules were not found:
_bsddb bsddb185 dl imageop sunaudiodev
To find the necessary bits, look in setup.py in detect_modules() for the module's name.

find ./Lib -name '*.py[co]' -print | xargs rm -f
./python -Wd -3 -E -tt ./Lib/test/regrtest.py -l
/usr/local/src/pysrc/cpython/Lib/unittest/util.py:2: ImportWarning: Not importing directory '/usr/local/src/pysrc/cpython/Lib/collections': missing __init__.py
  from collections import namedtuple, OrderedDict
== CPython 2.7.3rc1 (2.7:5c52e7c6d868+, Feb 29 2012, 22:10:22) [GCC Intel(R) C++ gcc 4.6 mode]
== Linux-3.0.0-16-server-x86_64-with-debian-wheezy-sid little-endian
== /usr/local/src/pysrc/cpython/build/test_python_16278
Testing with flags: sys.flags(debug=0, py3k_warning=1, division_warning=1, division_new=0, inspect=0, interactive=0, optimize=0, dont_write_bytecode=0, no_user_site=0, no_site=0, ignore_environment=1, tabcheck=2, verbose=0, unicode=0, bytes_warning=0, hash_randomization=0)
.........
test_cmath
test test_cmath failed -- Traceback (most recent call last):
  File "/usr/local/src/pysrc/cpython/Lib/test/test_cmath.py", line 352, in test_specific_values
    msg=error_message)
  File "/usr/local/src/pysrc/cpython/Lib/test/test_cmath.py", line 94, in rAssertAlmostEqual
    'got {!r}'.format(a, b))
AssertionError: acos0000: acos(complex(0.0, 0.0))
Expected: complex(1.5707963267948966, -0.0)
Received: complex(1.5707963267948966, 0.0)
Received value insufficiently close to expected value.
...
test_curses skipped -- Use of the `curses' resource not enabled
...
test_float
test test_float failed -- Traceback (most recent call last):
  File "/usr/local/src/pysrc/cpython/Lib/test/test_float.py", line 1273, in test_from_hex
    self.identical(fromHex('0x0.ffffffffffffd6p-1022'), MIN-3*TINY)
  File "/usr/local/src/pysrc/cpython/Lib/test/test_float.py", line 983, in identical
    self.fail('%r not identical to %r' % (x, y))
AssertionError: 0.0 not identical to 2.2250738585072014e-308
.....
test test_strtod failed -- multiple errors occurred; run in verbose mode for details
......
347 tests OK.
5 tests failed:
    test_cmath test_float test_gdb test_math test_strtod
1 test altered the execution environment:
    test_distutils
37 tests skipped:
    test_aepack test_al test_applesingle test_bsddb test_bsddb185 test_bsddb3 test_cd test_cl test_codecmaps_cn test_codecmaps_hk test_codecmaps_jp test_codecmaps_kr test_codecmaps_tw test_curses test_dl test_gl test_imageop test_imgfile test_kqueue test_linuxaudiodev test_macos test_macostools test_msilib test_ossaudiodev test_scriptpackages test_smtpnet test_socketserver test_startfile test_sunaudiodev test_timeout test_tk test_ttk_guionly test_urllib2net test_urllibnet test_winreg test_winsound test_zipfile64
4 skips unexpected on linux2:
    test_bsddb test_bsddb3 test_tk test_ttk_guionly
make: *** [test] Error 1

*** Drill down to test_strtod error ***

$ ./python
Python 2.7.3rc1 (2.7:5c52e7c6d868+, Feb 29 2012, 22:10:22) [GCC Intel(R) C++ gcc 4.6 mode] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from test import test_strtod
>>> test_strtod.test_main()
test_bigcomp (test.test_strtod.StrtodTests) ... FAIL
test_boundaries (test.test_strtod.StrtodTests) ... FAIL
test_halfway_cases (test.test_strtod.StrtodTests) ... ok
test_parsing (test.test_strtod.StrtodTests) ... FAIL
test_particular (test.test_strtod.StrtodTests) ... FAIL
test_short_halfway_cases (test.test_strtod.StrtodTests) ... ok
test_underflow_boundary (test.test_strtod.StrtodTests) ...
FAIL

======================================================================
FAIL: test_bigcomp (test.test_strtod.StrtodTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/src/pysrc/cpython/Lib/test/test_strtod.py", line 214, in test_bigcomp
    self.check_strtod(s)
  File "/usr/local/src/pysrc/cpython/Lib/test/test_strtod.py", line 105, in check_strtod
    "expected {}, got {}".format(s, expected, got))
AssertionError: Incorrectly rounded str->float conversion for 81608e-328: expected 0x0.0000000000002p-1022, got 0x0.0p+0

======================================================================
FAIL: test_boundaries (test.test_strtod.StrtodTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/src/pysrc/cpython/Lib/test/test_strtod.py", line 191, in test_boundaries
    self.check_strtod(s)
  File "/usr/local/src/pysrc/cpython/Lib/test/test_strtod.py", line 105, in check_strtod
    "expected {}, got {}".format(s, expected, got))
AssertionError: Incorrectly rounded str->float conversion for 22250738585072002149149e-330: expected 0x0.ffffffffffffep-1022, got 0x0.0p+0

======================================================================
FAIL: test_parsing (test.test_strtod.StrtodTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/src/pysrc/cpython/Lib/test/test_strtod.py", line 243, in test_parsing
    self.check_strtod(s)
  File "/usr/local/src/pysrc/cpython/Lib/test/test_strtod.py", line 105, in check_strtod
    "expected {}, got {}".format(s, expected, got))
AssertionError: Incorrectly rounded str->float conversion for -6.E-310: expected -0x0.06e7344a56502p-1022, got -0x0.0p+0

======================================================================
FAIL: test_particular (test.test_strtod.StrtodTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/src/pysrc/cpython/Lib/test/test_strtod.py", line 393, in test_particular
    self.check_strtod(s)
  File "/usr/local/src/pysrc/cpython/Lib/test/test_strtod.py", line 105, in check_strtod
    "expected {}, got {}".format(s, expected, got))
AssertionError: Incorrectly rounded str->float conversion for 12579816049008305546974391768996369464963024663104e-357: expected 0x0.90bbd7412d19fp-1022, got 0x0.0p+0

======================================================================
FAIL: test_underflow_boundary (test.test_strtod.StrtodTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/src/pysrc/cpython/Lib/test/test_strtod.py", line 205, in test_underflow_boundary
    self.check_strtod(s)
  File "/usr/local/src/pysrc/cpython/Lib/test/test_strtod.py", line 105, in check_strtod
    "expected {}, got {}".format(s, expected, got))
AssertionError: Incorrectly rounded str->float conversion for 24703282292062327208828439643411068618252990130716238221279284125033775363572e-400: expected 0x0.0000000000001p-1022, got 0x0.0p+0

----------------------------------------------------------------------
Ran 7 tests in 0.280s

FAILED (failures=5)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/src/pysrc/cpython/Lib/test/test_strtod.py", line 396, in test_main
    test_support.run_unittest(StrtodTests)
  File "/usr/local/src/pysrc/cpython/Lib/test/test_support.py", line 1094, in run_unittest
    _run_suite(suite)
  File "/usr/local/src/pysrc/cpython/Lib/test/test_support.py", line 1077, in _run_suite
    raise TestFailed(err)
test.test_support.TestFailed: multiple errors occurred

*** Binary size and linked libraries ***

## My Intel build ##
$ ls -l ./python && ldd ./python
-rwxrwxr-x 1 user user 5.2M 2012-02-29 22:10 ./python
	linux-vdso.so.1 =>  (0x00007fffde1ec000)
	libirc.so => /usr/intel/composer_xe_2011_sp1.9.293/compiler/lib/intel64/libirc.so (0x00007fe5f0f30000)
	libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fe5f0cde000)
	libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fe5f0ada000)
	libutil.so.1 => /lib/x86_64-linux-gnu/libutil.so.1 (0x00007fe5f08d7000)
	libimf.so => /usr/intel/composer_xe_2011_sp1.9.293/compiler/lib/intel64/libimf.so (0x00007fe5f050b000)
	libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fe5f0287000)
	libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007fe5f0071000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fe5efcd1000)
	/lib64/ld-linux-x86-64.so.2 (0x00007fe5f107e000)
	libintlc.so.5 => /usr/intel/composer_xe_2011_sp1.9.293/compiler/lib/intel64/libintlc.so.5 (0x00007fe5efb85000)

## System build ##
$ ls -lhH /usr/bin/python && ldd /usr/bin/python
-rwxr-xr-x 1 root root 2.7M 2011-10-04 22:26 /usr/bin/python
	linux-vdso.so.1 =>  (0x00007fff509ff000)
	libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f3e339b0000)
	libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f3e337ab000)
	libutil.so.1 => /lib/x86_64-linux-gnu/libutil.so.1 (0x00007f3e335a8000)
	libssl.so.1.0.0 => /lib/x86_64-linux-gnu/libssl.so.1.0.0 (0x00007f3e33357000)
	libcrypto.so.1.0.0 => /lib/x86_64-linux-gnu/libcrypto.so.1.0.0 (0x00007f3e32fa7000)
	libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f3e32d8f000)
	libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f3e32b0b000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f3e3276b000)
	/lib64/ld-linux-x86-64.so.2 (0x00007f3e33c03000)

*** Conclusion (finally!) ***

The Intel Python build looks very promising, but I don't yet trust it to the extent that I'd go ahead and install it or use it in place of the system build. None of the errors look too alarming though, so I'm confident that I could actually get this to work, with the right help. If someone could help me pass these final tests and compile the ffi64.c module, that'd be amazing!

I hope to hear back from you,
Kind regards,
Alex

ps. Sorry about how long this email turned out! pps.
I'd be happy to write up the fully working solution on a wiki or somewhere, if anyone has any suggestions where?

From vinay_sajip at yahoo.co.uk  Thu Mar  1 20:00:57 2012
From: vinay_sajip at yahoo.co.uk (Vinay Sajip)
Date: Thu, 1 Mar 2012 19:00:57 +0000 (UTC)
Subject: [Python-Dev] PEP 414
References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <20120229092856.2aeb9256@limelight.wooz.org>
Message-ID: 

Guido van Rossum writes:

> I noticed there were some complaints about unnecessarily offensive
> language in PEP 414. Have those passages been edited to everyone's
> satisfaction?

I'm not sure if Nick has finished his updates, but I for one would like to see some improvements in a few places:

"Many thought that the unicode_literals future import might make a common source possible, but it turns out that it's doing more harm than good."

Rather than talking about it doing more harm than good, it would be better to say that unicode_literals is not the best solution in some scenarios (specifically, WSGI, but any other scenarios can also be mentioned). The "more harm than good" is not true in all scenarios, but as it's worded now, it seems like it is always a bad approach.

"(either by having a u function that marks things as unicode without future imports or the inverse by having a n function that marks strings as native). Unfortunately, this has the side effect of slowing down the runtime performance of Python and makes for less beautiful code."

The use of u() and n() is not equivalent, in the sense that n() only has to be used when unicode_literals is in effect, and the incidence of n() calls in an application would be much lower than using u() in the absence of unicode_literals.
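Concretely, the two wrapper styles being compared can be sketched like this (u() and n() are illustrative helper names, not from any particular library):

```python
import sys

if sys.version_info[0] >= 3:
    def u(s):
        return s          # literals are already text on 3.x
    def n(s):
        return s          # the native string type on 3.x is str
else:
    def u(s):
        # Without unicode_literals: mark a literal as text. Ports commonly
        # do this by (ab)using the unicode-escape codec on native strings.
        return s.decode('unicode-escape')
    def n(s):
        # With unicode_literals in effect: force a literal back to a
        # native (byte) string, needed only at certain API boundaries.
        return s.encode('ascii')

print(u('text'), n('native'))
```

With unicode_literals, only the rarer n() calls remain, which is why the wrapper overhead is diluted in that style.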
In at least some cases, it is possible that some of the APIs which fail unless native strings are provided may be broken (e.g. some database adapters expect datetimes in ISO format as native strings, where there is no apparent reason why they couldn't accept them as text).

As far as "less beautiful" code is concerned, it's subjective: I see nothing especially ugly about 'xxx' for text, and certainly don't find u'xxx' "more" beautiful - and I doubt if I'm the only person with that view. The point about the added cognitive burden of semantic-changing __future__ imports is, however, quite valid.

"As it stands, when choosing between 2.7 and Python 3.2, Python 3 is currently not the best choice for certain long-term investments, since the ecosystem is not yet properly developed, and libraries are still fighting with their API decisions for Python 3."

This looks to become a self-fulfilling prophecy, if you take it seriously. You would expect that, if Python 3 is the future of Python, then Python 3 is *precisely* the choice for *long*-term investments. The ecosystem is not yet fully developed, true: but that is because some people aren't ready to grasp the nettle and undergo the short-term pain required to get things in order. By "things", I mean places in existing 2.x code where no distinction was made between bytes and text, which you could get away with because of 2.x's forgiving nature. Whether you're using unicode_literals and 'xxx' or u'xxx', these things will need to be sorted out, and the syntax element is only one possible focus.

If that entire sentence is removed, it does the PEP no harm, and the PEP will antagonise fewer people.

"A valid point is that this would encourage people to become dependent on Python 3.3 for their ports. Fortunately that is not a big problem since that could be fixed at installation time similar to how many projects are currently invoking 2to3 as part of their installation process."
Yes, but avoiding the very pain of running 2to3 is what (at least in part) motivates the PEP in the first place. This appears to be moving the pain that 2.x developers feel when trying to move to 3.x, to people who want to support 3.2 and 3.3 and 2.6+ in the same codebase. "For Python 3.1 and Python 3.2 (even 3.0 if necessary) a simple on-installation hook could be provided that tokenizes all source files and strips away the otherwise unnecessary u prefix at installation time." There's some confusion about this hook - The PEP calls it an on-installation hook (like 2to3) but Nick said it was an import-time hook. I'm more comfortable with the latter - it has a chance of providing an acceptable performance for a large codebase, as it will only kick in when .py files are newer than their .pyc. A 2to3 like hook, when working with a large codebase like Django, is likely to be about as painful as people are finding 2to3 now (when used in an edit-test-edit-test workflow). "Possible Downsides" does not mention any possible adverse impact on single codebase for 3.2/3.3, which I mention only because it's still not clear how the hook which is to make 3.2 development easier will work (in terms of its impact on development workflow). In the section on "Modernizing code", "but to make strings cheap for both 2.x and 3.x it is nearly impossible. The way it currently works is by abusing the unicode-escape codec on Python 2.x native strings." IIUC, the unicode-escape codec is only needed if you don't use unicode_literals - am I wrong about that? How are strings not equally cheap (near enough) on 2.x and 3.x if you use unicode_literals? In the "Runtime overhead of wrappers", the times may be valid, but a rider should be added to the effect that in a realistic workload, the wrapper overhead will be somewhat diluted where wrapper calls are fairly infrequent (i.e. the unicode_literals and n() case). 
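For what it's worth, the u-prefix-stripping part of such a hook can be sketched in a few lines with the tokenize module (illustrative only - this is not the PEP's actual hook, and a real one would need caching and error handling):

```python
import io
import tokenize

def strip_u_prefixes(source):
    """Rewrite u'...'/U'...' string literals as plain '...' literals."""
    result = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        tok_type, tok_string = tok[0], tok[1]
        if tok_type == tokenize.STRING and tok_string[:1] in ('u', 'U'):
            tok_string = tok_string[1:]
        result.append((tok_type, tok_string))
    # two-tuple ("compatibility") mode: spacing may change, semantics do not
    return tokenize.untokenize(result)

print(strip_u_prefixes("x = u'abc'\n"))
```

Whether something like this runs once at installation time (2to3-style) or lazily from an import hook is exactly the workflow question raised above.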
Of course, if the PEP is targeting Python 2.5 and earlier where unicode_literals is not available, then it should say so. I would say that the overall impression given by the PEP is that "the unicode_literals approach is not worth bothering with", and that I do not find to be true based on my own experience. Regards, Vinay Sajip From stefan at bytereef.org Thu Mar 1 20:32:58 2012 From: stefan at bytereef.org (Stefan Krah) Date: Thu, 1 Mar 2012 20:32:58 +0100 Subject: [Python-Dev] Compiling Python on Linux with Intel's icc In-Reply-To: <1998737.Z68k5Lus0U@metabuntu> References: <1998737.Z68k5Lus0U@metabuntu> Message-ID: <20120301193258.GA8210@sleipnir.bytereef.org> Hi, Alex Leach wrote: > I've managed to compile everything in the python distribution except for > Modules/_ctypes/libffi/src/x86/ffi64.c. There is an issue for this: http://bugs.python.org/issue4130 > After compilation, there's a few tests that are consistently failing, mainly > involved with floating point precision: test_cmath, test_math and test_float. I think you have to compile with "-fp-model strict". In general, please submit suspected bugs on http://bugs.python.org/ (after searching the archives) and post things like speed comparisons on python-list at python.org. Stefan Krah From yselivanov.ml at gmail.com Thu Mar 1 20:50:44 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Thu, 1 Mar 2012 14:50:44 -0500 Subject: [Python-Dev] PEP 414 In-Reply-To: References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <20120229092856.2aeb9256@limelight.wooz.org> Message-ID: <33702B3F-B8A5-4A88-ACA0-A2C59F23D0F1@gmail.com> Vinay, Thank you for the comprehensive summary. Big +1. I really do hope that Nick and Armin will rectify the PEP. 
Otherwise, many of its points are moot, and we need to raise a question of rejecting it somehow.

On 2012-03-01, at 2:00 PM, Vinay Sajip wrote:
> [...]
> I would say that the overall impression given by the PEP is that "the
> unicode_literals approach is not worth bothering with", and that I do not
> find to be true based on my own experience.
>
> Regards,
>
> Vinay Sajip

_______________________________________________
Python-Dev mailing list
Python-Dev at python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: http://mail.python.org/mailman/options/python-dev/yselivanov.ml%40gmail.com

From nd at perlig.de  Thu Mar  1 21:27:17 2012
From: nd at perlig.de (André Malo)
Date: Thu, 1 Mar 2012 21:27:17 +0100
Subject: [Python-Dev] Add a frozendict builtin type
In-Reply-To: 
References: <201203011547.08603.nd@perlig.de>
Message-ID: <201203012127.18465@news.perlig.de>

* Serhiy Storchaka wrote:

> 01.03.12 16:47, André Malo wrote:
> > On Thursday 01 March 2012 15:17:35 Serhiy Storchaka wrote:
> >> This is the first rational use of frozendict that I see. However, a
> >> deep copy is still necessary to create the frozendict. For this case,
> >> I believe, would be better to "freeze" dict inplace and then
> >> copy-on-write it.
> >
> > In my case it's actually a half one. The data mostly comes from
> > memcache ;) I'm populating the object and then I'm done with it. People
> > wanting to modify it, need to copy it, yes. OTOH usually a shallow copy
> > is enough (here).
>
> What if people modify dicts in deep?

that's the "here" part. They can't [1]. These objects are typically ROLists of RODicts. Maybe nested deeper, but all RO* or other immutable types. I cheated, by deepcopying always in the cache, but defining __deepcopy__ for those RO* objects as "return self".

nd

[1] Well, an attacker could, because it's still based on regular dicts and lists. But that's why it's not a security feature, but a safety net (here).
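A rough sketch of that RO* pattern (illustrative only - not the actual RODict/ROList code being described):

```python
import copy

class RODict(dict):
    """A dict whose mutating methods are disabled after construction."""
    def _blocked(self, *args, **kwargs):
        raise TypeError('RODict is read-only')
    __setitem__ = __delitem__ = clear = pop = popitem = _blocked
    setdefault = update = _blocked

    def __deepcopy__(self, memo):
        # Safe to return self: the object can no longer be mutated
        # through its public interface.
        return self

d = RODict(a=1, b=2)
assert copy.deepcopy(d) is d   # deepcopy short-circuits to "return self"
```

As the footnote says, this is a safety net rather than a security boundary: dict.__setitem__(d, key, value) still mutates the underlying dict.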
-- 
"Solides und umfangreiches Buch" -- aus einer Rezension

From nd at perlig.de  Thu Mar  1 21:35:07 2012
From: nd at perlig.de (André Malo)
Date: Thu, 1 Mar 2012 21:35:07 +0100
Subject: [Python-Dev] Add a frozendict builtin type
In-Reply-To: 
References: 
Message-ID: <201203012135.07575@news.perlig.de>

* Guido van Rossum wrote:

> On Thu, Mar 1, 2012 at 9:44 AM, Victor Stinner wrote:
> > frozendict would help pysandbox but also any security Python module,
> > not security, but also (many) other use cases ;-)
>
> Well, let's focus on the other use cases, because to me the sandbox
> use case is too controversial (never mind how confident you are :-).
>
> I like thinking through the cache use case a bit more, since this is a
> common pattern. But I think it would be sufficient there to prevent
> accidental modification, so it should be sufficient to have a dict
> subclass that overrides the various mutating methods: __setitem__,
> __delitem__, pop(), popitem(), clear(), setdefault(), update().

For the caching part, simply making the dictproxy type public would already help a lot.

> What other use cases are there?

dicts as keys or as set members. I do run into this from time to time and always end up with tuple(sorted(d.items())) or something like that.
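That idiom, spelled out - a hashable stand-in usable as a dict key or set member, assuming the keys are sortable and the values hashable (sketch):

```python
def freeze(d):
    # Order-insensitive, hashable representation of a dict.
    return tuple(sorted(d.items()))

cache = {}
cache[freeze({'a': 1, 'b': 2})] = 'result'
assert cache[freeze({'b': 2, 'a': 1})] == 'result'  # key order is irrelevant
```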
nd

-- 
s s^saaaaaoaaaoaaaaooooaaoaaaomaaaa a alataa aaoat a a a maoaa a laoata a oia a o a m a o alaoooat aaool aaoaa matooololaaatoto aaa o a o ms;s;\s;s;g;y;s;:;s;y#mailto: # \51/\134\137| http://www.perlig.de #;print;# > nd at perlig.de

From armin.ronacher at active-4.com  Thu Mar  1 22:12:48 2012
From: armin.ronacher at active-4.com (Armin Ronacher)
Date: Thu, 01 Mar 2012 21:12:48 +0000
Subject: [Python-Dev] PEP 414
In-Reply-To: <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com>
References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com>
Message-ID: <4F4FE650.8060402@active-4.com>

Hi,

On 2/29/12 12:30 PM, Yury Selivanov wrote:
> I see you've (or somebody) changed:
Yes, I reworded that.

> Could you just remove the statement completely?
I will let Nick handle the PEP wording.

> I don't think that PEPs are the right place to put such polemic
> and biased statements.
Why call it polemic? If you want to use Ubuntu LTS you're forcing yourself to stick to a particular Python version for a longer time. Which means you don't want to have to adjust your code. Which again means that you're better off with the Python 2.x ecosystem, which is proven and does not change nearly as quickly as the Python 3 one (hopefully), so if you have the choice between those two you would choose 2.x over 3.x. That's what this sentence is supposed to say. That's not polemic, that's just a fact.

> Nobody asked you to express your *personal* feelings and thoughts
> about applicability or state of python3 in the PEP.
That is not a personal-feeling PEP. If people were 100% happy with Python 3 we would not have these discussions, would we.

Why is it that I'm getting "attacked" on this mailing list for writing this PEP, or the wording etc.?
Regards,
Armin

From rdmurray at bitdance.com  Thu Mar  1 22:32:15 2012
From: rdmurray at bitdance.com (R. David Murray)
Date: Thu, 01 Mar 2012 16:32:15 -0500
Subject: [Python-Dev] PEP 414
In-Reply-To: <4F4FE650.8060402@active-4.com>
References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <4F4FE650.8060402@active-4.com>
Message-ID: <20120301213217.600262500ED@webabinitio.net>

On Thu, 01 Mar 2012 21:12:48 +0000, Armin Ronacher wrote:
> Hi,
>
> On 2/29/12 12:30 PM, Yury Selivanov wrote:
> > I see you've (or somebody) changed:
> Yes, I reworded that.
>
> > Could you just remove the statement completely?
> I will let Nick handle the PEP wording.
>
> > I don't think that PEPs are the right place to put such polemic
> > and biased statements.
> Why call it polemic? If you want to use ubuntu LTS you're forcing

Presumably because it comes across to him that way. Perception aside, I do think it matches the dictionary meaning of the term ("One who writes in support of one opinion, doctrine, or system, in opposition to another"), which Nick's edits will presumably fix (by addressing all sides of the argument, as a finished PEP should).

> yourself to stick to a particular Python version for a longer time.
> Which means you don't want to have to adjust your code. Which again
> means that you're better of with the Python 2.x ecosystem which is
> proven, does not change nearly as quickly as the Python 3 one
> (hopefully) so if you have the choice between those two you would chose
> 2.x over 3.x. That's what this sentence is supposed to say. That's not
> polemic, that's just a fact.

Wow. I never would have guessed that from the sentence in question. I don't think I agree with your "that means" statement either; I can imagine other motivations for using an LTS.
But I don't think that discussion is worth getting into, or that it matters for the PEP. > > Nobody asked you to express your *personal* feelings and thoughts > > about applicability or state of python3 in the PEP. > That is not a personal-feeling-PEP. If people would be 100% happy with > Python 3 we would not have these discussions, would we. > > Why is it that I'm getting "attacked" on this mailinglist for writing > this PEP, or the wording etc. I think it is because people are *perceiving* that you are attacking Python3 and arguing (out of your personal experience) that porting is harder than other people (out of their personal experience) have found it to be. This presumably reflects the different problem domains people are working in. --David From victor.stinner at gmail.com Thu Mar 1 22:59:51 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 1 Mar 2012 22:59:51 +0100 Subject: [Python-Dev] Sandboxing Python In-Reply-To: References: Message-ID: > I challenge anyone to break pysandbox! I would be happy if anyone > breaks it because it would make it stronger. Hum, I should give some rules for such a contest:
- the C module (_sandbox) must be used
- you have to get access to an object outside the sandbox, like a real module, or get access to a blocked resource (like the filesystem) - the best is to be able to write into the filesystem
- you can use the interpreter ("python interpreter.py") to play with the sandbox, but you have to be able to reproduce it with a simple script (e.g. using "python execfile.py script.py")
pysandbox works on Python 2.5, 2.6 and 2.7. It does not officially support Python 3 yet. Example.
---
$ python setup.py build
$ PYTHONPATH=build/lib.*/ python interpreter.py --allow-path=/etc/issue
pysandbox 1.1
Enabled features: codecs, encodings, exit, interpreter, site, stderr, stdin, stdout, traceback (use --features=help to enable the help function)
Try to break the sandbox!
sandbox>>> open('/etc/issue').read()
'Ubuntu 11.10 \\n \\l\n\n'
sandbox>>> type(open('/etc/issue'))('test', 'w')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: object.__new__() takes no parameters
---
You fail! I'm interested in vulnerabilities in pysandbox using the Python restricted module (used when _sandbox is missing), but it is not the official mode :-) And it is more limited: you cannot read files, for example. See also the sandbox tests to get some ideas ;-) Victor From barry at python.org Thu Mar 1 23:10:46 2012 From: barry at python.org (Barry Warsaw) Date: Thu, 1 Mar 2012 17:10:46 -0500 Subject: [Python-Dev] PEP 414 In-Reply-To: <4F4FE650.8060402@active-4.com> References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <4F4FE650.8060402@active-4.com> Message-ID: <20120301171046.46d162a9@limelight.wooz.org> Hopefully, I can say the following in a constructive way. I certainly don't mean to attack anyone personally for their closely held beliefs, though I might disagree with them. And you have the right to those beliefs and to express them in a respectful and constructive manner on this mailing list, which I think you've done. No criticisms there. However, PEPs *are* official documents from the Python developer community, so I think it's required of us to present technical issues in an honest light, yet devoid of negative connotations which harm Python. On Mar 01, 2012, at 09:12 PM, Armin Ronacher wrote: >Why call it polemic? If you want to use ubuntu LTS you're forcing >yourself to stick to a particular Python version for a longer time. Not just a particular Python 3 version, but a particular Python 2 version too. And a particular kernel version, and version of Perl, Ruby, Java, gcc, etc. etc. That's kind of the whole point of an LTS.
:) >Which means you don't want to have to adjust your code. Which again >means that you're better of with the Python 2.x ecosystem which is >proven, does not change nearly as quickly as the Python 3 one >(hopefully) so if you have the choice between those two you would chose >2.x over 3.x. That's what this sentence is supposed to say. That's not >polemic, that's just a fact. I don't agree with the conclusion. But none of that is germane to the PEP anyway. The PEP could simply say that for some domains, the ability to port code from Python 2 to Python 3 would be enhanced by the reintroduction of the u-prefix. It could even explain why WSGI applications in particular would benefit from this. That would be enough to justify Guido's acceptance of the PEP. Cheers, -Barry From yselivanov at gmail.com Thu Mar 1 23:38:00 2012 From: yselivanov at gmail.com (Yury Selivanov) Date: Thu, 1 Mar 2012 17:38:00 -0500 Subject: [Python-Dev] PEP 414 In-Reply-To: <4F4FE650.8060402@active-4.com> References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <4F4FE650.8060402@active-4.com> Message-ID: <40C3F3BA-54E7-4B39-B3FB-20BEE65EB1D7@gmail.com> Hi Armin, Sorry if I sounded like 'attacking' you. I certainly had no such intention, as I believe nobody on this list. But if you'd just stuck to the point, without touching very controversial topics of what version of python is a good choice and what is a bad, with full review of all porting scenarios with well-thought set of benchmarks, nobody would ever call your PEP "polemic". Thanks, - Yury On 2012-03-01, at 4:12 PM, Armin Ronacher wrote: > Hi, > > On 2/29/12 12:30 PM, Yury Selivanov wrote: >> I see you've (or somebody) changed: > Yes, I reworded that. > >> Could you just remove the statement completely? 
> I will let Nick handle the PEP wording. > >> I don't think that PEPs are the right place to put such polemic >> and biased statements. > Why call it polemic? If you want to use ubuntu LTS you're forcing > yourself to stick to a particular Python version for a longer time. > Which means you don't want to have to adjust your code. Which again > means that you're better of with the Python 2.x ecosystem which is > proven, does not change nearly as quickly as the Python 3 one > (hopefully) so if you have the choice between those two you would chose > 2.x over 3.x. That's what this sentence is supposed to say. That's not > polemic, that's just a fact. > >> Nobody asked you to express your *personal* feelings and thoughts >> about applicability or state of python3 in the PEP. > That is not a personal-feeling-PEP. If people would be 100% happy with > Python 3 we would not have these discussions, would we. > > Why is it that I'm getting "attacked" on this mailinglist for writing > this PEP, or the wording etc. 
> > > Regards, > Armin > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/yselivanov.ml%40gmail.com From armin.ronacher at active-4.com Thu Mar 1 23:52:56 2012 From: armin.ronacher at active-4.com (Armin Ronacher) Date: Thu, 01 Mar 2012 22:52:56 +0000 Subject: [Python-Dev] PEP 414 In-Reply-To: <40C3F3BA-54E7-4B39-B3FB-20BEE65EB1D7@gmail.com> References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <4F4FE650.8060402@active-4.com> <40C3F3BA-54E7-4B39-B3FB-20BEE65EB1D7@gmail.com> Message-ID: <4F4FFDC8.9020105@active-4.com> Hi, On 3/1/12 10:38 PM, Yury Selivanov wrote: > Sorry if I sounded like 'attacking' you. I certainly had no such > intention, as I believe nobody on this list. Sorry if I sound cranky but I got that impression from the responses here (which are greatly different from the responses I got on other communication channels and by peers). You were just the unlucky mail I responded to :-) > But if you'd just stuck to the point, without touching very > controversial topics of what version of python is a good choice > and what is a bad, with full review of all porting scenarios with > well-thought set of benchmarks, nobody would ever call your PEP > "polemic". I tried my best but obviously it was not good enough to please everybody. In all honesty I did not expect that such a small change would spawn such a great discussion. 
After all what we're discussing here is the introduction of one letter to literals :-) Regards, Armin From guido at python.org Fri Mar 2 00:11:35 2012 From: guido at python.org (Guido van Rossum) Date: Thu, 1 Mar 2012 15:11:35 -0800 Subject: [Python-Dev] Add a frozendict builtin type In-Reply-To: <201203012135.07575@news.perlig.de> References: <201203012135.07575@news.perlig.de> Message-ID: On Thu, Mar 1, 2012 at 12:35 PM, André Malo wrote: > * Guido van Rossum wrote: > >> On Thu, Mar 1, 2012 at 9:44 AM, Victor Stinner > wrote: >> > frozendict would help pysandbox but also any security Python module, >> > not security, but also (many) other use cases ;-) >> >> Well, let's focus on the other use cases, because to me the sandbox >> use case is too controversial (never mind how confident you are :-). >> >> I like thinking through the cache use case a bit more, since this is a >> common pattern. But I think it would be sufficient there to prevent >> accidental modification, so it should be sufficient to have a dict >> subclass that overrides the various mutating methods: __setitem__, >> __delitem__, pop(), popitem(), clear(), setdefault(), update(). > > For the caching part, simply making the dictproxy type public would already > help a lot. Heh, that's a great idea. Can you file a bug for that? >> What other use cases are there? > > dicts as keys or as set members. I do run into this from time to time and > always get tuple(sorted(items()) or something like that. I know I've done that once or twice in my life too, but it's a pretty rare use case and as you say the solution is simple enough. An alternative is frozenset(d.items()) -- someone should compare the timing of these for large dicts.
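The comparison suggested here can be sketched with timeit; the dict size and repeat count below are arbitrary choices for illustration:

```python
import timeit

d = {i: str(i) for i in range(10_000)}

# Two ways to turn a dict's contents into a hashable snapshot:
t_tuple = timeit.timeit(lambda: tuple(sorted(d.items())), number=200)
t_frozen = timeit.timeit(lambda: frozenset(d.items()), number=200)
print("tuple(sorted(d.items())): %.3fs" % t_tuple)
print("frozenset(d.items()):     %.3fs" % t_frozen)

# Either snapshot can serve as a dict key or set member:
cache = {frozenset(d.items()): "cached result"}
assert cache[frozenset(dict(d).items())] == "cached result"
```

Note that tuple(sorted(...)) additionally requires the keys to be mutually orderable, which frozenset(...) does not; on Python 3, sorting fails outright for mixed key types such as int and str.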
-- --Guido van Rossum (python.org/~guido) From yselivanov at gmail.com Fri Mar 2 00:15:04 2012 From: yselivanov at gmail.com (Yury Selivanov) Date: Thu, 1 Mar 2012 18:15:04 -0500 Subject: [Python-Dev] PEP 414 In-Reply-To: <4F4FFDC8.9020105@active-4.com> References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <4F4FE650.8060402@active-4.com> <40C3F3BA-54E7-4B39-B3FB-20BEE65EB1D7@gmail.com> <4F4FFDC8.9020105@active-4.com> Message-ID: <5FBB9106-E12F-430B-BAA8-47C1289834E4@gmail.com> On 2012-03-01, at 5:52 PM, Armin Ronacher wrote: > Hi, > > On 3/1/12 10:38 PM, Yury Selivanov wrote: >> Sorry if I sounded like 'attacking' you. I certainly had no such >> intention, as I believe nobody on this list. > Sorry if I sound cranky but I got that impression from the responses > here (which are greatly different from the responses I got on other > communication channels and by peers). You were just the unlucky mail I > responded to :-) It's OK ;) >> But if you'd just stuck to the point, without touching very >> controversial topics of what version of python is a good choice >> and what is a bad, with full review of all porting scenarios with >> well-thought set of benchmarks, nobody would ever call your PEP >> "polemic". > I tried my best but obviously it was not good enough to please > everybody. In all honesty I did not expect that such a small change > would spawn such a great discussion. After all what we're discussing > here is the introduction of one letter to literals :-) Well, unfortunately it's not that simple from the standpoint of how this change will be perceived by the community. If we have u'' syntax in python 3, will people even understand what is the key difference from python 2? 
Will the internet be polluted with weird source-code targeted only for python3, but with the wide use of u''? When to deprecate it, and will it ever be deprecated (as everybody is already tired of all this)? Will it further strengthen the common misbelief that porting is hard (as for many of the projects it is not), or that the right way is to have one code-base for all versions? And that's just the beginning of such questions. And when this PEP was suddenly approved, many of us felt that all those questions were not answered and were not even discussed. - Yury From vinay_sajip at yahoo.co.uk Fri Mar 2 00:55:26 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Thu, 1 Mar 2012 23:55:26 +0000 (UTC) Subject: [Python-Dev] PEP 414 References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <4F4FE650.8060402@active-4.com> <40C3F3BA-54E7-4B39-B3FB-20BEE65EB1D7@gmail.com> <4F4FFDC8.9020105@active-4.com> Message-ID: Armin Ronacher active-4.com> writes: > I tried my best but obviously it was not good enough to please > everybody. In all honesty I did not expect that such a small change > would spawn such a great discussion. After all what we're discussing > here is the introduction of one letter to literals The objections are not to the introduction of one letter to literals. It is the extent to which, in your presentation of the PEP, a narrow set of concerns and a specific addressing of those concerns has been represented as if it is the only possible view of all right-thinking people in the Python community. What is "obvious" to you may not be so to others - au contraire.
A PEP author obviously will promote their specific views - it is an instrument of advocacy - but an author should not, in my view, arrogate to themselves the presumption of speaking for everyone in the community; rather, they should respect that others may have different sensibilities. The PEP comes across as being primarily motivated by WSGI concerns: Nick mentioned that he would update the PEP to "name drop" and indicate support from you, Jacob Kaplan-Moss and Chris McDonough - all authors of Web frameworks. While I completely acknowledge the importance and ubiquity of these web frameworks in the overall ecosystem, I think Python the language is about more than just Web development. There is a bit of a sense of the tail wagging the dog. Let's remember, it's possible to do Web development without the concept of "native" strings - this doesn't exist AFAIK in many other languages which allow Web applications to be developed - the concept is a sort of historical accident arising in part out of how the WSGI spec was written and evolved, interacting with how 3.x differs from 2.x, and how some legacy APIs expect native strings because they are broken. There are a number of possible ways of addressing the concerns which motivated the PEP, but in my view you have given some of them short shrift because of what come across as personal prejudices. An example - on a Reddit thread about PEP 414, I commented: "The PEP does not (for example) consider the possibility of leaving literals as they are and using a n('xxx') callable for native strings. Since there are very few places where native strings are needed, this approach is potentially less obtrusive than either u'xxx' or u('xxx')." Your response to this was: "Because in all honesty, because string wrappers make a codebase horrible to work with. I will have to continue maintaining 2.x versions for at least another four or five years. The idea if having to use string wrappers for that long makes me puke." 
I know that this was just a comment on Reddit and was not in the PEP, but it smacks of you throwing all your toys out of the pram. It certainly wasn't a reasoned response to my point. And some of that toys-pram attitude bleeds through into the language of the PEP, leading others to make the criticisms that they have. A PEP is supposed to be balanced, reasonable and thought through. It's not supposed to gloss over things in a hand-wavy sort of way - there's still uncertainty in my mind, for example, whether the 3.2 hook will be a 2to3-style tool that runs over a chunk of the whole project's codebase between editing and running a test, or whether it's an import-time hook which only kicks in on files that have just been edited in a development session. Which of these it is might crucially affect the experience of someone wanting to support 3.2 and 3.3 and 2.6+ - but that clearly isn't you, and you don't seem to have much sympathy or patience with that constituency - we're all stick-in-the-muds who want to work with Ubuntu LTS, rather than people subject to constraints imposed by employers, clients, projects we depend on etc. In contrast, Nick made a more reasonable case when commenting on my preference for unicode_literals (on this list, not on Reddit), by reminding me about how unicode_literals changes the semantics of string literals, and this increases the cognitive burden on developers. I'm not whinging about the PEP in this post - I've said elsewhere that I wasn't opposed to it. I'm just trying to respond to your apparent bewilderment at some of the reaction to the PEP. I have confidence that with your continued input and Nick's input, the wording of the PEP can be made such that it doesn't ruffle quite so many feathers. I'm looking forward to seeing the updates.
Regards, Vinay Sajip From dreamingforward at gmail.com Fri Mar 2 01:25:35 2012 From: dreamingforward at gmail.com (Mark Janssen) Date: Thu, 1 Mar 2012 17:25:35 -0700 Subject: [Python-Dev] Add a frozendict builtin type In-Reply-To: References: <61817B63-B0D8-46FA-8284-45F88266516D@gmail.com> Message-ID: On Thu, Mar 1, 2012 at 10:00 AM, Guido van Rossum wrote: > > I do know that I don't feel comfortable having a sandbox in the Python > standard library or even recommending a 3rd party sandboxing solution > -- if someone uses the sandbox to protect a critical resource, and a > hacker breaks out of the sandbox, the author of the sandbox may be > held responsible for more than they bargained for when they made it > open source. (Doesn't an open source license limit your > responsibility? Who knows. AFAIK this question has not gotten to court > yet. I wouldn't want to have to go to court over it.) > Since there's no way (even in theory) to completely secure anything (remember the DVD protection wars?), there's no way there should be any liability if reasonable diligence is performed to provide security where expected (which is probably calculable to some %-age of assets protected). It's like putting a lock on the door of your house -- you can't expect to be held liable if someone has a crowbar. Open sourcing code could be said to be a disclaimer on any liability, as you're letting people know that you've got nothing you're trying to conceal. It's like a dog who plays dead: by being totally open you're actually more secure.... mark
From tseaver at palladion.com Fri Mar 2 01:25:52 2012 From: tseaver at palladion.com (Tres Seaver) Date: Thu, 01 Mar 2012 19:25:52 -0500 Subject: [Python-Dev] PEP 414 In-Reply-To: <4F4FFDC8.9020105@active-4.com> References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <4F4FE650.8060402@active-4.com> <40C3F3BA-54E7-4B39-B3FB-20BEE65EB1D7@gmail.com> <4F4FFDC8.9020105@active-4.com> Message-ID: On 03/01/2012 05:52 PM, Armin Ronacher wrote: > Hi, > > On 3/1/12 10:38 PM, Yury Selivanov wrote: >> Sorry if I sounded like 'attacking' you. I certainly had no such >> intention, as I believe nobody on this list. > Sorry if I sound cranky but I got that impression from the responses > here (which are greatly different from the responses I got on other > communication channels and by peers). You were just the unlucky mail > I responded to :-) Several responses on the list *have* been offensive, not criticizing the PEP on its own merits but on your (presumed) motives for introducing it. Such attacks are wildly off-base. >> But if you'd just stuck to the point, without touching very >> controversial topics of what version of python is a good choice and >> what is a bad, with full review of all porting scenarios with >> well-thought set of benchmarks, nobody would ever call your PEP >> "polemic". > I tried my best but obviously it was not good enough to please > everybody. In all honesty I did not expect that such a small change > would spawn such a great discussion. After all what we're discussing > here is the introduction of one letter to literals :-) Tres.
--
===================================================================
Tres Seaver +1 540-429-0999 tseaver at palladion.com
Palladion Software "Excellence by Design" http://palladion.com
From victor.stinner at gmail.com Fri Mar 2 01:39:32 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Fri, 02 Mar 2012 01:39:32 +0100 Subject: [Python-Dev] Add a frozendict builtin type In-Reply-To: References: <61817B63-B0D8-46FA-8284-45F88266516D@gmail.com> Message-ID: <4F5016C4.5010501@gmail.com> Le 01/03/2012 19:07, Guido van Rossum a écrit : > What other use cases are there? frozendict could be used to implement "read-only" types: it is not possible to add or remove an attribute or set an attribute value, but an attribute value can be a mutable object. Example of an enum with my type_final.patch (attached to issue #14162).
>>> class Color:
...   red=1
...   green=2
...   blue=3
...   __final__=True
...
>>> Color.red
1
>>> Color.red=2
TypeError: 'frozendict' object does not support item assignment
>>> Color.yellow=4
TypeError: 'frozendict' object does not support item assignment
>>> Color.__dict__
frozendict({...})
The implementation avoids the private PyDictProxy for read-only types: type.__dict__ gives direct access to the frozendict (but type.__dict__=newdict is still blocked). The "__final__=True" API is just a proposition, it can be anything else, maybe a metaclass. Using a frozendict for type.__dict__ is not the only possible solution to implement read-only types. There are also Python implementations using properties. Using a frozendict is faster than using properties because getting an attribute is just a fast dictionary lookup, whereas reading a property requires executing a Python function.
The syntax to declare a read-only class is also more classic using the frozendict approach. Victor From guido at python.org Fri Mar 2 01:50:06 2012 From: guido at python.org (Guido van Rossum) Date: Thu, 1 Mar 2012 16:50:06 -0800 Subject: [Python-Dev] Add a frozendict builtin type In-Reply-To: <4F5016C4.5010501@gmail.com> References: <61817B63-B0D8-46FA-8284-45F88266516D@gmail.com> <4F5016C4.5010501@gmail.com> Message-ID: On Thu, Mar 1, 2012 at 4:39 PM, Victor Stinner wrote:
> Le 01/03/2012 19:07, Guido van Rossum a écrit :
>> What other use cases are there?
> frozendict could be used to implement "read-only" types: it is not possible to add or remove an attribute or set an attribute value, but attribute value can be a mutable object. Example of an enum with my type_final.patch (attached to issue #14162).
>>>> class Color:
> ...   red=1
> ...   green=2
> ...   blue=3
> ...   __final__=True
> ...
>>>> Color.red
> 1
>>>> Color.red=2
> TypeError: 'frozendict' object does not support item assignment
>>>> Color.yellow=4
> TypeError: 'frozendict' object does not support item assignment
>>>> Color.__dict__
> frozendict({...})
> The implementation avoids the private PyDictProxy for read-only types, type.__dict__ gives directly access to the frozendict (but type.__dict__=newdict is still blocked).
> The "__final__=True" API is just a proposition, it can be anything else, maybe a metaclass.
> Using a frozendict for type.__dict__ is not the only possible solution to implement read-only types. There are also Python implementation using properties. Using a frozendict is faster than using properties because getting an attribute is just a fast dictionary lookup, whereas reading a property requires to execute a Python function. The syntax to declare a read-only class is also more classic using the frozendict approach.
I think you should provide stronger arguments in each case why the data needs to be truly immutable or read-only, rather than just using a convention or an "advisory" API (like __private can be circumvented but clearly indicates intent to the reader). -- --Guido van Rossum (python.org/~guido) From rdmurray at bitdance.com Fri Mar 2 02:50:51 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Thu, 01 Mar 2012 20:50:51 -0500 Subject: [Python-Dev] Add a frozendict builtin type In-Reply-To: References: <61817B63-B0D8-46FA-8284-45F88266516D@gmail.com> <4F5016C4.5010501@gmail.com> Message-ID: <20120302015052.A57942500E5@webabinitio.net> On Thu, 01 Mar 2012 16:50:06 -0800, Guido van Rossum wrote: > On Thu, Mar 1, 2012 at 4:39 PM, Victor Stinner wrote: > > frozendict could be used to implement "read-only" types: it is not possible > > to add or remove an attribute or set an attribute value, but attribute value > > can be a mutable object. Example of an enum with my type_final.patch > > (attached to issue #14162). [...] > > I think you should provide stronger arguments in each case why the > data needs to be truly immutable or read-only, rather than just using > a convention or an "advisory" API (like __private can be circumvented > but clearly indicates intent to the reader). +1. Except in very limited circumstances (such as a security sandbox) I would *much* rather have the code I'm interacting with use advisory means rather than preventing me from being a consenting adult. (Having to name mangle by hand when someone has used a __ method is painful enough, thank you...good thing the need to do that doesn't come up often (mostly only in unit tests)).
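The advisory dict subclass Guido describes can be spelled out as a short sketch; the class name and error message here are made up for illustration, and, as the thread stresses, this blocks only accidental mutation and is trivially bypassed on purpose:

```python
class ReadOnlyDict(dict):
    """Advisory read-only mapping: ordinary mutation raises TypeError,
    but dict's own methods can still be used to bypass it."""
    __slots__ = ()  # no per-instance __dict__, so almost no overhead

    def _readonly(self, *args, **kwargs):
        raise TypeError("%s instance is read-only" % type(self).__name__)

    __setitem__ = __delitem__ = _readonly
    pop = popitem = clear = setdefault = update = _readonly

d = ReadOnlyDict({"red": 1, "green": 2})
assert d["red"] == 1

try:
    d["blue"] = 3            # ordinary mutation is rejected...
except TypeError:
    pass

dict.__setitem__(d, "blue", 3)  # ...but a consenting adult can bypass it
assert d["blue"] == 3
```

Note that construction from a mapping does not go through the overridden methods, since dict's C-level initializer fills the table directly; that is what makes the wholesale method blanking workable here.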
--David From ncoghlan at gmail.com Fri Mar 2 03:06:21 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 2 Mar 2012 12:06:21 +1000 Subject: [Python-Dev] Add a frozendict builtin type In-Reply-To: <20120302015052.A57942500E5@webabinitio.net> References: <61817B63-B0D8-46FA-8284-45F88266516D@gmail.com> <4F5016C4.5010501@gmail.com> <20120302015052.A57942500E5@webabinitio.net> Message-ID: On Fri, Mar 2, 2012 at 11:50 AM, R. David Murray wrote: > +1. Except in very limited circumstances (such as a security sandbox) > I would *much* rather have the code I'm interacting with use advisory > means rather than preventing me from being a consenting adult. (Having to > name mangle by hand when someone has used a __ method is painful enough, > thank you...good thing the need to do that doesn't come up often (mostly > only in unit tests)). The main argument I'm aware of in favour of this kind of enforcement is that it means you get exceptions at the point of *error* (trying to modify the "read-only" dict), rather than having a strange action-at-a-distance data mutation bug to track down. However, in that case, it's just fine (and in fact better) if there is a way around the default enforcement via a more verbose spelling. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From yselivanov.ml at gmail.com Fri Mar 2 03:13:44 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Thu, 1 Mar 2012 21:13:44 -0500 Subject: [Python-Dev] Add a frozendict builtin type In-Reply-To: References: <61817B63-B0D8-46FA-8284-45F88266516D@gmail.com> <4F5016C4.5010501@gmail.com> Message-ID: <10608AD7-7410-41F4-AE3D-ED9B4B9D5C32@gmail.com> On 2012-03-01, at 7:50 PM, Guido van Rossum wrote: > I think you should provide stronger arguments in each case why the > data needs to be truly immutable or read-only, rather than just using > a convention or an "advisory" API (like __private can be circumvented > but clearly indicates intent to the reader).
Here's one more argument to support frozendicts. For the last several months I've been thinking about prohibiting coroutines (generators + greenlets in our framework) from modifying the global state. If there is a guarantee that all coroutines of the whole application, modules and framework are 100% safe from that, it's possible to do some interesting stuff. For instance, dynamically balancing jobs across all application processes:
@coroutine
def on_generate_report(context):
    data = yield fetch_financial_data(context)
    ...
In the above example, 'fetch_financial_data' may be executed in a different process, or even on a different server, if the coroutines' scheduler of the current process decides so (based on its load, or a low priority of the coroutine being scheduled). With built-in frozendict it will be easier to secure modules' or functions' __globals__ that way, allowing us to play with features closer to the ones Erlang and other concurrent languages provide. - Yury From guido at python.org Fri Mar 2 03:31:41 2012 From: guido at python.org (Guido van Rossum) Date: Thu, 1 Mar 2012 18:31:41 -0800 Subject: [Python-Dev] Add a frozendict builtin type In-Reply-To: <10608AD7-7410-41F4-AE3D-ED9B4B9D5C32@gmail.com> References: <61817B63-B0D8-46FA-8284-45F88266516D@gmail.com> <4F5016C4.5010501@gmail.com> <10608AD7-7410-41F4-AE3D-ED9B4B9D5C32@gmail.com> Message-ID: On Thu, Mar 1, 2012 at 6:13 PM, Yury Selivanov wrote: > On 2012-03-01, at 7:50 PM, Guido van Rossum wrote: >> I think you should provide stronger arguments in each case why the >> data needs to be truly immutable or read-only, rather than just using >> a convention or an "advisory" API (like __private can be circumvented >> but clearly indicates intent to the reader). > > > Here's one more argument to support frozendicts. > > For last several months I've been thinking about prohibiting coroutines > (generators + greenlets in our framework) to modify the global state.
> If there is a guarantee that all coroutines of the whole application, > modules and framework are 100% safe from that, it's possible to do some > interesting stuff. For instance, dynamically balance jobs across all > application processes:
> @coroutine
> def on_generate_report(context):
>     data = yield fetch_financial_data(context)
>     ...
> In the above example, 'fetch_financial_data' may be executed in the > different process, or even on the different server, if the coroutines' > scheduler of current process decides so (based on its load, or a low > priority of the coroutine being scheduled). > > With built-in frozendict it will be easier to secure modules or > functions' __globals__ that way, allowing to play with features closer > to the ones Erlang and other concurrent languages provide. That sounds *very* far-fetched. You're pretty much designing a new language variant. It's not an argument for burdening the original language with a data type it doesn't need for itself. You should be able to prototype what you want using an advisory subclass (if you subclass dict and add __slots__=[] to it, it will cost very little overhead) or using a custom extension that implements the flavor of frozendict that works best for you -- given that you're already using greenlets, another extension can't be a big burden. -- --Guido van Rossum (python.org/~guido) From yselivanov.ml at gmail.com Fri Mar 2 03:44:25 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Thu, 1 Mar 2012 21:44:25 -0500 Subject: [Python-Dev] Add a frozendict builtin type In-Reply-To: References: <61817B63-B0D8-46FA-8284-45F88266516D@gmail.com> <4F5016C4.5010501@gmail.com> <10608AD7-7410-41F4-AE3D-ED9B4B9D5C32@gmail.com> Message-ID: <5534DCF0-7143-4FB6-B054-F4F9821CC514@gmail.com> On 2012-03-01, at 9:31 PM, Guido van Rossum wrote: > That sounds *very* far-fetched. You're pretty much designing a new > language variant.
It's not an argument for burdening the original Yeah, that's what we do ;) > You should be able to prototype what you want using an advisory > subclass (if you subclass dict and add __slots__=[] to it, it will > cost very little overhead) or using a custom extension that implements > the flavor of frozendict that works best for you -- given that you're > already using greenlets, another extension can't be a big burden. I understand. The only reason I wrote about it is to give an idea of how frozendicts may be used besides just sandboxing. I'm not strongly advocating for it, though. - Yury From merwok at netwok.org Fri Mar 2 05:08:45 2012 From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=) Date: Fri, 02 Mar 2012 05:08:45 +0100 Subject: [Python-Dev] PEP 414 In-Reply-To: <5FBB9106-E12F-430B-BAA8-47C1289834E4@gmail.com> References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <4F4FE650.8060402@active-4.com> <40C3F3BA-54E7-4B39-B3FB-20BEE65EB1D7@gmail.com> <4F4FFDC8.9020105@active-4.com> <5FBB9106-E12F-430B-BAA8-47C1289834E4@gmail.com> Message-ID: <4F5047CD.1040007@netwok.org> Hello, Le 02/03/2012 00:15, Yury Selivanov a écrit : > And that's just the beginning of such questions. And when this PEP > was suddenly approved, many of us felt that all those questions are > not answered and were not even discussed. Let me comment on that “suddenly”. We joke about Guido being the dictator for Python, but it's actually not a joke. The point of the PEP process is to help Guido make an informed decision on a proposed change. (There are also side benefits like providing a record of design or implementation choices, or documenting once and for all why some idea will never be accepted, but let's ignore them here.)
I can't read Guido's mind, but I think that here he pronounced somewhat quickly because he was convinced by the arguments in the PEP, while choosing to ignore the problems therein, knowing that they could be fixed later. Regards From ncoghlan at gmail.com Fri Mar 2 05:48:24 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 2 Mar 2012 14:48:24 +1000 Subject: [Python-Dev] PEP 414 In-Reply-To: <4F5047CD.1040007@netwok.org> References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <4F4FE650.8060402@active-4.com> <40C3F3BA-54E7-4B39-B3FB-20BEE65EB1D7@gmail.com> <4F4FFDC8.9020105@active-4.com> <5FBB9106-E12F-430B-BAA8-47C1289834E4@gmail.com> <4F5047CD.1040007@netwok.org> Message-ID: On Fri, Mar 2, 2012 at 2:08 PM, Éric Araujo wrote: > I can't read Guido's mind, but I think that here he pronounced somewhat > quickly because he was convinced by the arguments in the PEP, while > choosing to ignore the problems therein, knowing that they could be > fixed later. It's also the case that this particular point has been the subject of debate for a *long* time. When the decision was first made to offer the unicode_literals future import, one of the other contenders was just to allow the u/U prefix in Python 3 and not worry about it, and while the "purity" side carried the day back then, it was a close run thing. While that approach turned out to work pretty well for many users that didn't use unicode literals all that much, the web community understandably feel like they're being actively *punished* for doing Unicode right in Python 2. Consider: an application that uses 8-bit strings everywhere and blows up on non-ASCII data in Python 2 has at least a fighting chance to run unmodified *and* handle Unicode properly on Python 3.
Because unicode literals are gone, a Unicode-aware Python 2 application currently has *no* chance to run unmodified on Python 3. So even though the PEP doesn't currently do a good job of *presenting* that history to new readers, it's still very much a factor in the decision making process. Accordingly, I'd like to ask folks not to stress too much about the precise wording until I get a chance to update it over the weekend :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ezio.melotti at gmail.com Fri Mar 2 10:33:04 2012 From: ezio.melotti at gmail.com (Ezio Melotti) Date: Fri, 02 Mar 2012 11:33:04 +0200 Subject: [Python-Dev] PEP 414 In-Reply-To: References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <4F4FE650.8060402@active-4.com> <40C3F3BA-54E7-4B39-B3FB-20BEE65EB1D7@gmail.com> <4F4FFDC8.9020105@active-4.com> Message-ID: <4F5093D0.8020900@gmail.com> > [quoting Armin from Reddit] > "Because in all honesty, because string wrappers make a codebase horrible to > work with. I will have to continue maintaining 2.x versions for at least another > four or five years. The idea if having to use string wrappers for that long > makes me puke." Reading this led me to think the following: * 2.5 is now available basically everywhere, and it was released almost 5 years ago (Sep 2006); * if it takes the same time for 3.3, it will be widespread after 4-5 years (i.e.
2016-2017) [0]; * if you want to target a Python 3 version that is widespread [1], you will want to support 3.1/3.2 too in the meanwhile; * therefore you will have to use the hook on 3.1/3.2; * in 2016-2017 you'll finally be able to drop 3.1/3.2 and use only 3.3 without hooks; * in 2016-2017 you'll also stop maintaining the 2.x version (according to that quote); * if you are not maintaining 2.x anymore, you won't need u'' -- right when you could finally stop using the hook; Now, if the hook doesn't get in the way (AFAIU you just have to "install" it and it will do its work automatically), wouldn't it be better to use it in 3.3 too (especially considering that you will probably have to use it already for 3.1/3.2)? If my reasoning is correct, by the time you will be able to use u without problems you will have to start phasing it out because you won't need to support 2.x anymore. Is this hook available somewhere? How difficult is the installation? Does it strip the u automatically or is there a further step that developers should do before testing on 3.1/3.2? Best Regards, Ezio Melotti [0]: ISTM that people think "once you decide to switch to 3.x, there's really no reason to pick an older release, just pick the latest (3.3)". While this might be true for single developers that install it by hand, I don't think it's the same for distros and I expect for 3.x the same time span between release and widespread availability that we have with 2.x (i.e. 4-5 years). However this is just an assumption, if you have more accurate information that can show that the time span will indeed be shorter for 3.x (e.g. 2-3 years), feel free to prove me wrong. [1]: I think most projects still support 2.5, some support even older versions, some support only newer ones, but 2.5 as minimum support version seems a good average to me. Targeting the same user base seems reasonable to me (albeit not strictly necessary).
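At its core, such a hook is just a token-level rewrite of the source before it reaches compile(). A minimal sketch of the transformation (an illustrative helper written for discussion, not the actual hook whose availability is being asked about):

```python
import io
import tokenize

def strip_u_prefixes(source):
    # Rewrite u'...' / U'...' literals into plain string literals -- the
    # transformation an install hook would apply before compiling on 3.1/3.2.
    # Illustrative helper only; the real hook's name and API are unknown here.
    result = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        tok_type, tok_str = tok[0], tok[1]
        if tok_type == tokenize.STRING and tok_str[:1] in ('u', 'U'):
            tok_str = tok_str[1:]  # drop the prefix, keep quotes and contents
        result.append((tok_type, tok_str))
    return tokenize.untokenize(result)
```

An import-time version would run this over each source file before compilation, so developers would not need a separate translation step between editing and testing; untokenize() in this two-tuple mode preserves the tokens but not the exact spacing, which is fine for code that is only fed to compile().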
> > I know that this was just a comment on Reddit and was not in the PEP, but it > smacks of you throwing all your toys out of the pram. It certainly wasn't a > reasoned response to my point. And some of that toys-pram attitude bleeds > through into the language of the PEP, leading others to make the criticisms that > they have. A PEP is supposed to be balanced, reasonable and thought through. > It's not supposed to gloss over things in a hand-wavy sort of way - there's > still uncertainty in my mind, for example, whether the 3.2 hook will be a > 2to3-style tool that runs over a chunk of the whole project's codebase between > editing and running a test, or whether it's an import-time hook which only kicks > in on files that have just been edited in a development session. Which of these > it is might affect crucially the experience of someone wanting to support 3.2 > and 3.3 and 2.6+ - but that clearly isn't you, and you don't seem to have much > sympathy or patience with that constituency - we're all stick-in-the-muds who > want to work with Ubuntu LTS, rather than people subject to constraints imposed > by employers, clients, projects we depend on etc. > > In contrast, Nick made a more reasonable case when commenting on my preference > for unicode_literals (on this list, not on Reddit), by reminding me about how > unicode_literals changes the semantics of string literals, and this increases > the cognitive burden on developers. > > I'm not whinging about the PEP in this post - I've said elsewhere that I wasn't > opposed to it. I'm just trying to respond to your apparent bewilderment at some > of the reaction to the PEP. > > I have confidence that with your continued input and Nick's input, the wording > of the PEP can be made such that it doesn't ruffle quite so many feathers. I'm > looking forward to seeing the updates.
> > Regards, > > Vinay Sajip > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/ezio.melotti%40gmail.com > From stephen at xemacs.org Fri Mar 2 10:55:16 2012 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Fri, 02 Mar 2012 18:55:16 +0900 Subject: [Python-Dev] PEP 414 In-Reply-To: <33702B3F-B8A5-4A88-ACA0-A2C59F23D0F1@gmail.com> References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <20120229092856.2aeb9256@limelight.wooz.org> <33702B3F-B8A5-4A88-ACA0-A2C59F23D0F1@gmail.com> Message-ID: <87r4xbcoor.fsf@uwakimon.sk.tsukuba.ac.jp> Yury Selivanov writes: > Otherwise, many of its points are moot, and we need to raise a > question of rejecting it somehow. Yury, that's not going to happen. Guido made it quite clear that he agrees with those who consider this PEP useful, obvious, and safe, and the PEP *is* approved. There has been no hint of second thoughts, and AFAICS no reason why there would be. Please be patient, as Nick has taken on the next revision of this PEP with Armin's approval, and has indicated multiple times that he may take some time to actually do it because of other personal commitments. 
Regards, From lukasz at langa.pl Fri Mar 2 10:59:30 2012 From: lukasz at langa.pl (=?iso-8859-2?Q?=A3ukasz_Langa?=) Date: Fri, 2 Mar 2012 10:59:30 +0100 Subject: [Python-Dev] PEP 414 In-Reply-To: <4F5093D0.8020900@gmail.com> References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <4F4FE650.8060402@active-4.com> <40C3F3BA-54E7-4B39-B3FB-20BEE65EB1D7@gmail.com> <4F4FFDC8.9020105@active-4.com> <4F5093D0.8020900@gmail.com> Message-ID: <8A645697-647B-409D-9B68-7F9770BA20FB@langa.pl> On 2 Mar 2012, at 10:33, Ezio Melotti wrote: > Now, if the hook doesn't get in the way (AFAIU you just have to "install" it and it will do its work automatically), wouldn't it be better to use it in 3.3 too (especially considering that you will probably have to use it already for 3.1/3.2)? +1 -- Best regards, Łukasz Langa Senior Systems Architecture Engineer IT Infrastructure Department Grupa Allegro Sp. z o.o. -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephen at xemacs.org Fri Mar 2 11:12:17 2012 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Fri, 02 Mar 2012 19:12:17 +0900 Subject: [Python-Dev] Add a frozendict builtin type In-Reply-To: References: <61817B63-B0D8-46FA-8284-45F88266516D@gmail.com> Message-ID: <87pqcvcnwe.fsf@uwakimon.sk.tsukuba.ac.jp> Mark Janssen writes: > Since there's no way (even theoretical way) to completely secure anything > (remember the DVD protection wars?), there's no way there should be any > liability if reasonable diligence is performed to provide security where > expected (which is probably calculable to some %-age of assets > protected). That's not how the law works, sorry. Look up "consequential damages," "contributory negligence," and "attractive nuisance."
I'm not saying that anybody will lose *in* court, but one can surely be taken *to* court. If that happens to you, you've already lost (even if the other side can't win). > Open sourcing code could be said to be a disclaimer on any liability as > you're letting people know that you've got nothing you're trying to > conceal. Again, you seem to be revealing your ignorance of the law (not to mention security -- a safe is supposed to be secure even if the burglar has the blueprints). A comprehensive and presumably effective disclaimer is part of the license, but it's not clear that even that works. AFAIK such disclaimers are not well-tested in court. Guido is absolutely right. There is a risk here (not in the frozendict type, of course), but in distributing an allegedly effective sandbox. I doubt Victor as an individual doing research has a problem; the PSF is another matter. BTW, Larry Rosen's book on Open Source Licensing is a good reference. Andrew St. Laurent also has a book out, I like Larry's better but YMMV. From stefan_ml at behnel.de Fri Mar 2 11:19:37 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Fri, 02 Mar 2012 11:19:37 +0100 Subject: [Python-Dev] Assertion in _PyManagedBuffer_FromObject() Message-ID: Hi, I just stumbled over this assertion in _PyManagedBuffer_FromObject() in the latest Py3.3 branch: """ static PyObject * _PyManagedBuffer_FromObject(PyObject *base) { _PyManagedBufferObject *mbuf; mbuf = mbuf_alloc(); if (mbuf == NULL) return NULL; if (PyObject_GetBuffer(base, &mbuf->master, PyBUF_FULL_RO) < 0) { /* mbuf->master.obj must be NULL. */ Py_DECREF(mbuf); return NULL; } /* Assume that master.obj is a new reference to base.
*/ assert(mbuf->master.obj == base); return (PyObject *)mbuf; } """ I'm not saying that this is likely to happen, but I could imagine code that wants to use a different object for the cleanup than itself, possibly for keeping a certain kind of state when it delivers more than one buffer, or for remembering what kind of allocation was used, or ... Given that the buffer will eventually get released by the object pointed to by the view->obj field in the Py_buffer struct, is there a reason why it should be asserted that this is the same as the object that originally provided the buffer? Stefan From albl500 at york.ac.uk Fri Mar 2 11:55:59 2012 From: albl500 at york.ac.uk (Alex Leach) Date: Fri, 02 Mar 2012 10:55:59 +0000 Subject: [Python-Dev] Compiling Python on Linux with Intel's icc Message-ID: <5677871.1R3IvOX91B@metabuntu> Stefan Krah wrote: > Alex Leach wrote: > > I've managed to compile everything in the python distribution except for > > Modules/_ctypes/libffi/src/x86/ffi64.c. > > There is an issue for this: > > http://bugs.python.org/issue4130 Yes, I saw that bug report, but it looked dormant. It is. In 4 years it's only had one post (from you I see), and no proposed fix. The link you posted there is the same link I posted (somewhere) in my previous email... > > After compilation, there's a few tests that are consistently failing, mainly > > involved with floating point precision: test_cmath, test_math and test_float. > > I think you have to compile with "-fp-model strict". > Thanks, I'll give that a go and will report back! > > In general, please submit suspected bugs on http://bugs.python.org/ > (after searching the archives) and post things like speed comparisons on > python-list at python.org. > Thanks again. My only other concern is with distutils, as it doesn't support icc on a Xeon. However, numpy.distutils is almost compatible.
I've had to make some mods to the flags in numpy.distutils.intelccompiler and numpy.distutils.fcompiler.intel, but it would be nice if this support was also included in the global distutils... Can the numpy version be used in place of the standard distutils? Again, there's probably a more proper place to ask... I'll suggest patches for these numpy modules to the numpy devs, but it would be nice if the core python distutils supported icc too. Thanks for your time! Kind regards, Alex From merwok at netwok.org Fri Mar 2 12:07:54 2012 From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=) Date: Fri, 02 Mar 2012 12:07:54 +0100 Subject: [Python-Dev] Compiling Python on Linux with Intel's icc In-Reply-To: <5677871.1R3IvOX91B@metabuntu> References: <5677871.1R3IvOX91B@metabuntu> Message-ID: <4F50AA0A.4040808@netwok.org> Hi, Le 02/03/2012 11:55, Alex Leach a écrit : > My only other concern is with distutils, as it doesn't support > icc on a Xeon. Could you expand on that? distutils is supposed to support all unix-like C compilers. Regards From beamesleach at gmail.com Fri Mar 2 12:21:46 2012 From: beamesleach at gmail.com (Alex Leach) Date: Fri, 02 Mar 2012 11:21:46 +0000 Subject: [Python-Dev] Compiling Python on Linux with Intel's icc Message-ID: <6091576.PQIn2aBGG3@metabuntu> > Éric Araujo wrote: > > Could you expand on that? distutils is supposed to support all > unix-like C compilers. Packages that use the numpy distutils can be built with the following options:- $ python setup.py config --compiler=intelem --fcompiler=intelem build --compiler=intelem install This allows distutils to set the appropriate compile flags. Modules built with the normal distutils always raise warnings about unsupported flags, e.g. -fwrapv. icc needs to calm down a bit on certain floating point arithmetic optimisations, needing some option like '-fp-model strict', as Stefan suggested. '-xHost' is a good flag to use too, allowing icc to detect the CPU type, and use appropriate optimisations.
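The mods in question amount to rewriting the compiler's flag lists. A minimal sketch of that kind of filtering (the flag sets below are illustrative assumptions, not the exact lists numpy.distutils or icc use):

```python
# Illustrative flag sets; the real gcc-only and icc-specific lists differ.
GCC_ONLY_FLAGS = {'-fwrapv', '-fno-strict-aliasing', '-Wstrict-prototypes'}
ICC_EXTRA_FLAGS = ['-fp-model', 'strict', '-xHost']

def adapt_cflags_for_icc(cflags):
    # Drop flags icc warns about, then append the icc-friendly options.
    kept = [flag for flag in cflags.split() if flag not in GCC_ONLY_FLAGS]
    return kept + ICC_EXTRA_FLAGS
```

The same filtering would need to apply to CFLAGS, OPT and LDSHARED as read from the Makefile, which is roughly what the numpy.distutils intelccompiler module does.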
The only way I can build modules now is by using environment variables, e.g:- $ CC=icc CXX=icpc LD=xild AR=xiar python setup.py config build build_ext But this then uses gcc-specific flags when compiling. Cheers, Alex From victor.stinner at gmail.com Fri Mar 2 12:36:05 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Fri, 2 Mar 2012 12:36:05 +0100 Subject: [Python-Dev] Add a frozendict builtin type In-Reply-To: References: <61817B63-B0D8-46FA-8284-45F88266516D@gmail.com> <4F5016C4.5010501@gmail.com> Message-ID: > I think you should provide stronger arguments in each case why the > data needs to be truly immutable or read-only, rather than just using > a convention or an "advisory" API (like __private can be circumvented > but clearly indicates intent to the reader). I only know one use case for "truly immutable or read-only" object (frozendict, "read-only" type, read-only proxy, etc.): security. I know three modules using a C extension to implement read-only objects: zope.proxy, zope.security and mxProxy. pysandbox uses more ugly tricks to implement read-only proxies :-) Such modules are used to secure web applications for example. A frozendict type doesn't replace these modules but helps to implement security modules. http://www.egenix.com/products/python/mxBase/mxProxy/ http://pypi.python.org/pypi/zope.proxy http://pypi.python.org/pypi/zope.security Victor From stefan at bytereef.org Fri Mar 2 12:53:11 2012 From: stefan at bytereef.org (Stefan Krah) Date: Fri, 2 Mar 2012 12:53:11 +0100 Subject: [Python-Dev] Assertion in _PyManagedBuffer_FromObject() In-Reply-To: References: Message-ID: <20120302115311.GA13663@sleipnir.bytereef.org> Stefan Behnel wrote: > if (PyObject_GetBuffer(base, &mbuf->master, PyBUF_FULL_RO) < 0) { > /* mbuf->master.obj must be NULL. */ > Py_DECREF(mbuf); > return NULL; > } > > /* Assume that master.obj is a new reference to base.
*/ > assert(mbuf->master.obj == base); > I'm not saying that this is likely to happen, but I could imagine code that > wants to use a different object for the cleanup than itself, possibly for > keeping a certain kind of state when it delivers more than one buffer, or > for remembering what kind of allocation was used, or ... I /think/ a different cleanup object would be possible, but memoryview now has the m.obj attribute that lets you see easily which object the view actually references. That attribute would then point to the cleanup handler. Note that the complexity is such that I would have to go through the whole code again to be *sure* that it's possible. So I'd rather see that people just don't use such schemes (unless there is a storm of protest). The assumption is clearly documented in: http://docs.python.org/dev/c-api/buffer.html#Py_buffer http://docs.python.org/dev/c-api/typeobj.html#buffer-object-structures Since the Py_buffer.obj field was undocumented in 3.2, I think we're within our rights to restrict the field to the exporter. Stefan Krah From hs at ox.cx Fri Mar 2 12:58:56 2012 From: hs at ox.cx (Hynek Schlawack) Date: Fri, 2 Mar 2012 12:58:56 +0100 Subject: [Python-Dev] PEP 414 In-Reply-To: <4F5093D0.8020900@gmail.com> References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <4F4FE650.8060402@active-4.com> <40C3F3BA-54E7-4B39-B3FB-20BEE65EB1D7@gmail.com> <4F4FFDC8.9020105@active-4.com> <4F5093D0.8020900@gmail.com> Message-ID: <7DDD8831-6759-4E42-AF1D-95830C814412@ox.cx> Hi Ezio, On 02.03.2012, at 10:33, Ezio Melotti wrote: >> [quoting Armin from Reddit] >> "Because in all honesty, because string wrappers make a codebase horrible to >> work with. I will have to continue maintaining 2.x versions for at least another >> four or five years.
The idea if having to use string wrappers for that long >> makes me puke." > Reading this led me to think the following: > * 2.5 is now available basically everywhere, and it was released almost 5 years ago (Sep 2006); > * if it takes the same time for 3.3, it will be widespread after 4-5 years (i.e. 2016-2017) [0]; > * if you want to target a Python 3 version that is widespread [1], you will want to support 3.1/3.2 too in the meanwhile; > * therefore you will have to use the hook on 3.1/3.2; > * in 2016-2017 you'll finally be able to drop 3.1/3.2 and use only 3.3 without hooks; > * in 2016-2017 you'll also stop maintaining the 2.x version (according to that quote); > * if you are not maintaining 2.x anymore, you won't need u'' -- right when you could finally stop using the hook; I don't think you can compare 2.5 and 3.2 like that. Although 3.2 is/will be shipped with some distributions, it never has, and never will have, the adoption of 2.5 that was "mainstream" for quite a long time. 3.3 is IMHO the first 3.x release that brings really cool stuff to the table and might be the tipping point for people to start embracing Python 3 - despite the fact that Ubuntu LTS will alas ship 3.2 for the next 10 years. I hope for some half-official back port there. :) Re the language thingie (not directed towards Ezio): It's true that Armin tends to be opinionated - maybe even polemic. However I can't recall a case where he personally attacked people like it happened here. Regards, Hynek From stefan at bytereef.org Fri Mar 2 13:07:19 2012 From: stefan at bytereef.org (Stefan Krah) Date: Fri, 2 Mar 2012 13:07:19 +0100 Subject: [Python-Dev] Compiling Python on Linux with Intel's icc In-Reply-To: <25114031.N1TajLbDrU@metabuntu> References: <25114031.N1TajLbDrU@metabuntu> Message-ID: <20120302120719.GA14084@sleipnir.bytereef.org> Alex Leach wrote: > > http://bugs.python.org/issue4130 > > Yes, I saw that bug report, but it looked dormant.
If a bug report is dormant, you have to wake it up by subscribing to the issue and leaving a comment. The particular case is a low priority issue since icc defines __GNUC__ and should therefore support the types in question. Stefan Krah From ncoghlan at gmail.com Fri Mar 2 13:25:08 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 2 Mar 2012 22:25:08 +1000 Subject: [Python-Dev] Assertion in _PyManagedBuffer_FromObject() In-Reply-To: References: Message-ID: On Fri, Mar 2, 2012 at 8:19 PM, Stefan Behnel wrote: > I'm not saying that this is likely to happen, but I could imagine code that > wants to use a different object for the cleanup than itself, possibly for > keeping a certain kind of state when it delivers more than one buffer, or > for remembering what kind of allocation was used, or ... Supporting that kind of behaviour is what the "internal" field is for. However, given the lack of control, an assert() isn't the appropriate tool here - PyObject_GetBuffer itself should be *checking* the constraint and then reporting an error if the check fails. Otherwise a misbehaving extension module could trivially crash the Python interpreter by returning a bad Py_buffer. Regards, Nick -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From stefan_ml at behnel.de Fri Mar 2 13:35:42 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Fri, 02 Mar 2012 13:35:42 +0100 Subject: [Python-Dev] Assertion in _PyManagedBuffer_FromObject() In-Reply-To: <20120302115311.GA13663@sleipnir.bytereef.org> References: <20120302115311.GA13663@sleipnir.bytereef.org> Message-ID: Stefan Krah, 02.03.2012 12:53: > Stefan Behnel wrote: >> if (PyObject_GetBuffer(base, &mbuf->master, PyBUF_FULL_RO) < 0) { >> /* mbuf->master.obj must be NULL. */ >> Py_DECREF(mbuf); >> return NULL; >> } >> >> /* Assume that master.obj is a new reference to base.
*/ >> assert(mbuf->master.obj == base); > > >> I'm not saying that this is likely to happen, but I could imagine code that >> wants to use a different object for the cleanup than itself, possibly for >> keeping a certain kind of state when it delivers more than one buffer, or >> for remembering what kind of allocation was used, or ... > > I /think/ a different cleanup object would be possible, but memoryview now > has the m.obj attribute that lets you see easily which object the view > actually references. That attribute would then point to the cleanup handler. > > Note that the complexity is such that I would have to go through the whole > code again to be *sure* that it's possible. > > So I'd rather see that people just don't use such schemes (unless there > is a storm of protest). > > The assumption is clearly documented in: > > http://docs.python.org/dev/c-api/buffer.html#Py_buffer > http://docs.python.org/dev/c-api/typeobj.html#buffer-object-structures > > Since the Py_buffer.obj field was undocumented in 3.2, I think we're within > our rights to restrict the field to the exporter. Careful. There are tons of code out there that use the buffer interface, and the "obj" field has been the way to handle the buffer release ever since the interface actually worked (somewhere around the release of Py3.0, IIRC). Personally, I never read the documentation above (which was written way after the design and implementation of the buffer interface). I initially looked at the (outdated) PEP, and then switched to reading the code once it started to divert substantially from the PEP. I'm sure there are many users out there who have never seen the second link above, and still some who aren't aware that the "exporting object" in the first link is required to be identical with the one that "__getbuffer__()" was called on. Just think of an object that acts as a façade to different buffers. I'm well aware of the complexity of the implementation.
However, even if the assert was (appropriately, as Nick noted) replaced by an exception, it's still not all that unlikely that it breaks user code (assuming that it currently works). The decision to enforce this restriction should not be taken lightly. Stefan From stefan at bytereef.org Fri Mar 2 13:55:40 2012 From: stefan at bytereef.org (Stefan Krah) Date: Fri, 2 Mar 2012 13:55:40 +0100 Subject: [Python-Dev] Assertion in _PyManagedBuffer_FromObject() In-Reply-To: References: Message-ID: <20120302125540.GA14210@sleipnir.bytereef.org> Nick Coghlan wrote: > However, given the lack of control, an assert() isn't the appropriate > tool here - PyObject_GetBuffer itself should be *checking* the > constraint and then reporting an error if the check fails. Otherwise a > misbehaving extension module could trivially crash the Python > interpreter by returning a bad Py_buffer. I'm not so sure. Extension modules that use the C-API in wrong or undocumented ways can always crash the interpreter. This assert() should be triggered in the first unit test of the module. Now, if the module does not have unit tests or they don't test against a new Python version is that really our problem? Modules do need to be recompiled anyway due to the removal of Py_buffer.smalltable, otherwise they will almost certainly crash. Perhaps an addition to whatsnew/3.3 would be sufficient. Stefan Krah From ncoghlan at gmail.com Fri Mar 2 14:22:30 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 2 Mar 2012 23:22:30 +1000 Subject: [Python-Dev] Assertion in _PyManagedBuffer_FromObject() In-Reply-To: <20120302125540.GA14210@sleipnir.bytereef.org> References: <20120302125540.GA14210@sleipnir.bytereef.org> Message-ID: On Fri, Mar 2, 2012 at 10:55 PM, Stefan Krah wrote: > Nick Coghlan wrote: >> However, given the lack of control, an assert() isn't the appropriate >> tool here - PyObject_GetBuffer itself should be *checking* the >> constraint and then reporting an error if the check fails. 
Otherwise a >> misbehaving extension module could trivially crash the Python >> interpreter by returning a bad Py_buffer. > > I'm not so sure. Extension modules that use the C-API in wrong or > undocumented ways can always crash the interpreter. This assert() > should be triggered in the first unit test of the module. Now, if > the module does not have unit tests or they don't test against a > new Python version, is that really our problem? Crashing out with a C assert when we can easily give them a nice Python traceback instead is unnecessarily unfriendly. As Stefan Behnel pointed out, by tightening up the API semantics, we're already running the risk of breaking applications that relied on looking at what the old code *did*, since it clearly deviated from both spec (the PEP) and the documentation (which didn't explain how ReleaseBuffer works at all). > Modules do need to be recompiled anyway due to the removal of > Py_buffer.smalltable, otherwise they will almost certainly crash. > Perhaps an addition to whatsnew/3.3 would be sufficient. That, updating the 2.7 and 3.2 docs with a reference to the fleshed out 3.3 semantics and converting the assert() to a Python exception should cover it. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From stefan at bytereef.org Fri Mar 2 14:33:39 2012 From: stefan at bytereef.org (Stefan Krah) Date: Fri, 2 Mar 2012 14:33:39 +0100 Subject: [Python-Dev] Assertion in _PyManagedBuffer_FromObject() In-Reply-To: References: <20120302115311.GA13663@sleipnir.bytereef.org> Message-ID: <20120302133339.GA14528@sleipnir.bytereef.org> Stefan Behnel wrote: > > http://docs.python.org/dev/c-api/buffer.html#Py_buffer > > http://docs.python.org/dev/c-api/typeobj.html#buffer-object-structures > > > > Since the Py_buffer.obj field was undocumented in 3.2, I think we're within > > our rights to restrict the field to the exporter. > > Careful.
There are tons of code out there that use the buffer interface, > and the "obj" field has been the way to handle the buffer release ever > since the interface actually worked (somewhere around the release of Py3.0, > IIRC). > > Personally, I never read the documentation above (which was written way > after the design and implementation of the buffer interface). The documentation has been largely re-written for 3.3. > looked at the (outdated) PEP, and then switched to reading the code once it > started to diverge substantially from the PEP. I'm sure there are many users > out there who have never seen the second link above, and still some who > aren't aware that the "exporting object" in the first link is required to > be identical with the one that "__getbuffer__()" was called on. Just think > of an object that acts as a façade to different buffers. That's exactly what the ndarray test object from Modules/_testbuffer.c can do. You can push new buffers onto a linked list and present different ones to each consumer. [Note that IMO that's a questionable design, but it's a test object.] The recommended way of keeping track of resources is to use Py_buffer.internal. I think that part is also appropriately mentioned in the original PEP, though I can perfectly understand if someone misses it due to the huge amount of information that needs to be absorbed. > it's still not all that unlikely that it breaks user code (assuming that it > currently works). The decision to enforce this restriction should not be > taken lightly. As I said, user code using the (also undocumented) Py_buffer.smalltable will also be broken.
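For readers who only know the Python side: the resource-tracking contract is observable from pure Python as well. A small sketch of the release semantics:

```python
# While a buffer is exported, the exporter must not be resized;
# releasing the view lifts that restriction.
buf = bytearray(b'abc')
m = memoryview(buf)
try:
    buf.extend(b'xyz')        # resize attempt with a live export
except BufferError:
    pass                      # refused while the view exists
m.release()
buf.extend(b'xyz')            # fine once the view is released
assert bytes(buf) == b'abcxyz'
```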
Stefan Krah From ezio.melotti at gmail.com Fri Mar 2 14:37:32 2012 From: ezio.melotti at gmail.com (Ezio Melotti) Date: Fri, 02 Mar 2012 15:37:32 +0200 Subject: [Python-Dev] PEP 414 In-Reply-To: <7DDD8831-6759-4E42-AF1D-95830C814412@ox.cx> References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <4F4FE650.8060402@active-4.com> <40C3F3BA-54E7-4B39-B3FB-20BEE65EB1D7@gmail.com> <4F4FFDC8.9020105@active-4.com> <4F5093D0.8020900@gmail.com> <7DDD8831-6759-4E42-AF1D-95830C814412@ox.cx> Message-ID: <4F50CD1C.2090800@gmail.com> > Hi Ezio, > > On 02.03.2012 at 10:33, Ezio Melotti wrote: >> Reading this led me to think the following: >> * 2.5 is now available basically everywhere, and it was released almost 5 years ago (Sep 2006); >> * if it takes the same time for 3.3, it will be widespread after 4-5 years (i.e. 2016-2017) [0]; >> * if you want to target a Python 3 version that is widespread [1], you will want to support 3.1/3.2 too in the meanwhile; >> * therefore you will have to use the hook on 3.1/3.2; >> * in 2016-2017 you'll finally be able to drop 3.1/3.2 and use only 3.3 without hooks; >> * in 2016-2017 you'll also stop maintaining the 2.x version (according to that quote); >> * if you are not maintaining 2.x anymore, you won't need u'' -- right when you could finally stop using the hook; > I don't think you can compare 2.5 and 3.2 like that. Although 3.2 is/will be shipped with some distributions, it never has, and never will have, the adoption of 2.5 that was "mainstream" for quite a long time. But I don't think the adoption of 3.2 will affect the decisions that distros will take about 3.3. Even in the unlikely case that e.g.
Debian/RHEL make Python 3.3 available as soon as it's released, not everyone will immediately upgrade to the latest Debian or RHEL version. The point is that regardless of the current Python 3 situation, it will take a few years before 3.3 will be widely available on most machines. For example I work on a server where I have 3.1. When/if it is updated it will probably get 3.2, not 3.3 -- and this might happen in a couple of years. If I want 3.3 I will probably have to wait another couple of years. Other people might have to wait less time, others more. > 3.3 is IMHO the first 3.x release that brings really cool stuff to the table and might be the tipping point for people to start embracing Python 3 -- despite the fact that Ubuntu LTS will alas ship 3.2 for the next 10 years. I hope for some half-official back port there. :) I heard this about 3.1 and 3.2 too, and indeed they are both perfectly valid releases. The fact that 3.3 is even cooler doesn't mean that 3.1/3.2 are not cool. (I'm perfectly fine with the aforementioned server and 3.1, and currently I don't miss anything that is new in 3.2/3.3.) Best Regards, Ezio Melotti From regebro at gmail.com Fri Mar 2 14:49:13 2012 From: regebro at gmail.com (Lennart Regebro) Date: Fri, 2 Mar 2012 14:49:13 +0100 Subject: [Python-Dev] PEP 414 In-Reply-To: <4F50CD1C.2090800@gmail.com> References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <4F4FE650.8060402@active-4.com> <40C3F3BA-54E7-4B39-B3FB-20BEE65EB1D7@gmail.com> <4F4FFDC8.9020105@active-4.com> <4F5093D0.8020900@gmail.com> <7DDD8831-6759-4E42-AF1D-95830C814412@ox.cx> <4F50CD1C.2090800@gmail.com> Message-ID: Just my 2 cents on the PEP rewrite: u'' support is not just if you want to write code that doesn't use 2to3.
Even when you use 2to3 it is useful to be able to flag strings as binary, unicode or "native". //Lennart From stefan at bytereef.org Fri Mar 2 15:00:53 2012 From: stefan at bytereef.org (Stefan Krah) Date: Fri, 2 Mar 2012 15:00:53 +0100 Subject: [Python-Dev] Assertion in _PyManagedBuffer_FromObject() In-Reply-To: <20120302133339.GA14528@sleipnir.bytereef.org> References: <20120302115311.GA13663@sleipnir.bytereef.org> <20120302133339.GA14528@sleipnir.bytereef.org> Message-ID: <20120302140053.GA14860@sleipnir.bytereef.org> Stefan Krah wrote: > > Careful. There are tons of code out there that use the buffer interface, > > and the "obj" field has been the way to handle the buffer release ever > > since the interface actually worked (somewhere around the release of Py3.0, > > IIRC). > > > > Personally, I never read the documentation above (which was written way > > after the design and implementation of the buffer interface). > > The documentation has been largely re-written for 3.3. But even for 3.0 it's not obvious to me why 'obj' should refer to anything but the exporter: http://docs.python.org/release/3.0/c-api/typeobj.html "obj is the object to export ..."
Stefan Krah From storchaka at gmail.com Fri Mar 2 15:26:00 2012 From: storchaka at gmail.com (Serhiy Storchaka) Date: Fri, 02 Mar 2012 16:26:00 +0200 Subject: [Python-Dev] PEP 414 In-Reply-To: References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <4F4FE650.8060402@active-4.com> <40C3F3BA-54E7-4B39-B3FB-20BEE65EB1D7@gmail.com> <4F4FFDC8.9020105@active-4.com> <4F5093D0.8020900@gmail.com> <7DDD8831-6759-4E42-AF1D-95830C814412@ox.cx> <4F50CD1C.2090800@gmail.com> Message-ID: 02.03.12 15:49, Lennart Regebro wrote: > Just my 2 cents on the PEP rewrite: > > u'' support is not just if you want to write code that doesn't use > 2to3. Even when you use 2to3 it is useful to be able to flag strings as > binary, unicode or "native". What does "native" mean in a Python 3-only context? "Native" strings only have meaning if we consider Python 2 and Python 3 together. A "native" string is a text string which was binary in Python 2. There is a flag for such strings -- str(). From stefan_ml at behnel.de Fri Mar 2 15:39:22 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Fri, 02 Mar 2012 15:39:22 +0100 Subject: [Python-Dev] Assertion in _PyManagedBuffer_FromObject() In-Reply-To: References: <20120302125540.GA14210@sleipnir.bytereef.org> Message-ID: Nick Coghlan, 02.03.2012 14:22: > On Fri, Mar 2, 2012 at 10:55 PM, Stefan Krah wrote: >> Nick Coghlan wrote: >>> However, given the lack of control, an assert() isn't the appropriate >>> tool here - PyObject_GetBuffer itself should be *checking* the >>> constraint and then reporting an error if the check fails. Otherwise a >>> misbehaving extension module could trivially crash the Python >>> interpreter by returning a bad Py_buffer. >> >> I'm not so sure.
Extension modules that use the C-API in wrong or >> undocumented ways can always crash the interpreter. This assert() >> should be triggered in the first unit test of the module. Now, if >> the module does not have unit tests or they don't test against a >> new Python version is that really our problem? > > Crashing out with a C assert when we can easily give them a nice > Python traceback instead is unnecessarily unfriendly. As Stefan Behnel > pointed out, by tightening up the API semantics, we're already running > the risk of breaking applications that relied on looking at what the > old code *did*, since it clearly deviated from both spec (the PEP) and > the documentation (which didn't explain how ReleaseBuffer works at > all). > >> Modules do need to be recompiled anyway due to the removal of >> Py_buffer.smalltable, otherwise they will almost certainly crash. > >> Perhaps an addition to whatsnew/3.3 would be sufficient. > > That, updating the 2.7 and 3.2 docs with a reference to the fleshed > out 3.3 semantics and converting the assert() to a Python exception > should cover it. One problem here: if the code raises an exception, it should properly clean up after itself. Meaning that it must call PyBuffer_Release() on the already acquired buffer - thus proving that the code actually works, except that it decides to raise an exception. I keep failing to see the interest in making this an error in the first place. Why would the object that bf_getbuffer() is being called on have to be identical with the one that exports the buffer? Stefan From ncoghlan at gmail.com Fri Mar 2 15:53:47 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 3 Mar 2012 00:53:47 +1000 Subject: [Python-Dev] Assertion in _PyManagedBuffer_FromObject() In-Reply-To: References: <20120302125540.GA14210@sleipnir.bytereef.org> Message-ID: On Sat, Mar 3, 2012 at 12:39 AM, Stefan Behnel wrote: >> I keep failing to see the interest in making this an error in the first > place. 
Why would the object that bf_getbuffer() is being called on have to > be identical with the one that exports the buffer? OK, I misunderstood your suggestion. So you actually want to just remove the assert altogether, thus allowing delegation of the buffer API by defining *only* the getbuffer slot and setting obj to point to a different object? I don't see any obvious problems with that, either. It would need new test cases and some documentation updates, though. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From solipsis at pitrou.net Fri Mar 2 15:52:58 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 2 Mar 2012 15:52:58 +0100 Subject: [Python-Dev] Compiling Python on Linux with Intel's icc References: <1998737.Z68k5Lus0U@metabuntu> Message-ID: <20120302155258.39b7597e@pitrou.net> On Thu, 01 Mar 2012 18:39:19 +0000 Alex Leach wrote: > > Obviously, I was hoping to get a faster python, but the size of the final > binary is almost twice the size of the default Ubuntu version (5.2MB cf. > 2.7MB), which I thought might cause a startup overhead that leads to slower > execution times when running such a basic script. Did you compare the actual code sizes? The `size` command can help you with that. > *** TEST SCRIPT *** > $ cat ~/bin/timetest.py > > RANGE = 10000 > > print "running {0}^2 = {1} for loop iterations".format( RANGE,RANGE**2 ) > > for i in xrange(RANGE): > for j in xrange(RANGE): > i * j That's an extremely silly benchmark, unlikely to be representative of any actual Python workload. I suggest you try a less-trivial benchmark suite, such as: http://hg.python.org/benchmarks/ Regards Antoine. 
From stefan at bytereef.org Fri Mar 2 16:30:57 2012 From: stefan at bytereef.org (Stefan Krah) Date: Fri, 2 Mar 2012 16:30:57 +0100 Subject: [Python-Dev] Assertion in _PyManagedBuffer_FromObject() In-Reply-To: References: <20120302125540.GA14210@sleipnir.bytereef.org> Message-ID: <20120302153057.GA14973@sleipnir.bytereef.org> Stefan Behnel wrote: > I keep failing to see the interest in making this an error in the first > place. First, it is meant to guard against random pointers in the view.obj field, precisely because view.obj was undocumented and exporters might not fill in the field. Then, as I said, the exporter is exposed on the Python level now: >>> exporter = b'123' >>> x = memoryview(exporter) >>> x.obj == exporter True >>> x.obj b'123' > Why would the object that bf_getbuffer() is being called on have to > be identical with the one that exports the buffer? It doesn't have to be. This is now possible: >>> from _testbuffer import * >>> exporter = b'123' >>> nd = ndarray(exporter) >>> m = memoryview(nd) >>> nd.obj b'123' >>> m.obj Stefan Krah From albl500 at york.ac.uk Fri Mar 2 17:29:39 2012 From: albl500 at york.ac.uk (Alex Leach) Date: Fri, 02 Mar 2012 16:29:39 +0000 Subject: [Python-Dev] Compiling Python on Linux with Intel's icc In-Reply-To: <20120302155258.39b7597e@pitrou.net> Message-ID: On 02/03/2012 14:52, "Antoine Pitrou" wrote: > >Did you compare the actual code sizes? The `size` command can help you >with that. I'd never used `size` before... Thanks for the tip; looks like the Intel build is actually smaller..? :/ # ICC version (`ls -lh` ==> 4.7MB) $ size ./python text data bss dec hex filename 1659760 276904 63760 2000424 1e8628 ./python # System version (`ls -lhH` ==>2.7MB) $ size /usr/bin/python text data bss dec hex filename 2303805 427728 74808 2806341 2ad245 /usr/bin/python I definitely don't get what's going on here! Does this information relate to linked objects being in shared or static libs? 
Is this indicative of anything, either good or bad? > >That's an extremely silly benchmark, unlikely to be representative of >any actual Python workload. I suggest you try a less-trivial benchmark >suite, such as: http://hg.python.org/benchmarks/ lol, yes it is a silly benchmark! Still, when I first started compiling python, without any optimisation options, this silly little script took up to 6-8x more time to process than the default GCC version (~17s cf. <3s). And the script hardly took that long to write! Thanks for the benchmark recommendation; I'll use that on the next build - hopefully after passing the math tests! Cheers, Alex > From stefan at bytereef.org Fri Mar 2 17:42:26 2012 From: stefan at bytereef.org (Stefan Krah) Date: Fri, 2 Mar 2012 17:42:26 +0100 Subject: [Python-Dev] Assertion in _PyManagedBuffer_FromObject() In-Reply-To: <20120302153057.GA14973@sleipnir.bytereef.org> References: <20120302125540.GA14210@sleipnir.bytereef.org> <20120302153057.GA14973@sleipnir.bytereef.org> Message-ID: <20120302164226.GA15907@sleipnir.bytereef.org> Stefan Krah wrote: > > Why would the object that bf_getbuffer() is being called on have to > > be identical with the one that exports the buffer? > > It doesn't have to be. This is now possible: > > >>> from _testbuffer import * > >>> exporter = b'123' > >>> nd = ndarray(exporter) > >>> m = memoryview(nd) > >>> nd.obj > b'123' > >>> m.obj > Stefan (Behnel), do you have an existing example object that does what you described? If I understand correctly, in the above example the ndarray would redirect the buffer request to 'exporter' and set m.obj to 'exporter'. It would be nice to know if people are actually using this. The reason why this scheme was not chosen for a chain of memoryviews was that 'exporter' (in theory) could implement a slideshow of buffers, which means that in the face of redirecting requests m might not be equal to nd.
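For a simple exporter like bytes (which always hands out the same buffer), a chain of views does behave predictably -- a minimal sketch of the non-slideshow case:

```python
# A memoryview of a memoryview: with a plain bytes exporter,
# both views expose the same underlying data.
exporter = b'123'
m1 = memoryview(exporter)
m2 = memoryview(m1)
assert m2.tobytes() == exporter
```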
Stefan Krah From solipsis at pitrou.net Fri Mar 2 17:40:41 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 2 Mar 2012 17:40:41 +0100 Subject: [Python-Dev] Compiling Python on Linux with Intel's icc References: <20120302155258.39b7597e@pitrou.net> Message-ID: <20120302174041.17f078be@pitrou.net> On Fri, 02 Mar 2012 16:29:39 +0000 Alex Leach wrote: > > > >Did you compare the actual code sizes? The `size` command can help you > >with that. > > I'd never used `size` before... Thanks for the tip; looks like the Intel > build is actually smaller..? :/ > > # ICC version (`ls -lh` ==> 4.7MB) > $ size ./python > text data bss dec hex filename > 1659760 276904 63760 2000424 1e8628 ./python > > # System version (`ls -lhH` ==>2.7MB) > $ size /usr/bin/python > text data bss dec hex filename > 2303805 427728 74808 2806341 2ad245 /usr/bin/python > > I definitely don't get what's going on here! Does this information relate > to linked objects being in shared or static libs? Is this indicative > anything, either good or bad? Mmmh, your system version might have been compiled with different options, so you may want to compare with a hand-compiled gcc build. The "text" column gives you the code size. Arguably, a smaller code size will make the instruction cache more efficient. cheers Antoine. From stefan at bytereef.org Fri Mar 2 17:58:38 2012 From: stefan at bytereef.org (Stefan Krah) Date: Fri, 2 Mar 2012 17:58:38 +0100 Subject: [Python-Dev] Compiling Python on Linux with Intel's icc In-Reply-To: References: <25114031.N1TajLbDrU@metabuntu> <20120302120719.GA14084@sleipnir.bytereef.org> Message-ID: <20120302165838.GA16028@sleipnir.bytereef.org> Alex Leach wrote: > Can you translate Intel's suggestion into a patch for ffi64? Well probably, but this really belongs on the bug tracker. Also, as I said, there are many issues with higher priority. 
Stefan Krah From status at bugs.python.org Fri Mar 2 18:07:37 2012 From: status at bugs.python.org (Python tracker) Date: Fri, 2 Mar 2012 18:07:37 +0100 (CET) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20120302170737.91D8D1CEEE@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2012-02-24 - 2012-03-02) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. Issues counts and deltas: open 3299 (+22) closed 22660 (+49) total 25959 (+71) Open issues with patches: 1409 Issues opened (57) ================== #12151: test_logging fails sometimes http://bugs.python.org/issue12151 reopened by nadeem.vawda #14080: Sporadic test_imp failure http://bugs.python.org/issue14080 reopened by skrah #14110: FreeBSD: test_os fails if user is in the wheel group http://bugs.python.org/issue14110 opened by skrah #14111: IDLE Debugger should handle interrupts http://bugs.python.org/issue14111 opened by ltaylor934 #14112: tutorial intro talks of "shallow copy" concept without explana http://bugs.python.org/issue14112 opened by tshepang #14114: 2.7.3rc1 chm gives JS error http://bugs.python.org/issue14114 opened by loewis #14115: 2.7.3rc hangs on test_asynchat on 32-bit Windows http://bugs.python.org/issue14115 opened by loewis #14116: Lock.__enter__() method returns True instead of self http://bugs.python.org/issue14116 opened by sbt #14117: Turtledemo: exception and minor glitches. http://bugs.python.org/issue14117 opened by terry.reedy #14119: Ability to adjust queue size in Executors http://bugs.python.org/issue14119 opened by Nam.Nguyen #14120: ARM Ubuntu 3.x buildbot failing test_dbm http://bugs.python.org/issue14120 opened by nadeem.vawda #14121: add a convenience C-API function for unpacking iterables http://bugs.python.org/issue14121 opened by scoder #14122: operator: div() instead of truediv() in documention since 3.1. 
http://bugs.python.org/issue14122 opened by felixantoinefortin #14123: Indicate that there are no current plans to deprecate printf-s http://bugs.python.org/issue14123 opened by telephonebook #14124: _pickle.c comment/documentation improvement http://bugs.python.org/issue14124 opened by valhallasw #14126: Speed up list comprehensions by preallocating the list where p http://bugs.python.org/issue14126 opened by alex #14127: os.stat and os.utime: allow preserving exact metadata http://bugs.python.org/issue14127 opened by larry #14128: _elementtree should expose types and factory functions consist http://bugs.python.org/issue14128 opened by eli.bendersky #14130: memoryview: add multi-dimensional indexing and slicing http://bugs.python.org/issue14130 opened by skrah #14131: test_threading failure on WIndows 7 3.x buildbot http://bugs.python.org/issue14131 opened by ncoghlan #14132: Redirect is not working correctly in urllib2 http://bugs.python.org/issue14132 opened by janik #14133: improved PEP 409 implementation http://bugs.python.org/issue14133 opened by benjamin.peterson #14134: xmlrpc.client.ServerProxy needs timeout parameter http://bugs.python.org/issue14134 opened by polymorphm #14135: check for locale changes in test.regrtest http://bugs.python.org/issue14135 opened by brett.cannon #14136: Simplify PEP 409 command line test and move it to test_cmd_lin http://bugs.python.org/issue14136 opened by ncoghlan #14139: test_ftplib: segfault http://bugs.python.org/issue14139 opened by skrah #14140: packaging tests: add helpers to create and inspect a tree of f http://bugs.python.org/issue14140 opened by eric.araujo #14141: 2.7.2 64-bit Windows library has __impt_Py* for several symbol http://bugs.python.org/issue14141 opened by Steve.McConnel #14142: getlocale(LC_ALL) behavior http://bugs.python.org/issue14142 opened by skrah #14143: test_ntpath failure on Windows http://bugs.python.org/issue14143 opened by nadeem.vawda #14144: urllib2 HTTPRedirectHandler not 
forwarding POST data in redire http://bugs.python.org/issue14144 opened by crustymonkey #14146: IDLE: source line in editor doesn't highlight when debugging http://bugs.python.org/issue14146 opened by Rich.Rauenzahn #14148: Option to kill "stuck" workers in a multiprocessing pool http://bugs.python.org/issue14148 opened by pmoore #14149: argparse: Document how to use argument names that are not Pyth http://bugs.python.org/issue14149 opened by Joseph.Birr-Pixton #14150: AIX, crash loading shared module into another process than pyt http://bugs.python.org/issue14150 opened by Jan.St??rtz #14151: multiprocessing.connection.Listener fails with invalid address http://bugs.python.org/issue14151 opened by Popa.Claudiu #14154: reimplement the bigmem test memory watchdog as a subprocess http://bugs.python.org/issue14154 opened by neologix #14156: argparse.FileType for '-' doesn't work for a mode of 'rb' http://bugs.python.org/issue14156 opened by anacrolix #14157: time.strptime without a year fails on Feb 29 http://bugs.python.org/issue14157 opened by Martin.Morrison #14158: test_mailbox fails if file or dir named by support.TESTFN exis http://bugs.python.org/issue14158 opened by vinay.sajip #14160: TarFile.extractfile fails to extract targets of top-level rela http://bugs.python.org/issue14160 opened by Matthew.Miller #14161: python2 file __repr__ does not escape filename http://bugs.python.org/issue14161 opened by Ronny.Pfannschmidt #14162: PEP 416: Add a builtin frozendict type http://bugs.python.org/issue14162 opened by haypo #14163: tkinter: problems with hello doc example http://bugs.python.org/issue14163 opened by terry.reedy #14166: private dispatch table for picklers http://bugs.python.org/issue14166 opened by sbt #14167: document return statement in finally blocks http://bugs.python.org/issue14167 opened by Yury.Selivanov #14168: Bug in minidom 3.3 after optimization patch http://bugs.python.org/issue14168 opened by vinay.sajip #14169: compiler.compile fails on 
"if" statement in attached file http://bugs.python.org/issue14169 opened by menegazzobr #14170: print unicode string error in win8 cmd console http://bugs.python.org/issue14170 opened by nkxyz #14171: warnings from valgrind about openssl as used by CPython http://bugs.python.org/issue14171 opened by zooko #14172: ref-counting leak in buffer usage in Python/marshal.c http://bugs.python.org/issue14172 opened by scoder #14173: PyOS_FiniInterupts leaves signal.getsignal segfaulty http://bugs.python.org/issue14173 opened by ferringb #14174: argparse.REMAINDER fails to parse remainder correctly http://bugs.python.org/issue14174 opened by rr2do2 #14176: Fix unicode literals (for PEP 414) http://bugs.python.org/issue14176 opened by Jean-Michel.Fauth #14177: marshal.loads accepts unicode strings http://bugs.python.org/issue14177 opened by pitrou #14178: Failing tests for ElementTree http://bugs.python.org/issue14178 opened by scoder #1346572: Remove inconsistent behavior between import and zipimport http://bugs.python.org/issue1346572 reopened by eric.araujo Most recent 15 issues with no replies (15) ========================================== #14178: Failing tests for ElementTree http://bugs.python.org/issue14178 #14177: marshal.loads accepts unicode strings http://bugs.python.org/issue14177 #14174: argparse.REMAINDER fails to parse remainder correctly http://bugs.python.org/issue14174 #14171: warnings from valgrind about openssl as used by CPython http://bugs.python.org/issue14171 #14169: compiler.compile fails on "if" statement in attached file http://bugs.python.org/issue14169 #14166: private dispatch table for picklers http://bugs.python.org/issue14166 #14160: TarFile.extractfile fails to extract targets of top-level rela http://bugs.python.org/issue14160 #14151: multiprocessing.connection.Listener fails with invalid address http://bugs.python.org/issue14151 #14143: test_ntpath failure on Windows http://bugs.python.org/issue14143 #14142: getlocale(LC_ALL) behavior 
http://bugs.python.org/issue14142 #14141: 2.7.2 64-bit Windows library has __impt_Py* for several symbol http://bugs.python.org/issue14141 #14140: packaging tests: add helpers to create and inspect a tree of f http://bugs.python.org/issue14140 #14135: check for locale changes in test.regrtest http://bugs.python.org/issue14135 #14130: memoryview: add multi-dimensional indexing and slicing http://bugs.python.org/issue14130 #14126: Speed up list comprehensions by preallocating the list where p http://bugs.python.org/issue14126 Most recent 15 issues waiting for review (15) ============================================= #14172: ref-counting leak in buffer usage in Python/marshal.c http://bugs.python.org/issue14172 #14167: document return statement in finally blocks http://bugs.python.org/issue14167 #14166: private dispatch table for picklers http://bugs.python.org/issue14166 #14163: tkinter: problems with hello doc example http://bugs.python.org/issue14163 #14162: PEP 416: Add a builtin frozendict type http://bugs.python.org/issue14162 #14161: python2 file __repr__ does not escape filename http://bugs.python.org/issue14161 #14158: test_mailbox fails if file or dir named by support.TESTFN exis http://bugs.python.org/issue14158 #14154: reimplement the bigmem test memory watchdog as a subprocess http://bugs.python.org/issue14154 #14151: multiprocessing.connection.Listener fails with invalid address http://bugs.python.org/issue14151 #14150: AIX, crash loading shared module into another process than pyt http://bugs.python.org/issue14150 #14144: urllib2 HTTPRedirectHandler not forwarding POST data in redire http://bugs.python.org/issue14144 #14136: Simplify PEP 409 command line test and move it to test_cmd_lin http://bugs.python.org/issue14136 #14134: xmlrpc.client.ServerProxy needs timeout parameter http://bugs.python.org/issue14134 #14133: improved PEP 409 implementation http://bugs.python.org/issue14133 #14132: Redirect is not working correctly in urllib2 
http://bugs.python.org/issue14132 Top 10 most discussed issues (10) ================================= #8706: accept keyword arguments on most base type methods and builtin http://bugs.python.org/issue8706 18 msgs #14080: Sporadic test_imp failure http://bugs.python.org/issue14080 17 msgs #11379: Remove "lightweight" from minidom description http://bugs.python.org/issue11379 16 msgs #14133: improved PEP 409 implementation http://bugs.python.org/issue14133 15 msgs #13405: Add DTrace probes http://bugs.python.org/issue13405 13 msgs #14127: os.stat and os.utime: allow preserving exact metadata http://bugs.python.org/issue14127 13 msgs #14112: tutorial intro talks of "shallow copy" concept without explana http://bugs.python.org/issue14112 11 msgs #1346572: Remove inconsistent behavior between import and zipimport http://bugs.python.org/issue1346572 9 msgs #2377: Replace __import__ w/ importlib.__import__ http://bugs.python.org/issue2377 8 msgs #14097: Improve the "introduction" page of the tutorial http://bugs.python.org/issue14097 8 msgs Issues closed (48) ================== #2394: [Py3k] Finish the memoryview object implementation http://bugs.python.org/issue2394 closed by skrah #2945: bdist_rpm does not list dist files (should effect upload) http://bugs.python.org/issue2945 closed by eric.araujo #6210: Exception Chaining missing method for suppressing context http://bugs.python.org/issue6210 closed by ncoghlan #9845: Allow changing the method in urllib.request.Request http://bugs.python.org/issue9845 closed by eric.araujo #10181: Problems with Py_buffer management in memoryobject.c (and else http://bugs.python.org/issue10181 closed by skrah #10713: re module doesn't describe string boundaries for \b http://bugs.python.org/issue10713 closed by ezio.melotti #11457: os.stat(): add new fields to get timestamps as Decimal objects http://bugs.python.org/issue11457 closed by larry #12903: test_io.test_interrupte[r]d* blocks on OpenBSD http://bugs.python.org/issue12903 
closed by neologix #12904: Change os.utime &c functions to use nanosecond precision where http://bugs.python.org/issue12904 closed by larry #12905: multiple errors in test_socket on OpenBSD http://bugs.python.org/issue12905 closed by neologix #13053: Add Capsule migration documentation to "cporting" http://bugs.python.org/issue13053 closed by larry #13086: Update howto/cporting.rst so it talks about Python 3 instead o http://bugs.python.org/issue13086 closed by larry #13125: test_all_project_files() expected failure http://bugs.python.org/issue13125 closed by pitrou #13167: Add get_metadata to packaging http://bugs.python.org/issue13167 closed by eric.araujo #13447: Add tests for some scripts in Tools/scripts http://bugs.python.org/issue13447 closed by eric.araujo #13491: Fixes for sqlite3 doc http://bugs.python.org/issue13491 closed by petri.lehtinen #13521: Make dict.setdefault() atomic http://bugs.python.org/issue13521 closed by pitrou #13706: non-ascii fill characters no longer work in formatting http://bugs.python.org/issue13706 closed by haypo #13716: distutils doc contains lots of XXX http://bugs.python.org/issue13716 closed by eric.araujo #13770: python3 & json: add ensure_ascii documentation http://bugs.python.org/issue13770 closed by eric.araujo #13873: SIGBUS in test_big_buffer() of test_zlib on Debian bigmem buil http://bugs.python.org/issue13873 closed by nadeem.vawda #13973: urllib.parse is imported twice in xmlrpc.client http://bugs.python.org/issue13973 closed by eric.araujo #13998: Lookbehind assertions go behind the start position for the mat http://bugs.python.org/issue13998 closed by ezio.melotti #13999: Queue references in multiprocessing doc points to Queue module http://bugs.python.org/issue13999 closed by sandro.tosi #14049: execfile() fails on files that use global variables inside fun http://bugs.python.org/issue14049 closed by terry.reedy #14081: Allow "maxsplit" argument to str.split() to be passed as a key 
http://bugs.python.org/issue14081 closed by ezio.melotti #14089: Patch to increase fractions lib test coverage http://bugs.python.org/issue14089 closed by ezio.melotti #14092: __name__ inconsistently applied in class definition http://bugs.python.org/issue14092 closed by terry.reedy #14095: type_new() removes __qualname__ from the input dictionary http://bugs.python.org/issue14095 closed by python-dev #14103: argparse: add ability to create a bash completion script http://bugs.python.org/issue14103 closed by eric.araujo #14107: Debian bigmem buildbot hanging in test_bigmem http://bugs.python.org/issue14107 closed by neologix #14108: test_shutil: failures in symlink tests http://bugs.python.org/issue14108 closed by pitrou #14109: test_lib2to3: output that looks like a failure on Windows 7 http://bugs.python.org/issue14109 closed by pitrou #14113: Failure in test_strptime on Windows http://bugs.python.org/issue14113 closed by nadeem.vawda #14118: _pickle.c structure cleanup http://bugs.python.org/issue14118 closed by loewis #14125: Windows: failures in refleak mode http://bugs.python.org/issue14125 closed by skrah #14129: Corrections for the "extending" doc http://bugs.python.org/issue14129 closed by python-dev #14137: GTK3 Segmentation fault from Warning: g_object_notify: asserti http://bugs.python.org/issue14137 closed by neologix #14138: Ctrl-C does not terminate GTK3 Gtk.main() loop when program ru http://bugs.python.org/issue14138 closed by neologix #14145: string.rfind() returns AttributeError: 'list' object has no at http://bugs.python.org/issue14145 closed by eric.smith #14147: print r"\" cause SyntaxError http://bugs.python.org/issue14147 closed by ezio.melotti #14152: setup.py: Python header file dependencies http://bugs.python.org/issue14152 closed by eric.araujo #14153: Expose os.device_encoding() at the C level http://bugs.python.org/issue14153 closed by brett.cannon #14155: Deja vu in re's documentation http://bugs.python.org/issue14155 closed by 
ezio.melotti #14159: __len__ method of weakset http://bugs.python.org/issue14159 closed by pitrou #14164: Hyphenation suggestions - floating-point/floating point http://bugs.python.org/issue14164 closed by brian.curtin #14165: The new shlex.quote() function should be marked "New in versio http://bugs.python.org/issue14165 closed by python-dev #14175: broken links on /download/ page http://bugs.python.org/issue14175 closed by georg.brandl From stefan_ml at behnel.de Fri Mar 2 18:14:33 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Fri, 02 Mar 2012 18:14:33 +0100 Subject: [Python-Dev] Assertion in _PyManagedBuffer_FromObject() In-Reply-To: <20120302164226.GA15907@sleipnir.bytereef.org> References: <20120302125540.GA14210@sleipnir.bytereef.org> <20120302153057.GA14973@sleipnir.bytereef.org> <20120302164226.GA15907@sleipnir.bytereef.org> Message-ID: Stefan Krah, 02.03.2012 17:42: > Stefan Krah wrote: >>> Why would the object that bf_getbuffer() is being called on have to >>> be identical with the one that exports the buffer? >> >> It doesn't have to be. This is now possible: >> >> >>> from _testbuffer import * >> >>> exporter = b'123' >> >>> nd = ndarray(exporter) >> >>> m = memoryview(nd) >> >>> nd.obj >> b'123' >> >>> m.obj >> > > Stefan (Behnel), do you have an existing example object that does > what you described? If I understand correctly, in the above example > the ndarray would redirect the buffer request to 'exporter' and > set m.obj to 'exporter'. Yes, that's a suitable example. It would take the ndarray out of the loop - after all, it has nothing to do with what the memoryview wants, and won't need to do any cleanup for the memoryview's buffer view either. Keeping it explicitly alive in the memoryview is just a waste of resources. It's also related to this issue, which asks for an equivalent at the Python level: http://bugs.python.org/issue13797 > It would be nice to know if people are actually using this. I'm not using this anywhere. 
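For reference, the `.obj` attribute under discussion is visible from pure Python with any buffer exporter; a minimal sketch using an ordinary bytearray rather than the private _testbuffer module quoted above:

```python
# memoryview.obj names the object that exported the underlying buffer.
ba = bytearray(b'123')
m = memoryview(ba)
print(m.obj is ba)   # True

# The view aliases the exporter's storage, so writes are shared.
m[0] = ord('X')
print(ba)            # bytearray(b'X23')

# release() drops the buffer and the reference pinning the exporter.
m.release()
```

In the redirection scheme discussed in this thread, the open question is precisely which object should end up in that `.obj` slot.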
My guess is that it would be more of a feature than something to provide legacy code support for, but I can't speak for anyone else. In general, the NumPy mailing list is a good place to ask about these things. > The reason why this scheme was not chosen for a chain of memoryviews > was that 'exporter' (in theory) could implement a slideshow of buffers, > which means that in the face of redirecting requests m might not be > equal to nd. Right. Then it's only safe when the intermediate provider knows what the underlying buffer providers do. Not unlikely in an application setting, though, and it could just be an option at creation time to activate the delegation for the ndarray above. Stefan From thomas at python.org Fri Mar 2 20:11:30 2012 From: thomas at python.org (Thomas Wouters) Date: Fri, 2 Mar 2012 11:11:30 -0800 Subject: [Python-Dev] Assertion in _PyManagedBuffer_FromObject() In-Reply-To: References: <20120302125540.GA14210@sleipnir.bytereef.org> Message-ID: On Fri, Mar 2, 2012 at 05:22, Nick Coghlan wrote: > On Fri, Mar 2, 2012 at 10:55 PM, Stefan Krah wrote: > > Nick Coghlan wrote: > >> However, given the lack of control, an assert() isn't the appropriate > >> tool here - PyObject_GetBuffer itself should be *checking* the > >> constraint and then reporting an error if the check fails. Otherwise a > >> misbehaving extension module could trivially crash the Python > >> interpreter by returning a bad Py_buffer. > > > > I'm not so sure. Extension modules that use the C-API in wrong or > > undocumented ways can always crash the interpreter. This assert() > > should be triggered in the first unit test of the module. Now, if > > the module does not have unit tests or they don't test against a > > new Python version is that really our problem? > > Crashing out with a C assert when we can easily give them a nice > Python traceback instead is unnecessarily unfriendly. But you should keep in mind that for non-debug builds, asserts are generally off. 
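The point about non-debug builds has a direct Python-level analogue that is easy to try: just as C assert() compiles away when NDEBUG is defined, Python assert statements are stripped under -O. A small sketch:

```python
import subprocess
import sys

code = "assert False, 'boom'; print('check silently skipped')"

# Without -O the assertion fires and the child process fails...
plain = subprocess.run([sys.executable, "-c", code],
                       capture_output=True, text=True)
print(plain.returncode != 0)     # True

# ...with -O the same check is compiled away entirely.
optimized = subprocess.run([sys.executable, "-O", "-c", code],
                           capture_output=True, text=True)
print(optimized.stdout.strip())  # check silently skipped
```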
So the behaviour most people see isn't actually a crash, but silent acceptance. -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Fri Mar 2 20:41:07 2012 From: barry at python.org (Barry Warsaw) Date: Fri, 2 Mar 2012 14:41:07 -0500 Subject: [Python-Dev] PEP 414 In-Reply-To: References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <4F4FE650.8060402@active-4.com> <40C3F3BA-54E7-4B39-B3FB-20BEE65EB1D7@gmail.com> <4F4FFDC8.9020105@active-4.com> <5FBB9106-E12F-430B-BAA8-47C1289834E4@gmail.com> <4F5047CD.1040007@netwok.org> Message-ID: <20120302144107.1d6fb80b@resist.wooz.org> On Mar 02, 2012, at 02:48 PM, Nick Coghlan wrote: >Consider: an application that uses 8-bit strings everywhere and blows up on >non-ASCII data in Python 2 has at least a fighting chance to run unmodified >*and* handle Unicode properly on Python 3. Because unicode literals are gone, >a Unicode-aware Python 2 application currently has *no* chance to run >unmodified on Python 3. On its face, this statement is incorrect. It *might* be accurate if qualified by saying "a Unicode-aware Python 2 *web* application". I say "might" because I'm not an expert on web frameworks so I defer to those who are. It certainly can't be applied to the entire universe of Unicode-aware Python 2 applications. >Accordingly, I'd like to ask folks not to stress too much about the >precise wording until I get a chance to update it over the weekend :) /me takes a deep breath. 
:) -Barry From barry at python.org Fri Mar 2 20:44:52 2012 From: barry at python.org (Barry Warsaw) Date: Fri, 2 Mar 2012 14:44:52 -0500 Subject: [Python-Dev] PEP 414 In-Reply-To: <7DDD8831-6759-4E42-AF1D-95830C814412@ox.cx> References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <4F4FE650.8060402@active-4.com> <40C3F3BA-54E7-4B39-B3FB-20BEE65EB1D7@gmail.com> <4F4FFDC8.9020105@active-4.com> <4F5093D0.8020900@gmail.com> <7DDD8831-6759-4E42-AF1D-95830C814412@ox.cx> Message-ID: <20120302144452.0e981709@resist.wooz.org> On Mar 02, 2012, at 12:58 PM, Hynek Schlawack wrote: >3.3 is the IMHO the first 3.x release that brings really cool stuff to the >table and might be the tipping point for people to start embracing Python 3 ? >despite the fact that Ubuntu LTS will alas ship 3.2 for the next 10 years. I >hope for some half-official back port there. :) Although I disagree with the premise (I think Python 3.2 is a fine platform to build many applications on) it's probably likely what we'll have backports of stable Python 3 releases to 12.04, at the very least in semi-official PPAs. Just like today we're trying to provide a smoother path for LTS->LTS upgrades where 10.04 had only Python 2.6 but 12.04 has only Python 2.7. We have a semi-official Lucid PPA providing Python 2.7, though afaict very few people have actually used or tested it. 
Cheers, -Barry From chrism at plope.com Fri Mar 2 21:13:18 2012 From: chrism at plope.com (Chris McDonough) Date: Fri, 02 Mar 2012 15:13:18 -0500 Subject: [Python-Dev] PEP 414 In-Reply-To: <20120302144107.1d6fb80b@resist.wooz.org> References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <4F4FE650.8060402@active-4.com> <40C3F3BA-54E7-4B39-B3FB-20BEE65EB1D7@gmail.com> <4F4FFDC8.9020105@active-4.com> <5FBB9106-E12F-430B-BAA8-47C1289834E4@gmail.com> <4F5047CD.1040007@netwok.org> <20120302144107.1d6fb80b@resist.wooz.org> Message-ID: <1330719198.8772.137.camel@thinko> On Fri, 2012-03-02 at 14:41 -0500, Barry Warsaw wrote: > On Mar 02, 2012, at 02:48 PM, Nick Coghlan wrote: > > >Consider: an application that uses 8-bit strings everywhere and blows up on > >non-ASCII data in Python 2 has at least a fighting chance to run unmodified > >*and* handle Unicode properly on Python 3. Because unicode literals are gone, > >a Unicode-aware Python 2 application currently has *no* chance to run > >unmodified on Python 3. > > On its face, this statement is incorrect. > > It *might* be accurate if qualified by saying "a Unicode-aware Python 2 *web* > application". I say "might" because I'm not an expert on web frameworks so I > defer to those who are. It certainly can't be applied to the entire universe > of Unicode-aware Python 2 applications. FWIW, I think this issue's webness may be overestimated. There happens to be lots and lots of existing UI code which contains complex interactions between unicode literals and nonliterals in web apps, but there's also likely lots of nonweb code that has the same issue. If e.g. 
wxPython had already been ported, I think you'd be hearing the same sorts of things from folks that had investments in existing Python-2-compatible code when trying to port stuff to Py3 (at least if they wanted to run on both Python 2 and Python 3 within the same codebase). - C From hs at ox.cx Fri Mar 2 21:23:25 2012 From: hs at ox.cx (Hynek Schlawack) Date: Fri, 2 Mar 2012 21:23:25 +0100 Subject: [Python-Dev] PEP 414 In-Reply-To: <20120302144452.0e981709@resist.wooz.org> References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <4F4FE650.8060402@active-4.com> <40C3F3BA-54E7-4B39-B3FB-20BEE65EB1D7@gmail.com> <4F4FFDC8.9020105@active-4.com> <4F5093D0.8020900@gmail.com> <7DDD8831-6759-4E42-AF1D-95830C814412@ox.cx> <20120302144452.0e981709@resist.wooz.org> Message-ID: On 02.03.2012 at 20:44, Barry Warsaw wrote: >> 3.3 is the IMHO the first 3.x release that brings really cool stuff to the >> table and might be the tipping point for people to start embracing Python 3 - >> despite the fact that Ubuntu LTS will alas ship 3.2 for the next 10 years. I >> hope for some half-official back port there. :) > Although I disagree with the premise (I think Python 3.2 is a fine platform to > build many applications on) Just to be clear: I didn't say 3.2 is "bad" or "not fine". It's just the fact that people need more than "fine" to feel urged to switch to Python 3. I sincerely hope 3.3 fulfills that and if PEP 414 even makes porting easier we might have a perfect storm. :) > it's probably likely what we'll have backports of > stable Python 3 releases to 12.04, at the very least in semi-official PPAs. That's what I've been hoping for. Maybe it will work the other way around too: People like 3.3, target it first and port back later to reach more users.
It's all about encouraging people to try the nectar of Python 3 - once they're caught it's sticky sweetness[1]... ;) Cheers, Hynek [1] disclaimer: sticky sweetness only applies if you're not a maintainer of wsgi-related middleware/framework From barry at python.org Fri Mar 2 21:39:51 2012 From: barry at python.org (Barry Warsaw) Date: Fri, 2 Mar 2012 15:39:51 -0500 Subject: [Python-Dev] PEP 414 In-Reply-To: <1330719198.8772.137.camel@thinko> References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <4F4FE650.8060402@active-4.com> <40C3F3BA-54E7-4B39-B3FB-20BEE65EB1D7@gmail.com> <4F4FFDC8.9020105@active-4.com> <5FBB9106-E12F-430B-BAA8-47C1289834E4@gmail.com> <4F5047CD.1040007@netwok.org> <20120302144107.1d6fb80b@resist.wooz.org> <1330719198.8772.137.camel@thinko> Message-ID: <20120302153951.670a42fe@resist.wooz.org> On Mar 02, 2012, at 03:13 PM, Chris McDonough wrote: >FWIW, I think this issue's webness may be overestimated. There happens to be >lots and lots of existing UI code which contains complex interactions between >unicode literals and nonliterals in web apps, but there's also likely lots of >nonweb code that has the same issue. If e.g. wxPython had already been >ported, I think you'd be hearing the same sorts of things from folks that had >investments in existing Python-2-compatible code when trying to port stuff to >Py3 (at least if they wanted to run on both Python 2 and Python 3 within the >same codebase). Okay, I just want to be very careful about the message we're sending here, because I think many libraries and applications will work fine with the facilities available in today's stable releases, i.e. unicode_literals and b-prefixes.
For these, there's no need to define "native strings", nor do they require language constructs above what's already available. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: not available URL: From barry at python.org Fri Mar 2 21:41:23 2012 From: barry at python.org (Barry Warsaw) Date: Fri, 2 Mar 2012 15:41:23 -0500 Subject: [Python-Dev] PEP 414 In-Reply-To: References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <4F4FE650.8060402@active-4.com> <40C3F3BA-54E7-4B39-B3FB-20BEE65EB1D7@gmail.com> <4F4FFDC8.9020105@active-4.com> <4F5093D0.8020900@gmail.com> <7DDD8831-6759-4E42-AF1D-95830C814412@ox.cx> <20120302144452.0e981709@resist.wooz.org> Message-ID: <20120302154123.4dc82971@resist.wooz.org> On Mar 02, 2012, at 09:23 PM, Hynek Schlawack wrote: >Just to be clear: I didn't say 3.2 is "bad" or "not fine". It's just the fact >that people need more than "fine" to feel urged to switch to Python 3. I >sincerely hope 3.3 fulfills that and if PEP 414 even makes porting easier we >might have a perfect storm. :) Cool, and yes reaching that tipping point is what it's all about. :) >> it's probably likely what we'll have backports of >> stable Python 3 releases to 12.04, at the very least in semi-official PPAs. > >That's what I've been hoping for. Maybe it will work the other way around >too: People like 3.3, target it first and port back later to reach more >users. It's all about encouraging people to try the nectar of Python 3 - once >they're caught it's sticky sweetness[1]... ;) Indeed!
-Barry From chrism at plope.com Fri Mar 2 21:50:56 2012 From: chrism at plope.com (Chris McDonough) Date: Fri, 02 Mar 2012 15:50:56 -0500 Subject: [Python-Dev] PEP 414 In-Reply-To: <20120302153951.670a42fe@resist.wooz.org> References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <4F4FE650.8060402@active-4.com> <40C3F3BA-54E7-4B39-B3FB-20BEE65EB1D7@gmail.com> <4F4FFDC8.9020105@active-4.com> <5FBB9106-E12F-430B-BAA8-47C1289834E4@gmail.com> <4F5047CD.1040007@netwok.org> <20120302144107.1d6fb80b@resist.wooz.org> <1330719198.8772.137.camel@thinko> <20120302153951.670a42fe@resist.wooz.org> Message-ID: <1330721456.8772.148.camel@thinko> On Fri, 2012-03-02 at 15:39 -0500, Barry Warsaw wrote: > On Mar 02, 2012, at 03:13 PM, Chris McDonough wrote: > > >FWIW, I think this issue's webness may be overestimated. There happens to be > >lots and lots of existing UI code which contains complex interactions between > >unicode literals and nonliterals in web apps, but there's also likely lots of > >nonweb code that has the same issue. If e.g. wxPython had already been > >ported, I think you'd be hearing the same sorts of things from folks that had > >investments in existing Python-2-compatible code when trying to port stuff to > >Py3 (at least if they wanted to run on both Python 2 and Python 3 within the > >same codebase). > > Okay, I just want to be very careful about the message we're sending here, > because I think many libraries and applications will work fine with the > facilities available in today's stable releases, i.e. unicode_literals and > b-prefixes. For these, there's no need to define "native strings", nor do > they require language constructs above what's already available. 
Although the change makes it possible, and it is very useful for very low level WSGI apps, the issue this change addresses really isn't 100% about "needing to define native strings". It's also just preservation of a resource in pretty short supply: developer energy. You will probably need to modify less code when taking a piece of software that currently runs on Python 2 and changing it so that it runs on both Python 2 and Python 3, without needing to worry over the unintended consequences of using a unicode_literals future import or replacing existing u'' with a function call. This, IMO, can only be a good thing, because the nominal impact of some future user who must now understand u'' syntax is (again IMO) not as consequential as that user having less software to choose from because porting to Python 3 was just that much harder for existing Python 2 developers. - C From regebro at gmail.com Fri Mar 2 23:03:25 2012 From: regebro at gmail.com (Lennart Regebro) Date: Fri, 2 Mar 2012 23:03:25 +0100 Subject: [Python-Dev] PEP 414 In-Reply-To: References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <4F4FE650.8060402@active-4.com> <40C3F3BA-54E7-4B39-B3FB-20BEE65EB1D7@gmail.com> <4F4FFDC8.9020105@active-4.com> <4F5093D0.8020900@gmail.com> <7DDD8831-6759-4E42-AF1D-95830C814412@ox.cx> <4F50CD1C.2090800@gmail.com> Message-ID: On Fri, Mar 2, 2012 at 15:26, Serhiy Storchaka wrote: > On 02.03.12 15:49, Lennart Regebro wrote: > >> Just my 2 cents on the PEP rewrite: >> >> u'' support is not just if you want to write code that doesn't use >> 2to3. Even when you use 2to3 it is useful to be able to flag strings as >> binary, unicode or "native". > > > What does "native" mean in a Python 3-only context? I don't understand your question.
> "Native" strings only have > meaning if we consider Python 2 and Python 3 together. "Native" string is a > text string, which was binary in Python 2. There is a flag for such strings > -- str(). Yes. From alex.nanou at gmail.com Sat Mar 3 00:06:33 2012 From: alex.nanou at gmail.com (Alex A. Naanou) Date: Sat, 3 Mar 2012 03:06:33 +0400 Subject: [Python-Dev] odd "tuple does not support assignment" confusion... Message-ID: Hi everyone, Just stumbled on a fun little thing: We create a simple structure... l = ([],) Now modify the list, and... l[0] += [1] ...we fail: ## Traceback (most recent call last): ## File "F:\work\ImageGrid\cur\ImageGrid\src\test\python-bug.py", line 17, in ## l[0] += [1] ## TypeError: 'tuple' object does not support item assignment Tested on 2.5, 2.7, 3.1, PyPy1.8 on win32 and 2.7 on x86-64 Debian (just in case). I was expecting this to succeed, is this picture wrong or am I missing something? ...am I really the first one to try and modify a list within a tuple directly?! It's even more odd that I did not try this myself since first started with Python back in 99 :) I could not google this "feature" out either... BTW, It is quite trivial (and obvious) to trick the interpreter to get the desired result... e = l[0] e += [1] P.S. the attachment is runnable version of the above code... -- Thanks! Alex. From hodgestar+pythondev at gmail.com Sat Mar 3 00:32:17 2012 From: hodgestar+pythondev at gmail.com (Simon Cross) Date: Sat, 3 Mar 2012 01:32:17 +0200 Subject: [Python-Dev] odd "tuple does not support assignment" confusion... In-Reply-To: References: Message-ID: l[0] += [1] is the same as l[0] = l[0] + [1] Does that make the reason for the error clearer? The problem is the attempt to assign a value to l[0]. It is not the same as e = l[0] e += [1] which is the equivalent to e = l[0] e = e + [1] This never assigns a value to l[0]. Schiavo Simon From rdmurray at bitdance.com Sat Mar 3 00:38:50 2012 From: rdmurray at bitdance.com (R. 
David Murray) Date: Fri, 02 Mar 2012 18:38:50 -0500 Subject: [Python-Dev] odd "tuple does not support assignment" confusion... In-Reply-To: References: Message-ID: <20120302233851.D3FAD2500E5@webabinitio.net> On Sat, 03 Mar 2012 03:06:33 +0400, "Alex A. Naanou" wrote: > Hi everyone, > > Just stumbled on a fun little thing: > > We create a simple structure... > > l = ([],) > > > Now modify the list, and... > > l[0] += [1] > > > ...we fail: > ## Traceback (most recent call last): > ## File "F:\work\ImageGrid\cur\ImageGrid\src\test\python-bug.py", > line 17, in > ## l[0] += [1] > ## TypeError: 'tuple' object does not support item assignment What is even more fun is that the append actually worked (try printing l). This is not a bug, it is a quirk of how extended assignment works. I think there's an issue report in the tracker somewhere that discusses it. --David From hodgestar+pythondev at gmail.com Sat Mar 3 00:42:45 2012 From: hodgestar+pythondev at gmail.com (Simon Cross) Date: Sat, 3 Mar 2012 01:42:45 +0200 Subject: [Python-Dev] odd "tuple does not support assignment" confusion... In-Reply-To: <20120302233851.D3FAD2500E5@webabinitio.net> References: <20120302233851.D3FAD2500E5@webabinitio.net> Message-ID: On Sat, Mar 3, 2012 at 1:38 AM, R. David Murray wrote: > What is even more fun is that the append actually worked (try printing > l). Now that is just weird. 
:) From ncoghlan at gmail.com Sat Mar 3 00:49:34 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 3 Mar 2012 09:49:34 +1000 Subject: [Python-Dev] Assertion in _PyManagedBuffer_FromObject() In-Reply-To: References: <20120302125540.GA14210@sleipnir.bytereef.org> <20120302153057.GA14973@sleipnir.bytereef.org> <20120302164226.GA15907@sleipnir.bytereef.org> Message-ID: On Sat, Mar 3, 2012 at 3:14 AM, Stefan Behnel wrote: > Stefan Krah, 02.03.2012 17:42: >> The reason why this scheme was not chosen for a chain of memoryviews >> was that 'exporter' (in theory) could implement a slideshow of buffers, >> which means that in the face of redirecting requests m might not be >> equal to nd. > > Right. Then it's only safe when the intermediate provider knows what the > underlying buffer providers do. Not unlikely in an application setting, > though, and it could just be an option at creation time to activate the > delegation for the ndarray above. OK, my take on the discussion so far: 1. assert() is the wrong tool for this job (it should trigger a Python error message) 2. the current check is too strict (it should just check for obj != NULL, not obj == &exporter) 3. the current check is in the wrong place (it should be in PyObject_GetBuffer) Sound about right? Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From tjreedy at udel.edu Sat Mar 3 00:57:35 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Fri, 02 Mar 2012 18:57:35 -0500 Subject: [Python-Dev] odd "tuple does not support assignment" confusion... In-Reply-To: References: Message-ID: On 3/2/2012 6:06 PM, Alex A. Naanou wrote: > Just stumbled on a fun little thing: The place for 'fun little things' is python-list, mirrored as gmane.comp.python.general. > We create a simple structure... > l = ([],) > Now modify the list, and... > l[0] += [1] > ...we fail: This has been discussed several times on python-list.
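The quirk itself comes down to the two steps hidden inside the augmented assignment: the list is mutated in place first, and only then does the failing store back into the tuple happen. A short demonstration:

```python
l = ([],)

# l[0] += [1] expands to roughly: tmp = l[0].__iadd__([1]); l[0] = tmp
# The __iadd__ call mutates the list in place; the second step is the
# tuple item assignment, which raises TypeError.
try:
    l[0] += [1]
except TypeError as exc:
    print(exc)   # 'tuple' object does not support item assignment

# ...and yet the append has already happened by the time the store fails:
print(l)         # ([1],)
```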
Searching group gmane.comp.python.general for 'augmented assignment tuple' at search.gmane.com returns about 50 matches. -- Terry Jan Reedy From martin at v.loewis.de Sat Mar 3 01:49:48 2012 From: martin at v.loewis.de (martin at v.loewis.de) Date: Sat, 03 Mar 2012 01:49:48 +0100 Subject: [Python-Dev] PEP 414 In-Reply-To: References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <4F4FE650.8060402@active-4.com> <40C3F3BA-54E7-4B39-B3FB-20BEE65EB1D7@gmail.com> <4F4FFDC8.9020105@active-4.com> <4F5093D0.8020900@gmail.com> <7DDD8831-6759-4E42-AF1D-95830C814412@ox.cx> <4F50CD1C.2090800@gmail.com> Message-ID: <20120303014948.Horde.jZkbfML8999PUWqsfvvWLEA@webmail.df.eu> Quoting Lennart Regebro: > Just my 2 cents on the PEP rewrite: > > u'' support is not just if you want to write code that doesn't use > 2to3. Even when you use 2to3 it is useful to be able to flag strings as > binary, unicode or "native". How so? In the Python 3 code, the u"" prefix would not appear, even if it appears in the original source, as 2to3 eliminates it. So you surely need the u"" prefix to distinguish binary, unicode, or native strings in your source - but with 2to3, the PEP 414 change is unnecessary.
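For context, what PEP 414 ultimately specifies for Python 3.3 is exactly this prefix, reinstated as a no-op so that single-codebase projects which skip 2to3 can keep it. A quick sketch (assuming Python 3.3 or later):

```python
# Under PEP 414 the u prefix parses again in Python 3.3+, but it is a
# no-op: a u'' literal denotes an ordinary str object.
s = u'caf\u00e9'
assert type(s) is str
assert s == 'caf\u00e9'
```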
Regards, Martin From vinay_sajip at yahoo.co.uk Sat Mar 3 02:53:42 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Sat, 3 Mar 2012 01:53:42 +0000 (UTC) Subject: [Python-Dev] PEP 414 References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <4F4FE650.8060402@active-4.com> <40C3F3BA-54E7-4B39-B3FB-20BEE65EB1D7@gmail.com> <4F4FFDC8.9020105@active-4.com> <5FBB9106-E12F-430B-BAA8-47C1289834E4@gmail.com> <4F5047CD.1040007@netwok.org> <20120302144107.1d6fb80b@resist.wooz.org> <1330719198.8772.137.camel@thinko> Message-ID: Chris McDonough plope.com> writes: > FWIW, I think this issue's webness may be overestimated. There happens > to be lots and lots of existing UI code which contains complex > interactions between unicode literals and nonliterals in web apps, but > there's also likely lots of nonweb code that has the same issue. If > e.g. wxPython had already been ported, I think you'd be hearing the same > sorts of things from folks that had investments in existing > Python-2-compatible code when trying to port stuff to Py3 (at least if > they wanted to run on both Python 2 and Python 3 within the same > codebase). As I understand it, WSGI happens to explicitly expect str in certain places, even places where conceptually text should be acceptable. The perception of webness seems to be substantiated by Nick's comment about endorsement from you, Armin, Jacob Kaplan-Moss, and Kenneth Reitz for this change. Not that webness is a bad thing, of course - it's a very important part of the ecosystem. It would be good to hear from other constituencies about where else (apart from WSGI and the other uses mentioned in the "APIs and Concepts Using Native Strings" section of the PEP) native strings are needed. 
I have encountered such needs sometimes, but not uncommonly, they appear to be broken APIs that just expect str even though text should be OK (e.g. cookie APIs, or the sqlite adapter's insisting on accepting datetimes in text format, but only as native strings). It would be a shame to leave these APIs as they are indefinitely, and perhaps using a marker like n('xxx') for native strings would help to remind us that these areas need addressing at some point. Regards, Vinay Sajip From vinay_sajip at yahoo.co.uk Sat Mar 3 03:28:55 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Sat, 3 Mar 2012 02:28:55 +0000 (UTC) Subject: [Python-Dev] PEP 414 - some numbers from the Django port Message-ID: PEP 414 mentions the use of function wrappers and talks about both their obtrusiveness and performance impact on Python code. In the Django Python 3 port, I've used unicode_literals, and hence have no u prefixes in the ported code, and use a function wrapper to adorn native strings where they are needed. Though the port is still work in progress, it passes all tests on 2.x and 3.x with the SQLite adapter, with only a small number skipped specifically during the porting exercise (generally due to representational differences). I'd like to share some numbers from this port to see what people here think about them. Firstly, on obtrusiveness: Out of a total of 1872 source files, the native string marker only appears in 30 files - 18 files in Django itself, and 12 files in the test suite. This is less than 2% of files, so the native string markers are not especially invasive when looking at code. There are only 76 lines in the ported Django which contain native string markers. Secondly, on performance. 
I ran the following steps 6 times: Run the test suite on unported Django using Python 2.7.2 ("vanilla") Run the test suite on the ported Django using Python 2.7.2 ("ported") Run the test suite on the ported Django using Python 3.2.2 ("ported3") Django skips some tests because dependencies aren't installed (e.g. PIL for Python 3.2). The raw numbers, in seconds elapsed for the test run, are given below: vanilla (4659 tests): 468.586 486.231 467.584 464.916 480.530 475.457 ported (4655 tests): 467.350 480.902 479.276 478.748 478.115 486.044 ported3 (4609 tests): 463.161 470.423 463.833 448.097 456.727 504.402 If we allow for the different numbers of tests run by dividing by the number of tests and multiplying by 100, we get: vanilla-weighted: 10.057 10.436 10.036 9.979 10.314 10.205 ported-weighted: 10.040 10.331 10.296 10.285 10.271 10.441 ported3-weighted: 10.049 10.207 10.064 9.722 9.909 10.944 If I run these through ministat, it tells me there is no significant difference in these data sets, with a 95% confidence level: $ ministat -w 74 vanilla-weighted ported-weighted ported3-weighted x vanilla-weighted + ported-weighted * ported3-weighted +--------------------------------------------------------------------------+ | * + | |* * x ** * ++x+ * *| ||_______________|___M____|AA_M___AM___|__|_________| | +--------------------------------------------------------------------------+ N Min Max Median Avg Stddev x 6 9.979 10.436 10.205 10.171167 0.17883782 + 6 10.04 10.441 10.296 10.277333 0.13148485 No difference proven at 95.0% confidence * 6 9.722 10.944 10.064 10.149167 0.42250274 No difference proven at 95.0% confidence So, looking at a large project in a relevant problem domain, unicode_literals and native string markers would appear not to adversely impact readability or performance. Your comments would be appreciated. 
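The weighting step described above is easy to reproduce from the raw numbers (data taken from the post; tiny last-digit differences from the published figures are just rounding):

```python
# Seconds per 100 tests, computed from the raw run times and test counts.
runs = {
    'vanilla': (4659, [468.586, 486.231, 467.584, 464.916, 480.530, 475.457]),
    'ported':  (4655, [467.350, 480.902, 479.276, 478.748, 478.115, 486.044]),
    'ported3': (4609, [463.161, 470.423, 463.833, 448.097, 456.727, 504.402]),
}

for name, (ntests, times) in runs.items():
    weighted = [t / ntests * 100 for t in times]
    print(name, [round(w, 3) for w in weighted])
```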
Regards, Vinay Sajip From vinay_sajip at yahoo.co.uk Sat Mar 3 04:22:43 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Sat, 3 Mar 2012 03:22:43 +0000 (UTC) Subject: [Python-Dev] PEP 414 References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <4F4FE650.8060402@active-4.com> <40C3F3BA-54E7-4B39-B3FB-20BEE65EB1D7@gmail.com> <4F4FFDC8.9020105@active-4.com> <5FBB9106-E12F-430B-BAA8-47C1289834E4@gmail.com> <4F5047CD.1040007@netwok.org> <20120302144107.1d6fb80b@resist.wooz.org> <1330719198.8772.137.camel@thinko> <20120302153951.670a42fe@resist.wooz.org> <1330721456.8772.148.camel@thinko> Message-ID: Chris McDonough plope.com> writes: > Although the change makes it possible, and it is very useful for very > low level WSGI apps, the issue this change addresses really isn't really > 100% about "needing to define native strings". It's also just > preservation of a resource in pretty short supply: developer energy. Apparently developer energy is a limitless resource when it comes to arguing over PEPs ;-) > This, IMO, can only be a good thing, because the nominal impact of some It can also have some downsides, at least according to some points of view. For example, I regard elevating "native strings" to undue prominence, and the continued use of u'xxx' in Python 3 code, as unfortunate consequences. For example, with PEP 414, it will be possible to mix Unicode with and without prefix - how would that not be at least a little confusing for users new to Python? Remember, "native strings" are a Python-only concept. > future user who must now understand u'' syntax is (again IMO) not as > consequential as that user having less software to choose from because > porting to Python 3 was just that much harder for existing Python 2 > developers. 
I don't believe it's because porting to Python 3 is especially hard. I'm not saying it's trivial, but it isn't rocket surgery ;-) Even if porting were trivially easy to do technically at the level the PEP addresses, there would still be additional tests, and perhaps documentation, and perhaps release-related work to be done. Since Python 2.x is a very good platform for software development, where's the incentive to move over to 3.x? It's the chicken and egg effect. Many people are waiting for other people to move over (perhaps projects they depend upon), and while the transition is happening, it's not as quick as it could be. I think a lot of it is down to inertia. Possibly another factor was the "just use 2to3" message, which we now know doesn't work well in all scenarios. However, I don't believe that the "use a single codebase, use six or six-like techniques, use unicode_literals, use the 2to3 fixer to remove unicode prefixes, and use native string markers where you need to" message has received anything like the same level of airplay. If you talk to people who have *actually tried* this approach (say Barry, or me) you'll hear that it's not been all that rough a ride.
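For reference, the six-like wrapper technique mentioned above can be sketched in a few lines. The names u() and n() here are hypothetical illustrations, not from six or any particular project:

```python
import sys

PY3 = sys.version_info[0] >= 3

if PY3:
    def u(s):
        # Unadorned literals are already text on Python 3.
        return s

    def n(s):
        # "Native" strings are already str on Python 3.
        return s
else:
    def u(s):
        # On Python 2, decode the byte literal to unicode
        # (assumes literals use ASCII plus backslash escapes).
        return s.decode('unicode_escape')

    def n(s):
        # Native str on Python 2 is the byte string itself.
        return s
```

With wrappers like these, a single codebase can mark text, bytes, and native strings explicitly even on 2.5 and earlier, where the b'' prefix is unavailable.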
Regards, Vinay Sajip From guido at python.org Sat Mar 3 04:38:36 2012 From: guido at python.org (Guido van Rossum) Date: Fri, 2 Mar 2012 19:38:36 -0800 Subject: [Python-Dev] PEP 414 In-Reply-To: References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <4F4FE650.8060402@active-4.com> <40C3F3BA-54E7-4B39-B3FB-20BEE65EB1D7@gmail.com> <4F4FFDC8.9020105@active-4.com> <5FBB9106-E12F-430B-BAA8-47C1289834E4@gmail.com> <4F5047CD.1040007@netwok.org> <20120302144107.1d6fb80b@resist.wooz.org> <1330719198.8772.137.camel@thinko> <20120302153951.670a42fe@resist.wooz.org> <1330721456.8772.148.camel@thinko> Message-ID: On Fri, Mar 2, 2012 at 7:22 PM, Vinay Sajip wrote: > Apparently developer energy is a limitless resource when it comes to arguing > over PEPs ;-) Aren't *you* the one who keeps kicking this dead horse? -- --Guido van Rossum (python.org/~guido) From vinay_sajip at yahoo.co.uk Sat Mar 3 04:50:55 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Sat, 3 Mar 2012 03:50:55 +0000 (UTC) Subject: [Python-Dev] PEP 414 References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <4F4FE650.8060402@active-4.com> <40C3F3BA-54E7-4B39-B3FB-20BEE65EB1D7@gmail.com> <4F4FFDC8.9020105@active-4.com> <5FBB9106-E12F-430B-BAA8-47C1289834E4@gmail.com> <4F5047CD.1040007@netwok.org> <20120302144107.1d6fb80b@resist.wooz.org> <1330719198.8772.137.camel@thinko> <20120302153951.670a42fe@resist.wooz.org> <1330721456.8772.148.camel@thinko> Message-ID: Guido van Rossum python.org> writes: > > Aren't *you* the one who keeps kicking this dead horse? 
From looking at the overall thread, I'm just one of many people posting on it. Which dead horse am I kicking? It's not as if I'm opposing anything or anyone - just putting my point of view forward about porting from 2.x -> 3.x, as others have done - that's not OT, is it? Regards, Vinay Sajip From regebro at gmail.com Sat Mar 3 07:20:14 2012 From: regebro at gmail.com (Lennart Regebro) Date: Sat, 3 Mar 2012 07:20:14 +0100 Subject: [Python-Dev] PEP 414 In-Reply-To: <20120303014948.Horde.jZkbfML8999PUWqsfvvWLEA@webmail.df.eu> References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <4F4FE650.8060402@active-4.com> <40C3F3BA-54E7-4B39-B3FB-20BEE65EB1D7@gmail.com> <4F4FFDC8.9020105@active-4.com> <4F5093D0.8020900@gmail.com> <7DDD8831-6759-4E42-AF1D-95830C814412@ox.cx> <4F50CD1C.2090800@gmail.com> <20120303014948.Horde.jZkbfML8999PUWqsfvvWLEA@webmail.df.eu> Message-ID: On Sat, Mar 3, 2012 at 01:49, wrote: > > Zitat von Lennart Regebro : > > >> Just my 2 cents on the PEP rewrite: >> >> u'' support is not just if you want to write code that doesn't use >> 2to3. Even when you use 2to3 it is useful to be able to flag strings as >> binary, unicode or "native". > > How so? In the Python 3 code, the u"" prefix would not appear, even if it > appears in the original source, as 2to3 eliminates it. Well, not if you disable that fixer. ;-) But you are right, it isn't necessary. I was thinking of 3to2, actually. That was one of the objections I had to the usefulness of 3to2, there is no way to make the distinction between unicode and native strings. (The u'' prefix hence actually makes 3to2 a realistic option, and that's good.) So everyone can ignore this, I mixed up two issues.
:-) //Lennart From regebro at gmail.com Sat Mar 3 07:28:42 2012 From: regebro at gmail.com (Lennart Regebro) Date: Sat, 3 Mar 2012 07:28:42 +0100 Subject: [Python-Dev] PEP 414 In-Reply-To: References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <4F4FE650.8060402@active-4.com> <40C3F3BA-54E7-4B39-B3FB-20BEE65EB1D7@gmail.com> <4F4FFDC8.9020105@active-4.com> <5FBB9106-E12F-430B-BAA8-47C1289834E4@gmail.com> <4F5047CD.1040007@netwok.org> <20120302144107.1d6fb80b@resist.wooz.org> <1330719198.8772.137.camel@thinko> <20120302153951.670a42fe@resist.wooz.org> <1330721456.8772.148.camel@thinko> Message-ID: On Sat, Mar 3, 2012 at 04:22, Vinay Sajip wrote: > It can also have some downsides, at least according to some points of view. For > example, I regard elevating "native strings" to undue prominence, and the > continued use of u'xxx' in Python 3 code, as unfortunate consequences. For > example, with PEP 414, it will be possible to mix Unicode with and without > prefix - how would that not be at least a little confusing for users new to > Python? Remember, "native strings" are a Python-only concept. This is true, new users will see 'foo', r'foo', b'foo', and will naturally assume u'foo' is something special too, and will have to be told it is not. But that's an unfortunate effect of Python 3 making the change to Unicode strings, a change that *removed* a lot of other much more confusing things. So the question is if you have any proposal that is *less* confusing while still being practical. Because we do need to distinguish between binary, Unicode and "native" strings. Isn't this the least confusing solution? 
The only way we could have avoided this "three strings" situation is by actually removing native strings from Python for at least five years, and only used b'' or u''. That would not have been any less confusing. //Lennart From stephen at xemacs.org Sat Mar 3 07:35:48 2012 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Sat, 03 Mar 2012 15:35:48 +0900 Subject: [Python-Dev] PEP 414 In-Reply-To: <1330719198.8772.137.camel@thinko> References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <4F4FE650.8060402@active-4.com> <40C3F3BA-54E7-4B39-B3FB-20BEE65EB1D7@gmail.com> <4F4FFDC8.9020105@active-4.com> <5FBB9106-E12F-430B-BAA8-47C1289834E4@gmail.com> <4F5047CD.1040007@netwok.org> <20120302144107.1d6fb80b@resist.wooz.org> <1330719198.8772.137.camel@thinko> Message-ID: <87wr72chtn.fsf@uwakimon.sk.tsukuba.ac.jp> Chris McDonough writes: > FWIW, I think this issue's webness may be overestimated. There happens > to be lots and lots of existing UI code which contains complex > interactions between unicode literals and nonliterals in web apps, but > there's also likely lots of nonweb code that has the same issue. If we generalize "web" to "wire protocols", I would say that nonweb code that has the same issue is poorly coded. I suppose there may be some similar issues in say XML handling, because XML can be used in binary applications as well as for structuring text (ie, XML is really a wire protocol too). But pure user interface modules like wxPython? Text should be handled as text, not as bytes that "probably" are ASCII-encoded or locale-specifically-encoded (or are magic numbers that happen to be mnemonic when interpreted as ASCII). I don't say that we should ignore the pain of the nonweb users -- but it is a different issue, with different solutions. 
In particular, using "native strings" (and distinguishing them by the absence of u'') is usually a non-solution for non-web applications, because it propagates the bad practice of pretending that unknown encodings can be assumed to be well-behaved into an environment where good practice is designed in. This is quite different from the case for webby usage, where it often makes sense to handle many low-level operations without ever converting to text, while the same literal strings may be useful in both wire and text contexts (and so should be present only once according to DRY). (N.B. I suspect that it is probably also generally possible for webby applications to avoid native strings without much cost, as Nick showed in urlparse. But at least manipulations of the wire protocol without conversion to text are a plausible optimization.) From alex.nanou at gmail.com Sat Mar 3 07:51:21 2012 From: alex.nanou at gmail.com (Alex A. Naanou) Date: Sat, 3 Mar 2012 10:51:21 +0400 Subject: [Python-Dev] odd "tuple does not support assignment" confusion... In-Reply-To: <20120302233851.D3FAD2500E5@webabinitio.net> References: <20120302233851.D3FAD2500E5@webabinitio.net> Message-ID: I knew this was a feature!!! ....features such as these should be fixed! %) On Sat, Mar 3, 2012 at 03:38, R. David Murray wrote: > On Sat, 03 Mar 2012 03:06:33 +0400, "Alex A. Naanou" wrote: >> Hi everyone, >> >> Just stumbled on a fun little thing: >> >> We create a simple structure... >> >> l = ([],) >> >> >> Now modify the list, and... >> >> l[0] += [1] >> >> >> ...we fail: >> ## Traceback (most recent call last): >> ## File "F:\work\ImageGrid\cur\ImageGrid\src\test\python-bug.py", line 17, in >> ## l[0] += [1] >> ## TypeError: 'tuple' object does not support item assignment > > What is even more fun is that the append actually worked (try printing > l). > > This is not a bug, it is a quirk of how extended assignment works.
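The quirk can be seen end to end with a few lines (works the same on 2.x and 3.x):

```python
# t[0] += [1] desugars into two steps: first list.__iadd__ extends the
# list in place, then tuple.__setitem__ is attempted - and rejected.
t = ([],)
raised = False
try:
    t[0] += [1]
except TypeError:
    raised = True

assert raised        # the assignment step failed...
assert t[0] == [1]   # ...but the in-place mutation had already happened
```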
> I think there's an issue report in the tracker somewhere that > discusses it. > > --David -- Alex. From storchaka at gmail.com Sat Mar 3 08:53:24 2012 From: storchaka at gmail.com (Serhiy Storchaka) Date: Sat, 03 Mar 2012 09:53:24 +0200 Subject: [Python-Dev] PEP 414 In-Reply-To: References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <4F4FE650.8060402@active-4.com> <40C3F3BA-54E7-4B39-B3FB-20BEE65EB1D7@gmail.com> <4F4FFDC8.9020105@active-4.com> <4F5093D0.8020900@gmail.com> <7DDD8831-6759-4E42-AF1D-95830C814412@ox.cx> <4F50CD1C.2090800@gmail.com> <20120303014948.Horde.jZkbfML8999PUWqsfvvWLEA@webmail.df.eu> Message-ID: 03.03.12 08:20, Lennart Regebro wrote: > But you are right, it isn't necessary. I was thinking of 3to2, > actually. That was one of the objections I had to the usefulness of > 3to2, there is no way to make the distinction between unicode and > native strings. (The u'' prefix hence actually makes 3to2 a realistic > option, and that's good.) 2to3 should recognize the str(string_literal) (or nstr(), or native(), etc) as a native string and not add a "u" prefix to it. And you have to specify these hints explicitly. From eliben at gmail.com Sat Mar 3 09:36:04 2012 From: eliben at gmail.com (Eli Bendersky) Date: Sat, 3 Mar 2012 10:36:04 +0200 Subject: [Python-Dev] slice subscripts for sequences and mappings Message-ID: Hello, I find a strange discrepancy in Python with regards to slice subscripting of objects, at the C API level. I mean things like obj[start:end:step]. I'd expect slice subscripts to be part of the sequence interface, and yet they are not. In fact, they are part of the mapping interface. For example, the list object has its slice get/set methods assigned to a PyMappingMethods struct.
So does a bytes object, and pretty much every other object that wants to support subscripts. This doesn't align well with the documentation, in at least two places. 1) The library documentation (http://docs.python.org/dev/library/stdtypes.html) in 4.8 says: "Mappings are mutable objects. There is currently only one standard mapping type, the dictionary" Why then does a list implement the mapping interface? Moreover, why does bytes, an immutable object, implement the mapping interface? 2) The same documentation page in 4.6 says, in the operation table: s[i:j] slice of s from i to j s[i:j:k] slice of s from i to j with step k But in the implementation, the slice subscripts are part of the mapping, not the sequence interface. The PySequenceMethods structure does have fields for slice accessors, but their naming (was_sq_slice, was_sq_ass_slice) suggests they're just deprecated placeholders. This also doesn't align well with logic, since mappings like dict have no real meaning for slice subscripts. These logically belong to a sequence. Moreover, it separates subscripts with a single numeric index from subscripts with a slice into different protocols (the former in sequence, the latter in mapping). I realize I must be missing some piece of the history here and am not suggesting to change anything. I do think that the documentation, especially in the area of the type object that defines the sequence and mapping protocols, could be clarified to express what is expected of a new type that wants to act as a sequence. In particular, it should be said explicitly that such a type must implement the mapping protocol if it wants slice subscripting. If this makes any sense at all, I will open an issue.
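At the Python level the split is easy to make concrete with a toy sequence type (an illustrative sketch, not CPython internals): both a plain index and a slice arrive at the same __getitem__, which is what mp_subscript handles at the C level.

```python
class ToySeq:
    """Sketch of a sequence type: both integer and slice keys
    arrive via __getitem__ (the C-level mp_subscript slot)."""
    def __init__(self, data):
        self._data = list(data)

    def __len__(self):
        return len(self._data)

    def __getitem__(self, key):
        if isinstance(key, slice):
            # A slice object carries start/stop/step; delegate to list.
            return ToySeq(self._data[key])
        return self._data[key]  # plain integer index
```

So a type that wants obj[start:stop:step] must be prepared to receive a slice object here, which in C terms means implementing the mapping protocol's subscript slot.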
Eli From stefan at bytereef.org Sat Mar 3 09:48:18 2012 From: stefan at bytereef.org (Stefan Krah) Date: Sat, 3 Mar 2012 09:48:18 +0100 Subject: [Python-Dev] Assertion in _PyManagedBuffer_FromObject() In-Reply-To: References: <20120302125540.GA14210@sleipnir.bytereef.org> <20120302153057.GA14973@sleipnir.bytereef.org> <20120302164226.GA15907@sleipnir.bytereef.org> Message-ID: <20120303084818.GA18775@sleipnir.bytereef.org> Stefan Behnel wrote: > Yes, that's a suitable example. It would take the ndarray out of the loop - > after all, it has nothing to do with what the memoryview wants, and won't > need to do any cleanup for the memoryview's buffer view either. Keeping it > explicitly alive in the memoryview is just a waste of resources. Yes, this should be supported. The "cleanup handler" in the earlier example got me on the wrong track, that's why I kept insisting this wasn't necessary. > I'm not using this anywhere. My guess is that it would be more of a feature > than something to provide legacy code support for, but I can't speak for > anyone else. In general, the NumPy mailing list is a good place to ask > about these things. NumPy re-exports, this was confirmed in issue #10181. That's actually the main reason why I considered re-exporting rather than redirecting the standard model and built the test suite around it. Stefan Krah From eliben at gmail.com Sat Mar 3 09:50:06 2012 From: eliben at gmail.com (Eli Bendersky) Date: Sat, 3 Mar 2012 10:50:06 +0200 Subject: [Python-Dev] slice subscripts for sequences and mappings In-Reply-To: References: Message-ID: > This doesn't align well with the documentation, in at least two places. > Another place is in http://docs.python.org/dev/reference/datamodel.html: " object.__getitem__(self, key) Called to implement evaluation of self[key]. For sequence types, the accepted keys should be integers and slice objects. [...] 
" Once again, at the C API level this isn't accurate since only integer keys are handled by the sequence protocol, leaving slice keys to the mapping protocol. The datamodel doc should stay as it is, because it's correct for Python-written classes. But the relevant C API sections really need some clarification. Eli From stefan_ml at behnel.de Sat Mar 3 09:58:38 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sat, 03 Mar 2012 09:58:38 +0100 Subject: [Python-Dev] Assertion in _PyManagedBuffer_FromObject() In-Reply-To: References: <20120302125540.GA14210@sleipnir.bytereef.org> <20120302153057.GA14973@sleipnir.bytereef.org> <20120302164226.GA15907@sleipnir.bytereef.org> Message-ID: Nick Coghlan, 03.03.2012 00:49: > On Sat, Mar 3, 2012 at 3:14 AM, Stefan Behnel wrote: >> Stefan Krah, 02.03.2012 17:42: >>> The reason why this scheme was not chosen for a chain of memoryviews >>> was that 'exporter' (in theory) could implement a slideshow of buffers, >>> which means that in the face of redirecting requests m might not be >>> equal to nd. >> >> Right. Then it's only safe when the intermediate provider knows what the >> underlying buffer providers do. Not unlikely in an application setting, >> though, and it could just be an option at creation time to activate the >> delegation for the ndarray above. > > OK, my take on the discussion so far: > > 1. assert() is the wrong tool for this job Absolutely. > 2. the current check is too strict (it should just check for obj != > NULL, not obj == &exporter) I don't know. The documentation isn't very clear on the cases where obj may be NULL. 
Definitely on error, ok, but otherwise, the bf_getbuffer() docs do not explicitly say that it must not be NULL (they just mention a "standard" case): http://docs.python.org/dev/c-api/typeobj.html#buffer-object-structures and the Py_buffer docs say explicitly that the field either refers to the exporter or is NULL, without saying if this has any implications or specific meaning: http://docs.python.org/dev/c-api/buffer.html#Py_buffer Personally, I don't see a NULL (or None) value being a problem - it would just mean that the buffer does not need any release call (i.e. no cleanup), e.g. because it was statically allocated in an extension module. PyBuffer_Release() has the appropriate checks in place anyway. But I don't care either way, as long as it's documented. Stefan From stefan_ml at behnel.de Sat Mar 3 10:24:34 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sat, 03 Mar 2012 10:24:34 +0100 Subject: [Python-Dev] slice subscripts for sequences and mappings In-Reply-To: References: Message-ID: Eli Bendersky, 03.03.2012 09:36: > I find a strange discrepancy in Python with regards to slice > subscripting of objects, at the C API level. I mean things like > obj[start:end:step]. > > I'd expect slice subscripts to be part of the sequence interface, and > yet they are not. In fact, they are part of the mapping interface. For > example, the list object has its slice get/set methods assigned to a > PyMappingMethods struct. So does a bytes object, and pretty much every > other object that wants to support subscripts. > > This doesn't align well with the documentation, in at least two places. > > 1) The library documentation > (http://docs.python.org/dev/library/stdtypes.html) in 4.8 says: > > "Mappings are mutable objects. There is currently only one > standard mapping type, the dictionary" > > Why then does a list implement the mapping interface? Moreover, why > does bytes, an immutable object, implement the mapping interface? I think that's (partly?) 
for historical reasons. Originally, there were the slicing functions as part of the sequence interface. They took a start and an end index of the slice. Then, extended slicing was added to the language, and that used a slice object, which didn't fit into the sequence slicing interface. So the interface was unified using the existing mapping getitem interface, and the sequence slicing functions were eventually deprecated and removed in Py3. Stefan From vinay_sajip at yahoo.co.uk Sat Mar 3 10:26:12 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Sat, 3 Mar 2012 09:26:12 +0000 (UTC) Subject: [Python-Dev] PEP 414 References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <4F4FE650.8060402@active-4.com> <40C3F3BA-54E7-4B39-B3FB-20BEE65EB1D7@gmail.com> <4F4FFDC8.9020105@active-4.com> <5FBB9106-E12F-430B-BAA8-47C1289834E4@gmail.com> <4F5047CD.1040007@netwok.org> <20120302144107.1d6fb80b@resist.wooz.org> <1330719198.8772.137.camel@thinko> <20120302153951.670a42fe@resist.wooz.org> <1330721456.8772.148.camel@thinko> Message-ID: Lennart Regebro gmail.com> writes: > So the question is if you have any proposal that is *less* confusing > while still being practical. Because we do need to distinguish between > binary, Unicode and "native" strings. Isn't this the least confusing > solution? It's a matter of the degree of confusion caused (hard to assess) and also a question of taste, so there will be differing views on this. Considering use of unicode_literals, 'xxx' for text, b'yyy' for bytes and with a function wrapper to mark native strings, it becomes clear that the native strings are special cases - much less encountered when looking at code compared to 'xxx' / b'yyy', so there are fewer opportunities for confusion. 
Where native strings need to be discussed, then it is not unexceptional, nor I believe incorrect, to explain that they are there to suit the requirements of legacy APIs which pre-date Python 3 and the latest versions of Python 2. In terms of practicality, it is IMO quite practical (assuming 2.5 / earlier support can be dropped) to move to a 2.6+/3.x-friendly codebase, e.g. by using Armin's python-modernize. Regards, Vinay Sajip From regebro at gmail.com Sat Mar 3 11:02:56 2012 From: regebro at gmail.com (Lennart Regebro) Date: Sat, 3 Mar 2012 11:02:56 +0100 Subject: [Python-Dev] PEP 414 In-Reply-To: References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <4F4FE650.8060402@active-4.com> <40C3F3BA-54E7-4B39-B3FB-20BEE65EB1D7@gmail.com> <4F4FFDC8.9020105@active-4.com> <5FBB9106-E12F-430B-BAA8-47C1289834E4@gmail.com> <4F5047CD.1040007@netwok.org> <20120302144107.1d6fb80b@resist.wooz.org> <1330719198.8772.137.camel@thinko> <20120302153951.670a42fe@resist.wooz.org> <1330721456.8772.148.camel@thinko> Message-ID: On Sat, Mar 3, 2012 at 10:26, Vinay Sajip wrote: > Lennart Regebro gmail.com> writes: > >> So the question is if you have any proposal that is *less* confusing >> while still being practical. Because we do need to distinguish between >> binary, Unicode and "native" strings. Isn't this the least confusing >> solution? > > It's a matter of the degree of confusion caused (hard to assess) and also a > question of taste, so there will be differing views on this. Considering use of > unicode_literals, 'xxx' for text, b'yyy' for bytes and with a function wrapper > to mark native strings, it becomes clear that the native strings are special > cases - much less encountered when looking at code compared to 'xxx' / b'yyy', I'm not sure that's true at all. 
In most cases where you support both Python 2 and Python 3, most strings will be "native", ie, without prefix in either Python 2 or Python 3. The native case is the most common case. > In terms of practicality, it is > IMO quite practical (assuming 2.5 / earlier support can be dropped) to move to a > 2.6+/3.x-friendly codebase, e.g. by using Armin's python-modernize. I think there is some misunderstanding here. The binary/unicode/native separation is only possible on Python 2.6 and 2.7 at the moment, unless you use function wrappers like b(). //Lennart From vinay_sajip at yahoo.co.uk Sat Mar 3 11:39:45 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Sat, 3 Mar 2012 10:39:45 +0000 (UTC) Subject: [Python-Dev] PEP 414 References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <4F4FE650.8060402@active-4.com> <40C3F3BA-54E7-4B39-B3FB-20BEE65EB1D7@gmail.com> <4F4FFDC8.9020105@active-4.com> <5FBB9106-E12F-430B-BAA8-47C1289834E4@gmail.com> <4F5047CD.1040007@netwok.org> <20120302144107.1d6fb80b@resist.wooz.org> <1330719198.8772.137.camel@thinko> <20120302153951.670a42fe@resist.wooz.org> <1330721456.8772.148.camel@thinko> Message-ID: Lennart Regebro gmail.com> writes: > I'm not sure that's true at all. In most cases where you support both > Python 2 and Python 3, most strings will be "native", ie, without > prefix in either Python 2 or Python 3. The native case is the most > common case. Sorry, I didn't make myself clear. If you import unicode_literals, then in both 2.x and 3.x code, an unadorned literal string is text, and a b-adorned literal string is bytes. My assertion was based on that assumption - the text (Unicode) case then becomes the most common case. 
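The unicode_literals arrangement just described can be sketched concretely (an illustrative fragment, not taken from the Django port):

```python
from __future__ import unicode_literals  # no-op on 3.x; on 2.x makes '' literals unicode

text = 'hello'       # text in both 2.x (unicode) and 3.x (str)
data = b'\x00\x01'   # bytes in both

# Unadorned literals are text, b-adorned literals are bytes, on both lines.
assert not isinstance(text, bytes)
assert isinstance(data, bytes)
```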
> > In terms of practicality, it is > > IMO quite practical (assuming 2.5 / earlier support can be dropped) to > > move to a > > 2.6+/3.x-friendly codebase, e.g. by using Armin's python-modernize. > > I think there is some misunderstanding here. The binary/unicode/native > separation is only possible on Python 2.6 and 2.7 at the moment, > unless you use function wrappers like b(). Right, and that is a possible option for 2.5 and earlier: though obviously not a desirable one from an aesthetic point of view! What I meant (and should have said) was: if you can drop support for 2.5 / earlier, a lib2to3 fixer-based approach brings your 2.x code into the 3-friendly region of 2.x - 2.6 and 2.7. You can then, using the unicode_literals approach, arrive at a common codebase for 2.6+ and 3.x which is not slow to run (see my other post on ported Django test run performance), and clean (looks just like 3 code, pretty much, and means the same, as far as string literals are concerned). Where you hit native string requirements, apply the wrapper. I don't actually use python-modernize, as I independently developed fixers when doing the Django port late last year. I initially wrote a fixer to transform u'xxx' to u('xxx') (as I was assuming 2.5 support was needed), and then, when it appeared likely that Django would drop 2.5 support after 1.4, I wrote a fixer to go from u('xxx') to 'xxx'. Once I learned to use lib2to3, with a few pointers from Benjamin, it worked like a charm for me. Regards, Vinay Sajip From stefan at bytereef.org Sat Mar 3 12:08:38 2012 From: stefan at bytereef.org (Stefan Krah) Date: Sat, 3 Mar 2012 12:08:38 +0100 Subject: [Python-Dev] Assertion in _PyManagedBuffer_FromObject() In-Reply-To: References: <20120302125540.GA14210@sleipnir.bytereef.org> <20120302153057.GA14973@sleipnir.bytereef.org> <20120302164226.GA15907@sleipnir.bytereef.org> Message-ID: <20120303110838.GA19066@sleipnir.bytereef.org> Stefan Behnel wrote: > > 1. 
assert() is the wrong tool for this job > > Absolutely. I disagree. This assert() is meant for extension authors and not end users. I can't see how a reasonable release procedure would fail to trigger the assert(). My procedure as a C extension author is to test against a new Python version and *then* set the PyPI classifier for that version. If I download a C extension that doesn't have the 3.3 classifier set, then as a user I would not be upset if the extension throws an assert or, as Thomas Wouters pointed out, continues to work as before if not compiled in debug mode. > > 2. the current check is too strict (it should just check for obj != > > NULL, not obj == &exporter) > > I don't know. The documentation isn't very clear on the cases where obj may > be NULL. Definitely on error, ok, but otherwise, the bf_getbuffer() docs do > not explicitly say that it must not be NULL (they just mention a "standard" > case): > > http://docs.python.org/dev/c-api/typeobj.html#buffer-object-structures How about this: "The value of view.obj is the equivalent of the return value of any C-API function that returns a new reference. The value must be NULL on error or a valid new reference to an exporting object. For a chain or a tree of views, there are two possible schemes: 1) Re-export: Each member of the tree pretends to be the exporting object and sets view.obj to a new reference to itself. 2) Redirect: The buffer request is redirected to the root object of the tree. Here view.obj will be a reference to the root object." I think it's better not to complicate this familiar scheme of owning a reference by allowing view.obj==NULL for the general case. view.obj==NULL was introduced for temporary wrapping of ad-hoc memoryviews via PyBuffer_FillInfo() and now also PyMemoryView_FromMemory(). That's why I explicitly wrote the following in the documentation of PyBuffer_FillInfo(): "If this function is used as part of a getbufferproc, exporter MUST be set to the exporting object. 
Otherwise, exporter MUST be NULL." Stefan Krah From eliben at gmail.com Sat Mar 3 13:07:35 2012 From: eliben at gmail.com (Eli Bendersky) Date: Sat, 3 Mar 2012 14:07:35 +0200 Subject: [Python-Dev] slice subscripts for sequences and mappings In-Reply-To: References: Message-ID: On Sat, Mar 3, 2012 at 11:24, Stefan Behnel wrote: > Eli Bendersky, 03.03.2012 09:36: >> I find a strange discrepancy in Python with regards to slice >> subscripting of objects, at the C API level. I mean things like >> obj[start:end:step]. >> >> I'd expect slice subscripts to be part of the sequence interface, and >> yet they are not. In fact, they are part of the mapping interface. For >> example, the list object has its slice get/set methods assigned to a >> PyMappingMethods struct. So does a bytes object, and pretty much every >> other object that wants to support subscripts. >> >> This doesn't align well with the documentation, in at least two places. >> >> 1) The library documentation >> (http://docs.python.org/dev/library/stdtypes.html) in 4.8 says: >> >> "Mappings are mutable objects. There is currently only one >> standard mapping type, the dictionary" >> >> Why then does a list implement the mapping interface? Moreover, why >> does bytes, an immutable object, implement the mapping interface? > > I think that's (partly?) for historical reasons. Originally, there were the > slicing functions as part of the sequence interface. They took a start and > an end index of the slice. Then, extended slicing was added to the > language, and that used a slice object, which didn't fit into the sequence > slicing interface. So the interface was unified using the existing mapping > getitem interface, and the sequence slicing functions were eventually > deprecated and removed in Py3. This makes sense. Note that now there's also duplication in almost all objects because the mapping protocol essentially supersedes the sequence protocol for accessing elements. I.e.
sq_item and sq_ass_item are no longer needed if an object implements the mapping protocol, because the mapping interface has precedence, and mp_subscript & mp_ass_subscript are called instead, respectively. Because of that, the first thing they do is check whether the index is a simple number and do the work of their sequence protocol cousins. This duplicates code in almost all objects that need to support __getitem__. Eli From solipsis at pitrou.net Sat Mar 3 13:20:24 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 3 Mar 2012 13:20:24 +0100 Subject: [Python-Dev] slice subscripts for sequences and mappings References: Message-ID: <20120303132024.1dcba8c6@pitrou.net> Hi, > I'd expect slice subscripts to be part of the sequence interface, and > yet they are not. In fact, they are part of the mapping interface. For > example, the list object has its slice get/set methods assigned to a > PyMappingMethods struct. So does a bytes object, and pretty much every > other object that wants to support subscripts. It comes from: http://hg.python.org/cpython/rev/245224d1b8c9 http://bugs.python.org/issue400998 Written by Michael Hudson and reviewed by Guido. I wonder why this patch chose to add mapping protocol support to tuples and lists, rather than add a tp_ slot for extended slicing. Regards Antoine. 
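Eli's description of how the unified subscript slot dispatches on the index type can be sketched at the Python level. The class below is an illustration only (the name Seq and its _items attribute are made up for this sketch); it mirrors what list's C-level mp_subscript does: check for a slice object first, and otherwise treat the index as a plain integer, duplicating the work that the sequence-protocol slot sq_item would do:

```python
class Seq:
    """Minimal sequence mirroring how a C-level mp_subscript
    dispatches on the index type (a sketch, not CPython source)."""

    def __init__(self, items):
        self._items = list(items)

    def __len__(self):
        return len(self._items)

    def __getitem__(self, index):
        if isinstance(index, slice):
            # Extended slicing arrives as a slice object; in CPython this
            # path lives in the mapping-protocol slot (mp_subscript).
            start, stop, step = index.indices(len(self._items))
            return Seq(self._items[i] for i in range(start, stop, step))
        # Plain indices repeat the work of the sequence-protocol slot
        # (sq_item) -- the code duplication Eli points out.
        return self._items[index]

s = Seq([10, 20, 30, 40])
print(s[1])           # 20 -- integer index path
print(s[::2]._items)  # [10, 30] -- slice object path
```

Any object whose __getitem__ accepts both integers and slice objects performs exactly this dispatch, which is where the duplication across almost all built-in sequence objects comes from.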
From regebro at gmail.com Sat Mar 3 13:29:09 2012 From: regebro at gmail.com (Lennart Regebro) Date: Sat, 3 Mar 2012 13:29:09 +0100 Subject: [Python-Dev] PEP 414 In-Reply-To: References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <4F4FE650.8060402@active-4.com> <40C3F3BA-54E7-4B39-B3FB-20BEE65EB1D7@gmail.com> <4F4FFDC8.9020105@active-4.com> <5FBB9106-E12F-430B-BAA8-47C1289834E4@gmail.com> <4F5047CD.1040007@netwok.org> <20120302144107.1d6fb80b@resist.wooz.org> <1330719198.8772.137.camel@thinko> <20120302153951.670a42fe@resist.wooz.org> <1330721456.8772.148.camel@thinko> Message-ID: On Sat, Mar 3, 2012 at 11:39, Vinay Sajip wrote: > Sorry, I didn't make myself clear. If you import unicode_literals, then in both > 2.x and 3.x code, an unadorned literal string is text, and a b-adorned literal > string is bytes. My assertion was based on that assumption - the text (Unicode) > case then becomes the most common case. Absolutely. >> I think there is some misunderstanding here. The binary/unicode/native >> separation is only possible on Python 2.6 and 2.7 at the moment, >> unless you use function wrappers like b(). > > Right, and that is a possible option for 2.5 and earlier: though obviously not a > desirable one from an aesthetic point of view! > > What I meant (and should have said) was: if you can drop support for 2.5 / > earlier, a lib2to3 fixer-based approach brings your 2.x code into the 3-friendly > region of 2.x - 2.6 and 2.7. You can then, using the unicode_literals approach, > arrive at a common codebase for 2.6+ and 3.x which is not slow to run (see my > other post on ported Django test run performance), and clean (looks just like 3 > code, pretty much, and means the same, as far as string literals are concerned). 
> Where you hit native string requirements, apply the wrapper. Yes, that's a doable solution. Just as the common solution of using b() and u() wrappers. But these are still more confusing and less aesthetically pleasing (and insignificantly slower) than supporting u'' in Python 3. //Lennart From eliben at gmail.com Sat Mar 3 13:41:24 2012 From: eliben at gmail.com (Eli Bendersky) Date: Sat, 3 Mar 2012 14:41:24 +0200 Subject: [Python-Dev] slice subscripts for sequences and mappings In-Reply-To: <20120303132024.1dcba8c6@pitrou.net> References: <20120303132024.1dcba8c6@pitrou.net> Message-ID: >> I'd expect slice subscripts to be part of the sequence interface, and >> yet they are not. In fact, they are part of the mapping interface. For >> example, the list object has its slice get/set methods assigned to a >> PyMappingMethods struct. So does a bytes object, and pretty much every >> other object that wants to support subscripts. > > It comes from: > http://hg.python.org/cpython/rev/245224d1b8c9 > http://bugs.python.org/issue400998 > > Written by Michael Hudson and reviewed by Guido. > I wonder why this patch chose to add mapping protocol support to tuples > and lists, rather than add a tp_ slot for extended slicing. > Why a separate tp_ slot for extended slicing? ISTM slicing pertains to sequences, similarly to other numeric indices. If you look at PySequenceMethods it has these (apparently no longer used fields): void *was_sq_slice; void *was_sq_ass_slice; These were "simple" slices (pairs of numbers). I suppose if any change is considered, these fields can be re-incarnated to accept PyObject* slices similarly to the current mp_subscript and mp_ass_subscript. 
Eli From stefan at bytereef.org Sat Mar 3 13:52:18 2012 From: stefan at bytereef.org (Stefan Krah) Date: Sat, 3 Mar 2012 13:52:18 +0100 Subject: [Python-Dev] Assertion in _PyManagedBuffer_FromObject() In-Reply-To: References: <20120302125540.GA14210@sleipnir.bytereef.org> <20120302153057.GA14973@sleipnir.bytereef.org> <20120302164226.GA15907@sleipnir.bytereef.org> Message-ID: <20120303125218.GA19875@sleipnir.bytereef.org> Nick Coghlan wrote: > 2. the current check is too strict (it should just check for obj != > NULL, not obj == &exporter) Yes. For anyone who is interested, see issue #14181. > 3. the current check is in the wrong place (it should be in PyObject_GetBuffer) Agreed, since it's not memoryview specific. But I don't think we even need to check for obj != NULL. view.obj was undocumented, and since 3.0 Include/object.h contains this: typedef struct bufferinfo { void *buf; PyObject *obj; /* owned reference */ So it would be somewhat audacious to set this field to NULL. But even if existing code uses the view.obj==NULL scheme from PyBuffer_FillInfo() correctly, it will still work in the new implementation. I'd just prefer to forbid this in the documentation, because it's much easier to remember: getbuffer "returns" a new reference or NULL. Stefan Krah From solipsis at pitrou.net Sat Mar 3 13:48:53 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 03 Mar 2012 13:48:53 +0100 Subject: [Python-Dev] slice subscripts for sequences and mappings In-Reply-To: References: <20120303132024.1dcba8c6@pitrou.net> Message-ID: <1330778933.3362.0.camel@localhost.localdomain> Le samedi 03 mars 2012 à 14:41 +0200, Eli Bendersky a écrit : > >> I'd expect slice subscripts to be part of the sequence interface, and > >> yet they are not. In fact, they are part of the mapping interface. For > >> example, the list object has its slice get/set methods assigned to a > >> PyMappingMethods struct.
So does a bytes object, and pretty much every > >> other object that wants to support subscripts. > > > > It comes from: > > http://hg.python.org/cpython/rev/245224d1b8c9 > > http://bugs.python.org/issue400998 > > > > Written by Michael Hudson and reviewed by Guido. > > I wonder why this patch chose to add mapping protocol support to tuples > > and lists, rather than add a tp_ slot for extended slicing. > > > > Why a separate tp_ slot for extended slicing? ISTM slicing pertains to > sequences, similarly to other numeric indices. If you look at > PySequenceMethods it has these (apparently no longer used fields): Yes, I meant sq_ slot, my bad. Regards Antoine. From barry at barrys-emacs.org Sat Mar 3 16:05:08 2012 From: barry at barrys-emacs.org (Barry Scott) Date: Sat, 3 Mar 2012 15:05:08 +0000 Subject: [Python-Dev] Why does Mac OS X python share site-packages with apple python? Message-ID: <5A0E2490-A743-4729-A752-D94524EA9840@barrys-emacs.org> On my Mac OS X 10.7.3 System I have lots of python kits installed for developing extensions. I just noticed that Python.org 2.7.2 uses the same site-packages folder as Apple's 2.7.1. Since extensions compiled against Apple's 2.7.1 segv when used by python.org's 2.7.2 this is at least unfortunate. Here is what is in sys.path for both versions. Notice /Library/Python/2.7/site-packages is in both.
$ /usr/bin/python -c 'import sys,pprint; pprint.pprint( sys.path )' ['', '/usr/local/lib/wxPython-unicode-2.8.12.1/lib/python2.7/site-packages', '/usr/local/lib/wxPython-unicode-2.8.12.1/lib/python2.7/site-packages/wx-2.8-mac-unicode', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python27.zip', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages', '/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-old', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload', '/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/PyObjC', '/Library/Python/2.7/site-packages', '/usr/local/lib/wxPython-unicode-2.8.12.1/lib/python2.7'] $ /usr/local/bin/python2.7 -c 'import sys,pprint; pprint.pprint( sys.path )' ['', '/usr/local/lib/wxPython-unicode-2.8.12.1/lib/python2.7/site-packages', '/usr/local/lib/wxPython-unicode-2.8.12.1/lib/python2.7/site-packages/wx-2.8-mac-unicode', '/usr/local/lib/wxPython-unicode-2.8.12.1/lib/python2.7', '/Library/Python/2.7/site-packages', '/Library/Frameworks/Python.framework/Versions/2.7/lib/python27.zip', '/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7', '/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin', '/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac', '/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages', '/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk', 
'/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-old', '/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload', '/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages'] Barry From ejjyrex at gmail.com Sat Mar 3 12:53:58 2012 From: ejjyrex at gmail.com (Ejaj Hassan) Date: Sat, 3 Mar 2012 17:23:58 +0530 Subject: [Python-Dev] cpython compilation error Message-ID: Hello, I was compiling Pcbuild.sln from cpython in vc++ 2008 and I got the error as "Solution folders are not supported in this version of application-Solution folder will be displayed as unavailable". Could someone please tell me the source and reason for this error. Thanks in advance. Regards, Ejaj -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: python compile error.PNG Type: image/png Size: 111313 bytes Desc: not available URL: From vinay_sajip at yahoo.co.uk Sat Mar 3 17:53:57 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Sat, 3 Mar 2012 16:53:57 +0000 (UTC) Subject: [Python-Dev] cpython compilation error References: Message-ID: Ejaj Hassan gmail.com> writes: > I was compiling Pcbuild.sln from cpython in vc++ 2008 and I got the error as "Solution folders are not supported in this version of application-Solution folder will be displayed as unavailable". > Could someone please tell me the source and reason for this error. It's because you're using the free "Express" edition of Visual Studio, and not the full, paid-for Visual Studio. However, I believe you can ignore the error, and the solution should still be built. (I'm not absolutely sure, as I use the full Visual Studio).
Regards, Vinay Sajip From guido at python.org Sat Mar 3 18:58:59 2012 From: guido at python.org (Guido van Rossum) Date: Sat, 3 Mar 2012 09:58:59 -0800 Subject: [Python-Dev] slice subscripts for sequences and mappings In-Reply-To: <20120303132024.1dcba8c6@pitrou.net> References: <20120303132024.1dcba8c6@pitrou.net> Message-ID: On Sat, Mar 3, 2012 at 4:20 AM, Antoine Pitrou wrote: >> I'd expect slice subscripts to be part of the sequence interface, and >> yet they are not. In fact, they are part of the mapping interface. For >> example, the list object has its slice get/set methods assigned to a >> PyMappingMethods struct. So does a bytes object, and pretty much every >> other object that wants to support subscripts. > > It comes from: > http://hg.python.org/cpython/rev/245224d1b8c9 > http://bugs.python.org/issue400998 > > Written by Michael Hudson and reviewed by Guido. > I wonder why this patch chose to add mapping protocol support to tuples > and lists, rather than add a tp_ slot for extended slicing. That's long ago... IIRC it was for binary compatibility -- I didn't want to add an extra slot to the sq struct because it would require recompilation of 3rd party extensions. At the time that was an important concern. -- --Guido van Rossum (python.org/~guido) From eliben at gmail.com Sat Mar 3 19:18:11 2012 From: eliben at gmail.com (Eli Bendersky) Date: Sat, 3 Mar 2012 20:18:11 +0200 Subject: [Python-Dev] slice subscripts for sequences and mappings In-Reply-To: References: <20120303132024.1dcba8c6@pitrou.net> Message-ID: On Sat, Mar 3, 2012 at 19:58, Guido van Rossum wrote: > On Sat, Mar 3, 2012 at 4:20 AM, Antoine Pitrou wrote: >>> I'd expect slice subscripts to be part of the sequence interface, and >>> yet they are not. In fact, they are part of the mapping interface. For >>> example, the list object has its slice get/set methods assigned to a >>> PyMappingMethods struct. 
So does a bytes object, and pretty much every >>> other object that wants to support subscripts. >> >> It comes from: >> http://hg.python.org/cpython/rev/245224d1b8c9 >> http://bugs.python.org/issue400998 >> >> Written by Michael Hudson and reviewed by Guido. >> I wonder why this patch chose to add mapping protocol support to tuples >> and lists, rather than add a tp_ slot for extended slicing. > > That's long ago... IIRC it was for binary compatibility -- I didn't > want to add an extra slot to the sq struct because it would require > recompilation of 3rd party extensions. At the time that was an > important concern. > Perhaps the situation can be fixed now without binary compatibility concerns. PySequenceMethods is: typedef struct { lenfunc sq_length; binaryfunc sq_concat; ssizeargfunc sq_repeat; ssizeargfunc sq_item; void *was_sq_slice; ssizeobjargproc sq_ass_item; void *was_sq_ass_slice; objobjproc sq_contains; binaryfunc sq_inplace_concat; ssizeargfunc sq_inplace_repeat; } PySequenceMethods; The slots "was_sq_slice" and "was_sq_ass_slice" aren't used any longer. These can be re-incarnated to accept a slice object, and sequence objects can be rewritten to use them instead of implementing the mapping protocol (is there any reason listobject implements the mapping protocol, other than to gain the ability to use slices for __getitem__?). Existing 3rd party extensions don't *need* to be recompiled or changed, however. They *can* be, if their authors are interested, of course. Eli From arigo at tunes.org Sat Mar 3 20:13:47 2012 From: arigo at tunes.org (Armin Rigo) Date: Sat, 3 Mar 2012 20:13:47 +0100 Subject: [Python-Dev] Sandboxing Python In-Reply-To: References: Message-ID: Hi Victor, On Thu, Mar 1, 2012 at 22:59, Victor Stinner wrote: >> I challenge anyone to break pysandbox! I would be happy if anyone >> breaks it because it would make it stronger.
I tried to run the files from Lib/test/crashers and --- kind of obviously --- I found at least two of them that still segfault execfile.py, sometimes with minor edits and sometimes directly, on CPython 2.7. As usual, I don't see the point of "challenging" us when we have crashers already documented. Also, it's not like Lib/test/crashers contains in detail *all* crashers that exist; some of them are of the kind "there is a general issue with xxx, here is an example". If you are not concerned about segfaults but only real attacks, then fine, I will not spend the hours necessary to turn the segfault into a real attack :-) A bientôt, Armin. From thomas at python.org Sat Mar 3 21:48:16 2012 From: thomas at python.org (Thomas Wouters) Date: Sat, 3 Mar 2012 12:48:16 -0800 Subject: [Python-Dev] Assertion in _PyManagedBuffer_FromObject() In-Reply-To: <20120303110838.GA19066@sleipnir.bytereef.org> References: <20120302125540.GA14210@sleipnir.bytereef.org> <20120302153057.GA14973@sleipnir.bytereef.org> <20120302164226.GA15907@sleipnir.bytereef.org> <20120303110838.GA19066@sleipnir.bytereef.org> Message-ID: On Sat, Mar 3, 2012 at 03:08, Stefan Krah wrote: > Stefan Behnel wrote: > > > 1. assert() is the wrong tool for this job > > > > Absolutely. > > I disagree. This assert() is meant for extension authors and not end > users. I > can't see how a reasonable release procedure would fail to trigger the > assert(). > > My procedure as a C extension author is to test against a new Python > version > and *then* set the PyPI classifier for that version. >
Even that aside, asserts are for internal invariants, not external ones. You can use asserts in your extension module to check that your own code is passing what you think it should pass, but you shouldn't really use them to check that a library or API you use is, and Python certainly shouldn't be using it to check what code outside of the core is giving it. Aborting (which is what failed asserts do) is just not the right thing to do. > > If I download a C extension that doesn't have the 3.3 classifier set, > then as a user I would not be upset if the extension throws an assert or, > as Thomas Wouters pointed out, continues to work as before if not compiled > in debug mode. > > > > > > 2. the current check is too strict (it should just check for obj != > > > NULL, not obj == &exporter) > > > > I don't know. The documentation isn't very clear on the cases where obj > may > > be NULL. Definitely on error, ok, but otherwise, the bf_getbuffer() docs > do > > not explicitly say that it must not be NULL (they just mention a > "standard" > > case): > > > > http://docs.python.org/dev/c-api/typeobj.html#buffer-object-structures > > How about this: > > "The value of view.obj is the equivalent of the return value of any C-API > function that returns a new reference. The value must be NULL on error > or a valid new reference to an exporting object. > > For a chain or a tree of views, there are two possible schemes: > > 1) Re-export: Each member of the tree pretends to be the exporting > object and sets view.obj to a new reference to itself. > > 2) Redirect: The buffer request is redirected to the root object > of the tree. Here view.obj will be a reference to the root object." > > > > I think it's better not to complicate this familiar scheme of owning > a reference by allowing view.obj==NULL for the general case. > > > view.obj==NULL was introduced for temporary wrapping of ad-hoc memoryviews > via PyBuffer_FillInfo() and now also PyMemoryView_FromMemory(). 
> > That's why I explicitly wrote the following in the documentation of > PyBuffer_FillInfo(): > > "If this function is used as part of a getbufferproc, exporter MUST be > set to the exporting object. Otherwise, exporter MUST be NULL." > > > Stefan Krah > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/thomas%40python.org > -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas at python.org Sat Mar 3 21:59:13 2012 From: thomas at python.org (Thomas Wouters) Date: Sat, 3 Mar 2012 12:59:13 -0800 Subject: [Python-Dev] slice subscripts for sequences and mappings In-Reply-To: References: <20120303132024.1dcba8c6@pitrou.net> Message-ID: On Sat, Mar 3, 2012 at 10:18, Eli Bendersky wrote: > On Sat, Mar 3, 2012 at 19:58, Guido van Rossum wrote: > > On Sat, Mar 3, 2012 at 4:20 AM, Antoine Pitrou > wrote: > >>> I'd expect slice subscripts to be part of the sequence interface, and > >>> yet they are not. In fact, they are part of the mapping interface. For > >>> example, the list object has its slice get/set methods assigned to a > >>> PyMappingMethods struct. So does a bytes object, and pretty much every > >>> other object that wants to support subscripts. > >> > >> It comes from: > >> http://hg.python.org/cpython/rev/245224d1b8c9 > >> http://bugs.python.org/issue400998 > >> > >> Written by Michael Hudson and reviewed by Guido. > >> I wonder why this patch chose to add mapping protocol support to tuples > >> and lists, rather than add a tp_ slot for extended slicing. > > > > That's long ago... IIRC it was for binary compatibility -- I didn't > > want to add an extra slot to the sq struct because it would require > > recompilation of 3rd party extensions. 
At the time that was an > > important concern. > > > > Perhaps the situation can be fixed now without binary compatibility > concerns. PySequenceMethods is: > > typedef struct { > lenfunc sq_length; > binaryfunc sq_concat; > ssizeargfunc sq_repeat; > ssizeargfunc sq_item; > void *was_sq_slice; > ssizeobjargproc sq_ass_item; > void *was_sq_ass_slice; > objobjproc sq_contains; > > binaryfunc sq_inplace_concat; > ssizeargfunc sq_inplace_repeat; > } PySequenceMethods; > > The slots "was_sq_slice" and "was_sq_ass_slice" aren't used any > longer. These can be re-incarnated to accept a slice object, and > sequence objects can be rewritten to use them instead of implementing > the mapping protocol (is there any reason listobject implements the > mapping protocol, other than to gain the ability to use slices for > __getitem__?). Existing 3rd party extensions don't *need* to be > recompiled or changed, however. They *can* be, if their authors are > interested, of course. Why even have separate tp_as_sequence and tp_as_mapping anymore? That particular distinction never existed for Python types, so why should it exist for C types at all? I forget if there was ever a real point to it, but all it seems to do now is create confusion, what with many sequence types implementing both, and PyMapping_Check() and PySequence_Check() doing seemingly random things to come up with somewhat sensible answers. Do note that the dict type actually implements tp_as_sequence (in order to support containment tests) and that PySequence_Check() has to explicitly return 0 for dicts -- which means that it will give the "wrong" answer for another type that behaves exactly like dicts. Getting rid of the misleading distinction seems like a much better idea than trying to re-conflate some of the issues. -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! -------------- next part -------------- An HTML attachment was scrubbed...
URL: From solipsis at pitrou.net Sat Mar 3 22:02:50 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 3 Mar 2012 22:02:50 +0100 Subject: [Python-Dev] slice subscripts for sequences and mappings References: <20120303132024.1dcba8c6@pitrou.net> Message-ID: <20120303220250.3457ef4d@pitrou.net> On Sat, 3 Mar 2012 12:59:13 -0800 Thomas Wouters wrote: > > Why even have separate tp_as_sequence and tp_as_mapping anymore? That > particular distinction never existed for Python types, so why should it > exist for C types at all? I forget if there was ever a real point to it, > but all it seems to do now is create confusion, what with many sequence > types implementing both, and PyMapping_Check() and PySequence_Check() doing > seemingly random things to come up with somewhat sensible answers. Ironically, most of the confusion stems from sequence types implementing the mapping protocol for extended slicing. > Do note > that the dict type actually implements tp_as_sequence (in order to support > containment tests) and that PySequence_Check() has to explicitly return 0 > for dicts -- which means that it will give the "wrong" answer for another > type that behaves exactly like dicts. It seems to be a leftover: int PySequence_Check(PyObject *s) { if (PyDict_Check(s)) return 0; return s != NULL && s->ob_type->tp_as_sequence && s->ob_type->tp_as_sequence->sq_item != NULL; } Dict objects have a NULL sq_item so even removing the explicit check would still return the right answer. > Getting rid of the misleading distinction seems like a much better idea > than trying to re-conflate some of the issues. This proposal sounds rather backwards, given that we now have separate Mapping and Sequence ABCs. Regards Antoine.
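The separate ABCs Antoine refers to give the unambiguous classification that the C-level PySequence_Check()/PyMapping_Check() heuristics only approximate. A quick check shows this (using the modern collections.abc spelling; in the Python versions discussed here, the ABCs lived directly in the collections module):

```python
from collections.abc import Mapping, Sequence

# list is a Sequence but not a Mapping, even though it fills
# tp_as_mapping at the C level to support extended slicing.
print(isinstance([], Sequence), isinstance([], Mapping))  # True False

# dict is a Mapping but not a Sequence, even though it fills
# tp_as_sequence at the C level to support containment tests.
print(isinstance({}, Mapping), isinstance({}, Sequence))  # True False
```

Unlike the slot-based checks, the ABCs classify by registered behavior rather than by which C struct happens to be populated, so no special-casing of dict is needed.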
From stefan_ml at behnel.de Sat Mar 3 22:12:27 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sat, 03 Mar 2012 22:12:27 +0100 Subject: [Python-Dev] slice subscripts for sequences and mappings In-Reply-To: References: <20120303132024.1dcba8c6@pitrou.net> Message-ID: Thomas Wouters, 03.03.2012 21:59: > Why even have separate tp_as_sequence and tp_as_mapping anymore? That > particular distinction never existed for Python types, so why should it > exist for C types at all? I forget if there was ever a real point to it, > but all it seems to do now is create confusion, what with many sequence > types implementing both, and PyMapping_Check() and PySequence_Check() doing > seemingly random things to come up with somewhat sensible answers. Do note > that the dict type actually implements tp_as_sequence (in order to support > containment tests) and that PySequence_Check() has to explicitly return 0 > for dicts -- which means that it will give the "wrong" answer for another > type that behaves exactly like dicts. > > Getting rid of the misleading distinction seems like a much better idea > than trying to re-conflate some of the issues. We're too far away from the release of Python 4 to change something with that kind of impact, though. Stefan From victor.stinner at gmail.com Sat Mar 3 22:37:54 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Sat, 03 Mar 2012 22:37:54 +0100 Subject: [Python-Dev] Sandboxing Python In-Reply-To: References: Message-ID: <4F528F32.3060409@gmail.com> Hi, Le 03/03/2012 20:13, Armin Rigo a écrit : >>> I challenge anyone to break pysandbox! I would be happy if anyone >>> breaks it because it would make it stronger. > > I tried to run the files from Lib/test/crashers and --- kind of > obviously --- I found at least two of them that still segfault > execfile.py, sometimes with minor edits and sometimes directly, on > CPython 2.7.
As described in the README file of pysandbox, pysandbox doesn't protect against vulnerabilities or bugs in Python. > As usual, I don't see the point of "challenging" us when we have > crashers already documented. Also, it's not like Lib/test/crashers > contains in detail *all* crashers that exist; some of them are of the > kind "there is a general issue with xxx, here is an example". > > If you are not concerned about segfaults but only real attacks, then > fine, I will not spend the hours necessary to turn the segfault into a > real attack :-) You may be able to exploit crashers, but I don't plan to work around such CPython bugs in pysandbox. I'm looking for vulnerabilities in pysandbox, not in CPython. Victor From nad at acm.org Sat Mar 3 22:57:39 2012 From: nad at acm.org (Ned Deily) Date: Sat, 03 Mar 2012 13:57:39 -0800 Subject: [Python-Dev] Why does Mac OS X python share site-packages with apple python? References: <5A0E2490-A743-4729-A752-D94524EA9840@barrys-emacs.org> Message-ID: In article <5A0E2490-A743-4729-A752-D94524EA9840 at barrys-emacs.org>, Barry Scott wrote: > On my Mac OS X 10.7.3 System I have lots of python kits installed for > developing extensions. > > I just noticed that Python.org 2.7.2 uses the same site-packages folder > as Apple's > 2.7.1. > > Since extensions compiled against Apple's 2.7.1 segv when used by > python.org's 2.7.2 > this is at least unfortunate. > > Here is what is in sys.path for both versions. Notice > /Library/Python/2.7/site-packages > is in both.
That directory is in the default sys.path for both the Apple-supplied Python 2.7 in Lion and for the python.org Python 2.7's but that doesn't mean both versions use the same site-packages directory: $ /usr/bin/python2.7 -c "import distutils.sysconfig; \ print(distutils.sysconfig.get_python_lib())" /Library/Python/2.7/site-packages $ /usr/local/bin/python2.7 -c "import distutils.sysconfig; \ print(distutils.sysconfig.get_python_lib())" /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages That means that, by default, packages installed by Distutils-based installs (setup.py, easy_install, pip, et al) will be installed to the corresponding directory for each version. The python.org OS X Pythons (and built-from-source framework builds) add the Apple-specific directory to the search path in order to allow sharing of installed third-party packages between the two. The feature was added in 2.7 and 3.1+ and tracked in Issue4865 (http://bugs.python.org/issue4865). Please open a new issue on the tracker if you have examples of how this is causing problems. Thanks.
-- Ned Deily, nad at acm.org From martin at v.loewis.de Sat Mar 3 23:12:55 2012 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Sat, 03 Mar 2012 23:12:55 +0100 Subject: [Python-Dev] PEP 414 In-Reply-To: References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <4F4FE650.8060402@active-4.com> <40C3F3BA-54E7-4B39-B3FB-20BEE65EB1D7@gmail.com> <4F4FFDC8.9020105@active-4.com> <4F5093D0.8020900@gmail.com> <7DDD8831-6759-4E42-AF1D-95830C814412@ox.cx> <4F50CD1C.2090800@gmail.com> <20120303014948.Horde.jZkbfML8999PUWqsfvvWLEA@webmail.df.eu> Message-ID: <4F529767.6020808@v.loewis.de> > 2to3 should recognize the str(string_literal) (or nstr(), or native(), > etc) as a native string and does not add prefix "u" to it. And you > have to explicitly specify these tips. That is already implemented. 2to3 *never* adds a u prefix anywhere, including not for str(string_literal). Regards, Martin From victor.stinner at gmail.com Sun Mar 4 00:11:25 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Sun, 04 Mar 2012 00:11:25 +0100 Subject: [Python-Dev] PEP 416: Add a frozendict builtin type In-Reply-To: References: Message-ID: <4F52A51D.5080206@gmail.com> Le 29/02/2012 19:21, Victor Stinner a écrit : > Rationale > ========= > > (...) Use cases of frozendict: (...) I updated the PEP to list use cases described in the other related mailing list thread. --- Use cases: * frozendict lookup can be done at compile time instead of runtime because the mapping is read-only. frozendict can be used instead of a preprocessor to remove conditional code at compilation, like code specific to a debug build. * hashable frozendict can be used as a key of a mapping or as a member of set. frozendict can be used to implement a cache.
* frozendict avoids the need of a lock when the frozendict is shared by multiple threads or processes, especially hashable frozendict. It would also help to prohibit coroutines (generators + greenlets) from modifying the global state. * frozendict helps to implement read-only object proxies for security modules. For example, it would be possible to use frozendict type for __builtins__ mapping or type.__dict__. This is possible because frozendict is compatible with the PyDict C API. * frozendict avoids the need of a read-only proxy in some cases. frozendict is faster than a proxy because getting an item in a frozendict is a fast lookup whereas a proxy requires a function call. * use a frozendict as the default value of a function argument: avoid the problem of mutable default arguments. --- Victor From amauryfa at gmail.com Sun Mar 4 00:14:14 2012 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Sun, 4 Mar 2012 00:14:14 +0100 Subject: [Python-Dev] cpython compilation error In-Reply-To: References: Message-ID: 2012/3/3 Vinay Sajip > Ejaj Hassan gmail.com> writes: > > > I was compiling Pcbuild.sln from cpython in vc++ 2008 and I got > the > error as "Solution folders are not supported in this version of > application-Solution folder will be displayed as unavailable". > > Could someone please tell me the source and reason for this error. > > It's because you're using the free "Express" edition of Visual Studio, and > not > the full, paid-for Visual Studio. > > However, I believe you can ignore the error, and the solution should still > be > built. (I'm not absolutely sure, as I use the full Visual Studio). I confirm: you can safely ignore this warning message. The "Solution folder" is a convenient place to group files not related to a sub-project, like the "readme.txt" file. It has no effect on the build. -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed...
URL: From barry at barrys-emacs.org Sun Mar 4 01:08:24 2012 From: barry at barrys-emacs.org (Barry Scott) Date: Sun, 4 Mar 2012 00:08:24 +0000 Subject: [Python-Dev] Why does Mac OS X python share site-packages with apple python? In-Reply-To: References: <5A0E2490-A743-4729-A752-D94524EA9840@barrys-emacs.org> Message-ID: <44CE1718-ED43-4FB0-98D5-34B3F2A6A5B1@barrys-emacs.org> On 3 Mar 2012, at 21:57, Ned Deily wrote: > In article <5A0E2490-A743-4729-A752-D94524EA9840 at barrys-emacs.org>, > Barry Scott wrote: >> On my Mac OS X 10.7.3 System I have lots of python kits installed for >> developing extensions. >> >> I'll just noticed that Python.org 2.7.2 uses the sames site-packages folder >> with Apple's >> 2.7.1. >> >> Since extensions compiled against Apple's 2.7.1 segv when used by >> python.org's 2.7.2 >> this is at least unfortunate. >> >> Here is the what is in sys.path for both versions. Notice >> /Library/Python/2.7/site-packages >> is in both. > > That directory is in the default sys.path for both the Apple-supplied > Python 2.7 in Lion and for the python.org Python 2.7's but that doesn't > mean both versions use the same site-packages directory: > > $ /usr/bin/python2.7 -c "import distutils.sysconfig; \ > print(distutils.sysconfig.get_python_lib())" > /Library/Python/2.7/site-packages > > $ /usr/local/bin/python2.7 -c "import distutils.sysconfig; \ > print(distutils.sysconfig.get_python_lib())" > /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-pack > ages > > That means that, by default, packages installed by Distutils-based > installs (setup.py, easy_install, pip, et al) will be installed to the > corresponding directory for each version. > > The python.org OS X Pythons (and built-from-source framework builds) add > the Apple-specific directory to the search path in order to allow > sharing of installed third-party packages between the two. 
The feature > was added in 2.7 and 3.1+ and tracked in Issue4865 > (http://bugs.python.org/issue4865). Please open a new issue on the > tracker if you have examples of how this is causing problems. Thanks. > Yes, I have an example that SEGVs: pysvn; details of the kit location are in the bug report. I take it that any .so will crash as well. Only .py can be shared. http://bugs.python.org/issue14188 Look at the order of sys.path: the Apple Python site-packages hides the python.org site-packages. If the shared folder came after the python.org one, then imports could be made to work. Barry From fijall at gmail.com Sun Mar 4 03:02:44 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sat, 3 Mar 2012 18:02:44 -0800 Subject: [Python-Dev] Sandboxing Python In-Reply-To: <4F528F32.3060409@gmail.com> References: <4F528F32.3060409@gmail.com> Message-ID: On Sat, Mar 3, 2012 at 1:37 PM, Victor Stinner wrote: > Hi, > > Le 03/03/2012 20:13, Armin Rigo a écrit : > >>>> I challenge anyone to break pysandbox! I would be happy if anyone >>>> breaks it because it would make it stronger. >> >> >> I tried to run the files from Lib/test/crashers and --- kind of >> obviously --- I found at least two of them that still segfault >> execfile.py, sometimes with minor edits and sometimes directly, on >> CPython 2.7. > > > As described in the README file of pysandbox, pysandbox doesn't protect > against vulnerabilities or bugs in Python. > > >> As usual, I don't see the point of "challenging" us when we have >> crashers already documented. Also, it's not like Lib/test/crashers >> contains in detail *all* crashers that exist; some of them are of the >> kind "there is a general issue with xxx, here is an example". >> >> If you are not concerned about segfaults but only real attacks, then >> fine, I will not spend the hours necessary to turn the segfault into a >> real attack :-) > > > You may be able to exploit crashers, but I don't plan to work around such > CPython bugs in pysandbox. 
> > I'm looking for vulnerabilities in pysandbox, not in CPython. > > Victor Well ok. But then what's the point of "challenging" people? You say "this is secure according to my knowledge" and when armin says "no it's not", you claim this is the wrong kind of security exploit. Segfaults (most of them) can generally be made into arbitrary code execution, hence the pysandbox is not quite secure. Even further, "any" sort of this "security restrictions" where you modify locals globals etc. would be seriously prone to attacks like those segfaults, unless you do something with the VM you're running. This makes it slightly less convincing to argue that the VM requires new features (in this case frozendict) in order to support the kind of program that's broken in the first place. Well, I think I'm seriously missing something. Cheers, fijal From ericsnowcurrently at gmail.com Sun Mar 4 03:07:57 2012 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Sat, 3 Mar 2012 19:07:57 -0700 Subject: [Python-Dev] PEP 416: Add a frozendict builtin type In-Reply-To: <4F52A51D.5080206@gmail.com> References: <4F52A51D.5080206@gmail.com> Message-ID: On Sat, Mar 3, 2012 at 4:11 PM, Victor Stinner wrote: > Le 29/02/2012 19:21, Victor Stinner a ?crit : >> >> Rationale >> ========= >> >> (...) Use cases of frozendict: (...) > > > I updated the PEP to list use cases described in the other related mailing > list thread. > --- > Use cases: > > ?* frozendict lookup can be done at compile time instead of runtime because > the mapping is read-only. frozendict can be used instead of a preprocessor > to remove conditional code at compilation, like code specific to a debug > build. > ?* hashable frozendict can be used as a key of a mapping or as a member of > set. frozendict can be used to implement a cache. > ?* frozendict avoids the need of a lock when the frozendict is shared by > multiple threads or processes, especially hashable frozendict. 
It would also > help to prohibe coroutines (generators + greenlets) to modify the global > state. > ?* frozendict helps to implement read-only object proxies for security > modules. For example, it would be possible to use frozendict type for > __builtins__ mapping or type.__dict__. This is possible because frozendict > is compatible with the PyDict C API. > ?* frozendict avoids the need of a read-only proxy in some cases. frozendict > is faster than a proxy because getting an item in a frozendict is a fast > lookup whereas a proxy requires a function call. > ?* use a frozendict as the default value of function argument: avoid the > problem of mutable default argument. Is your implementation (adapted to a standalone type) something you could put up on the cheeseshop? -eric From thomas at python.org Sun Mar 4 03:20:22 2012 From: thomas at python.org (Thomas Wouters) Date: Sat, 3 Mar 2012 18:20:22 -0800 Subject: [Python-Dev] slice subscripts for sequences and mappings In-Reply-To: <20120303220250.3457ef4d@pitrou.net> References: <20120303132024.1dcba8c6@pitrou.net> <20120303220250.3457ef4d@pitrou.net> Message-ID: On Sat, Mar 3, 2012 at 13:02, Antoine Pitrou wrote: > On Sat, 3 Mar 2012 12:59:13 -0800 > Thomas Wouters wrote: > > > > Why even have separate tp_as_sequence and tp_as_mapping anymore? That > > particular distinction never existed for Python types, so why should it > > exist for C types at all? I forget if there was ever a real point to it, > > but all it seems to do now is create confusion, what with many sequence > > types implementing both, and PyMapping_Check() and PySequence_Check() > doing > > seemingly random things to come up with somewhat sensible answers. > > Ironically, most of the confusion stems from sequence types > implementing the mapping protocol for extended slicing. 
> > > Do note > > that the dict type actually implements tp_as_sequence (in order to > support > > containtment tests) and that PySequence_Check() has to explicitly return > 0 > > for dicts -- which means that it will give the "wrong" answer for another > > type that behaves exactly like dicts. > > It seems to be a leftover: > > int > PySequence_Check(PyObject *s) > { > if (PyDict_Check(s)) > return 0; > return s != NULL && s->ob_type->tp_as_sequence && > s->ob_type->tp_as_sequence->sq_item != NULL; > } > > Dict objects have a NULL sq_item so even removing the explicit check > would still return the right answer. > > > Getting rid of the misleading distinction seems like a much better idea > > than trying to re-conflate some of the issues. > > This proposal sounds rather backwards, given that we now have separate > Mapping and Sequence ABCs. > I'm not sure how the ABCs, which are abstract declarations of semantics, tie into this specific implementation detail. ABCs work just as well for Python types as for C types, and Python types don't have this distinction. The distinction in C types has been *practically* useless for years, so why should it stay? What is the actual benefit here? -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas at python.org Sun Mar 4 03:24:40 2012 From: thomas at python.org (Thomas Wouters) Date: Sat, 3 Mar 2012 18:24:40 -0800 Subject: [Python-Dev] slice subscripts for sequences and mappings In-Reply-To: References: <20120303132024.1dcba8c6@pitrou.net> Message-ID: On Sat, Mar 3, 2012 at 13:12, Stefan Behnel wrote: > Thomas Wouters, 03.03.2012 21:59: > > Why even have separate tp_as_sequence and tp_as_mapping anymore? That > > particular distinction never existed for Python types, so why should it > > exist for C types at all? 
I forget if there was ever a real point to it, > > but all it seems to do now is create confusion, what with many sequence > > types implementing both, and PyMapping_Check() and PySequence_Check() > doing > > seemingly random things to come up with somewhat sensible answers. Do > note > > that the dict type actually implements tp_as_sequence (in order to > support > > containtment tests) and that PySequence_Check() has to explicitly return > 0 > > for dicts -- which means that it will give the "wrong" answer for another > > type that behaves exactly like dicts. > > > > Getting rid of the misleading distinction seems like a much better idea > > than trying to re-conflate some of the issues. > > We're too far away from the release of Python 4 to change something with > that kind of impact, though. It's not hard to do this in a backward-compatible way. Either grow one of the tp_as_* to include everything a 'unified' tp_as_everything struct would need, or add a new tp_as_everything slot in the type struct. Then add a tp_flag to indicate that the type has this new layout/slot and guard all uses of the new slots with a check for that flag. If the type doesn't have the new layout or doesn't have it or the slots in it set, the code can fall back to the old try-one-and-then-the-other behaviour of dealing with tp_as_sequence and tp_as_mapping. (Let's not forget about tp_as_sequence.sq_concat, tp_as_number.nb_add, tp_as_sequence.sq_repeat and tp_as_number.nb_mul either.) -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From guido at python.org Sun Mar 4 03:51:18 2012 From: guido at python.org (Guido van Rossum) Date: Sat, 3 Mar 2012 18:51:18 -0800 Subject: [Python-Dev] Sandboxing Python In-Reply-To: References: <4F528F32.3060409@gmail.com> Message-ID: On Sat, Mar 3, 2012 at 6:02 PM, Maciej Fijalkowski wrote: > On Sat, Mar 3, 2012 at 1:37 PM, Victor Stinner wrote: >> Hi, >> >> Le 03/03/2012 20:13, Armin Rigo a ?crit : >> >>>>> I challenge anymore to break pysandbox! I would be happy if anyone >>>>> breaks it because it would make it more stronger. >>> >>> >>> I tried to run the files from Lib/test/crashers and --- kind of >>> obviously --- I found at least two of them that still segfaults >>> execfile.py, sometimes with minor edits and sometimes directly, on >>> CPython 2.7. >> >> >> As described in the README file of pysandbox, pysandbox doesn't protect >> against vulnerabilities or bugs in Python. >> >> >>> As usual, I don't see the point of "challenging" us when we have >>> crashers already documented. ?Also, it's not like Lib/test/crashers >>> contains in detail *all* crashers that exist; some of them are of the >>> kind "there is a general issue with xxx, here is an example". >>> >>> If you are not concerned about segfaults but only real attacks, then >>> fine, I will not spend the hours necessary to turn the segfault into a >>> real attack :-) >> >> >> You may be able to exploit crashers, but I don't plan to workaround such >> CPython bug in pysandbox. >> >> I'm looking for vulnerabilities in pysandbox, not in CPython. >> >> Victor > > Well ok. But then what's the point of "challenging" people? > > You say "this is secure according to my knowledge" and when armin says > "no it's not", you claim this is the wrong kind of security exploit. > Segfaults (most of them) can generally be made into arbitrary code > execution, hence the pysandbox is not quite secure. Even further, > "any" sort of this "security restrictions" where you modify locals > globals etc. 
would be seriously prone to attacks like those segfaults, > unless you do something with the VM you're running. This makes it > slightly less convincing to argue that the VM requires new features > (in this case frozendict) in order to support the kind of program > that's broken in the first place. > > Well, I think I'm seriously missing something. Could we put asserts in the places where segfaults may happen? Then Victor could say "if you want this to be secure then you must build your Python executable with asserts on." IIRC some of the segfaults *already* trigger asserts when those are enabled. -- --Guido van Rossum (python.org/~guido) From eliben at gmail.com Sun Mar 4 04:37:37 2012 From: eliben at gmail.com (Eli Bendersky) Date: Sun, 4 Mar 2012 05:37:37 +0200 Subject: [Python-Dev] slice subscripts for sequences and mappings In-Reply-To: References: <20120303132024.1dcba8c6@pitrou.net> Message-ID: >> Thomas Wouters, 03.03.2012 21:59: >> > Why even have separate tp_as_sequence and tp_as_mapping anymore? That >> > particular distinction never existed for Python types, so why should it >> > exist for C types at all? I forget if there was ever a real point to it, >> > but all it seems to do now is create confusion, what with many sequence >> > types implementing both, and PyMapping_Check() and PySequence_Check() >> > doing >> > seemingly random things to come up with somewhat sensible answers. Do >> > note >> > that the dict type actually implements tp_as_sequence (in order to >> > support >> > containtment tests) and that PySequence_Check() has to explicitly return >> > 0 >> > for dicts -- which means that it will give the "wrong" answer for >> > another >> > type that behaves exactly like dicts. >> > >> > Getting rid of the misleading distinction seems like a much better idea >> > than trying to re-conflate some of the issues. >> >> We're too far away from the release of Python 4 to change something with >> that kind of impact, though. 
> > It's not hard to do this in a backward-compatible way. Either grow one of > the tp_as_* to include everything a 'unified' tp_as_everything struct would > need, or add a new tp_as_everything slot in the type struct. Then add a > tp_flag to indicate that the type has this new layout/slot and guard all > uses of the new slots with a check for that flag. If the type doesn't have > the new layout, or doesn't have the slots in it set, the code can fall > back to the old try-one-and-then-the-other behaviour of dealing with > tp_as_sequence and tp_as_mapping. > > (Let's not forget about tp_as_sequence.sq_concat, tp_as_number.nb_add, > tp_as_sequence.sq_repeat and tp_as_number.nb_mul either.) > There's nothing to unify, really, since PyMappingMethods is just a subset of PySequenceMethods:

typedef struct {
    lenfunc mp_length;
    binaryfunc mp_subscript;
    objobjargproc mp_ass_subscript;
} PyMappingMethods;

with the small difference that in PySequenceMethods sq_item and sq_ass_item just accept numeric indices. However, if the was_sq_slice and was_sq_ass_slice fields in PySequenceMethods are reinstated to accept a generic PyObject, PyMappingMethods will be a true subset. If we look at the code, this becomes even clearer: in a full grep on the Python 3.3 source, there is no object that defines tp_as_mapping but does not also define tp_as_sequence, except Modules/_sqlite/row.c [I'm not familiar enough with the _sqlite module, but there's a chance it would make sense for the Row to be a sequence too]. 
Eli From fijall at gmail.com Sun Mar 4 04:41:58 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sat, 3 Mar 2012 19:41:58 -0800 Subject: [Python-Dev] Sandboxing Python In-Reply-To: References: <4F528F32.3060409@gmail.com> Message-ID: On Sat, Mar 3, 2012 at 6:51 PM, Guido van Rossum wrote: > On Sat, Mar 3, 2012 at 6:02 PM, Maciej Fijalkowski wrote: >> On Sat, Mar 3, 2012 at 1:37 PM, Victor Stinner wrote: >>> Hi, >>> >>> Le 03/03/2012 20:13, Armin Rigo a ?crit : >>> >>>>>> I challenge anymore to break pysandbox! I would be happy if anyone >>>>>> breaks it because it would make it more stronger. >>>> >>>> >>>> I tried to run the files from Lib/test/crashers and --- kind of >>>> obviously --- I found at least two of them that still segfaults >>>> execfile.py, sometimes with minor edits and sometimes directly, on >>>> CPython 2.7. >>> >>> >>> As described in the README file of pysandbox, pysandbox doesn't protect >>> against vulnerabilities or bugs in Python. >>> >>> >>>> As usual, I don't see the point of "challenging" us when we have >>>> crashers already documented. ?Also, it's not like Lib/test/crashers >>>> contains in detail *all* crashers that exist; some of them are of the >>>> kind "there is a general issue with xxx, here is an example". >>>> >>>> If you are not concerned about segfaults but only real attacks, then >>>> fine, I will not spend the hours necessary to turn the segfault into a >>>> real attack :-) >>> >>> >>> You may be able to exploit crashers, but I don't plan to workaround such >>> CPython bug in pysandbox. >>> >>> I'm looking for vulnerabilities in pysandbox, not in CPython. >>> >>> Victor >> >> Well ok. But then what's the point of "challenging" people? >> >> You say "this is secure according to my knowledge" and when armin says >> "no it's not", you claim this is the wrong kind of security exploit. >> Segfaults (most of them) can generally be made into arbitrary code >> execution, hence the pysandbox is not quite secure. 
Even further, >> "any" sort of this "security restrictions" where you modify locals >> globals etc. would be seriously prone to attacks like those segfaults, >> unless you do something with the VM you're running. This makes it >> slightly less convincing to argue that the VM requires new features >> (in this case frozendict) in order to support the kind of program >> that's broken in the first place. >> >> Well, I think I'm seriously missing something. > > Could we put asserts in the places where segfaults may happen? Then > Victor could say "if you want this to be secure then you must build > your Python executable with asserts on." IIRC some of the segfaults > *already* trigger asserts when those are enabled. It's easy for some cases. Stack exhaustion cases might be significantly harder although you might pass some compiler-specific options to defend against that. The problem is a bit that those are "examples", which mean that they might either touch specific parts of code or "code that looks like that". A good example of a latter is chaining of iterators. Any iterators that can be chained can be made into a stack exhaustion segfault. I suppose with a bit of effort it might be made significantly harder though. Cheers, fijal From ncoghlan at gmail.com Sun Mar 4 04:59:34 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 4 Mar 2012 13:59:34 +1000 Subject: [Python-Dev] slice subscripts for sequences and mappings In-Reply-To: References: <20120303132024.1dcba8c6@pitrou.net> Message-ID: On Sun, Mar 4, 2012 at 12:24 PM, Thomas Wouters wrote: > (Let's not forget about tp_as_sequence.sq_concat, tp_as_number.nb_add, > tp_as_sequence.sq_repeat and tp_as_number.nb_mul either.) Indeed, let's not forget about those, which are a compatibility problem in and of themselves: http://bugs.python.org/issue11477 At most, the tp_mapping and tp_as_sequence overlap should be an FAQ entry in the devguide that says "yes, the implementation of this is weird. 
It's like that for historical reasons, and fixing it is a long way down the priority list for changes" Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From storchaka at gmail.com Sun Mar 4 07:59:41 2012 From: storchaka at gmail.com (Serhiy Storchaka) Date: Sun, 04 Mar 2012 08:59:41 +0200 Subject: [Python-Dev] PEP 414 In-Reply-To: <4F529767.6020808@v.loewis.de> References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <4F4FE650.8060402@active-4.com> <40C3F3BA-54E7-4B39-B3FB-20BEE65EB1D7@gmail.com> <4F4FFDC8.9020105@active-4.com> <4F5093D0.8020900@gmail.com> <7DDD8831-6759-4E42-AF1D-95830C814412@ox.cx> <4F50CD1C.2090800@gmail.com> <20120303014948.Horde.jZkbfML8999PUWqsfvvWLEA@webmail.df.eu> <4F529767.6020808@v.loewis.de> Message-ID: 04.03.12 00:12, "Martin v. Löwis" wrote: >> 2to3 should recognize the str(string_literal) (or nstr(), or native(), >> etc) as a native string and does not add prefix "u" to it. And you >> have to explicitly specify these tips. > > That is already implemented. 2to3 *never* adds a u prefix anywhere, > including not for str(string_literal). Sorry, I mean *3to2*. From ncoghlan at gmail.com Sun Mar 4 08:34:32 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 4 Mar 2012 17:34:32 +1000 Subject: [Python-Dev] PEP 414 updated Message-ID: My rewritten version of PEP 414 is now up (http://www.python.org/dev/peps/pep-0414/). It describes in detail a lot more of the historical background that was taken as read when Guido accepted the PEP. Can we let the interminable discussion die now? Please? Regards, Nick. P.S. If you find an actual factual *error* in the PEP, let me know by private email. 
If you just disagree with Guido's acceptance of the PEP, or want to quibble about my personal choice of wording on a particular point, please just let it rest. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From victor.stinner at gmail.com Sun Mar 4 10:30:11 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Sun, 4 Mar 2012 10:30:11 +0100 Subject: [Python-Dev] PEP 416: Add a frozendict builtin type In-Reply-To: References: <4F52A51D.5080206@gmail.com> Message-ID: > Is your implementation (adapted to a standalone type) something you > could put up on the cheeseshop? Short answer: no. My implementation (attached to issue #14162) reuses most of the private PyDict functions, which are not exported, and these functions have to be modified to accept a frozendict as input. One of the advantages of reusing the PyDict functions is also to have a frozendict type compatible with the (public) PyDict API: PyDict_GetItem(), PyDict_SetItem(), etc. This property allows further changes like accepting a frozendict for __builtins__ or freezing a type dict (using frozendict for type.__dict__). If you only want a frozendict type, you can copy/paste the PyDict code or implement it completely differently. Or you can write a read-only proxy. Victor From vinay_sajip at yahoo.co.uk Sun Mar 4 10:34:03 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Sun, 4 Mar 2012 09:34:03 +0000 (UTC) Subject: [Python-Dev] PEP 414 updated References: Message-ID: Nick Coghlan gmail.com> writes: > My rewritten version of PEP 414 is now up > (http://www.python.org/dev/peps/pep-0414/). It describes in detail a > lot more of the historical background that was taken as read when > Guido accepted the PEP. Nice work - thanks! 
I've implemented a first attempt at an import hook as mentioned in the PEP: https://bitbucket.org/vinay.sajip/uprefix/ It's used as follows: assume you have a simple package hierarchy of code containing u-prefixed literals: frob +-- __init__.py +-- subwob | +-- __init__.py | +-- subsubwob.py +-- wob.py with the following contents: # frob/subwob/__init__.py z = u'def' #------------------------- # frob/subwob/subsubwob.py w = u'tuv' #------------------------- # frob/__init__.py y = u'xyz' #------------------------- # frob/wob.py x = u'abc' #------------------------- You can now import these in Python 3.2 using the hook: Python 3.2.2 (default, Sep 5 2011, 21:17:14) [GCC 4.6.1] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import uprefix; uprefix.register_hook() >>> import frob.subwob.subsubwob >>> frob.subwob.subsubwob.w 'tuv' >>> frob.subwob >>> frob.subwob.z 'def' >>> import frob.wob >>> frob.wob.x 'abc' >>> frob >>> frob.y 'xyz' >>> The "import uprefix; uprefix.register_hook()" is all that's needed to enable the hook. You can also unregister the hook by calling "uprefix.unregister_hook()". The project is set up with a setup.py and (basic) test suite, though it's too early to put it on PyPI. I have done basic testing with it, and it should also work as expected in virtual environments. The implementation uses lib2to3, though that could be changed if needed. You can also, of course, use it in Python 3.3 right now (before the PEP gets implemented). Please take a look at it, try it out, and give some feedback. 
Regards, Vinay Sajip From stefan at bytereef.org Sun Mar 4 12:33:29 2012 From: stefan at bytereef.org (Stefan Krah) Date: Sun, 4 Mar 2012 12:33:29 +0100 Subject: [Python-Dev] Assertion in _PyManagedBuffer_FromObject() In-Reply-To: References: <20120302125540.GA14210@sleipnir.bytereef.org> <20120302153057.GA14973@sleipnir.bytereef.org> <20120302164226.GA15907@sleipnir.bytereef.org> <20120303110838.GA19066@sleipnir.bytereef.org> Message-ID: <20120304113329.GA24326@sleipnir.bytereef.org> Thomas Wouters wrote: > Do you test against pydebug builds of Python, or otherwise a build that > actually enables asserts? Yes, I do (and much more than that): http://hg.python.org/features/cdecimal/file/40917e4b51aa/Modules/_decimal/python/runall-memorydebugger.sh http://hg.python.org/features/cdecimal/file/40917e4b51aa/Modules/_decimal/python/runall.bat It's automated, so it's not a big deal. You get 100% coverage, with and without threads, all machine configurations, pydebug, refleaks, release build and release build with Valgrind. The version on PyPI has had the same tests for a long time (i.e. also before I became involved with core development). > Because I suspect most people don't, so they don't trigger the assert. > Python is normally (that is, a release build on Windows or a regular, > non-pydebug build on the rest) built without asserts. Asserts are > disabled by the NDEBUG symbol, which Python passes for regular builds. If many C-extension authors don't know the benefits of --with-pydebug and the consensus here is to protect these authors and their users, then of course I agree with the exception approach for a (now hypothetical) API change. I would have some comments about valid uses of explicit aborts in a library that essentially perform the same function as compiling said library with -D_FORTIFY_SOURCE=2 and -ftrapv (i.e. crash when an external program violates a function contract), but I suspect that would be OT now. 
Stefan Krah From mark at hotpy.org Sun Mar 4 13:18:42 2012 From: mark at hotpy.org (Mark Shannon) Date: Sun, 04 Mar 2012 12:18:42 +0000 Subject: [Python-Dev] Defending against stack overflow (was Sandboxing Python) In-Reply-To: References: <4F528F32.3060409@gmail.com> Message-ID: <4F535DA2.2030702@hotpy.org> Having a look at the "crashers" in Lib/test/crashers, it seems to me that they fall into four groups:

1. Unsafe gc functions like getreferrers()
2. Stack overflows.
3. "Normal" bugs that can be fixed on a case-by-case basis (like borrowed_ref_1.py and borrowed_ref_2.py)
4. Things that don't crash CPython anymore and should be moved.

1. can be dealt with by removing the offending function(s), 3. by fixing the problem directly. 4. no need to fix, just move :) So, how to handle stack overflows (of the C stack)? To prevent a stack overflow, an exception must be raised before the VM runs out of C stack. To do this we need two pieces of info:

a) How much stack we've used
b) How much stack is available.

(a) can be easily, if not strictly portably, determined by taking the address of a local variable. (b) is tougher and is almost certainly OS dependent, but a conservative estimate is easy to do. A different approach is to separate the Python stack from the C stack, like Stackless. This is a much more elegant approach, but is also a *lot* more work. I think it is a reasonable aim for 3.3 that Lib/test/crashers should be empty. Cheers, Mark. 
From stefan_ml at behnel.de Sun Mar 4 13:49:45 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sun, 04 Mar 2012 13:49:45 +0100 Subject: [Python-Dev] Matrix testing against CPython releases (was: Re: Assertion in _PyManagedBuffer_FromObject()) In-Reply-To: <20120304113329.GA24326@sleipnir.bytereef.org> References: <20120302125540.GA14210@sleipnir.bytereef.org> <20120302153057.GA14973@sleipnir.bytereef.org> <20120302164226.GA15907@sleipnir.bytereef.org> <20120303110838.GA19066@sleipnir.bytereef.org> <20120304113329.GA24326@sleipnir.bytereef.org> Message-ID: Stefan Krah, 04.03.2012 12:33: > Thomas Wouters wrote: >> Do you test against pydebug builds of Python, or otherwise a build that >> actually enables asserts? > > Yes, I do (and much more than that): > > http://hg.python.org/features/cdecimal/file/40917e4b51aa/Modules/_decimal/python/runall-memorydebugger.sh > http://hg.python.org/features/cdecimal/file/40917e4b51aa/Modules/_decimal/python/runall.bat > > It's automated, so it's not a big deal. You get 100% coverage, with and without > threads, all machine configurations, pydebug, refleaks, release build and > release build with Valgrind. Same for Cython. We continuously test against the debug builds of all CPython branches since 2.4 (the oldest we support), as well as the latest developer branch, using both our own test suite and Python's regression test suite. https://sage.math.washington.edu:8091/hudson/ https://sage.math.washington.edu:8091/hudson/view/python/ BTW, I can warmly recommend Jenkins' matrix builds for this kind of compatibility testing. Here's an example: https://sage.math.washington.edu:8091/hudson/job/cython-devel-tests/ Basically, you write a build script and Jenkins configures it using environment variables that define the specific setup, e.g. Python 2.7 with C backend. 
It'll then run all combinations in parallel (optionally filtering out nonsense combinations or preferring combinations that should fail the build early) and present the results both as an aggregated view and in separate per-setup views. It also uses file hashes to remember where the dependencies came from, e.g. which build created the CPython installation that was used for testing, so that you can jump right to the build log of the dependency to check for relevant changes that may have triggered a test failure. Oh, and you can just copy such a job config to set up a separate set of test jobs for a developer's branch, for example. A huge help in distributed developer settings, or when you want to get a GSoC student up and running. Stefan From zbyszek at in.waw.pl Sun Mar 4 13:25:06 2012 From: zbyszek at in.waw.pl (=?UTF-8?B?WmJpZ25pZXcgSsSZZHJ6ZWpld3NraS1Tem1law==?=) Date: Sun, 04 Mar 2012 13:25:06 +0100 Subject: [Python-Dev] PEP 414 updated In-Reply-To: References: Message-ID: <4F535F22.8030703@in.waw.pl> On 03/04/2012 10:34 AM, Vinay Sajip wrote: > https://bitbucket.org/vinay.sajip/uprefix/ >>>> import uprefix; uprefix.register_hook() >>>> import frob.subwob.subsubwob >>>> frob.subwob.subsubwob.w Hi, it's pretty cool that 150 lines is enough to have this functionality. This guard:

if sys.version_info[0] < 3:
    raise NotImplementedError('This hook is implemented for Python 3 only')

Wouldn't it be better if the hook did nothing on Python 2? I think it'll make it necessary to use something like

import sys
if sys.version_info[0] < 3:
    import uprefix
    uprefix.register_hook()

in the calling code to enable the code to run unchanged on both branches. Also: have you thought about providing a context manager which does register_hook() in __enter__() and unregister_hook() in __exit__()? I think that some code will want to enable the hook only for specific modules. 
The number of lines could be minimized with something like this: import uprefix with uprefix.hook: import abcde_with_u import bcdef_with_u import other_module_without_u Regards, Zbyszek From vinay_sajip at yahoo.co.uk Sun Mar 4 14:14:18 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Sun, 4 Mar 2012 13:14:18 +0000 (UTC) Subject: [Python-Dev] PEP 414 updated References: <4F535F22.8030703@in.waw.pl> Message-ID: Zbigniew Jędrzejewski-Szmek in.waw.pl> writes: > if sys.version_info[0] < 3: > raise NotImplementedError('This hook is implemented for Python 3 only') > > Wouldn't it be better if the hook did nothing when on python 2? > I think it'll make it necessary to use something like Actually I've realised the guard won't be invoked on Python 2, anyway: I later added a "raise ImportError() from e" in an exception handler, which leads to a syntax error in Python 2 before the guard even gets executed. So, I'll remove the guard (as it does nothing useful anyway) and think a bit more about not failing on Python 2. Perhaps - not use the "from" syntax in the exception handler, and do a no-op in register_hook if on Python 2. > Also: have you though about providing a context manager which does > register_hook() in __enter__() and unregister_hook() in __exit__()? Of course, things like this can be added without too much trouble. Thanks for the feedback. Regards, Vinay Sajip From armin.ronacher at active-4.com Sun Mar 4 14:46:12 2012 From: armin.ronacher at active-4.com (Armin Ronacher) Date: Sun, 04 Mar 2012 13:46:12 +0000 Subject: [Python-Dev] PEP 414 updated In-Reply-To: References: Message-ID: <4F537224.8070203@active-4.com> Hi, It should also be added that the Python 3.3 alpha will release with support: Python 3.3.0a0 (default:042e7481c7b4, Mar 4 2012, 12:37:26) [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> u"Hello" + ' World!' 'Hello World!'
Regards, Armin From solipsis at pitrou.net Sun Mar 4 14:56:38 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 4 Mar 2012 14:56:38 +0100 Subject: [Python-Dev] slice subscripts for sequences and mappings In-Reply-To: References: <20120303132024.1dcba8c6@pitrou.net> <20120303220250.3457ef4d@pitrou.net> Message-ID: <20120304145638.425132eb@pitrou.net> On Sat, 3 Mar 2012 18:20:22 -0800 Thomas Wouters wrote: > > I'm not sure how the ABCs, which are abstract declarations of semantics, > tie into this specific implementation detail. ABCs work just as well for > Python types as for C types, and Python types don't have this distinction. > The distinction in C types has been *practically* useless for years, so why > should it stay? What is the actual benefit here? For one, it's certainly easier to implement an extension type if your getitem function receives a Py_ssize_t directly, rather than a PyObject. (it can be more efficient too) Regards Antoine. From ncoghlan at gmail.com Sun Mar 4 15:01:41 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 5 Mar 2012 00:01:41 +1000 Subject: [Python-Dev] PEP 414 updated In-Reply-To: <4F537224.8070203@active-4.com> References: <4F537224.8070203@active-4.com> Message-ID: On Sun, Mar 4, 2012 at 11:46 PM, Armin Ronacher wrote: > Hi, > > It should also be added that the Python 3.3 alpha will release with support: > > Python 3.3.0a0 (default:042e7481c7b4, Mar 4 2012, 12:37:26) > [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin > Type "help", "copyright", "credits" or "license" for more information. > >>> u"Hello" + ' World!' > 'Hello World!' Nice :) Do you have any more updates left to do? I saw the change, the tests, the docs and the tokenizer updates go by on python-checkins, so if you're done we can mark the PEP as Final (at which point the inclusion in the first alpha is implied). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com |
Brisbane, Australia From armin.ronacher at active-4.com Sun Mar 4 15:38:14 2012 From: armin.ronacher at active-4.com (Armin Ronacher) Date: Sun, 04 Mar 2012 14:38:14 +0000 Subject: [Python-Dev] PEP 414 updated In-Reply-To: References: <4F537224.8070203@active-4.com> Message-ID: <4F537E56.2050807@active-4.com> Hi, On 3/4/12 2:01 PM, Nick Coghlan wrote: > Nice :) > > Do you have any more updates left to do? I saw the change, the tests, > the docs and the tokenizer updates go by on python-checkins, so if > you're done we can mark the PEP as Final (at which point the inclusion > in the first alpha is implied). Docs just have a minor notice regarding the reintroduced support for 'u' prefixes; someone might want to add more to it, especially regarding the intended use for them. Regards, Armin From armin.ronacher at active-4.com Sun Mar 4 15:43:28 2012 From: armin.ronacher at active-4.com (Armin Ronacher) Date: Sun, 04 Mar 2012 14:43:28 +0000 Subject: [Python-Dev] Install Hook [Was: Re: PEP 414 updated] In-Reply-To: References: <4F535F22.8030703@in.waw.pl> Message-ID: <4F537F90.60509@active-4.com> Hi, Just to reiterate what I wrote on IRC: Please do not write or advocate for import hooks, especially not for porting purposes. It would either mean that people start adding that hook on their own to the code (and that awfully reminds me of the days of 'require "rubygems"' in the Ruby world) or that the __init__.py has to do that and that's a non trivial thing. The hook on install time works perfectly fine and the only situation where it might not work is when you're trying to use Python 3.2 for development and also support down to 2.x by using the newly introduced u-prefixes. In this case I would recommend using Python 3.3 for development and running the testsuite periodically from Python 3.2 after installing the library (into a virtualenv for instance).
The current work in progress install time hook can be found here: https://github.com/mitsuhiko/unicode-literals-pep/tree/master/install-hook Regards, Armin From storchaka at gmail.com Sun Mar 4 15:55:21 2012 From: storchaka at gmail.com (Serhiy Storchaka) Date: Sun, 04 Mar 2012 16:55:21 +0200 Subject: [Python-Dev] Sandboxing Python In-Reply-To: References: Message-ID: $ python execfile.py badhash.py Hang up. -------------- next part -------------- A non-text attachment was scrubbed... Name: badhash.py Type: text/x-python Size: 83 bytes Desc: not available URL: From storchaka at gmail.com Sun Mar 4 16:43:09 2012 From: storchaka at gmail.com (Serhiy Storchaka) Date: Sun, 04 Mar 2012 17:43:09 +0200 Subject: [Python-Dev] Sandboxing Python In-Reply-To: References: Message-ID: There is even easier way to exceed the time-limit timeout and to eat CPU: sum(xrange(1000000000)). From vinay_sajip at yahoo.co.uk Sun Mar 4 17:06:31 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Sun, 4 Mar 2012 16:06:31 +0000 (UTC) Subject: [Python-Dev] Install / import hooks References: <4F535F22.8030703@in.waw.pl> <4F537F90.60509@active-4.com> Message-ID: Armin Ronacher active-4.com> writes: > Please do not write or advocate for import hooks, especially not for > porting purposes. It would either mean that people start adding that > hook on their own to the code (and that awfully reminds me of the days > of 'require "rubygems"' in the Ruby world) or that the __init__.py has > to do that and that's a non trivial thing. Well, we have to treat people as grown ups, don't we? > The hook on install time works perfectly fine and the only situation > where it might not work is when you're trying to use Python 3.2 for > development and also support down to 2.x by using the newly introduced > u-prefixes. This is exactly my use case on some projects; I can't always afford the time delay of a processing step between every edit and test (the same complaint as was made about 2to3 in the PEP). 
People should be free to use what's best for their use cases and development work processes. Regards, Vinay Sajip From guido at python.org Sun Mar 4 17:42:54 2012 From: guido at python.org (Guido van Rossum) Date: Sun, 4 Mar 2012 08:42:54 -0800 Subject: [Python-Dev] PEP 414 updated In-Reply-To: References: Message-ID: On Sat, Mar 3, 2012 at 11:34 PM, Nick Coghlan wrote: > My rewritten version of PEP 414 is now up > (http://www.python.org/dev/peps/pep-0414/). It describes in detail a > lot more of the historical background that was taken as read when > Guido accepted the PEP. Thanks very much! It looks great to me. > Can we let the interminable discussion die now? > > Please? > > Regards, > Nick. > > P.S. If you find an actual factual *error* in the PEP, let me know by > private email. If you just disagree with Guido's acceptance of the > PEP, or want to quibble about my personal choice of wording on a > particular point, please just let it rest. +1 -- --Guido van Rossum (python.org/~guido) From guido at python.org Sun Mar 4 17:44:30 2012 From: guido at python.org (Guido van Rossum) Date: Sun, 4 Mar 2012 08:44:30 -0800 Subject: [Python-Dev] Install Hook [Was: Re: PEP 414 updated] In-Reply-To: <4F537F90.60509@active-4.com> References: <4F535F22.8030703@in.waw.pl> <4F537F90.60509@active-4.com> Message-ID: On Sun, Mar 4, 2012 at 6:43 AM, Armin Ronacher wrote: > Please do not write or advocate for import hooks, especially not for > porting purposes. ?It would either mean that people start adding that > hook on their own to the code (and that awfully reminds me of the days > of 'require "rubygems"' in the Ruby world) or that the __init__.py has > to do that and that's a non trivial thing. I'd love a pointer to the rubygems debacle... > The hook on install time works perfectly fine and the only situation > where it might not work is when you're trying to use Python 3.2 for > development and also support down to 2.x by using the newly introduced > u-prefixes. 
In this case I would recommend using Python 3.3 for > development and running the testsuite periodically from Python 3.2 after > installing the library (into a virtualenv for instance). +1 > The current work in progress install time hook can be found here: > https://github.com/mitsuhiko/unicode-literals-pep/tree/master/install-hook Yee! -- --Guido van Rossum (python.org/~guido) From arigo at tunes.org Sun Mar 4 18:10:19 2012 From: arigo at tunes.org (Armin Rigo) Date: Sun, 4 Mar 2012 18:10:19 +0100 Subject: [Python-Dev] Sandboxing Python In-Reply-To: References: <4F528F32.3060409@gmail.com> Message-ID: Hi all, On Sun, Mar 4, 2012 at 03:51, Guido van Rossum wrote: > Could we put asserts in the places where segfaults may happen? No. I checked Lib/test/crashers/*.py and none of them would be safe with just a failing assert. If they were, we'd have written the assert long ago :-( "mutation_inside_cyclegc.py" is not tied to a particular place in the source; "loosing_mro_ref.py" requires an extra INCREF/DECREF in a performance-critical path; etc. Changing CPython to make it truly secure is definitely either a lost cause or a real major effort, and pysandbox just gives another such example. My advise is to give up and move security at some other level. (Or else, if you want to play this game, there is PyPy's sandboxing, which is just an unpolished proof of concept so far. I can challenge anyone to attack it, and this time it includes attempts to consume too much time or memory, to crash the process in any other way than a clean "fatal error!" message, and more generally to exploit issues that are dismissed by pysandbox as irrelevant.) A bientôt, Armin.
From mark at hotpy.org Sun Mar 4 18:34:01 2012 From: mark at hotpy.org (Mark Shannon) Date: Sun, 04 Mar 2012 17:34:01 +0000 Subject: [Python-Dev] Sandboxing Python In-Reply-To: References: <4F528F32.3060409@gmail.com> Message-ID: <4F53A789.4050004@hotpy.org> Armin Rigo wrote: > Hi all, > > On Sun, Mar 4, 2012 at 03:51, Guido van Rossum wrote: >> Could we put asserts in the places where segfaults may happen? > > No. I checked Lib/test/crashers/*.py and none of them would be safe > with just a failing assert. If they were, we'd have written the > assert long ago :-( "mutation_inside_cyclegc.py" is not tied to a > particular place in the source; "loosing_mro_ref.py" requires an extra > INCREF/DECREF in a performance-critical path; etc. > > Changing CPython to make it truly secure is definitely either a lost > cause or a real major effort, and pysandbox just gives another such > example. My advise is to give up and move security at some other > level. I don't think it is as hard as all that. All the crashers can be fixed, and with minimal effect on performance. (although the gc module might need couple of function removed) > > (Or else, if you want to play this game, there is PyPy's sandboxing, > which is just an unpolished proof a concept so far. I can challenge > anyone to attack it, and this time it includes attempts to consume too > much time or memory, to crash the process in any other way than a > clean "fatal error!" message, and more generally to exploit issues > that are dismissed by pysandbox as irrelevant.) Using too much memory can be dealt with at one place (in the allocator). You can't solve the too much time, without solving the halting problem, but you can make sure all code is interruptable (i.e. Cntrl-C works). Cheers, Mark. 
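As a rough illustration of the "one place" idea, an address-space rlimit already turns runaway allocation into a clean MemoryError today, without touching the allocator itself (a Unix-only sketch; the helper name is invented):

```python
import resource

def run_with_memory_cap(max_bytes, func):
    """Run func() with the process address space capped at max_bytes.

    An OS-level approximation of checking inside the interpreter's
    allocator: once the cap is hit, malloc() fails and CPython raises
    an ordinary MemoryError instead of letting untrusted code consume
    all available memory.
    """
    soft, hard = resource.getrlimit(resource.RLIMIT_AS)
    resource.setrlimit(resource.RLIMIT_AS, (max_bytes, hard))
    try:
        return func()
    finally:
        # Restore the previous soft limit for the rest of the process.
        resource.setrlimit(resource.RLIMIT_AS, (soft, hard))
```

This caps the whole process rather than just the sandboxed code, which is exactly why doing it in the allocator, as suggested above, would be the finer-grained solution.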
From barry at python.org Sun Mar 4 19:18:04 2012 From: barry at python.org (Barry Warsaw) Date: Sun, 4 Mar 2012 13:18:04 -0500 Subject: [Python-Dev] PEP 414 updated References: Message-ID: <20120304131804.405e9a42@resist.wooz.org> On Mar 04, 2012, at 05:34 PM, Nick Coghlan wrote: >My rewritten version of PEP 414 is now up >(http://www.python.org/dev/peps/pep-0414/). It describes in detail a lot more >of the historical background that was taken as read when Guido accepted the >PEP. Nick, really great job with your rewrite of PEP 414. I think you nailed it from the technical side while bringing some much needed balance to the social side. Not to diminish Armin's contribution to the PEP - after all this, I'm really glad he was able to bring it up and despite the heat of the discussion, get this resolved to his satisfaction. One factual omission: In the section on WSGI "native strings", you say * binary data: handled as str in Python 2 and bytes in Python 3 While true, this omits that binary data can *also* be handled as bytes in Python 2.6 and 2.7, where using `bytes` can be a more descriptive alias for `str`. If you can do it in a readable way within the context of that section I think it's worth mentioning this. Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: not available URL: From chrism at plope.com Sun Mar 4 20:55:16 2012 From: chrism at plope.com (Chris McDonough) Date: Sun, 04 Mar 2012 14:55:16 -0500 Subject: [Python-Dev] PEP 414 updated In-Reply-To: References: Message-ID: <1330890916.8772.162.camel@thinko> On Sun, 2012-03-04 at 17:34 +1000, Nick Coghlan wrote: > My rewritten version of PEP 414 is now up > (http://www.python.org/dev/peps/pep-0414/). It describes in detail a > lot more of the historical background that was taken as read when > Guido accepted the PEP. 
""" Just as support for string exceptions was eliminated from Python 2 using the normal deprecation process, support for redundant string prefix characters (specifically, B, R, u, U) may be eventually eliminated from Python 3, regardless of the current acceptance of this PEP. """ We might need to clarify the feature's longevity. I take the above to mean that use of u'' and/or U'' won't emit a deprecation warning in 3.3. But that doesn't necessarily mean its usage won't emit a deprecation warning in 3.4 or 3.5 or 3.6, or whenever it "feels right"? Does that sound about right? - C From pje at telecommunity.com Sun Mar 4 20:35:30 2012 From: pje at telecommunity.com (PJ Eby) Date: Sun, 4 Mar 2012 14:35:30 -0500 Subject: [Python-Dev] PEP 414 In-Reply-To: References: <4F49434B.6050604@active-4.com> <4F4A10C1.6040806@pearwood.info> <4F4A29BD.2090607@active-4.com> <4F4BA4E0.80806@active-4.com> <4F4C0600.5010903@active-4.com> <87A20E5B-D624-4F32-BEE5-57A5C6D83339@gmail.com> <918CA06F-DFAE-4696-A824-D1559DD58010@gmail.com> <4F4FE650.8060402@active-4.com> <40C3F3BA-54E7-4B39-B3FB-20BEE65EB1D7@gmail.com> <4F4FFDC8.9020105@active-4.com> <5FBB9106-E12F-430B-BAA8-47C1289834E4@gmail.com> <4F5047CD.1040007@netwok.org> <20120302144107.1d6fb80b@resist.wooz.org> <1330719198.8772.137.camel@thinko> <20120302153951.670a42fe@resist.wooz.org> <1330721456.8772.148.camel@thinko> Message-ID: On Sat, Mar 3, 2012 at 5:02 AM, Lennart Regebro wrote: > I'm not sure that's true at all. In most cases where you support both > Python 2 and Python 3, most strings will be "native", ie, without > prefix in either Python 2 or Python 3. The native case is the most > common case. > Exactly. The reason "native strings" even exist as a concept in WSGI was to make it so that the idiomatic manipulation of header data in both Python 2 and 3 would use plain old string constants with no special wrappers or markings. What's thrown the monkey wrench in here for the WSGI case is the use of unicode_literals. 
If you simply skip using unicode_literals for WSGI code, you should be fine with a single 2/3 codebase. But then you need some way to mark some things as unicode... which is how we end up back at this PEP. I suppose WSGI could have gone the route of using byte strings for headers instead, but I'm not sure it would have helped. The design goals for PEP 3333 were to sanely support both 2to3 and 2+3 single codebases, and WSGI does actually do that... for the code that's actually doing WSGI stuff. Ironically enough, the effect of the WSGI API is that it's all the *non* WSGI-specific code in the same module that ends up needing to mark its strings as unicode... or else it has to use unicode_literals and mark all the WSGI code with str(). There's really no good way to deal with a *mixed* WSGI/non-WSGI module, except to use explicit markers on one side or the other. Perhaps the simplest solution of all might be to just isolate direct WSGI code in modules that don't import unicode_literals. Web frameworks usually hide WSGI stuff away from the user anyway, and many are already natively unicode in their app-facing APIs. So, if a framework or library encapsulates WSGI in a str-safe/unicode-friendly API, this really shouldn't be an issue for the library's users. But I suppose somebody's got to port the libraries first. ;-) If anyone's updating porting strategy stuff, a mention of this in the tips regarding unicode_literals would be a good idea. i.e., something like: "If you have 2.x modules which work with WSGI and also contain explicit u'' strings, you should not use unicode_literals unless you are willing to explicitly mark all WSGI environment and header strings as native strings using 'str()'. This is necessary because WSGI headers and environment keys/values are defined as byte strings in Python 2.x, and unicode strings in 3.x. 
Alternatively, you may continue to use u'' strings if you are targeting Python 3.3+ only, or can use the import or install hooks provided for Python 3.2, or if you are using 2to3... but in this case you should not use unicode_literals." That could probably be written a lot more clearly. ;-) -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin at v.loewis.de Sun Mar 4 21:35:36 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 04 Mar 2012 21:35:36 +0100 Subject: [Python-Dev] Defending against stack overflow (was Sandboxing Python) In-Reply-To: <4F535DA2.2030702@hotpy.org> References: <4F528F32.3060409@gmail.com> <4F535DA2.2030702@hotpy.org> Message-ID: <4F53D218.6090600@v.loewis.de> > So, how to handle stack overflows (of the C stack)? > To prevent a stack overflow an exception must be raised before > the VM runs out C stack. To do this we need 2 pieces of info: > a) How much stack we've used > b) How much stack is available. Python has already dedicated counters for stack depth, which just need proper updating and conservative values. I also think that we need to avoid allocating large arrays on the stack in recursive functions, and always heap-allocate such memory, to be stack-conservative. > I think it is a reasonable aim for 3.3 that Lib/test/crashers > should be empty. I agree. If you have patches to review, just put me on the nosy list. 
Regards, Martin From vinay_sajip at yahoo.co.uk Sun Mar 4 22:00:01 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Sun, 4 Mar 2012 21:00:01 +0000 (UTC) Subject: [Python-Dev] Install Hook [Was: Re: PEP 414 updated] References: <4F535F22.8030703@in.waw.pl> <4F537F90.60509@active-4.com> Message-ID: Armin Ronacher active-4.com> writes: > The current work in progress install time hook can be found here: > https://github.com/mitsuhiko/unicode-literals-pep/tree/master/install-hook I realise that the implementation is different, using tokenize rather than lib2to3, but in terms of its effect on the transformed code, what are the differences between this hook and running 2to3 with just the fix_unicode fixer? Regards, Vinay Sajip From greg.ewing at canterbury.ac.nz Sun Mar 4 22:14:31 2012 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Mon, 05 Mar 2012 10:14:31 +1300 Subject: [Python-Dev] slice subscripts for sequences and mappings In-Reply-To: References: <20120303132024.1dcba8c6@pitrou.net> Message-ID: <4F53DB37.1010407@canterbury.ac.nz> Thomas Wouters wrote: > Why even have separate tp_as_sequence and tp_as_mapping anymore? That > particular distinction never existed for Python types, so why should it > exist for C types at all? I forget if there was ever a real point to it, I imagine the original motivation was to provide a fast path for types that take ints as indexes. Also, it dates from the very beginnings of Python, before it had user defined classes. At that time the archetypal sequence (list) and the archetypal mapping (dict) were very distinct -- I don't think dicts supported 'in' then, so there was no overlap. It looks like a case of "it seemed like a good idea at the time". The distinction broke down fairly soon after, but it's so embedded in the extension module API that it's been very hard to get rid of. 
-- Greg From greg.ewing at canterbury.ac.nz Sun Mar 4 22:44:00 2012 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Mon, 05 Mar 2012 10:44:00 +1300 Subject: [Python-Dev] Sandboxing Python In-Reply-To: References: <4F528F32.3060409@gmail.com> Message-ID: <4F53E220.9040006@canterbury.ac.nz> Maciej Fijalkowski wrote: > Segfaults (most of them) can generally be made into arbitrary code > execution, Can you give an example of how this can be done? -- Greg From greg.ewing at canterbury.ac.nz Sun Mar 4 23:00:54 2012 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Mon, 05 Mar 2012 11:00:54 +1300 Subject: [Python-Dev] Sandboxing Python In-Reply-To: <4F53A789.4050004@hotpy.org> References: <4F528F32.3060409@gmail.com> <4F53A789.4050004@hotpy.org> Message-ID: <4F53E616.5070202@canterbury.ac.nz> Mark Shannon wrote: > You can't solve the too much time, without solving the halting problem, > but you can make sure all code is interruptable (i.e. Cntrl-C works). If you can arrange for Ctrl-C to interrupt the process cleanly, then (at least on Unix) you can arrange to receive a signal after a timeout and recover cleanly from that as well.. -- Greg From arigo at tunes.org Sun Mar 4 23:02:50 2012 From: arigo at tunes.org (Armin Rigo) Date: Sun, 4 Mar 2012 23:02:50 +0100 Subject: [Python-Dev] Sandboxing Python In-Reply-To: <4F53A789.4050004@hotpy.org> References: <4F528F32.3060409@gmail.com> <4F53A789.4050004@hotpy.org> Message-ID: Hi Mark, On Sun, Mar 4, 2012 at 18:34, Mark Shannon wrote: > I don't think it is as hard as all that. > All the crashers can be fixed, and with minimal effect on performance. I will assume that you don't mean just to fix the files in Lib/test/crashers, but to fix the general issues that each is a particular case for. I suppose there is no point in convincing you about my point of view, so I can just say "feel free and have fun". 
Armin From arigo at tunes.org Sun Mar 4 23:12:50 2012 From: arigo at tunes.org (Armin Rigo) Date: Sun, 4 Mar 2012 23:12:50 +0100 Subject: [Python-Dev] Sandboxing Python In-Reply-To: <4F53E220.9040006@canterbury.ac.nz> References: <4F528F32.3060409@gmail.com> <4F53E220.9040006@canterbury.ac.nz> Message-ID: Hi Greg, On Sun, Mar 4, 2012 at 22:44, Greg Ewing wrote: >> Segfaults (most of them) can generally be made into arbitrary code >> execution, > > Can you give an example of how this can be done? You should find tons of documented examples of various attacks. It's not easy, but it's possible. For example, let's assume we can decref an object to 0 before its last usage, at address x. All you need is the skills and luck to arrange that the memory at x becomes occupied by a new bigger string object allocated at "x - small_number". This is enough to control exactly all the bytes that are put at address x and following, just by choosing the characters of the string. For example the bytes can be built to make address x look like a built-in function object, which you can call --- which will call an arbitrarily chosen address in memory. This is enough to run arbitrary machine code and do anything. A bientôt, Armin. From arigo at tunes.org Sun Mar 4 23:15:24 2012 From: arigo at tunes.org (Armin Rigo) Date: Sun, 4 Mar 2012 23:15:24 +0100 Subject: [Python-Dev] Sandboxing Python In-Reply-To: <4F53A789.4050004@hotpy.org> References: <4F528F32.3060409@gmail.com> <4F53A789.4050004@hotpy.org> Message-ID: Hi Mark, On Sun, Mar 4, 2012 at 18:34, Mark Shannon wrote: > You can't solve the too much time, without solving the halting problem, Not sure what you mean by that. It seems to me that it's particularly easy to do in a roughly portable way, with alarm() for example on all UNIXes. A bientôt, Armin.
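The alarm() idea fits in a few lines (Unix-only; the helper and exception names are invented for this sketch):

```python
import signal

class Timeout(Exception):
    pass

def run_with_timeout(seconds, func):
    """Interrupt func() after `seconds` wall-clock seconds via SIGALRM.

    This does not decide whether the code would ever halt; it only
    enforces a local, application-chosen time budget, which is all a
    sandbox needs.
    """
    def _on_alarm(signum, frame):
        raise Timeout('time budget exceeded')
    previous = signal.signal(signal.SIGALRM, _on_alarm)
    signal.alarm(seconds)
    try:
        return func()
    finally:
        signal.alarm(0)                       # cancel a pending alarm
        signal.signal(signal.SIGALRM, previous)
```

One caveat: CPython only runs the Python-level handler between bytecodes, so a single long-running C-level call (such as the sum(xrange(1000000000)) case mentioned earlier) may finish before the Timeout is actually raised; pure-Python loops are interrupted promptly.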
From armin.ronacher at active-4.com Sun Mar 4 23:31:43 2012 From: armin.ronacher at active-4.com (Armin Ronacher) Date: Sun, 04 Mar 2012 22:31:43 +0000 Subject: [Python-Dev] Install Hook [Was: Re: PEP 414 updated] In-Reply-To: References: <4F535F22.8030703@in.waw.pl> <4F537F90.60509@active-4.com> Message-ID: <4F53ED4F.3050104@active-4.com> Hi, On 3/4/12 4:44 PM, Guido van Rossum wrote: > I'd love a pointer to the rubygems debacle... Setuptools worked because Python had .pth files for a long, long time. When the Ruby world started moving packages into nonstandard locations (GameName/) something needed to activate that import machinery hack. For a while all Ruby projects had the line "require 'rubygems'" somewhere in the project. Some libraries even shipped that line to bootstrap rubygems. I think an article about that should be found here: http://tomayko.com/writings/require-rubygems-antipattern But since the page errors out currently I don't know if that is the one I'm referring to. Considering such an import hook has to run over all imports because it would not know which to rewrite and which not I think it would be equally problematic, especially if libraries would magically activate that hook. Regards, Armin From armin.ronacher at active-4.com Sun Mar 4 23:44:30 2012 From: armin.ronacher at active-4.com (Armin Ronacher) Date: Sun, 04 Mar 2012 22:44:30 +0000 Subject: [Python-Dev] Install Hook [Was: Re: PEP 414 updated] In-Reply-To: References: <4F535F22.8030703@in.waw.pl> <4F537F90.60509@active-4.com> Message-ID: <4F53F04E.4050101@active-4.com> Hi, On 3/4/12 9:00 PM, Vinay Sajip wrote: > I realise that the implementation is different, using tokenize rather than > lib2to3, but in terms of its effect on the transformed code, what are the > differences between this hook and running 2to3 with just the fix_unicode fixer? I would hope they both have the same effect. Namely stripping the 'u' prefix in all variations. Why did I go with the tokenize approach? 
Because I never even considered a 2to3 solution. Part of the reason why I wrote this PEP was that 2to3 is so awfully slow and I was assuming that this would be largely based on the initial parsing step and not the fixers themselves. Why did I not time it with just the unicode fixer? Because if you look at how simple the tokenize version is you can see that this one did not take me more than a good minute and maybe 10 more for the distutils hooking. Regards, Armin From steve at pearwood.info Sun Mar 4 23:53:48 2012 From: steve at pearwood.info (Steven D'Aprano) Date: Mon, 05 Mar 2012 09:53:48 +1100 Subject: [Python-Dev] Sandboxing Python In-Reply-To: References: <4F528F32.3060409@gmail.com> <4F53A789.4050004@hotpy.org> Message-ID: <4F53F27C.3070404@pearwood.info> Armin Rigo wrote: > Hi Mark, > > On Sun, Mar 4, 2012 at 18:34, Mark Shannon wrote: >> You can't solve the too much time, without solving the halting problem, > > Not sure what you mean by that. It seems to me that it's particularly > easy to do in a roughly portable way, with alarm() for example on all > UNIXes. What time should you set the alarm for? How much time is enough before you decide that a piece of code is taking too long? The halting problem is not that you can't breaking out of an infinite loop, but that you can't *in general* decide when you are in an infinite loop. I think that Mark's point is that you can't, in general, tell when you are in a "too much time" attack (or bug) that needs to be broken out of rather than just a legitimately long calculation which will terminate if you wait just a little longer. 
-- Steven From vinay_sajip at yahoo.co.uk Mon Mar 5 00:04:54 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Sun, 4 Mar 2012 23:04:54 +0000 (UTC) Subject: [Python-Dev] Install Hook [Was: Re: PEP 414 updated] References: <4F535F22.8030703@in.waw.pl> <4F537F90.60509@active-4.com> <4F53ED4F.3050104@active-4.com> Message-ID: Armin Ronacher active-4.com> writes: > Considering such an import hook has to run over all imports because it > would not know which to rewrite and which not I think it would be > equally problematic, especially if libraries would magically activate > that hook. You could be right, but it sounds a little alarmist to me - "problematic" - "magical". For example, in the current implementation of uprefix, the hook does nothing for files in the stdlib, could be refined to be more intelligent about what to run on, etc. Plus, as Zbigniew pointed out in his post, ways could be found (e.g. via a context manager) to give users control of when the hook runs. I'm not sure your rubygems example is analogous - I would have thought the equivalent for Python would be to stick "import setuptools" everywhere, which is not an anti-pattern we lose sleep over, AFAIK. It's early days, but it seems reasonable to document in the usage of the hook that it is intended to be used in certain ways and not in others. IIRC Ryan's post was doing just that - telling people how the requiring of rubygems should work. AFAIK the approach hasn't been tried before, and was suggested by Nick (so I assume is not completely off the wall). My particular implementation might have holes in it (feedback welcome on any such, and I'll try to fix them) but I would think the approach could be given a chance in some realistic scenarios to see what problems emerge in practice, rather than trying to shoot it down before it's even got going. Regards, Vinay Sajip "It is not enough merely to win; others must lose." 
- Gore Vidal From martin at v.loewis.de Mon Mar 5 00:16:29 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 05 Mar 2012 00:16:29 +0100 Subject: [Python-Dev] Sandboxing Python In-Reply-To: <4F53F27C.3070404@pearwood.info> References: <4F528F32.3060409@gmail.com> <4F53A789.4050004@hotpy.org> <4F53F27C.3070404@pearwood.info> Message-ID: <4F53F7CD.1000300@v.loewis.de> Am 04.03.2012 23:53, schrieb Steven D'Aprano: > Armin Rigo wrote: >> Hi Mark, >> >> On Sun, Mar 4, 2012 at 18:34, Mark Shannon wrote: >>> You can't solve the too much time, without solving the halting problem, >> >> Not sure what you mean by that. It seems to me that it's particularly >> easy to do in a roughly portable way, with alarm() for example on all >> UNIXes. > > What time should you set the alarm for? How much time is enough before > you decide that a piece of code is taking too long? > > The halting problem is not that you can't breaking out of an infinite > loop, but that you can't *in general* decide when you are in an infinite > loop. > > I think that Mark's point is that you can't, in general, tell when you > are in a "too much time" attack (or bug) that needs to be broken out of > rather than just a legitimately long calculation which will terminate if > you wait just a little longer. This is getting off-topic, but you can *certainly* solve the "too much time" problem without solving the halting problem. The "too much time" problem typically has a subjective, local, application-specific specification. Therefore, the "too much time" problem is *easily* solved with timeouts. Too much is just too much, even if it would eventually complete with a useful result. I'd say that a single request should not take more than 20 seconds, else it's too much. It must be less than 2 seconds for interactive use, and less than 1s if you get more than 100 requests per second. If these numbers sound arbitrary to you: they are. They are still useful to me. 
Regards, Martin From vinay_sajip at yahoo.co.uk Mon Mar 5 00:19:29 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Sun, 4 Mar 2012 23:19:29 +0000 (UTC) Subject: [Python-Dev] Install Hook [Was: Re: PEP 414 updated] References: <4F535F22.8030703@in.waw.pl> <4F537F90.60509@active-4.com> <4F53F04E.4050101@active-4.com> Message-ID: Armin Ronacher active-4.com> writes: > I would hope they both have the same effect. Namely stripping the 'u' > prefix in all variations. Okay, that's all I was curious about. > Why did I go with the tokenize approach? Because I never even > considered a 2to3 solution. Part of the reason why I wrote this PEP was > that 2to3 is so awfully slow and I was assuming that this would be > largely based on the initial parsing step and not the fixers themselves. > Why did I not time it with just the unicode fixer? Because if you look > at how simple the tokenize version is you can see that this one did not > take me more than a good minute and maybe 10 more for the distutils hooking. You don't need to justify your approach - to me, anyway ;-) I suppose tokenize needed changing because of the grammar change, so it seems reasonable to put the changed version to work. I agree that 2to3 seems slow sometimes, but I can't say I've pinned it down as to exactly where the time is spent. I assumed that it was just because it seems to run over a lot of files each time, regardless of whether they've been changed since the last run or not. (I believe there might be ways of optimising that, but my understanding is that in the default/simple cases it runs over everything.) I factored out the transformation step in my hook into a method, so I should be able to swap out the lib2to3 approach with a tokenize approach without too much work, should that prove necessary or desirable. 
Regards, Vinay Sajip From fijall at gmail.com Mon Mar 5 04:45:50 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sun, 4 Mar 2012 19:45:50 -0800 Subject: [Python-Dev] Defending against stack overflow (was Sandboxing Python) In-Reply-To: <4F53D218.6090600@v.loewis.de> References: <4F528F32.3060409@gmail.com> <4F535DA2.2030702@hotpy.org> <4F53D218.6090600@v.loewis.de> Message-ID: On Sun, Mar 4, 2012 at 12:35 PM, "Martin v. Löwis" wrote: >> So, how to handle stack overflows (of the C stack)? >> To prevent a stack overflow an exception must be raised before >> the VM runs out of C stack. To do this we need 2 pieces of info: >> a) How much stack we've used >> b) How much stack is available. > > Python has already dedicated counters for stack depth, which just need > proper updating and conservative values. I also think that we need to > avoid allocating large arrays on the stack in recursive functions, and > always heap-allocate such memory, to be stack-conservative. > >> I think it is a reasonable aim for 3.3 that Lib/test/crashers >> should be empty. > > I agree. If you have patches to review, just put me on the nosy list. > > Regards, > Martin Maybe as a point of reference: PyPy, with the interpreter being largely modelled after CPython, automatically inserts about 750 checks for stack exhaustion. CPython has about 15 checks so far, so I suggest that looking at all the places where PyPy inserts such checks would be a useful start. Cheers, fijal From georg at python.org Mon Mar 5 08:54:06 2012 From: georg at python.org (Georg Brandl) Date: Mon, 05 Mar 2012 08:54:06 +0100 Subject: [Python-Dev] [RELEASED] Python 3.3.0 alpha 1 Message-ID: <4F54711E.2020006@python.org> On behalf of the Python development team, I'm happy to announce the first alpha release of Python 3.3.0. This is a preview release, and its use is not recommended in production settings. Python 3.3 includes a range of improvements of the 3.x series, as well as easier porting between 2.x and 3.x.
Major new features in the 3.3 release series are: * PEP 380, Syntax for Delegating to a Subgenerator ("yield from") * PEP 393, Flexible String Representation (doing away with the distinction between "wide" and "narrow" Unicode builds) * PEP 409, Suppressing Exception Context * PEP 3151, Reworking the OS and IO exception hierarchy * The new "packaging" module, building upon the "distribute" and "distutils2" projects and deprecating "distutils" * The new "lzma" module with LZMA/XZ support * PEP 3155, Qualified name for classes and functions * PEP 414, explicit Unicode literals to help with porting * The new "faulthandler" module that helps diagnosing crashes * Wrappers for many more POSIX functions in the "os" and "signal" modules, as well as other useful functions such as "sendfile()" For a more extensive list of changes in 3.3.0, see http://docs.python.org/3.3/whatsnew/3.3.html To download Python 3.3.0 visit: http://www.python.org/download/releases/3.3.0/ Please consider trying Python 3.3.0a1 with your code and reporting any bugs you may notice to: http://bugs.python.org/ Enjoy! -- Georg Brandl, Release Manager georg at python.org (on behalf of the entire python-dev team and 3.3's contributors) From mark at hotpy.org Mon Mar 5 09:51:47 2012 From: mark at hotpy.org (Mark Shannon) Date: Mon, 05 Mar 2012 08:51:47 +0000 Subject: [Python-Dev] Remove f_yieldfrom attribute from frameobject In-Reply-To: <4F54711E.2020006@python.org> References: <4F54711E.2020006@python.org> Message-ID: <4F547EA3.10703@hotpy.org> Could we remove the f_yieldfrom attribute from frameobject (at the Python level) before it is too late and we are stuck with it. Issue (with patch) here: http://bugs.python.org/issue13970 Cheers, Mark. 
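[Editorial note: for readers who have not followed PEP 380 (the feature behind the f_yieldfrom attribute discussed above), subgenerator delegation can be sketched as follows. The example is hypothetical, not taken from the announcement or from Mark's patch.]

```python
# Sketch of PEP 380's "yield from": outer() delegates iteration to
# inner(), and inner()'s return value becomes the value of the
# yield-from expression (carried on StopIteration.value).

def inner():
    yield 1
    yield 2
    return "done"            # surfaces as StopIteration.value

def outer():
    result = yield from inner()   # delegates; resumes with "done"
    yield result

values = list(outer())
print(values)  # [1, 2, 'done']
```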
From victor.stinner at gmail.com Mon Mar 5 10:09:40 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Mon, 5 Mar 2012 10:09:40 +0100 Subject: [Python-Dev] Sandboxing Python In-Reply-To: <4F53F27C.3070404@pearwood.info> References: <4F528F32.3060409@gmail.com> <4F53A789.4050004@hotpy.org> <4F53F27C.3070404@pearwood.info> Message-ID: >>> You can't solve the too much time, without solving the halting problem, >> >> Not sure what you mean by that. It seems to me that it's particularly >> easy to do in a roughly portable way, with alarm() for example on all >> UNIXes. > > What time should you set the alarm for? How much time is enough before you > decide that a piece of code is taking too long? pysandbox uses SIGALRM with a timeout of 5 seconds by default. You can change this timeout or disable it completely. pysandbox doesn't provide a function to limit the memory yet, you have to do it manually. It's not automatic because there is no portable way to implement such a limit and it's difficult to configure it. For my IRC bot using pysandbox, setrlimit() is used with RLIMIT_AS. Victor From mark at hotpy.org Mon Mar 5 13:41:58 2012 From: mark at hotpy.org (Mark Shannon) Date: Mon, 05 Mar 2012 12:41:58 +0000 Subject: [Python-Dev] Exceptions in comparison operators Message-ID: <4F54B496.3030905@hotpy.org> Comparing two objects (of the same type for simplicity) involves a three stage lookup:
The class has the operator: C.__eq__
It can be applied to the operand (descriptor protocol): C().__eq__
and it produces a result: C().__eq__(C())
Exceptions can be raised in all 3 phases, but an exception in the first phase is not really an error; it just says the operation is not supported. E.g.

class C: pass

C() == C() is False, rather than raising an Exception.
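[Editorial note: Mark's first-phase point is easy to verify. With no user-defined __eq__ anywhere, the comparison machinery falls back to identity instead of raising; a minimal sketch:]

```python
# Phase 1: the class defines no __eq__ of its own.
# The comparison does not raise; it falls back to identity.

class C:
    pass

a, b = C(), C()

print(a == b)  # False: distinct instances
print(a == a)  # True: same object
print(a != b)  # True: __ne__ falls back the same way
```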
If an exception is raised in the 3rd stage, then it is propagated, as follows:

class C:
    def __eq__(self, other):
        raise Exception("I'm incomparable")

C() == C() raises an exception.

However, if an exception is raised in the second phase (descriptor) then it is silenced:

def no_eq(self):
    raise Exception("I'm incomparable")

class C:
    __eq__ = property(no_eq)

C() == C() is False. But should it raise an exception?

The behaviour for arithmetic is different.

def no_add(self):
    raise Exception("I don't add up")

class C:
    __add__ = property(no_add)

C() + C() raises an exception.

So what is the "correct" behaviour? It is my opinion that comparisons should behave like arithmetic and raise an exception. Cheers, Mark From ned at nedbatchelder.com Mon Mar 5 14:27:08 2012 From: ned at nedbatchelder.com (Ned Batchelder) Date: Mon, 05 Mar 2012 08:27:08 -0500 Subject: [Python-Dev] [RELEASED] Python 3.3.0 alpha 1 In-Reply-To: <4F54711E.2020006@python.org> References: <4F54711E.2020006@python.org> Message-ID: <4F54BF2C.7000107@nedbatchelder.com> On 3/5/2012 2:54 AM, Georg Brandl wrote: > On behalf of the Python development team, I'm happy to announce the > first alpha release of Python 3.3.0. > > This is a preview release, and its use is not recommended in > production settings. > > Python 3.3 includes a range of improvements of the 3.x series, as well > as easier > porting between 2.x and 3.x.
Major new features in the 3.3 release > series are: > > * PEP 380, Syntax for Delegating to a Subgenerator ("yield from") > * PEP 393, Flexible String Representation (doing away with the > distinction between "wide" and "narrow" Unicode builds) > * PEP 409, Suppressing Exception Context > * PEP 3151, Reworking the OS and IO exception hierarchy > * The new "packaging" module, building upon the "distribute" and > "distutils2" projects and deprecating "distutils" > * The new "lzma" module with LZMA/XZ support > * PEP 3155, Qualified name for classes and functions > * PEP 414, explicit Unicode literals to help with porting > * The new "faulthandler" module that helps diagnosing crashes > * Wrappers for many more POSIX functions in the "os" and "signal" > modules, as well as other useful functions such as "sendfile()" > > For a more extensive list of changes in 3.3.0, see > > http://docs.python.org/3.3/whatsnew/3.3.html > The 3.3 whatsnew page doesn't seem to mention PEP 414 or Unicode literals at all. --Ned. > To download Python 3.3.0 visit: > > http://www.python.org/download/releases/3.3.0/ > > Please consider trying Python 3.3.0a1 with your code and reporting any > bugs you may notice to: > > http://bugs.python.org/ > > > Enjoy! > > -- > Georg Brandl, Release Manager > georg at python.org > (on behalf of the entire python-dev team and 3.3's contributors) > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/ned%40nedbatchelder.com > From merwok at netwok.org Mon Mar 5 14:59:31 2012 From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=) Date: Mon, 05 Mar 2012 14:59:31 +0100 Subject: [Python-Dev] Why does Mac OS X python share site-packages with apple python?
In-Reply-To: References: <5A0E2490-A743-4729-A752-D94524EA9840@barrys-emacs.org> Message-ID: <4F54C6C3.9040401@netwok.org> Hi, On 03/03/2012 22:57, Ned Deily wrote: > The python.org OS X Pythons (and built-from-source framework builds) add > the Apple-specific directory to the search path in order to allow > sharing of installed third-party packages between the two. The interesting thing to me here is that Ned's decision to allow sharing some installed distributions/packages on Mac OS X is (IIUC) diametrically opposed to the one made by Canonical developers when they invented the dist-packages directory for Debian and Ubuntu to prevent breaking the system Python by installing a distribution/package with a python.org/built-from-source Python installed under /usr/local. (On that note, there is still time to land http://bugs.python.org/issue1298835 "Adding a vendor-packages directory" into 3.3.) Regards From merwok at netwok.org Mon Mar 5 16:55:31 2012 From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=) Date: Mon, 05 Mar 2012 16:55:31 +0100 Subject: [Python-Dev] Misc/NEWS in 2.7 and 3.2 Message-ID: <4F54E1F3.10108@netwok.org> Hi, I noticed that the top-level section in Misc/NEWS (i.e. the section where we add entries) for 3.3 is for 3.3.0a2 (the next release), but in 2.7 and 3.2 we're still adding entries to the sections corresponding to the last RCs. Will the RMs move things when they merge back their release clones, or should we start new sections and move the latest entries there? Cheers From nad at acm.org Mon Mar 5 18:44:41 2012 From: nad at acm.org (Ned Deily) Date: Mon, 05 Mar 2012 09:44:41 -0800 Subject: [Python-Dev] Why does Mac OS X python share site-packages with apple python?
References: <5A0E2490-A743-4729-A752-D94524EA9840@barrys-emacs.org> <4F54C6C3.9040401@netwok.org> Message-ID: In article <4F54C6C3.9040401 at netwok.org>, Éric Araujo wrote: > On 03/03/2012 22:57, Ned Deily wrote: > > The python.org OS X Pythons (and built-from-source framework builds) add > > the Apple-specific directory to the search path in order to allow > > sharing of installed third-party packages between the two. > The interesting thing to me here is that Ned's decision to allow sharing > some installed distributions/packages on Mac OS X is (IIUC) > diametrically opposed to the one made by Canonical developers when they > invented the dist-packages directory for Debian and Ubuntu to prevent > breaking the system Python by installing a distribution/package with a > python.org/built-from-source Python installed under /usr/local. Just to be clear, it wasn't my decision; this feature was added before I was a core developer. In any case, this is the opposite case: the system Python is not affected by this feature. It affects user-installed framework-build Pythons, such as those provided by python.org installers, allowing them to share distributions explicitly installed by the user with the system Pythons. It also does not share 3rd-party distributions included by Apple with the system Pythons. I'm +0 on it myself. -- Ned Deily, nad at acm.org From nad at acm.org Mon Mar 5 19:03:05 2012 From: nad at acm.org (Ned Deily) Date: Mon, 05 Mar 2012 10:03:05 -0800 Subject: [Python-Dev] Why does Mac OS X python share site-packages with apple python? References: <5A0E2490-A743-4729-A752-D94524EA9840@barrys-emacs.org> <4F54C6C3.9040401@netwok.org> Message-ID: [edited for clarity] In article , Ned Deily wrote: > [...] It affects > user-installed framework-build Pythons, such as those provided by > python.org installers, allowing [the user-installed Pythons] to [use] > distributions that [were] explicitly > installed by the user [into] the system Pythons.
-- Ned Deily, nad at acm.org From guido at python.org Mon Mar 5 19:41:50 2012 From: guido at python.org (Guido van Rossum) Date: Mon, 5 Mar 2012 10:41:50 -0800 Subject: [Python-Dev] Exceptions in comparison operators In-Reply-To: <4F54B496.3030905@hotpy.org> References: <4F54B496.3030905@hotpy.org> Message-ID: On Mon, Mar 5, 2012 at 4:41 AM, Mark Shannon wrote:
> Comparing two objects (of the same type for simplicity)
> involves a three stage lookup:
> The class has the operator: C.__eq__
> It can be applied to the operand (descriptor protocol): C().__eq__
> and it produces a result: C().__eq__(C())
>
> Exceptions can be raised in all 3 phases,
> but an exception in the first phase is not really an error;
> it just says the operation is not supported.
> E.g.
>
> class C: pass
>
> C() == C() is False, rather than raising an Exception.
>
> If an exception is raised in the 3rd stage, then it is propagated,
> as follows:
>
> class C:
>     def __eq__(self, other):
>         raise Exception("I'm incomparable")
>
> C() == C() raises an exception
>
> However, if an exception is raised in the second phase (descriptor)
> then it is silenced:
>
> def no_eq(self):
>     raise Exception("I'm incomparable")
>
> class C:
>     __eq__ = property(no_eq)
>
> C() == C() is False.
>
> But should it raise an exception?
>
> The behaviour for arithmetic is different.
>
> def no_add(self):
>     raise Exception("I don't add up")
>
> class C:
>     __add__ = property(no_add)
>
> C() + C() raises an exception.
>
> So what is the "correct" behaviour?
> It is my opinion that comparisons should behave like arithmetic
> and raise an exception.

I think you're probably right. This is one of those edge cases that are so rare (and always considered a bug in the user code) that we didn't define carefully what should happen.
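[Editorial note: the behaviours under discussion can be probed directly. In the sketch below (hypothetical class names), only the third-stage case is relied on; the second-stage (descriptor) case is exactly the undefined corner being debated, and its behaviour has varied between interpreter versions, so the sketch merely prints whatever the running interpreter does.]

```python
# Stage 3: an __eq__ that raises reliably propagates the exception.
class Stage3:
    def __eq__(self, other):
        raise RuntimeError("I'm incomparable")

try:
    Stage3() == Stage3()
    stage3_raised = False
except RuntimeError:
    stage3_raised = True

print("stage 3 propagates:", stage3_raised)  # True

# Stage 2: the descriptor itself raises during the __eq__ lookup.
# Depending on the interpreter, this is either silenced (comparison
# falls back and returns False) or propagated, so it is only probed.
def no_eq(self):
    raise RuntimeError("I'm incomparable")

class Stage2:
    __eq__ = property(no_eq)

try:
    result = Stage2() == Stage2()
    print("stage 2 silenced; comparison returned", result)
except RuntimeError:
    print("stage 2 propagated the exception")
```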
There are probably some implementation-specific reasons why it was done this way (comparisons use a very different code path from regular binary operators) but that doesn't sound like a very good reason. OTOH there *is* a difference: as you say, C() == C() is False when the class doesn't define __eq__, whereas C() + C() raises an exception if it doesn't define __add__. Still, this is more likely to have favored the wrong outcome for (2) by accident than by design. You'll have to dig through the CPython implementation and find out exactly what code needs to be changed before I could be sure though -- sometimes seeing the code jogs my memory. But I think of x==y as roughly equivalent to

r = NotImplemented
if hasattr(x, '__eq__'):
    r = x.__eq__(y)
if r is NotImplemented and hasattr(y, '__eq__'):
    r = y.__eq__(x)
if r is NotImplemented:
    r = False

which would certainly suggest that (2) should raise an exception. A possibility is that the code looking for the __eq__ attribute suppresses *all* exceptions instead of just AttributeError. If you change no_eq() to return 42, for example, the comparison raises the much more reasonable TypeError: 'int' object is not callable. -- --Guido van Rossum (python.org/~guido) From victor.stinner at gmail.com Mon Mar 5 22:16:24 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Mon, 5 Mar 2012 22:16:24 +0100 Subject: [Python-Dev] Sandboxing Python In-Reply-To: <4F549709.4070705@gmail.com> References: <4F528F32.3060409@gmail.com> <4F53A789.4050004@hotpy.org> <4F53F27C.3070404@pearwood.info> <4F549709.4070705@gmail.com> Message-ID: 2012/3/5 Serhiy Storchaka : > 05.03.12 11:09, Victor Stinner wrote: > >> pysandbox uses SIGALRM with a timeout of 5 seconds by default. You can >> change this timeout or disable it completely. >> >> pysandbox doesn't provide a function to limit the memory yet, you have >> to do it manually.
It's not automatic because there is no portable way >> to implement such a limit and it's difficult to configure it. For my IRC >> bot using pysandbox, setrlimit() is used with RLIMIT_AS. > > > But it does not work for extensive C-calculations. `sum(xrange(1000000000))` > runs 2.5 minutes on my computer instead of 5 seconds, and `map(sum, > [xrange(1000000000)] * 1000000)` -- almost infinity time. pysandbox doesn't > provide a reliable time limit too, it is also necessary to mention. Ah yes, I realized that SIGALRM is handled by the C signal handler, but Python only handles the signal later. sum() doesn't call PyErr_CheckSignals() to check for pending signals. Applying the timeout would require modifying the sum() function. A more generic solution would be to use a subprocess. Victor From greg.ewing at canterbury.ac.nz Mon Mar 5 22:21:12 2012 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Tue, 06 Mar 2012 10:21:12 +1300 Subject: [Python-Dev] Sandboxing Python In-Reply-To: References: <4F528F32.3060409@gmail.com> <4F53E220.9040006@canterbury.ac.nz> Message-ID: <4F552E48.8040008@canterbury.ac.nz> Armin Rigo wrote: > For example, let's assume we can decref > an object to 0 before its last usage, at address x. All you need is > the skills and luck to arrange that the memory at x becomes occupied > by a new bigger string object allocated at "x - small_number". That's a lot of assumptions. When you claimed that *any* segfault bug could be turned into an arbitrary-code exploit, it sounded like you had a provably general procedure in mind for doing so, but it seems not. In any case, I think Victor is right to object to his sandbox being shot down on such grounds. The same thing equally applies to any method of sandboxing any computation, whether it involves Python or not. Even if you fork a separate process running code written in Befunge, it could be prone to this kind of attack if there is a bug in it.
What you seem to be saying is "Python cannot be sandboxed, because any code can have bugs." Or, "Nothing is ever 100% secure, because the universe is not perfect." Which is true, but not in a very interesting way. -- Greg From solipsis at pitrou.net Mon Mar 5 22:27:35 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 5 Mar 2012 22:27:35 +0100 Subject: [Python-Dev] Sandboxing Python References: <4F528F32.3060409@gmail.com> <4F53E220.9040006@canterbury.ac.nz> <4F552E48.8040008@canterbury.ac.nz> Message-ID: <20120305222735.4c3a65a1@pitrou.net> On Tue, 06 Mar 2012 10:21:12 +1300 Greg Ewing wrote: > > What you seem to be saying is "Python cannot be sandboxed, > because any code can have bugs." Or, "Nothing is ever 100% secure, > because the universe is not perfect." Which is true, but not in > a very interesting way. There is a difference between bugs and known bugs, though. Regards Antoine. From guido at python.org Mon Mar 5 22:47:59 2012 From: guido at python.org (Guido van Rossum) Date: Mon, 5 Mar 2012 13:47:59 -0800 Subject: [Python-Dev] Sandboxing Python In-Reply-To: References: <4F528F32.3060409@gmail.com> <4F53A789.4050004@hotpy.org> <4F53F27C.3070404@pearwood.info> <4F549709.4070705@gmail.com> Message-ID: On Mon, Mar 5, 2012 at 1:16 PM, Victor Stinner wrote: > 2012/3/5 Serhiy Storchaka : >> 05.03.12 11:09, Victor Stinner wrote: >> >>> pysandbox uses SIGALRM with a timeout of 5 seconds by default. You can >>> change this timeout or disable it completely. >>> >>> pysandbox doesn't provide a function to limit the memory yet, you have >>> to do it manually. It's not automatic because there is no portable way >>> to implement such limit and it's difficult to configure it. For my IRC >>> bot using pysandbox, setrlimit() is used with RLIMIT_AS. >> >> >> But it does not work for extensive C-calculations.
`sum(xrange(1000000000))` >> runs 2.5 minutes on my computer instead of 5 seconds, and `map(sum, >> [xrange(1000000000)] * 1000000)` -- almost infinity time. pysandbox doesn't >> provide a reliable time limit too, it is also necessary to mention. > > Ah yes, I realized that SIGALRM is handled by the C signal handler, > but Python only handles the signal later. sum() doesn't call > PyErr_CheckSignals() to check for pending signals. Just forbid the sandboxed code from using the signal module, and set the signal to the default action (abort). > Apply the timeout would require to modify the sum() function. A more > generic solution would be to use a subprocess. Maybe it would make more sense to add such a test to xrange()? (Maybe not every iteration but every 10 or 100 iterations.) -- --Guido van Rossum (python.org/~guido) From victor.stinner at gmail.com Mon Mar 5 23:11:31 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Mon, 5 Mar 2012 23:11:31 +0100 Subject: [Python-Dev] Sandboxing Python In-Reply-To: References: Message-ID: >>> I challenge anyone to break pysandbox! I would be happy if anyone >>> breaks it because it would make it stronger. > > I tried to run the files from Lib/test/crashers and --- kind of > obviously --- I found at least two of them that still segfault execfile.py, sometimes with minor edits and sometimes directly, on CPython 2.7. Most crashers don't crash pysandbox because they use features blocked by pysandbox, like the gc module. Others fail with a timeout.
3 tests are crashing pysandbox:

- modify a dict during a dict lookup: I proposed two different fixes in issue #14205
- type MRO changed during a type lookup (modify __bases__ during the lookup): I proposed a fix in issue #14199 (keep a reference to the MRO during the lookup)
- stack overflow because of a compiler recursion: we should limit the depth in the compiler (I didn't write a patch yet)

pysandbox should probably hide the __bases__ special attribute, or at least make it read-only. > If you are not concerned about segfaults but only real attacks, then > fine, I will not spend the hours necessary to turn the segfault into a > real attack :-) It's possible to fix these crashers. In my experience, Python is very stable and has few crashers in the core language (e.g. compared to PHP). But I agree that it would be safer to run the untrusted code in a subprocess, by design. Running the code in a subprocess may be an option to provide a higher level of security. Using a subprocess allows reusing OS protections. Victor From victor.stinner at gmail.com Mon Mar 5 23:24:02 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Mon, 5 Mar 2012 23:24:02 +0100 Subject: [Python-Dev] Sandboxing Python In-Reply-To: References: <4F528F32.3060409@gmail.com> <4F53A789.4050004@hotpy.org> <4F53F27C.3070404@pearwood.info> <4F549709.4070705@gmail.com> Message-ID: > Just forbid the sandboxed code from using the signal module, and set > the signal to the default action (abort). Ah yes, good idea. It may be an option because depending on the use case, failing with abort is not always the best option. The signal module is not allowed by the default policy.
pysandbox may replace some functions by functions checking regulary the timeout to raise a Python exception instead of aborting the process. Victor From storchaka at gmail.com Mon Mar 5 23:26:05 2012 From: storchaka at gmail.com (Serhiy Storchaka) Date: Tue, 06 Mar 2012 00:26:05 +0200 Subject: [Python-Dev] Sandboxing Python In-Reply-To: References: <4F528F32.3060409@gmail.com> <4F53A789.4050004@hotpy.org> <4F53F27C.3070404@pearwood.info> <4F549709.4070705@gmail.com> Message-ID: <4F553D7D.6020809@gmail.com> 05.03.12 23:16, Victor Stinner ???????(??): > Apply the timeout would require to modify the sum() function. sum() is just one, simple, example. Any C code could potentially run long enough. Another example is the recently discussed hashtable vulnerability: class badhash: __hash__ = int(42).__hash__ set([badhash() for _ in range(100000)]) > A more generic solution would be to use a subprocess. Yes, it's the only way to secure implement the sandbox. From storchaka at gmail.com Mon Mar 5 23:33:55 2012 From: storchaka at gmail.com (Serhiy Storchaka) Date: Tue, 06 Mar 2012 00:33:55 +0200 Subject: [Python-Dev] Sandboxing Python In-Reply-To: References: <4F528F32.3060409@gmail.com> <4F53A789.4050004@hotpy.org> <4F53F27C.3070404@pearwood.info> <4F549709.4070705@gmail.com> Message-ID: 05.03.12 23:47, Guido van Rossum ???????(??): > Maybe it would make more sense to add such a test to xrange()? (Maybe > not every iteration but every 10 or 100 iterations.) `sum([10**1000000]*1000000)` leads to same effect. 
From fijall at gmail.com Tue Mar 6 00:08:38 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 5 Mar 2012 15:08:38 -0800 Subject: [Python-Dev] Sandboxing Python In-Reply-To: <4F552E48.8040008@canterbury.ac.nz> References: <4F528F32.3060409@gmail.com> <4F53E220.9040006@canterbury.ac.nz> <4F552E48.8040008@canterbury.ac.nz> Message-ID: On Mon, Mar 5, 2012 at 1:21 PM, Greg Ewing wrote: > Armin Rigo wrote: >> >> For example, let's assume we can decref >> an object to 0 before its last usage, at address x. All you need is >> the skills and luck to arrange that the memory at x becomes occupied >> by a new bigger string object allocated at "x - small_number". > > > That's a lot of assumptions. When you claimed that *any* segfault > bug could be turned into an arbitrary-code exploit, it sounded > like you had a provably general procedure in mind for doing so, > but it seems not. > > In any case, I think Victor is right to object to his sandbox > being shot down on such grounds. The same thing equally applies > to any method of sandboxing any computation, whether it involves > Python or not. Even if you fork a separate process running code > written in Befunge, it could be prone to this kind of attack if > there is a bug in it. > > What you seem to be saying is "Python cannot be sandboxed, > because any code can have bugs." Or, "Nothing is ever 100% secure, > because the universe is not perfect." Which is true, but not in > a very interesting way. Not all segfaults are exploitable, but most of them are. Some are super trivial (like the one Armin explained, which takes a skilled person a few hours), while others are only research-paper-style proofs of concept (double free is an example). I strongly disagree that the sandbox is secure because it's "just segfaults" and "any code is exploitable that way". Finding segfaults in CPython is "easy". As in, all you need is Armin, a bit of coffee and a free day.
Reasons for this vary, but one of them is that Python is a large code base that does not have automatic ways of preventing issues like C-level recursion. For comparison, the PyPy sandbox is a program compiled from a higher-level language, which by design does not have the sorts of problems described. The amount of code you need to carefully review is very minimal (as compared to the entire CPython interpreter). It does not mean it has no bugs, but it does mean finding segfaults is a significantly harder endeavour. There are no bug-free programs; however, segfaulting an interpreter *written* in Python would, for example, be significantly harder than segfaulting one written in C, wouldn't it? Cheers, fijal From victor.stinner at gmail.com Tue Mar 6 00:36:14 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 6 Mar 2012 00:36:14 +0100 Subject: [Python-Dev] Sandboxing Python In-Reply-To: References: <4F528F32.3060409@gmail.com> <4F53E220.9040006@canterbury.ac.nz> <4F552E48.8040008@canterbury.ac.nz> Message-ID: > For a comparison, PyPy sandbox is a compiled from higher-level > language program that by design does not have all sorts of problems > described. The amount of code you need to carefully review is very > minimal (as compared to the entire CPython interpreter). It does not > mean it has no bugs, but it does mean finding segfaults is a > significantly harder endeavour. There are no bug-free programs, > however having for example to segfault an arbitrary interpreter > *written* in Python would be significantly harder than one in C, > wouldn't it? I agree that the PyPy sandbox design looks better... but some people are still using CPython and some of them need security. That's why there are projects like zope.security, RestrictedPython and others. Security was not included in CPython's design. Python is a highly dynamic language, which makes the situation worse. I would like to improve CPython security.
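[Editorial note: the frozendict Victor proposes in this thread is a C-level builtin; a rough pure-Python approximation of the intended read-only behaviour (hypothetical code, not the patch under discussion) might look like this.]

```python
# A read-only, hashable mapping: no __setitem__/__delitem__ exists,
# so mutation attempts fail with TypeError.

from collections.abc import Mapping

class FrozenDict(Mapping):
    def __init__(self, *args, **kwargs):
        self._data = dict(*args, **kwargs)

    def __getitem__(self, key):
        return self._data[key]

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)

    def __hash__(self):
        # Hashable only if all values are hashable, like tuple.
        return hash(frozenset(self._data.items()))

fd = FrozenDict(a=1, b=2)
print(fd["a"], len(fd))  # 1 2
# fd["c"] = 3 raises TypeError: item assignment is not defined.
```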
pysandbox is maybe not perfect, and it may only be a first step to improve security. Even if pysandbox has issues, having a frozendict type would help to secure applications. For example, it can be used later for __builtins__ or to build read-only types. I agree that each bug, especially a segfault, may lead to exploitable vulnerabilities, but it doesn't mean that we should not consider hardening Python because of these bugs. Even if PHP is known for its lack of security and its broken safe_mode, people use it and run it on web servers accessible to anyone on the Internet. There are also projects to harden PHP. For example: http://www.hardened-php.net/suhosin/ The Suhosin patch doesn't avoid the possibility of segfaults, but it makes them harder to exploit. I proposed to start with a frozendict because I consider that it is not only useful for security, and the patch to add the type is not intrusive. Other changes to use the patch can be discussed later, except if you consider that related changes (__builtins__ and read-only type) should be discussed to decide if a frozendict is required or not. Victor From martin at v.loewis.de Tue Mar 6 00:40:05 2012 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Tue, 06 Mar 2012 00:40:05 +0100 Subject: [Python-Dev] Sandboxing Python In-Reply-To: References: <4F528F32.3060409@gmail.com> <4F53E220.9040006@canterbury.ac.nz> <4F552E48.8040008@canterbury.ac.nz> Message-ID: <4F554ED5.7040006@v.loewis.de> > I strongly disagree that sandbox is secure because it's "just > segfaults" and "any code is exploitable that way". Finding segfaults > in CPython is "easy". As in all you need is Armin, a bit of coffee and > a free day. Reasons for this vary, but one of those is that python is > a large code base that does not have automatic ways of preventing such > issues like C-level recursion.
> > For a comparison, PyPy sandbox is a program compiled from a > higher-level language that by design does not have all sorts of the problems > described. The amount of code you need to carefully review is very > minimal (as compared to the entire CPython interpreter). It does not > mean it has no bugs, but it does mean finding segfaults is a > significantly harder endeavour. There are no bug-free programs; > however, segfaulting an arbitrary interpreter > *written* in Python would, for example, be significantly harder than one written in C, > wouldn't it? While this may be true, I can't conclude that we should stop fixing crashers in CPython, or give up developing CPython altogether. While it is a large code base, it is also a code base that will be around for a long time to come, so any effort spent on this today will pay off in the years to come. Regards, Martin From fijall at gmail.com Tue Mar 6 00:49:03 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 5 Mar 2012 15:49:03 -0800 Subject: [Python-Dev] Sandboxing Python In-Reply-To: <4F554ED5.7040006@v.loewis.de> References: <4F528F32.3060409@gmail.com> <4F53E220.9040006@canterbury.ac.nz> <4F552E48.8040008@canterbury.ac.nz> <4F554ED5.7040006@v.loewis.de> Message-ID: On Mon, Mar 5, 2012 at 3:40 PM, "Martin v. Löwis" wrote: >> I strongly disagree that sandbox is secure because it's "just >> segfaults" and "any code is exploitable that way". Finding segfaults >> in CPython is "easy". As in all you need is armin, a bit of coffee and >> a free day. Reasons for this vary, but one of those is that Python is >> a large code base that does not have automatic ways of preventing >> issues such as C-level recursion. >> >> For a comparison, PyPy sandbox is a program compiled from a higher-level >> language that by design does not have all sorts of the problems >> described. The amount of code you need to carefully review is very >> minimal (as compared to the entire CPython interpreter).
It does not >> mean it has no bugs, but it does mean finding segfaults is a >> significantly harder endeavour. There are no bug-free programs; >> however, segfaulting an arbitrary interpreter >> *written* in Python would, for example, be significantly harder than one written in C, >> wouldn't it? > > While this may be true, I can't conclude that we should stop fixing > crashers in CPython, or give up developing CPython altogether. While > it is a large code base, it is also a code base that will be around > for a long time to come, so any effort spent on this today will pay > off in the years to come. > > Regards, > Martin I did not say that, Martin. PyPy sandbox does not come without issues, albeit they are not of the security-related kind. My point is that it does not make sense to add stuff to CPython to make sandboxing on top of Python easier *while* there are still easily accessible segfaults. Fixing those issues should be a priority *before* we actually start to tinker with other layers. All I'm trying to say is "if you want to make a sandbox on top of CPython, you have to fix segfaults". Cheers, fijal From eliben at gmail.com Tue Mar 6 08:03:45 2012 From: eliben at gmail.com (Eli Bendersky) Date: Tue, 6 Mar 2012 09:03:45 +0200 Subject: [Python-Dev] [Python-checkins] cpython: Fix a comment: PySequence_Fast() creates a list, not a tuple. In-Reply-To: References: Message-ID: This fix should be applied to the documentation as well. On Tue, Mar 6, 2012 at 08:59, larry.hastings wrote: > http://hg.python.org/cpython/rev/d8f68195210e > changeset: 75448:d8f68195210e > user: Larry Hastings > date: Mon Mar 05 22:59:13 2012 -0800 > summary: > Fix a comment: PySequence_Fast() creates a list, not a tuple. > > files: > Include/abstract.h | 2 +- > 1 files changed, 1 insertions(+), 1 deletions(-) > > > diff --git a/Include/abstract.h b/Include/abstract.h > --- a/Include/abstract.h > +++ b/Include/abstract.h > @@ -1026,7 +1026,7 @@ > >
PyAPI_FUNC(PyObject *) PySequence_Fast(PyObject *o, const char* m); > /* > -     Returns the sequence, o, as a tuple, unless it's already a > +     Returns the sequence, o, as a list, unless it's already a >      tuple or list. Use PySequence_Fast_GET_ITEM to access the >      members of this list, and PySequence_Fast_GET_SIZE to get its length. > > > -- > Repository URL: http://hg.python.org/cpython > > _______________________________________________ > Python-checkins mailing list > Python-checkins at python.org > http://mail.python.org/mailman/listinfo/python-checkins > From stefan at bytereef.org Tue Mar 6 11:34:59 2012 From: stefan at bytereef.org (Stefan Krah) Date: Tue, 6 Mar 2012 11:34:59 +0100 Subject: [Python-Dev] Undocumented view==NULL argument in PyObject_GetBuffer() Message-ID: <20120306103459.GA30078@sleipnir.bytereef.org> Hello, PyObject_GetBuffer() had an undocumented variant that was used internally: PyObject_GetBuffer(obj, NULL, flags) view==NULL has never been allowed by either PEP-3118 or the documentation: PEP: "The first variable is the "exporting" object. The second argument is the address to a bufferinfo structure. Both arguments must never be NULL." 3.2 docs: "view must point to an existing Py_buffer structure allocated by the caller." The internal use was to bump up the export count of e.g. a bytearray without bothering to have it fill in a complete Py_buffer. The increased export count would then prevent the bytearray from being resized. However, this feature appears to be unused in the source tree. The last traces of a middle NULL argument that I found are here: http://hg.python.org/cpython/file/df3b2b5db900/Modules/posixmodule.c#l561 So, currently the checks for NULL just slow down bytearray_getbuffer() and a couple of other getbufferprocs. Also, due to the absence of a use case it takes some VCS history digging to find out why the feature was there in the first place.
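[Illustrative aside, not from the thread: the export-count behaviour described above is visible from pure Python. This sketch shows the documented buffer-protocol effect — a live export pins a bytearray's size — not the internal view==NULL fast path itself.]

```python
ba = bytearray(b"abc")
view = memoryview(ba)      # getbufferproc runs; the export count goes up
try:
    ba.append(0x64)        # resizing is forbidden while an export is alive
    resized = True
except BufferError:
    resized = False
view.release()             # export count drops back to zero
ba.append(0x64)            # now resizing works again (0x64 is b"d")
assert resized is False
assert ba == bytearray(b"abcd")
```

The view==NULL variant discussed here bumped that same counter without filling in a Py_buffer at all, which is why its only observable effect was to block resizing.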
The obvious question is: Will anyone need view==NULL in the future or can we remove the special case? Stefan Krah From ncoghlan at gmail.com Tue Mar 6 12:19:50 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 6 Mar 2012 21:19:50 +1000 Subject: [Python-Dev] Undocumented view==NULL argument in PyObject_GetBuffer() In-Reply-To: <20120306103459.GA30078@sleipnir.bytereef.org> References: <20120306103459.GA30078@sleipnir.bytereef.org> Message-ID: On Tue, Mar 6, 2012 at 8:34 PM, Stefan Krah wrote: > The obvious question is: Will anyone need view==NULL in the future or > can we remove the special case? The public API will still need a guard (to report an error), but +1 for otherwise eliminating the undocumented special case. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From stefan_ml at behnel.de Tue Mar 6 13:50:37 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Tue, 06 Mar 2012 13:50:37 +0100 Subject: [Python-Dev] Undocumented view==NULL argument in PyObject_GetBuffer() In-Reply-To: References: <20120306103459.GA30078@sleipnir.bytereef.org> Message-ID: Nick Coghlan, 06.03.2012 12:19: > On Tue, Mar 6, 2012 at 8:34 PM, Stefan Krah wrote: >> The obvious question is: Will anyone need view==NULL in the future or >> can we remove the special case? > > The public API will still need a guard (to report an error), but +1 > for otherwise eliminating the undocumented special case. +1 for removing it internally and only checking at the API level. I think it's just a left-over from the "old times" (pre 3.0) when the buffer protocol had an explicit option to lock a buffer. Back then, the code used to call into getbuffer() with a NULL pointer in order to acquire the (IIRC write-) lock. It took me some discussion back then to get this part of the protocol removed, but it's dead for good now.
Stefan From vinay_sajip at yahoo.co.uk Tue Mar 6 15:35:47 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Tue, 6 Mar 2012 14:35:47 +0000 (UTC) Subject: [Python-Dev] Windows uninstallation problem Message-ID: I've built an MSI with 3.3 on Windows 7 and installed it - it seems to work OK in that it passes all tests except test_tcl (intermittent failure). However, when I uninstall, python33.dll is left behind in System32. If I rebuild the MSI after some changes and reinstall, the old python33.dll is not overwritten. Before raising an issue I thought I'd check here, in case it's something to do with my configuration. Can anyone else confirm my findings? Regards, Vinay Sajip From stefan at bytereef.org Tue Mar 6 16:15:12 2012 From: stefan at bytereef.org (Stefan Krah) Date: Tue, 6 Mar 2012 16:15:12 +0100 Subject: [Python-Dev] Undocumented view==NULL argument in PyObject_GetBuffer() In-Reply-To: References: <20120306103459.GA30078@sleipnir.bytereef.org> Message-ID: <20120306151512.GA31794@sleipnir.bytereef.org> Nick Coghlan wrote: > On Tue, Mar 6, 2012 at 8:34 PM, Stefan Krah wrote: > > The obvious question is: Will anyone need view==NULL in the future or > > can we remove the special case? > > The public API will still need a guard (to report an error), but +1 > for otherwise eliminating the undocumented special case. I'm looking at other getbufferprocs apart from bytearray_getbuffer() and the public API seems pretty dangerous to me: For example, bytes_buffer_getbuffer() just calls PyBuffer_FillInfo(), which instantly returns 0 (success). Now the reference count to the bytes object is *not* incremented, so it might disappear while the consumer still thinks it's valid. The same happens in _ctypes.c:PyCData_NewGetBuffer(). For array_buffer_getbuf() it looks different: The export count is increased, but not the reference count. 
So while the array is protected against resizing, it's not immediately obvious to me if it's protected against being deallocated (but I just skimmed the code). It seems to me that bytearray was the only place where the view==NULL scheme obviously worked. Stefan Krah From stefan at bytereef.org Tue Mar 6 17:04:47 2012 From: stefan at bytereef.org (Stefan Krah) Date: Tue, 6 Mar 2012 17:04:47 +0100 Subject: [Python-Dev] PEP-393/PEP-3118: unicode format specifiers Message-ID: <20120306160447.GA32426@sleipnir.bytereef.org> Hello, In the array module the 'u' specifier previously meant "2-bytes, on wide builds 4-bytes". Currently in 3.3 the 'u' specifier is mapped to UCS4. I think it would be nice for Python 3.3 to implement the PEP-3118 suggestion: 'c' -> UCS1 'u' -> UCS2 'w' -> UCS4 Actually we could even add 'a' -> ASCII; then a unicode object could be a buffer provider that gives the correct view according to the maxchar in the buffer. This opens the possibility for strongly typed memoryviews of strings. Not sure if this is useful, just an idea. Stefan Krah From victor.stinner at gmail.com Tue Mar 6 17:43:43 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 6 Mar 2012 17:43:43 +0100 Subject: [Python-Dev] PEP-393/PEP-3118: unicode format specifiers In-Reply-To: <20120306160447.GA32426@sleipnir.bytereef.org> References: <20120306160447.GA32426@sleipnir.bytereef.org> Message-ID: > In the array module the 'u' specifier previously meant "2-bytes, on wide > builds 4-bytes". Currently in 3.3 the 'u' specifier is mapped to UCS4. > > I think it would be nice for Python 3.3 to implement the PEP-3118 > suggestion: > > 'c' -> UCS1 > > 'u' -> UCS2 > > 'w' -> UCS4 A Unicode string is an array of code points. Another approach is to expose such a string as an array of uint8/uint16/uint32 integers. I don't know if you expect to get a character / a substring when you read the buffer of a string object. Using Python 3.2, I get: >>> memoryview(b"abc")[0] b'a' ...
but using Python 3.3 I get a number :-) It is no longer possible to create a Unicode string containing characters outside the U+0000-U+10FFFF range. You might apply the same restriction in the buffer API for UCS4. That may be inefficient; the check can instead be done when you convert the buffer to a string. > Actually we could even add 'a' -> ASCII ASCII implies that the values are in the range U+0000-U+007F (0-127). Same as for UCS4: you may do the check in the buffer API or when the buffer is converted to a string. I don't think that it would be useful to add an ASCII buffer type, because when the buffer is converted to a string, Python has to recompute the maximum character (to choose between ASCII, UCS1, UCS2 and UCS4). For example, "abc\xe9"[:-1] is ASCII. UCS1 is enough for ASCII strings. Victor From stefan at bytereef.org Tue Mar 6 19:15:16 2012 From: stefan at bytereef.org (Stefan Krah) Date: Tue, 6 Mar 2012 19:15:16 +0100 Subject: [Python-Dev] PEP-393/PEP-3118: unicode format specifiers In-Reply-To: References: <20120306160447.GA32426@sleipnir.bytereef.org> Message-ID: <20120306181516.GA341@sleipnir.bytereef.org> Victor Stinner wrote: > > 'c' -> UCS1 > > 'u' -> UCS2 > > 'w' -> UCS4 > > A Unicode string is an array of code points. Another approach is to > expose such a string as an array of uint8/uint16/uint32 integers. I > don't know if you expect to get a character / a substring when you > read the buffer of a string object. Using Python 3.2, I get: > > >>> memoryview(b"abc")[0] > b'a' > > ... but using Python 3.3 I get a number :-) Yes, that's changed because officially (see struct module) the format is unsigned bytes, which are integers in struct module syntax: >>> unsigned_bytes = memoryview(b"abc") >>> unsigned_bytes.format 'B' >>> char_array = unsigned_bytes.cast('c') >>> char_array.format 'c' >>> char_array[0] b'a' Possibly the uint8/uint16/uint32 integer approach that you mention would make more sense.
Stefan Krah From g.brandl at gmx.net Tue Mar 6 19:42:08 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Tue, 06 Mar 2012 19:42:08 +0100 Subject: [Python-Dev] [RELEASED] Python 3.3.0 alpha 1 In-Reply-To: <4F54BF2C.7000107@nedbatchelder.com> References: <4F54711E.2020006@python.org> <4F54BF2C.7000107@nedbatchelder.com> Message-ID: On 05.03.2012 14:27, Ned Batchelder wrote: >> For a more extensive list of changes in 3.3.0, see >> >> http://docs.python.org/3.3/whatsnew/3.3.html >> > The 3.3 whatsnews page doesn't seem to mention PEP 414 or Unicode > literals at all. Indeed. Thanks to Nick, this is now fixed. Georg From jimjjewett at gmail.com Tue Mar 6 20:43:54 2012 From: jimjjewett at gmail.com (Jim J. Jewett) Date: Tue, 06 Mar 2012 11:43:54 -0800 (PST) Subject: [Python-Dev] [RELEASED] Python 3.3.0 alpha 1 In-Reply-To: <4F54711E.2020006@python.org> Message-ID: <4f5668fa.a81e340a.358a.4fa6@mx.google.com> In http://mail.python.org/pipermail/python-dev/2012-March/117348.html Georg Brandl posted: > Python 3.3 includes a range of improvements of the 3.x series, as well as easier > porting between 2.x and 3.x. Major new features in the 3.3 release series are: As much as it is nice to just celebrate improvements, I think readers (particularly on the download page http://www.python.org/download/releases/3.3.0/ ) would be better served if there were an additional point about porting and the hash changes. http://docs.python.org/dev/whatsnew/3.3.html#porting-to-python-3-3 also failed to mention this, and even the changelog didn't seem to warn people about failing tests or tell them how to work around it. Perhaps something like: Hash Randomization (issue 13703) is now on by default. Unfortunately, this does break some tests; it can be temporarily turned off by setting the environment variable PYTHONHASHSEED to "0" before launching python. -jJ -- If there are still threading problems with my replies, please email me with details, so that I can try to resolve them.
-jJ From g.rodola at gmail.com Tue Mar 6 21:31:20 2012 From: g.rodola at gmail.com (=?ISO-8859-1?Q?Giampaolo_Rodol=E0?=) Date: Tue, 6 Mar 2012 21:31:20 +0100 Subject: [Python-Dev] [RELEASED] Python 3.3.0 alpha 1 In-Reply-To: <4f5668fa.a81e340a.358a.4fa6@mx.google.com> References: <4F54711E.2020006@python.org> <4f5668fa.a81e340a.358a.4fa6@mx.google.com> Message-ID: On 6 March 2012 20:43, Jim J. Jewett wrote: > > > In http://mail.python.org/pipermail/python-dev/2012-March/117348.html > Georg Brandl posted: > >> Python 3.3 includes a range of improvements of the 3.x series, as well as easier >> porting between 2.x and 3.x. Major new features in the 3.3 release series are: > > As much as it is nice to just celebrate improvements, I think > readers (particularly on the download page > http://www.python.org/download/releases/3.3.0/ ) would be better > served if there were an additional point about porting and the > hash changes. > > http://docs.python.org/dev/whatsnew/3.3.html#porting-to-python-3-3 > also failed to mention this, and even the changelog didn't seem to > warn people about failing tests or tell them how to work around it. > > Perhaps something like: > > Hash Randomization (issue 13703) is now on by default. Unfortunately, > this does break some tests; it can be temporarily turned off by setting > the environment variable PYTHONHASHSEED to "0" before launching python. > > > -jJ > > -- > > If there are still threading problems with my replies, please > email me with details, so that I can try to resolve them. -jJ That's why I once proposed to include whatsnew.rst changes every time a new feature is added/committed. Assigning that effort to the release manager or whoever is supposed to take care of this is both impractical and prone to forgetfulness.
--- Giampaolo http://code.google.com/p/pyftpdlib/ http://code.google.com/p/psutil/ http://code.google.com/p/pysendfile/ From martin at v.loewis.de Tue Mar 6 22:59:32 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 06 Mar 2012 22:59:32 +0100 Subject: [Python-Dev] PEP-393/PEP-3118: unicode format specifiers In-Reply-To: <20120306160447.GA32426@sleipnir.bytereef.org> References: <20120306160447.GA32426@sleipnir.bytereef.org> Message-ID: <4F5688C4.6080203@v.loewis.de> > I think it would be nice for Python 3.3 to implement the PEP-3118 > suggestion: > > 'c' -> UCS1 > > 'u' -> UCS2 > > 'w' -> UCS4 What is the use case for these format codes? Regards, Martin From martin at v.loewis.de Tue Mar 6 23:01:52 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 06 Mar 2012 23:01:52 +0100 Subject: [Python-Dev] Windows uninstallation problem In-Reply-To: References: Message-ID: <4F568950.5080607@v.loewis.de> On 06.03.2012 15:35, Vinay Sajip wrote: > I've built an MSI with 3.3 on Windows 7 and installed it - it seems to work OK > in that it passes all tests except test_tcl (intermittent failure). However, > when I uninstall, python33.dll is left behind in System32. If I rebuild the MSI > after some changes and reinstall, the old python33.dll is not overwritten. > > Before raising an issue I thought I'd check here, in case it's something to do > with my configuration. Can anyone else confirm my findings? It most likely is a misconfiguration of your system. I guess that the registry key for the DLL has a non-zero refcount before you started the installation, so that the refcount didn't drop to zero when you uninstalled. Regards, Martin From vinay_sajip at yahoo.co.uk Tue Mar 6 23:33:23 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Tue, 6 Mar 2012 22:33:23 +0000 (UTC) Subject: [Python-Dev] Windows uninstallation problem References: <4F568950.5080607@v.loewis.de> Message-ID: Martin v.
Löwis v.loewis.de> writes: > It most likely is a misconfiguration of your system. I guess that the > registry key for the DLL has a non-zero refcount before you started the > installation, so that the refcount didn't drop to zero when you uninstalled. That must have been it - thanks. I uninstalled, removed the key from the SharedDLLs (it was still there with a refcount of 1, as per your analysis), reinstalled and uninstalled - all is well. I did raise an issue but I'll go and close it now. Thanks for the help. Regards, Vinay Sajip From ncoghlan at gmail.com Wed Mar 7 01:17:25 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 7 Mar 2012 10:17:25 +1000 Subject: [Python-Dev] PEP-393/PEP-3118: unicode format specifiers In-Reply-To: <20120306181516.GA341@sleipnir.bytereef.org> References: <20120306160447.GA32426@sleipnir.bytereef.org> <20120306181516.GA341@sleipnir.bytereef.org> Message-ID: On Wed, Mar 7, 2012 at 4:15 AM, Stefan Krah wrote: > Victor Stinner wrote: >> A Unicode string is an array of code points. Another approach is to >> expose such a string as an array of uint8/uint16/uint32 integers. I >> don't know if you expect to get a character / a substring when you >> read the buffer of a string object. Using Python 3.2, I get: >> >> >>> memoryview(b"abc")[0] >> b'a' >> >> ... but using Python 3.3 I get a number :-) > > Yes, that's changed because officially (see struct module) the format > is unsigned bytes, which are integers in struct module syntax: > >>>> unsigned_bytes = memoryview(b"abc") >>>> unsigned_bytes.format > 'B' >>>> char_array = unsigned_bytes.cast('c') >>>> char_array.format > 'c' >>>> char_array[0] > b'a' To maintain backwards compatibility, we should probably take the purity hit and officially change the default format of memoryview() to 'c', requiring the explicit cast to 'B' to get the new more bytes-like behaviour.
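[Illustrative aside, not part of the thread: the behavioural difference being debated can be checked directly on a recent Python 3, where the 'B' default described above is what shipped.]

```python
m = memoryview(b"abc")
assert m.format == "B"   # unsigned bytes: indexing yields integers
assert m[0] == 97        # ord("a")

c = m.cast("c")          # reinterpret as single-byte 'c' elements
assert c.format == "c"
assert c[0] == b"a"      # indexing now yields 1-byte bytes objects, as in 3.2
```

Both views share the same underlying buffer; only the struct-module format code, and therefore the Python-level element type, differs.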
Using 'c' as the default format is a little ugly, but not as ugly as breaking currently working 3.2 code in the upgrade to 3.3. > Possibly the uint8/uint16/uint32 integer approach that you mention > would make more sense. Any changes made in this area should be aimed specifically at making life easier for developers dealing with ASCII puns in binary protocols. Being able to ask a string for a memoryview, and receiving one back with the format set to the appropriate value could potentially help with that by indicating: ASCII: each code point is mapped to an integer in the range 0-127 latin-1: each code point is mapped to an integer in the range 0-255 UCS2: each code point is mapped to an integer in the range 0-65535 UCS4: each code point is mapped to an integer in the range 0-0x10FFFF Using the actual code point values rather than bytes representations which may vary in length can help gloss over the differences in the underlying data layout. However, use cases should be explored more thoroughly *first* before any additional changes are made to the supported formats. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From stefan_ml at behnel.de Wed Mar 7 08:08:39 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Wed, 07 Mar 2012 08:08:39 +0100 Subject: [Python-Dev] [RELEASED] Python 3.3.0 alpha 1 In-Reply-To: <4f5668fa.a81e340a.358a.4fa6@mx.google.com> References: <4F54711E.2020006@python.org> <4f5668fa.a81e340a.358a.4fa6@mx.google.com> Message-ID: Jim J. Jewett, 06.03.2012 20:43: > Hash Randomization (issue 13703) is now on by default. Unfortunately, > this does break some tests; it can be temporarily turned off by setting > the environment variable PYTHONHASHSEED to "0" before launching python. I don't think that makes it clear enough that that's just a work-around and that it's the tests that need fixing in the first place.
Stefan From g.brandl at gmx.net Wed Mar 7 08:56:45 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Wed, 07 Mar 2012 08:56:45 +0100 Subject: [Python-Dev] [RELEASED] Python 3.3.0 alpha 1 In-Reply-To: References: <4F54711E.2020006@python.org> <4f5668fa.a81e340a.358a.4fa6@mx.google.com> Message-ID: On 07.03.2012 08:08, Stefan Behnel wrote: > Jim J. Jewett, 06.03.2012 20:43: >> Hash Randomization (issue 13703) is now on by default. Unfortunately, >> this does break some tests; it can be temporarily turned off by setting >> the environment variable PYTHONHASHSEED to "0" before launching python. > > I don't think that makes it clear enough that that's just a work-around and > that it's the tests that need fixing in the first place. It's much too much for the release announcement anyway. I'll add one bullet point in the future announcements. I've added a note to what's new so that someone will add an appropriate paragraph. Georg From armin.ronacher at active-4.com Wed Mar 7 09:55:51 2012 From: armin.ronacher at active-4.com (Armin Ronacher) Date: Wed, 07 Mar 2012 08:55:51 +0000 Subject: [Python-Dev] PEP 414 - some numbers from the Django port In-Reply-To: References: Message-ID: <4F572297.10607@active-4.com> Hi, On 3/3/12 2:28 AM, Vinay Sajip wrote: > So, looking at a large project in a relevant problem domain, unicode_literals > and native string markers would appear not to adversely impact readability or > performance. What are you trying to argue? That the overall Django testsuite does not do a lot of string processing, less processing with native strings? I'm surprised you see a difference at all over the whole Django testsuite and I wonder why you get a slowdown at all for the ported Django on 2.7. 
Regards, Armin From stefan at bytereef.org Wed Mar 7 11:50:44 2012 From: stefan at bytereef.org (Stefan Krah) Date: Wed, 7 Mar 2012 11:50:44 +0100 Subject: [Python-Dev] PEP-393/PEP-3118: unicode format specifiers In-Reply-To: <4F5688C4.6080203@v.loewis.de> References: <20120306160447.GA32426@sleipnir.bytereef.org> <4F5688C4.6080203@v.loewis.de> Message-ID: <20120307105044.GA3822@sleipnir.bytereef.org> "Martin v. Löwis" wrote: > > I think it would be nice for Python 3.3 to implement the PEP-3118 > > suggestion: > > > > 'c' -> UCS1 > > > > 'u' -> UCS2 > > > > 'w' -> UCS4 > > What is the use case for these format codes? Unfortunately I've only worked with UTF-8 so far and I'm not too familiar with UCS2 and UCS4. *If* the arrays that Victor mentioned give one character per array location, then memoryview(str) could be used for zero-copy slicing etc. The main reason why I raised the issue is this: If Python-3.3 is shipped with 'u' -> UCS4 in the array module and *then* someone figures out that the above format codes are a great idea, we'd be stuck with yet another format code incompatibility. Stefan Krah From martin at v.loewis.de Wed Mar 7 12:39:51 2012 From: martin at v.loewis.de (martin at v.loewis.de) Date: Wed, 07 Mar 2012 12:39:51 +0100 Subject: [Python-Dev] PEP-393/PEP-3118: unicode format specifiers In-Reply-To: <20120307105044.GA3822@sleipnir.bytereef.org> References: <20120306160447.GA32426@sleipnir.bytereef.org> <4F5688C4.6080203@v.loewis.de> <20120307105044.GA3822@sleipnir.bytereef.org> Message-ID: <20120307123951.Horde.ZZZ-CKGZi1VPV0kH5DVjlIA@webmail.df.eu> > The main reason why I raised the issue is this: If Python-3.3 is shipped > with 'u' -> UCS4 in the array module and *then* someone figures out that > the above format codes are a great idea, we'd be stuck with yet another > format code incompatibility. Ah. I think the array module should maintain compatibility with Python 3.2, i.e. "u" should continue to denote Py_UNICODE, i.e.
7fa098f6dc6a should be reverted. It may be that the 'u' code is not particularly useful, but AFAICT, it never was useful. Regards, Martin From ncoghlan at gmail.com Wed Mar 7 12:40:19 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 7 Mar 2012 21:40:19 +1000 Subject: [Python-Dev] PEP-393/PEP-3118: unicode format specifiers In-Reply-To: <20120307105044.GA3822@sleipnir.bytereef.org> References: <20120306160447.GA32426@sleipnir.bytereef.org> <4F5688C4.6080203@v.loewis.de> <20120307105044.GA3822@sleipnir.bytereef.org> Message-ID: On Wed, Mar 7, 2012 at 8:50 PM, Stefan Krah wrote: > *If* the arrays that Victor mentioned give one character per array location, > then memoryview(str) could be used for zero-copy slicing etc. A slight tangent, but it's worth trying to stick to the "code point" term when talking about what Unicode strings contain. Even in UCS4, full characters may be expressed as multiple code points (to be honest, I still don't understand exactly how code points are composed into graphemes and characters and mapped to glyphs for display, I just know the mapping is a lot more complicated than the one-to-one implied by referring to code points as characters). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Wed Mar 7 12:53:10 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 7 Mar 2012 21:53:10 +1000 Subject: [Python-Dev] PEP-393/PEP-3118: unicode format specifiers In-Reply-To: <20120307123951.Horde.ZZZ-CKGZi1VPV0kH5DVjlIA@webmail.df.eu> References: <20120306160447.GA32426@sleipnir.bytereef.org> <4F5688C4.6080203@v.loewis.de> <20120307105044.GA3822@sleipnir.bytereef.org> <20120307123951.Horde.ZZZ-CKGZi1VPV0kH5DVjlIA@webmail.df.eu> Message-ID: On Wed, Mar 7, 2012 at 9:39 PM, wrote: > Ah. I think the array module should maintain compatibility with Python 3.2, > i.e. "u" should continue to denote Py_UNICODE, i.e. 7fa098f6dc6a should be reverted.
> It may be that the 'u' code is not particularly useful, but AFAICT, it never > was useful. +1. It can go away at the same time that Py_UNICODE itself goes away. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From mark at hotpy.org Wed Mar 7 16:33:20 2012 From: mark at hotpy.org (Mark Shannon) Date: Wed, 07 Mar 2012 15:33:20 +0000 Subject: [Python-Dev] Adding a builtins parameter to eval(), exec() and __import__(). Message-ID: <4F577FC0.9060408@hotpy.org> I propose adding an optional (keyword-only?) 3rd parameter, "builtins", to exec(), eval(), __import__() and any other functions that take locals and globals as parameters. Currently, Python code is evaluated in a context of three namespaces: locals(), globals() and builtins. However, eval & exec only take 2 (optional) namespaces as parameters, locals and globals, so access to builtins is poorly defined. The reason I am proposing this here rather than on python-ideas is that treating the triple of [locals, globals, builtins] as a single "execution context" can be implemented in a really nice way. Internally, the execution context of [locals, globals, builtins] can be treated as a single immutable object (a custom object or tuple). Treating it as immutable means that it can be copied merely by taking a reference. A nice trick in the implementation is to make a NULL locals mean "fast" locals for function contexts. Frames could then acquire their globals and builtins by a single reference copy from the function object, rather than searching globals for a '__builtins__' entry to find the builtins. A unified execution context will speed up all calls (to Python functions), as frame allocation and deallocation would be faster. I used this implementation in my original HotPy VM, and it worked well. It should also help with sandboxing, as it would make it easier to analyse and thus control access to builtins, since the execution context of all code would be easier to determine.
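[Illustrative aside, not part of Mark's message or patch: today the only way to control the builtins seen by exec()'d code is to smuggle them in through the globals dict, which is the "poorly defined" access the proposal wants to replace with an explicit parameter.]

```python
# Sketch of the current mechanism: builtins travel inside globals
# under the '__builtins__' key; there is no separate argument.
safe_globals = {"__builtins__": {"len": len}}   # grant only len()
exec("n = len('abc')", safe_globals)
assert safe_globals["n"] == 3

try:
    exec("open('x')", safe_globals)             # open() was not granted
    blocked = False
except NameError:
    blocked = True
assert blocked
```

With the proposed third parameter, the same restriction would be expressed as an explicit argument instead of a magic dictionary key.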
Currently, it is impossible to allow one function access to sensitive functions like open(), while denying it to others, as any code can then get the builtins of another function via f.__globals__['__builtins__']. Separating builtins from globals could solve this. Cheers, Mark. From benjamin at python.org Wed Mar 7 16:56:22 2012 From: benjamin at python.org (Benjamin Peterson) Date: Wed, 7 Mar 2012 09:56:22 -0600 Subject: [Python-Dev] Adding a builtins parameter to eval(), exec() and __import__(). In-Reply-To: <4F577FC0.9060408@hotpy.org> References: <4F577FC0.9060408@hotpy.org> Message-ID: 2012/3/7 Mark Shannon : > Currently, it is impossible to allow one function access to sensitive > functions like open(), while denying it to others, as any code can then > get the builtins of another function via f.__globals__['__builtins__']. > Separating builtins from globals could solve this. I like this idea. We could finally kill __builtins__, too, which has often been confusing for people. -- Regards, Benjamin From brett at python.org Wed Mar 7 20:16:02 2012 From: brett at python.org (Brett Cannon) Date: Wed, 7 Mar 2012 14:16:02 -0500 Subject: [Python-Dev] Adding a builtins parameter to eval(), exec() and __import__(). In-Reply-To: References: <4F577FC0.9060408@hotpy.org> Message-ID: On Wed, Mar 7, 2012 at 10:56, Benjamin Peterson wrote: > 2012/3/7 Mark Shannon : > > Currently, it is impossible to allow one function access to sensitive > > functions like open(), while denying it to others, as any code can then > > get the builtins of another function via f.__globals__['__builtins__']. > > Separating builtins from globals could solve this. > > I like this idea. We could finally kill __builtins__, too, which has > often been confusing for people. I like it as well.
It's a mess right now to try to grab the __import__() implementation and this would actually help clarify import semantics by saying that __import__() for any chained imports comes from __import__()s locals, globals, or builtins arguments (in that order) or from the builtins module itself (i.e. tstate->builtins).

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From stefan_ml at behnel.de Wed Mar 7 21:40:18 2012
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Wed, 07 Mar 2012 21:40:18 +0100
Subject: [Python-Dev] problem with recursive "yield from" delegation
Message-ID: 

Hi,

I found a problem in the current "yield from" implementation that I think is worth discussing:

http://bugs.python.org/issue14220

Test code:

    def g1():
        yield "y1"
        yield from g2()
        yield "y4"

    def g2():
        yield "y2"
        try:
            yield from gi
        except ValueError:
            pass  # catch "already running" error
        yield "y3"

    gi = g1()
    for y in gi:
        print("Yielded: %s" % (y,))

This is what it currently does:

1) g1() delegates to a new g2(), propagates its "y2" value and asks for the next value
2) g2 delegates back to the g1 instance and asks for its next value
3) Python sees the active delegation in g1 and asks g2 for its next value
4) g2 sees that it's already running and throws an exception

Ok so far. Now:

5) the exception is propagated into g1 at call level 3) instead of the original requestor g2 one level above
6) g1 undelegates and terminates by the exception
7) g2 catches the exception, yields "y3" and then terminates normally
8) g1 gets control back but has already terminated and does nothing

Effect: g1 does not yield "y4" anymore.

The problem is in steps 5) and 6), which are handled by g1 at the wrong call level. They shouldn't lead to undelegation and termination in g1, just to an exception being raised in g2.

I ran into this while trying to adapt the implementation for Cython, which has a different generator type implementation but otherwise uses more or less the same code now.
But I'm not sure how to fix this one without major changes to the implementation, especially not without special casing the generator type on delegation (which won't work because CPython doesn't know about Cython generators). Any ideas? Stefan From vinay_sajip at yahoo.co.uk Wed Mar 7 23:36:43 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 7 Mar 2012 22:36:43 +0000 (UTC) Subject: [Python-Dev] PEP 414 - some numbers from the Django port References: <4F572297.10607@active-4.com> Message-ID: Armin Ronacher active-4.com> writes: > What are you trying to argue? That the overall Django testsuite does > not do a lot of string processing, less processing with native strings? > > I'm surprised you see a difference at all over the whole Django > testsuite and I wonder why you get a slowdown at all for the ported > Django on 2.7. The point of the figures is to show there is *no* difference (statistically speaking) between the three sets of samples. Of course, any individual run or set of runs could be higher or lower due to other things happening on the machine (not that I was running any background tasks), so the idea of the simple statistical analysis is to determine whether these samples could all have come from the same populations. According to ministat, they could have (with a 95% confidence level). The Django test suite is pretty comprehensive, so it would presumably exercise every part of Django, including the parts that handle processing of requests and producing responses. I can't confirm this, not having done a coverage analysis of Django; but this seems like a more representative workload than any microbenchmark which just measures a single operation, like the overhead of a wrapper. And so my argument was that the microbenchmark numbers didn't give a meaningful indication of the actual performance in a real scenario, and they should be taken in that light. No doubt there are other, better (more useful) tests that could be performed (e.g. 
ab run against all three variants and requests/sec figures compared) but I had the Django test run figures to hand (since they're a byproduct of the porting work), and so presented them in my post. Anyway, it doesn't really matter now, since the latest version of the PEP no longer mentions those figures. Regards, Vinay Sajip From benjamin at python.org Thu Mar 8 00:01:07 2012 From: benjamin at python.org (Benjamin Peterson) Date: Wed, 7 Mar 2012 17:01:07 -0600 Subject: [Python-Dev] problem with recursive "yield from" delegation In-Reply-To: References: Message-ID: 2012/3/7 Stefan Behnel : > The problem is in steps 5) and 6), which are handled by g1 at the wrong > call level. They shouldn't lead to undelegation and termination in g1, just > to an exception being raised in g2. That looks wrong indeed. > > I ran into this while trying to adapt the implementation for Cython, which > has a different generator type implementation but otherwise uses more or > less the same code now. But I'm not sure how to fix this one without major > changes to the implementation Cython's or CPython's implementation? -- Regards, Benjamin From brett at python.org Thu Mar 8 00:05:50 2012 From: brett at python.org (Brett Cannon) Date: Wed, 7 Mar 2012 18:05:50 -0500 Subject: [Python-Dev] importlib cleared for merging into default! Message-ID: At the language summit today I got clearance to merge my importlib bootstrap branch (http://hg.python.org/sandbox/bcannon#bootstrap_importlib) thanks to performance being about 5% slower using the normal_startup (which, as Thomas Wouters said, is less than the difference of using the newest gcc in some benchmarking he recently did). Obviously thanks to everyone who has helped out with making this happen over the years! So, where does that leave us? First is getting full code review and sign off from somebody (http://bugs.python.org/issue2377 is the issue tracking this). 
Once I have that I will do the merge and then go through the tracker to update issues to see if they still apply or need to be re-targeted for importlib. Once importlib has been merged there is some stuff that needs to happen. I will first strip imp down to what has to be in import.c (e.g. importing a built-in module), rename it _imp, and then re-implement/extend whatever is needed in an imp.py module. This will allow for much of the C code left in import.c to go away (i.e. imp.find_module() is the reason the finder code in import.c has not been removed yet). It will also alleviate the other VMs from having to implement all of imp from scratch. Once importlib is in I will also do some proposing on how to undo some import design decisions that were caused because it was all C code (e.g. implicit stuff like a sys.meta_path entry that handles sys.path/sys.path_importer_cache/sys.path_hooks, the meaning of None in sys.path_importer_cache). Everyone at the language summit was supportive of this stuff and basically agreed to it but wanted it as a separate step from the merge to get everything moving faster. What can be done in parallel/while waiting is exposing more of importlib's innards. This ties into some of the proposals I will be making once the merge occurs. Everything else I have in mind is long term stdlib cleanup using importlib, but that is not important now. IOW, someone please review my bootstrap_importlib branch so I can merge it. =) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Thu Mar 8 00:28:38 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 8 Mar 2012 09:28:38 +1000 Subject: [Python-Dev] PEP 414 - some numbers from the Django port In-Reply-To: References: <4F572297.10607@active-4.com> Message-ID: On Thu, Mar 8, 2012 at 8:36 AM, Vinay Sajip wrote: > Anyway, it doesn't really > matter now, since the latest version of the PEP no longer mentions those figures. 
Indeed, I deliberately removed the part about performance concerns, since I considered it a distraction from what I see as the heart of the problem PEP 414 is designed to address (i.e. that the purely mechanical changes previously required to Unicode text that is already clearly marked as such in the Python 2 version are irrelevant noise when it comes to identifying and reviewing the *actual* changes needed for a successful Python 3 port). Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From p.f.moore at gmail.com Thu Mar 8 00:33:54 2012 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 7 Mar 2012 23:33:54 +0000 Subject: [Python-Dev] importlib cleared for merging into default! In-Reply-To: References: Message-ID: On 7 March 2012 23:05, Brett Cannon wrote: > At the language summit today I got clearance to merge my importlib bootstrap > branch (http://hg.python.org/sandbox/bcannon#bootstrap_importlib) thanks to > performance being about 5% slower using the normal_startup (which, as Thomas > Wouters said, is less than the difference of using the newest gcc in some > benchmarking he recently did). Obviously thanks to everyone who has helped > out with making this happen over the years! Yay! Congratulations for getting this done. When I first co-authored PEP 302 I never realised what a long road it would be from there to here. Thanks for making it happen. Paul. From brett at yvrsfo.ca Thu Mar 8 00:52:43 2012 From: brett at yvrsfo.ca (Brett Cannon) Date: Wed, 7 Mar 2012 18:52:43 -0500 Subject: [Python-Dev] PEP 412 has been accepted Message-ID: Since PEP 412 has code that doesn't break tests anymore (thanks to hash randomization), it was just accepted. Mark, can you make sure there is an up-to-date patch in the tracker so people can potentially look at the code at the sprints here at PyCon? And also please apply for core dev privileges (http://docs.python.org/devguide/coredev.html) so that we can make you fix bugs. 
=) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Thu Mar 8 00:57:42 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 8 Mar 2012 09:57:42 +1000 Subject: [Python-Dev] problem with recursive "yield from" delegation In-Reply-To: References: Message-ID: On Thu, Mar 8, 2012 at 6:40 AM, Stefan Behnel wrote: > I ran into this while trying to adapt the implementation for Cython, which > has a different generator type implementation but otherwise uses more or > less the same code now. But I'm not sure how to fix this one without major > changes to the implementation, especially not without special casing the > generator type on delegation (which won't work because CPython doesn't know > about Cython generators). Any ideas? After tinkering with it a bit, a couple of my original guesses as to the underlying problem were clearly wrong. I'm moving house next week, so it'll be a while before I get to look at it in detail, but I added Mark Shannon to the issue's nosy list. He's been working on a few patches lately to clean up generator related state handling in general, so he may have some insight into this. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From benjamin at python.org Thu Mar 8 01:00:36 2012 From: benjamin at python.org (Benjamin Peterson) Date: Wed, 7 Mar 2012 18:00:36 -0600 Subject: [Python-Dev] problem with recursive "yield from" delegation In-Reply-To: References: Message-ID: 2012/3/7 Benjamin Peterson : > 2012/3/7 Stefan Behnel : >> The problem is in steps 5) and 6), which are handled by g1 at the wrong >> call level. They shouldn't lead to undelegation and termination in g1, just >> to an exception being raised in g2. > > That looks wrong indeed. 
Fixed as of 3357eac1ba62

--
Regards,
Benjamin

From ncoghlan at gmail.com Thu Mar 8 01:11:53 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 8 Mar 2012 10:11:53 +1000
Subject: [Python-Dev] problem with recursive "yield from" delegation
In-Reply-To: 
References: 
Message-ID: 

On Thu, Mar 8, 2012 at 10:00 AM, Benjamin Peterson wrote:
> 2012/3/7 Benjamin Peterson :
>> 2012/3/7 Stefan Behnel :
>>> The problem is in steps 5) and 6), which are handled by g1 at the wrong
>>> call level. They shouldn't lead to undelegation and termination in g1, just
>>> to an exception being raised in g2.
>>
>> That looks wrong indeed.
>
> Fixed as of 3357eac1ba62

Thanks. And, since the fix was entirely internal to the generator implementation, Stefan should be right for the Cython generators, too.

Cheers, Nick.

--
Nick Coghlan  |  ncoghlan at gmail.com  |  Brisbane, Australia

From jimjjewett at gmail.com Thu Mar 8 01:32:57 2012
From: jimjjewett at gmail.com (Jim J. Jewett)
Date: Wed, 07 Mar 2012 16:32:57 -0800 (PST)
Subject: [Python-Dev] problem with recursive "yield from" delegation
In-Reply-To: 
Message-ID: <4f57fe39.a526340a.7705.0192@mx.google.com>

http://mail.python.org/pipermail/python-dev/2012-March/117396.html
Stefan Behnel posted:

> I found a problem in the current "yield from" implementation ...

[paraphrasing]

g1 yields from g2
g2 yields from g1
XXX python follows the existing delegation without checking re-entrancy
g2 (2nd call) checks re-entrancy, and raises an exception
g1 (2nd call) gets to handle the exception, and doesn't
g2 (1st call) gets to handle the exception, and does

How is this a problem?

Re-entering a generator is a bug. Python caught it and raised an appropriate exception. It would be nice if python caught the generator cycle as soon as it was created, just as it would be nice if reference cycles were collected as soon as they became garbage.
But python doesn't promise to catch cycles immediately, and the checks required to do so would slow down all code, so in practice the checks are delayed. -jJ -- If there are still threading problems with my replies, please email me with details, so that I can try to resolve them. -jJ From ncoghlan at gmail.com Thu Mar 8 01:47:00 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 8 Mar 2012 10:47:00 +1000 Subject: [Python-Dev] problem with recursive "yield from" delegation In-Reply-To: <4f57fe39.a526340a.7705.0192@mx.google.com> References: <4f57fe39.a526340a.7705.0192@mx.google.com> Message-ID: On Thu, Mar 8, 2012 at 10:32 AM, Jim J. Jewett wrote: > How is this a problem? > > Re-entering a generator is a bug. ?Python caught it and raised an > appropriate exception. No, the problem was that the interpreter screwed up the state of the generators while attempting to deal with the erroneous reentry. The ValueError *should* just be caught and completely suppressed by the try/except block, but that wasn't quite happening properly - the failed attempt at reentry left the generators in a dodgy state (which is why the subsequent "3" was being produced, but then the expected final "4" vanished into the electronic ether). Benjamin figured out where the generator's reentrancy check was going wrong, so Stefan's example should do the right thing in the next alpha (i.e. the ValueError will still get raised and suppressed by the try/except block, the inner generator will complete, but the outer generator will also continue on to produce the final value). Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From jimjjewett at gmail.com Thu Mar 8 01:48:28 2012 From: jimjjewett at gmail.com (Jim J. Jewett) Date: Wed, 07 Mar 2012 16:48:28 -0800 (PST) Subject: [Python-Dev] Adding a builtins parameter to eval(), exec() and __import__(). 
In-Reply-To: 
Message-ID: <4f5801dc.3221340a.3486.0373@mx.google.com>

http://mail.python.org/pipermail/python-dev/2012-March/117395.html
Brett Cannon posted: [in reply to Mark Shannon's suggestion of adding a builtins parameter to match locals and globals]

> It's a mess right now to try to grab the __import__()
> implementation and this would actually help clarify import semantics by
> saying that __import__() for any chained imports comes from __import__()s
> locals, globals, or builtins arguments (in that order) or from the builtins
> module itself (i.e. tstate->builtins).

How does that differ from today? If you're saying that the locals and (module-level) globals aren't always checked in order, then that is a semantic change. Probably a good change, but still a change -- and it can be made independently of Mark's suggestion.

Also note that I would assume this was for sandboxing, and that missing names should *not* fall back to the "real" globals, although I would understand if bootstrapping required the import statement to get special treatment.

(Note that I like Mark's proposed change; I just don't see how it cleans up import.)

-jJ

--
If there are still threading problems with my replies, please email me with details, so that I can try to resolve them. -jJ

From victor.stinner at gmail.com Thu Mar 8 02:39:40 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Thu, 8 Mar 2012 02:39:40 +0100
Subject: [Python-Dev] Non-string keys in type dict
Message-ID: 

Hi,

During the Language Summit 2011 (*), it was discussed that PyPy and Jython don't support non-string keys in type dicts. An issue was opened to emit a warning on such dicts, but the patch has not been committed yet.

I'm trying to fix Lib/test/crashers/losing_mro_ref.py: I wrote a patch fixing the specific issue (keep a strong reference to the MRO during the lookup, see #14199), but I realized that the real problem is that we allow custom objects in the type dict.
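[For illustration, such a type dict can be produced directly in CPython today -- a minimal sketch:]

```python
# Creating a type whose dict carries a non-string key:
A = type("A", (object,), {42: "abc", "x": 1})

assert A.__dict__[42] == "abc"  # the int key really is stored
assert A.x == 1                 # ordinary attribute access is unaffected
# There is no attribute-access spelling for the 42 entry, which is
# part of why the usefulness of such keys is being questioned.
```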
So my question is: what is the use case of such dict? Why do we still support it? Can't we simply raise an error if the dict contains non-string keys? (*) http://blog.python.org/2011/03/2011-language-summit-report.html Victor From benjamin at python.org Thu Mar 8 02:42:31 2012 From: benjamin at python.org (Benjamin Peterson) Date: Wed, 7 Mar 2012 19:42:31 -0600 Subject: [Python-Dev] Non-string keys in type dict In-Reply-To: References: Message-ID: 2012/3/7 Victor Stinner : > So my question is: what is the use case of such dict? Why do we still > support it? Probably a side-effect of implementation. > Can't we simply raise an error if the dict contains > non-string keys? Sounds okay to me. -- Regards, Benjamin From victor.stinner at gmail.com Thu Mar 8 02:45:07 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 8 Mar 2012 02:45:07 +0100 Subject: [Python-Dev] Non-string keys in type dict In-Reply-To: References: Message-ID: > During the Language Summit 2011 (*), it was discussed that PyPy and > Jython don't support non-string key in type dict. An issue was open to > emit a warning on such dict, but the patch has not been commited yet. It's the issue #11455. As written in the issue, there are two ways to create such type: class A(object): locals()[42] = "abc" or type("A", (object,), {42: "abc"}) Both look like an ugly hack. Victor From ckaynor at zindagigames.com Thu Mar 8 02:49:55 2012 From: ckaynor at zindagigames.com (Chris Kaynor) Date: Wed, 7 Mar 2012 17:49:55 -0800 Subject: [Python-Dev] Non-string keys in type dict In-Reply-To: References: Message-ID: On Wed, Mar 7, 2012 at 5:45 PM, Victor Stinner wrote: > > During the Language Summit 2011 (*), it was discussed that PyPy and > > Jython don't support non-string key in type dict. An issue was open to > > emit a warning on such dict, but the patch has not been commited yet. > > It's the issue #11455. 
As written in the issue, there are two ways to create such type:
>
>     class A(object):
>         locals()[42] = "abc"
>
> or
>
>     type("A", (object,), {42: "abc"})
>
> Both look like an ugly hack.
>

Here is a cleaner version, using metaclasses (Python 2.6):

    class M(type):
        def __new__(mcs, name, bases, dict):
            dict[42] = 'abc'
            return super(M, mcs).__new__(mcs, name, bases, dict)

    class A(object):
        __metaclass__ = M

> Victor
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/ckaynor%40zindagigames.com

From brett at brett.geek.nz Thu Mar 8 02:52:28 2012
From: brett at brett.geek.nz (Brett Wilkins)
Date: Thu, 08 Mar 2012 14:52:28 +1300
Subject: [Python-Dev] Non-string keys in type dict
In-Reply-To: <4F58100B.7000608@brett.geek.nz>
References: <4F58100B.7000608@brett.geek.nz>
Message-ID: <4F5810DC.2090708@brett.geek.nz>

I see that I've misunderstood this entirely, nevermind me.

--Brett

On 08/03/12 14:48, Brett Wilkins wrote:
> I assume when you say "non-string keys" this includes numbers.
>
> But in Pypy, I can certainly use numbers:
>>>> {'1':1, 1:2}.keys()
> ['1', 1]
>
> I can even use a lambda (obviously not a string, a number, nor what I
> would consider a primitive):
>>>> {'1':1, (lambda x: x):2}.keys()
> ['1', <function <lambda> at 0x00007fdb0b837da8>]
>
> These are in Pypy 1.8.
>
> --Brett
>
> On Thu 08 Mar 2012 14:39:40 NZDT, Victor Stinner wrote:
>> Hi,
>>
>> During the Language Summit 2011 (*), it was discussed that PyPy and
>> Jython don't support non-string key in type dict. An issue was open to
>> emit a warning on such dict, but the patch has not been commited yet.
>> I'm trying to Lib/test/crashers/losing_mro_ref.py: I wrote a patch
>> fixing the specific issue (keep a strong reference to the MRO during
>> the lookup, see #14199), but I realized that the real problem is that
>> we allow custom objects in the type dict.
>>
>> So my question is: what is the use case of such dict? Why do we still
>> support it? Can't we simply raise an error if the dict contains
>> non-string keys?
>>
>> (*) http://blog.python.org/2011/03/2011-language-summit-report.html
>>
>> Victor

From brett at brett.geek.nz Thu Mar 8 02:48:59 2012
From: brett at brett.geek.nz (Brett Wilkins)
Date: Thu, 08 Mar 2012 14:48:59 +1300
Subject: [Python-Dev] Non-string keys in type dict
In-Reply-To: 
References: 
Message-ID: <4F58100B.7000608@brett.geek.nz>

I assume when you say "non-string keys" this includes numbers.

But in Pypy, I can certainly use numbers:

>>> {'1':1, 1:2}.keys()
['1', 1]

I can even use a lambda (obviously not a string, a number, nor what I would consider a primitive):

>>> {'1':1, (lambda x: x):2}.keys()
['1', <function <lambda> at 0x00007fdb0b837da8>]

These are in Pypy 1.8.

--Brett

On Thu 08 Mar 2012 14:39:40 NZDT, Victor Stinner wrote:
> Hi,
>
> During the Language Summit 2011 (*), it was discussed that PyPy and
> Jython don't support non-string key in type dict. An issue was open to
> emit a warning on such dict, but the patch has not been commited yet.
Can't we simply raise an error if the dict contains > non-string keys? > > (*) http://blog.python.org/2011/03/2011-language-summit-report.html > > Victor > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/brett%40brett.geek.nz From ncoghlan at gmail.com Thu Mar 8 03:20:21 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 8 Mar 2012 12:20:21 +1000 Subject: [Python-Dev] Non-string keys in type dict In-Reply-To: References: Message-ID: On Thu, Mar 8, 2012 at 11:42 AM, Benjamin Peterson wrote: > 2012/3/7 Victor Stinner : >> Can't we simply raise an error if the dict contains >> non-string keys? > > Sounds okay to me. For 3.3, the most we can do is trigger a deprecation warning, since removing this feature *will* break currently running code. I don't have any objection to us starting down that path, though. Regards, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From steve at pearwood.info Thu Mar 8 04:15:33 2012 From: steve at pearwood.info (Steven D'Aprano) Date: Thu, 8 Mar 2012 14:15:33 +1100 Subject: [Python-Dev] Non-string keys in type dict In-Reply-To: References: Message-ID: <20120308031533.GC19312@ando> On Thu, Mar 08, 2012 at 12:20:21PM +1000, Nick Coghlan wrote: > On Thu, Mar 8, 2012 at 11:42 AM, Benjamin Peterson wrote: > > 2012/3/7 Victor Stinner : > >> Can't we simply raise an error if the dict contains > >> non-string keys? > > > > Sounds okay to me. > > For 3.3, the most we can do is trigger a deprecation warning, since > removing this feature *will* break currently running code. I don't > have any objection to us starting down that path, though. Could we make string-key-only dicts a public type instead of an implementation detail? I've used string-only keys in my code, and it seems silly to have to re-invent the wheel. 
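[A user-level approximation of such a string-key-only mapping is easy to sketch. This is hypothetical code -- no such type ships in the stdlib:]

```python
# Hypothetical sketch of a dict that only accepts str keys, roughly the
# kind of type being requested.  Note this simple version only guards
# __setitem__; update(), setdefault(), the constructor, etc. would need
# the same treatment in a real implementation.
class StringDict(dict):
    def __setitem__(self, key, value):
        if not isinstance(key, str):
            raise TypeError("keys must be str, got %r" % (key,))
        super().__setitem__(key, value)

d = StringDict()
d["name"] = "value"     # fine
rejected = False
try:
    d[42] = "boom"      # rejected
except TypeError:
    rejected = True
assert rejected
```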
I don't care if it is a built-in. I don't even care if I have to do something gnarly like StringDict = type(type.__dict__), so long as doing so is officially supported. (But "from collections import StringDict" would be better from the point of discoverability.) -- Steven From pje at telecommunity.com Thu Mar 8 06:35:46 2012 From: pje at telecommunity.com (PJ Eby) Date: Thu, 8 Mar 2012 00:35:46 -0500 Subject: [Python-Dev] Non-string keys in type dict In-Reply-To: References: Message-ID: On Wed, Mar 7, 2012 at 8:39 PM, Victor Stinner wrote: > So my question is: what is the use case of such dict? Well, I use them for this: http://pypi.python.org/pypi/AddOns (And I have various other libraries that depend on that library.) Short version: AddOns are things you can use to dynamically extend instances -- a bit like the "decorator" in "decorator pattern" (not to be confused with Python decorators). Rather than synthesize a unique string as a dictionary key, I just used the AddOn classes themselves as keys. This works fine for object instances, but gets hairy once classes come into play. ( http://pypi.python.org/pypi/AddOns#class-add-ons - an orthogonal alternative to writing hairy metaclasses with registries for special methods, persisted attributes, and all other sorts of things one would ordinarily use metaclasses for.) In principle, I could refactor AddOns to use synthetic (i.e. made-up) strings as keys, but it honestly seemed unpythonic to me to make up a key when the One Obvious key to use is the AddOn type itself. (Or in some cases, a tuple comprised of an AddOn type plus additional values - which would mean string manipulation for every access.) Another possible solution would be to not store addons directly in a class' dictionary, but instead throw in an __addons__ key with a subdictionary; again this seemed like pointless indirection, wasted memory and access time when there's already a perfectly good dictionary lying about. 
IOW, it's one of those places where Python's simple orthogonality seems like a feature rather than a bug that needs fixing. I mean, next thing you know, people will be saying that *instance* dictionaries need to have only string keys or something. ;-) Of course, if my library has to change to be able to work on 3.3, then I guess it'll have to change. IIRC, this is *probably* the only place I'm using non-string keys in type or instance dictionaries, so in the big scheme of porting costs, it's not that much. But, since you asked, that's the main use case I know of for non-string keys in type dictionaries, and I wouldn't be terribly surprised if I'm the only person with public code that does this. ;-) -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Thu Mar 8 07:00:26 2012 From: arigo at tunes.org (Armin Rigo) Date: Wed, 7 Mar 2012 22:00:26 -0800 Subject: [Python-Dev] Sandboxing Python In-Reply-To: References: <4F528F32.3060409@gmail.com> <4F53E220.9040006@canterbury.ac.nz> <4F552E48.8040008@canterbury.ac.nz> <4F554ED5.7040006@v.loewis.de> Message-ID: Hi Stefan, Stefan Behnel wrote: > could you please stop bashing CPython for no good reason, especially on > python-dev? Specifically, to call it broken beyond repair is a rather > offensive claim, especially when made in public. Sorry if you were offended. I am just trying to point out that CPython has a rather large number of *far-fetched* corner cases in which it is broken. (If this is news to anyone, sorry, but examples have been part of the CPython source tree for years and years.) This is of course very different from saying that CPython is generally broken --- I don't think anyone here considers that it is. My point is merely to repeat that CPython is not suited to be the (only) line of defence in any place that needs serious security. I personally think that the removal of 'rexec' back around Python 2.3(?) 
was a good idea, as such tools give people a false sense of security.

A bientôt,

Armin.

From fijall at gmail.com Thu Mar 8 07:50:41 2012
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Wed, 7 Mar 2012 22:50:41 -0800
Subject: [Python-Dev] Non-string keys in type dict
In-Reply-To: 
References: 
Message-ID: 

On Wed, Mar 7, 2012 at 5:42 PM, Benjamin Peterson wrote:
> 2012/3/7 Victor Stinner :
>> So my question is: what is the use case of such dict? Why do we still
>> support it?
>
> Probably a side-effect of implementation.
>
>> Can't we simply raise an error if the dict contains
>> non-string keys?
>
> Sounds okay to me.
>
> --
> Regards,
> Benjamin

I think the original reason given was that enforcing the type would mean a performance hit, but I might be misremembering.

From stefan_ml at behnel.de Thu Mar 8 08:16:33 2012
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Thu, 08 Mar 2012 08:16:33 +0100
Subject: [Python-Dev] Sandboxing Python
In-Reply-To: 
References: <4F528F32.3060409@gmail.com> <4F53E220.9040006@canterbury.ac.nz> <4F552E48.8040008@canterbury.ac.nz>
Message-ID: 

Maciej Fijalkowski, 06.03.2012 00:08:
> For a comparison, PyPy sandbox is a compiled from higher-level
> language program that by design does not have all sorts of problems
> described. The amount of code you need to carefully review is very
> minimal (as compared to the entire CPython interpreter). It does not
> mean it has no bugs, but it does mean finding segfaults is a
> significantly harder endeavour.

Well, there's a bug tracker that lists some of them, which is not *that* hard to find. Does your claim about "a significantly harder endeavour" refer to finding a crash or to finding a fix for it?
Stefan

From ethan at stoneleaf.us Thu Mar 8 08:43:15 2012
From: ethan at stoneleaf.us (Ethan Furman)
Date: Wed, 07 Mar 2012 23:43:15 -0800
Subject: [Python-Dev] Non-string keys in type dict
In-Reply-To: 
References: 
Message-ID: <4F586313.5060001@stoneleaf.us>

PJ Eby wrote:
> Short version: AddOns are things you can use to dynamically extend
> instances -- a bit like the "decorator" in "decorator pattern" (not to
> be confused with Python decorators). Rather than synthesize a unique
> string as a dictionary key, I just used the AddOn classes themselves as
> keys. This works fine for object instances, but gets hairy once classes
> come into play.

Are you able to modify classes after class creation in Python 3? Without using a metaclass?

~Ethan~

From ethan at stoneleaf.us Thu Mar 8 08:46:34 2012
From: ethan at stoneleaf.us (Ethan Furman)
Date: Wed, 07 Mar 2012 23:46:34 -0800
Subject: [Python-Dev] Non-string keys in type dict
In-Reply-To: 
References: 
Message-ID: <4F5863DA.5030206@stoneleaf.us>

Nick Coghlan wrote:
> On Thu, Mar 8, 2012 at 11:42 AM, Benjamin Peterson wrote:
>> 2012/3/7 Victor Stinner :
>>> Can't we simply raise an error if the dict contains
>>> non-string keys?
>> Sounds okay to me.
>
> For 3.3, the most we can do is trigger a deprecation warning, since
> removing this feature *will* break currently running code. I don't
> have any objection to us starting down that path, though.

I think it would be sad to lose that functionality.

If we are going to, though, we may as well check the string to make sure it's a valid identifier:

--> class A:
-->   pass
--> setattr(A, '42', 'hrm')
--> A.42
  File "<stdin>", line 1
    A.42
       ^
SyntaxError: invalid syntax

Doesn't seem very useful.
~Ethan~ From regebro at gmail.com Thu Mar 8 10:10:11 2012 From: regebro at gmail.com (Lennart Regebro) Date: Thu, 8 Mar 2012 10:10:11 +0100 Subject: [Python-Dev] Non-string keys in type dict In-Reply-To: <4F5863DA.5030206@stoneleaf.us> References: <4F5863DA.5030206@stoneleaf.us> Message-ID: On Thu, Mar 8, 2012 at 08:46, Ethan Furman wrote: > I think it would be sad to lose that functionality. > > If we are going to, though, we may as well check the string to make sure > it's a valid identifier: That would break even more code. I have encountered many cases of attributes that aren't valid identifiers, in particular using dots or dashes. Admittedly this is often in cases where the object has both attribute access and key access, so you can make foo['bar-frotz'] instead. But when should we then require that it is a valid identifier and when not? > --> class A: > --> pass > --> setattr(A, '42', 'hrm') > --> A.42 > File "<stdin>", line 1 > A.42 > ^ > SyntaxError: invalid syntax > > Doesn't seem very useful. You have to set it with setattr, so you have to get it with getattr. I don't see the problem. //Lennart From mark at hotpy.org Thu Mar 8 12:52:21 2012 From: mark at hotpy.org (Mark Shannon) Date: Thu, 08 Mar 2012 11:52:21 +0000 Subject: [Python-Dev] problem with recursive "yield from" delegation In-Reply-To: References: Message-ID: <4F589D75.8090302@hotpy.org> Stefan Behnel wrote: > Hi, > > I found a problem in the current "yield from" implementation that I think > is worth discussing: > > http://bugs.python.org/issue14220 > [snip] I've been experimenting with the implementation of PEP 380, and I found a couple of interesting things. First of all, the semantics described in the PEP do not match the tests. If you substitute the supposedly semantically equivalent code based on normal yields for each yield from in the test code (Lib/test/test_pep380.py) and run it, then it fails.
My second experiment involved stripping away all the code relating to yield from outside of the interpreter and changing the YIELD_FROM bytecode to repeat itself, by setting the last instruction to the instruction immediately before itself. To do this I added 4 lines of code and removed over 120 lines :) This fails many of the tests*, but works for the most straightforward use cases. Many of these failures seem to me to be more 'natural' than the current behaviour. It might be possible to fix most, or all?, of the other failures by compiling "yield from" into a two opcode sequence: YIELD_FROM_START and YIELD_FROM_REPEAT. Both opcodes should be reasonably simple, and __next__() and send() would not have to worry about the subiterator, although close() and throw() might (the subiterator would be top-of-stack in the generator's frame). Overall the semantics of PEP 380 seem far too complex, trying to do several things at once. An example: Plain yield makes no distinction between receiving a None and any other value, so send(None) and __next__() are the same. Yield from makes this distinction, so it has to test for None, meaning the semantics of send now change with its argument. I would recommend changing one of two things in the PEP: Either, close and throw should not close/throw in subiterators (this would simplify the semantics and implementation immensely) Or, only allow subgenerators, not subiterators (this would fix the next/send problem). I would also suggest a change in implementation to perform all yields within the interpreter. A simpler implementation is likely to be a more reliable one. Finally, the PEP itself makes no mention of coroutines, stackless or greenlet in the alternatives or criticisms section. Perhaps it should. Cheers, Mark. *Tests in PEP 380. It passes all other tests.
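[For readers following along, here is a minimal sketch of the delegation behaviour under discussion, on plain CPython with PEP 380 support (not the patched interpreter described above): next() and send() pass straight through to the subiterator.]

```python
def inner():
    # The value delivered by send() shows up as the result of the yield.
    received = yield "ready"
    yield received

def outer():
    yield from inner()  # delegate next()/send() to the subgenerator

g = outer()
print(next(g))         # "ready" -- next() is the same as send(None)
print(g.send("ping"))  # "ping" -- non-None values are forwarded via send()
```

This only exercises the happy path; the close()/throw() corner cases raised above are exactly the parts such a sketch does not cover.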
From ncoghlan at gmail.com Thu Mar 8 13:04:07 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 8 Mar 2012 22:04:07 +1000 Subject: [Python-Dev] problem with recursive "yield from" delegation In-Reply-To: <4F589D75.8090302@hotpy.org> References: <4F589D75.8090302@hotpy.org> Message-ID: On Thu, Mar 8, 2012 at 9:52 PM, Mark Shannon wrote: > I would recommend changing one of two things in the PEP: > Either, close and throw should not close/throw in subiterators > (this would simplify the semantics and implementation immensely) > Or, only allow subgenerators, not subiterators > (this would fix the next/send problem). Either of those changes would completely defeat the point of the PEP. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From mark at hotpy.org Thu Mar 8 13:06:09 2012 From: mark at hotpy.org (Mark Shannon) Date: Thu, 08 Mar 2012 12:06:09 +0000 Subject: [Python-Dev] Adding a builtins parameter to eval(), exec() and __import__(). In-Reply-To: <4f5801dc.3221340a.3486.0373@mx.google.com> References: <4f5801dc.3221340a.3486.0373@mx.google.com> Message-ID: <4F58A0B1.4050701@hotpy.org> Jim J. Jewett wrote: > > http://mail.python.org/pipermail/python-dev/2012-March/117395.html > Brett Cannon posted: > > [in reply to Mark Shannon's suggestion of adding a builtins parameter > to match locals and globals] > >> It's a mess right now to try to grab the __import__() >> implementation and this would actually help clarify import semantics by >> saying that __import__() for any chained imports comes from __import__()'s >> locals, globals, or builtins arguments (in that order) or from the builtins >> module itself (i.e. tstate->builtins). > > How does that differ from today? The idea is that you can change, presumably restrict, the builtins separately from the globals for an import. > > If you're saying that the locals and (module-level) globals aren't > always checked in order, then that is a semantic change.
Probably a good change, but still a change -- and it can be made independently of Mark's suggestion. > Also note that I would assume this was for sandboxing, Actually, I just think it's a cleaner implementation, but sandboxing is a good excuse :) > and that > missing names should *not* fall back to the "real" globals, although > I would understand if bootstrapping required the import statement to > get special treatment. > > > (Note that I like Mark's proposed change; I just don't see how it > cleans up import.) I don't think it cleans up import, but I'll defer to Brett on that. I've included __import__() along with exec and eval as it is a place where new namespaces can be introduced into an execution. There may be others I haven't thought of. Cheers, Mark. From ncoghlan at gmail.com Thu Mar 8 13:07:00 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 8 Mar 2012 22:07:00 +1000 Subject: [Python-Dev] problem with recursive "yield from" delegation In-Reply-To: <4F589D75.8090302@hotpy.org> References: <4F589D75.8090302@hotpy.org> Message-ID: On Thu, Mar 8, 2012 at 9:52 PM, Mark Shannon wrote: > First of all, the semantics described in the PEP do not match the tests. > If you substitute the supposedly semantically equivalent code > based on normal yields for each yield from in the test code > (Lib/test/test_pep380.py) and run it, then it fails. What's more important is whether or not it matches the semantics of inlining the subgenerator bodies. The expansion in the PEP was an attempt to define a way to achieve that in current Python without interpreter support. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Thu Mar 8 13:52:36 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 8 Mar 2012 22:52:36 +1000 Subject: [Python-Dev] Adding a builtins parameter to eval(), exec() and __import__().
In-Reply-To: <4F58A0B1.4050701@hotpy.org> References: <4f5801dc.3221340a.3486.0373@mx.google.com> <4F58A0B1.4050701@hotpy.org> Message-ID: On Thu, Mar 8, 2012 at 10:06 PM, Mark Shannon wrote: > I don't think it cleans up import, but I'll defer to Brett on that. > I've included __import__() along with exec and eval as it is a place where > new namespaces can be introduced into an execution. > There may be others I haven't though of. runpy is another one. However, the problem I see with "builtins" as a separate argument is that it would be a lie. The element that's most interesting about locals vs globals vs builtins is the scope of visibility of their contents. When I call out to another function in the same module, locals are not shared, but globals and builtins are. When I call out to code in a *different* module, neither locals nor globals are shared, but builtins are still common. So there are two ways this purported extra "builtins" parameter could work: 1. Sandboxing - you try to genuinely give the execution context a different set of builtins that's shared by all code executed, even imports from other modules. However, I assume this isn't what you meant, since it is the domain of sandboxing utilities like Victor's pysandbox and is known to be incredibly difficult to get right (hence the demise of both rexec and Bastion and recent comments about known segfault vulnerabilities that are tolerable in the normal case of merely processing untrusted data with trusted code but anathema to a robust CPython native sandboxing scheme that can still cope even when the code itself is untrusted). 2. chained globals - just an extra namespace that's chained behind the globals dictionary for name lookup, not actually shared with code invoked from other modules. The second approach is potentially useful, but: 1. "builtins" is *not* the right name for it (because any other code invoked will still be using the original builtins) 2. 
it's already trivial to achieve such chained lookups in 3.3 by passing a collections.ChainMap instance as the globals parameter: http://docs.python.org/dev/library/collections#collections.ChainMap collections.ChainMap also has the virtue of working with any current API that accepts a globals argument and can be extended to an arbitrary level of chaining, whereas this suggestion requires that all such APIs be expanded to accept a third parameter, and could still only chain lookups one additional step in doing so. So a big -1 from me. Cheers, Nick. P.S. I've referenced this talk before, but Tim Dawborn's effort from PyCon AU last year about the sandboxing setup for http://www.ncss.edu.au/ should be required viewing for anyone wanting to understand the kind of effort it takes to fairly comprehensively protect host servers from attacks when executing arbitrary untrusted Python code on CPython. Implementing such protection is certainly *possible* (since Tim's talk is all about one way to do it), but it's not easy, and Tim's approach uses Linux OS level sandboxing rather than relying on a Python language level sandbox. This was largely due to a university requirement that the sandbox solution be language agnostic, but it also serves to protect the sandbox from the documented attacks against the CPython interpreter. Tim reviews a few interesting attempts to break the sandbox around the 5 minute mark in https://www.youtube.com/watch?v=y-WPPdhTKBU. (I did suggest he grab our test_crashers directory to see what happened when they were run in the sandbox, but I doubt it would be much more interesting than merely calling "sys.exit()") -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ejjyrex at gmail.com Thu Mar 8 14:09:42 2012 From: ejjyrex at gmail.com (Ejaj Hassan) Date: Thu, 8 Mar 2012 18:39:42 +0530 Subject: [Python-Dev] steps to solve bugs Message-ID: Hi, I am a novice Python programmer and am learning to be able to solve some issues.
Following the steps given on the PSF website, I have installed VC++ 2008 and finished building the CPython code, and I have got the console for Python 3.0x. Having done this, I am not able to quite follow the further steps to solve the bugs. Currently I am wandering in the issue tracker, though not yet working on it. So I kindly request that you help me understand the further steps I must take to get it done. Thanks Regards, Ejaj From ncoghlan at gmail.com Thu Mar 8 14:23:20 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 8 Mar 2012 23:23:20 +1000 Subject: [Python-Dev] steps to solve bugs In-Reply-To: References: Message-ID: On Thu, Mar 8, 2012 at 11:09 PM, Ejaj Hassan wrote: > Hi, > I am a novice Python programmer and am learning to be able to solve > some issues. Following the steps given on the PSF website, I have > installed VC++ 2008 and finished building the CPython code, and I have > got the console for Python 3.0x. > Having done this, I am not able to quite follow the further steps to solve the bugs. > Currently I am wandering in the issue tracker, though not yet working on it. > So I kindly request that you help me understand the further steps I must take to get it done. Hi Ejaj, If you're interested in getting started working on items on the issue tracker, I suggest signing up for the core-mentorship list (core-mentorship at python.org) and reposting your question there. It's specifically for this kind of question, whereas python-dev is intended for design and development discussions. Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com |
Brisbane, Australia From phd at phdru.name Thu Mar 8 14:24:38 2012 From: phd at phdru.name (Oleg Broytman) Date: Thu, 8 Mar 2012 17:24:38 +0400 Subject: [Python-Dev] steps to solve bugs In-Reply-To: References: Message-ID: <20120308132438.GA6585@iskra.aviel.ru> On Thu, Mar 08, 2012 at 06:39:42PM +0530, Ejaj Hassan wrote: > I am a novice python programmer and am learning to be able to solve > some issues. Well following the steps given in the PSF website, I have > installed vc++ 2008 and even finished till building the cpython > code and I have got the console for python 3.0x > Having done this, I am not able to quite follow the further steps to solve the bugs . > Currently I am wondering in the issues tracker though not still working on it. Are you debugging the Python core, the standard Python library or your own code written in Python? Except for the core you don't need to recompile the Python interpreter - just download ready one. If you are debugging your own code - this mailing list is a wrong place to look for help. This mailing list is to work on developing Python (adding new features to Python itself and fixing bugs); if you're having problems learning, understanding or using Python, please find another forum. Probably python-list/comp.lang.python mailing list/news group is the best place; there are Python developers who participate in it; you may get a faster, and probably more complete, answer there. See http://www.python.org/community/ for other lists/news groups/fora. Oleg. -- Oleg Broytman http://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. From mark at hotpy.org Thu Mar 8 14:40:46 2012 From: mark at hotpy.org (Mark Shannon) Date: Thu, 08 Mar 2012 13:40:46 +0000 Subject: [Python-Dev] Adding a builtins parameter to eval(), exec() and __import__(). 
In-Reply-To: References: <4f5801dc.3221340a.3486.0373@mx.google.com> <4F58A0B1.4050701@hotpy.org> Message-ID: <4F58B6DE.2020309@hotpy.org> Nick Coghlan wrote: > On Thu, Mar 8, 2012 at 10:06 PM, Mark Shannon wrote: >> I don't think it cleans up import, but I'll defer to Brett on that. >> I've included __import__() along with exec and eval as it is a place where >> new namespaces can be introduced into an execution. >> There may be others I haven't though of. > > runpy is another one. Add that to the list. > > However, the problem I see with "builtins" as a separate argument is > that it would be a lie. > > The element that's most interesting about locals vs globals vs > builtins is the scope of visibility of their contents. > > When I call out to another function in the same module, locals are not > shared, but globals and builtins are. > > When I call out to code in a *different* module, neither locals nor > globals are shared, but builtins are still common. Not necessarily. All functions in a module will inherit their globals *and* builtins from the module, which gets them from __import__(). > > So there are two ways this purported extra "builtins" parameter could work: > > 1. Sandboxing - you try to genuinely give the execution context a > different set of builtins that's shared by all code executed, even > imports from other modules. Victor's pysandbox seems pretty good to me, I had a go at breaking it and failed, but it is too restrictive. Rather than make pysandbox more secure, I think my proposal could make it more usable, as clearer guarantees about access and visibility can be provided to the sandbox developer. You shouldn't need to cripple introspection in order to limit access to the builtins. 
> However, I assume this isn't what you > meant, since it is the domain of sandboxing utilities like Victor's > pysandbox and is known to be incredibly difficult to get right (hence > the demise of both rexec and Bastion and recent comments about known > segfault vulnerabilities that are tolerable in the normal case of > merely processing untrusted data with trusted code but anathema to a > robust CPython native sandboxing scheme that can still cope even when > the code itself is untrusted). Basing the implementation around immutable "execution contexts" means that the compiler will enforce things for us. Static typing has its advantages, occasionally :) As I stated elsewhere, the crashers can be fixed. I think Victor has already fixed a couple. > > 2. chained globals - just an extra namespace that's chained behind the > globals dictionary for name lookup, not actually shared with code > invoked from other modules. That's exactly what builtins already are. They are a fallback for LOAD_GLOBAL and similar when something isn't found in the globals. > > The second approach is potentially useful, but: > > 1. "builtins" is *not* the right name for it (because any other code > invoked will still be using the original builtins) Other code will use whatever builtins they were given at __import__. The key point is that every piece of code already inherits locals, globals and builtins from somewhere else. We can already control locals (by which parameters are passed in) and globals via exec, eval, __import__, and runpy (any others?) but we can't control builtins. One last point is that this is a low-impact change. All code using eval, etc. will continue to work as before. It also may speed things up a little. Cheers, Mark.
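[As a concrete sketch of the ChainMap alternative mentioned upthread: lookups fall through the chain much as globals fall back to builtins. Note one caveat not spelled out in the thread: eval()'s globals argument must be a real dict, so the chained mapping is passed as the locals here.]

```python
from collections import ChainMap

overrides = {"greeting": "hello"}
defaults = {"greeting": "hi", "name": "world"}

# Names are looked up in overrides first, then fall through to defaults.
namespace = ChainMap(overrides, defaults)
print(eval("greeting + ', ' + name", {}, namespace))  # hello, world
```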
From mark at hotpy.org Thu Mar 8 14:45:20 2012 From: mark at hotpy.org (Mark Shannon) Date: Thu, 08 Mar 2012 13:45:20 +0000 Subject: [Python-Dev] problem with recursive "yield from" delegation In-Reply-To: References: <4F589D75.8090302@hotpy.org> Message-ID: <4F58B7F0.8070805@hotpy.org> Nick Coghlan wrote: > On Thu, Mar 8, 2012 at 9:52 PM, Mark Shannon wrote: >> First of all, the semantics described in the PEP do not match the tests. >> If you substitute the supposedly semantically equivalent code >> based on normal yields for each yield from in the test code >> (Lib/test/test_pep380.py) and run it, then it fails. > > What's more important is whether or not it matches the semantics of > inlining the subgenerator bodies. The expansion in the PEP was an > attempt to define a way to achieve that in current Python without > interpreter support. So "yield from X" means "inline X here", if X is a generator. That is much, much easier to understand than the big block of code in the PEP. It really ought to say that "yield from" is equivalent to inlining in the PEP. Cheers, Mark From solipsis at pitrou.net Thu Mar 8 14:41:55 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 8 Mar 2012 14:41:55 +0100 Subject: [Python-Dev] PEP 412 has been accepted References: Message-ID: <20120308144155.212f10b7@pitrou.net> On Wed, 7 Mar 2012 18:52:43 -0500 Brett Cannon wrote: > Since PEP 412 has code that doesn't break tests anymore (thanks to hash > randomization), it was just accepted. Mark, can you make sure there is an > up-to-date patch in the tracker so people can potentially look at the code > at the sprints here at PyCon? And also please apply for core dev privileges > (http://docs.python.org/devguide/coredev.html) so that we can make you fix > bugs. =) For the record (I had to look it up), PEP 412 is Mark Shannon's "Key-Sharing Dictionary", an optimization that decreases memory consumption of instances. http://www.python.org/dev/peps/pep-0412/ Regards Antoine.
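[The effect of the key-sharing dictionary is observable from Python on an interpreter that implements PEP 412: instances of the same class share one key table, so an instance __dict__ reports as no larger than an ordinary dict holding the same items. A small sketch -- the exact byte counts vary by CPython version:]

```python
import sys

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)
plain = dict(p.__dict__)  # same contents, but an ordinary (non-shared) dict

# On CPython 3.3+ the instance dict shares its keys with the class,
# so it should be no larger than the equivalent plain dict.
print(sys.getsizeof(p.__dict__), sys.getsizeof(plain))
```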
From ncoghlan at gmail.com Thu Mar 8 14:57:32 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 8 Mar 2012 23:57:32 +1000 Subject: [Python-Dev] Adding a builtins parameter to eval(), exec() and __import__(). In-Reply-To: <4F58B6DE.2020309@hotpy.org> References: <4f5801dc.3221340a.3486.0373@mx.google.com> <4F58A0B1.4050701@hotpy.org> <4F58B6DE.2020309@hotpy.org> Message-ID: On Thu, Mar 8, 2012 at 11:40 PM, Mark Shannon wrote: > Other code will use whatever builtins they were given at __import__. Then they're not builtins - they're module-specific chained globals. The thing that makes the builtins special is *who else* can see them (i.e. all the other code in the process). If you replace builtins.open, you replace it for everyone (that hasn't either shadowed it or cached a reference to the original). > The key point is that every piece of code already inherits locals, globals > and builtins from somewhere else. > We can already control locals (by which parameters are passed in) and > globals via exec, eval, __import__, and runpy (any others?) > but we can't control builtins. Correct - because controlling builtins is the domain of sandboxes. > One last point is that this is a low-impact change. All code using eval, > etc. will continue to work as before. > It also may speed things up a little. Passing in a ChainMap instance as the globals when you want to include an additional namespace in the lookup chain is even lower impact. A reference implementation and concrete use cases might change my mind, but for now, I'm just seeing a horrendously complicated approach with huge implications for the runtime data model semantics for something that 3.3 already supports in a much simpler fashion. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com |
Brisbane, Australia From mark at hotpy.org Thu Mar 8 15:06:17 2012 From: mark at hotpy.org (Mark Shannon) Date: Thu, 08 Mar 2012 14:06:17 +0000 Subject: [Python-Dev] Request for clarification of PEP 380 Message-ID: <4F58BCD9.8010707@hotpy.org> Hi, The scenario is this: A generator, G, has a non-generator sub-iterator, S, (i.e. G includes a "yield from S" expression and S is not a generator) and either G.close() or G.throw(GeneratorExit) is called. In the current implementation, S.close() is called and, if that call raises an exception, then that exception is suppressed. Should close() be called at all? I know that it helps non-generators to support the protocol, but there is the problem of iterables that happen to have a close method. This may cause unwanted side effects. Why is the exception suppressed? The text of the PEP seems to implicitly assume that all sub-iterators will be generators, so it is not clear on the above points. Cheers, Mark. From ncoghlan at gmail.com Thu Mar 8 15:11:15 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 9 Mar 2012 00:11:15 +1000 Subject: [Python-Dev] problem with recursive "yield from" delegation In-Reply-To: <4F58B7F0.8070805@hotpy.org> References: <4F589D75.8090302@hotpy.org> <4F58B7F0.8070805@hotpy.org> Message-ID: On Thu, Mar 8, 2012 at 11:45 PM, Mark Shannon wrote: > It really ought to say that "yield from" is equivalent to inlining > in the PEP. That's what the motivation section is about. There's also an entire subsection called "The Refactoring Principle" that describes this intent. However, we needed something more concrete to flesh out the original implementation details, which is what the code equivalent was designed to provide (it was also designed to point out that the days of "you can just use a simple for loop" actually went away when PEP 342 was implemented).
Now, it may be that we fixed things during implementation that should be reflected back into the formal semantic spec in the PEP - so if you can point out specific cases where what we implemented doesn't match the nominal behaviour, I'm open to updating the PEP accordingly. (Of course, if there are any tests that fail solely due to the two known differences in semantics that are already noted in the PEP, they don't count). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Thu Mar 8 15:15:20 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 9 Mar 2012 00:15:20 +1000 Subject: [Python-Dev] Request for clarification of PEP 380 In-Reply-To: <4F58BCD9.8010707@hotpy.org> References: <4F58BCD9.8010707@hotpy.org> Message-ID: On Fri, Mar 9, 2012 at 12:06 AM, Mark Shannon wrote: > > The text of the PEP seems to implicitly assume that all sub-iterators > will be generators, so it is not clear on the above points. On the contrary, this question is explicitly addressed in the PEP: http://www.python.org/dev/peps/pep-0380/#finalization If you want to block an inadvertent close() call, you need to either wrap the object so it doesn't implement unwanted parts of the generator API (which is the iterator protocol plus send(), throw() and close()), or else you should continue to use simple iteration rather than delegation. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From barry at barrys-emacs.org Thu Mar 8 15:06:07 2012 From: barry at barrys-emacs.org (Barry Scott) Date: Thu, 8 Mar 2012 14:06:07 +0000 Subject: [Python-Dev] Why does Mac OS X python share site-packages with apple python? In-Reply-To: References: <5A0E2490-A743-4729-A752-D94524EA9840@barrys-emacs.org> <4F54C6C3.9040401@netwok.org> Message-ID: <12A71B2D-8B9F-496D-8C7D-1D5F462D7026@barrys-emacs.org> Just to follow up.
With Robin's help over in wxPython land I have given Robin a patch to wxPython to fix the site-packages issue. Barry From mark at hotpy.org Thu Mar 8 16:08:48 2012 From: mark at hotpy.org (Mark Shannon) Date: Thu, 08 Mar 2012 15:08:48 +0000 Subject: [Python-Dev] Request for clarification of PEP 380 In-Reply-To: References: <4F58BCD9.8010707@hotpy.org> Message-ID: <4F58CB80.7050809@hotpy.org> Nick Coghlan wrote: > On Fri, Mar 9, 2012 at 12:06 AM, Mark Shannon wrote: >> The text of the PEP seems to implicitly assume that all sub-iterators >> will be generators, so it is not clear on the above points. > > On the contrary, this question is explicitly addressed in the PEP: > http://www.python.org/dev/peps/pep-0380/#finalization I should have read it more carefully. > > If you want to block an inadvertent close() call, you need to either > wrap the object so it doesn't implement unwanted parts of the > generator API (which is the iterator protocol plus send(), throw() and > close()), or else you should continue to use simple iteration rather > than delegation. > What about the exception being suppressed? That doesn't seem to be mentioned. Cheers, Mark. From guido at python.org Thu Mar 8 16:27:41 2012 From: guido at python.org (Guido van Rossum) Date: Thu, 8 Mar 2012 07:27:41 -0800 Subject: [Python-Dev] Non-string keys in type dict In-Reply-To: References: <4F5863DA.5030206@stoneleaf.us> Message-ID: On Thu, Mar 8, 2012 at 1:10 AM, Lennart Regebro wrote: > On Thu, Mar 8, 2012 at 08:46, Ethan Furman wrote: >> I think it would be sad to lose that functionality. >> >> If we are going to, though, we may as well check the string to make sure >> it's a valid identifier: > > That would break even more code. I have encountered many cases of > attributes that aren't valid identifiers, in particular using dots or > dashes. Admittedly this is often in cases where the object has both > attribute access and key access, so you can make foo['bar-frotz'] > instead. 
> But when should we then require that it is a valid identifier > and when not? > >> --> class A: >> --> pass >> --> setattr(A, '42', 'hrm') >> --> A.42 >> File "<stdin>", line 1 >> A.42 >> ^ >> SyntaxError: invalid syntax >> >> Doesn't seem very useful. > You have to set it with setattr, so you have to get it with getattr. I > don't see the problem. I'm with Lennart. I've spoken out on this particular question several times before; it is a *feature* that you can use any arbitrary string with getattr() and setattr(). However these functions should (and do!) reject non-strings. I'm lukewarm on forbidding non-strings in namespace dicts if you can get them in there by other means. I presume that some people might use this to hide state they don't want accessible using regular attribute notation (x.foo) and maybe somebody is even using some namespace's keys as a dict to store many things with non-string keys, but that feels like abuse of the namespace to me, and there are plenty of other ways to manage such state, so if it gives a significant speed-up to use string-only dicts, so be it. (Although Python's dicts already have some string-only optimizations -- they just dynamically adapt to a more generic and slightly slower approach once the first non-string key shows up.) As Nick says we should introduce a deprecation period if we do this. On Wed, Mar 7, 2012 at 11:43 PM, Ethan Furman wrote: > Are you able to modify classes after class creation in Python 3? Without > using a metaclass? Yes, by assignment to attributes. The __dict__ is a read-only proxy, but attribute assignment is allowed. (This is because the "new" type system introduced in Python 2.2 needs to *track* changes to the dict; it does this by tracking setattr/delattr calls, because dict doesn't have a way to trigger a hook on changes.)
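[A quick sketch of both points -- arbitrary strings are accepted by the functional API while non-strings are rejected, and classes accept new attributes after creation:]

```python
class A:
    pass

# Any string works through setattr()/getattr(), identifier or not...
setattr(A, 'not-an-identifier', 42)
print(getattr(A, 'not-an-identifier'))  # 42

# ...but non-string names are rejected outright.
try:
    setattr(A, 99, 'nope')
except TypeError:
    print('attribute names must be strings')

# Classes remain open to plain attribute assignment after creation.
A.added_later = 'hello'
print(A.added_later)  # hello
```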
-- --Guido van Rossum (python.org/~guido) From ethan at stoneleaf.us Thu Mar 8 17:22:41 2012 From: ethan at stoneleaf.us (Ethan Furman) Date: Thu, 08 Mar 2012 08:22:41 -0800 Subject: [Python-Dev] Non-string keys in type dict In-Reply-To: References: <4F5863DA.5030206@stoneleaf.us> Message-ID: <4F58DCD1.8040807@stoneleaf.us> Guido van Rossum wrote: > On Wed, Mar 7, 2012 at 11:43 PM, Ethan Furman wrote: >> Are you able to modify classes after class creation in Python 3? Without >> using a metaclass? > > Yes, by assignment to attributes. The __dict__ is a read-only proxy, > but attribute assignment is allowed. (This is because the "new" type > system introduced in Python 2.2 needs to *track* changes to the dict; > it does this by tracking setattr/delattr calls, because dict doesn't > have a way to trigger a hook on changes.) Poorly phrased question -- I meant is it possible to add non-string-name attributes to classes after class creation. During class creation we can do this: --> class Test: ... ns = vars() ... ns[42] = 'green eggs' ... del ns ...
--> Test --> Test.__dict__ dict_proxy({ '__module__': '__main__', 42: 'green eggs', '__doc__': None, '__dict__': , '__weakref__': , '__locals__': { 42: 'green eggs', '__module__': '__main__', '__locals__': {...}} }) --> Test.__dict__[42] 'green eggs' A little more experimentation shows that not all is well, however: --> dir(Test) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: unorderable types: int() < str() ~Ethan~ From guido at python.org Thu Mar 8 18:06:41 2012 From: guido at python.org (Guido van Rossum) Date: Thu, 8 Mar 2012 09:06:41 -0800 Subject: [Python-Dev] Non-string keys in type dict In-Reply-To: <4F58DCD1.8040807@stoneleaf.us> References: <4F5863DA.5030206@stoneleaf.us> <4F58DCD1.8040807@stoneleaf.us> Message-ID: On Thu, Mar 8, 2012 at 8:22 AM, Ethan Furman wrote: > Guido van Rossum wrote: > >> On Wed, Mar 7, 2012 at 11:43 PM, Ethan Furman wrote: >>> >>> Are you able to modify classes after class creation in Python 3? Without >>> using a metaclass? >> >> >> Yes, by assignment to attributes. The __dict__ is a read-only proxy, >> but attribute assignment is allowed. (This is because the "new" type >> system introduced in Python 2.2 needs to *track* changes to the dict; >> it does this by tracking setattr/delattr calls, because dict doesn't >> have a way to trigger a hook on changes.) > > > Poorly phrased question -- I meant is it possible to add non-string-name > attributes to classes after class creation. During class creation we can do > this: > > --> class Test: > ... ns = vars() > ... ns[42] = 'green eggs' > ... del ns > ... > --> Test > > --> Test.__dict__ > dict_proxy({ > '__module__': '__main__', > 42: 'green eggs', > '__doc__': None, > '__dict__': , > '__weakref__': , > '__locals__': { > 42: 'green eggs', > '__module__': '__main__', > '__locals__': {...}} > }) > --> Test.__dict__[42] > 'green eggs' > > A little more experimentation shows that not all is well, however: > > --> dir(Test) > Traceback (most recent call last): > File "<stdin>", line 1, in <module> > TypeError: unorderable types: int() < str() So what conclusion do you draw? -- --Guido van Rossum (python.org/~guido) From pje at telecommunity.com Thu Mar 8 18:16:28 2012 From: pje at telecommunity.com (PJ Eby) Date: Thu, 8 Mar 2012 12:16:28 -0500 Subject: [Python-Dev] Non-string keys in type dict Message-ID: On Thu, Mar 8, 2012 at 2:43 AM, Ethan Furman wrote: > > PJ Eby wrote: >> >> Short version: AddOns are things you can use to dynamically extend instances -- a bit like the "decorator" in "decorator pattern" (not to be confused with Python decorators). Rather than synthesize a unique string as a dictionary key, I just used the AddOn classes themselves as keys. This works fine for object instances, but gets hairy once classes come into play. > > > Are you able to modify classes after class creation in Python 3? Without using a metaclass? For ClassAddOns, it really doesn't matter; you can't remove them from the class they attach to. Addons created after the class is finalized use a weakref dictionary to attach to their classes. Now that I've gone back and looked at the code, the only reason that ClassAddOns even use the class __dict__ in the first place is because it's a convenient place to put them while the class is being built. With only slightly hairier code, I could use an __addons__ dict in the class namespace while it's being built, but there'll then be a performance hit at look up time to do cls.__dict__['__addons__'][key] instead of cls.__dict__[key]. Actually, now that I'm thinking about it, the non-modifiability of class dictionaries is actually a feature for this use case: if I make an __addons__ dict, that dict is mutable. That means I'll have to move to string keys or have some sort of immutable dict type available...
;-) (Either that, or do some other, more complex refactoring.) -------------- next part -------------- An HTML attachment was scrubbed... URL: From fwierzbicki at gmail.com Thu Mar 8 18:20:52 2012 From: fwierzbicki at gmail.com (fwierzbicki at gmail.com) Date: Thu, 8 Mar 2012 09:20:52 -0800 Subject: [Python-Dev] Non-string keys in type dict In-Reply-To: References: Message-ID: On Wed, Mar 7, 2012 at 5:39 PM, Victor Stinner wrote: > Hi, > > During the Language Summit 2011 (*), it was discussed that PyPy and > Jython don't support non-string key in type dict. An issue was open to > emit a warning on such dict, but the patch has not been commited yet. It should be noted that Jython started supporting non-string dict keys in version 2.5. IIRC this was done to get Django running, but in general we decided to go with compatibility. -Frank From guido at python.org Thu Mar 8 18:31:29 2012 From: guido at python.org (Guido van Rossum) Date: Thu, 8 Mar 2012 09:31:29 -0800 Subject: [Python-Dev] Adding a builtins parameter to eval(), exec() and __import__(). In-Reply-To: References: <4f5801dc.3221340a.3486.0373@mx.google.com> <4F58A0B1.4050701@hotpy.org> <4F58B6DE.2020309@hotpy.org> Message-ID: On Thu, Mar 8, 2012 at 5:57 AM, Nick Coghlan wrote: > On Thu, Mar 8, 2012 at 11:40 PM, Mark Shannon wrote: >> Other code will use whatever builtins they were given at __import__. > > Then they're not builtins - they're module-specific chained globals. > The thing that makes the builtins special is *who else* can see them > (i.e. all the other code in the process). If you replace > builtins.open, you replace if for everyone (that hasn't either > shadowed it or cached a reference to the original). Looks like you two are talking about different things. There is only one 'builtins' *module*. But the __builtins__ that are actually used by any particular piece of code is *not* taken by importing builtins. It is taken from what the globals store under the key __builtins__. 
This is a feature that was added specifically for sandboxing purposes, but I believe it has found other uses too. >> The key point is that every piece of code already inherits locals, globals >> and builtins from somewhere else. >> We can already control locals (by which parameters are passed in) and >> globals via exec, eval, __import__, and runpy (any others?) >> but we can't control builtins. > > Correct - because controlling builtins is the domain of sandboxes. Incorrect (unless I misunderstand the context) -- when you control the globals you control the __builtins__ set there. >> One last point is that this is a low-impact change. All code using eval, >> etc. will continue to work as before. >> It also may speed things up a little. > > Passing in a ChainMap instance as the globals when you want to include > an additional namespace in the lookup chain is even lower impact. > > A reference implementation and concrete use cases might change my > mind, but for now, I'm just seeing a horrendously complicated approach > with huge implications for the runtime data model semantics for > something that 3.3 already supports in a much simpler fashion. I can't say I'm completely following the discussion. It's not clear whether what I just explained was already implicit in the coversation or is new information. In any case, the locals / globals / builtins chain is a simplification; there are also any number of intermediate scopes (between locals and globals) from which "nonlocal" variables may be used. Like optimized function globals, these don't use a dict lookup at all, they are determined by compile-time analysis. 
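A minimal illustration of that mechanism (the restricted namespace here is purely hypothetical, and this is not by itself a secure sandbox):

```python
# The builtins seen by exec'd code come from the "__builtins__" key of
# the globals mapping it runs under, not from the builtins module.
safe_builtins = {"len": len, "print": print}   # no open(), no __import__

env = {"__builtins__": safe_builtins}
exec("print(len('abc'))", env)        # allowed: len and print are present

try:
    exec("open('secrets.txt')", env)  # open() was never provided
except NameError as exc:
    print("blocked:", exc)
```

As the sandboxing threads elsewhere in this digest show, hiding names this way is necessary but nowhere near sufficient for real isolation.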
-- --Guido van Rossum (python.org/~guido) From guido at python.org Thu Mar 8 18:33:21 2012 From: guido at python.org (Guido van Rossum) Date: Thu, 8 Mar 2012 09:33:21 -0800 Subject: [Python-Dev] PEP 412 has been accepted In-Reply-To: <20120308144155.212f10b7@pitrou.net> References: <20120308144155.212f10b7@pitrou.net> Message-ID: On Thu, Mar 8, 2012 at 5:41 AM, Antoine Pitrou wrote: > For the record (I had to look it up), PEP 412 is Mark Shannon's > "Key-Sharing Dictionary", an optimization that decreases memory > consumption of instances. > http://www.python.org/dev/peps/pep-0412/ Thanks for reminding us. I've gotten into the habit of listing the PEP number *and* title whenever first referencing a given PEP in a discussion -- if the PEP is mentioned in the message subject, its title should be too. -- --Guido van Rossum (python.org/~guido) From ethan at stoneleaf.us Thu Mar 8 18:23:46 2012 From: ethan at stoneleaf.us (Ethan Furman) Date: Thu, 08 Mar 2012 09:23:46 -0800 Subject: [Python-Dev] Non-string keys in type dict In-Reply-To: References: <4F5863DA.5030206@stoneleaf.us> <4F58DCD1.8040807@stoneleaf.us> Message-ID: <4F58EB22.4040409@stoneleaf.us> Guido van Rossum wrote: > On Thu, Mar 8, 2012 at 8:22 AM, Ethan Furman wrote: >> Guido van Rossum wrote: >> >>> On Wed, Mar 7, 2012 at 11:43 PM, Ethan Furman wrote: >>>> Are you able to modify classes after class creation in Python 3? Without >>>> using a metaclass? >>> >>> Yes, by assignment to attributes. The __dict__ is a read-only proxy, >>> but attribute assignment is allowed. (This is because the "new" type >>> system introduced in Python 2.2 needs to *track* changes to the dict; >>> it does this by tracking setattr/delattr calls, because dict doesn't >>> have a way to trigger a hook on changes.) >> >> Poorly phrased question -- I meant is it possible to add non-string-name >> attributes to classes after class creation. During class creation we can do >> this: >> >> --> class Test: >> ... ns = vars() >> ... 
ns[42] = 'green eggs' >> ... del ns >> ... >> --> Test >> >> --> Test.__dict__ >> dict_proxy({ >> '__module__': '__main__', >> 42: 'green eggs', >> '__doc__': None, >> '__dict__': , >> '__weakref__': , >> '__locals__': { >> 42: 'green eggs', >> '__module__': '__main__', >> '__locals__': {...}} >> }) >> --> Test.__dict__[42] >> 'green eggs' >> >> A little more experimentation shows that not all is well, however: >> >> --> dir(Test) >> Traceback (most recent call last): >> File "", line 1, in >> TypeError: unorderable types: int() < str() > > So what conclusion do you draw? That other changes (that have definitely been for the better) are making the 'feature' of non-string keys in namespace dicts less and less friendly. Rather than letting it slowly fall into complete shambles we should go ahead and deprecate, then remove, that functionality. Because namespace dicts already have tacit approval to not support non-string keys, it doesn't make much sense to spend developer resources on fixing dir and whatever other functions exist that deal with namespace dicts and assume string-only keys. ~Ethan~ From amauryfa at gmail.com Thu Mar 8 18:41:30 2012 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Thu, 8 Mar 2012 18:41:30 +0100 Subject: [Python-Dev] Non-string keys in type dict In-Reply-To: <4F58EB22.4040409@stoneleaf.us> References: <4F5863DA.5030206@stoneleaf.us> <4F58DCD1.8040807@stoneleaf.us> <4F58EB22.4040409@stoneleaf.us> Message-ID: Hi, 2012/3/8 Ethan Furman : >>> A little more experimentation shows that not all is well, however: >>> >>> --> dir(Test) >>> Traceback (most recent call last): >>> ?File "", line 1, in >>> TypeError: unorderable types: int() < str() >> >> >> So what conclusion do you draw? > > > That other changes (that have definitely been for the better) are making the > 'feature' of non-string keys in namespace dicts less and less friendly. 
> Rather than letting it slowly fall into complete shambles we should go > ahead and deprecate, then remove, that functionality. Not that I disagree with the conclusion, but the obvious thing to do here is to fix dir() and return only string attributes, i.e. those you can access with getattr. Cheers, -- Amaury Forgeot d'Arc From arigo at tunes.org Thu Mar 8 19:57:40 2012 From: arigo at tunes.org (Armin Rigo) Date: Thu, 8 Mar 2012 10:57:40 -0800 Subject: [Python-Dev] Sandboxing Python In-Reply-To: References: <4F528F32.3060409@gmail.com> <4F53E220.9040006@canterbury.ac.nz> <4F552E48.8040008@canterbury.ac.nz> Message-ID: Hi Stefan, On Wed, Mar 7, 2012 at 23:16, Stefan Behnel wrote: > Well, there's a bug tracker that lists some of them, which is not *that* > hard to find. Does your claim about "a significantly harder endeavour" > refer to finding a crash or to finding a fix for it? Are you talking about the various crashes involving the JIT? That's one of the reasons why the sandboxed PyPy does not include the JIT. A bientôt, Armin. From stefan_ml at behnel.de Thu Mar 8 20:00:12 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Thu, 08 Mar 2012 20:00:12 +0100 Subject: [Python-Dev] problem with recursive "yield from" delegation In-Reply-To: References: Message-ID: Stefan Behnel, 07.03.2012 21:40: > I found a problem in the current "yield from" implementation ...
and here's another one, also in genobject.c: """ int PyGen_FetchStopIterationValue(PyObject **pvalue) { PyObject *et, *ev, *tb; PyObject *value = NULL; if (PyErr_ExceptionMatches(PyExc_StopIteration)) { PyErr_Fetch(&et, &ev, &tb); Py_XDECREF(et); Py_XDECREF(tb); if (ev) { value = ((PyStopIterationObject *)ev)->value; Py_INCREF(value); Py_DECREF(ev); } } else if (PyErr_Occurred()) { return -1; } if (value == NULL) { value = Py_None; Py_INCREF(value); } *pvalue = value; return 0; } """ When the StopIteration was set using PyErr_SetObject(), "ev" points to the value, not the exception instance, so this code lacks exception normalisation. I use slightly different code in Cython (which needs to be compatible with Py2.x), but CPython 3.3 could do something along these lines: """ if (ev) { if (PyObject_IsInstance(ev, PyExc_StopIteration)) { value = ((PyStopIterationObject *)ev)->value; Py_INCREF(value); // or maybe XINCREF()? Py_DECREF(ev); } else { /* PyErr_SetObject() puts the value directly into ev */ value = ev; } } else ... """ Would that be acceptable for CPython as well or would you prefer full fledged normalisation? Stefan From p.f.moore at gmail.com Thu Mar 8 20:12:23 2012 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 8 Mar 2012 19:12:23 +0000 Subject: [Python-Dev] Adding a builtins parameter to eval(), exec() and __import__(). In-Reply-To: References: <4f5801dc.3221340a.3486.0373@mx.google.com> <4F58A0B1.4050701@hotpy.org> Message-ID: On 8 March 2012 12:52, Nick Coghlan wrote: > 2. it's already trivial to achieve such chained lookups in 3.3 by > passing a collections.ChainMap instance as the globals parameter: > http://docs.python.org/dev/library/collections#collections.ChainMap Somewhat OT, but collections.ChainMap is really cool. I hadn't noticed it get added into 3.3, and as far as I can see, it's not in the "What's New in 3.3" document. But it's little things like this that *really* make the difference for me in new versions. 
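For anyone else who missed it, the chained lookup is simple to picture (the values here are illustrative):

```python
from collections import ChainMap  # new in Python 3.3

defaults = {"colour": "red", "user": "guest"}
overrides = {"user": "admin"}

settings = ChainMap(overrides, defaults)
print(settings["user"])    # found in the first mapping: 'admin'
print(settings["colour"])  # falls through to the second: 'red'

settings["lang"] = "en"    # writes always go to the first mapping
```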
So thanks to whoever added it, and could we have a whatsnew entry, please? Paul. From benjamin at python.org Thu Mar 8 21:36:06 2012 From: benjamin at python.org (Benjamin Peterson) Date: Thu, 8 Mar 2012 14:36:06 -0600 Subject: [Python-Dev] problem with recursive "yield from" delegation In-Reply-To: References: Message-ID: 2012/3/8 Stefan Behnel : > Would that be acceptable for CPython as well or would you prefer full > fledged normalisation? I think we have to normalize for correctness. Consider that it may be some StopIteration subclass which set "value" on construction. -- Regards, Benjamin From solipsis at pitrou.net Thu Mar 8 21:36:30 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 8 Mar 2012 21:36:30 +0100 Subject: [Python-Dev] problem with recursive "yield from" delegation References: Message-ID: <20120308213630.76d247b2@pitrou.net> On Thu, 8 Mar 2012 14:36:06 -0600 Benjamin Peterson wrote: > 2012/3/8 Stefan Behnel : > > Would that be acceptable for CPython as well or would you prefer full > > fledged normalisation? > > I think we have to normalize for correctness. Consider that it may be > some StopIteration subclass which set "value" on construction. Perhaps it would be time to drop the whole delayed normalization thing, provided the benchmarks don't exhibit a slowdown? It complicates a lot of code paths. cheers Antoine. From benjamin at python.org Thu Mar 8 21:45:20 2012 From: benjamin at python.org (Benjamin Peterson) Date: Thu, 8 Mar 2012 14:45:20 -0600 Subject: [Python-Dev] problem with recursive "yield from" delegation In-Reply-To: <20120308213630.76d247b2@pitrou.net> References: <20120308213630.76d247b2@pitrou.net> Message-ID: 2012/3/8 Antoine Pitrou : > On Thu, 8 Mar 2012 14:36:06 -0600 > Benjamin Peterson wrote: >> 2012/3/8 Stefan Behnel : >> > Would that be acceptable for CPython as well or would you prefer full >> > fledged normalisation? >> >> I think we have to normalize for correctness. 
Consider that it may be >> some StopIteration subclass which set "value" on construction. > > Perhaps it would be time to drop the whole delayed normalization thing, > provided the benchmarks don't exhibit a slowdown? It complicates a lot > of code paths. +1 Also, it delays errors from exception initialization to arbitrary points. -- Regards, Benjamin From dreamingforward at gmail.com Thu Mar 8 22:08:23 2012 From: dreamingforward at gmail.com (Mark Janssen) Date: Thu, 8 Mar 2012 14:08:23 -0700 Subject: [Python-Dev] PEP Message-ID: On Thu, Feb 9, 2012 at 5:18 PM, Guido van Rossum wrote: > A dictionary would (then) be a SET of these. (Voila! things have already >> gotten simplified.) >> > > Really? So {a:1, a:2} would be a dict of length 2? > Eventually, I also think this will seque and integrate nicely into Mark >> Shannon's "shared-key dict" proposal (PEP 412). >> >> I just noticed something in Guido's example. Something gives me a strange feeling that using a variable as a key doesn't smell right. Presumably Python just hashes the variable's id, or uses the id itself as the key, but I wonder if anyone's noticed any problems with this, and whether the hash collision problems could be solved by removing this?? Does anyone even use this functionality -- of a *variable* (not a string) as a dict key? mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From masklinn at masklinn.net Thu Mar 8 22:59:31 2012 From: masklinn at masklinn.net (Masklinn) Date: Thu, 8 Mar 2012 22:59:31 +0100 Subject: [Python-Dev] [Python-ideas] PEP In-Reply-To: References: Message-ID: On 2012-03-08, at 22:08 , Mark Janssen wrote: > I just noticed something in Guido's example. Something gives me a strange > feeling that using a variable as a key doesn't smell right. Presumably > Python just hashes the variable's id, or uses the id itself as the key Python calls ``hash`` on the object and uses the result. 
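That is easy to demonstrate with any hashable objects (an illustrative snippet):

```python
d = {}
point = (2, 3)                # the *object*, not the name, is the key
tags = frozenset({"a", "b"})

d[point] = "tuple key"
d[tags] = "frozenset key"
d[42] = "int key"

# Lookup goes through hash() and ==, so any equal object finds the
# entry; the name "point" never mattered.
print(d[(2, 3)])                 # tuple key
print(d[frozenset({"b", "a"})])  # frozenset key
```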
> , but > I wonder if anyone's noticed any problems with this, and whether the hash > collision problems could be solved by removing this?? No. Not that it makes sense: people could ask for object hashes on their own and end up with the same result. > Does anyone even > use this functionality -- of a *variable* (not a string) as a dict key? What you're asking does not make sense; the dict key is not the name but whatever object is bound to the name. And yes, I've used non-string objects as keys before: tuples, frozensets, integers, my own objects, … From victor.stinner at gmail.com Fri Mar 9 00:26:32 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Fri, 09 Mar 2012 00:26:32 +0100 Subject: [Python-Dev] Adding a builtins parameter to eval(), exec() and __import__(). In-Reply-To: <4F577FC0.9060408@hotpy.org> References: <4F577FC0.9060408@hotpy.org> Message-ID: <4F594028.1070604@gmail.com> On 07/03/2012 16:33, Mark Shannon wrote: > It should also help with sandboxing, as it would make it easier to > analyse and thus control access to builtins, since the execution context > of all code would be easier to determine. pysandbox patches __builtins__ in: - the caller frame - the interpreter state - all modules It uses a read-only dict with only a subset of __builtins__. This is important to: - deny replacing a builtin function - deny adding a new "superglobal" variable - deny accessing a blocked function If a module or something else leaks the real builtins dict, it would be a vulnerability. pysandbox is able to temporarily replace __builtins__ everywhere and then restore the previous state. Can you please explain why/how pysandbox is too restrictive and how your proposition would make it more usable? > Currently, it is impossible to allow one function access to sensitive > functions like open(), while denying it to others, as any code can then > get the builtins of another function via f.__globals__['__builtins__'].
> Separating builtins from globals could solve this. For a sandbox, it's a feature, or maybe a requirement :-) It is a problem if a function accessing to the trusted builtins dict is also accessible in the sandbox. I don't remember why it is a problem: pysandbox blocks access to the __globals__ attribute of functions. Victor From fijall at gmail.com Fri Mar 9 01:19:29 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Thu, 8 Mar 2012 16:19:29 -0800 Subject: [Python-Dev] PEP 414 - some numbers from the Django port In-Reply-To: References: <4F572297.10607@active-4.com> Message-ID: On Wed, Mar 7, 2012 at 2:36 PM, Vinay Sajip wrote: > Armin Ronacher active-4.com> writes: > >> What are you trying to argue? ?That the overall Django testsuite does >> not do a lot of string processing, less processing with native strings? >> >> I'm surprised you see a difference at all over the whole Django >> testsuite and I wonder why you get a slowdown at all for the ported >> Django on 2.7. > > The point of the figures is to show there is *no* difference (statistically > speaking) between the three sets of samples. Of course, any individual run or > set of runs could be higher or lower due to other things happening on the > machine (not that I was running any background tasks), so the idea of the simple > statistical analysis is to determine whether these samples could all have come > from the same populations. According to ministat, they could have (with a 95% > confidence level). But the stuff you run is not really benchmarking anything. As far as I know django benchmarks benchmark something like mostly DB creation and deletion, although that might differ between CPython and PyPy. How about running *actual* django benchmarks, instead of the test suite? Not that proving anything is necessary, but if you try to prove something, make it right. 
Cheers, fijal From victor.stinner at gmail.com Fri Mar 9 01:28:08 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Fri, 09 Mar 2012 01:28:08 +0100 Subject: [Python-Dev] Sandboxing Python In-Reply-To: References: Message-ID: <4F594E98.405@gmail.com> On 05/03/2012 23:11, Victor Stinner wrote: > 3 tests are crashing pysandbox: > > - modify a dict during a dict lookup: I proposed two different fixes > in issue #14205 > - type MRO changed during a type lookup (modify __bases__ during the > lookup): I proposed a fix in issue #14199 (keep a reference to the MRO > during the lookup) > - stack overflow because of a compiler recursion: we should limit the > depth in the compiler (I didn't write a patch yet) > > pysandbox should probably hide the __bases__ special attribute, or at > least make it read-only. I opened the following issues to fix these crashers: #14205: Raise an error if a dict is modified during a lookup. Fixed in Python 3.3. #14199: Keep a reference to mro in _PyType_Lookup() and super_getattro(). Fixed in Python 3.3. #14211: Don't rely on borrowed _PyType_Lookup() reference in PyObject_GenericSetAttr(). Fixed in Python 3.3. #14231: Fix or drop Lib/test/crashers/borrowed_ref_1.py, it looks like it was already fixed 3 years ago. The compiler recursion is not fixed yet. Fixes may be backported to Python 2.7 and 3.2. Victor From ncoghlan at gmail.com Fri Mar 9 01:33:00 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 9 Mar 2012 10:33:00 +1000 Subject: [Python-Dev] Adding a builtins parameter to eval(), exec() and __import__(). In-Reply-To: References: <4f5801dc.3221340a.3486.0373@mx.google.com> <4F58A0B1.4050701@hotpy.org> <4F58B6DE.2020309@hotpy.org> Message-ID: On Fri, Mar 9, 2012 at 3:31 AM, Guido van Rossum wrote: > But the __builtins__ that are actually used by any particular piece of > code is *not* taken by importing builtins. It is taken from what the > globals store under the key __builtins__.
> > This is a feature that was added specifically for sandboxing purposes, > but I believe it has found other uses too. Agreed, but swapping out builtins for a different namespace is still the exception rather than the rule. My Impression of Mark's proposal was that this approach would become the *preferred* way of doing things, and that's the part I don't like at a conceptual level. >>> The key point is that every piece of code already inherits locals, globals >>> and builtins from somewhere else. >>> We can already control locals (by which parameters are passed in) and >>> globals via exec, eval, __import__, and runpy (any others?) >>> but we can't control builtins. >> >> Correct - because controlling builtins is the domain of sandboxes. > > Incorrect (unless I misunderstand the context) -- when you control the > globals you control the __builtins__ set there. And this is where I don't like the idea at a practical level. We already have a way to swap in a different set of builtins for a certain execution context (i.e. set "__builtins__" in the global namespace) for a small chunk of code, as well as allowing collections.ChainMap to insert additional namespaces into the name lookup path. This proposal suggests adding an additional mapping argument to every API that currently accepts a locals and/or globals mapping, thus achieving... well, nothing substantial, as far as I can tell (aside from a lot of pointless churn in a bunch of APIs, not all of which are under our direct control). > In any case, the locals / globals / builtins chain is a > simplification; there are also any number of intermediate scopes > (between locals and globals) from which "nonlocal" variables may be > used. Like optimized function globals, these don't use a dict lookup > at all, they are determined by compile-time analysis. 
Acknowledged, but code executed via the exec API with both locals and globals passed in is actually one of the few places where that lookup chain survives in its original form (module level class definitions being the other). Now, rereading Mark's original message, a simpler proposal of having *function objects* do an early lookup of "self.__globals__['__builtins__']" at creation time and caching that somewhere such that the frame objects can get hold of it (rather than having to do the lookup every time the function gets called or a builtin gets referenced) might be a nice micro-optimisation. It's the gratuitous API changes that I'm objecting to, not the underlying idea of binding the reference to the builtins namespace earlier in the function definition process. I'd even be OK with leaving the default builtins reference *out* of the globals namespace in favour of storing a hidden reference on the frame objects. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From guido at python.org Fri Mar 9 01:39:12 2012 From: guido at python.org (Guido van Rossum) Date: Thu, 8 Mar 2012 16:39:12 -0800 Subject: [Python-Dev] Adding a builtins parameter to eval(), exec() and __import__(). In-Reply-To: References: <4f5801dc.3221340a.3486.0373@mx.google.com> <4F58A0B1.4050701@hotpy.org> <4F58B6DE.2020309@hotpy.org> Message-ID: On Thu, Mar 8, 2012 at 4:33 PM, Nick Coghlan wrote: > On Fri, Mar 9, 2012 at 3:31 AM, Guido van Rossum wrote: >> But the __builtins__ that are actually used by any particular piece of >> code is *not* taken by importing builtins. It is taken from what the >> globals store under the key __builtins__. >> >> This is a feature that was added specifically for sandboxing purposes, >> but I believe it has found other uses too. > > Agreed, but swapping out builtins for a different namespace is still > the exception rather than the rule. 
My Impression of Mark's proposal > was that this approach would become the *preferred* way of doing > things, and that's the part I don't like at a conceptual level. > >>>> The key point is that every piece of code already inherits locals, globals >>>> and builtins from somewhere else. >>>> We can already control locals (by which parameters are passed in) and >>>> globals via exec, eval, __import__, and runpy (any others?) >>>> but we can't control builtins. >>> >>> Correct - because controlling builtins is the domain of sandboxes. >> >> Incorrect (unless I misunderstand the context) -- when you control the >> globals you control the __builtins__ set there. > > And this is where I don't like the idea at a practical level. We > already have a way to swap in a different set of builtins for a > certain execution context (i.e. set "__builtins__" in the global > namespace) for a small chunk of code, as well as allowing > collections.ChainMap to insert additional namespaces into the name > lookup path. > > This proposal suggests adding an additional mapping argument to every > API that currently accepts a locals and/or globals mapping, thus > achieving... well, nothing substantial, as far as I can tell (aside > from a lot of pointless churn in a bunch of APIs, not all of which are > under our direct control). > >> In any case, the locals / globals / builtins chain is a >> simplification; there are also any number of intermediate scopes >> (between locals and globals) from which "nonlocal" variables may be >> used. Like optimized function globals, these don't use a dict lookup >> at all, they are determined by compile-time analysis. > > Acknowledged, but code executed via the exec API with both locals and > globals passed in is actually one of the few places where that lookup > chain survives in its original form (module level class definitions > being the other). 
> Now, rereading Mark's original message, a simpler proposal of having > *function objects* do an early lookup of > "self.__globals__['__builtins__']" at creation time and caching that > somewhere such that the frame objects can get hold of it (rather than > having to do the lookup every time the function gets called or a > builtin gets referenced) might be a nice micro-optimisation. It's the > gratuitous API changes that I'm objecting to, not the underlying idea > of binding the reference to the builtins namespace earlier in the > function definition process. I'd even be OK with leaving the default > builtins reference *out* of the globals namespace in favour of storing > a hidden reference on the frame objects. Agreed on the gratuitous API changes. I'd like to hear Mark's response. -- --Guido van Rossum (python.org/~guido) From victor.stinner at gmail.com Fri Mar 9 01:38:11 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Fri, 09 Mar 2012 01:38:11 +0100 Subject: [Python-Dev] Sandboxing Python In-Reply-To: References: Message-ID: <4F5950F3.9010003@gmail.com> On 01/03/2012 22:59, Victor Stinner wrote: >> I challenge anyone to break pysandbox! I would be happy if anyone >> breaks it because it would make it stronger. Results, one week later: nobody found a vulnerability giving access to the filesystem or to the sandbox. Armin Rigo complained that CPython has known "crasher" bugs. Except for the compiler recursion, I fixed those bugs in CPython 3.3. Serhiy Storchaka found a bug in the pysandbox timeout: long operations implemented in C hang the sandbox, so the timeout constraint is not applied. Guido proposed to abort the process (use the default SIGALRM action). I proposed to add an option to use a subprocess. The two solutions are not mutually exclusive. Armin Rigo also noticed that the PyPy sandbox design is more robust than the pysandbox design; I agree with him, even if I think a CPython sandbox is useful and users ask for such protection.
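A SIGALRM-based limiter of the kind discussed here looks roughly like this (a simplified, Unix-only sketch, not pysandbox's actual code):

```python
import signal

def run_with_timeout(func, seconds):
    """Run func(), raising TimeoutError if it exceeds the limit.

    The Python-level signal handler only runs between bytecode
    instructions, so a long operation implemented in C can run to
    completion anyway -- which is exactly the hole described above.
    """
    def on_alarm(signum, frame):
        raise TimeoutError("sandboxed code exceeded its time limit")

    old_handler = signal.signal(signal.SIGALRM, on_alarm)
    signal.alarm(seconds)
    try:
        return func()
    finally:
        signal.alarm(0)                            # cancel pending alarm
        signal.signal(signal.SIGALRM, old_handler)
```

Running the untrusted code in a subprocess avoids the problem entirely, since the parent can simply kill the child when the deadline passes.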
I have no idea how many developers tried to break the pysandbox security. Victor From mark at hotpy.org Fri Mar 9 09:19:24 2012 From: mark at hotpy.org (Mark Shannon) Date: Fri, 09 Mar 2012 08:19:24 +0000 Subject: [Python-Dev] Adding a builtins parameter to eval(), exec() and __import__(). In-Reply-To: References: <4f5801dc.3221340a.3486.0373@mx.google.com> <4F58A0B1.4050701@hotpy.org> <4F58B6DE.2020309@hotpy.org> Message-ID: <4F59BD0C.6030503@hotpy.org> Guido van Rossum wrote: > On Thu, Mar 8, 2012 at 4:33 PM, Nick Coghlan wrote: >> On Fri, Mar 9, 2012 at 3:31 AM, Guido van Rossum wrote: >>> But the __builtins__ that are actually used by any particular piece of >>> code is *not* taken by importing builtins. It is taken from what the >>> globals store under the key __builtins__. >>> >>> This is a feature that was added specifically for sandboxing purposes, >>> but I believe it has found other uses too. >> Agreed, but swapping out builtins for a different namespace is still >> the exception rather than the rule. My Impression of Mark's proposal >> was that this approach would become the *preferred* way of doing >> things, and that's the part I don't like at a conceptual level. >> >>>>> The key point is that every piece of code already inherits locals, globals >>>>> and builtins from somewhere else. >>>>> We can already control locals (by which parameters are passed in) and >>>>> globals via exec, eval, __import__, and runpy (any others?) >>>>> but we can't control builtins. >>>> Correct - because controlling builtins is the domain of sandboxes. >>> Incorrect (unless I misunderstand the context) -- when you control the >>> globals you control the __builtins__ set there. >> And this is where I don't like the idea at a practical level. We >> already have a way to swap in a different set of builtins for a >> certain execution context (i.e. 
set "__builtins__" in the global >> namespace) for a small chunk of code, as well as allowing >> collections.ChainMap to insert additional namespaces into the name >> lookup path. >> >> This proposal suggests adding an additional mapping argument to every >> API that currently accepts a locals and/or globals mapping, thus >> achieving... well, nothing substantial, as far as I can tell (aside >> from a lot of pointless churn in a bunch of APIs, not all of which are >> under our direct control). >> >>> In any case, the locals / globals / builtins chain is a >>> simplification; there are also any number of intermediate scopes >>> (between locals and globals) from which "nonlocal" variables may be >>> used. Like optimized function globals, these don't use a dict lookup >>> at all, they are determined by compile-time analysis. >> Acknowledged, but code executed via the exec API with both locals and >> globals passed in is actually one of the few places where that lookup >> chain survives in its original form (module level class definitions >> being the other). >> >> Now, rereading Mark's original message, a simpler proposal of having >> *function objects* do an early lookup of >> "self.__globals__['__builtins__']" at creation time and caching that >> somewhere such that the frame objects can get hold of it (rather than >> having to do the lookup every time the function gets called or a >> builtin gets referenced) might be a nice micro-optimisation. It's the >> gratuitous API changes that I'm objecting to, not the underlying idea >> of binding the reference to the builtins namespace earlier in the >> function definition process. I'd even be OK with leaving the default >> builtins reference *out* of the globals namespace in favour of storing >> a hidden reference on the frame objects. > > Agreed on the gratuitous API changes. I'd like to hear Mark's response. > C API or Python API? The Python API would be changed, but in a backwards compatible way. 
exec, eval and __import__ would all gain an optional (keyword-only?) "builtins" parameter. I see no reason to change any of the C API functions. New functions taking an extra parameter could be added, but it wouldn't be a requirement. Cheers, Mark From vinay_sajip at yahoo.co.uk Fri Mar 9 09:49:53 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Fri, 9 Mar 2012 08:49:53 +0000 (GMT) Subject: [Python-Dev] PEP 414 - some numbers from the Django port In-Reply-To: References: <4F572297.10607@active-4.com> Message-ID: <1331282993.88775.YahooMailNeo@web171306.mail.ir2.yahoo.com> ----- Original Message ----- > But the stuff you run is not really benchmarking anything. As far as I > know django benchmarks benchmark something like mostly DB creation and > deletion, although that might differ between CPython and PyPy. How > about running *actual* django benchmarks, instead of the test suite? > > Not that proving anything is necessary, but if you try to prove > something, make it right. But my point was only to show that in a reasonable body of Python code (as opposed to a microbenchmark), the overhead of using wrappers was not significant. All those wrapper calls in ported Django and its test suite were exercised. It was not exactly a benchmarking exercise in that it didn't matter what the actual numbers were, nor was any claim being made about absolute performance; just that they were the same for all three variants, within statistical variation. As I mentioned in my other post, I happened to have the Django test suite figures to hand, and to my mind they suited the purpose of showing that wrapper calls, in the overall mix, don't seem to have a noticeable impact (whereas they do, in a microbenchmark). Regards, Vinay Sajip From ncoghlan at gmail.com Fri Mar 9 12:03:54 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 9 Mar 2012 21:03:54 +1000 Subject: [Python-Dev] Adding a builtins parameter to eval(), exec() and __import__(). 
In-Reply-To: <4F59BD0C.6030503@hotpy.org> References: <4f5801dc.3221340a.3486.0373@mx.google.com> <4F58A0B1.4050701@hotpy.org> <4F58B6DE.2020309@hotpy.org> <4F59BD0C.6030503@hotpy.org> Message-ID: On Fri, Mar 9, 2012 at 6:19 PM, Mark Shannon wrote: > The Python API would be changed, but in a backwards compatible way. > exec, eval and __import__ would all gain an optional (keyword-only?) > "builtins" parameter. No, some APIs effectively define *protocols*. For such APIs, *adding* parameters is almost of comparable import to taking them away, because they require that other APIs modelled on the prototype also change. In this case, not only exec() has to change, but eval, __import__, probably runpy, function creation, eventually any third party APIs for code execution, etc, etc. Adding a new parameter to exec is a change with serious implications, and utterly unnecessary, since the API part is already covered by setting __builtins__ in the passed in globals namespace (which is appropriately awkward to advise people that they're doing something strange with potentially unintended consequences or surprising limitations). That said, binding a reference to the builtin *early* (for example, at function definition time or when a new invocation of the eval loop first fires up) may be a reasonable idea, but you don't have to change the user facing API to explore that option - it works just as well with "__builtins__" as an optional value in the existing global namespace. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From mark at hotpy.org Fri Mar 9 13:25:03 2012 From: mark at hotpy.org (Mark Shannon) Date: Fri, 09 Mar 2012 12:25:03 +0000 Subject: [Python-Dev] Adding a builtins parameter to eval(), exec() and __import__(). 
In-Reply-To: References: <4f5801dc.3221340a.3486.0373@mx.google.com> <4F58A0B1.4050701@hotpy.org> <4F58B6DE.2020309@hotpy.org> <4F59BD0C.6030503@hotpy.org> Message-ID: <4F59F69F.5050807@hotpy.org> Nick Coghlan wrote: > On Fri, Mar 9, 2012 at 6:19 PM, Mark Shannon wrote: >> The Python API would be changed, but in a backwards compatible way. >> exec, eval and __import__ would all gain an optional (keyword-only?) >> "builtins" parameter. > > No, some APIs effectively define *protocols*. For such APIs, *adding* > parameters is almost of comparable import to taking them away, because > they require that other APIs modelled on the prototype also change. In > this case, not only exec() has to change, but eval, __import__, > probably runpy, function creation, eventually any third party APIs for > code execution, etc, etc. > > Adding a new parameter to exec is a change with serious implications, > and utterly unnecessary, since the API part is already covered by > setting __builtins__ in the passed in globals namespace (which is > appropriately awkward to advise people that they're doing something > strange with potentially unintended consequences or surprising > limitations). It is the implementation that interests me. Implementing the (locals, globals, builtins) triple as a single object has advantages both in terms of internal consistency and efficiency. I just thought to expose this to the user. I am now persuaded that I don't want to expose anything :) > > That said, binding a reference to the builtin *early* (for example, at > function definition time or when a new invocation of the eval loop > first fires up) may be a reasonable idea, but you don't have to change > the user facing API to explore that option - it works just as well > with "__builtins__" as an optional value in the existing global > namespace. OK. 
So, how about this? ("builtins" refers to the dict used for variable lookup, not the module.)

New eval pseudocode:

    eval(code, globals, locals):
        triple = (locals, globals, globals["__builtins__"])
        return eval_internal(triple)

Similarly for exec, __import__ and runpy. That way the (IMO clumsy) builtins = globals["__builtins__"] lookup only happens at a few known locations. It should then be clear where all code gets its namespaces from. Namespaces should be inherited as follows:

frame:
    function scope: globals and builtins from function, locals from parameters.
    module scope: globals and builtins from module, locals == globals.
    in eval, exec, or runpy: all explicit.
function: globals and builtins from module (no locals).
module: globals and builtins from import (no locals).
import: explicitly from __import__(), or implicitly from the current frame in an import statement.

For frame and function, free and cell (nonlocal) variables would be unchanged. On entry the namespaces will be {}, {}, sys.modules['builtins'].__dict__. This is pretty much what happens anyway, except that where code gets its builtins from is now well defined.

Cheers,
Mark.

From victor.stinner at gmail.com Fri Mar 9 13:48:35 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Fri, 9 Mar 2012 13:48:35 +0100
Subject: [Python-Dev] FYI dict lookup now raises a RuntimeError if the dict is modified during the lookup
Message-ID: 

If you use your own comparison function (__eq__) for objects used as dict keys, you may now get a RuntimeError with Python 3.3 if the dict is modified during a dict lookup in a multithreaded application. You should use a lock on the dict to avoid this exception. Said differently, a dict lookup is not atomic if you use your own types as dict keys and these types implement __eq__. In Python < 3.3, the dict lookup is retried until the dict is no longer modified during the lookup, which leads to a stack overflow crash if the dict is always modified. See issue #14205 for the rationale.
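The window for such a race is easy to see even in a single thread: whenever hashes collide, a plain d[key] lookup has to call back into the key's __eq__, and that callback is exactly where another thread could mutate the dict. A minimal sketch (Key is an illustrative class, not from the stdlib):

```python
class Key:
    """All instances share one hash, forcing collisions on lookup."""
    eq_calls = 0

    def __hash__(self):
        return 1  # every Key lands in the same hash bucket

    def __eq__(self, other):
        Key.eq_calls += 1  # the dict lookup re-enters Python code here
        return self is other

a, b = Key(), Key()
d = {a: 1, b: 2}

Key.eq_calls = 0
value = d[b]  # probing past `a` requires at least one __eq__ call
print(value, Key.eq_calls)
```

With a lock around such lookups, or keys whose __eq__ never runs Python-level code, the race described above cannot occur.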
Victor

From victor.stinner at gmail.com Fri Mar 9 13:50:14 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Fri, 9 Mar 2012 13:50:14 +0100
Subject: [Python-Dev] Adding a builtins parameter to eval(), exec() and __import__().
In-Reply-To: <4F577FC0.9060408@hotpy.org>
References: <4F577FC0.9060408@hotpy.org>
Message-ID: 

> The reason I am proposing this here rather than on python-ideas is that
> treating the triple of [locals, globals, builtins] as a single
> "execution context" can be implemented in a really nice way.
>
> Internally, the execution context of [locals, globals, builtins]
> can be treated as a single immutable object (custom object or tuple).
> Treating it as immutable means that it can be copied merely by taking a
> reference. A nice trick in the implementation is to make a NULL locals
> mean "fast" locals for function contexts. Frames could then acquire their
> globals and builtins by a single reference copy from the function object,
> rather than searching globals for a '__builtins__'
> to find the builtins.

Creating a new frame looks up __builtins__ in globals only if the globals of the new frame differ from the globals of the previous frame; is this the case you would like to optimize? If globals is unchanged, Python just increments the reference counter.

When is globals different from the previous frame? When you call a function from a different module, maybe?

Do you have an idea of the speedup of your optimization?

Victor

From mark at hotpy.org Fri Mar 9 13:57:30 2012
From: mark at hotpy.org (Mark Shannon)
Date: Fri, 09 Mar 2012 12:57:30 +0000
Subject: [Python-Dev] Adding a builtins parameter to eval(), exec() and __import__().
In-Reply-To: 
References: <4F577FC0.9060408@hotpy.org>
Message-ID: <4F59FE3A.2060207@hotpy.org>

Victor Stinner wrote:
>> The reason I am proposing this here rather than on python-ideas is that
>> treating the triple of [locals, globals, builtins] as a single
>> "execution context" can be implemented in a really nice way.
>>
>> Internally, the execution context of [locals, globals, builtins]
>> can be treated as a single immutable object (custom object or tuple).
>> Treating it as immutable means that it can be copied merely by taking a
>> reference. A nice trick in the implementation is to make a NULL locals
>> mean "fast" locals for function contexts. Frames could then acquire their
>> globals and builtins by a single reference copy from the function object,
>> rather than searching globals for a '__builtins__'
>> to find the builtins.
>
> Creating a new frame looks up __builtins__ in globals only if
> the globals of the new frame differ from the globals of the previous
> frame; is this the case you would like to optimize? If globals is unchanged,
> Python just increments the reference counter.

I'm more interested in simplifying the code than in performance. With this proposed approach, there is no need to test where the globals come from, or what the builtins are; just incref the namespace triple.

> When is globals different from the previous frame? When you call a
> function from a different module, maybe?
>
> Do you have an idea of the speedup of your optimization?

No. But it won't be slower.

Cheers,
Mark

From breamoreboy at yahoo.co.uk Fri Mar 9 16:16:04 2012
From: breamoreboy at yahoo.co.uk (Mark Lawrence)
Date: Fri, 09 Mar 2012 15:16:04 +0000
Subject: [Python-Dev] Adding a builtins parameter to eval(), exec() and __import__().
> > Cheers, > Mark Please prove it, you have to convince a number of core developers including, but not limited to, the BDFL :). -- Cheers. Mark Lawrence. From status at bugs.python.org Fri Mar 9 18:07:37 2012 From: status at bugs.python.org (Python tracker) Date: Fri, 9 Mar 2012 18:07:37 +0100 (CET) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20120309170737.D20E51DAA7@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2012-03-02 - 2012-03-09) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. Issues counts and deltas: open 3318 (+19) closed 22704 (+44) total 26022 (+63) Open issues with patches: 1409 Issues opened (45) ================== #14179: Test coverage for lib/re.py http://bugs.python.org/issue14179 opened by flomm #14180: Factorize code to convert int/float to time_t, timeval or time http://bugs.python.org/issue14180 opened by haypo #14182: collections.Counter equality test thrown-off by zero counts http://bugs.python.org/issue14182 opened by rhettinger #14183: Test coverage for packaging.install and packaging.pypi.wrapper http://bugs.python.org/issue14183 opened by francismb #14184: test_recursion_limit fails on OS X when compiled with clang http://bugs.python.org/issue14184 opened by dk #14185: Failure to build _dbm with ndbm on Arch Linux http://bugs.python.org/issue14185 opened by anikom15 #14186: Link to PEP 3107 in "def" part of Language Reference http://bugs.python.org/issue14186 opened by cvrebert #14187: add "annotation" entry to Glossary http://bugs.python.org/issue14187 opened by cvrebert #14189: Documentation for some C APIs is missing clear specification o http://bugs.python.org/issue14189 opened by baruch.sterin #14190: Minor C API documentation bugs http://bugs.python.org/issue14190 opened by baruch.sterin #14191: argparse: nargs='*' doesn't get out-of-order positional parame http://bugs.python.org/issue14191 opened by v+python 
#14196: Unhandled exceptions in pdb return value display http://bugs.python.org/issue14196 opened by Simon.Chopin #14197: OS X framework builds do not create ABI-suffixed libpython3.x http://bugs.python.org/issue14197 opened by shibingli #14198: Backport parts of the new memoryview documentation http://bugs.python.org/issue14198 opened by skrah #14200: Idle shell crash on printing non-BMP unicode character http://bugs.python.org/issue14200 opened by vbr #14201: Documented caching for shared library's __getattr__ and __geti http://bugs.python.org/issue14201 opened by erijo #14202: The docs of xml.dom.pulldom are almost nonexistent http://bugs.python.org/issue14202 opened by eli.bendersky #14203: bytearray_getbuffer: unnecessary code http://bugs.python.org/issue14203 opened by skrah #14204: Support for the NPN extension to TLS/SSL http://bugs.python.org/issue14204 opened by colinmarc #14206: multiprocessing.Queue documentation is lacking important detai http://bugs.python.org/issue14206 opened by Garrett.Moore #14207: ElementTree.ParseError - needs documentation and consistent C& http://bugs.python.org/issue14207 opened by eli.bendersky #14208: No way to recover original argv with python -m http://bugs.python.org/issue14208 opened by Ben.Darnell #14209: pkgutil.iter_zipimport_modules ignores the prefix parameter fo http://bugs.python.org/issue14209 opened by James.Pickering #14210: add filename completion to pdb http://bugs.python.org/issue14210 opened by tshepang #14214: test_concurrent_futures hangs http://bugs.python.org/issue14214 opened by tebeka #14215: http://www.python.org/dev/peps/ title is python.org http://bugs.python.org/issue14215 opened by ramchandra.apte #14216: ImportError: No module named binascii http://bugs.python.org/issue14216 opened by liuqianhn #14219: start the Class tutorial in a more gentle manner http://bugs.python.org/issue14219 opened by tshepang #14222: Using time.time() in Queue.get breaks when system time is chan 
http://bugs.python.org/issue14222 opened by tasslehoff #14224: packaging: path description of resources is mixed up http://bugs.python.org/issue14224 opened by jokoala #14225: _cursesmodule compile error in OS X 32-bit-only installer buil http://bugs.python.org/issue14225 opened by ned.deily #14226: Expose dict_proxy type from descrobject.c http://bugs.python.org/issue14226 opened by ndparker #14227: console w/ cp65001 displays extra characters for non-ascii str http://bugs.python.org/issue14227 opened by metolone #14228: It is impossible to catch sigint on startup in python code http://bugs.python.org/issue14228 opened by telmich #14229: On KeyboardInterrupt, the exit code should mirror the signal n http://bugs.python.org/issue14229 opened by pitrou #14230: Delegating generator is not always visible to debugging tools http://bugs.python.org/issue14230 opened by Mark.Shannon #14232: obmalloc: mmap() returns MAP_FAILED on error, not 0 http://bugs.python.org/issue14232 opened by haypo #14234: CVE-2012-0876 (hash table collisions CPU usage DoS) for embedd http://bugs.python.org/issue14234 opened by dmalcolm #14235: test_cmd.py does not correctly call reload() http://bugs.python.org/issue14235 opened by Josh.Watson #14236: In help(re) are insufficient details http://bugs.python.org/issue14236 opened by py.user #14237: Special sequences \A and \Z don't work in character set [] http://bugs.python.org/issue14237 opened by py.user #14238: python shouldn't need username in passwd database http://bugs.python.org/issue14238 opened by arekm #14239: Uninitialised variable in _PyObject_GenericSetAttrWithDict http://bugs.python.org/issue14239 opened by Mark.Shannon #14241: io.UnsupportedOperation.__new__(io.UnsupportedOperation) fails http://bugs.python.org/issue14241 opened by Mark.Shannon #14218: include rendered output in addition to markup http://bugs.python.org/issue14218 opened by tshepang Most recent 15 issues with no replies (15) ========================================== 
#14241: io.UnsupportedOperation.__new__(io.UnsupportedOperation) fails http://bugs.python.org/issue14241 #14239: Uninitialised variable in _PyObject_GenericSetAttrWithDict http://bugs.python.org/issue14239 #14236: In help(re) are insufficient details http://bugs.python.org/issue14236 #14230: Delegating generator is not always visible to debugging tools http://bugs.python.org/issue14230 #14226: Expose dict_proxy type from descrobject.c http://bugs.python.org/issue14226 #14225: _cursesmodule compile error in OS X 32-bit-only installer buil http://bugs.python.org/issue14225 #14215: http://www.python.org/dev/peps/ title is python.org http://bugs.python.org/issue14215 #14207: ElementTree.ParseError - needs documentation and consistent C& http://bugs.python.org/issue14207 #14198: Backport parts of the new memoryview documentation http://bugs.python.org/issue14198 #14196: Unhandled exceptions in pdb return value display http://bugs.python.org/issue14196 #14189: Documentation for some C APIs is missing clear specification o http://bugs.python.org/issue14189 #14186: Link to PEP 3107 in "def" part of Language Reference http://bugs.python.org/issue14186 #14185: Failure to build _dbm with ndbm on Arch Linux http://bugs.python.org/issue14185 #14184: test_recursion_limit fails on OS X when compiled with clang http://bugs.python.org/issue14184 #14182: collections.Counter equality test thrown-off by zero counts http://bugs.python.org/issue14182 Most recent 15 issues waiting for review (15) ============================================= #14241: io.UnsupportedOperation.__new__(io.UnsupportedOperation) fails http://bugs.python.org/issue14241 #14239: Uninitialised variable in _PyObject_GenericSetAttrWithDict http://bugs.python.org/issue14239 #14235: test_cmd.py does not correctly call reload() http://bugs.python.org/issue14235 #14234: CVE-2012-0876 (hash table collisions CPU usage DoS) for embedd http://bugs.python.org/issue14234 #14232: obmalloc: mmap() returns MAP_FAILED on error, 
not 0 http://bugs.python.org/issue14232 #14230: Delegating generator is not always visible to debugging tools http://bugs.python.org/issue14230 #14224: packaging: path description of resources is mixed up http://bugs.python.org/issue14224 #14210: add filename completion to pdb http://bugs.python.org/issue14210 #14204: Support for the NPN extension to TLS/SSL http://bugs.python.org/issue14204 #14203: bytearray_getbuffer: unnecessary code http://bugs.python.org/issue14203 #14200: Idle shell crash on printing non-BMP unicode character http://bugs.python.org/issue14200 #14191: argparse: nargs='*' doesn't get out-of-order positional parame http://bugs.python.org/issue14191 #14183: Test coverage for packaging.install and packaging.pypi.wrapper http://bugs.python.org/issue14183 #14180: Factorize code to convert int/float to time_t, timeval or time http://bugs.python.org/issue14180 #14179: Test coverage for lib/re.py http://bugs.python.org/issue14179 Top 10 most discussed issues (10) ================================= #7652: Merge C version of decimal into py3k. 
http://bugs.python.org/issue7652 23 msgs #14218: include rendered output in addition to markup http://bugs.python.org/issue14218 14 msgs #14127: os.stat and os.utime: allow preserving exact metadata http://bugs.python.org/issue14127 12 msgs #13719: bdist_msi upload fails http://bugs.python.org/issue13719 11 msgs #14191: argparse: nargs='*' doesn't get out-of-order positional parame http://bugs.python.org/issue14191 11 msgs #14228: It is impossible to catch sigint on startup in python code http://bugs.python.org/issue14228 10 msgs #14216: ImportError: No module named binascii http://bugs.python.org/issue14216 8 msgs #11379: Remove "lightweight" from minidom description http://bugs.python.org/issue11379 7 msgs #13964: os.utimensat() and os.futimes() should accept (sec, nsec), dro http://bugs.python.org/issue13964 7 msgs #14200: Idle shell crash on printing non-BMP unicode character http://bugs.python.org/issue14200 7 msgs Issues closed (43) ================== #5689: Support xz compression in tarfile module http://bugs.python.org/issue5689 closed by nadeem.vawda #6715: xz compressor support http://bugs.python.org/issue6715 closed by nadeem.vawda #10369: tarfile requires an actual file on disc; a file-like object is http://bugs.python.org/issue10369 closed by lars.gustaebel #12328: multiprocessing's overlapped PipeConnection on Windows http://bugs.python.org/issue12328 closed by pitrou #12568: Add functions to get the width in columns of a character http://bugs.python.org/issue12568 closed by loewis #13550: Rewrite logging hack of the threading module http://bugs.python.org/issue13550 closed by python-dev #13704: Random number generator in Python core http://bugs.python.org/issue13704 closed by christian.heimes #13860: PyBuffer_FillInfo() return value http://bugs.python.org/issue13860 closed by skrah #13882: PEP 410: Use decimal.Decimal type for timestamps http://bugs.python.org/issue13882 closed by haypo #13970: frameobject should not have f_yieldfrom attribute 
http://bugs.python.org/issue13970 closed by python-dev #13981: time.sleep() should use nanosleep() if available http://bugs.python.org/issue13981 closed by haypo #14071: allow more than one hash seed per process (move _Py_HashSecret http://bugs.python.org/issue14071 closed by haypo #14085: PyUnicode_WRITE: "comparison is always true" warnings http://bugs.python.org/issue14085 closed by python-dev #14100: Add a missing info to PEP 393 + link from whatsnew 3.3 http://bugs.python.org/issue14100 closed by loewis #14114: 2.7.3rc1 chm gives JS error http://bugs.python.org/issue14114 closed by loewis #14122: operator: div() instead of truediv() in documention since 3.1. http://bugs.python.org/issue14122 closed by ezio.melotti #14144: urllib2 HTTPRedirectHandler not forwarding POST data in redire http://bugs.python.org/issue14144 closed by orsenthil #14166: private dispatch table for picklers http://bugs.python.org/issue14166 closed by pitrou #14168: Bug in minidom 3.3 after optimization patch http://bugs.python.org/issue14168 closed by loewis #14171: warnings from valgrind about openssl as used by CPython http://bugs.python.org/issue14171 closed by loewis #14172: ref-counting leak in buffer usage in Python/marshal.c http://bugs.python.org/issue14172 closed by pitrou #14176: Fix unicode literals http://bugs.python.org/issue14176 closed by terry.reedy #14177: marshal.loads accepts unicode strings http://bugs.python.org/issue14177 closed by pitrou #14178: _elementtree problem deleting slices with steps != +1 http://bugs.python.org/issue14178 closed by eli.bendersky #14181: Support getbuffer redirection scheme in memoryview http://bugs.python.org/issue14181 closed by skrah #14188: Sharing site-packages between Apple and python.org builds brea http://bugs.python.org/issue14188 closed by ned.deily #14192: stdout.encoding not set when redirecting windows command line http://bugs.python.org/issue14192 closed by loewis #14193: broken link on PEP 385 
http://bugs.python.org/issue14193 closed by georg.brandl #14194: typo in pep414 http://bugs.python.org/issue14194 closed by georg.brandl #14195: WeakSet has ordering relations wrong http://bugs.python.org/issue14195 closed by meador.inge #14199: Keep a refence to mro in _PyType_Lookup() and super_getattro() http://bugs.python.org/issue14199 closed by python-dev #14205: Raise an error if a dict is modified during a lookup http://bugs.python.org/issue14205 closed by python-dev #14211: Don't rely on borrowed _PyType_Lookup() reference in PyObject_ http://bugs.python.org/issue14211 closed by haypo #14212: Segfault when using re.finditer over mmap http://bugs.python.org/issue14212 closed by python-dev #14213: python33.dll not removed on uninstallation http://bugs.python.org/issue14213 closed by vinay.sajip #14217: text output pretends to be code http://bugs.python.org/issue14217 closed by orsenthil #14220: "yield from" kills generator on re-entry http://bugs.python.org/issue14220 closed by python-dev #14221: re.sub backreferences to numbered groups produce garbage http://bugs.python.org/issue14221 closed by ezio.melotti #14223: curses addch broken on Python3.3a1 http://bugs.python.org/issue14223 closed by python-dev #14231: Fix or drop Lib/test/crashers/borrowed_ref_1.py http://bugs.python.org/issue14231 closed by haypo #14233: argparse: "append" action fails to override default values http://bugs.python.org/issue14233 closed by r.david.murray #14240: lstrip problem http://bugs.python.org/issue14240 closed by amaury.forgeotdarc #1469629: __dict__ = self in subclass of dict causes a memory leak http://bugs.python.org/issue1469629 closed by python-dev From thomas at python.org Fri Mar 9 20:44:56 2012 From: thomas at python.org (Thomas Wouters) Date: Fri, 9 Mar 2012 11:44:56 -0800 Subject: [Python-Dev] Testsuite dependency on _testcapi Message-ID: While testing Python 2.7 internally (at Google) I noticed that (now that ImportErrors aren't automatically test skips) lots of 
tests fail if you don't have the _testcapi module. These tests are (as far as I've seen) properly marked as cpython-only, but when some wacko decides the _testcapi module shouldn't, for example, be shipped to a million machines[*] that are never going to use it, it would be nice to still run the tests that can be run without _testcapi. Any objections to fixing the tests to use test.support.import_module() for _testcapi and a 'needs_testcapi' skipping decorator? To elaborate, we are also not shipping a couple of other modules (like distutils), but it's not unreasonable to expect those to exist (we modify the testsuite for that in our own builds only, instead, as well as making all our code deal with it.) The _testcapi module, however, is internal *and* meant for tests only, and used in quite a few tests (sometimes only in a single testfunction.) [*] 'a million machines' is not the actual number -- I don't know the actual number (but I'm sure it's bigger than that), I'm just tossing out some large number. -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan_ml at behnel.de Fri Mar 9 22:25:57 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Fri, 09 Mar 2012 22:25:57 +0100 Subject: [Python-Dev] performance of generator termination (was: Re: problem with recursive "yield from" delegation) In-Reply-To: <20120308213630.76d247b2@pitrou.net> References: <20120308213630.76d247b2@pitrou.net> Message-ID: Antoine Pitrou, 08.03.2012 21:36: > On Thu, 8 Mar 2012 14:36:06 -0600 > Benjamin Peterson wrote: >> 2012/3/8 Stefan Behnel: >>> Would that be acceptable for CPython as well or would you prefer full >>> fledged normalisation? >> >> I think we have to normalize for correctness. Consider that it may be >> some StopIteration subclass which set "value" on construction. 
> Perhaps it would be time to drop the whole delayed normalization thing,
> provided the benchmarks don't exhibit a slowdown?

At least for Cython, always normalising the exception can make quite a difference. For the nqueens benchmark, which uses short-running generator expressions (not affected by this particular change), a quick hack to fetch, normalise and restore the StopIteration exception raised at the end of the generator expression run reduces the performance by 10% for me. I'd expect code similar to the group item iterator in itertools.groupby() to suffer even worse for very small groups.

A while ago, I wouldn't have expected generator termination to have that much of an impact, but when we dropped the single frame+traceback creation at the end of the generator run in Cython, that boosted the performance of the compiled nqueens benchmark by 70%, and a compiled Python version of itertools.groupby() ran twice as fast as before. These things can make an impressively large difference.

http://thread.gmane.org/gmane.comp.python.cython.devel/12993/focus=13044

I'm using the following in Cython now. Note how complex the pre-3.3 case is; I'm sure that makes it even more worth the special case in older CPython versions (including 2.x).
"""
static int
__Pyx_PyGen_FetchStopIterationValue(PyObject **pvalue) {
    PyObject *et, *ev, *tb;
    PyObject *value = NULL;
    if (PyErr_ExceptionMatches(PyExc_StopIteration)) {
        PyErr_Fetch(&et, &ev, &tb);
        // most common case: plain StopIteration without argument
        if (et == PyExc_StopIteration) {
            if (!ev || !PyObject_IsInstance(ev, PyExc_StopIteration)) {
                // PyErr_SetObject() puts the value directly into ev
                if (!ev) {
                    Py_INCREF(Py_None);
                    ev = Py_None;
                }
                Py_XDECREF(tb);
                Py_DECREF(et);
                *pvalue = ev;
                return 0;
            }
        }
        // otherwise: normalise and check what that gives us
        PyErr_NormalizeException(&et, &ev, &tb);
        if (PyObject_IsInstance(ev, PyExc_StopIteration)) {
            Py_XDECREF(tb);
            Py_DECREF(et);
#if PY_VERSION_HEX >= 0x030300A0
            value = ((PyStopIterationObject *)ev)->value;
            Py_INCREF(value);
            Py_DECREF(ev);
#else
            PyObject* args = PyObject_GetAttrString(ev, "args");
            Py_DECREF(ev);
            if (args) {
                value = PyObject_GetItem(args, 0);
                Py_DECREF(args);
            }
            if (!value)
                PyErr_Clear();
#endif
        } else {
            // looks like normalisation failed - raise the new exception
            PyErr_Restore(et, ev, tb);
            return -1;
        }
    } else if (PyErr_Occurred()) {
        return -1;
    }
    if (value == NULL) {
        Py_INCREF(Py_None);
        value = Py_None;
    }
    *pvalue = value;
    return 0;
}
"""

Stefan

From jimjjewett at gmail.com Fri Mar 9 22:32:33 2012
From: jimjjewett at gmail.com (Jim Jewett)
Date: Fri, 9 Mar 2012 16:32:33 -0500
Subject: [Python-Dev] [Python-checkins] cpython: Close #14205: dict lookup raises a RuntimeError if the dict is modified during
In-Reply-To: 
References: 
Message-ID: 

I do not believe the change set below is valid. As I read it, the new test verifies that one particular type of Nasty key will provoke a RuntimeError -- but that particular type already did so, by hitting the recursion limit. (It doesn't even really mutate the dict.) Meanwhile, the patch throws out tests for several different types of mutations that have caused problems -- even segfaults -- in the past, even after the dict implementation code was already "fixed".
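For reference, the kind of mutating key these tests exercise can be sketched in a few lines. Whether the lookup below raises the new RuntimeError, ends in a KeyError after a retried probe, or simply succeeds depends on the interpreter version, so the sketch deliberately accepts any of those outcomes (MutatingKey is an illustrative name, not from the patch):

```python
class MutatingKey:
    """Key whose __eq__ mutates the dict mid-lookup (one shot, to avoid
    the infinite recursion the old crasher relied on)."""
    target = None  # dict to mutate from inside __eq__; set after construction

    def __hash__(self):
        return 1  # collide with every other MutatingKey

    def __eq__(self, other):
        tgt, MutatingKey.target = MutatingKey.target, None  # fire only once
        if tgt is not None:
            for i in range(16):
                tgt[object()] = i  # grow the dict enough to force a resize
        return self is other

d = {MutatingKey(): "original"}
MutatingKey.target = d
try:
    d[MutatingKey()]  # this lookup resizes d from inside __eq__
    outcome = "lookup succeeded"
except (KeyError, RuntimeError) as exc:  # RuntimeError under the 3.3 patch
    outcome = type(exc).__name__
print(outcome, len(d))
```

The point of keeping such tests is exactly that the observable outcome has changed over the years while the invariant being tested (no crash, no segfault) has not.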
Changing these tests to "assertRaises" would be fine, but they should all be kept; if nothing else, they test whether you've caught all mutation avenues.

-jJ

On Mon, Mar 5, 2012 at 7:13 PM, victor.stinner wrote:
> http://hg.python.org/cpython/rev/934aaf2191d0
> changeset:   75445:934aaf2191d0
> user:        Victor Stinner
> date:        Tue Mar 06 01:03:13 2012 +0100
> summary:
>   Close #14205: dict lookup raises a RuntimeError if the dict is modified during
> a lookup.
>
> "if you want to make a sandbox on top of CPython, you have to fix segfaults"
> so let's fix segfaults!
>
> files:
>   Lib/test/crashers/nasty_eq_vs_dict.py |   47 --
>   Lib/test/test_dict.py                 |   22 +-
>   Lib/test/test_mutants.py              |  291 --------------
>   Misc/NEWS                             |    5 +-
>   Objects/dictobject.c                  |   18 +-
>   5 files changed, 31 insertions(+), 352 deletions(-)
>
>
> diff --git a/Lib/test/crashers/nasty_eq_vs_dict.py b/Lib/test/crashers/nasty_eq_vs_dict.py
> deleted file mode 100644
> --- a/Lib/test/crashers/nasty_eq_vs_dict.py
> +++ /dev/null
> @@ -1,47 +0,0 @@
> -# from http://mail.python.org/pipermail/python-dev/2001-June/015239.html
> -
> -# if you keep changing a dictionary while looking up a key, you can
> -# provoke an infinite recursion in C
> -
> -# At the time neither Tim nor Michael could be bothered to think of a
> -# way to fix it.
> -
> -class Yuck:
> -    def __init__(self):
> -        self.i = 0
> -
> -    def make_dangerous(self):
> -        self.i = 1
> -
> -    def __hash__(self):
> -        # direct to slot 4 in table of size 8; slot 12 when size 16
> -        return 4 + 8
> -
> -    def __eq__(self, other):
> -        if self.i == 0:
> -            # leave dict alone
> -            pass
> -        elif self.i == 1:
> -            # fiddle to 16 slots
> -            self.__fill_dict(6)
> -            self.i = 2
> -        else:
> -            # fiddle to 8 slots
> -            self.__fill_dict(4)
> -            self.i = 1
> -
> -        return 1
> -
> -    def __fill_dict(self, n):
> -        self.i = 0
> -        dict.clear()
> -        for i in range(n):
> -            dict[i] = i
> -        dict[self] = "OK!"
> -
> -y = Yuck()
> -dict = {y: "OK!"}
> -
> -z = Yuck()
> -y.make_dangerous()
> -print(dict[z])
> diff --git a/Lib/test/test_dict.py b/Lib/test/test_dict.py
> --- a/Lib/test/test_dict.py
> +++ b/Lib/test/test_dict.py
> @@ -379,7 +379,7 @@
>          x.fail = True
>          self.assertRaises(Exc, d.pop, x)
>
> -    def test_mutatingiteration(self):
> +    def test_mutating_iteration(self):
>          # changing dict size during iteration
>          d = {}
>          d[1] = 1
> @@ -387,6 +387,26 @@
>              for i in d:
>                  d[i+1] = 1
>
> +    def test_mutating_lookup(self):
> +        # changing dict during a lookup
> +        class NastyKey:
> +            mutate_dict = None
> +
> +            def __hash__(self):
> +                # hash collision!
> +                return 1
> +
> +            def __eq__(self, other):
> +                if self.mutate_dict:
> +                    self.mutate_dict[self] = 1
> +                return self == other
> +
> +        d = {}
> +        d[NastyKey()] = 0
> +        NastyKey.mutate_dict = d
> +        with self.assertRaises(RuntimeError):
> +            d[NastyKey()] = None
> +
>      def test_repr(self):
>          d = {}
>          self.assertEqual(repr(d), '{}')
> diff --git a/Lib/test/test_mutants.py b/Lib/test/test_mutants.py
> deleted file mode 100644
> --- a/Lib/test/test_mutants.py
> +++ /dev/null
> @@ -1,291 +0,0 @@
> -from test.support import verbose, TESTFN
> -import random
> -import os
> -
> -# From SF bug #422121:  Insecurities in dict comparison.
> -
> -# Safety of code doing comparisons has been an historical Python weak spot.
> -# The problem is that comparison of structures written in C *naturally*
> -# wants to hold on to things like the size of the container, or "the
> -# biggest" containee so far, across a traversal of the container; but
> -# code to do containee comparisons can call back into Python and mutate
> -# the container in arbitrary ways while the C loop is in midstream.  If the
> -# C code isn't extremely paranoid about digging things out of memory on
> -# each trip, and artificially boosting refcounts for the duration, anything
> -# from infinite loops to OS crashes can result (yes, I use Windows ).
> -#
> -# The other problem is that code designed to provoke a weakness is usually
> -# white-box code, and so catches only the particular vulnerabilities the
> -# author knew to protect against.  For example, Python's list.sort() code
> -# went thru many iterations as one "new" vulnerability after another was
> -# discovered.
> -#
> -# So the dict comparison test here uses a black-box approach instead,
> -# generating dicts of various sizes at random, and performing random
> -# mutations on them at random times.  This proved very effective,
> -# triggering at least six distinct failure modes the first 20 times I
> -# ran it.  Indeed, at the start, the driver never got beyond 6 iterations
> -# before the test died.
> -
> -# The dicts are global to make it easy to mutate tham from within functions.
> -dict1 = {}
> -dict2 = {}
> -
> -# The current set of keys in dict1 and dict2.  These are materialized as
> -# lists to make it easy to pick a dict key at random.
> -dict1keys = []
> -dict2keys = []
> -
> -# Global flag telling maybe_mutate() whether to *consider* mutating.
> -mutate = 0
> -
> -# If global mutate is true, consider mutating a dict.  May or may not
> -# mutate a dict even if mutate is true.  If it does decide to mutate a
> -# dict, it picks one of {dict1, dict2} at random, and deletes a random
> -# entry from it; or, more rarely, adds a random element.
> - > -def maybe_mutate(): > - ? ?global mutate > - ? ?if not mutate: > - ? ? ? ?return > - ? ?if random.random() < 0.5: > - ? ? ? ?return > - > - ? ?if random.random() < 0.5: > - ? ? ? ?target, keys = dict1, dict1keys > - ? ?else: > - ? ? ? ?target, keys = dict2, dict2keys > - > - ? ?if random.random() < 0.2: > - ? ? ? ?# Insert a new key. > - ? ? ? ?mutate = 0 ? # disable mutation until key inserted > - ? ? ? ?while 1: > - ? ? ? ? ? ?newkey = Horrid(random.randrange(100)) > - ? ? ? ? ? ?if newkey not in target: > - ? ? ? ? ? ? ? ?break > - ? ? ? ?target[newkey] = Horrid(random.randrange(100)) > - ? ? ? ?keys.append(newkey) > - ? ? ? ?mutate = 1 > - > - ? ?elif keys: > - ? ? ? ?# Delete a key at random. > - ? ? ? ?mutate = 0 ? # disable mutation until key deleted > - ? ? ? ?i = random.randrange(len(keys)) > - ? ? ? ?key = keys[i] > - ? ? ? ?del target[key] > - ? ? ? ?del keys[i] > - ? ? ? ?mutate = 1 > - > -# A horrid class that triggers random mutations of dict1 and dict2 when > -# instances are compared. > - > -class Horrid: > - ? ?def __init__(self, i): > - ? ? ? ?# Comparison outcomes are determined by the value of i. > - ? ? ? ?self.i = i > - > - ? ? ? ?# An artificial hashcode is selected at random so that we don't > - ? ? ? ?# have any systematic relationship between comparison outcomes > - ? ? ? ?# (based on self.i and other.i) and relative position within the > - ? ? ? ?# hash vector (based on hashcode). > - ? ? ? ?# XXX This is no longer effective. > - ? ? ? ?##self.hashcode = random.randrange(1000000000) > - > - ? ?def __hash__(self): > - ? ? ? ?return 42 > - ? ? ? ?return self.hashcode > - > - ? ?def __eq__(self, other): > - ? ? ? ?maybe_mutate() ? # The point of the test. > - ? ? ? ?return self.i == other.i > - > - ? ?def __ne__(self, other): > - ? ? ? ?raise RuntimeError("I didn't expect some kind of Spanish inquisition!") > - > - ? ?__lt__ = __le__ = __gt__ = __ge__ = __ne__ > - > - ? ?def __repr__(self): > - ? ? ? 
?return "Horrid(%d)" % self.i > - > -# Fill dict d with numentries (Horrid(i), Horrid(j)) key-value pairs, > -# where i and j are selected at random from the candidates list. > -# Return d.keys() after filling. > - > -def fill_dict(d, candidates, numentries): > - ? ?d.clear() > - ? ?for i in range(numentries): > - ? ? ? ?d[Horrid(random.choice(candidates))] = \ > - ? ? ? ? ? ?Horrid(random.choice(candidates)) > - ? ?return list(d.keys()) > - > -# Test one pair of randomly generated dicts, each with n entries. > -# Note that dict comparison is trivial if they don't have the same number > -# of entires (then the "shorter" dict is instantly considered to be the > -# smaller one, without even looking at the entries). > - > -def test_one(n): > - ? ?global mutate, dict1, dict2, dict1keys, dict2keys > - > - ? ?# Fill the dicts without mutating them. > - ? ?mutate = 0 > - ? ?dict1keys = fill_dict(dict1, range(n), n) > - ? ?dict2keys = fill_dict(dict2, range(n), n) > - > - ? ?# Enable mutation, then compare the dicts so long as they have the > - ? ?# same size. > - ? ?mutate = 1 > - ? ?if verbose: > - ? ? ? ?print("trying w/ lengths", len(dict1), len(dict2), end=' ') > - ? ?while dict1 and len(dict1) == len(dict2): > - ? ? ? ?if verbose: > - ? ? ? ? ? ?print(".", end=' ') > - ? ? ? ?c = dict1 == dict2 > - ? ?if verbose: > - ? ? ? ?print() > - > -# Run test_one n times. ?At the start (before the bugs were fixed), 20 > -# consecutive runs of this test each blew up on or before the sixth time > -# test_one was run. ?So n doesn't have to be large to get an interesting > -# test. > -# OTOH, calling with large n is also interesting, to ensure that the fixed > -# code doesn't hold on to refcounts *too* long (in which case memory would > -# leak). > - > -def test(n): > - ? ?for i in range(n): > - ? ? ? ?test_one(random.randrange(1, 100)) > - > -# See last comment block for clues about good values for n. 
> -test(100) > - > -########################################################################## > -# Another segfault bug, distilled by Michael Hudson from a c.l.py post. > - > -class Child: > - ? ?def __init__(self, parent): > - ? ? ? ?self.__dict__['parent'] = parent > - ? ?def __getattr__(self, attr): > - ? ? ? ?self.parent.a = 1 > - ? ? ? ?self.parent.b = 1 > - ? ? ? ?self.parent.c = 1 > - ? ? ? ?self.parent.d = 1 > - ? ? ? ?self.parent.e = 1 > - ? ? ? ?self.parent.f = 1 > - ? ? ? ?self.parent.g = 1 > - ? ? ? ?self.parent.h = 1 > - ? ? ? ?self.parent.i = 1 > - ? ? ? ?return getattr(self.parent, attr) > - > -class Parent: > - ? ?def __init__(self): > - ? ? ? ?self.a = Child(self) > - > -# Hard to say what this will print! ?May vary from time to time. ?But > -# we're specifically trying to test the tp_print slot here, and this is > -# the clearest way to do it. ?We print the result to a temp file so that > -# the expected-output file doesn't need to change. > - > -f = open(TESTFN, "w") > -print(Parent().__dict__, file=f) > -f.close() > -os.unlink(TESTFN) > - > -########################################################################## > -# And another core-dumper from Michael Hudson. > - > -dict = {} > - > -# Force dict to malloc its table. > -for i in range(1, 10): > - ? ?dict[i] = i > - > -f = open(TESTFN, "w") > - > -class Machiavelli: > - ? ?def __repr__(self): > - ? ? ? ?dict.clear() > - > - ? ? ? ?# Michael sez: ?"doesn't crash without this. ?don't know why." > - ? ? ? ?# Tim sez: ?"luck of the draw; crashes with or without for me." > - ? ? ? ?print(file=f) > - > - ? ? ? ?return repr("machiavelli") > - > - ? ?def __hash__(self): > - ? ? ? ?return 0 > - > -dict[Machiavelli()] = Machiavelli() > - > -print(str(dict), file=f) > -f.close() > -os.unlink(TESTFN) > -del f, dict > - > - > -########################################################################## > -# And another core-dumper from Michael Hudson. 
> - > -dict = {} > - > -# let's force dict to malloc its table > -for i in range(1, 10): > - ? ?dict[i] = i > - > -class Machiavelli2: > - ? ?def __eq__(self, other): > - ? ? ? ?dict.clear() > - ? ? ? ?return 1 > - > - ? ?def __hash__(self): > - ? ? ? ?return 0 > - > -dict[Machiavelli2()] = Machiavelli2() > - > -try: > - ? ?dict[Machiavelli2()] > -except KeyError: > - ? ?pass > - > -del dict > - > -########################################################################## > -# And another core-dumper from Michael Hudson. > - > -dict = {} > - > -# let's force dict to malloc its table > -for i in range(1, 10): > - ? ?dict[i] = i > - > -class Machiavelli3: > - ? ?def __init__(self, id): > - ? ? ? ?self.id = id > - > - ? ?def __eq__(self, other): > - ? ? ? ?if self.id == other.id: > - ? ? ? ? ? ?dict.clear() > - ? ? ? ? ? ?return 1 > - ? ? ? ?else: > - ? ? ? ? ? ?return 0 > - > - ? ?def __repr__(self): > - ? ? ? ?return "%s(%s)"%(self.__class__.__name__, self.id) > - > - ? ?def __hash__(self): > - ? ? ? ?return 0 > - > -dict[Machiavelli3(1)] = Machiavelli3(0) > -dict[Machiavelli3(2)] = Machiavelli3(0) > - > -f = open(TESTFN, "w") > -try: > - ? ?try: > - ? ? ? ?print(dict[Machiavelli3(2)], file=f) > - ? ?except KeyError: > - ? ? ? ?pass > -finally: > - ? ?f.close() > - ? ?os.unlink(TESTFN) > - > -del dict > -del dict1, dict2, dict1keys, dict2keys > diff --git a/Misc/NEWS b/Misc/NEWS > --- a/Misc/NEWS > +++ b/Misc/NEWS > @@ -10,10 +10,13 @@ > ?Core and Builtins > ?----------------- > > +- Issue #14205: dict lookup raises a RuntimeError if the dict is modified > + ?during a lookup. > + > ?Library > ?------- > > -- Issue #14168: Check for presence of Element._attrs in minidom before > +- Issue #14168: Check for presence of Element._attrs in minidom before > ? accessing it. > > ?- Issue #12328: Fix multiprocessing's use of overlapped I/O on Windows. 
> diff --git a/Objects/dictobject.c b/Objects/dictobject.c > --- a/Objects/dictobject.c > +++ b/Objects/dictobject.c > @@ -347,12 +347,9 @@ > ? ? ? ? ? ? ? ? ? ? return ep; > ? ? ? ? ? ? } > ? ? ? ? ? ? else { > - ? ? ? ? ? ? ? ?/* The compare did major nasty stuff to the > - ? ? ? ? ? ? ? ? * dict: ?start over. > - ? ? ? ? ? ? ? ? * XXX A clever adversary could prevent this > - ? ? ? ? ? ? ? ? * XXX from terminating. > - ? ? ? ? ? ? ? ? */ > - ? ? ? ? ? ? ? ?return lookdict(mp, key, hash); > + ? ? ? ? ? ? ? ?PyErr_SetString(PyExc_RuntimeError, > + ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?"dictionary changed size during lookup"); > + ? ? ? ? ? ? ? ?return NULL; > ? ? ? ? ? ? } > ? ? ? ? } > ? ? ? ? freeslot = NULL; > @@ -379,12 +376,9 @@ > ? ? ? ? ? ? ? ? ? ? return ep; > ? ? ? ? ? ? } > ? ? ? ? ? ? else { > - ? ? ? ? ? ? ? ?/* The compare did major nasty stuff to the > - ? ? ? ? ? ? ? ? * dict: ?start over. > - ? ? ? ? ? ? ? ? * XXX A clever adversary could prevent this > - ? ? ? ? ? ? ? ? * XXX from terminating. > - ? ? ? ? ? ? ? ? */ > - ? ? ? ? ? ? ? ?return lookdict(mp, key, hash); > + ? ? ? ? ? ? ? ?PyErr_SetString(PyExc_RuntimeError, > + ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?"dictionary changed size during lookup"); > + ? ? ? ? ? ? ? ?return NULL; > ? ? ? ? ? ? } > ? ? ? ? } > ? ? ? ? 
else if (ep->me_key == dummy && freeslot == NULL) > > -- > Repository URL: http://hg.python.org/cpython > > _______________________________________________ > Python-checkins mailing list > Python-checkins at python.org > http://mail.python.org/mailman/listinfo/python-checkins > From victor.stinner at gmail.com Fri Mar 9 23:27:45 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Fri, 09 Mar 2012 23:27:45 +0100 Subject: [Python-Dev] [Python-checkins] cpython: Close #14205: dict lookup raises a RuntimeError if the dict is modified during In-Reply-To: References: Message-ID: <4F5A83E1.40108@gmail.com> On 09/03/2012 22:32, Jim Jewett wrote: > I do not believe the change set below is valid. > > As I read it, the new test verifies that one particular type of Nasty > key will provoke a RuntimeError -- but that particular type already > did so, by hitting the recursion limit. (It doesn't even really > mutate the dict.) Oh yes, thanks for the report. I fixed that test. > Meanwhile, the patch throws out tests for several different types of > mutations that have caused problems -- even segfaults -- in the past, > even after the dict implementation code was already "fixed". > > Changing these tests to "assertRaises" would be fine, but they should > all be kept; if nothing else, they test whether you've caught all > mutation avenues. I ran all these tests, none is still crashing. I don't think that it is interesting to keep them. Victor From yselivanov.ml at gmail.com Fri Mar 9 23:38:05 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Fri, 9 Mar 2012 17:38:05 -0500 Subject: [Python-Dev] [Python-checkins] cpython: Close #14205: dict lookup raises a RuntimeError if the dict is modified during In-Reply-To: <4F5A83E1.40108@gmail.com> References: <4F5A83E1.40108@gmail.com> Message-ID: Actually, I too noticed that you've dropped few crasher tests. I think we need to keep them, to make sure that future development will not introduce the same vulnerabilities. 
That's a common practice with unit-testing. On 2012-03-09, at 5:27 PM, Victor Stinner wrote: > On 09/03/2012 22:32, Jim Jewett wrote: >> I do not believe the change set below is valid. >> >> As I read it, the new test verifies that one particular type of Nasty >> key will provoke a RuntimeError -- but that particular type already >> did so, by hitting the recursion limit. (It doesn't even really >> mutate the dict.) > > Oh yes, thanks for the report. I fixed that test. > >> Meanwhile, the patch throws out tests for several different types of >> mutations that have caused problems -- even segfaults -- in the past, >> even after the dict implementation code was already "fixed". >> >> Changing these tests to "assertRaises" would be fine, but they should >> all be kept; if nothing else, they test whether you've caught all >> mutation avenues. > > I ran all these tests, none is still crashing. I don't think that it is interesting to keep them. > > Victor > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/yselivanov.ml%40gmail.com From ethan at stoneleaf.us Fri Mar 9 23:40:59 2012 From: ethan at stoneleaf.us (Ethan Furman) Date: Fri, 09 Mar 2012 14:40:59 -0800 Subject: [Python-Dev] [Python-checkins] cpython: Close #14205: dict lookup raises a RuntimeError if the dict is modified during In-Reply-To: <4F5A83E1.40108@gmail.com> References: <4F5A83E1.40108@gmail.com> Message-ID: <4F5A86FB.7090900@stoneleaf.us> Victor Stinner wrote: > On 09/03/2012 22:32, Jim Jewett wrote: >> I do not believe the change set below is valid. >> >> As I read it, the new test verifies that one particular type of Nasty >> key will provoke a RuntimeError -- but that particular type already >> did so, by hitting the recursion limit. (It doesn't even really >> mutate the dict.) > > Oh yes, thanks for the report. I fixed that test. 
> >> Meanwhile, the patch throws out tests for several different types of >> mutations that have caused problems -- even segfaults -- in the past, >> even after the dict implementation code was already "fixed". >> >> Changing these tests to "assertRaises" would be fine, but they should >> all be kept; if nothing else, they test whether you've caught all >> mutation avenues. > > I ran all these tests, none is still crashing. I don't think that it is > interesting to keep them. Aren't these regression tests? To be kept to make sure they don't fail in the future? ~Ethan~ From brett at python.org Fri Mar 9 23:54:15 2012 From: brett at python.org (Brett Cannon) Date: Fri, 9 Mar 2012 17:54:15 -0500 Subject: [Python-Dev] Testsuite dependency on _testcapi In-Reply-To: References: Message-ID: On Fri, Mar 9, 2012 at 14:44, Thomas Wouters wrote: > > While testing Python 2.7 internally (at Google) I noticed that (now that > ImportErrors aren't automatically test skips) lots of tests fail if you > don't have the _testcapi module. These tests are (as far as I've seen) > properly marked as cpython-only, but when some wacko decides the _testcapi > module shouldn't, for example, be shipped to a million machines[*] that are > never going to use it, it would be nice to still run the tests that can be > run without _testcapi. Any objections to fixing the tests to use > test.support.import_module() for _testcapi and a 'needs_testcapi' skipping > decorator? > I have no issue with the test.support.import_module() use, although does it really require a full-on decorator? Is there a way to make it more generic to simply take a module name and if the import raises ImportError the test is skipped? -Brett > > To elaborate, we are also not shipping a couple of other modules (like > distutils), but it's not unreasonable to expect those to exist (we modify > the testsuite for that in our own builds only, instead, as well as making > all our code deal with it.) 
The _testcapi module, however, is internal > *and* meant for tests only, and used in quite a few tests (sometimes only > in a single testfunction.) > > [*] 'a million machines' is not the actual number -- I don't know the > actual number (but I'm sure it's bigger than that), I'm just tossing out > some large number. > -- > Thomas Wouters > > Hi! I'm a .signature virus! copy me into your .signature file to help me > spread! > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/brett%40python.org > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From benjamin at python.org Fri Mar 9 23:56:40 2012 From: benjamin at python.org (Benjamin Peterson) Date: Fri, 9 Mar 2012 16:56:40 -0600 Subject: [Python-Dev] Testsuite dependency on _testcapi In-Reply-To: References: Message-ID: 2012/3/9 Thomas Wouters : > > While testing Python 2.7 internally (at Google) I noticed that (now that > ImportErrors aren't automatically test skips) lots of tests fail if you > don't have the _testcapi module. These tests are (as far as I've seen) > properly marked as cpython-only, but when some wacko decides the _testcapi > module shouldn't, for example, be shipped to a million machines[*] that are > never going to use it, it would be nice to still run the tests that can be > run without _testcapi. Any objections to fixing the tests to use > test.support.import_module() for _testcapi and a 'needs_testcapi' skipping > decorator? Sounds fine to me. Post a patch. 
-- Regards, Benjamin
From steve at pearwood.info  Fri Mar  9 23:56:28 2012
From: steve at pearwood.info (Steven D'Aprano)
Date: Sat, 10 Mar 2012 09:56:28 +1100
Subject: [Python-Dev] [Python-checkins] cpython: Close #14205: dict lookup
	raises a RuntimeError if the dict is modified during
In-Reply-To: References: <4F5A83E1.40108@gmail.com>
Message-ID: <20120309225628.GA2442@ando>

On Fri, Mar 09, 2012 at 05:38:05PM -0500, Yury Selivanov wrote:
> Actually, I too noticed that you've dropped a few crasher tests. I think
> we need to keep them, to make sure that future development will not
> introduce the same vulnerabilities. That's a common practice with
> unit-testing.

The term for this is "regression testing" -- when you fix a bug, you
keep a test for that bug forever, to ensure that you never have a
regression that re-introduces the bug.

-- Steven

From victor.stinner at gmail.com  Fri Mar  9 23:56:51 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Fri, 09 Mar 2012 23:56:51 +0100
Subject: [Python-Dev] Testsuite dependency on _testcapi
In-Reply-To: References: Message-ID: <4F5A8AB3.3090300@gmail.com>

On 09/03/2012 20:44, Thomas Wouters wrote:
> (...) it would be nice to still
> run the tests that can be run without _testcapi. Any objections to
> fixing the tests to use test.support.import_module() for _testcapi and a
> 'needs_testcapi' skipping decorator?

test.support.import_module() looks fine for such an issue. I'm guilty of
adding new "from _testcapi import ..." in some tests (test_unicode at
least). You could add a test.support.import_module() call to such tests.
Victor

From victor.stinner at gmail.com  Sat Mar 10 11:44:52 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Sat, 10 Mar 2012 11:44:52 +0100
Subject: [Python-Dev] PEP 416: Add a frozendict builtin type
In-Reply-To: References: <1330541549.7844.69.camel@surprise>
Message-ID: <4F5B30A4.6080306@gmail.com>

On 01/03/2012 14:49, Paul Moore wrote:
> Just avoid using the term "immutable" at all:
>

You're right, I removed mention of mutable/immutable from the PEP.

Victor

From ncoghlan at gmail.com  Sat Mar 10 16:13:43 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 11 Mar 2012 01:13:43 +1000
Subject: [Python-Dev] [Python-checkins] cpython: Close #14205: dict lookup
	raises a RuntimeError if the dict is modified during
In-Reply-To: <4F5A83E1.40108@gmail.com>
References: <4F5A83E1.40108@gmail.com>
Message-ID: 

On Sat, Mar 10, 2012 at 8:27 AM, Victor Stinner wrote:
> I ran all these tests, none is still crashing. I don't think that it is
> interesting to keep them.

Indeed, please add them all back as regular parts of the test suite -
this ensures that not only are they fixed now, but they never break
again due to other changes. Feel free to create a dedicated
"test_fixed_crashers.py" to explain the origins of a fairly eclectic
group of tests.

Regards, Nick.

-- Nick Coghlan | ncoghlan at gmail.com |
Brisbane, Australia From tjreedy at udel.edu Sat Mar 10 17:23:45 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Sat, 10 Mar 2012 11:23:45 -0500 Subject: [Python-Dev] [Python-checkins] peps: PEP 416: remove mentions of mutable/immutable In-Reply-To: References: Message-ID: <4F5B8011.2060809@udel.edu> On 3/10/2012 5:43 AM, victor.stinner wrote: > http://hg.python.org/peps/rev/7278026a5db9 > changeset: 4124:7278026a5db9 > user: Victor Stinner > date: Sat Mar 10 11:43:45 2012 +0100 > summary: > PEP 416: remove mentions of mutable/immutable > > files: > pep-0416.txt | 5 ++--- > 1 files changed, 2 insertions(+), 3 deletions(-) > > > diff --git a/pep-0416.txt b/pep-0416.txt > --- a/pep-0416.txt > +++ b/pep-0416.txt > @@ -20,9 +20,8 @@ > ========= > > A frozendict is a read-only mapping: a key cannot be added nor removed, and a > -key is always mapped to the same value. However, frozendict values can be > -mutable (not hashable). A frozendict is hashable and so immutable if and only > -if all values are hashable (immutable). > +key is always mapped to the same value. However, frozendict values can be not > +hashable. A frozendict is hashable if and only if all values are hashable. I think this would be better with 'key is always mapped to the same value object'. The revised second sentence does not add anything. Some Python objects are hashable and some not. There is nothing special about frozendict value objects either way. The proper revision of your original is "However, frozendict value may not be hashable." Either change to that or remove it. Terry From thomas at python.org Sat Mar 10 23:49:24 2012 From: thomas at python.org (Thomas Wouters) Date: Sat, 10 Mar 2012 14:49:24 -0800 Subject: [Python-Dev] Zipping the standard library. Message-ID: Since Python 2.3 (with the introduction of the zipimport module) it's been sort-of possible to zip up the standard library. 
Modules/getpath.c:calculate_path even adds a specific location
($prefix/lib/python33.zip) to sys.path if it exists to facilitate that. Or
you can include the zipfile alongside an application that embeds Python, or
even embed the zipfile in the same application.

Actually setting things up is not quite that simple, though, at least on
non-Windows: you need to know what to include in the zipfile (only .py,
.pyc and .pyo files, no .so files), what to leave in the old location
(os.py, or at least *some file called os.py*, needs to stay in
$prefix/lib/python3.3) and how to deal with tests and modules that don't
like the stdlib living in zipfiles -- and there's more of those than I
expected. Also, depending on what else you want to put in the zipfile, you
may have to be aware of zipimport's limited implementation of zipfiles,
which involves various 32k-filecount and 2Gb-filesize limits. (And in case
you're wondering, yes, we are doing this with Python 2.7 at Google to save
space. And yes, hitting the 2Gb limit is quite possible for us.)

So with importlib going in, should we do something with zipimport as well?
Its deficiencies can easily be fixed by reimplementing it in Python instead
-- the zipfile module has long since fixed the same 32k/2Gb issues (reading
signed instead of unsigned numbers) and actually supports zip64 extensions
(to break the 64k-filecount and 4Gb-filesize limits in "normal" zipfiles.)

Actually supporting zipping the stdlib then becomes a bit harder: the
importlib bootstrapping would need to include the zipfile module. If we do
that, it would be nice to actually support zipping the stdlib in the Python
build: making a build target that actually does that, and runs the tests
with it. However, this requires modification of a whole bunch of tests, for
example ones that assume that stdlib modules (and the tests themselves!)
have actual files you can open() as their __file__ attribute, and we'd need
to run the testsuite with the stdlib as a zip to prevent new ones from
sneaking in. Also, at that point the question becomes whether we need a
transparent interface for opening module sourcefiles or arbitrary files
living in packages, that could grab things out of zipfiles (like setuptools
has in... one of the modules) -- or other archives of course.

(And, yes, I'm zipping up the stdlib for Python 2.7 at Google, to reduce
the impact on the aforementioned million machines :)

-- Thomas Wouters

Hi! I'm a .signature virus! copy me into your .signature file to help me
spread!
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From solipsis at pitrou.net  Sat Mar 10 23:53:10 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat, 10 Mar 2012 23:53:10 +0100
Subject: [Python-Dev] Zipping the standard library.
References: Message-ID: <20120310235310.28f12237@pitrou.net>

On Sat, 10 Mar 2012 14:49:24 -0800
Thomas Wouters wrote:
> Also, depending on what else you
> want to put in the zipfile, you may have to be aware of zipimport's limited
> implementation of zipfiles, which involves various 32k-filecount and
> 2Gb-filesize limits. (And in case you're wondering, yes, we are doing this
> with Python 2.7 at Google to save space. And yes, hitting the 2Gb limit is
> quite possible for us.)

Have you investigated filesystems with built-in compression? :)

> Also, at that point the question becomes whether we need a
> transparent interface for opening module sourcefiles or arbitrary files
> living in packages, that could grab things out of zipfiles (like setuptools
> has in... one of the modules) -- or other archives of course.

You mean pkg_resources, I suppose. The problem is this kind of API is a
PITA to use compared to the simple and obvious open().

Regards Antoine.
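The trade-off Antoine describes (a loader-aware API versus a plain open()) can be made concrete with pkgutil.get_data() from the stdlib. The sketch below is illustrative only: the package name "pkgdata_demo" and the file name "payload.txt" are invented for the example, and the archive is built on the fly so the code is self-contained.

```python
import os
import pkgutil
import sys
import tempfile
import zipfile

# Build a zip archive containing a tiny package (hypothetical names:
# "pkgdata_demo", "payload.txt") and put the archive itself on sys.path,
# so the package is imported through zipimport rather than from a
# directory on disk.
tmpdir = tempfile.mkdtemp()
archive = os.path.join(tmpdir, "bundle.zip")
with zipfile.ZipFile(archive, "w") as zf:
    zf.writestr("pkgdata_demo/__init__.py", "")
    zf.writestr("pkgdata_demo/payload.txt", "hello from inside a zip")
sys.path.insert(0, archive)

# pkgutil.get_data() asks the package's loader for the resource bytes,
# so the same call works whether the package lives in a directory or in
# a zip archive.
data = pkgutil.get_data("pkgdata_demo", "payload.txt")
print(data.decode())
```

A plain open() on a path derived from the module's __file__ would fail in this setup, because the "file" only exists inside the archive; get_data() instead routes the read through the loader (zipimport, in this case).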
From pje at telecommunity.com  Sun Mar 11 07:16:12 2012
From: pje at telecommunity.com (PJ Eby)
Date: Sun, 11 Mar 2012 01:16:12 -0500
Subject: [Python-Dev] Zipping the standard library.
In-Reply-To: References: Message-ID: 

On Sat, Mar 10, 2012 at 5:49 PM, Thomas Wouters wrote:
> (And, yes, I'm zipping up the stdlib for Python 2.7 at Google, to reduce
> the impact on the aforementioned million machines :)
>

You might want to consider instead backporting the importlib caching
facility, since it provides some of the zipimport benefits for plain old,
non-zipped modules. Actually, a caching-only import hook that operated that
way wouldn't even need the whole of importlib, just a wrapper over the
standard C import that skips the unnecessary filesystem accesses.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From thomas at python.org  Sun Mar 11 20:26:05 2012
From: thomas at python.org (Thomas Wouters)
Date: Sun, 11 Mar 2012 12:26:05 -0700
Subject: [Python-Dev] Zipping the standard library.
In-Reply-To: References: Message-ID: 

On Sat, Mar 10, 2012 at 22:16, PJ Eby wrote:
> On Sat, Mar 10, 2012 at 5:49 PM, Thomas Wouters wrote:
>
>> (And, yes, I'm zipping up the stdlib for Python 2.7 at Google, to reduce
>> the impact on the aforementioned million machines :)
>>
>
> You might want to consider instead backporting the importlib caching
> facility, since it provides some of the zipimport benefits for plain old,
> non-zipped modules. Actually, a caching-only import hook that operated
> that way wouldn't even need the whole of importlib, just a wrapper over the
> standard C import that skips the unnecessary filesystem accesses.
>

Thanks for the suggestions (Antoine too), but that's not really the topic I
want to discuss here (but if you guys move to Google I'll happily discuss
all the stuff we have to deal with.) The question is really whether Python
wants to actually support zipped stdlibs or not.

-- Thomas Wouters

Hi!
I'm a .signature virus! copy me into your .signature file to help me
spread!
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ethan at stoneleaf.us  Sun Mar 11 21:11:23 2012
From: ethan at stoneleaf.us (Ethan Furman)
Date: Sun, 11 Mar 2012 13:11:23 -0700
Subject: [Python-Dev] im_func: implementation detail?
Message-ID: <4F5D06EB.8050602@stoneleaf.us>

How does someone know if something in CPython is an implementation
detail or not?

In the case of im_func, I think it is (an implementation detail), and
another person thinks it is part of the language spec, something that
all implementations must have.

~Ethan~

From guido at python.org  Sun Mar 11 21:53:31 2012
From: guido at python.org (Guido van Rossum)
Date: Sun, 11 Mar 2012 13:53:31 -0700
Subject: [Python-Dev] im_func: implementation detail?
In-Reply-To: <4F5D06EB.8050602@stoneleaf.us>
References: <4F5D06EB.8050602@stoneleaf.us>
Message-ID: 

On Sun, Mar 11, 2012 at 1:11 PM, Ethan Furman wrote:
> How does someone know if something in CPython is an implementation detail or
> not?

Sadly, there's a large grey area where the language reference doesn't
have enough rigor, so asking here is often the only way.

> In the case of im_func, I think it is (an implementation detail), and
> another person thinks it is part of the language spec, something that
> all implementations must have.

It's part of the language spec. However it's now called __func__.

-- --Guido van Rossum (python.org/~guido)

From guido at python.org  Sun Mar 11 22:08:18 2012
From: guido at python.org (Guido van Rossum)
Date: Sun, 11 Mar 2012 14:08:18 -0700
Subject: [Python-Dev] Zipping the standard library.
In-Reply-To: References: Message-ID: 

On Sun, Mar 11, 2012 at 12:26 PM, Thomas Wouters wrote:
> Thanks for the suggestions (Antoine too), but that's not really the topic I
> want to discuss here (but if you guys move to Google I'll happily discuss
> all the stuff we have to deal with.)
The question is really whether Python > wants to actually support zipped stdlibs or not. I do want to support it; that's why we put the facilities you found there in the first place. Unfortunately nobody actually did the necessary second step of trying to bundle the stdlib and trying to make the tests pass. So I think it would be great if we addressed the issues you found, or at least started prioritizing them. I'm not sure if you're saying that you're hitting the 2 GB limit *with just the stdlib* in a zipfile, or if you're hitting this after you've added a bunch of Google code to it as well. I'm also not sure that it's worth the effort to make *all* the tests in the stdlib pass -- some tests may just be testing filesystem things that make no sense when the stdlib is in a zipfile. I see you frowning already about my lax attitude... So let me add that all non-test code should definitely work, and quite possibly the only way to ensure that this is the case is to make all the tests pass. The issue with needing os.py outside the zipfile is a good thing to try to fix. The importlib and zipfile issues don't worry me particularly, but depending on your answer about the 2 GB limit I might get more concerned. -- --Guido van Rossum (python.org/~guido) From ncoghlan at gmail.com Sun Mar 11 22:40:05 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 12 Mar 2012 07:40:05 +1000 Subject: [Python-Dev] Zipping the standard library. In-Reply-To: References: Message-ID: On Mon, Mar 12, 2012 at 7:08 AM, Guido van Rossum wrote: > I do want to support it; that's why we put the facilities you found > there in the first place. Unfortunately nobody actually did the > necessary second step of trying to bundle the stdlib and trying to > make the tests pass. So I think it would be great if we addressed the > issues you found, or at least started prioritizing them.
This is the main stdlib API designed to avoid the need to make the filesystem imports assumption (as you can also use it to read source files): http://docs.python.org/py3k/library/pkgutil#pkgutil.get_data Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From thomas at python.org Sun Mar 11 22:41:29 2012 From: thomas at python.org (Thomas Wouters) Date: Sun, 11 Mar 2012 14:41:29 -0700 Subject: [Python-Dev] Zipping the standard library. In-Reply-To: References: Message-ID: On Sun, Mar 11, 2012 at 14:08, Guido van Rossum wrote: > On Sun, Mar 11, 2012 at 12:26 PM, Thomas Wouters > wrote: > > Thanks for the suggestions (Antoine too), but that's not really the > topic I > > want to discuss here (but if you guys move to Google I'll happily discuss > > all the stuff we have to deal with.) The question is really whether > Python > > wants to actually support zipped stdlibs or not. > > I do want to support it; that's why we put the facilities you found > there in the first place. Unfortunately nobody actually did the > necessary second step of trying to bundle the stdlib and trying to > make the tests pass. So I think it would be great if we addressed the > issues you found, or at least started prioritizing them. > > I'm not sure if you're saying that you're hitting the 2 GB limit *with > just the stdlib* in a zipfile, or if you're hitting this after you've > added a bunch of Google code to it as well. No, not with just the stdlib, but in a Google binary that embeds Python -- the 32-bit-unsigned numbers in zipfiles are file offsets, so in a Google binary (which, as you know, is typically a completely statically linked binary) the offsets for a zipfile embedded in the binary can be bigger than that. (If you were thinking of PAR files, those don't use zipimport themselves, but their own PEP-302 importer written in Python with the zipfile module, so it's okay.)
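[Context for the zipimport mechanics being discussed: any zip archive placed on sys.path becomes importable through Python's built-in zipimport support, which is what a zipped stdlib relies on. A minimal, self-contained sketch -- the archive and module names below are invented for illustration:]

```python
import os
import sys
import tempfile
import zipfile

# Build a small archive containing one module, then import from it.
# The archive plays the role of a (tiny) zipped stdlib.
tmpdir = tempfile.mkdtemp()
archive = os.path.join(tmpdir, "minilib.zip")

with zipfile.ZipFile(archive, "w") as zf:
    # writestr lets us add a module without creating a real file first
    zf.writestr("greet.py", "def hello():\n    return 'hello from a zip'\n")

sys.path.insert(0, archive)  # the zipimport path hook picks archives up transparently
import greet

print(greet.hello())
```

[zipfile.PyZipFile.writepy, mentioned later in the thread, automates bundling whole packages this way.]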
> I'm also not sure that > it's worth the effort to make *all* the tests in the stdlib pass -- > some tests may just be testing filesystem things that make no sense > when the stdlib is in a zipfile. I see you frowning already about my > lax attitude... Hah, no, I wasn't frowning when I read that :) I don't care about making all tests pass, but I do want them to not fail -- a test should only fail if the tested thing doesn't work, not if the test can't run. For what it's worth, the vast majority of tests work fine, there's just a couple that make what I would call unwarranted assumptions. For example, the zipfile module wants to test the writepy method, so it needs a module and a package to bundle in the zipfile. It could make a bunch of tempfiles (as most other tests do) into a package, but instead it uses email.__file__ to find the email package and uses that. The only failing test I remember that wasn't of the kind of using the stdlib source out of laziness is test_pyclbr, which runs pyclbr over a whole bunch of large stdlib modules. It also does other tests, so I don't think skipping the test for a zipped stdlib is a big deal, but even that could be fixed by using PEP 302's interface for getting the source. Of course, we also have to consider that the zipped stdlib may contain just .pyc files :) So it's definitely possible to fix most tests, possibly all of them, without too much effort. > So let me add that all non-test code should definitely > work, and quite possibly the only way to ensure that this is the case > is to make all the tests pass. The issue with needing os.py outside > the zipfile is a good thing to try to fix. > I forgot to include a link to http://bugs.python.org/issue12919 that makes that a little less confusing (to me, although others apparently disagreed :) -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! -------------- next part -------------- An HTML attachment was scrubbed...
URL: From guido at python.org Mon Mar 12 00:02:07 2012 From: guido at python.org (Guido van Rossum) Date: Sun, 11 Mar 2012 16:02:07 -0700 Subject: [Python-Dev] Where to discuss PEP 382 vs. PEP 402 (namespace packages)? Message-ID: Martin has asked me to decide on PEP 382 vs. PEP 402 (namespace packages) in time for inclusion of the decision in Python 3.3. As people who attended the language-sig know, I am leaning towards PEP 402 but I admit that at this point I don't have enough information. If I have questions, should I be asking them on the import-sig or on python-dev? Is it tolerable if I ask questions even if the answer is somewhere in the archives? (I spent a lot of time reviewing the "pitchfork thread", http://mail.python.org/pipermail/python-dev/2006-April/064400.html, but that wasn't particularly fruitful, so I'm worried I'd just waste my time browsing the archives -- if the PEP authors did their jobs well the PEPs should include summaries of the discussion anyways.) -- --Guido van Rossum (python.org/~guido) From eric at trueblade.com Mon Mar 12 00:05:48 2012 From: eric at trueblade.com (Eric V. Smith) Date: Sun, 11 Mar 2012 16:05:48 -0700 Subject: [Python-Dev] =?utf-8?q?=5BImport-SIG=5D_Where_to_discuss_PEP_382_?= =?utf-8?q?vs=2E_PEP_402_=28namespace=09packages=29=3F?= In-Reply-To: References: Message-ID: I think restarting the discussion anew here on distutils-sig is appropriate. -- Eric. Guido van Rossum wrote: Martin has asked me to decide on PEP 382 vs. PEP 402 (namespace packages) in time for inclusion of the decision in Python 3.3. As people who attended the language-sig know, I am leaning towards PEP 402 but I admit that at this point I don't have enough information. If I have questions, should I be asking them on the import-sig or on python-dev? Is it tolerable if I ask questions even if the answer is somewhere in the archives? 
(I spent a lot of time reviewing the "pitchfork thread", http://mail.python.org/pipermail/python-dev/2006-April/064400.html, but that wasn't particularly fruitful, so I'm worried I'd just waste my time browsing the archives -- if the PEP authors did their jobs well the PEPs should include summaries of the discussion anyways.) -- --Guido van Rossum (python.org/~guido) _____________________________________________ Import-SIG mailing list Import-SIG at python.org http://mail.python.org/mailman/listinfo/import-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at trueblade.com Mon Mar 12 00:06:54 2012 From: eric at trueblade.com (Eric V. Smith) Date: Sun, 11 Mar 2012 16:06:54 -0700 Subject: [Python-Dev] =?utf-8?q?=5BImport-SIG=5D_Where_to_discuss_PEP_382_?= =?utf-8?q?vs=2E_PEP_402_=28namespace=09packages=29=3F?= In-Reply-To: References: Message-ID: <5b62ac97-c71b-4a10-a713-4c1aa43da783@email.android.com> And of course I meant import-sig. -- Eric. "Eric V. Smith" wrote: I think restarting the discussion anew here on distutils-sig is appropriate. -- Eric. Guido van Rossum wrote: Martin has asked me to decide on PEP 382 vs. PEP 402 (namespace packages) in time for inclusion of the decision in Python 3.3. As people who attended the language-sig know, I am leaning towards PEP 402 but I admit that at this point I don't have enough information. If I have questions, should I be asking them on the import-sig or on python-dev? Is it tolerable if I ask questions even if the answer is somewhere in the archives? (I spent a lot of time reviewing the "pitchfork thread", http://mail.python.org/pipermail/python-dev/2006-April/064400.html, but that wasn't particularly fruitful, so I'm worried I'd just waste my time browsing the archives -- if the PEP authors did their jobs well the PEPs should include summaries of the discussion anyways.) 
-- --Guido van Rossum (python.org/~guido) _____________________________________________ Import-SIG mailing list Import-SIG at python.org http://mail.python.org/mailman/listinfo/import-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin at v.loewis.de Mon Mar 12 00:24:52 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 11 Mar 2012 16:24:52 -0700 Subject: [Python-Dev] Where to discuss PEP 382 vs. PEP 402 (namespace packages)? In-Reply-To: References: Message-ID: <4F5D3444.8060402@v.loewis.de> Am 11.03.12 16:02, schrieb Guido van Rossum: > Martin has asked me to decide on PEP 382 vs. PEP 402 (namespace > packages) in time for inclusion of the decision in Python 3.3. As > people who attended the language-sig know, I am leaning towards PEP > 402 but I admit that at this point I don't have enough information. If > I have questions, should I be asking them on the import-sig or on > python-dev? import-sig would be best. > Is it tolerable if I ask questions even if the answer is > somewhere in the archives? Sure! Martin From pje at telecommunity.com Mon Mar 12 16:43:28 2012 From: pje at telecommunity.com (PJ Eby) Date: Mon, 12 Mar 2012 11:43:28 -0400 Subject: [Python-Dev] Fwd: [Import-SIG] Where to discuss PEP 382 vs. PEP 402 (namespace packages)? In-Reply-To: References: <20120311184352.02a50c9c@resist> Message-ID: Ugh; this was supposed to be sent to the list, not just Guido. (I wish Gmail defaulted to reply-all in the edit box.) ---------- Forwarded message ---------- From: PJ Eby Date: Mon, Mar 12, 2012 at 12:16 AM Subject: Re: [Import-SIG] Where to discuss PEP 382 vs. PEP 402 (namespace packages)? To: Guido van Rossum On Sun, Mar 11, 2012 at 10:39 PM, Guido van Rossum wrote: > I'm leaning towards PEP 402 or some variant. Let's have a pow-wow at > the sprint tomorrow (I'll arrive in Santa Clara between 10 and 10:30). 
> I do want to understand Nick's argument better; I haven't studied PEP > 395 yet. > Note that PEP 395 can stay compatible with PEP 402 by a fairly straightforward change: instead of implicitly and automagically guessing the needed sys.path[0] change, it could be made explicit by adding something like this to the top of script/modules that are inside a package: import pkgutil pkgutil.script_module(__name__, 'mypackage.thismodule') Assuming __name__=='__main__', the API would set __main__.__qualname__, set sys.modules[qualname] = __main__, and fix up sys.path[0] if and only if it still is the parent directory of __main__.__file__. (If __name__ != '__main__' and it's not equal to the second argument either, it'd be an error.) Then, in the event of broken relative imports or module aliasing, the error message can suggest adding a script_module() declaration to explicitly make the file a "dual citizen" -- i.e., script/module. (It's already possible for PEP 395 to be confused by stray __init__.py files or __path__ manipulation; using error messages and explicit declaration instead of guessing seems like a better route for 395 to take.) Of course, it's also possible to fix the 395/402 incompatibility by reintroducing some sort of marker, such as .pyp directory extensions or by including *.pyp marker files within package directories. The problem is that these markers work against the intuitive nature of PEP 402 if they are required, and they do not help 395 if nobody uses them due to their optionality. ;-) (Last, but not least, the compromise approach: allow explicit script/module declaration as a workaround for virtual packages, AND support automagic __qualname__ recognition for self-contained packages... but still give error messages for broken relative imports and aliasing that suggest the explicit declaration.)
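[The pkgutil.script_module() helper sketched above was only ever a proposal and never landed in any Python release. A rough reading of the bookkeeping it describes might look like the following -- the helper name comes from the proposal, but the exact path fix-up and error behavior below are this sketch's own assumptions:]

```python
import os
import sys

def script_module(name, qualname):
    # Hypothetical helper from the proposal above: declare that the file
    # currently running as a script is really `qualname` inside a package.
    if name == '__main__':
        main = sys.modules['__main__']
        main.__qualname__ = qualname       # record the real dotted module name
        sys.modules[qualname] = main       # alias, so later imports reuse it
        script_dir = os.path.dirname(os.path.abspath(main.__file__))
        # Only touch sys.path[0] if it is still the script's own directory,
        # which would otherwise shadow the package's real parent directory.
        if sys.path and sys.path[0] == script_dir:
            root = script_dir
            for _ in range(qualname.count('.')):
                root = os.path.dirname(root)
            sys.path[0] = root             # point at the package's parent
    elif name != qualname:
        raise ImportError('%r does not match declared name %r' % (name, qualname))
```

[This is only a sketch of the proposed semantics, not real pkgutil API.]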
Anyway, the other open issues for 402 are: * Dealing with updates to sys.path * Iterating available virtual packages There was a Python-Dev discussion about the first, in which I realized that sys.path updates can actually be handled transparently by making virtual __path__ objects be special iterables rather than lists; but the PEP hasn't been updated to reflect that. (I was actually waiting for some sign of BDFL interest before adding a potential complication like that to the PEP.) The relevant proposal was: > This seems to lean in favor of making a simple reiterable wrapper > type for the __path__, that only allows you to take the length and > iterate over it. With an appropriate design, it could actually > update itself automatically, given a subname and a parent > __path__/sys.path. That is, it could keep a tuple copy of the > last-seen parent path, and before iteration, compare > tuple(self.parent_path) to self.last_seen_path. If they're > different, it rebuilds the value to be iterated over. > Voila: transparent updating of all virtual __path__ values from > sys.path changes (or modifications to self-contained __path__ > parents, btw), and trying to change it (or read an item from it > positionally) will not create any silent failures. > Alright... *if* we support automatic updates to virtual __paths__, > this is probably how we should do it. (It will require, though, that > imp.find_module be changed to use a different iteration method than > PyList_GetItem, as it's quite possible a virtual __path__ will get > passed into it.) I actually drafted an implementation of this to work with importlib, so it seems pretty feasible to support automatically-updated virtual paths that change on the next import attempt if sys.path (or any parent __path__) has changed since the last time. Iterating virtual packages is a somewhat harder problem, since it's not really practical to do an unbounded subdirectory search for importable files. 
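[The reiterable __path__ wrapper quoted above is concrete enough to sketch. Assuming -- this sketch's simplification -- that a virtual package's entries are simply the parent path's entries joined with the package name, a self-updating view could look like this; a real implementation would also filter out directories that don't exist:]

```python
import os

class VirtualPath:
    """Sketch of the proposed reiterable __path__: it re-derives its
    entries whenever the parent path has changed since last seen."""

    def __init__(self, parent_path, subname):
        self.parent_path = parent_path      # e.g. sys.path, kept by reference
        self.subname = subname
        self.last_seen_path = None
        self.entries = []

    def _refresh(self):
        snapshot = tuple(self.parent_path)
        if snapshot != self.last_seen_path:  # parent changed: rebuild
            self.last_seen_path = snapshot
            self.entries = [os.path.join(p, self.subname) for p in snapshot]

    def __iter__(self):
        self._refresh()
        return iter(self.entries)

    def __len__(self):
        self._refresh()
        return len(self.entries)
```

[Appending to the parent list is then picked up automatically on the next iteration, without the virtual __path__ ever being assigned to directly.]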
Probably, the pkgutil module-walking APIs just need to grow some extra flags for virtual package searching, with some reasonable defaults. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pmoody at google.com Mon Mar 12 17:15:04 2012 From: pmoody at google.com (Peter Moody) Date: Mon, 12 Mar 2012 09:15:04 -0700 Subject: [Python-Dev] PEP czar for PEP 3144? In-Reply-To: References: Message-ID: On Wed, Feb 29, 2012 at 9:13 PM, Peter Moody wrote: > Just checking in: > > On Mon, Feb 20, 2012 at 5:48 PM, Nick Coghlan wrote: >> At the very least: >> - the IP Interface API needs to move to a point where it more clearly >> *is* an IP Address and *has* an associated IP Network (rather than >> being the other way around) > > This is done [1]. There's cleanup that needs to happen here, but the > interface classes are now subclasses of the respective address > classes. > > Now I need to apply some consistency and then move on to the remaining > points: > >> - IP Network needs to behave more like an ordered set of sequential IP >> Addresses (without sometimes behaving like an Address in its own >> right) This is done [2]. Consistent iterable APIs and polish left to do. Cheers, peter [2] http://code.google.com/p/ipaddress-py/source/detail?r=578ef1777018211f536cacd29b6750086430fd141 >> - iterable APIs should consistently produce iterators (leaving users >> free to wrap list() around the calls if they want the concrete >> realisation) > > Cheers, > peter > > [1] http://code.google.com/p/ipaddress-py/source/detail?r=10dd6a68139fb99116219865afcd1c183777e8cc > (the date is munged b/c I rebased to my original commit before submitting). > > -- > Peter Moody Google 1.650.253.7306 > Security Engineer pgp:0xC3410038 -- Peter Moody Google 1.650.253.7306 Security Engineer
pgp:0xC3410038 From fdrake at acm.org Mon Mar 12 18:17:41 2012 From: fdrake at acm.org (Fred Drake) Date: Mon, 12 Mar 2012 13:17:41 -0400 Subject: [Python-Dev] Fwd: [Import-SIG] Where to discuss PEP 382 vs. PEP 402 (namespace packages)? In-Reply-To: References: <20120311184352.02a50c9c@resist> Message-ID: On Mon, Mar 12, 2012 at 11:43 AM, PJ Eby wrote: > I wish Gmail defaulted to reply-all in the edit box. There's a lab for that. :-) -Fred -- Fred L. Drake, Jr. "A person who won't read has no advantage over one who can't read." --Samuel Langhorne Clemens From michael at voidspace.org.uk Mon Mar 12 23:05:07 2012 From: michael at voidspace.org.uk (Michael Foord) Date: Mon, 12 Mar 2012 15:05:07 -0700 Subject: [Python-Dev] PEP 417: Adding mock to the Python Standard Library Message-ID: <8B5E65A3-E364-4AA2-9963-AFE98C9BB50D@voidspace.org.uk> Hello all, At the Python Language Summit adding the "mock" library to the Python Standard Library was discussed and agreed. Here is a very brief PEP covering the decision and rationale. All the best, Michael Foord -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: pep-0417.txt URL: -------------- next part -------------- -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html From WWatts at vtrIT.com Mon Mar 12 23:40:09 2012 From: WWatts at vtrIT.com (Watts, Wendy) Date: Mon, 12 Mar 2012 15:40:09 -0700 Subject: [Python-Dev] Job! Python Engineer position Message-ID: <0E23478CC3BCA649B6E08B0A78691B9C5CC160@CA100EX5.west.vis.com> Hello, my name is Wendy; I am an IT recruiter for vtrIT which is a division of Volt Workforce Technical Solutions located in San Francisco. I have urgent Senior and Junior Python Engineer positions open for a client located in CA.
I am reaching out to you to find out your status regarding a new opportunity, and to ask if we can speak verbally. We will pay travel expenses. Please see the job description below for your review. Specifics of the positions are as follows: * Job Title - one Sr. Python Engineer and 2 (mid-level) Python Engineers * Location: It must be 100% onsite at client location in Mountain View, California. * Duration: six-month contract If interested, please forward me your resume as a Word attachment and I will call you ASAP. Email: wwatts at vtrit.com Wendy Watts IT Recruiter VTRIT 100 First Street, Suite 200 | San Francisco, CA 55120 t: 415.536.5844 | f: 415.536.2845 wwatts at vtrit.com | vtrit.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian at python.org Tue Mar 13 00:00:49 2012 From: brian at python.org (Brian Curtin) Date: Mon, 12 Mar 2012 16:00:49 -0700 Subject: [Python-Dev] Job! Python Engineer position In-Reply-To: <0E23478CC3BCA649B6E08B0A78691B9C5CC160@CA100EX5.west.vis.com> References: <0E23478CC3BCA649B6E08B0A78691B9C5CC160@CA100EX5.west.vis.com> Message-ID: On Mon, Mar 12, 2012 at 15:40, Watts, Wendy wrote: > Hello, my name is Wendy; I am an IT recruiter for vtrIT which is a division > of Volt Workforce Technical Solutions located in San Francisco. I have > urgent Senior and Junior Python Engineer positions open for a client located > in CA. I am reaching out to you to find out your status regarding a new > opportunity, and if we can speak verbally? Please do not post jobs to this list. jobs at python.org is a better location: http://www.python.org/community/jobs/ From greg at krypto.org Tue Mar 13 00:01:34 2012 From: greg at krypto.org (Gregory P. Smith) Date: Mon, 12 Mar 2012 16:01:34 -0700 Subject: [Python-Dev] PEP 417: Adding mock to the Python Standard Library In-Reply-To: <8B5E65A3-E364-4AA2-9963-AFE98C9BB50D@voidspace.org.uk> References: <8B5E65A3-E364-4AA2-9963-AFE98C9BB50D@voidspace.org.uk> Message-ID: +10 for the record.
(given we all already agreed upon this in the summit :) make it so. On Mon, Mar 12, 2012 at 3:05 PM, Michael Foord wrote: > Hello all, > > At the Python Language Summit adding the "mock" library to the Python > Standard Library was discussed and agreed. Here is a very brief PEP > covering the decision and rationale. > > All the best, > > Michael Foord > > > > > -- > http://www.voidspace.org.uk/ > > > May you do good and not evil > May you find forgiveness for yourself and forgive others > May you share freely, never taking more than you give. > -- the sqlite blessing > http://www.sqlite.org/different.html > > > > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/greg%40krypto.org > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From WWatts at vtrIT.com Tue Mar 13 00:01:28 2012 From: WWatts at vtrIT.com (Watts, Wendy) Date: Mon, 12 Mar 2012 16:01:28 -0700 Subject: [Python-Dev] Job! Python Engineer position In-Reply-To: References: <0E23478CC3BCA649B6E08B0A78691B9C5CC160@CA100EX5.west.vis.com> Message-ID: <0E23478CC3BCA649B6E08B0A78691B9C5CC1A7@CA100EX5.west.vis.com> Great. Thank you. Wendy Watts IT Recruiter VTRIT 100 First Street, Suite 200 | San Francisco, CA 55120 t: 415.536.5844 | f: 415.536.2845 wwatts at vtrit.com | vtrit.com -----Original Message----- From: Brian Curtin [mailto:brian at python.org] Sent: Monday, March 12, 2012 4:01 PM To: Watts, Wendy Cc: python-dev at python.org Subject: Re: [Python-Dev] Job! Python Engineer position On Mon, Mar 12, 2012 at 15:40, Watts, Wendy wrote: > Hello, my name is Wendy; I am an IT recruiter for vtrIT which is a division > of Volt Workforce Technical Solutions located in San Francisco. I have > urgent Senior and Junior Python Engineer positions open for a client located > in CA.
I am reaching out to you to find out your status regarding a new > opportunity, and if we can speak verbally? Please do not post jobs to this list. jobs at python.org is a better location: http://www.python.org/community/jobs/ From guido at python.org Tue Mar 13 00:31:29 2012 From: guido at python.org (Guido van Rossum) Date: Mon, 12 Mar 2012 16:31:29 -0700 Subject: [Python-Dev] PEP 417: Adding mock to the Python Standard Library In-Reply-To: References: <8B5E65A3-E364-4AA2-9963-AFE98C9BB50D@voidspace.org.uk> Message-ID: More to the point, I am approving the PEP. We chatted briefly at the sprint and I just want to emphasize that the external version should not grow new features before the stdlib version has those same features (we don't want users complaining that the stdlib version is no good). Also, if you have some new features you're planning to add, add them now, before inclusion into the stdlib; and likewise, if you have some things you would like to remove, remove them now. Good luck! --Guido On Mon, Mar 12, 2012 at 4:01 PM, Gregory P. Smith wrote: > +10 for the record. (given we all already agreed upon this in the summit :) > > make it so. > > On Mon, Mar 12, 2012 at 3:05 PM, Michael Foord > wrote: >> >> Hello all, >> >> At the Python Language Summit adding the "mock" library to the Python >> Standard Library was discussed and agreed. Here is a very brief PEP covering >> the decision and rationale. >> >> All the best, >> >> Michael Foord >> >> >> >> >> -- >> http://www.voidspace.org.uk/ >> >> >> May you do good and not evil >> May you find forgiveness for yourself and forgive others >> May you share freely, never taking more than you give.
>> -- the sqlite blessing >> http://www.sqlite.org/different.html >> >> >> >> >> >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> http://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> http://mail.python.org/mailman/options/python-dev/greg%40krypto.org >> > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) From solipsis at pitrou.net Tue Mar 13 00:28:16 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 13 Mar 2012 00:28:16 +0100 Subject: [Python-Dev] cpython: give the AST class a __dict__ References: Message-ID: <20120313002816.2eb2652b@pitrou.net> On Mon, 12 Mar 2012 17:56:10 +0100 benjamin.peterson wrote: > http://hg.python.org/cpython/rev/3877bf2e3235 > changeset: 75542:3877bf2e3235 > user: Benjamin Peterson > date: Mon Mar 12 09:46:44 2012 -0700 > summary: > give the AST class a __dict__ This seems to have broken the Windows buildbots. cheers Antoine. From brett at python.org Tue Mar 13 00:33:47 2012 From: brett at python.org (Brett Cannon) Date: Mon, 12 Mar 2012 19:33:47 -0400 Subject: [Python-Dev] New PEP numbering scheme Message-ID: It came up at the sprints about how to choose new PEP numbers. It was agreed that the newest, *lowest* number should be used (e.g. 418) and not the next highest number (e.g. 3156). I have already updated PEP 1 to reflect this. -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Tue Mar 13 00:37:57 2012 From: guido at python.org (Guido van Rossum) Date: Mon, 12 Mar 2012 16:37:57 -0700 Subject: [Python-Dev] PEP 335 officially rejected Message-ID: We've had many discussions in the past about PEP 335 and they always ended in non-action. 
I'm cutting any future discussions short and officially rejecting the PEP. Amongst other reasons, I really dislike that the PEP adds to the bytecode for all uses of these operators even though almost no call sites will ever need the feature. PS. The NumPy folks brought up a somewhat separate issue: for them, the most common use case is chained comparisons (e.g. A < B < C). If someone wants to propose a PEP that makes this case overloadable I might be amenable to accepting it, since chained comparisons are used much less frequently than the and/or operators. -- --Guido van Rossum (python.org/~guido) From barry at python.org Tue Mar 13 00:39:34 2012 From: barry at python.org (Barry Warsaw) Date: Mon, 12 Mar 2012 16:39:34 -0700 Subject: [Python-Dev] New PEP numbering scheme In-Reply-To: References: Message-ID: <20120312163934.4165817b@resist> On Mar 12, 2012, at 07:33 PM, Brett Cannon wrote: >It came up at the sprints about how to choose new PEP numbers. It was >agreed that the newest, *lowest* number should be used (e.g. 418) and not >the next highest number (e.g. 3156). I have already updated PEP 1 to >reflect this. +1 -Barry From michael at voidspace.org.uk Tue Mar 13 00:47:27 2012 From: michael at voidspace.org.uk (Michael Foord) Date: Mon, 12 Mar 2012 16:47:27 -0700 Subject: [Python-Dev] PEP 417: Adding mock to the Python Standard Library In-Reply-To: References: <8B5E65A3-E364-4AA2-9963-AFE98C9BB50D@voidspace.org.uk> Message-ID: <1809579D-6724-4357-BBD4-B8E45A436550@voidspace.org.uk> On 12 Mar 2012, at 16:31, Guido van Rossum wrote: > More to the point, I am approving the PEP. > > We chatted briefly at the sprint and I just want to emphasize that the > external version should not grow new features before the stdlib > version has those same features (we don't want users complaining that > the stdlib version is no good). 
Also, if you have some new features > you're planning to add, add them now, before inclusion into the > stdlib; and likewise, if you have some things you would like to > remove, remove them now. > Thanks. I'm happy to live with new feature releases only every 18 months in the external backport. Before inclusion there is one obsolete feature I'd like to remove (mocksignature) and two minor features I'd like to add (support for attribute deletion and a helper function for mocking open as a context manager). Beyond that the API is stable. A bunch of Python 2 compatibility code can also be removed in the standard library version. All the best, Michael Foord > Good luck! > > --Guido > > On Mon, Mar 12, 2012 at 4:01 PM, Gregory P. Smith wrote: >> +10 for the record. (given we all already agreed upon this in the summit :) >> >> make it so. >> >> On Mon, Mar 12, 2012 at 3:05 PM, Michael Foord >> wrote: >>> Hello all, >>> >>> At the Python Language Summit adding the "mock" library to the Python >>> Standard Library was discussed and agreed. Here is a very brief PEP covering >>> the decision and rationale. >>> >>> All the best, >>> >>> Michael Foord >>> >>> >>> >>> >>> -- >>> http://www.voidspace.org.uk/ >>> >>> >>> May you do good and not evil >>> May you find forgiveness for yourself and forgive others >>> May you share freely, never taking more than you give.
>>> -- the sqlite blessing >>> http://www.sqlite.org/different.html >>> >>> >>> >>> >>> >>> >>> _______________________________________________ >>> Python-Dev mailing list >>> Python-Dev at python.org >>> http://mail.python.org/mailman/listinfo/python-dev >>> Unsubscribe: >>> http://mail.python.org/mailman/options/python-dev/greg%40krypto.org >>> >> >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> http://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> http://mail.python.org/mailman/options/python-dev/guido%40python.org >> > > > > -- > --Guido van Rossum (python.org/~guido) > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk > -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html From guido at python.org Tue Mar 13 00:55:33 2012 From: guido at python.org (Guido van Rossum) Date: Mon, 12 Mar 2012 16:55:33 -0700 Subject: [Python-Dev] PEP 417: Adding mock to the Python Standard Library In-Reply-To: <1809579D-6724-4357-BBD4-B8E45A436550@voidspace.org.uk> References: <8B5E65A3-E364-4AA2-9963-AFE98C9BB50D@voidspace.org.uk> <1809579D-6724-4357-BBD4-B8E45A436550@voidspace.org.uk> Message-ID: On Mon, Mar 12, 2012 at 4:47 PM, Michael Foord wrote: > > On 12 Mar 2012, at 16:31, Guido van Rossum wrote: > >> More to the point, I am approving the PEP. >> >> We chatted briefly at the sprint and I just want to emphasize that the >> external version should not grow new features before the stdlib >> version has those same features (we don't want users complaining that >> the stdlib version is no good). 
Also, if you have some new features >> you're planning to add, add them now, before inclusion into the >> stdlib; and likewise, if you have some things you would like to >> remove, remove them now. >> > > Thanks. I'm happy to live with new feature releases only every 18 months in the external backport. > > Before inclusion there is one obsolete feature I'd like to remove (mocksignature) and two minor features I'd like to add (support for attribute deletion and a helper function for mocking open as a context manager). Beyond that the api is stable. A bunch of Python 2 ?compatibility code can also be removed in the standard library version. Sounds good. Congrats! -- --Guido van Rossum (python.org/~guido) From victor.stinner at gmail.com Tue Mar 13 01:18:29 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 13 Mar 2012 01:18:29 +0100 Subject: [Python-Dev] cpython: give the AST class a __dict__ In-Reply-To: <20120313002816.2eb2652b@pitrou.net> References: <20120313002816.2eb2652b@pitrou.net> Message-ID: > benjamin.peterson wrote: >> http://hg.python.org/cpython/rev/3877bf2e3235 >> changeset: ? 75542:3877bf2e3235 >> user: ? ? ? ?Benjamin Peterson >> date: ? ? ? ?Mon Mar 12 09:46:44 2012 -0700 >> summary: >> ? give the AST class a __dict__ > > This seems to have broken the Windows buildbots. http://hg.python.org/cpython/rev/6bee4eea1efa should fix the compilation on Windows. Victor From guido at python.org Tue Mar 13 01:51:13 2012 From: guido at python.org (Guido van Rossum) Date: Mon, 12 Mar 2012 17:51:13 -0700 Subject: [Python-Dev] Review of PEP 362 (signature object) Message-ID: I'm very sympathetic to this PEP. I would accept it outright except I have a few quibbles on details, and some questions and remarks. - There are several examples of poor English grammar, perhaps from your co-author. Can you fix these? (Do you need me to produce a list?) - You're using an informal notation to indicate compound types, e.g. dict(str, object). 
I'm not sure it's worth using this particular notation without defining it (although maybe the time is ripe for creating a PEP that proposes a standard use for parameter annotations...). You're also not using it very consistently - sig.name is currently the unqualified function name. Now we have __qualname__ maybe this should be the qualified name instead? - Did you think about whether var_args and var_kw_args should be '' or None or undefined if these aren't present in the actual definition? - If there is no return annotation, is return_annotation None or undefined? (TBH I think undefined is awkward because you'd have to use hasattr() to test for its presence. I'd be okay with equating None with no return annotation. For parameter annotations I'm less sure, it's not so bad to test for presence in the dict, and you can easily use .get().) - I don't quite understand how bind() is supposed to work. Maybe an example would help? (It could also use some motivation. I think this is meant to expose a canonical version of the algorithm that maps arguments to parameters. What's a use case?) - Why is bind() listed under attributes, while there's also a list of methods? Is it something funky about self? - The PEP still seems to support tuple unpacking, which is no longer supported in Python 3. Please take it out. - I see it was my idea to give kw-only parameter a valid but meaningless position. I think I want to revert my opinion; it would be odd if there's a (kw-only) *parameter* 5 that cannot correspond to *argument* 5. So let's set it to None if it's kw-only. Maybe sig.parameters should not have these guys? Or it should have a separate sig.kw_only_parameters which is a dict?????? - There is mention of has_annotation but no definition. Is this due to a half revision of the PEP? I sort of see the point but maybe it's more pragmatic to set it to None for an absent annotation? 
(Later: maybe I see, there's a similar pattern for defaults, and it does make sense to distinguish between "x" and "x = y".) - And why are there two ways of getting the annotations, one via sig.var_annotations[v] and once via sig[v].annotation ? - Actually I now see there are also, kind of, two ways to access the Parameter objects: sig[v] and sig.parameters[i]. But maybe that's more defensible since we want to be able to access them by position or by name. - You have some examples like "var_args(*[1,2,3])" -- I think that should just be "var_args(1, 2, 3)" right? Similar "var_kw_args(**{'a': 1})" should be "var_kw_args(a=1)"... - You have some example code using try: [] // except KeyError: pass // else: . Wouldn't that be expressed cleaner using if in : ? - Similar, this smells a bit; can you explain or improve? try: duck = sig.var_annotations[sig.var_kw_args] except (KeyError, AttributeError): That's all I have for now; on to reject^Wreview some more PEPs... -- --Guido van Rossum (python.org/~guido) From ncoghlan at gmail.com Tue Mar 13 02:12:34 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 13 Mar 2012 11:12:34 +1000 Subject: [Python-Dev] Review of PEP 362 (signature object) In-Reply-To: References: Message-ID: On Tue, Mar 13, 2012 at 10:51 AM, Guido van Rossum wrote: > - I don't quite understand how bind() is supposed to work. Maybe an > example would help? (It could also use some motivation. I think this > is meant to expose a canonical version of the algorithm that maps > arguments to parameters. What's a use case?) I can help with that part: one use case is to give early errors for bad arguments to delayed calls. Currently, if you have a delayed call (e.g. a callback of some kind) that accepts (callable, *args, **kwds), there's no parameter checking until the call actually happens. That can lead to quite a debugging hunt as you try to track down where the bad callback was originally registered. 
However, with bind() available, you can do an initial sanity check that the arguments can at least be used to invoke the callable, throwing an error on *registration* if the callback is simply never going to work. Another use case is more sophisticated protocol checking than a simple hasattr() check for a method name - you can check that the method will accept the arguments you want to pass, not just whether it exists or not. (For example, this can help generate better error messages if a protocol evolves to accept additional optional arguments, but supporting those arguments is *required* for a particular application) Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From brett at python.org Tue Mar 13 02:51:48 2012 From: brett at python.org (Brett Cannon) Date: Mon, 12 Mar 2012 21:51:48 -0400 Subject: [Python-Dev] Review of PEP 362 (signature object) In-Reply-To: References: Message-ID: On Mon, Mar 12, 2012 at 20:51, Guido van Rossum wrote: > I'm very sympathetic to this PEP. I would accept it outright except I > have a few quibbles on details, and some questions and remarks. > > - There are several examples of poor English grammar, perhaps from > your co-author. Can you fix these? (Do you need me to produce a list?) > Nope. I will do a re-read since it's an old PEP. > > - You're using an informal notation to indicate compound types, e.g. > dict(str, object). I'm not sure it's worth using this particular > notation without defining it (although maybe the time is ripe for > creating a PEP that proposes a standard use for parameter > annotations...). You're also not using it very consistently > > - sig.name is currently the unqualified function name. Now we have > __qualname__ maybe this should be the qualified name instead? > Probably. > > - Did you think about whether var_args and var_kw_args should be '' or > None or undefined if these aren't present in the actual definition? > It's an open issue in the PEP.
Perhaps they can be set to their default values of () and {}, respectively? > > - If there is no return annotation, is return_annotation None or > undefined? (TBH I think undefined is awkward because you'd have to use > hasattr() to test for its presence. I'd be okay with equating None > with no return annotation. For parameter annotations I'm less sure, > it's not so bad to test for presence in the dict, and you can easily > use .get().) > I think it should be None since that is what the return value is > > - I don't quite understand how bind() is supposed to work. Maybe an > example would help? (It could also use some motivation. I think this > is meant to expose a canonical version of the algorithm that maps > arguments to parameters. What's a use case?) > Nick addressed this. > > - Why is bind() listed under attributes, while there's also a list of > methods? Is it something funky about self? > Probably just an oversight. > > - The PEP still seems to support tuple unpacking, which is no longer > supported in Python 3. Please take it out. > Sure thing. > > - I see it was my idea to give kw-only parameter a valid but > meaningless position. I think I want to revert my opinion; it would be > odd if there's a (kw-only) *parameter* 5 that cannot correspond to > *argument* 5. So let's set it to None if it's kw-only. Maybe > sig.parameters should not have these guys? Or it should have a > separate sig.kw_only_parameters which is a dict?????? > Yeah, trying to handle these odd cases is one of the reasons I have not pushed hard for this PEP before. =) > > - There is mention of has_annotation but no definition. Is this due to > a half revision of the PEP? I sort of see the point but maybe it's > more pragmatic to set it to None for an absent annotation? (Later: > maybe I see, there's a similar pattern for defaults, and it does make > sense to distinguish between "x" and "x = y".) > I will clarify in the PEP. 
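As an aside, Nick's early-validation use case for bind() can be sketched as follows -- a sketch using the inspect.signature() spelling that this work eventually produced, not necessarily the exact API of the PEP draft under review:

```python
import inspect

def register_callback(callback, *args, **kwargs):
    # Signature.bind() performs the same argument-to-parameter matching
    # as an actual call, so a bad registration raises TypeError at
    # registration time instead of whenever the callback finally fires.
    inspect.signature(callback).bind(*args, **kwargs)
    return (callback, args, kwargs)

def on_event(name, value, verbose=False):
    return (name, value, verbose)

register_callback(on_event, "load", 42)      # accepted
try:
    register_callback(on_event, "load")      # 'value' is missing
except TypeError:
    print("rejected at registration")
```

The same bind() call covers Nick's second use case as well: instead of a bare hasattr(obj, "method") check, one can ask whether obj.method would actually accept a given argument list before committing to it.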
> > - And why are there two ways of getting the annotations, one via > sig.var_annotations[v] and once via sig[v].annotation ? > Probably just made sense for some reason all those years ago. > > - Actually I now see there are also, kind of, two ways to access the > Parameter objects: sig[v] and sig.parameters[i]. But maybe that's more > defensible since we want to be able to access them by position or by > name. > > - You have some examples like "var_args(*[1,2,3])" -- I think that > should just be "var_args(1, 2, 3)" right? Similar "var_kw_args(**{'a': > 1})" should be "var_kw_args(a=1)"... > > Quite possibly. > - You have some example code using try: [] // > except KeyError: pass // else: . Wouldn't that be expressed > cleaner using if in : ? > Yes. =) > > - Similar, this smells a bit; can you explain or improve? > try: > duck = sig.var_annotations[sig.var_kw_args] > except (KeyError, AttributeError): > > Sure. -Brett > That's all I have for now; on to reject^Wreview some more PEPs... > > -- > --Guido van Rossum (python.org/~guido) From shazow at gmail.com Tue Mar 13 03:23:11 2012 From: shazow at gmail.com (Andrey Petrov) Date: Mon, 12 Mar 2012 19:23:11 -0700 Subject: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives Message-ID: Hi Pythonistas, I've had the pleasure of speaking with Guido at PyCon and it became evident that some of Python's included batteries are significantly lagging behind the rapidly-evolving de facto standards of the community -- specifically in cases like urllib and urllib2, which lack important features provided by alternatives like httplib2, requests, and my own urllib3.
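To make that gap concrete (an illustration added here, not code from the thread): with the standard library alone, attaching a header means assembling a Request object by hand, and retrying a failed request is a do-it-yourself loop, whereas connection pooling and retries are built into packages such as urllib3 and requests.

```python
import urllib.request

# Stdlib-only sketch: building a GET request with a custom header.
# Connection pooling and automatic retries -- headline features of the
# third-party alternatives -- are left entirely to the caller.
def build_request(url, accept="application/json"):
    return urllib.request.Request(url, headers={"Accept": accept})

def fetch_with_retries(url, attempts=3):
    # A hand-rolled retry loop; nothing equivalent ships in urllib.
    last_error = None
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(build_request(url)) as resp:
                return resp.read()
        except OSError as exc:
            last_error = exc
    raise last_error

req = build_request("http://example.com/api")
print(req.get_method())            # GET
print(req.get_header("Accept"))   # application/json
```

(fetch_with_retries is shown but not exercised here, since it would hit the network.)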
Part 1: I propose we add a snippet to the top of the documentation of specific archaic standard modules which encourages users to investigate third-party alternatives if they find the standard module frustrating or otherwise lacking. These notes would target new users, including those coming from other languages where the third-party library choices are not nearly as amazing as they are in Python. (For what it's worth, Guido has verbally agreed to a proposal of this nature.) What such a snippet might look like: "Batteries are included with Python but sometimes they are old and leaky -- this is one of those cases. Please have a look in PyPI for more modern alternatives provided by the Python community." Additionally, I would like for us as a community to identify which other standard libraries (cgi? ssl? others?) are candidates for this kind of messaging in the official Python documentation. Part 2: I propose we add a new category of package identifiers such as "Topic :: Standard Library Alternative :: {stdlib_package_name}" which authors of libraries can tag themselves under. The documentation warning snippet will provide a link to the appropriate PyPI query to list packages claiming to be alternatives to the stdlib package in question. Objections? Concerns? Improvements? What is the next step to making this happen? Pythonically yours, - Andrey (on behalf of Ori Livneh, Kenneth Reitz, Brandon Rhodes, David Wolever, and everyone else who contributed to this letter during our PyCon sprint.) P.S. Appendix: Here are some additional snippet alternatives that were proposed: > Batteries are included with Python but sometimes they are old and leaky -- this is one of those cases. Please have a look in PyPI for more modern alternatives provided by the Python community. > While this module has served Python programmers faithfully for many years, there are now many powerful alternatives available as third-party modules.
To learn more about them, view the Python Package Index results for the category "Topic :: Standard Library Alternative :: asyncore." -- [With the topic name as a hyperlink] > This module has been identified by the community as crusty, a signal that better alternatives exist outside the standard library. Because the Python standard library is constrained to maintain backward-compatibility, it does not always reflect what is current or common in the Python community. If you are not constrained to support legacy code, you may wish to browse the Python Package Index for alternatives to this module. From rdmurray at bitdance.com Tue Mar 13 04:07:03 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Mon, 12 Mar 2012 23:07:03 -0400 Subject: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives In-Reply-To: References: Message-ID: <20120313030704.46C512500ED@webabinitio.net> I don't like any of the suggested wordings. I have no problem with us recommending other modules, but most of the Python libraries are perfectly functional (not "leaky" or some other pejorative), they just aren't as capable as the wiz-bang new stuff that's available on PyPI. --David From eliben at gmail.com Tue Mar 13 04:22:45 2012 From: eliben at gmail.com (Eli Bendersky) Date: Tue, 13 Mar 2012 05:22:45 +0200 Subject: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives In-Reply-To: <20120313030704.46C512500ED@webabinitio.net> References: <20120313030704.46C512500ED@webabinitio.net> Message-ID: On Tue, Mar 13, 2012 at 05:07, R. David Murray wrote: > I don't like any of the suggested wordings. I have no problem with > us recommending other modules, but most of the Python libraries are > perfectly functional (not "leaky" or some other pejorative), they just > aren't as capable as the wiz-bang new stuff that's available on PyPI.
> +1 to David's comment, and -0 on the proposal as a whole. The suggested wordings are simply offensive to those modules & their maintainers specifically, and to Python generally. Personally, I think an intelligent user should realize that a language's standard library won't provide all the latest and shiniest gadgets. Rather, it will focus on providing stable tools that have withstood the test of time and can serve as a basis for building more advanced tools. That intelligent user should also be aware of PyPI (and the main Python page makes it prominent enough), so I see no reason explicitly pointing to it in the documentation of several modules. Eli From ctb at msu.edu Tue Mar 13 04:25:49 2012 From: ctb at msu.edu (C. Titus Brown) Date: Mon, 12 Mar 2012 20:25:49 -0700 Subject: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives In-Reply-To: References: <20120313030704.46C512500ED@webabinitio.net> Message-ID: <20120313032548.GA29711@idyll.org> On Tue, Mar 13, 2012 at 05:22:45AM +0200, Eli Bendersky wrote: > On Tue, Mar 13, 2012 at 05:07, R. David Murray wrote: > > I don't like any of the suggested wordings. ?I have no problem with > > us recommending other modules, but most of the Python libraries are > > perfectly functional (not "leaky" or some other pejorative), they just > > aren't as capable as the wiz-bang new stuff that's available on PyPI. > > > > +1 to David's comment, and -0 on the proposal as a whole. > > The suggested wordings are simply offensive to those modules & their > maintainers specifically, and to Python generally. > > Personally, I think an intelligent user should realize that a > language's standard library won't provide all the latest and shiniest > gadgets. Rather, it will focus on providing stable tools that have > withstood the test of time and can serve as a basis for building more > advanced tools. 
That intelligent user should also be aware of PyPI > (and the main Python page makes it prominent enough), so I see no > reason explicitly pointing to it in the documentation of several > modules. I see the point, but as a reasonably knowledgeable Python programmer (intelligent? who knows...) I regularly discover nifty new modules that "replace" stdlib modules. It'd be nice to have pointers in the docs, although that runs the risk of having the pointers grow stale, too. --titus -- C. Titus Brown, ctb at msu.edu From eliben at gmail.com Tue Mar 13 04:42:55 2012 From: eliben at gmail.com (Eli Bendersky) Date: Tue, 13 Mar 2012 05:42:55 +0200 Subject: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives In-Reply-To: <20120313032548.GA29711@idyll.org> References: <20120313030704.46C512500ED@webabinitio.net> <20120313032548.GA29711@idyll.org> Message-ID: On Tue, Mar 13, 2012 at 05:25, C. Titus Brown wrote: > On Tue, Mar 13, 2012 at 05:22:45AM +0200, Eli Bendersky wrote: >> On Tue, Mar 13, 2012 at 05:07, R. David Murray wrote: >> > I don't like any of the suggested wordings. I have no problem with >> > us recommending other modules, but most of the Python libraries are >> > perfectly functional (not "leaky" or some other pejorative), they just >> > aren't as capable as the wiz-bang new stuff that's available on PyPI. >> > >> >> +1 to David's comment, and -0 on the proposal as a whole. >> >> The suggested wordings are simply offensive to those modules & their >> maintainers specifically, and to Python generally. >> >> Personally, I think an intelligent user should realize that a >> language's standard library won't provide all the latest and shiniest >> gadgets. Rather, it will focus on providing stable tools that have >> withstood the test of time and can serve as a basis for building more >> advanced tools.
That intelligent user should also be aware of PyPI >> (and the main Python page makes it prominent enough), so I see no >> reason explicitly pointing to it in the documentation of several >> modules. > > I see the point, but as a reasonably knowledgeable Python programmer > (intelligent? who knows...) I regularly discover nifty new modules > that "replace" stdlib modules. ?It'd be nice to have pointers in the > docs, although that runs the risk of having the pointers grow stale, > too. > Exactly. It's not the job of the core developers to keep track of the latest and greatest gadgets and to diligently update the docs when something new comes out. Note that "the latest and coolest" changes frequently, so this may mean different "recommendations" between 3.x.y and 3.x.y+1, which is even more confusing. Wasn't a PyPI recommendation / voting system discussed a while ago? *That* would be much more appropriate than officially endorsing specific modules by pointing to them in the standard documentation. Eli From ctb at msu.edu Tue Mar 13 04:48:21 2012 From: ctb at msu.edu (C. Titus Brown) Date: Mon, 12 Mar 2012 20:48:21 -0700 Subject: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives In-Reply-To: References: <20120313030704.46C512500ED@webabinitio.net> <20120313032548.GA29711@idyll.org> Message-ID: <20120313034819.GC29711@idyll.org> On Tue, Mar 13, 2012 at 05:42:55AM +0200, Eli Bendersky wrote: > On Tue, Mar 13, 2012 at 05:25, C. Titus Brown wrote: > > I see the point, but as a reasonably knowledgeable Python programmer > > (intelligent? who knows...) I regularly discover nifty new modules > > that "replace" stdlib modules. ?It'd be nice to have pointers in the > > docs, although that runs the risk of having the pointers grow stale, > > too. > > > > Exactly. It's not the job of the core developers to keep track of the > latest and greatest gadgets and to diligently update the docs when > something new comes out. 
Note that "the latest and coolest" changes > frequently, so this may mean different "recommendations" between 3.x.y > and 3.x.y+1, which is even more confusing. > > Wasn't a PyPI recommendation / voting system discussed a while ago? > *That* would be much more appropriate than officially endorsing > specific modules by pointing to them in the standard documentation. I feel like there's a middle ground where stable, long-term go-to modules could be mentioned, though. I don't spend a lot of time browsing PyPI, but I suspect almost everyone spends a certain amount of time in the Python docs (which is a testimony to their quality IMO). So I'm in favor of conservative link-outs but without any deprecating language. --titus -- C. Titus Brown, ctb at msu.edu From brian at python.org Tue Mar 13 04:58:12 2012 From: brian at python.org (Brian Curtin) Date: Mon, 12 Mar 2012 20:58:12 -0700 Subject: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives In-Reply-To: References: Message-ID: On Mon, Mar 12, 2012 at 19:23, Andrey Petrov wrote: > What such a snippet might look like: > > "Batteries are included with Python but sometimes they are old and > leaky?this is one of those cases. Please have a look in PyPI for more modern > alternatives provided by the Python community." What does "leaky" mean here? Someone's going to see that, think it has memory leaks, then rant on the internet about how we ship crap and just document it as so. > Part 2: > I propose we add a new category of package identifiers such as "Topic :: > Standard Library Alternative :: {stdlib_package_name}" which authors of > libraries can tag themselves under. The documentation warning snippet will > provide a link to the appropriate PyPI query to list packages claiming to be > alternatives to the stdlib package in question. Automating it to something on PyPI is the not the right answer. 
People will use it incorrectly, either in that they'll add it to packages for which it isn't accurate, and people just flat out won't use it or know about it. It won't be accurate this way, and anything that we're documenting needs to be vetted. It's not often that a great alternative comes up, so I don't see the manual burden being too great. From v+python at g.nevcal.com Tue Mar 13 05:01:31 2012 From: v+python at g.nevcal.com (Glenn Linderman) Date: Mon, 12 Mar 2012 21:01:31 -0700 Subject: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives In-Reply-To: <20120313034819.GC29711@idyll.org> References: <20120313030704.46C512500ED@webabinitio.net> <20120313032548.GA29711@idyll.org> <20120313034819.GC29711@idyll.org> Message-ID: <4F5EC69B.1090000@g.nevcal.com> On 3/12/2012 8:48 PM, C. Titus Brown wrote: > I feel like there's a middle ground where stable, long-term go-to modules could > be mentioned, though. I don't spend a lot of time browsing PyPI, but I suspect > almost everyone spends a certain amount of time in the Python docs (which is a > testimony to their quality IMO). So I'm in favor of conservative link-outs > but without any deprecating language. Any outward links will be somewhat intrusive, and their existence will admit that the stdlib module is limited in some fashion, such that someone invested time and effort to create an alternative. On the other hand, if there were a standard place for external links to alternatives, say, perhaps, at the bottom of the left-hand table of contents for the module, and if it were Wiki-like (anyone could add an alternative) then the core developers wouldn't need to monitor and approve the alternatives. The alternatives would not be listed in the TOC, only the link, if alternatives were submitted. At the link target Wiki, there could be various alternatives, pro/con comments, user votes, whatever seems useful. 
If there is truly a desire by core developers to recommend specific alternative modules, then wording like the following seems neutral to me: Alternatives to this module exist at [list of links], which may be updated more regularly than the stdlib. From shazow at gmail.com Tue Mar 13 05:14:56 2012 From: shazow at gmail.com (Andrey Petrov) Date: Mon, 12 Mar 2012 21:14:56 -0700 Subject: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives In-Reply-To: References: Message-ID: On Mon, Mar 12, 2012 at 8:58 PM, Brian Curtin wrote: > On Mon, Mar 12, 2012 at 19:23, Andrey Petrov wrote: >> What such a snippet might look like: >> >> "Batteries are included with Python but sometimes they are old and >> leaky -- this is one of those cases. Please have a look in PyPI for more modern >> alternatives provided by the Python community." > > What does "leaky" mean here? Someone's going to see that, think it has > memory leaks, then rant on the internet about how we ship crap and > just document it as so. I agree Brian and David, the choice of "leaky" in the wording is poor. It was supposed to be maintaining the "batteries" metaphor but it's clearly ambiguous. Perhaps something along the lines of... "Batteries are included with Python but for stability and backwards compatibility, some of the standard library is not always as modern as alternatives provided by the Python community -- this is one of those cases. Please have a look at PyPI for more cutting-edge alternatives." >> Part 2: >> I propose we add a new category of package identifiers such as "Topic :: >> Standard Library Alternative :: {stdlib_package_name}" which authors of >> libraries can tag themselves under. The documentation warning snippet will >> provide a link to the appropriate PyPI query to list packages claiming to be >> alternatives to the stdlib package in question.
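In concrete terms, the "Part 2" self-identification tag quoted above might be declared and linked roughly as follows. This is purely illustrative: no "Standard Library Alternative" classifier exists in the real trove list, and the query URL below is an assumption for the sketch, not PyPI's actual scheme.

```python
from urllib.parse import quote

# Hypothetical trove classifier from the "Part 2" proposal; both the
# tag and the browse URL are assumptions made for illustration only.
def alternative_classifier(stdlib_module):
    return "Topic :: Standard Library Alternative :: " + stdlib_module

classifier = alternative_classifier("urllib2")

# A docs snippet could then link to a browse query for that classifier
# (the exact URL scheme would be up to PyPI; this just URL-encodes it):
query_url = ("http://pypi.python.org/pypi?:action=browse&c="
             + quote(classifier, safe=""))

print(classifier)
print(query_url)
```

A package author would simply add the classifier string to the classifiers list in their setup script.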
> > Automating it to something on PyPI is the not the right answer. People > will use it incorrectly, either in that they'll add it to packages for > which it isn't accurate, and people just flat out won't use it or know > about it. It won't be accurate this way, and anything that we're > documenting needs to be vetted. > > It's not often that a great alternative comes up, so I don't see the > manual burden being too great. There are a dozen or more urllib/httplib/pycurl competitors on PyPI, and new ones spring up all the time. I'm not sure how we would go about objectively blessing the best "official" option at any given moment, or how frequently we would have to do this. With self-identifying, we could sort by some sort metric (monthly downloads? magical score?) and create a somewhat-actionable list. - Andrey From tjreedy at udel.edu Tue Mar 13 05:22:27 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Tue, 13 Mar 2012 00:22:27 -0400 Subject: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives In-Reply-To: References: Message-ID: On 3/12/2012 10:23 PM, Andrey Petrov wrote: > I've had the pleasure of speaking with Guido at PyCon and it became > evident that some of Python's included batteries are significantly > lagging behind the rapidly-evolving defacto standards of the > community?specifically in cases like urllib and urllib2, which lack > important features provided by alternatives like httplib2, requests, and > my own urllib3. > > > Part 1: > I propose we add a snippet to the top of the documentation of specific > archaic standard modules which encourages users to investigate > third-party alternatives if they find the standard module frustrating or > otherwise lacking. These notes would target new users, including those > coming from other languages where the third-party library choices are > not nearly as amazing as they are in Python. 
I would rather we figure out how to encourage authors of advancing packages to contribute better implementations of existing features and well-tested new features back to the stdlib module. For instance, are you the same 'Andrey Petrov' who is 'darkprokoba' on the tracker? As near as I can tell, that user has posted on one issue about free threading and nothing else, in particular, nothing about web protocols. If that is you, why not? > What such a snippet might look like: > "Batteries are included with Python but sometimes they are old and > leaky -- this is one of those cases. Please have a look in PyPI for more > modern alternatives provided by the Python community." You have every right to work independently, develop alternative modules, and promote them. But suggesting that we denigrate our work to promote yours strikes me as inappropriate. If nothing else, it would discourage rather than encourage more contributions from more people. > Additionally, I would like for us as a community to identify which > other standard libraries (cgi? ssl? others?) are candidates for this > kind of messaging in the official Python documentation. To the degree feasible, stdlib modules should be the best possible in the area they cover. Then all who install Python would benefit. Do you disagree? I would like more of the community to help make that happen. Any messages in the stdlib doc should be about modules that do things intentionally not covered in the stdlib.
-- Terry Jan Reedy From brian at python.org Tue Mar 13 05:23:20 2012 From: brian at python.org (Brian Curtin) Date: Mon, 12 Mar 2012 21:23:20 -0700 Subject: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives In-Reply-To: References: Message-ID: On Mon, Mar 12, 2012 at 21:14, Andrey Petrov wrote: > On Mon, Mar 12, 2012 at 8:58 PM, Brian Curtin wrote: >> On Mon, Mar 12, 2012 at 19:23, Andrey Petrov wrote: >>> What such a snippet might look like: >>> >>> "Batteries are included with Python but sometimes they are old and >>> leaky?this is one of those cases. Please have a look in PyPI for more modern >>> alternatives provided by the Python community." >> >> What does "leaky" mean here? Someone's going to see that, think it has >> memory leaks, then rant on the internet about how we ship crap and >> just document it as so. > > I agree Brian and David, the choice of "leaky" in the wording is poor. > It was supposed to be maintaining the "batteries" metaphor but it's > clearly ambiguous. > > Perhaps something along the lines of... > > "Batteries are included with Python but for stability and backwards > compatibility, some of the standard library is not always as modern as > alternatives provided by the Python community?this is one of those > cases. Please have a look at PyPI for more cutting-edge alternatives." Sorry for another color choice on the bikeshed, but I would drop the word or references to "batteries". *We* know what "batteries included" means, but there are undoubtedly people who won't get it. It's just code - let's call it code. >>> Part 2: >>> I propose we add a new category of package identifiers such as "Topic :: >>> Standard Library Alternative :: {stdlib_package_name}" which authors of >>> libraries can tag themselves under. The documentation warning snippet will >>> provide a link to the appropriate PyPI query to list packages claiming to be >>> alternatives to the stdlib package in question. 
>> >> Automating it to something on PyPI is the not the right answer. People >> will use it incorrectly, either in that they'll add it to packages for >> which it isn't accurate, and people just flat out won't use it or know >> about it. It won't be accurate this way, and anything that we're >> documenting needs to be vetted. >> >> It's not often that a great alternative comes up, so I don't see the >> manual burden being too great. > > There are a dozen or more urllib/httplib/pycurl competitors on PyPI, > and new ones spring up all the time. I'm not sure how we would go > about objectively blessing the best "official" option at any given > moment, or how frequently we would have to do this. The same way we choose to accept libraries into the standard library. New ones spring up all the time - mature, proven, and widely used ones do not. If someone thinks libfoo is ready, they suggest it. If we haven't heard of it, the conversation ends. If we have people who know it, maybe we have them look deeper and figure out if it's something we can put our stamp on just like we might with the recent talk of "experimental package" inclusion. > With self-identifying, we could sort by some sort metric (monthly > downloads? magical score?) and create a somewhat-actionable list. Downloads don't mean the code is good. Voting is gamed. I really don't think there's a good automated solution to tell us what the high-quality replacement projects are. From guido at python.org Tue Mar 13 05:40:37 2012 From: guido at python.org (Guido van Rossum) Date: Mon, 12 Mar 2012 21:40:37 -0700 Subject: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives In-Reply-To: References: Message-ID: On Mon, Mar 12, 2012 at 9:22 PM, Terry Reedy wrote: > I would rather we figure out how to encourage authors of advancing packages > to contribute better implementations of existing features and well-tested > new features back to the stdlib module. I would not. 
There are many excellent packages out there that should not be made into stdlib packages simply because their authors are not done adding new features. If you contribute something to the stdlib and also maintain a non-stdlib version of the same code to which you regularly add features, your code is not ready for inclusion into the stdlib. Not only should you be willing to wait 18 months (until the next feature release) before your features are released, but you should also accept that only the latest version of Python will see those features. This obviously makes it very unattractive for many authors to ever contribute to the stdlib. That's fine. There's a healthy ecosystem of 3rd party modules. For some areas (web stuff especially) there's just no way that the stdlib can keep up. Yes, the stdlib offerings work. But no, they are not very convenient and may not support popular idioms very well. For these types of modules I think it is a good idea to place some sort of pointer in the stdlib docs to an external page (maybe a wiki page) that collects a currently popular set of alternatives, or perhaps a few pointers and wiki pages. We should still be conservative with this, and we should word it to avoid implying that the stdlib code is buggy -- it just isn't as spiffy or featureful as the 3rd party options. (Agreed that the "leaky" wording was unfortunate. I may have inadvertently suggested this, taking the analogy with "batteries included". But I didn't mean it to be literally included into the stdlib.) -- --Guido van Rossum (python.org/~guido) From guido at python.org Tue Mar 13 05:43:49 2012 From: guido at python.org (Guido van Rossum) Date: Mon, 12 Mar 2012 21:43:49 -0700 Subject: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives In-Reply-To: References: Message-ID: On Mon, Mar 12, 2012 at 9:23 PM, Brian Curtin wrote: > Downloads don't mean the code is good. Voting is gamed. 
I really don't > think there's a good automated solution to tell us what the > high-quality replacement projects are. Sure, these are imperfect metrics. But not having any metrics at all is flawed too. Despite the huge flamewar we had 1-2 years ago about PyPI comments, I think we should follow the lead of the many app stores that pop up on the web -- users will recognize the pattern and will tune their skepticism sensors as needed. -- --Guido van Rossum (python.org/~guido) From eliben at gmail.com Tue Mar 13 05:48:20 2012 From: eliben at gmail.com (Eli Bendersky) Date: Tue, 13 Mar 2012 06:48:20 +0200 Subject: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives In-Reply-To: References: Message-ID: On Tue, Mar 13, 2012 at 06:43, Guido van Rossum wrote: > On Mon, Mar 12, 2012 at 9:23 PM, Brian Curtin wrote: >> Downloads don't mean the code is good. Voting is gamed. I really don't >> think there's a good automated solution to tell us what the >> high-quality replacement projects are. > > Sure, these are imperfect metrics. But not having any metrics at all > is flawed too. Despite the huge flamewar we had 1-2 years ago about > PyPI comments, I think we should follow the lead of the many app > stores that pop up on the web -- users will recognize the pattern and > will tune their skepticism sensors as needed. > An additional bonus of such a system is that we won't have to maintain a separate Wiki page with "popular" choices. Pointing to the PyPI "rating" page, which can presumably be filtered by tags (i.e. web, scientific, XML, etc.) should be sufficient, given that such a rating page exists. Eli From anacrolix at gmail.com Tue Mar 13 06:10:18 2012 From: anacrolix at gmail.com (Matt Joiner) Date: Tue, 13 Mar 2012 13:10:18 +0800 Subject: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives In-Reply-To: References: Message-ID: Definitely think some library vetting needs to occur. 
Superior alternatives do exist and are difficult to find and choose from. Stuff like LXML, Requests, Tornado are clear winners. The more of this done externally (ie PyPI the better). I still think a set of requirements for "official approval" would be good. This could outline things like requiring that certain stable Python versions are supported, interface stability, demonstrated user base, documentation etc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From senthil at uthcode.com Tue Mar 13 06:10:27 2012 From: senthil at uthcode.com (Senthil Kumaran) Date: Mon, 12 Mar 2012 22:10:27 -0700 Subject: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives In-Reply-To: References: Message-ID: <20120313051027.GB2318@mathmagic> On Mon, Mar 12, 2012 at 07:23:11PM -0700, Andrey Petrov wrote: > I've had the pleasure of speaking with Guido at PyCon and it became evident > that some of Python's included batteries are significantly lagging behind the > rapidly-evolving defacto standards of the community specifically in cases like > urllib and urllib2, which lack important features provided by alternatives like > httplib2, requests, and my own urllib3. Well, I think I have address this because I am the maintainer of those modules in standard lib. First things first, it looks to me that trashing something gives good motivation to you (and others working on related modules). I don't have a problem with that. But on the other hand, if you think things can be improved in stdlib, you are welcome to contribute. Just remember that new features, refactoring with backwards compatibility, 'cool api' for new features should go in 3.3+ onwards. Bug fixes, confusions on what's RFC supported vs what's defacto standards, fine line between bugs and features, those can be considered for 2.7. 
I am personally in favor of constantly improving the standard library modules along with mention of any good libraries which can be useful for the purposes of the user. We already have lots of such references in standard library documentation. If there is a well-maintained external package that is active, we can have it as a link in the docs. Sometimes those external packages become inactive too; in those cases, they should be pruned. It's all about maintaining libraries and docs and being helpful. -- Senthil From shazow at gmail.com Tue Mar 13 06:48:04 2012 From: shazow at gmail.com (Andrey Petrov) Date: Mon, 12 Mar 2012 22:48:04 -0700 Subject: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives In-Reply-To: <20120313051027.GB2318@mathmagic> References: <20120313051027.GB2318@mathmagic> Message-ID: Dear authors of Python's standard library: Please accept my deepest apologies. We didn't mean for the messaging to come off as unappreciative of your work, and I am very sorry for that! Without the aforementioned urllib/httplib/etc I would never have made it as far as building my own libraries, which have suited my needs better than the foundations that originally allowed me to get off the ground. I am very grateful to all of those responsible for building the Python standard library and I appreciate your continued efforts. Some specific replies: @Senthil: I originally asked Guido for guidance on improving the standard library and perhaps including some of my favourite projects, but he pointed out that in a couple of years we might end up again in the same position as before, but with one extra library that people will complain about for being obsolete yet that remains impossible to deprecate. I agreed with Guido that embracing and even encouraging users to use the rapidly-evolving community-built packages alongside the tried-and-true standard library is the best move here.
@Terry: I don't know who 'darkprokoba' is. Unfortunately 'Andrey Petrov' is a very common name. I go under the handle 'shazow' but I haven't participated in core Python discussions until today. I did not suggest that Python endorse a specific module, even if it is my own. In fact, I generally oppose doing this, as I feel that announcing Django as the blessed 'winner' of the Python MVC frameworks was a harmful event for all the other competing frameworks at the time. My suggestion is to inform users when there are other, potentially better-suited alternatives available from the community on PyPI, and to educate them on how to find these alternatives. - Andrey From senthil at uthcode.com Tue Mar 13 07:12:39 2012 From: senthil at uthcode.com (Senthil Kumaran) Date: Mon, 12 Mar 2012 23:12:39 -0700 Subject: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives In-Reply-To: References: <20120313051027.GB2318@mathmagic> Message-ID: <20120313061239.GE2318@mathmagic> On Mon, Mar 12, 2012 at 10:48:04PM -0700, Andrey Petrov wrote: > @Senthil: I originally asked Guido for guidance on improving the > standard library and perhaps including some of my favourite projects, > but he pointed out that in a couple of years we might end up again in > the same position as before but with one extra library people will > complain about for being obsoleted yet remains impossible to > deprecate. > > I agreed with Guido that embracing and even encouraging users to use the > rapidly-evolving community-built packages alongside the tried-and-true > standard library is the best move here. I agree with that too. Any improvements made by external libraries that would also be good improvements to the stdlib are good candidates to push in.
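A minimal sketch of that kind of incremental stdlib improvement: the `method=` keyword that `urllib.request.Request` gained in Python 3.3 covers part of what verb-specific helper APIs in third-party HTTP libraries provide. The URL below is hypothetical, and nothing here opens a network connection:

```python
import urllib.request

# Build a PUT request without touching the network.
req = urllib.request.Request(
    "http://example.com/resource",   # hypothetical URL, for illustration only
    data=b"payload",
    headers={"Content-Type": "application/octet-stream"},
    method="PUT",                    # explicit verb, new in Python 3.3
)

print(req.get_method())  # PUT

# Without method=, the verb is still inferred from the presence of data:
print(urllib.request.Request("http://example.com/resource").get_method())              # GET
print(urllib.request.Request("http://example.com/resource", data=b"x").get_method())   # POST
```

Before 3.3 the usual workaround was to subclass Request and override get_method(), so even a small keyword argument like this is a real ergonomic win.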
I looked at requests, and I think the APIs for the various HTTP verbs are something interesting; I think web application authors see some advantage there: explicit GET / POST and sending specific options to those. That facility, if it can be brought into urllib.request (3.3 onwards), could be nice. Also, you may have noticed the addition of the method= parameter in 3.3. I have not had a chance to look at urllib3. Should do. I have been following httplib2 and use it. Thanks, Senthil From valhallasw at arctus.nl Tue Mar 13 12:44:58 2012 From: valhallasw at arctus.nl (Merlijn van Deen) Date: Tue, 13 Mar 2012 12:44:58 +0100 Subject: [Python-Dev] Unpickling py2 str as py3 bytes (and vice versa) - implementation (issue #6784) Message-ID: http://bugs.python.org/issue6784 ("byte/unicode pickle incompatibilities between python2 and python3") Hello all, Currently, pickle unpickles python2 'str' objects as python3 'str' objects, where the encoding to use is passed to the Unpickler. However, there are cases where it makes more sense to unpickle a python2 'str' as python3 'bytes' - for instance when it is actually binary data, and not text. Currently, the mapping is as follows, when reading a pickle: python2 'str' -> python3 'str' (using an encoding supplied to Unpickler) python2 'unicode' -> python3 'str' or, when creating a pickle using protocol <= 2: python3 'str' -> python2 'unicode' python3 'bytes' -> python2 '__builtins__.bytes object' This issue suggests to add a flag to change the behaviour as follows: a) python2 'str' -> python3 'bytes' b) python3 'bytes' -> python2 'str' The question on this is how to pass this flag. To quote Antoine (with permission) on my mail about this issue on core-mentorship: > I haven't answered because I'm unsure about the approach itself - do we > want to add yet another argument to pickle methods, especially this late > in the 3.x development cycle?
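Whatever form the flag takes, the unpickling half of the requested behaviour can be sketched against the existing `encoding` argument; later Python 3 releases accept the special value `encoding='bytes'` here, which is exactly mapping (a) above. The byte literal below is assumed to be what `pickle.dumps('abc', 2)` produces under Python 2 (a protocol-2 SHORT_BINSTRING):

```python
import pickle

# Protocol-2 pickle of the Python 2 str 'abc':
# PROTO 2, SHORT_BINSTRING 'abc', BINPUT 0, STOP
py2_pickle = b'\x80\x02U\x03abcq\x00.'

# Default mapping: python2 'str' -> python3 'str', decoded with the
# encoding supplied to the Unpickler (ASCII by default).
print(pickle.loads(py2_pickle))                     # 'abc'

# encoding='bytes' switches to mapping (a): python2 'str' -> python3 'bytes'.
print(pickle.loads(py2_pickle, encoding='bytes'))   # b'abc'
```

The same data round-trips either as text or as raw bytes depending only on how the Unpickler is constructed, which is why the choice has to be the caller's.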
Currently, I have implemented it using an extra argument for the Pickler and Unpickler objects ('bytestr'), which toggles the behaviour. I.e.: >>> pickled = Pickler(data, bytestr=True); unpickled = Unpickler(data, bytestr=True). This is the approach used in pickle_bytestr.patch [1] Another option would be to implement a seperate Pickler/Unpickler object, such that >>> pickled = BytestrPickler(data, bytestr=True); unpickled = BytestrUnpickler(data, bytestr=True) This is the approach I initially implemented [2]. Alternatively, there is the option only to implement the Unpickler, leaving the Pickler as it is. This allows >>> unpickled = Unpickler(data, encoding=bytes) where the bytes type is used as a special 'flag'. And, of course, there is the option not to implement this in the stdlib at all. What are your ideas on this? Best, Merlijn [0] http://bugs.python.org/issue6784 [1] http://bugs.python.org/file24719/pickle_bytestr.patch [2] https://github.com/valhallasw/py2/blob/master/bytestrpickle.py From valhallasw at arctus.nl Tue Mar 13 12:45:59 2012 From: valhallasw at arctus.nl (Merlijn van Deen) Date: Tue, 13 Mar 2012 12:45:59 +0100 Subject: [Python-Dev] Unpickling py2 str as py3 bytes (and vice versa) - implementation (issue #6784) In-Reply-To: References: Message-ID: Oops. I should re-read my mails before I send them, not /after/ I send them. On 13 March 2012 12:44, Merlijn van Deen wrote: >>>> pickled = BytestrPickler(data, bytestr=True); unpickled = BytestrUnpickler(data, bytestr=True) should of course read >>>> pickled = BytestrPickler(data); unpickled = BytestrUnpickler(data) Sorry about that. 
Merlijn From p.f.moore at gmail.com Tue Mar 13 14:31:38 2012 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 13 Mar 2012 13:31:38 +0000 Subject: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives In-Reply-To: <20120313034819.GC29711@idyll.org> References: <20120313030704.46C512500ED@webabinitio.net> <20120313032548.GA29711@idyll.org> <20120313034819.GC29711@idyll.org> Message-ID: On 13 March 2012 03:48, C. Titus Brown wrote: > I feel like there's a middle ground where stable, long-term go-to modules could > be mentioned, though. ?I don't spend a lot of time browsing PyPI, but I suspect > almost everyone spends a certain amount of time in the Python docs (which is a > testimony to their quality IMO). ?So I'm in favor of conservative link-outs > but without any deprecating language. I applaud the idea of promoting the many excellent packages available. It can be very hard to separate the good from the indifferent (or even bad) when browsing PyPI. I've found some very good packages recently which I'd never have known about without some random comment on a mailing list. However, I'm not keen on having the stdlib documentation suggest that I should be using something else. No code should ever be documenting "don't use me, there are better alternatives" unless it is deprecated or obsolete. On the other hand, I would love to see a community-maintained document that described packages that are acknowledged as "best of breed". That applies whether or not those packages replace something in the stdlib. Things like pywin32, lxml, and requests would be examples in my experience. There's no reason this *has* to be in the core documentation - it may be relevant that nothing has sprung up independently yet... Maybe a separate item in the Python documentation, "External Modules", could be created and maintained by the community? 
By being in the documentation, it has a level of "official recommendation" status, and by being a top-level document it's visible (more so than, for example, a HOWTO document would be). Because it's in the released documentation, it is relatively stable, which implies that external modules would need to have a genuine track record to get in there, but because it's community maintained it should reflect a wider consensus than just the core developers' views. Paul. From donald.stufft at gmail.com Tue Mar 13 14:34:33 2012 From: donald.stufft at gmail.com (Donald Stufft) Date: Tue, 13 Mar 2012 09:34:33 -0400 Subject: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives In-Reply-To: References: <20120313030704.46C512500ED@webabinitio.net> <20120313032548.GA29711@idyll.org> <20120313034819.GC29711@idyll.org> Message-ID: <273F4C10E0754BED9962CCD62D7D3882@gmail.com> On Tuesday, March 13, 2012 at 9:31 AM, Paul Moore wrote: > On 13 March 2012 03:48, C. Titus Brown wrote: > > I feel like there's a middle ground where stable, long-term go-to modules could > > be mentioned, though. I don't spend a lot of time browsing PyPI, but I suspect > > almost everyone spends a certain amount of time in the Python docs (which is a > > testimony to their quality IMO). So I'm in favor of conservative link-outs > > but without any deprecating language. > > > > > I applaud the idea of promoting the many excellent packages available. > It can be very hard to separate the good from the indifferent (or even > bad) when browsing PyPI. I've found some very good packages recently > which I'd never have known about without some random comment on a > mailing list. > > However, I'm not keen on having the stdlib documentation suggest that > I should be using something else. No code should ever be documenting > "don't use me, there are better alternatives" unless it is deprecated > or obsolete. 
> > On the other hand, I would love to see a community-maintained document > that described packages that are acknowledged as "best of breed". That > applies whether or not those packages replace something in the stdlib. > Things like pywin32, lxml, and requests would be examples in my > experience. There's no reason this *has* to be in the core > documentation - it may be relevant that nothing has sprung up > independently yet... > > http://python-guide.org ? > > Maybe a separate item in the Python documentation, "External Modules", > could be created and maintained by the community? By being in the > documentation, it has a level of "official recommendation" status, and > by being a top-level document it's visible (more so than, for example, > a HOWTO document would be). Because it's in the released > documentation, it is relatively stable, which implies that external > modules would need to have a genuine track record to get in there, but > because it's community maintained it should reflect a wider consensus > than just the core developers' views. > > Paul. > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org (mailto:Python-Dev at python.org) > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/donald.stufft%40gmail.com > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From yselivanov.ml at gmail.com Tue Mar 13 15:35:10 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Tue, 13 Mar 2012 10:35:10 -0400 Subject: [Python-Dev] Review of PEP 362 (signature object) In-Reply-To: References: Message-ID: <59118671-F1E5-4D32-833E-64C2B6517634@gmail.com> Guido, Brett, I've tried to use the proposed signature object, however, I found that the 'bind' method is incorrect, and came up with my own implementation of the PEP: https://gist.github.com/2029032 (If needed, I can change the licence to PSFL) I used my version to implement typechecking, arguments validation in various RPC dispatch mechanisms, and it is proven to work. First of all, in the current version of the PEP, "bind" doesn't work correctly with "varargs", as it returns a dictionary object: def foo(bar, *args): print(bar, args) s = signature(foo) bound = s.bind(1, 2 ,3 ,4) print('Bound args:', bound) foo(**bound) This code outputs the following: Bound args: {'args': (2, 3, 4), 'bar': 1} Traceback (most recent call last): File "test.py", line 286, in foo(**bound) TypeError: foo() got an unexpected keyword argument 'args' The conclusion is that ** form of unpacking is not always enough, so 'bind' should at least return a pair of (args, kwargs). Secondly, I don't think that even (args, kwargs) pair is enough, as some information about how arguments were mapped is lost. In my implementation, 'bind' method returns an instance of 'BoundArguments' class, which preserves the exact mapping, and has two convenience properties '.args' and '.kwargs', so the example above transforms into: bound = s.bind(1, 2, 3, 4) foo(*bound.args, **bound.kwargs) And that works as it should. When some advanced processing is required, you can work with its private fields: >>> bound._args (1,) >>> bound._varargs (2, 3, 4) I also think that keyword-only arguments deserve to land in a separate collection in the signature object, in my implementation it is 'signature.kwonlyargs' slot. 
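For comparison, the `.args`/`.kwargs` behaviour described above matches the API that later shipped as `inspect.signature` in Python 3.3 (a sketch using that stdlib form, not the gist itself):

```python
import inspect

def foo(bar, *args):
    return bar, args

sig = inspect.signature(foo)
bound = sig.bind(1, 2, 3, 4)

# The bound mapping keeps the varargs separate instead of flattening
# everything into one dict:
print(bound.arguments)   # maps 'bar' -> 1 and 'args' -> (2, 3, 4)

# The convenience properties re-expand the mapping correctly:
print(bound.args)        # (1, 2, 3, 4)
print(bound.kwargs)      # {}

# So the call can be replayed without a TypeError:
print(foo(*bound.args, **bound.kwargs))  # (1, (2, 3, 4))
```

Keeping the exact mapping in `bound.arguments` while deriving `.args`/`.kwargs` from it means no information about how the arguments were bound is lost, which is precisely what returning a plain dict throws away.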
It is easier to do some advanced processing of arguments this way, and even the 'signature.__iter__' is simpler. Finally, I also added 'render_args' method to the signature. It just renders function's arguments as in its definition: >>> signature(foo).render_args() bar, *args This is useful for printing detailed error messages and hints. - Yury From p.f.moore at gmail.com Tue Mar 13 15:52:19 2012 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 13 Mar 2012 14:52:19 +0000 Subject: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives In-Reply-To: <273F4C10E0754BED9962CCD62D7D3882@gmail.com> References: <20120313030704.46C512500ED@webabinitio.net> <20120313032548.GA29711@idyll.org> <20120313034819.GC29711@idyll.org> <273F4C10E0754BED9962CCD62D7D3882@gmail.com> Message-ID: On 13 March 2012 13:34, Donald Stufft wrote: > http://python-guide.org ? Hmm, yes maybe. I had seen this before (it's where I found out about requests, IIRC). As it says, it "is mostly a skeleton at the moment". With some fleshing out, then it's probably a good start. I have some problems with its recommendations (notably "If you?re starting work on a new Python module, I recommend you write it for Python 2.5 or 2.6, and add support for Python3 in a later iteration." which is probably not appropriate for something that would be officially sanctioned by the core developers). Also, I don't think we want something advertised as "opinionated". And it covers a much wider area than the original suggestion. I'd envisaged something more like a simple list of "obvious" modules: """ Requests - URL Requests is a module designed to make getting resources from the web as easy as possible. It is a simpler and more powerful alternative to the stdlib urllib and urllib2 modules. ... Some code samples here giving basic usage. """ My ideal would be something I could scan in a few spare moments, and pick up pointers to particular modules that I'd find useful. 
Basically, take the "Scenario Guide" section of python-guide, flesh it out, and turn it into a flat list. (I don't like the "Scenario" approach as it tends to force people into a particular view of the world, but maybe that's just me, my applications tend to be more eclectic than any of the "normal" categories). Paul. From guido at python.org Tue Mar 13 15:52:17 2012 From: guido at python.org (Guido van Rossum) Date: Tue, 13 Mar 2012 07:52:17 -0700 Subject: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives In-Reply-To: <20120313051027.GB2318@mathmagic> References: <20120313051027.GB2318@mathmagic> Message-ID: On Mon, Mar 12, 2012 at 10:10 PM, Senthil Kumaran wrote: > On Mon, Mar 12, 2012 at 07:23:11PM -0700, Andrey Petrov wrote: >> I've had the pleasure of speaking with Guido at PyCon and it became evident >> that some of Python's included batteries are significantly lagging behind the >> rapidly-evolving defacto standards of the community specifically in cases like >> urllib and urllib2, which lack important features provided by alternatives like >> httplib2, requests, and my own urllib3. > > Well, I think I have address this because I am the maintainer of those > modules in standard lib. > > First things first, it looks to me that trashing something gives good > motivation to you (and others working on related modules). I don't > have a problem with that. > > But on the other hand, if you think things can be improved in stdlib, > you are welcome to contribute. Just remember that new features, > refactoring with backwards compatibility, 'cool api' for new features > should go in 3.3+ onwards. Bug fixes, confusions on what's RFC > supported vs what's defacto standards, fine line between bugs and > features, those can be considered for 2.7. > > I am personally in favor of constantly improving the standard library > modules along with mention of any good libraries which can be useful > for the purposes of the user. 
> > We already have lots of such references in standard library > documentation. If there is a well maintained package, as long as the > external package is active and maintained, we can have it as link in > the docs. Sometimes those external packages become inactive too, in > those cases, those should pruned. Its' all about maintaining libraries > and docs and being helpful. Well said. Improving existing stdlib modules is always welcome of course! (And the bar is much lower than for adding new modules.) -- --Guido van Rossum (python.org/~guido) From guido at python.org Tue Mar 13 17:27:51 2012 From: guido at python.org (Guido van Rossum) Date: Tue, 13 Mar 2012 09:27:51 -0700 Subject: [Python-Dev] Exceptions in comparison operators In-Reply-To: References: <4F54B496.3030905@hotpy.org> Message-ID: Mark, did you do anything with my reply? On Mon, Mar 5, 2012 at 10:41 AM, Guido van Rossum wrote: > On Mon, Mar 5, 2012 at 4:41 AM, Mark Shannon wrote: >> Comparing two objects (of the same type for simplicity) >> involves a three stage lookup: >> The class has the operator C.__eq__ >> It can be applied to operator (descriptor protocol): C().__eq__ >> and it produces a result: C().__eq__(C()) >> >> Exceptions can be raised in all 3 phases, >> but an exception in the first phase is not really an error, >> its just says the operation is not supported. >> E.g. >> >> class C: pass >> >> C() == C() is False, rather than raising an Exception. >> >> If an exception is raised in the 3rd stage, then it is propogated, >> as follows: >> >> class C: >> ? def __eq__(self, other): >> ? ? ? raise Exception("I'm incomparable") >> >> C() == C() ?raises an exception >> >> However, if an exception is raised in the second phase (descriptor) >> then it is silenced: >> >> def no_eq(self): >> ? ?raise Exception("I'm incomparable") >> >> class C: >> ? __eq__ = property(no_eq) >> >> C() == C() is False. >> >> But should it raise an exception? >> >> The behaviour for arithmetic is different. 
>> >> def no_add(self): >> ? ?raise Exception("I don't add up") >> >> class C: >> ? __add__ = property(no_add) >> >> C() + C() raises an exception. >> >> So what is the "correct" behaviour? >> It is my opinion that comparisons should behave like arithmetic >> and raise an exception. > > I think you're probably right. This is one of those edge cases that > are so rare (and always considered a bug in the user code) that we > didn't define carefully what should happen. There are probably some > implementation-specific reasons why it was done this way (comparisons > use a very different code path from regular binary operators) but that > doesn't sound like a very good reason. > > OTOH there *is* a difference: as you say, C() == C() is False when the > class doesn't define __eq__, whereas C() + C() raises an exception if > it doesn't define __add__. Still, this is more likely to have favored > the wrong outcome for (2) by accident than by design. > > You'll have to dig through the CPython implementation and find out > exactly what code needs to be changed before I could be sure though -- > sometimes seeing the code jogs my memory. > > But I think of x==y as roughly equivalent to > > r = NotImplemented > if hasattr(x, '__eq__'): > ?r = x.__eq__(y) > if r is NotImplemented and hasattr(y, '__eq__'): > ?r = y.__eq__(x) > if r is NotImplemented: > ?r = False > > which would certainly suggest that (2) should raise an exception. A > possibility is that the code looking for the __eq__ attribute > suppresses *all* exceptions instead of just AttributeError. If you > change no_eq() to return 42, for example, the comparison raises the > much more reasonable TypeError: 'int' object is not callable. 
> > -- > --Guido van Rossum (python.org/~guido) -- --Guido van Rossum (python.org/~guido) From merwok at netwok.org Tue Mar 13 17:40:30 2012 From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=) Date: Tue, 13 Mar 2012 17:40:30 +0100 Subject: [Python-Dev] [RELEASED] Distutils2 1.0a4 In-Reply-To: <4F5F77CB.1040600@netwok.org> References: <4F5F77CB.1040600@netwok.org> Message-ID: <4F5F787E.1090307@netwok.org> What would be a release email without errors? :) The wiki link I gave doesn?t work, it should be http://wiki.python.org/moin/Distutils2/Contributing From merwok at netwok.org Tue Mar 13 17:37:31 2012 From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=) Date: Tue, 13 Mar 2012 17:37:31 +0100 Subject: [Python-Dev] [RELEASED] Distutils2 1.0a4 Message-ID: <4F5F77CB.1040600@netwok.org> Hello, On behalf of the distutils2 contributors, I am thrilled to announce the release of Distutils2 1.0a4. Distutils2 is the packaging library that supersedes Distutils. It supports distributing, uploading, downloading, installing and removing projects, and is also a support library for other packaging tools like pip and buildout. It will be provided as part of Python 3.3; this release is a backport compatible with Python 2.5 to 2.7. Distutils2 1.0a4 contains a number of known bugs, limitations and missing features, but we have released it to let users and developers download and try it. This means you! If you want to report new issues, request features or contribute, please read DEVNOTES.txt in the source distribution or http://wiki.python.org/Distutils2/Contributing More alpha releases will be cut when important bugs get fixed during the next few months, like Windows or PyPy compatibility. The first beta is planned for June, and 1.0 final for August (to follow Python 3.3.0). Until beta, the API is subject to drastic changes and code can get removed. 
Basic documentation is at http://docs.python.org/dev/packaging ; it will get updated, expanded and improved in the coming months. Enjoy! Repository: http://hg.python.org/distutils2 Bug tracker: http://bugs.python.org/ (component "Distutils2") Mailing list: http://mail.python.org/mailman/listinfo/distutils-sig/ From tarek at ziade.org Tue Mar 13 18:36:38 2012 From: tarek at ziade.org (=?ISO-8859-1?Q?Tarek_Ziad=E9?=) Date: Tue, 13 Mar 2012 10:36:38 -0700 Subject: [Python-Dev] [RELEASED] Distutils2 1.0a4 In-Reply-To: <4F5F77CB.1040600@netwok.org> References: <4F5F77CB.1040600@netwok.org> Message-ID: <4F5F85A6.1050802@ziade.org> Thanks a lot for your hard work and dedication on packaging ! On 3/13/12 9:37 AM, ?ric Araujo wrote: > Hello, > > On behalf of the distutils2 contributors, I am thrilled to announce the > release of Distutils2 1.0a4. > > Distutils2 is the packaging library that supersedes Distutils. It > supports distributing, uploading, downloading, installing and removing > projects, and is also a support library for other packaging tools like > pip and buildout. It will be provided as part of Python 3.3; this > release is a backport compatible with Python 2.5 to 2.7. > > Distutils2 1.0a4 contains a number of known bugs, limitations and > missing features, but we have released it to let users and developers > download and try it. This means you! If you want to report new issues, > request features or contribute, please read DEVNOTES.txt in the source > distribution or http://wiki.python.org/Distutils2/Contributing > > More alpha releases will be cut when important bugs get fixed during the > next few months, like Windows or PyPy compatibility. The first beta is > planned for June, and 1.0 final for August (to follow Python 3.3.0). > Until beta, the API is subject to drastic changes and code can get removed. > > Basic documentation is at http://docs.python.org/dev/packaging ; it will > get updated, expanded and improved in the coming months. > > Enjoy! 
> > Repository: http://hg.python.org/distutils2 > Bug tracker: http://bugs.python.org/ (component "Distutils2") > Mailing list: http://mail.python.org/mailman/listinfo/distutils-sig/ > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/ziade.tarek%40gmail.com From mark at hotpy.org Tue Mar 13 19:35:33 2012 From: mark at hotpy.org (Mark Shannon) Date: Tue, 13 Mar 2012 18:35:33 +0000 Subject: [Python-Dev] Exceptions in comparison operators In-Reply-To: References: <4F54B496.3030905@hotpy.org> Message-ID: <4F5F9375.8040007@hotpy.org> Guido van Rossum wrote: > Mark, did you do anything with my reply? Not yet. I noticed the difference when developing my HotPy VM (latest incarnation thereof) which substitutes a sequence of low-level bytecodes for the high-level ones when tracing. (A bit like PyPy but much more Python-specific and amenable to interpretation, rather than compilation) I generate all the code sequences for binary ops from a template and noticed the slight difference when running the test suite. My implementation of equals follows the same pattern as the arithmetic operators (which is why I was wondering if that were the correct behaviour). My definition of op1 == op2: def surrogate_eq(op1, op2): if $overrides(op1, op2, '__eq__'): if op2?__eq__: result = op2$__eq__(op1) if result is not NotImplemented: return result if op1?__eq__: result = op1$__eq__(op2) if result is not NotImplemented: return result else: if op1?__eq__: result = op1$__eq__(op2) if result is not NotImplemented: return result if op2?__eq__: result = op2$__eq__(op1) if result is not NotImplemented: return result return op1 is op2 Where: x$__op__ means special lookup (bypassing the instance dictionary): x?__op__ means has the named special method i.e. 
any('__op__' in t.__dict__ for t in type(op).__mro__)

and $overrides(op1, op2, 'xxx') means that type(op2) is a proper subtype
of type(op1) *and* type(op1).__dict__['xxx'] != type(op2).__dict__['xxx']

It would appear that the current version is:

def surrogate_eq(op1, op2):
    if is_proper_subtype_of(type(op2), type(op1)):
        if op2?__eq__:
            result = op2$__eq__(op1)
            if result is not NotImplemented:
                return result
        if op1?__eq__:
            result = op1$__eq__(op2)
            if result is not NotImplemented:
                return result
    else:
        if op1?__eq__:
            result = op1$__eq__(op2)
            if result is not NotImplemented:
                return result
        if op2?__eq__:
            result = op2$__eq__(op1)
            if result is not NotImplemented:
                return result
    return op1 is op2

Which means that == behaves differently to + for subtypes which do not
override the __eq__ method. Thus:

class MyValue1:
    def __init__(self, val):
        self.val = val
    def __lt__(self, other):
        print("lt")
        return self.val < other.val
    def __gt__(self, other):
        print("gt")
        return self.val > other.val
    def __add__(self, other):
        print("add")
        return self.val + other.val
    def __radd__(self, other):
        print("radd")
        return self.val + other.val

class MyValue2(MyValue1):
    pass

a = MyValue1(1)
b = MyValue2(2)
print(a + b)
print(a < b)

currently prints the following:

add
3
gt
True

Cheers,
Mark.

>
> On Mon, Mar 5, 2012 at 10:41 AM, Guido van Rossum wrote:
>> On Mon, Mar 5, 2012 at 4:41 AM, Mark Shannon wrote:
>>> Comparing two objects (of the same type for simplicity)
>>> involves a three stage lookup:
>>> The class has the operator C.__eq__
>>> It can be applied to operator (descriptor protocol): C().__eq__
>>> and it produces a result: C().__eq__(C())
>>>
>>> Exceptions can be raised in all 3 phases,
>>> but an exception in the first phase is not really an error,
>>> it just says the operation is not supported.
>>> E.g.
>>>
>>> class C: pass
>>>
>>> C() == C() is False, rather than raising an Exception.
>>> If an exception is raised in the 3rd stage, then it is propagated,
>>> as follows:
>>>
>>> class C:
>>>     def __eq__(self, other):
>>>         raise Exception("I'm incomparable")
>>>
>>> C() == C() raises an exception
>>>
>>> However, if an exception is raised in the second phase (descriptor)
>>> then it is silenced:
>>>
>>> def no_eq(self):
>>>     raise Exception("I'm incomparable")
>>>
>>> class C:
>>>     __eq__ = property(no_eq)
>>>
>>> C() == C() is False.
>>>
>>> But should it raise an exception?
>>>
>>> The behaviour for arithmetic is different.
>>>
>>> def no_add(self):
>>>     raise Exception("I don't add up")
>>>
>>> class C:
>>>     __add__ = property(no_add)
>>>
>>> C() + C() raises an exception.
>>>
>>> So what is the "correct" behaviour?
>>> It is my opinion that comparisons should behave like arithmetic
>>> and raise an exception.
>>
>> I think you're probably right. This is one of those edge cases that
>> are so rare (and always considered a bug in the user code) that we
>> didn't define carefully what should happen. There are probably some
>> implementation-specific reasons why it was done this way (comparisons
>> use a very different code path from regular binary operators) but that
>> doesn't sound like a very good reason.
>>
>> OTOH there *is* a difference: as you say, C() == C() is False when the
>> class doesn't define __eq__, whereas C() + C() raises an exception if
>> it doesn't define __add__. Still, this is more likely to have favored
>> the wrong outcome for (2) by accident than by design.
>>
>> You'll have to dig through the CPython implementation and find out
>> exactly what code needs to be changed before I could be sure though --
>> sometimes seeing the code jogs my memory.
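The asymmetry being discussed is easy to reproduce. The following sketch (not from the original thread) makes both `__eq__` and `__add__` raising properties, so the exception fires during the descriptor phase of special-method lookup; whether the `==` lookup error is silenced is version-dependent, so the snippet records what the running interpreter actually does rather than asserting it:

```python
# Phase-2 (descriptor) failure: the exception occurs while *looking up*
# the special method, not while calling it.

def no_special(self):
    raise RuntimeError("lookup failed in descriptor phase")

class C:
    __eq__ = property(no_special)
    __add__ = property(no_special)

def outcome(thunk):
    """Run thunk and describe what happened, without letting it raise."""
    try:
        return "returned %r" % (thunk(),)
    except Exception as exc:
        return "raised %s: %s" % (type(exc).__name__, exc)

# Historically the == lookup error was silenced (falling back to identity,
# hence False); the + lookup error propagates, per the thread.
eq_outcome = outcome(lambda: C() == C())
add_outcome = outcome(lambda: C() + C())
print("==:", eq_outcome)
print("+: ", add_outcome)
```

Running this on the interpreter at hand is the quickest way to see which of the two code paths is taken for comparisons.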
>>
>> But I think of x==y as roughly equivalent to
>>
>> r = NotImplemented
>> if hasattr(x, '__eq__'):
>>     r = x.__eq__(y)
>> if r is NotImplemented and hasattr(y, '__eq__'):
>>     r = y.__eq__(x)
>> if r is NotImplemented:
>>     r = False
>>
>> which would certainly suggest that (2) should raise an exception. A
>> possibility is that the code looking for the __eq__ attribute
>> suppresses *all* exceptions instead of just AttributeError. If you
>> change no_eq() to return 42, for example, the comparison raises the
>> much more reasonable TypeError: 'int' object is not callable.
>>
>> --
>> --Guido van Rossum (python.org/~guido)
>
>
>

From v+python at g.nevcal.com  Tue Mar 13 19:58:13 2012
From: v+python at g.nevcal.com (Glenn Linderman)
Date: Tue, 13 Mar 2012 11:58:13 -0700
Subject: [Python-Dev] Docs of weak stdlib modules should encourage
	exploration of 3rd-party alternatives
In-Reply-To: 
References: <20120313030704.46C512500ED@webabinitio.net>
	<20120313032548.GA29711@idyll.org>
	<20120313034819.GC29711@idyll.org>
	
Message-ID: <4F5F98C5.1000309@g.nevcal.com>

On 3/13/2012 6:31 AM, Paul Moore wrote:
> It can be very hard to separate the good from the indifferent (or even
> bad) when browsing PyPI. I've found some very good packages recently
> which I'd never have known about without some random comment on a
> mailing list.

+1

> However, I'm not keen on having the stdlib documentation suggest that
> I should be using something else. No code should ever be documenting
> "don't use me, there are better alternatives" unless it is deprecated
> or obsolete.

+0

> On the other hand, I would love to see a community-maintained document
> that described packages that are acknowledged as "best of breed". That
> applies whether or not those packages replace something in the stdlib.
> Things like pywin32, lxml, and requests would be examples in my
> experience.
+1

> There's no reason this *has* to be in the core
> documentation - it may be relevant that nothing has sprung up
> independently yet...

Hmm.

> Maybe a separate item in the Python documentation, "External Modules",
> could be created and maintained by the community? By being in the
> documentation, it has a level of "official recommendation" status, and
> by being a top-level document it's visible (more so than, for example,
> a HOWTO document would be). Because it's in the released
> documentation, it is relatively stable, which implies that external
> modules would need to have a genuine track record to get in there, but
> because it's community maintained it should reflect a wider consensus
> than just the core developers' views.

+1

This is the best proposal I've seen for including references to external
modules. It gets it in the core documentation, hopefully with enough
keywords that search would typically find external modules that are
supersets of stdlib modules in the same result set that the stdlib
module would be found. Yet it doesn't intrude on the documentation for
the stdlib module. And beyond a 1-2 paragraph description, would not be
fully documented, except by referencing the external module's
documentation.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From me at kennethreitz.com  Tue Mar 13 20:13:52 2012
From: me at kennethreitz.com (Kenneth Reitz)
Date: Tue, 13 Mar 2012 12:13:52 -0700
Subject: [Python-Dev] Docs of weak stdlib modules should encourage
	exploration of 3rd-party alternatives
In-Reply-To: 
References: 
Message-ID: 

I think the cheeseshop trove classifiers would be the ideal way to
agnostically link to a page of packages related to the standard package
in question. No need for sort order.

The beauty of this solution is that packages that aren't maintained won't
add the appropriate classifier to their package, and therefore not show up
in the list.
--
Kenneth Reitz

On Monday, March 12, 2012 at 9:23 PM, Brian Curtin wrote:
> On Mon, Mar 12, 2012 at 21:14, Andrey Petrov wrote:
> > On Mon, Mar 12, 2012 at 8:58 PM, Brian Curtin wrote:
> > > On Mon, Mar 12, 2012 at 19:23, Andrey Petrov wrote:
> > > > What such a snippet might look like:
> > > >
> > > > "Batteries are included with Python but sometimes they are old and
> > > > leaky -- this is one of those cases. Please have a look in PyPI for more modern
> > > > alternatives provided by the Python community."
> > >
> > > What does "leaky" mean here? Someone's going to see that, think it has
> > > memory leaks, then rant on the internet about how we ship crap and
> > > just document it as so.
> >
> > I agree Brian and David, the choice of "leaky" in the wording is poor.
> > It was supposed to be maintaining the "batteries" metaphor but it's
> > clearly ambiguous.
> >
> > Perhaps something along the lines of...
> >
> > "Batteries are included with Python but for stability and backwards
> > compatibility, some of the standard library is not always as modern as
> > alternatives provided by the Python community -- this is one of those
> > cases. Please have a look at PyPI for more cutting-edge alternatives."
>
> Sorry for another color choice on the bikeshed, but I would drop the
> word or references to "batteries". *We* know what "batteries included"
> means, but there are undoubtedly people who won't get it. It's just
> code - let's call it code.
>
> > > > Part 2:
> > > > I propose we add a new category of package identifiers such as "Topic ::
> > > > Standard Library Alternative :: {stdlib_package_name}" which authors of
> > > > libraries can tag themselves under. The documentation warning snippet will
> > > > provide a link to the appropriate PyPI query to list packages claiming to be
> > > > alternatives to the stdlib package in question.
> > >
> > > Automating it to something on PyPI is not the right answer.
People > > > will use it incorrectly, either in that they'll add it to packages for > > > which it isn't accurate, and people just flat out won't use it or know > > > about it. It won't be accurate this way, and anything that we're > > > documenting needs to be vetted. > > > > > > It's not often that a great alternative comes up, so I don't see the > > > manual burden being too great. > > > > > > > > > There are a dozen or more urllib/httplib/pycurl competitors on PyPI, > > and new ones spring up all the time. I'm not sure how we would go > > about objectively blessing the best "official" option at any given > > moment, or how frequently we would have to do this. > > > > > The same way we choose to accept libraries into the standard library. > New ones spring up all the time - mature, proven, and widely used ones > do not. If someone thinks libfoo is ready, they suggest it. If we > haven't heard of it, the conversation ends. If we have people who know > it, maybe we have them look deeper and figure out if it's something we > can put our stamp on just like we might with the recent talk of > "experimental package" inclusion. > > > With self-identifying, we could sort by some sort metric (monthly > > downloads? magical score?) and create a somewhat-actionable list. > > > > > Downloads don't mean the code is good. Voting is gamed. I really don't > think there's a good automated solution to tell us what the > high-quality replacement projects are. > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From brian at python.org  Tue Mar 13 20:38:04 2012
From: brian at python.org (Brian Curtin)
Date: Tue, 13 Mar 2012 14:38:04 -0500
Subject: [Python-Dev] Docs of weak stdlib modules should encourage
	exploration of 3rd-party alternatives
In-Reply-To: 
References: 
Message-ID: 

On Tue, Mar 13, 2012 at 14:13, Kenneth Reitz wrote:
> I think the cheeseshop trove classifiers would be the ideal way to
> agnostically link to a page of packages related to the standard package in
> question. No need for sort order.

Randomize the order for all I care. We still need to ensure we're
suggesting quality projects. It doesn't make sense for us to suggest
alternatives that we wouldn't want to use ourselves by just polling
some list that anyone can get on.

This is documentation that receives hundreds of thousands of views a
month*. We need to be picky about what goes in it.

> The beauty of this solution is that packages that aren't maintained won't
> add the appropriate classifier to their package, and therefore not show up
> in the list.

Just because it's maintained doesn't mean it's not garbage. I think we
really need to start every project off with a 0 and make them prove
that they're a 10. Just being active means nothing.

* http://www.python.org/webstats/usage_201202.html#TOPURLS - I don't
know what page "Documentation" means since it doesn't have a specific
link, but whatever page that is got hit 960K times in February.
From fuzzyman at voidspace.org.uk Tue Mar 13 20:42:20 2012 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Tue, 13 Mar 2012 12:42:20 -0700 Subject: [Python-Dev] Unpickling py2 str as py3 bytes (and vice versa) - implementation (issue #6784) In-Reply-To: References: Message-ID: <8DE609C2-0FF4-4412-B26E-B453C67EF0F0@voidspace.org.uk> On 13 Mar 2012, at 04:44, Merlijn van Deen wrote: > http://bugs.python.org/issue6784 ("byte/unicode pickle > incompatibilities between python2 and python3") > > Hello all, > > Currently, pickle unpickles python2 'str' objects as python3 'str' > objects, where the encoding to use is passed to the Unpickler. > However, there are cases where it makes more sense to unpickle a > python2 'str' as python3 'bytes' - for instance when it is actually > binary data, and not text. > > Currently, the mapping is as follows, when reading a pickle: > python2 'str' -> python3 'str' (using an encoding supplied to Unpickler) > python2 'unicode' -> python3 'str' > > or, when creating a pickle using protocol <= 2: > python3 'str' -> python2 'unicode' > python3 'bytes' -> python2 '__builtins__.bytes object' > It does seem unfortunate that by default it is impossible for a developer to "do the right thing" as regards pickling / unpickling here. Binary data on Python 2 being unpickled as Unicode on Python 3 is presumably for the convenience of developers doing the *wrong thing* (and only works for ascii anyway). All the best, Michael Foord > This issue suggests to add a flag to change the behaviour as follows: > a) python2 'str' -> python3 'bytes' > b) python3 'bytes' -> python2 'str' > > The question on this is how to pass this flag. To quote Antoine (with > permission) on my mail about this issue on core-mentorship: > >> I haven't answered because I'm unsure about the approach itself - do we >> want to add yet another argument to pickle methods, especially this late >> in the 3.x development cycle? 
>
>
> Currently, I have implemented it using an extra argument for the
> Pickler and Unpickler objects ('bytestr'), which toggles the
> behaviour. I.e.:
> >>> pickled = Pickler(data, bytestr=True); unpickled = Unpickler(data, bytestr=True).
> This is the approach used in pickle_bytestr.patch [1]
>
> Another option would be to implement a separate Pickler/Unpickler
> object, such that
> >>> pickled = BytestrPickler(data, bytestr=True); unpickled = BytestrUnpickler(data, bytestr=True)
> This is the approach I initially implemented [2].
>
> Alternatively, there is the option only to implement the Unpickler,
> leaving the Pickler as it is. This allows
> >>> unpickled = Unpickler(data, encoding=bytes)
> where the bytes type is used as a special 'flag'.
>
> And, of course, there is the option not to implement this in the stdlib at all.
>
>
> What are your ideas on this?
>
> Best,
> Merlijn
>
> [0] http://bugs.python.org/issue6784
> [1] http://bugs.python.org/file24719/pickle_bytestr.patch
> [2] https://github.com/valhallasw/py2/blob/master/bytestrpickle.py
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk
>

--
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing
http://www.sqlite.org/different.html

From van.lindberg at gmail.com  Tue Mar 13 20:43:27 2012
From: van.lindberg at gmail.com (VanL)
Date: Tue, 13 Mar 2012 14:43:27 -0500
Subject: [Python-Dev] Python install layout and the PATH on win32
Message-ID: 

Following up on conversations at PyCon, I want to bring up one of my
personal hobby horses for change in 3.3: Fix install layout on Windows,
with a side order of making the PATH work better.
Short version:

1) The layout for the python root directory for all platforms should be
as follows:

stdlib = {base/userbase}/lib/python{py_version_short}
platstdlib = {base/userbase}/lib/python{py_version_short}
purelib = {base/userbase}/lib/python{py_version_short}/site-packages
platlib = {base/userbase}/lib/python{py_version_short}/site-packages
include = {base/userbase}/include/python{py_version_short}
scripts = {base/userbase}/bin
data = {base/userbase}

2) On Windows, the python executable (python.exe) will be in the "bin"
directory. That way the installer can optionally add just that directory
to the PATH to pick up all python-related executables (like pip,
easy_install, etc).

I have talked with a number of people at PyCon, including Tarek and MvL.
Nobody objected, and several thought it was a good idea.

Long version:

As a bit of background, the layout for the Python root directory is
different between platforms, varying in capitalization ("Include" vs.
"include") and sometimes in the names of directories used ("Scripts" on
Windows vs. "bin" most everywhere else). Further, the python version may
or may not be included in the path to the standard library. In times
past, this layout was driven by an INSTALL_SCHEMES dict deep in the guts
of distutils, but with distutils2 it has been lifted out and placed
within a config file (sysconfig.cfg). [1]

Proposal #1 above deals with this inconsistency [2]. More concretely, it
also makes it so that I can check in an entire environment into source
control and work on it cross platform.

As an additional wrinkle on Windows, the main python binary (python.exe)
is placed in the python root directory, but all associated runnable
files are placed in the "Scripts" directory, so that someone who wants
to run both Python and a Python script needs to add both $PYTHON_ROOT
and $PYTHON_ROOT/Scripts to the PATH.
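For reference, the install scheme a given interpreter actually uses can be dumped with the stdlib sysconfig module. A quick sketch (not from the original message; the key names match the ones in the proposal, while the expanded values are what differ per platform):

```python
import sysconfig

# Dump the install scheme of the running interpreter.  The keys
# (stdlib, platstdlib, purelib, platlib, include, scripts, data)
# match the names in the proposal above.
for name, path in sorted(sysconfig.get_paths().items()):
    print("%-12s %s" % (name, path))

# The named schemes themselves (e.g. 'posix_prefix', 'nt', 'nt_user'):
print(sysconfig.get_scheme_names())
```

Running this on Windows and on a POSIX system side by side shows exactly the "Scripts" vs. "bin" and "Include" vs. "include" differences described above.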
To add just a little more complication, the python binary is placed within the binaries directory when a virtualenv is created, leading to an inconsistency between regular python installs and virtualenvs. Proposal #2 again provides consistency between virtualenvs and regular Python installs, and on windows it allows a single directory to be placed on the PATH to get all python-related binaries to run. [1] https://bitbucket.org/tarek/distutils2/src/6c3d67ed3adb/distutils2/_backport/sysconfig.cfg [2] It may be a foolish consistency, but I have a little mind. Thanks, Van From tjreedy at udel.edu Tue Mar 13 20:49:26 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Tue, 13 Mar 2012 15:49:26 -0400 Subject: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives In-Reply-To: References: Message-ID: On 3/13/2012 12:40 AM, Guido van Rossum wrote: > On Mon, Mar 12, 2012 at 9:22 PM, Terry Reedy wrote: >> I would rather we figure out how to encourage authors of advancing packages >> to contribute better implementations of existing features and well-tested >> new features back to the stdlib module. > > I would not. I think you misunderstood me and are talking about something other than what I meant. There are about 3250 open issues (this slowly but steadily grows). Of them, 1450 are behavior (bug) issues. We need more people, especially people with specialized expertise, writing and reviewing patches. As you said in response to Senthil > Improving existing stdlib modules is always welcome Exactly. So I would like to figure out how to encourage more such improvements. > There are many excellent packages out there that should > not be made into stdlib packages simply because their authors are not > done adding new features. Or because the package is outside the reasonable scope of the stdlib, or requires a different type of expertise than most core development, or for other reasons. 
Authors of separately maintained packages are, from our viewpoint, as eligible to help with tracker issues as anyone else, even while they continue work on their external package. Some of them are more likely than most contributors to have the knowledge needed for some particular issues. -- Terry Jan Reedy From thomas at python.org Tue Mar 13 20:50:09 2012 From: thomas at python.org (Thomas Wouters) Date: Tue, 13 Mar 2012 12:50:09 -0700 Subject: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives In-Reply-To: References: Message-ID: On Tue, Mar 13, 2012 at 12:38, Brian Curtin wrote: > On Tue, Mar 13, 2012 at 14:13, Kenneth Reitz wrote: > > I think the cheesehop trove classifiers would be the ideal way to > > agnostically link to a page of packages related to the standard package > in > > question. No need for sort order. > > Randomize the order for all I care. We still need to ensure we're > suggesting quality projects. It doesn't make sense for us to suggest > alternatives that we wouldn't want to use ourselves by just polling > some list that anyone can get on. > > This is documentation that receives hundreds of thousands of views a > month*. We need to be picky about what goes in it. > > > The beauty of this solution is that packages that aren't maintained won't > > add the appropriate classifier to their package, and therefore not show > up > > in the list. > > Just because it's maintained doesn't mean it's not garbage. I think we > really need to start every project off with a 0 and make them prove > that they're a 10. Just being active means nothing. > > > > * http://www.python.org/webstats/usage_201202.html#TOPURLS - I don't > know what page "Documentation" means since it doesn't have a specific > link, but whatever page that is got hit 960K times in February. > GroupURL /doc/* Documentation So it's anything that's in www.python.org/doc/. I don't believe it counts doc.python.org and docs.python.org. 
-- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian at python.org Tue Mar 13 21:11:43 2012 From: brian at python.org (Brian Curtin) Date: Tue, 13 Mar 2012 15:11:43 -0500 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: References: Message-ID: On Tue, Mar 13, 2012 at 14:43, VanL wrote: > Following up on conversations at PyCon, I want to bring up one of my > personal hobby horses for change in 3.3: Fix install layout on Windows, with > a side order of making the PATH work better. > > Short version: > > 1) The layout for the python root directory for all platforms should be as > follows: > > stdlib = {base/userbase}/lib/python{py_version_short} > platstdlib = {base/userbase}/lib/python{py_version_short} > purelib = {base/userbase}/lib/python{py_version_short}/site-packages > platlib = {base/userbase}/lib/python{py_version_short}/site-packages > include = {base/userbase}/include/python{py_version_short} > scripts = {base/userbase}/bin > data = {base/userbase} I'm familiar with the scripts/bin change. I take it the rest of that stuff matches *nix? Text later on seems to support this, so I think I'm on board with it. > 2) On Windows, the python executable (python.exe) will be in the "bin" > directory. That way the installer can optionally add just that directory to > the PATH to pick up all python-related executables (like pip, easy_install, > etc). I'm updating my installer patch to do exactly this. After talking with Dino from Microsoft's Python Tools team, we're also going to add an additional registry key for them to find that bin/ path. > I have talked with a number of people at PyCon, including Tarek and MvL. > Nobody objected, and several thought it was a good idea. Martin and I spoke on Friday and at least the bin/ folder and Path stuff are acceptable and I'm working on the code for those. 
> To add just a little more complication, the python binary is placed within > the binaries directory when a virtualenv is created, leading to an > inconsistency between regular python installs and virtualenvs. If that virtualenv PEP is also accepted for 3.3, I think we can take care of inconsistencies there (at least moving forward). From guido at python.org Tue Mar 13 22:13:31 2012 From: guido at python.org (Guido van Rossum) Date: Tue, 13 Mar 2012 14:13:31 -0700 Subject: [Python-Dev] Unpickling py2 str as py3 bytes (and vice versa) - implementation (issue #6784) In-Reply-To: <8DE609C2-0FF4-4412-B26E-B453C67EF0F0@voidspace.org.uk> References: <8DE609C2-0FF4-4412-B26E-B453C67EF0F0@voidspace.org.uk> Message-ID: On Tue, Mar 13, 2012 at 12:42 PM, Michael Foord wrote: > > On 13 Mar 2012, at 04:44, Merlijn van Deen wrote: > >> http://bugs.python.org/issue6784 ("byte/unicode pickle >> incompatibilities between python2 and python3") >> >> Hello all, >> >> Currently, pickle unpickles python2 'str' objects as python3 'str' >> objects, where the encoding to use is passed to the Unpickler. >> However, there are cases where it makes more sense to unpickle a >> python2 'str' as python3 'bytes' - for instance when it is actually >> binary data, and not text. >> >> Currently, the mapping is as follows, when reading a pickle: >> python2 'str' -> python3 'str' (using an encoding supplied to Unpickler) >> python2 'unicode' -> python3 'str' >> >> or, when creating a pickle using protocol <= 2: >> python3 'str' -> python2 'unicode' >> python3 'bytes' -> python2 '__builtins__.bytes object' >> > > > It does seem unfortunate that by default it is impossible for a developer to "do the right thing" as regards pickling / unpickling here. Binary data on Python 2 being unpickled as Unicode on Python 3 is presumably for the convenience of developers doing the *wrong thing* (and only works for ascii anyway). 
Well, since trying to migrate data between versions using pickle is the "wrong" thing anyway, I think the status quo is just fine. Developers doing the "right" thing don't use pickle for this purpose. -- --Guido van Rossum (python.org/~guido) From guido at python.org Tue Mar 13 22:16:40 2012 From: guido at python.org (Guido van Rossum) Date: Tue, 13 Mar 2012 14:16:40 -0700 Subject: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives In-Reply-To: References: Message-ID: On Tue, Mar 13, 2012 at 12:49 PM, Terry Reedy wrote: > Authors of separately maintained packages are, from our viewpoint, as > eligible to help with tracker issues as anyone else, even while they > continue work on their external package. Some of them are more likely than > most contributors to have the knowledge needed for some particular issues. This is a good idea. I was chatting w. Senthil this morning about adding improvements to urllib/request.py based upon ideas from urllib3, requests, httplib2 (?), and we came to the conclusion that it might be a good idea to let those packages' authors review the proposed stdlib improvements. -- --Guido van Rossum (python.org/~guido) From tjreedy at udel.edu Tue Mar 13 22:19:57 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Tue, 13 Mar 2012 17:19:57 -0400 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: References: Message-ID: On 3/13/2012 3:43 PM, VanL wrote: > Following up on conversations at PyCon, I want to bring up one of my > personal hobby horses for change in 3.3: Fix install layout on Windows, > with a side order of making the PATH work better. 
>
> Short version:
>
> 1) The layout for the python root directory for all platforms should be
> as follows:
>
> stdlib = {base/userbase}/lib/python{py_version_short}
> platstdlib = {base/userbase}/lib/python{py_version_short}
> purelib = {base/userbase}/lib/python{py_version_short}/site-packages
> platlib = {base/userbase}/lib/python{py_version_short}/site-packages
> include = {base/userbase}/include/python{py_version_short}
> scripts = {base/userbase}/bin
> data = {base/userbase}

What is {base/userbase} actually on a typical machine? Is it fixed or
user choice?

--
Terry Jan Reedy

From solipsis at pitrou.net  Tue Mar 13 22:21:03 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Tue, 13 Mar 2012 22:21:03 +0100
Subject: [Python-Dev] Docs of weak stdlib modules should encourage
	exploration of 3rd-party alternatives
References: 
Message-ID: <20120313222103.163877e6@pitrou.net>

On Tue, 13 Mar 2012 14:16:40 -0700
Guido van Rossum wrote:
> On Tue, Mar 13, 2012 at 12:49 PM, Terry Reedy wrote:
> > Authors of separately maintained packages are, from our viewpoint, as
> > eligible to help with tracker issues as anyone else, even while they
> > continue work on their external package. Some of them are more likely than
> > most contributors to have the knowledge needed for some particular issues.
>
> This is a good idea. I was chatting w. Senthil this morning about
> adding improvements to urllib/request.py based upon ideas from
> urllib3, requests, httplib2 (?), and we came to the conclusion that it
> might be a good idea to let those packages' authors review the
> proposed stdlib improvements.

We don't have any provisions against review by third-party
developers already. I think the main problem (for us, of course) is that
these people generally aren't interested enough to really dive into
stdlib patches and proposals.
For example, for the ssl module, I have sometimes tried to involve authors of third-party packages such as pyOpenSSL (or, IIRC, M2Crypto), but I got very little or no reviewing. Regards Antoine. From guido at python.org Tue Mar 13 22:38:03 2012 From: guido at python.org (Guido van Rossum) Date: Tue, 13 Mar 2012 14:38:03 -0700 Subject: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives In-Reply-To: <20120313222103.163877e6@pitrou.net> References: <20120313222103.163877e6@pitrou.net> Message-ID: On Tue, Mar 13, 2012 at 2:21 PM, Antoine Pitrou wrote: > On Tue, 13 Mar 2012 14:16:40 -0700 > Guido van Rossum wrote: > >> On Tue, Mar 13, 2012 at 12:49 PM, Terry Reedy wrote: >> > Authors of separately maintained packages are, from our viewpoint, as >> > eligible to help with tracker issues as anyone else, even while they >> > continue work on their external package. Some of them are more likely than >> > most contributors to have the knowledge needed for some particular issues. >> >> This is a good idea. I was chatting w. Senthil this morning about >> adding improvements to urllib/request.py based upon ideas from >> urllib3, requests, httplib2 (?), and we came to the conclusion that it >> might be a good idea to let those packages' authors review the >> proposed stdlib improvements. > > We don't have any provisions against reviewal by third-party > developers already. I think the main problem (for us, of course) is that > these people generally aren't interested enough to really dive in > stdlib patches and proposals. > > For example, for the ssl module, I have sometimes tried to involve > authors of third-party packages such as pyOpenSSL (or, IIRC, M2Crypto), > but I got very little or no reviewing. IIRC M2Crypto is currently unmaintained, so that doesn't surprise me. (In general it seems most crypto wrappers seem unmaintained -- it must be a thankless job.) 
Still, AFAICT both requests and urllib3 are very actively maintained by people who know what they are doing, and it would be nice if we could build bridges instead of competition. So let's at least try. (But I'm not asking you, Antoine, to try and approach them personally. :-) -- --Guido van Rossum (python.org/~guido) From van.lindberg at gmail.com Tue Mar 13 22:39:10 2012 From: van.lindberg at gmail.com (VanL) Date: Tue, 13 Mar 2012 16:39:10 -0500 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: References: Message-ID: On 3/13/2012 4:19 PM, Terry Reedy wrote: > What is {base/userbase} actually on a typical machine? It is fixed or > user choice? It is based upon user choice and on whether it is a system-wide install (base) or a single-user install (userbase). Typically, though, it is just "where you installed Python" (/usr/local, C:\python\3.3, whatever). From van.lindberg at gmail.com Tue Mar 13 22:40:01 2012 From: van.lindberg at gmail.com (VanL) Date: Tue, 13 Mar 2012 16:40:01 -0500 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: References: Message-ID: On 3/13/2012 3:11 PM, Brian Curtin wrote: > I'm familiar with the scripts/bin change. I take it the rest of that > stuff matches *nix? Text later on seems to support this, so I think > I'm on board with it. Yes, that is correct. > Martin and I spoke on Friday and at least the bin/ folder and Path > stuff are acceptable and I'm working on the code for those. [...] > If that virtualenv PEP is also accepted for 3.3, I think we can take > care of inconsistencies there (at least moving forward). Thanks Brian! 
From valhallasw at arctus.nl Tue Mar 13 22:50:35 2012 From: valhallasw at arctus.nl (Merlijn van Deen) Date: Tue, 13 Mar 2012 22:50:35 +0100 Subject: [Python-Dev] Unpickling py2 str as py3 bytes (and vice versa) - implementation (issue #6784) In-Reply-To: References: <8DE609C2-0FF4-4412-B26E-B453C67EF0F0@voidspace.org.uk> Message-ID: On 13 March 2012 22:13, Guido van Rossum wrote: > Well, since trying to migrate data between versions using pickle is > the "wrong" thing anyway, I think the status quo is just fine. > Developers doing the "right" thing don't use pickle for this purpose. I'm confused by this. "The pickle serialization format is guaranteed to be backwards compatible across Python releases" [1], which - at least to me - suggests it's fine to use pickle for long-term storage, and that reading this data in new Python versions is not a "bad" thing to do. Am I missing something here? [1] http://docs.python.org/library/pickle.html#the-pickle-protocol From guido at python.org Tue Mar 13 23:08:35 2012 From: guido at python.org (Guido van Rossum) Date: Tue, 13 Mar 2012 15:08:35 -0700 Subject: [Python-Dev] Unpickling py2 str as py3 bytes (and vice versa) - implementation (issue #6784) In-Reply-To: References: <8DE609C2-0FF4-4412-B26E-B453C67EF0F0@voidspace.org.uk> Message-ID: On Tue, Mar 13, 2012 at 2:50 PM, Merlijn van Deen wrote: > On 13 March 2012 22:13, Guido van Rossum wrote: >> Well, since trying to migrate data between versions using pickle is >> the "wrong" thing anyway, I think the status quo is just fine. >> Developers doing the "right" thing don't use pickle for this purpose. > > I'm confused by this. "The pickle serialization format is guaranteed > to be backwards compatible across Python releases" [1], which - at > least to me - suggests it's fine to use pickle for long-term storage, > and that reading this data in new Python versions is not a "bad" > thing to do. Am I missing something here? 
> > [1] http://docs.python.org/library/pickle.html#the-pickle-protocol That was probably written before Python 3. Python 3 also dropped the long-term backwards compatibilities for the language and stdlib. I am certainly fine with adding a warning to the docs that this guarantee does not apply to the Python 2/3 boundary. But I don't think we should map 8-bit str instances from Python 2 to bytes in Python 3. My snipe was mostly in reference to the many other things that can go wrong with pickled data as your environment evolves -- if you're not careful you can have references (by name) to modules, functions, classes in pickled data that won't resolve in a later (or earlier!) version of your app, or you might have objects that are unpickled in an incomplete state that causes later use of the objects to break (e.g. if a newer version of __init__() sets some extra instance variables -- unpickling doesn't generally call __init__, so these new variables won't be set if they didn't exist in the old version). Etc., etc. If you can solve your problem with a suitably hacked Unpickler subclass that's fine with me, but I would personally use this opportunity to change the app to some other serialization format that is perhaps less general but more robust than pickle. I've been bitten by too many pickle-related problems to recommend pickle to anyone... 
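For readers who do need to read Python 2 pickles under Python 3: the "suitably hacked Unpickler" largely boils down to the encoding argument that Python 3's Unpickler already accepts. A minimal sketch — the hard-coded byte string below is what pickle.dumps('abc', 2) produces under Python 2 (a protocol-2 pickle of an 8-bit str):

```python
import pickle

# Protocol-2 pickle of the Python 2 str 'abc' (SHORT_BINSTRING opcode).
py2_pickle = b'\x80\x02U\x03abcq\x00.'

# Default: the py2 str is decoded to a py3 str (ASCII by default).
print(pickle.loads(py2_pickle))                      # → 'abc'

# encoding='bytes' maps py2 str to py3 bytes instead.
print(pickle.loads(py2_pickle, encoding='bytes'))    # → b'abc'

# encoding='latin-1' round-trips arbitrary 8-bit data into str.
print(pickle.loads(py2_pickle, encoding='latin-1'))  # → 'abc'
```

This only controls how py2 str instances are mapped; all the other hazards listed above (missing classes, skipped __init__) remain.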
-- --Guido van Rossum (python.org/~guido)

From ncoghlan at gmail.com Tue Mar 13 23:49:21 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 14 Mar 2012 08:49:21 +1000 Subject: [Python-Dev] Unpickling py2 str as py3 bytes (and vice versa) - implementation (issue #6784) In-Reply-To: References: <8DE609C2-0FF4-4412-B26E-B453C67EF0F0@voidspace.org.uk> Message-ID: On Wed, Mar 14, 2012 at 8:08 AM, Guido van Rossum wrote:
> If you can solve your problem with a suitably hacked Unpickler
> subclass that's fine with me, but I would personally use this
> opportunity to change the app to some other serialization format that
> is perhaps less general but more robust than pickle. I've been bitten
> by too many pickle-related problems to recommend pickle to anyone...

It's fine for in-memory storage of (almost) arbitrary objects (I use it to stash things in a memory backed sqlite DB via SQLAlchemy) and for IPC, but yeah, for long-term cross-version persistent storage, I'd be looking to something like JSON rather than pickle. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From kristjan at ccpgames.com Tue Mar 13 23:53:42 2012 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Tue, 13 Mar 2012 22:53:42 +0000 Subject: [Python-Dev] making python's c iterators picklable (http://bugs.python.org/issue14288) Message-ID: http://bugs.python.org/issue14288 Raymond suggested that this patch should be discussed here, so here goes: How this came about: There are frameworks, such as the Nagare web framework (http://www.nagare.org/), that rely on suspending execution at some point and resuming it again. Nagare does this using Stackless Python, pickles the execution state of a tasklet, and resumes it later (possibly elsewhere). Other frameworks are doing similar things in cloud computing. I have seen such presentations at previous PyCons, and they have had to write their own picklers to get around these problems.

The problem is this: While pickling execution state (frame objects, functions) might be considered exotic, and indeed Stackless has modifications unique to it to do it, these frameworks quickly run into troubles that really have nothing to do with the fact that they are doing such exotic things. For example, the fact that the very common dictiter is implemented in C and not Python necessitates that special pickle support be added for it; otherwise only some contexts can be pickled (those that are not currently iterating through a dict) and not others.

Now Stackless has tried to provide this functionality for many years and indeed has special pickling support for dictiters, listiters, etc. (stuff that has nothing to do with the stacklessness of Stackless, really). However, (somewhat) recently a lot of the itertools were moved into C. Suddenly iterators, previously picklable (by merit of being in .py), stopped being that, just because they became C objects. In addition, a bunch of other iterators started showing up (stringiter, bytesiter). This started to cause problems. Suddenly you have to arbitrarily restrict what you can and can't do in code that is using these approaches. For Stackless (and Nagare), it was necessary to ban the usage of the _itertools module in web programs.

Instead of adding this to Stackless, and thus widening the gap between Stackless and CPython, I think it is a good idea simply to fix this in CPython itself. Note that I also consider this to be of general utility to regular, non-exotic applications: why should an application that is working with a bunch of data, but wants to stop that for a bit and maybe save it out to disk, have to worry about transforming the data into valid primitive data structures before doing so? In my opinion, any objects that have simple and obvious pickle semantics should be picklable. Iterators are just regular objects with some state. They are not file pointers or sockets or database cursors.
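On an interpreter with the patch applied (it eventually landed in 3.3 under issue 14288), the behaviour being argued for looks like this:

```python
import pickle

it = iter([1, 2, 3])    # list_iterator, implemented in C
next(it)                # consume the first element

# Without __reduce__ support on the C iterator types this fails to
# pickle; with the patch it round-trips, position included:
clone = pickle.loads(pickle.dumps(it))
print(list(clone))      # → [2, 3]; the clone resumes where `it` was
```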
And again, I argue that if these objects were implemented in .py, they would already be automatically picklable (indeed, itertools.py was). The detail that some iterators in standard Python are implemented in C should not automatically restrict their usage for no particular reason.

The patch is straightforward. Most of it is tests, in fact. But it does use a few tricks in some places to get around the fact that some of those iterator types are hidden. We did try to be complete and find all the C iterators, but it was a year ago that the bulk of this work was done and something might have been added in the meantime. Anyway, that's my pitch. Kristján

From jackdied at gmail.com Tue Mar 13 23:58:07 2012 From: jackdied at gmail.com (Jack Diederich) Date: Tue, 13 Mar 2012 18:58:07 -0400 Subject: [Python-Dev] making python's c iterators picklable (http://bugs.python.org/issue14288) In-Reply-To: References: Message-ID: 2012/3/13 Kristján Valur Jónsson :
> http://bugs.python.org/issue14288
> In my opinion, any objects that have simple and obvious pickle semantics
> should be picklable. Iterators are just regular objects with some state.
> They are not file pointers or sockets or database cursors. And again, I
> argue that if these objects were implemented in .py, they would already be
> automatically picklable (indeed, itertools.py was). The detail that some
> iterators in standard Python are implemented in C should not automatically
> restrict their usage for no particular reason.

+1, things that can be pickled should be pickleable.
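For the record, the mechanism the patch gives the C types is exactly what any Python-level iterator already gets essentially for free. A toy illustration (the countdown class is made up for this example, not part of the patch):

```python
import pickle

class countdown:
    """A pure-Python iterator whose state survives pickling."""

    def __init__(self, n):
        self.n = n

    def __iter__(self):
        return self

    def __next__(self):
        if self.n <= 0:
            raise StopIteration
        self.n -= 1
        return self.n + 1

    def __reduce__(self):
        # All the pickle machinery needs: how to rebuild an
        # equivalent iterator from its remaining state.
        return (countdown, (self.n,))

it = countdown(5)
next(it)
next(it)                                  # two items consumed
clone = pickle.loads(pickle.dumps(it))
print(list(clone))                        # → [3, 2, 1]; resumes mid-iteration
```

The patch essentially adds the equivalent of such __reduce__ methods to the C-implemented iterator types.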
-Jack From solipsis at pitrou.net Wed Mar 14 00:05:43 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 14 Mar 2012 00:05:43 +0100 Subject: [Python-Dev] making python's c iterators picklable (http://bugs.python.org/issue14288) References: Message-ID: <20120314000543.453c9463@pitrou.net> On Tue, 13 Mar 2012 22:53:42 +0000 Kristj?n Valur J?nsson wrote: > http://bugs.python.org/issue14288 > > Raymond suggested that this patch should be discussed here, so here goes: Sounds good on the principle. Of course, the patch needs to be reviewed. cheers Antoine. From tismer at stackless.com Wed Mar 14 00:54:10 2012 From: tismer at stackless.com (Christian Tismer) Date: Tue, 13 Mar 2012 16:54:10 -0700 Subject: [Python-Dev] making python's c iterators picklable (http://bugs.python.org/issue14288) In-Reply-To: <20120314000543.453c9463@pitrou.net> References: <20120314000543.453c9463@pitrou.net> Message-ID: <4F5FDE22.5080305@stackless.com> On 3/13/12 4:05 PM, Antoine Pitrou wrote: > On Tue, 13 Mar 2012 22:53:42 +0000 > Kristj?n Valur J?nsson wrote: >> http://bugs.python.org/issue14288 >> >> Raymond suggested that this patch should be discussed here, so here goes: > Sounds good on the principle. > Of course, the patch needs to be reviewed. I am very much for this patch. Of course I am biased by my stackless history and would be very happy to get most pickling into Python, where it belongs. To improve the patch a bit, I propose to put the hidden types into some module. I agree with Kristjan that the types module would be a good place to bootstrap the hidden iterable types. I even would like to propose a PEP: Whenever stuff is turned into C, which was picklable when implemented in Python, or something new is implemented that makes sense to pickle, a pickle implementation should always be provided. cheers -- chris -- Christian Tismer :^) tismerysoft GmbH : Have a break! Take a ride on Python's Karl-Liebknecht-Str. 
121 : *Starship* http://starship.python.net/ 14482 Potsdam : PGP key -> http://pgp.uni-mainz.de work +49 173 24 18 776 mobile +49 173 24 18 776 fax n.a. PGP 0x57F3BF04 9064 F4E1 D754 C2FF 1619 305B C09C 5A3B 57F3 BF04 whom do you want to sponsor today? http://www.stackless.com/ From steve at pearwood.info Wed Mar 14 00:55:35 2012 From: steve at pearwood.info (Steven D'Aprano) Date: Wed, 14 Mar 2012 10:55:35 +1100 Subject: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives In-Reply-To: References: Message-ID: <4F5FDE77.7030601@pearwood.info> Brian Curtin wrote: > On Tue, Mar 13, 2012 at 14:13, Kenneth Reitz wrote: >> I think the cheesehop trove classifiers would be the ideal way to >> agnostically link to a page of packages related to the standard package in >> question. No need for sort order. > > Randomize the order for all I care. We still need to ensure we're > suggesting quality projects. It doesn't make sense for us to suggest > alternatives that we wouldn't want to use ourselves by just polling > some list that anyone can get on. "Need" is awfully strong. I don't believe it is the responsibility of the standard library to be judge and reviewer of third party packages that it doesn't control. -1 on adding *any* sort of recommendations about third-party software except, at most, a case-by-case basis where absolutely necessary. What problem are we actually trying to solve here? Do we think that there are users who really have no clue where to find 3rd party software AND don't know how to use Google, BUT read the Python docs? I find it difficult to believe that there are people who both read the docs and are so clueless that they need to be told that there are alternatives available and which they should be using. Personally I think this is a solution in search of a problem. 
Judging by the python-tutor mailing list, even *beginners* know that they aren't limited to the stdlib and how to go about finding third party software. There are many more questions about PyGame and wxPython than there are about tkinter. There are plenty of questions about numpy. There are lots of questions about niche packages I'd never even heard of. I simply don't think there is any evidence that there are appreciable numbers of Python coders, beginners or experts, who need to be told about third party software. Who are these people we're trying to reach out to?

> This is documentation that receives hundreds of thousands of views a
> month*. We need to be picky about what goes in it.

Exactly why we should be wary of recommending specific packages. Should we recommend wxPython over Pyjamas or PyGUI or PyGtk? On what basis? Whatever choice we make is going to be wrong for some people, and is potentially unfair to the maintainers of the packages left out. Should we recommend them all? That's no help to anyone. Make no recommendation at all? That's the status quo. What counts as "best of breed" can change rapidly -- software churn is part of the reason that the packages aren't in the stdlib in the first place. It can also be a matter of taste and convention. There are a few no-brainers, like numpy, but everything else, no, let's not open this can of worms. I can see no benefit to this suggestion, and all sorts of ways that this might go badly. -- Steven

From victor.stinner at gmail.com Wed Mar 14 00:57:16 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Wed, 14 Mar 2012 00:57:16 +0100 Subject: [Python-Dev] Drop the new time.wallclock() function? Message-ID: Hi, I added two functions to the time module in Python 3.3: wallclock() and monotonic(). I'm unable to explain the difference between these two functions, even if I wrote them :-) wallclock() is supposed to be more accurate than time() but has an unspecified starting point.
monotonic() is similar except that it is monotonic: it cannot go backward. monotonic() may not be available or may fail whereas wallclock() is available/works, but I think that the two functions are redundant. I prefer to keep only monotonic() because it is not affected by system clock updates and should help to fix issues with NTP updates in functions implementing a timeout. What do you think?

--

monotonic() has 3 implementations:
* Windows: QueryPerformanceCounter() with QueryPerformanceFrequency()
* Mac OS X: mach_absolute_time() with mach_timebase_info()
* UNIX: clock_gettime(CLOCK_MONOTONIC_RAW) or clock_gettime(CLOCK_MONOTONIC)

wallclock() has 3 implementations:
* Windows: QueryPerformanceCounter() with QueryPerformanceFrequency(), with a fallback to GetSystemTimeAsFileTime() if QueryPerformanceFrequency() failed
* UNIX: clock_gettime(CLOCK_MONOTONIC_RAW), clock_gettime(CLOCK_MONOTONIC) or clock_gettime(CLOCK_REALTIME), with a fallback to gettimeofday() if clock_gettime(*) failed
* Otherwise: gettimeofday()

(wallclock should also use mach_absolute_time() on Mac OS X)

Victor

From fuzzyman at voidspace.org.uk Wed Mar 14 01:03:58 2012 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Tue, 13 Mar 2012 17:03:58 -0700 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: References: Message-ID: <6EAD1A31-F759-4AF7-A636-68CB21ADDB18@voidspace.org.uk> On 13 Mar 2012, at 16:57, Victor Stinner wrote:
> Hi,
>
> I added two functions to the time module in Python 3.3: wallclock()
> and monotonic(). I'm unable to explain the difference between these
> two functions, even if I wrote them :-) wallclock() is supposed to be
> more accurate than time() but has an unspecified starting point.
> monotonic() is similar except that it is monotonic: it cannot go
> backward. monotonic() may not be available or may fail whereas wallclock()
> is available/works, but I think that the two functions are redundant.
> > I prefer to keep only monotonic() because it is not affected by system > clock update and should help to fix issues on NTP update in functions > implementing a timeout. > > What do you think? > I am in the middle of adding a feature to unittest that involves timing of individual tests. I want the highest resolution cross platform measure of wallclock time - and time.wallclock() looked ideal. If monotonic may not exist or can fail why would that be better? Michael > -- > > monotonic() has 3 implementations: > * Windows: QueryPerformanceCounter() with QueryPerformanceFrequency() > * Mac OS X: mach_absolute_time() with mach_timebase_info() > * UNIX: clock_gettime(CLOCK_MONOTONIC_RAW) or clock_gettime(CLOCK_MONOTONIC) > > wallclock() has 3 implementations: > * Windows: QueryPerformanceCounter() with QueryPerformanceFrequency(), > with a fallback to GetSystemTimeAsFileTime() if > QueryPerformanceFrequency() failed > * UNIX: clock_gettime(CLOCK_MONOTONIC_RAW), > clock_gettime(CLOCK_MONOTONIC) or clock_gettime(CLOCK_REALTIME), with > a fallback to gettimeofday() if clock_gettime(*) failed > * Otherwise: gettimeofday() > > (wallclock should also use mach_absolute_time() on Mac OS X) > > > Victor > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk > -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html From nadeem.vawda at gmail.com Wed Mar 14 01:18:26 2012 From: nadeem.vawda at gmail.com (Nadeem Vawda) Date: Wed, 14 Mar 2012 02:18:26 +0200 Subject: [Python-Dev] Drop the new time.wallclock() function? 
In-Reply-To: References: Message-ID: So wallclock() falls back to a not-necessarily-monotonic time source if necessary, while monotonic() raises an exception in that case? ISTM that these don't need to be separate functions - rather, we can have one function that takes a flag (called require_monotonic, or something like that) telling it which failure mode to use. Does that make sense? Cheers, Nadeem From guido at python.org Wed Mar 14 01:27:14 2012 From: guido at python.org (Guido van Rossum) Date: Tue, 13 Mar 2012 17:27:14 -0700 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: References: Message-ID: On Tue, Mar 13, 2012 at 4:57 PM, Victor Stinner wrote: > I added two functions to the time module in Python 3.3: wallclock() > and monotonic(). I'm unable to explain the difference between these > two functions, even if I wrote them :-) wallclock() is suppose to be > more accurate than time() but has an unspecified starting point. > monotonic() is similar except that it is monotonic: it cannot go > backward. monotonic() may not be available or fail whereas wallclock() > is available/work, but I think that the two functions are redundant. > > I prefer to keep only monotonic() because it is not affected by system > clock update and should help to fix issues on NTP update in functions > implementing a timeout. > > What do you think? I think wallclock() is an awkward name; in other contexts I've seen "wall clock time" used to mean the time that a clock on the wall would show, i.e. local time. This matches definition #1 of http://www.catb.org/jargon/html/W/wall-time.html (while yours matches #2 :-). I agree that it's better to have only one of these. I also think if we offer it we should always have it -- if none of the implementations are available, I guess you could fall back on returning time.time(), with some suitable offset so people don't think it is always the same. Maybe it could be called realtime()? 
-- --Guido van Rossum (python.org/~guido)

From kristjan at ccpgames.com Wed Mar 14 01:27:14 2012 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Wed, 14 Mar 2012 00:27:14 +0000 Subject: [Python-Dev] sharing sockets among processes on windows Message-ID: Hi, I'm interested in contributing a patch to duplicate sockets between processes on Windows. The API to do this is WSADuplicateSocket()/WSASocket(), as already used by dup() in _socketmodule.c. Here's what I have:
1) Sockets have a method, duplicate(target_pid), that returns a bytes object containing the socket info for the target process.
2) When socket(x, y, z, data) is called with this bytes object as the fourth argument, the socket is recreated from it.
What are your thoughts on this? Also, should I try to reuse the socket.dup() function somehow, perhaps by giving it the target pid? Secondly, there is multiprocessing.reduction, which is doing similar things for Unix. Does anyone familiar with it know how it goes about doing this? Would it be simple to change it to use this mechanism on Windows? Kristján

From victor.stinner at gmail.com Wed Mar 14 01:29:46 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Wed, 14 Mar 2012 01:29:46 +0100 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: References: Message-ID: <4F5FE67A.3020505@gmail.com> On 14/03/2012 01:18, Nadeem Vawda wrote:
> So wallclock() falls back to a not-necessarily-monotonic time source
> if necessary, while monotonic() raises an exception in that case? ISTM
> that these don't need to be separate functions - rather, we can have one
> function that takes a flag (called require_monotonic, or something like
> that) telling it which failure mode to use. Does that make sense?
I don't think that time.monotonic() can fail in practice, and it is available on all modern platforms (Windows, Mac OS X and OSes implementing clock_gettime()). On Windows, time.monotonic() fails with an OSError if QueryPerformanceFrequency() failed. QueryPerformanceFrequency() can fail if "the installed hardware does not support a high-resolution performance counter" according to Microsoft documentation. Windows uses the CPU RDTSC instruction, or the ACPI power management timer, or even the old 8254 PIT. I think that you have at least one of these devices on your computer.

I suppose that you can use a manual fallback to time.time() if time.monotonic() failed. If time.monotonic() fails, it fails directly at the first call. Example of a fallback working with Python < 3.3:

try:
    time.monotonic()
except (OSError, AttributeError):
    get_time = time.time
else:
    get_time = time.monotonic

Victor

From kristjan at ccpgames.com Wed Mar 14 01:45:27 2012 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Wed, 14 Mar 2012 00:45:27 +0000 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: References: Message-ID: The reason I originally suggested "wallclock" was because that term is often used to distinguish time measurements (deltas) that show real-world time from those showing CPU or kernel time: "number.crunch() took 2 seconds wallclock time but only 1 second CPU!". The original problem was that time.clock() was "wallclock" on some platforms but "cpu" on others, IIRC. But monotonic is probably even better. I agree with removing one or the other, probably wallclock. K

-----Original Message----- From: python-dev-bounces+kristjan=ccpgames.com at python.org [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On Behalf Of Guido van Rossum Sent: 13. mars 2012 17:27 To: Victor Stinner Cc: Python Dev Subject: Re: [Python-Dev] Drop the new time.wallclock() function?
I think wallclock() is an awkward name; in other contexts I've seen "wall clock time" used to mean the time that a clock on the wall would show, i.e. local time. This matches definition #1 of http://www.catb.org/jargon/html/W/wall-time.html (while yours matches #2 :-). I agree that it's better to have only one of these. I also think if we offer it we should always have it -- if none of the implementations are available, I guess you could fall back on returning time.time(), with some suitable offset so people don't think it is always the same. Maybe it could be called realtime()? From victor.stinner at gmail.com Wed Mar 14 02:03:42 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Wed, 14 Mar 2012 02:03:42 +0100 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: References: Message-ID: > I agree that it's better to have only one of these. I also think if we > offer it we should always have it -- if none of the implementations > are available, I guess you could fall back on returning time.time(), > with some suitable offset so people don't think it is always the same. > Maybe it could be called realtime()? For a concrete use case, see for example: http://bugs.python.org/issue14222 I just wrote two patches, for the queue and threading modules, using time.monotonic() if available, with a fallback to time.time(). My patches call time.monotonic() to ensure that it doesn't fail with OSError. I suppose that most libraries and programs will have to implement a similar fallback. We may merge both functions with a flag to be able to disable the fallback. Example: - time.realtime(): best-effort monotonic, with a fallback - time.realtime(monotonic=True): monotonic, may raise OSError or NotImplementedError Victor From tismer at stackless.com Wed Mar 14 02:06:35 2012 From: tismer at stackless.com (Christian Tismer) Date: Tue, 13 Mar 2012 18:06:35 -0700 Subject: [Python-Dev] Drop the new time.wallclock() function? 
In-Reply-To: References: Message-ID: <4F5FEF1B.8030306@stackless.com> On 3/13/12 5:45 PM, Kristj?n Valur J?nsson wrote: > The reason I originally suggested "wallclock" was because that term is often used to distinguish time measurements (delta) that show real world time from those showing CPU or Kernel time. "number.crunch() took 2 seconds wallclock time but only 1 second CPU!". The original problem was that time.clock() was "wallclock" on some platforms but "cpu" on others, IIRC. > But monotonic is probably even better. I agree removing one or the other, probably wallclock. > K > > -----Original Message----- > From: python-dev-bounces+kristjan=ccpgames.com at python.org [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On Behalf Of Guido van Rossum > Sent: 13. mars 2012 17:27 > To: Victor Stinner > Cc: Python Dev > Subject: Re: [Python-Dev] Drop the new time.wallclock() function? > > I think wallclock() is an awkward name; in other contexts I've seen "wall clock time" used to mean the time that a clock on the wall would show, i.e. local time. This matches definition #1 of http://www.catb.org/jargon/html/W/wall-time.html (while yours matches > #2 :-). > > I agree that it's better to have only one of these. I also think if we offer it we should always have it -- if none of the implementations are available, I guess you could fall back on returning time.time(), with some suitable offset so people don't think it is always the same. > Maybe it could be called realtime()? > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/tismer%40stackless.com Btw., have you considered virtual machines? I happen to run windows in Parallels or Virtualbox quite often. There the "wall clock" stuff notoriously does not work. It would be good (but difficult?) 
if the supposed-to-be-accurate clock could test itself, if it works at all, and replace itself with a fallback. In my case, this causes quite a few PyPy tests to fail ;-) ciao -- Chris -- Christian Tismer :^) tismerysoft GmbH : Have a break! Take a ride on Python's Karl-Liebknecht-Str. 121 : *Starship* http://starship.python.net/ 14482 Potsdam : PGP key -> http://pgp.uni-mainz.de work +49 173 24 18 776 mobile +49 173 24 18 776 fax n.a. PGP 0x57F3BF04 9064 F4E1 D754 C2FF 1619 305B C09C 5A3B 57F3 BF04 whom do you want to sponsor today? http://www.stackless.com/ From yselivanov.ml at gmail.com Wed Mar 14 02:09:46 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Tue, 13 Mar 2012 21:09:46 -0400 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: References: Message-ID: If we need to decide to which function should be kept - I vote for monotonic. It's extremely useful (even essential) to track timeouts in various schedulers implementations, for example. Quick search also shows the demand for it, as there are questions on stackoverflow.com and few packages on PyPI. - Yury On 2012-03-13, at 9:03 PM, Victor Stinner wrote: >> I agree that it's better to have only one of these. I also think if we >> offer it we should always have it -- if none of the implementations >> are available, I guess you could fall back on returning time.time(), >> with some suitable offset so people don't think it is always the same. >> Maybe it could be called realtime()? > > For a concrete use case, see for example: > http://bugs.python.org/issue14222 > > I just wrote two patches, for the queue and threading modules, using > time.monotonic() if available, with a fallback to time.time(). My > patches call time.monotonic() to ensure that it doesn't fail with > OSError. > > I suppose that most libraries and programs will have to implement a > similar fallback. > > We may merge both functions with a flag to be able to disable the > fallback. 
Example: > > - time.realtime(): best-effort monotonic, with a fallback > - time.realtime(monotonic=True): monotonic, may raise OSError or > NotImplementedError > > Victor > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/yselivanov.ml%40gmail.com From nadeem.vawda at gmail.com Wed Mar 14 02:10:46 2012 From: nadeem.vawda at gmail.com (Nadeem Vawda) Date: Wed, 14 Mar 2012 03:10:46 +0200 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: References: Message-ID: On Wed, Mar 14, 2012 at 3:03 AM, Victor Stinner wrote: > I suppose that most libraries and programs will have to implement a > similar fallback. > > We may merge both functions with a flag to be able to disable the > fallback. Example: > > ?- time.realtime(): best-effort monotonic, with a fallback > ?- time.realtime(monotonic=True): monotonic, may raise OSError or > NotImplementedError This was my suggestion - I think it's useful to have the fallback available (since most users will want it), but at the same time we should also cater to users who need a clock that is *guaranteed* to be monotonic. As an aside, I think "monotonic" is a better name than "realtime"; it conveys the functions purpose more clearly. Then we could call the flag "strict". Cheers, Nadeem From andrew.svetlov at gmail.com Wed Mar 14 02:12:32 2012 From: andrew.svetlov at gmail.com (Andrew Svetlov) Date: Tue, 13 Mar 2012 18:12:32 -0700 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: <4F5FE67A.3020505@gmail.com> References: <4F5FE67A.3020505@gmail.com> Message-ID: On Tue, Mar 13, 2012 at 5:29 PM, Victor Stinner wrote: > I suppose that you can use a manual fallback to time.time() if > time.monotonic() failed. If time.monotonic() fails, it fails directly at the > first call. 
Example of a fallback working with Python < 3.3: > > try: >     time.monotonic() > except (OSError, AttributeError): >     get_time = time.time > else: >     get_time = time.monotonic > I like the 'fallback' solution, though `get_time` is not the best name for a high-precision timer from my perspective. Can you call it `monotonic` or `realtime`? From kristjan at ccpgames.com Wed Mar 14 02:11:51 2012 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Wed, 14 Mar 2012 01:11:51 +0000 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: <4F5FEF1B.8030306@stackless.com> References: <4F5FEF1B.8030306@stackless.com> Message-ID: Interesting thought. Although I don't see how that could fail on Windows: if the QPC function is really just talking to a clock chip, surely that hasn't been virtualized. Is there an actual example of Windows hardware where this API fails (virtual or not)? Perhaps there is no real need to have a fallback mechanism, and it would even be best to write such a mechanism inside the function itself, and just return GetSystemTimeAsFileTime() instead. K -----Original Message----- From: Christian Tismer [mailto:tismer at stackless.com] Sent: 13. mars 2012 18:07 To: Kristján Valur Jónsson Cc: Guido van Rossum; Victor Stinner; Python Dev Subject: Re: [Python-Dev] Drop the new time.wallclock() function? Btw., have you considered virtual machines? I happen to run Windows in Parallels or VirtualBox quite often. There the "wall clock" stuff notoriously does not work. It would be good (but difficult?) if the supposed-to-be-accurate clock could test itself, if it works at all, and replace itself with a fallback. In my case, this causes quite a few PyPy tests to fail ;-) ciao -- Chris -- Christian Tismer :^) tismerysoft GmbH : Have a break! Take a ride on Python's Karl-Liebknecht-Str.
121 : *Starship* http://starship.python.net/ 14482 Potsdam : PGP key -> http://pgp.uni-mainz.de work +49 173 24 18 776 mobile +49 173 24 18 776 fax n.a. PGP 0x57F3BF04 9064 F4E1 D754 C2FF 1619 305B C09C 5A3B 57F3 BF04 whom do you want to sponsor today? http://www.stackless.com/ From guido at python.org Wed Mar 14 02:34:05 2012 From: guido at python.org (Guido van Rossum) Date: Tue, 13 Mar 2012 18:34:05 -0700 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: References: Message-ID: On Tue, Mar 13, 2012 at 6:03 PM, Victor Stinner wrote: >> I agree that it's better to have only one of these. I also think if we >> offer it we should always have it -- if none of the implementations >> are available, I guess you could fall back on returning time.time(), >> with some suitable offset so people don't think it is always the same. >> Maybe it could be called realtime()? > > For a concrete use case, see for example: > http://bugs.python.org/issue14222 > > I just wrote two patches, for the queue and threading modules, using > time.monotonic() if available, with a fallback to time.time(). My > patches call time.monotonic() to ensure that it doesn't fail with > OSError. > > I suppose that most libraries and programs will have to implement a > similar fallback. It seems horrible to force everyone to copy the same silly block of code. The time module itself should implement this once. > We may merge both functions with a flag to be able to disable the > fallback. Example: > > - time.realtime(): best-effort monotonic, with a fallback > - time.realtime(monotonic=True): monotonic, may raise OSError or > NotImplementedError I have no opinions on this or other API details. But please make the function always exist and return something vaguely resembling a monotonic real-time clock. (BTW IMO the docs should state explicitly that it returns a float.)
-- --Guido van Rossum (python.org/~guido) From martin at v.loewis.de Wed Mar 14 02:37:38 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 13 Mar 2012 18:37:38 -0700 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: References: Message-ID: <4F5FF662.4070201@v.loewis.de> > 1) The layout for the python root directory for all platforms should be > as follows: > > stdlib = {base/userbase}/lib/python{py_version_short} > platstdlib = {base/userbase}/lib/python{py_version_short} > purelib = {base/userbase}/lib/python{py_version_short}/site-packages > platlib = {base/userbase}/lib/python{py_version_short}/site-packages > include = {base/userbase}/include/python{py_version_short} > scripts = {base/userbase}/bin > data = {base/userbase} [...] > I have talked with a number of people at PyCon, including Tarek and MvL. > Nobody objected, and several thought it was a good idea. I admit that I didn't understand that lib/python{version} was also part of the proposal. I'm fine with the bin/ change, but skeptical about the lib change - this just adds a redundant level of directories on Windows. The installation will end up in c:\python33\lib\python3.3 which has the software name and version twice in the path. Do we *really* need this? Regards, Martin From van.lindberg at gmail.com Wed Mar 14 02:57:38 2012 From: van.lindberg at gmail.com (VanL) Date: Tue, 13 Mar 2012 20:57:38 -0500 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F5FF662.4070201@v.loewis.de> References: <4F5FF662.4070201@v.loewis.de> Message-ID: On Mar 13, 2012, at 8:37 PM, "Martin v. 
Löwis" wrote: >> 1) The layout for the python root directory for all platforms should be >> as follows: >> >> stdlib = {base/userbase}/lib/python{py_version_short} >> platstdlib = {base/userbase}/lib/python{py_version_short} >> purelib = {base/userbase}/lib/python{py_version_short}/site-packages >> platlib = {base/userbase}/lib/python{py_version_short}/site-packages >> include = {base/userbase}/include/python{py_version_short} >> scripts = {base/userbase}/bin >> data = {base/userbase} > [...] >> I have talked with a number of people at PyCon, including Tarek and MvL. >> Nobody objected, and several thought it was a good idea. > > I admit that I didn't understand that lib/python{version} was > also part of the proposal. I'm fine with the bin/ change, but > skeptical about the lib change - this just adds a redundant level > of directories on Windows. The installation will end up in > > c:\python33\lib\python3.3 > > which has the software name and version twice in the path. > > Do we *really* need this? We *already* have this. The only difference in this proposal is that we go from py_version_nodot to py_version_short, i.e. from c:\python33\lib\python33 to c:\python33\lib\python3.3 Given that we already repeat it, isn't it better to be consistent? Thanks, Van From tjreedy at udel.edu Wed Mar 14 03:58:40 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Tue, 13 Mar 2012 22:58:40 -0400 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: References: <4F5FF662.4070201@v.loewis.de> Message-ID: On 3/13/2012 9:57 PM, VanL wrote: > On Mar 13, 2012, at 8:37 PM, "Martin v.
L?wis" > wrote: > >>> 1) The layout for the python root directory for all platforms >>> should be as follows: >>> >>> stdlib = {base/userbase}/lib/python{py_version_short} platstdlib >>> = {base/userbase}/lib/python{py_version_short} purelib = >>> {base/userbase}/lib/python{py_version_short}/site-packages >>> platlib = >>> {base/userbase}/lib/python{py_version_short}/site-packages >>> include = {base/userbase}/include/python{py_version_short} >>> scripts = {base/userbase}/bin data = {base/userbase} >> [...] >>> I have talked with a number of people at PyCon, including Tarek >>> and MvL. Nobody objected, and several thought it was a good >>> idea. >> >> I admit that I didn't understand that lib/python{version} was also >> part of the proposal. I'm fine with the bin/ change, but skeptical >> about the lib change - this just adds a redundant level of >> directories on Windows. The installation will end up in >> >> c:\python33\lib\python3.3 >> >> which has the software name and version twice in the path. >> >> Do we *really* need this? > > > We *already* have this. The only difference in this proposal is that > we go from py_version_nodot to py_version_short, i.e. from > > c:\python33\lib\python33 Right not, we (at least I) have .../python33/Lib/ .../python32/Lib/ > to > > c:\python33\lib\python3.3 > > Given that we already repeat it, isn't it better to be consistent? But there is no repetition currently on Windows installations. I though you were just proposing to switch lib (lower-cased, and scripts renamed as bin, and pythonxx). So I do not think I yet understand what the proposal is and how it would be different from what I have now. 
-- > Terry Jan Reedy From anacrolix at gmail.com Wed Mar 14 04:31:34 2012 From: anacrolix at gmail.com (Matt Joiner) Date: Wed, 14 Mar 2012 11:31:34 +0800 Subject: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives In-Reply-To: <20120313222103.163877e6@pitrou.net> References: <20120313222103.163877e6@pitrou.net> Message-ID: On Mar 14, 2012 5:27 AM, "Antoine Pitrou" wrote: > > On Tue, 13 Mar 2012 14:16:40 -0700 > Guido van Rossum wrote: > > > On Tue, Mar 13, 2012 at 12:49 PM, Terry Reedy wrote: > > > Authors of separately maintained packages are, from our viewpoint, as > > > eligible to help with tracker issues as anyone else, even while they > > > continue work on their external package. Some of them are more likely than > > > most contributors to have the knowledge needed for some particular issues. > > > > This is a good idea. I was chatting w. Senthil this morning about > > adding improvements to urllib/request.py based upon ideas from > > urllib3, requests, httplib2 (?), and we came to the conclusion that it > > might be a good idea to let those packages' authors review the > > proposed stdlib improvements. > > We don't have any provisions against reviewal by third-party > developers already. I think the main problem (for us, of course) is that > these people generally aren't interested enough to really dive in > stdlib patches and proposals. > > For example, for the ssl module, I have sometimes tried to involve > authors of third-party packages such as pyOpenSSL (or, IIRC, M2Crypto), > but I got very little or no reviewing. Rather than indicating apathy on the party of third party developers, this might be a sign that core Python is unapproachable or not worth the effort. For instance I have several one line patches languishing, I can't imagine how disappointing it would be to have significantly larger patches ignored, but it happens. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From eliben at gmail.com Wed Mar 14 05:03:10 2012 From: eliben at gmail.com (Eli Bendersky) Date: Wed, 14 Mar 2012 06:03:10 +0200 Subject: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives In-Reply-To: References: <20120313222103.163877e6@pitrou.net> Message-ID: > Rather than indicating apathy on the party of third party developers, this > might be a sign that core Python is unapproachable or not worth the effort. > > For instance I have several one line patches languishing, I can't imagine > how disappointing it would be to have significantly larger patches ignored, > but it happens. > A one-line patch for a complex module or piece of code may require much more than looking at that single line to really review. I hope you understand that. That said, if you find any issues in the bug tracker that in your opinion need only a few minutes of attention from a core developer, feel free to send a note to the mentorship mailing list. People sometimes come there asking for precisely this thing (help reviewing a simple patch they submitted), and usually get help quickly if their request is justified. Eli From kristjan at ccpgames.com Wed Mar 14 05:26:16 2012 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Wed, 14 Mar 2012 04:26:16 +0000 Subject: [Python-Dev] SocketServer issues Message-ID: Hi there. I want to mention some issues I've had with the socketserver module, and discuss if there's a way to make it nicer. So, for a long time we were able to create magic stackless mixin classes for it, like ThreadingMixIn, and assuming we had the appropriate socket replacement library, be able to use it nicely using tasklets. Then, at some point, the run_forever loop was changed to support timeout through the use of select.select() before every socket.accept() call. 
This was very awkward because the whole concept of select() really goes contrary to the approach of using microthreads, non-blocking IO and all that. The only reason for this select call was to support a timeout for the accept. And even for vanilla applications, it necessitates an extra kernel call for every accept, just for the timeout. The way around this for me has been to make local modifications to the SocketServer and just get rid of the select. So, my first question is: Why not simply rely on the already built-in timeout support in the socket module? Setting the correct timeout value on the accepting socket will achieve the same thing. Of course, one then has to reset the timeout value on the accepted socket, but this is minor. Second: Of late the SocketServer has grown additional features and attributes. In particular, it now has two event objects, __shutdown_request and __is_shut_down. Notice the double underscores. They make it impossible to subclass the SocketServer class to provide a different implementation of run_forever(). Is there any good reason why these attributes have been made "private" like this? Having just seen Raymond's talk on how to subclass right, this looks like the wrong way to use the double leading underscores. So, two things really: The use of select.select in SocketServer makes it necessary to subclass it to write a new version of run_forever() for those that wish to use a non-blocking IO library instead of socket. And the presence of these private attributes makes it (theoretically) impossible to specialize run_forever in a mix-in class. Any thoughts? Is anyone interested in seeing how the timeouts can be done without using select.select()? And what do you think about removing the double underscores from there and thus making serve_forever overridable? Kristján -------------- next part -------------- An HTML attachment was scrubbed...
URL: From tismer at stackless.com Wed Mar 14 06:24:20 2012 From: tismer at stackless.com (Christian Tismer) Date: Tue, 13 Mar 2012 22:24:20 -0700 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: References: <4F5FEF1B.8030306@stackless.com> Message-ID: <4F602B84.2020208@stackless.com> The performance counter is a thing that typically gets intercepted by the VM infrastructure and no longer works as a reliable timing source. In PyPy there are tests which check certain assumptions about how much the performance counter must advance at least between a few opcodes, and that does not work in a VM. cheers - Chris On 3/13/12 6:11 PM, Kristján Valur Jónsson wrote: > Interesting thought. > Although I don't see how that could fail on Windows: if the QPC function is really just talking to a clock chip, surely that hasn't been virtualized. > Is there an actual example of Windows hardware where this API fails (virtual or not)? Perhaps there is no real need to have a fallback mechanism, and it would even be best to write such a mechanism inside the function itself, and just return GetSystemTimeAsFileTime() instead. > > K > > -----Original Message----- > From: Christian Tismer [mailto:tismer at stackless.com] > Sent: 13. mars 2012 18:07 > To: Kristján Valur Jónsson > Cc: Guido van Rossum; Victor Stinner; Python Dev > Subject: Re: [Python-Dev] Drop the new time.wallclock() function? > > Btw., have you considered virtual machines? > I happen to run Windows in Parallels or VirtualBox quite often. > There the "wall clock" stuff notoriously does not work. > > It would be good (but difficult?) if the supposed-to-be-accurate clock could test itself, if it works at all, and replace itself with a fallback. > > In my case, this causes quite a few PyPy tests to fail ;-) > > ciao -- Chris > -- Christian Tismer :^) tismerysoft GmbH : Have a break! Take a ride on Python's Karl-Liebknecht-Str.
121 : *Starship* http://starship.python.net/ 14482 Potsdam : PGP key -> http://pgp.uni-mainz.de work +49 173 24 18 776 mobile +49 173 24 18 776 fax n.a. PGP 0x57F3BF04 9064 F4E1 D754 C2FF 1619 305B C09C 5A3B 57F3 BF04 whom do you want to sponsor today? http://www.stackless.com/ From rdmurray at bitdance.com Wed Mar 14 06:29:15 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Wed, 14 Mar 2012 01:29:15 -0400 Subject: [Python-Dev] getting patches committed (was Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives) In-Reply-To: References: <20120313222103.163877e6@pitrou.net> Message-ID: <20120314052915.8BA282500ED@webabinitio.net> On Wed, 14 Mar 2012 06:03:10 +0200, Eli Bendersky wrote: > > Rather than indicating apathy on the party of third party developers, this > > might be a sign that core Python is unapproachable or not worth the effort. > > > > For instance I have several one line patches languishing, I can't imagine > > how disappointing it would be to have significantly larger patches ignored, > > but it happens. > > A one-line patch for a complex module or piece of code may require > much more than looking at that single line to really review. I hope > you understand that. In addition, sometimes patches just get forgotten. It's not like there are enough core devs with enough time that we are actually doing searches for open issues with patches...generally we have enough to do in our interest areas, and so stay there unless an issue is brought to our attention. So to bring an issue to our attention, you can first ping the issue with a status query, or get someone (anyone, pretty much) to do a review and post it to the issue. You can also look to see if you can figure out, either from the experts list in the devguide, or hg history, or tracker activity, who might be a reasonable person to look at the issue, and add them to the nosy list. 
Either of these actions will often "wake up" an issue, and if it is not one of the complex (or controversial) ones Eli alluded to, it will often then get committed. If that fails, and the patch has been on the tracker for a while, it is perfectly reasonable to ask about it here. What we really need most are *reviews*. And we need these for two reasons. First, there aren't enough active committers to keep up with the patch inflow. Reviews really help, because they usually simplify the commit review process for the committer, saving time, and making it more appealing to work on the issue. Second, it is as much (or more) from quality reviews as quality patches that we recognize people who it would be beneficial to invite to be committers. And every new committer increases the chances that new patches will actually get committed.... --David From jyasskin at gmail.com Wed Mar 14 06:42:24 2012 From: jyasskin at gmail.com (Jeffrey Yasskin) Date: Tue, 13 Mar 2012 22:42:24 -0700 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: <6EAD1A31-F759-4AF7-A636-68CB21ADDB18@voidspace.org.uk> References: <6EAD1A31-F759-4AF7-A636-68CB21ADDB18@voidspace.org.uk> Message-ID: On Tue, Mar 13, 2012 at 5:03 PM, Michael Foord wrote: > > On 13 Mar 2012, at 16:57, Victor Stinner wrote: > >> Hi, >> >> I added two functions to the time module in Python 3.3: wallclock() >> and monotonic(). I'm unable to explain the difference between these >> two functions, even if I wrote them :-) wallclock() is suppose to be >> more accurate than time() but has an unspecified starting point. >> monotonic() is similar except that it is monotonic: it cannot go >> backward. monotonic() may not be available or fail whereas wallclock() >> is available/work, but I think that the two functions are redundant. >> >> I prefer to keep only monotonic() because it is not affected by system >> clock update and should help to fix issues on NTP update in functions >> implementing a timeout. 
>> >> What do you think? >> > > > I am in the middle of adding a feature to unittest that involves timing of individual tests. I want the highest resolution cross platform measure of wallclock time - and time.wallclock() looked ideal. If monotonic may not exist or can fail why would that be better? > Isn't the highest resolution cross platform measure of "wallclock" time spelled "time.clock()"? Its docs say "this is the function to use for benchmarking Python or timing algorithms", and it would be a shame to add and teach a new function rather than improving clock()'s definition. Jeffrey From anacrolix at gmail.com Wed Mar 14 06:49:18 2012 From: anacrolix at gmail.com (Matt Joiner) Date: Wed, 14 Mar 2012 13:49:18 +0800 Subject: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives In-Reply-To: References: <20120313222103.163877e6@pitrou.net> Message-ID: Thanks for the suggestions. On Mar 14, 2012 12:03 PM, "Eli Bendersky" wrote: > > Rather than indicating apathy on the party of third party developers, > this > > might be a sign that core Python is unapproachable or not worth the > effort. > > > > For instance I have several one line patches languishing, I can't imagine > > how disappointing it would be to have significantly larger patches > ignored, > > but it happens. > > > > A one-line patch for a complex module or piece of code may require > much more than looking at that single line to really review. I hope > you understand that. > > That said, if you find any issues in the bug tracker that in your > opinion need only a few minutes of attention from a core developer, > feel free to send a note to the mentorship mailing list. People > sometimes come there asking for precisely this thing (help reviewing a > simple patch they submitted), and usually get help quickly if their > request is justified. > > Eli > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jyasskin at gmail.com Wed Mar 14 07:16:28 2012 From: jyasskin at gmail.com (Jeffrey Yasskin) Date: Tue, 13 Mar 2012 23:16:28 -0700 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: References: Message-ID: On Tue, Mar 13, 2012 at 6:10 PM, Nadeem Vawda wrote: > On Wed, Mar 14, 2012 at 3:03 AM, Victor Stinner > wrote: >> I suppose that most libraries and programs will have to implement a >> similar fallback. >> >> We may merge both functions with a flag to be able to disable the >> fallback. Example: >> >> - time.realtime(): best-effort monotonic, with a fallback >> - time.realtime(monotonic=True): monotonic, may raise OSError or >> NotImplementedError > > This was my suggestion - I think it's useful to have the fallback > available (since most users will want it), but at the same time we > should also cater to users who need a clock that is *guaranteed* to > be monotonic. > > As an aside, I think "monotonic" is a better name than "realtime"; > it conveys the function's purpose more clearly. Then we could call > the flag "strict". While you're bikeshedding: Some of the drafts of the new C++ standard had a monotonic_clock, which was guaranteed to only go forwards, but which could be affected by system clock updates that went forwards. Because of problems in defining timeouts using an adjustable clock, C++11 instead defines a "steady_clock", which ticks as steadily as the machine/OS/library can ensure, and is definitely not affected by any time adjustments: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2010/n3191.htm. I realize that Unix's clock_gettime(CLOCK_MONOTONIC) already includes the steadiness criterion, but the word itself doesn't actually include the meaning.
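The fallback idiom the thread keeps returning to can be written once as a small helper. This is only a sketch: the name `steady_time` is invented here, and it mirrors the guard discussed above — on a 3.3-era interpreter `time.monotonic()` may be missing entirely (AttributeError) or fail at its first call (OSError), in which case the wall clock is used instead.

```python
import time

# Prefer a clock that cannot jump backwards; fall back to the wall
# clock if the platform provides no monotonic source.  The probe call
# is made once, at import time, as suggested in the thread.
try:
    time.monotonic()
except (AttributeError, OSError):
    steady_time = time.time
else:
    steady_time = time.monotonic

def wait_timed_out(start, timeout):
    """True once `timeout` seconds have elapsed since `start`."""
    return steady_time() - start > timeout

start = steady_time()
# ... do some work, then check the deadline:
wait_timed_out(start, 60.0)
```

With this in a library, timeout loops in modules like queue and threading would call `steady_time()` instead of repeating the try/except block everywhere.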
From skippy.hammond at gmail.com Wed Mar 14 07:32:22 2012 From: skippy.hammond at gmail.com (Mark Hammond) Date: Wed, 14 Mar 2012 17:32:22 +1100 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: References: Message-ID: <4F603B76.4050004@gmail.com> On 14/03/2012 6:43 AM, VanL wrote: > Following up on conversations at PyCon, I want to bring up one of my > personal hobby horses for change in 3.3: Fix install layout on Windows, > with a side order of making the PATH work better. > > Short version: > > 1) The layout for the python root directory for all platforms should be > as follows: > > stdlib = {base/userbase}/lib/python{py_version_short} > platstdlib = {base/userbase}/lib/python{py_version_short} > purelib = {base/userbase}/lib/python{py_version_short}/site-packages > platlib = {base/userbase}/lib/python{py_version_short}/site-packages > include = {base/userbase}/include/python{py_version_short} As per comments later in the thread, I'm -1 on including "python{py_version_short}" in the lib directories for a number of reasons; one further reason not outlined is that it would potentially make running Python directly from a built tree difficult. For the same reason, I'm also -1 on having that in the include dir. > scripts = {base/userbase}/bin We should note that this may cause pain for a number of projects - I've seen quite a few projects that already assume "Scripts" on Windows - eg, virtualenv and setuptools IIRC - and also assume the executable is where it currently lives - one example off the top of my head is the mozilla "jetpack" project - see: https://github.com/mozilla/addon-sdk/blob/master/bin/activate.bat#L117 This code (and any other code looking in "Scripts" on Windows) will fail and need to be updated with this change. Further, assuming such projects want to target multiple Python versions, it will need to keep the old code and check the new location. 
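Code that wants to survive either layout could, as a sketch, ask the running interpreter where its scripts directory is via the stdlib `sysconfig` module (available since Python 2.7/3.2) rather than hard-coding "Scripts" or "bin":

```python
import os
import sysconfig

# Ask the interpreter for its own scripts directory instead of
# hard-coding "Scripts" (current Windows layout) or "bin" (the
# proposed layout).
scripts_dir = sysconfig.get_path("scripts")

def script_path(name):
    """Full path of an installed script, wherever this Python keeps them."""
    return os.path.join(scripts_dir, name)
```

Of course, this only helps third-party code running under the Python in question; external tools like the launcher would still need version-specific knowledge.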
Another bit of code which would be impacted is the PEP397 launcher; it too would have to grow version specific logic to locate the executable. So while I'm not (yet) -1 on the general idea, I'm close. I guess I don't understand how the benefits this offers outweigh the costs to 3rd parties. Given the work on making Python more virtualenv friendly, can't we just wear the costs of the existing scheme in the stdlib and avoid breaking the code already out there? IOW, who exactly will benefit from this, and how does the cost of them supporting the existing scheme compare to the cost of the breakage to multiple 3rd parties? Mark From kristjan at ccpgames.com Wed Mar 14 07:32:29 2012 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Wed, 14 Mar 2012 06:32:29 +0000 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: References: <6EAD1A31-F759-4AF7-A636-68CB21ADDB18@voidspace.org.uk> Message-ID: To quote: "On Unix, return the current processor time as a floating point number expressed in seconds. The precision, and in fact the very definition of the meaning of "processor time", depends on that of the C function of the same name," The problem is that it is defined to return "processor time." This is historical baggage that comes from just writing a python wrapper around the unix "clock" function. Of course, "processor time" is quite useless when one is trying to write timeout algorithms or other such things that need to time out in real time, not just cpu cycles. K -----Original Message----- From: python-dev-bounces+kristjan=ccpgames.com at python.org [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On Behalf Of Jeffrey Yasskin Sent: 13. mars 2012 22:42 To: Michael Foord Cc: Python Dev Subject: Re: [Python-Dev] Drop the new time.wallclock() function? Isn't the highest resolution cross platform measure of "wallclock" time spelled "time.clock()"? 
Its docs say "this is the function to use for benchmarking Python or timing algorithms", and it would be a shame to add and teach a new function rather than improving clock()'s definition. Jeffrey _______________________________________________ Python-Dev mailing list Python-Dev at python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/kristjan%40ccpgames.com From solipsis at pitrou.net Wed Mar 14 10:02:26 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 14 Mar 2012 10:02:26 +0100 Subject: [Python-Dev] SocketServer issues References: Message-ID: <20120314100226.04627743@pitrou.net> On Wed, 14 Mar 2012 04:26:16 +0000 Kristján Valur Jónsson wrote: > Hi there. > I want to mention some issues I've had with the socketserver module, and discuss if there's a way to make it nicer. > So, for a long time we were able to create magic stackless mixin classes for > it, like ThreadingMixIn, and assuming we had the appropriate socket > replacement library, be able to use it nicely using tasklets. I don't really think the ability to "create magic stackless mixin classes" should be a driving principle for the stdlib. I would suggest using a proper non-blocking framework such as Twisted. > So, my first question is: Why not simply rely on the already built-in timeout > support in the socket module? In case you didn't notice, the built-in timeout support *also* uses select(). Regards Antoine. From solipsis at pitrou.net Wed Mar 14 10:07:14 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 14 Mar 2012 10:07:14 +0100 Subject: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives References: <4F5FDE77.7030601@pearwood.info> Message-ID: <20120314100714.31f0a289@pitrou.net> On Wed, 14 Mar 2012 10:55:35 +1100 Steven D'Aprano wrote: > > What problem are we actually trying to solve here?
Do we think that there are > users who really have no clue where to find 3rd party software AND don't know > how to use Google, BUT read the Python docs? I find it difficult to believe > that there are people who both read the docs and are so clueless that they > need to be told that there are alternatives available and which they should be > using. I find it quite easy to believe myself. Many people will learn some Python by reading the docs, without knowing the rest of the ecosystem. So, yes, publicizing the widely accepted alternatives (such as Twisted for asyncore) *is* helpful. (that doesn't mean any shiny new gadget must be advocated, though; third-party libraries should be mature enough before we start mentioning them) > Should we recommend wxPython over Pyjamas or PyGUI or PyGtk? On what basis? You don't have to recommend anything. Just mention them. You know what, we *already* do that job: http://docs.python.org/dev/library/othergui.html#other-gui-packages Regards Antoine. From solipsis at pitrou.net Wed Mar 14 10:16:18 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 14 Mar 2012 10:16:18 +0100 Subject: [Python-Dev] Drop the new time.wallclock() function? References: Message-ID: <20120314101618.5cf1850f@pitrou.net> On Wed, 14 Mar 2012 02:03:42 +0100 Victor Stinner wrote: > > We may merge both functions with a flag to be able to disable the > fallback. Example: > > - time.realtime(): best-effort monotonic, with a fallback > - time.realtime(monotonic=True): monotonic, may raise OSError or > NotImplementedError That's a rather awful name. time.time() is *the* real time. time.monotonic(fallback=False) would be a better API. Regards Antoine. From stefan at bytereef.org Wed Mar 14 10:30:41 2012 From: stefan at bytereef.org (Stefan Krah) Date: Wed, 14 Mar 2012 10:30:41 +0100 Subject: [Python-Dev] Drop the new time.wallclock() function? 
In-Reply-To: <20120314101618.5cf1850f@pitrou.net> References: <20120314101618.5cf1850f@pitrou.net> Message-ID: <20120314093041.GA20214@sleipnir.bytereef.org> Antoine Pitrou wrote: > time.monotonic(fallback=False) would be a better API. +1 Stefan Krah From solipsis at pitrou.net Wed Mar 14 10:39:41 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 14 Mar 2012 10:39:41 +0100 Subject: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives In-Reply-To: References: <20120313222103.163877e6@pitrou.net> Message-ID: <20120314103941.7cd7c9e6@pitrou.net> On Wed, 14 Mar 2012 11:31:34 +0800 Matt Joiner wrote: > > Rather than indicating apathy on the party of third party developers, this > might be a sign that core Python is unapproachable or not worth the effort. > > For instance I have several one line patches languishing, I can't imagine > how disappointing it would be to have significantly larger patches ignored, > but it happens. Can you give a pointer to these one-liners? Once a patch gets a month old or older, it tends to disappear from everyone's radar unless you somehow "ping" on the tracker, or post a message to the mailing-list. (of course, you shouldn't spam the list with open issues either) Thanks Antoine. From stefan at bytereef.org Wed Mar 14 10:58:10 2012 From: stefan at bytereef.org (Stefan Krah) Date: Wed, 14 Mar 2012 10:58:10 +0100 Subject: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives In-Reply-To: <20120314103941.7cd7c9e6@pitrou.net> References: <20120313222103.163877e6@pitrou.net> <20120314103941.7cd7c9e6@pitrou.net> Message-ID: <20120314095810.GA20325@sleipnir.bytereef.org> Antoine Pitrou wrote: > > For instance I have several one line patches languishing, I can't imagine > > how disappointing it would be to have significantly larger patches ignored, > > but it happens. > > Can you give a pointer to these one-liners? 
Almost a one-liner, but vast knowledge required (how do you prove that using (freefunc) is safe if it's the first usage in the tree?). http://bugs.python.org/file21610/atexit-leak.patch I think there are many issues like that one where the implications of a short patch can only be assessed by a small number of committers. Stefan Krah From solipsis at pitrou.net Wed Mar 14 11:00:44 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 14 Mar 2012 11:00:44 +0100 Subject: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives References: <20120313222103.163877e6@pitrou.net> <20120314103941.7cd7c9e6@pitrou.net> <20120314095810.GA20325@sleipnir.bytereef.org> Message-ID: <20120314110044.08971b5d@pitrou.net> On Wed, 14 Mar 2012 10:58:10 +0100 Stefan Krah wrote: > Antoine Pitrou wrote: > > > For instance I have several one line patches languishing, I can't imagine > > > how disappointing it would be to have significantly larger patches ignored, > > > but it happens. > > > > Can you give a pointer to these one-liners? > > Almost a one-liner, but vast knowledge required (how do you prove that > using (freefunc) is safe if it's the first usage in the tree?). > > http://bugs.python.org/file21610/atexit-leak.patch Well, can you please post a URL to the issue itself? Thanks Antoine.
From mark at hotpy.org Wed Mar 14 11:05:11 2012 From: mark at hotpy.org (Mark Shannon) Date: Wed, 14 Mar 2012 10:05:11 +0000 Subject: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives In-Reply-To: <20120314095810.GA20325@sleipnir.bytereef.org> References: <20120313222103.163877e6@pitrou.net> <20120314103941.7cd7c9e6@pitrou.net> <20120314095810.GA20325@sleipnir.bytereef.org> Message-ID: <4F606D57.80304@hotpy.org> Stefan Krah wrote: > Antoine Pitrou wrote: >>> For instance I have several one line patches languishing, I can't imagine >>> how disappointing it would be to have significantly larger patches ignored, >>> but it happens. >> Can you give a pointer to these one-liners? > > Almost a one-liner, but vast knowledge required (how do you prove that > using (freefunc) is safe if it's the first usage in the tree?). > > http://bugs.python.org/file21610/atexit-leak.patch > But how do you find issues? I want to do some reviews, but I don't want to wade through issues on components I know little or nothing about in order to find the ones I can review. There does not seem to be a way to filter search results in the tracker. Cheers, Mark From stefan at bytereef.org Wed Mar 14 11:20:53 2012 From: stefan at bytereef.org (Stefan Krah) Date: Wed, 14 Mar 2012 11:20:53 +0100 Subject: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives In-Reply-To: <20120314110044.08971b5d@pitrou.net> References: <20120313222103.163877e6@pitrou.net> <20120314103941.7cd7c9e6@pitrou.net> <20120314095810.GA20325@sleipnir.bytereef.org> <20120314110044.08971b5d@pitrou.net> Message-ID: <20120314102053.GA20430@sleipnir.bytereef.org> Antoine Pitrou wrote: > > Almost a one-liner, but vast knowledge required (how do you prove that > > using (freefunc) is safe if it's the first usage in the tree?). > > > > http://bugs.python.org/file21610/atexit-leak.patch > > Well, can you please post a URL to the issue itself? 
That sounds like an excellent plan. :) http://bugs.python.org/issue11826 Stefan Krah From facundobatista at gmail.com Wed Mar 14 12:21:28 2012 From: facundobatista at gmail.com (Facundo Batista) Date: Wed, 14 Mar 2012 08:21:28 -0300 Subject: [Python-Dev] PEP 8 misnaming Message-ID: Hello! In the "Maximum Line Length" section of PEP 8 it says: "The preferred place to break around a binary operator is *after* the operator, not before it." And after that is an example (trimmed here): if (width == 0 and height == 0 and color == 'red' and emphasis == 'strong' or highlight > 100): raise ValueError("sorry, you lose") In the example the line is broken after the 'and' or 'or' *keywords*, not after the '==' *operator* (which is the nice way of doing it). Maybe the sentence above is misleading? Thanks! -- . Facundo Blog: http://www.taniquetil.com.ar/plog/ PyAr: http://www.python.org/ar/ From geoffspear at gmail.com Wed Mar 14 12:30:31 2012 From: geoffspear at gmail.com (Geoffrey Spear) Date: Wed, 14 Mar 2012 07:30:31 -0400 Subject: [Python-Dev] PEP 8 misnaming In-Reply-To: References: Message-ID: On Wed, Mar 14, 2012 at 7:21 AM, Facundo Batista wrote: > Hello! > > In the "Maximum Line Length" section of PEP 8 it says: > > "The preferred place to break around a binary operator is *after* > the operator, not before it." > > And after that is an example (trimmed here): > > if (width == 0 and height == 0 and > color == 'red' and emphasis == 'strong' or > highlight > 100): > raise ValueError("sorry, you lose") > > In the example the line is broken after the 'and' or 'or' *keywords*, > not after the '==' *operator* (which is the nice way of doing it). > > Maybe the sentence above is misleading? 'and' and 'or' are both binary logical operators. The fact that they are keywords is irrelevant; the sentence isn't misleading.
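To make the point in this sub-thread concrete, here is the PEP 8 example quoted above as runnable code, with the breaks placed after the low-priority binary operators ('and'/'or'); the variable values are invented here purely so the condition fires.

```python
# Values invented for illustration, chosen so the condition below is true.
width, height, color, emphasis, highlight = 0, 0, 'red', 'strong', 50

# PEP 8's style: break *after* the low-priority binary operators
# ('and'/'or'), so each continuation line starts with a complete clause.
if (width == 0 and height == 0 and
        color == 'red' and emphasis == 'strong' or
        highlight > 100):
    print("sorry, you lose")
```

Breaking after '==' instead would strand a bare operand such as `0 and height` at the start of the next line, which is the point Ben Finney makes later in the thread about breaking at the lowest-priority operator.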
From victor.stinner at gmail.com Wed Mar 14 13:27:19 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Wed, 14 Mar 2012 13:27:19 +0100 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: <20120314101618.5cf1850f@pitrou.net> References: <20120314101618.5cf1850f@pitrou.net> Message-ID: 2012/3/14 Antoine Pitrou : > On Wed, 14 Mar 2012 02:03:42 +0100 > Victor Stinner wrote: >> >> We may merge both functions with a flag to be able to disable the >> fallback. Example: >> >> - time.realtime(): best-effort monotonic, with a fallback >> - time.realtime(monotonic=True): monotonic, may raise OSError or >> NotImplementedError > > That's a rather awful name. time.time() is *the* real time. > > time.monotonic(fallback=False) would be a better API. I would prefer to enable the fallback by default with a warning in the doc, just because it is more convenient and it is what users want even if they don't know that they need a fallback :-) Enabling the fallback by default allows writing such simple code: try: from time import monotonic as get_time except ImportError: # Python < 3.3 from time import time as get_time Use time.monotonic(strict=True) if you need a truly monotonic clock. monotonic() may not be the best name in this case. Jeffrey Yasskin proposed time.steady_clock(), so time.steady_clock(monotonic=False)? Victor From solipsis at pitrou.net Wed Mar 14 13:27:47 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 14 Mar 2012 13:27:47 +0100 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: References: <20120314101618.5cf1850f@pitrou.net> Message-ID: <20120314132747.6c857a53@pitrou.net> On Wed, 14 Mar 2012 13:27:19 +0100 Victor Stinner wrote: > > monotonic() may not be the best name in this case. Jeffrey Yasskin > proposed time.steady_clock(), so time.steady_clock(monotonic=False)? I don't know what "steady" is supposed to mean here, so perhaps the best solution is to improve the doc?
Also, "monotonic=False" implies that it won't be monotonic, which is false. Regards Antoine. From ben+python at benfinney.id.au Wed Mar 14 14:03:10 2012 From: ben+python at benfinney.id.au (Ben Finney) Date: Thu, 15 Mar 2012 00:03:10 +1100 Subject: [Python-Dev] PEP 8 misnaming References: Message-ID: <877gyngws1.fsf@benfinney.id.au> Facundo Batista writes: > if (width == 0 and height == 0 and > color == 'red' and emphasis == 'strong' or > highlight > 100): > raise ValueError("sorry, you lose") > > In the example the line is broken after the 'and' or 'or' *keywords*, 'and' and 'or' are binary operators (that also happen to be keywords). The description is accurate and IMO not misleading. > not after the '==' *operator* (which is the nice way of doing it). -1. The lower-priority binding operator is the better place to break the line. The binary logical operators bind at lower priority than the equality operator. -- \ "If you do not trust the source do not use this program." | `\ --Microsoft Vista security dialogue | _o__) | Ben Finney From brian at python.org Wed Mar 14 15:12:32 2012 From: brian at python.org (Brian Curtin) Date: Wed, 14 Mar 2012 09:12:32 -0500 Subject: [Python-Dev] 2012 Language Summit Report Message-ID: As with last year, I've put together a summary of the Python Language Summit which took place last week at PyCon 2012. This was compiled from my notes as well as those of Eric Snow and Senthil Kumaran, and I think we got decent coverage of what was said throughout the day. http://blog.python.org/2012/03/2012-language-summit-report.html If you have questions or comments about discussions which occurred there, please create a new thread for your topic. Feel free to contact me directly if I've left anything out or misprinted anything. From jimjjewett at gmail.com Wed Mar 14 15:53:48 2012 From: jimjjewett at gmail.com (Jim J.
Jewett) Date: Wed, 14 Mar 2012 07:53:48 -0700 (PDT) Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: Message-ID: <4f60b0fc.2ab2320a.24c8.ffffa0c3@mx.google.com> In view-source:http://mail.python.org/pipermail/python-dev/2012-March/117586.html van.lindberg at gmail.com posted: >>> 1) The layout for the python root directory for all platforms should be >>> as follows: >>> stdlib = {base/userbase}/lib/python{py_version_short} >>> platstdlib = {base/userbase}/lib/python{py_version_short} >>> purelib = {base/userbase}/lib/python{py_version_short}/site-packages >>> platlib = {base/userbase}/lib/python{py_version_short}/site-packages >>> include = {base/userbase}/include/python{py_version_short} >>> scripts = {base/userbase}/bin >>> data = {base/userbase} Why? Pure python vs compiled C doesn't need to be separated at the directory level, except for cleanliness. Some (generally unix) systems prefer to split the libraries into several additional pieces depending on CPU architecture. The structure listed above doesn't have a location for docs. Some packages (such as tcl) may be better off in their own area. What is "data"? Is this an extra split compared to today, or does it refer to things like LICENSE.txt, README.txt, and NEWS.txt? And even once I figure out where files have moved, and assume that the split is perfect -- what does this buy me over the current situation? I was under the impression that programs like distutils already handled finding the appropriate directories for a program; if you're rewriting that logic, you're just asking for bugs on a strange platform that you don't use. If you're looking for things interactively, then platform conventions are probably more important than consistency across platforms. If you disagree, you are welcome to reorganize your personal linux installation so that it matches windows, and see whether it causes you any problems. > ... We *already* have this. 
The only difference in this proposal is > that we go from py_version_nodot to py_version_short, i.e. from > c:\python33\lib\python33 > to > c:\python33\lib\python3.3 I have not seen that redundancy before on windows. I'm pretty sure that it is a relic of your Linux provider wanting to support multiple python versions using shared filesystems. The Windows standard is to use a local disk, and to bundle it all up into its own directory, similar to the way that java apps sometimes ship with their own JVM. Also note that using the dot in a directory name is incautious. I haven't personally had trouble in several years, but doing so is odd enough that some should be expected. Python already causes some grief by not installing in "Program Files", but that is at least justified by the "spaces in filenames" problem; what is the advantage of 3.3? I'm using windows, and I just followed the defaults at installation. It is possible that the installer continued to do something based on an earlier installation, but I don't think this machine has ever had a customized installation of any python version. C:\python32\* Everything is under here; I assume {base/userbase} would be set to C:\python32 As is customary for windows, the base directory contains the license/readme/news and all executables that the user is expected to use directly. (python.exe, pythonw.exe. It also contains w9xpopen.exe that users do not use, but that too is fairly common.) There is no data directory. Subdirectories are: C:\python32\DLLs In addition to regular DLL files, it contains .pyd files and icons. It looks like modules from the stdlib that happen to be written in C. Most users will never bother to look here. C:\python32\Doc A .chm file; full html would be fine too, but removing it would be a bad idea. C:\python32\include These are the header files, though most users will never have any use for them, as there isn't generally a compiler.
C:\python32\Lib The standard library -- or at least the portion implemented in python. Note that site-packages is a subdirectory here. It doesn't happen to have an __init__.py, but to an ordinary user it looks just like any other stdlib package, such as xml or multiprocessing. I personally happen to keep things in subdirectories of site-packages, but I can't say what is standard. Moving site-packages out of the Lib directory might make sense, but probably isn't worth the backward compatibility hit. C:\python32\libs .lib files. I'm not entirely sure what these (as opposed to the DLLs) are for; lib files aren't that common on windows. My machine does not appear to have any that aren't associated with cross-platform tools or unix emulation. C:\python32\tcl Note that this is in addition to associated files under DLLs and libs. I would prefer to see them in one place, but moving it in with non-tcl stuff would not be an improvement. Most users will never look (or care); those that do usually appreciate knowing that, for example, the dde subdirectory is for tcl. C:\python32\Tools This has three subdirectories (i18n, pynche, Scripts). Moving the .py files in with the binary just because you could execute them using file associations would be a step backwards; you can do the same regardless of where they are. -jJ -- If there are still threading problems with my replies, please email me with details, so that I can try to resolve them. -jJ From van.lindberg at gmail.com Wed Mar 14 16:22:16 2012 From: van.lindberg at gmail.com (VanL) Date: Wed, 14 Mar 2012 10:22:16 -0500 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: References: <4F5FF662.4070201@v.loewis.de> Message-ID: On 3/13/2012 9:58 PM, Terry Reedy wrote: >> Given that we already repeat it, isn't it better to be consistent? > > But there is no repetition currently on Windows installations. 
> I thought you were just proposing to switch lib (lower-cased, and scripts > renamed as bin, and pythonxx). So I do not think I yet understand what > the proposal is and how it would be different from what I have now. Aaah, I was looking at my local installations, which happen to be "nt-user". Looking at the system installation ("nt") I see that there is no repetition. I am fine with keeping the distinction between base installs (no py_version) and user installs (including a py_version). I would just suggest that when you have a py_version, it be the same py_version (not dots sometimes, nodot other times). It also begs the question as to whether the py_version is *ever* needed. Thanks, Van From guido at python.org Wed Mar 14 16:27:08 2012 From: guido at python.org (Guido van Rossum) Date: Wed, 14 Mar 2012 08:27:08 -0700 Subject: [Python-Dev] SocketServer issues In-Reply-To: <20120314100226.04627743@pitrou.net> References: <20120314100226.04627743@pitrou.net> Message-ID: Hopefully it doesn't use select if no timeout is set... --Guido van Rossum (sent from Android phone) On Mar 14, 2012 2:08 AM, "Antoine Pitrou" wrote: > On Wed, 14 Mar 2012 04:26:16 +0000 > Kristján Valur Jónsson wrote: > > Hi there. > > I want to mention some issues I've had with the socketserver module, and > discuss if there's a way to make it nicer. > > So, for a long time we were able to create magic stackless mixin classes > for > > it, like ThreadingMixIn, and assuming we had the appropriate socket > > replacement library, be able to use it nicely using tasklets. > > I don't really think the ability to "create magic stackless mixin > classes" should be a driving principle for the stdlib. > I would suggest using a proper non-blocking framework such as Twisted. > > > So, my first question is: Why not simply rely on the already built-in > timeout > > support in the socket module? > > In case you didn't notice, the built-in timeout support *also* uses > select(). > > Regards > > Antoine.
From scott+python-dev at scottdial.com Wed Mar 14 16:09:48 2012 From: scott+python-dev at scottdial.com (Scott Dial) Date: Wed, 14 Mar 2012 11:09:48 -0400 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: References: <4F5FF662.4070201@v.loewis.de> Message-ID: <4F60B4BC.9040702@scottdial.com> On 3/13/2012 9:57 PM, VanL wrote: > On Mar 13, 2012, at 8:37 PM, "Martin v. Löwis" wrote: >> The installation will end up in >> >> c:\python33\lib\python3.3 >> >> which has the software name and version twice in the path. >> >> Do we *really* need this? > > We *already* have this. The only difference in this proposal is that we go from py_version_nodot to py_version_short, i.e. from > > c:\python33\lib\python33 > > to > > c:\python33\lib\python3.3 > > Given that we already repeat it, isn't it better to be consistent? > Is it?
I think you are confusing two different configuration sections in sysconfig.cfg: [nt] stdlib = {base}/Lib platstdlib = {base}/Lib purelib = {base}/Lib/site-packages platlib = {base}/Lib/site-packages include = {base}/Include platinclude = {base}/Include scripts = {base}/Scripts data = {base} [nt_user] stdlib = {userbase}/Python{py_version_nodot} platstdlib = {userbase}/Python{py_version_nodot} purelib = {userbase}/Python{py_version_nodot}/site-packages platlib = {userbase}/Python{py_version_nodot}/site-packages include = {userbase}/Python{py_version_nodot}/Include scripts = {userbase}/Scripts data = {userbase} -- Scott Dial scott at scottdial.com From van.lindberg at gmail.com Wed Mar 14 16:48:58 2012 From: van.lindberg at gmail.com (VanL) Date: Wed, 14 Mar 2012 10:48:58 -0500 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4f60b0fc.2ab2320a.24c8.ffffa0c3@mx.google.com> References: <4f60b0fc.2ab2320a.24c8.ffffa0c3@mx.google.com> Message-ID: On 3/14/2012 9:53 AM, Jim J. Jewett wrote: > In view-source:http://mail.python.org/pipermail/python-dev/2012-March/117586.html > van.lindberg at gmail.com posted: > >>>> 1) The layout for the python root directory for all platforms should be >>>> as follows: > >>>> stdlib = {base/userbase}/lib/python{py_version_short} >>>> platstdlib = {base/userbase}/lib/python{py_version_short} >>>> purelib = {base/userbase}/lib/python{py_version_short}/site-packages >>>> platlib = {base/userbase}/lib/python{py_version_short}/site-packages >>>> include = {base/userbase}/include/python{py_version_short} >>>> scripts = {base/userbase}/bin >>>> data = {base/userbase} > > Why? > > Pure python vs compiled C doesn't need to be separated at the directory > level, except for cleanliness. I am deliberately being cautious here. I actually agree with you. I am only suggesting we maintain all of these different distinctions because that is what we have already. 
You can see what we have currently at http://hg.python.org/distutils2/file/2cec52b682a9/distutils2/_backport/sysconfig.cfg I am *not* suggesting that docs, etc change at all - that is included in a different part of the configuration and is not modified by what I propose here (lines 1-26). As noted earlier in the thread, I also change my proposal to maintain the existing differences between system installs and user installs. Thus, the only place I am proposing changing are the values for the keys listed above. Specifically, this (lines 57-65 in the file above): [nt] stdlib = {base}/Lib platstdlib = {base}/Lib purelib = {base}/Lib/site-packages platlib = {base}/Lib/site-packages include = {base}/Include platinclude = {base}/Include scripts = {base}/Scripts data = {base} Would become this: [nt] stdlib = {base}/lib platstdlib = {base}/lib purelib = {base}/lib/site-packages platlib = {base}/lib/site-packages include = {base}/include platinclude = {base}/include scripts = {base}/bin data = {base} and this (lines 86-93): [nt_user] stdlib = {userbase}/Python{py_version_nodot} platstdlib = {userbase}/Python{py_version_nodot} purelib = {userbase}/Python{py_version_nodot}/site-packages platlib = {userbase}/Python{py_version_nodot}/site-packages include = {userbase}/Python{py_version_nodot}/Include scripts = {userbase}/Scripts data = {userbase} would become this: [nt_user] stdlib = {userbase}/python{py_version_short} platstdlib = {userbase}/python{py_version_short} purelib = {userbase}/python{py_version_nodot}/site-packages platlib = {userbase}/python{py_version_nodot}/site-packages include = {userbase}/python{py_version_nodot}/include scripts = {userbase}/bin data = {userbase} > ... if you're rewriting that logic, you're just asking for bugs on a > strange platform that you don't use. I am not rewriting the logic - the logic is driven by these configuration values. And this is a platform I use, and that is why this drives me crazy! 
> Subdirectories are: You forgot one: C:\python32\Scripts Would change to C:\python32\bin. The python binary and scripts meant to be run directly (easy_install, etc) would all go in this directory. > C:\python32\DLLs Would not change. > C:\python32\Doc Would not change. > C:\python32\include Would be specified as lower case only - but otherwise would not change. > C:\python32\Lib Would be specified as lower case only - but otherwise would not change. > C:\python32\libs Would not change. > C:\python32\tcl Would not change. > C:\python32\Tools This proposal does not change this, although I do think that this could be eliminated or made into "examples". Thanks, Van From van.lindberg at gmail.com Wed Mar 14 16:51:31 2012 From: van.lindberg at gmail.com (VanL) Date: Wed, 14 Mar 2012 10:51:31 -0500 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F60B4BC.9040702@scottdial.com> References: <4F5FF662.4070201@v.loewis.de> <4F60B4BC.9040702@scottdial.com> Message-ID: <4F60BE83.3050102@gmail.com> On 3/14/2012 10:09 AM, Scott Dial wrote: > I think you are confusing two different configuration sections in > sysconfig.cfg: > > [nt] > stdlib = {base}/Lib > platstdlib = {base}/Lib > purelib = {base}/Lib/site-packages > platlib = {base}/Lib/site-packages > include = {base}/Include > platinclude = {base}/Include > scripts = {base}/Scripts > data = {base} > > [nt_user] > stdlib = {userbase}/Python{py_version_nodot} > platstdlib = {userbase}/Python{py_version_nodot} > purelib = {userbase}/Python{py_version_nodot}/site-packages > platlib = {userbase}/Python{py_version_nodot}/site-packages > include = {userbase}/Python{py_version_nodot}/Include > scripts = {userbase}/Scripts > data = {userbase} I was lumping them together, yes, but now note that I modify the proposal to maintain this distinction.
These would change to: [nt] stdlib = {base}/lib platstdlib = {base}/lib purelib = {base}/lib/site-packages platlib = {base}/lib/site-packages include = {base}/include platinclude = {base}/include scripts = {base}/bin data = {base} [nt_user] stdlib = {userbase}/python{py_version_short} platstdlib = {userbase}/python{py_version_short} purelib = {userbase}/python{py_version_short}/site-packages platlib = {userbase}/python{py_version_short}/site-packages include = {userbase}/python{py_version_short}/include scripts = {userbase}/bin data = {userbase} From tjreedy at udel.edu Wed Mar 14 16:56:22 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 14 Mar 2012 11:56:22 -0400 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: References: <4F5FF662.4070201@v.loewis.de> Message-ID: On 3/14/2012 11:22 AM, VanL wrote: > On 3/13/2012 9:58 PM, Terry Reedy wrote: > >>> Given that we already repeat it, isn't it better to be consistent? >> >> But there is no repetition currently on Windows installations. >> I though you were just proposing to switch lib (lower-cased, and scripts >> renamed as bin, and pythonxx). So I do not think I yet understand what >> the proposal is and how it would be different from what I have now. > > Aaah, I was looking at my local installations, which happen to be > "nt-user". Looking at the system installation ("nt") I see that there is > no repetition. Are you talking about 'install for all users' versus 'install for this user only'? I have always done the former as I see no point to the latter on my machine, even if another family member has an account. > I am fine with keeping the distinction between > base installs (no py_version) I have no idea what this means. As far as I can remember, each installation of Python x.y (back to 1.3 for me, on DOS) has gone into a pythonxy (no dot) directory, with subdirectories much as Jim J. described. > and user installs (including a py_version). 
I would just > suggest that when you have a py_version, it be the same py_version (not > dots sometimes, nodot other times). > > It also begs the question as to whether the py_version is *ever* needed. Whenever multiple versions are installed, of course a version marker is needed. Even if not, it is helpful to be able to see what version is installed. But I probably am not understanding what you mean. -- Terry Jan Reedy From tjreedy at udel.edu Wed Mar 14 16:57:10 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 14 Mar 2012 11:57:10 -0400 Subject: [Python-Dev] 2012 Language Summit Report In-Reply-To: References: Message-ID: On 3/14/2012 10:12 AM, Brian Curtin wrote: > As with last year, I've put together a summary of the Python Language > Summit which took place last week at PyCon 2012. This was compiled > from my notes as well as those of Eric Snow and Senthil Kumaran, and I > think we got decent coverage of what was said throughout the day. > > http://blog.python.org/2012/03/2012-language-summit-report.html Nicely done. Thank you. -- Terry Jan Reedy From rdmurray at bitdance.com Wed Mar 14 17:00:07 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Wed, 14 Mar 2012 12:00:07 -0400 Subject: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives In-Reply-To: <4F606D57.80304@hotpy.org> References: <20120313222103.163877e6@pitrou.net> <20120314103941.7cd7c9e6@pitrou.net> <20120314095810.GA20325@sleipnir.bytereef.org> <4F606D57.80304@hotpy.org> Message-ID: <20120314160007.B786D250117@webabinitio.net> On Wed, 14 Mar 2012 10:05:11 -0000, Mark Shannon wrote: > But how do you find issues? > > I want to do some reviews, but I don't want to wade through issues on > components I know little or nothing about in order to find the ones I > can review. > > There does not seem to be a way to filter search results in the tracker. Is the advanced search ('search' link on left hand side) missing some filtering capabilities? 
--David From van.lindberg at gmail.com Wed Mar 14 17:03:53 2012 From: van.lindberg at gmail.com (VanL) Date: Wed, 14 Mar 2012 11:03:53 -0500 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F603B76.4050004@gmail.com> References: <4F603B76.4050004@gmail.com> Message-ID: <4F60C169.9030404@gmail.com> On 3/14/2012 1:32 AM, Mark Hammond wrote: > As per comments later in the thread, I'm -1 on including > "python{py_version_short}" in the lib directories for a number of > reasons; one further reason not outlined is that it would potentially > make running Python directly from a built tree difficult. For the same > reason, I'm also -1 on having that in the include dir. A built tree would look the same as always - the directories would only be moved (if at all) during installation. Thus, you will still be able to run python directly from a built installation. Also note that the py_version_short will not be in platform installs. >> scripts = {base/userbase}/bin > > We should note that this may cause pain for a number of projects - I've > seen quite a few projects that already assume "Scripts" on Windows - eg, > virtualenv and setuptools IIRC If you look at these projects, though, they *special case* Windows to account for the different layout. Removing this difference will allow these projects to remove their special-casing code. >- and also assume the executable is where > it currently lives - one example off the top of my head is the mozilla > "jetpack" project - see: > > https://github.com/mozilla/addon-sdk/blob/master/bin/activate.bat#L117 This code actually reinforces my point: First, this code would actually still work. The section ":FoundPython" sets the PATH to "%VIRTUAL_ENV%\bin;%PYTHONINSTALL%;%PATH%" (L80), which would still allow python.exe to be found and run. Second, look at that line again. Mozilla actually has edited this code so that the jetpack uses a cross-platform "bin" convention, just as I am proposing. 
Third, one element of this proposal is that there would be a key placed in the registry that points directly to the location of the python executable, making it trivial to locate programmatically on Windows. Thanks, Van From van.lindberg at gmail.com Wed Mar 14 17:10:05 2012 From: van.lindberg at gmail.com (VanL) Date: Wed, 14 Mar 2012 11:10:05 -0500 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: References: <4F5FF662.4070201@v.loewis.de> Message-ID: On 3/14/2012 10:56 AM, Terry Reedy wrote: > Are you talking about 'install for all users' versus 'install for this > user only'? I have always done the former as I see no point to the > latter on my machine, even if another family member has an account. Yes, but some people are on corporate machines that only allow "install for this user" installations. >> I am fine with keeping the distinction between > > base installs (no py_version) > > I have no idea what this means. As far as I can remember, each > installation of Python x.y (back to 1.3 for me, on DOS) has gone into a > pythonxy (no dot) directory, with subdirectories much as Jim J. described. I am referring to the currently-existing install schemes 'nt' ('install for all users') and 'nt-user' ('install for this user only'). The *current* layouts are described at
The *current* layouts are described at http://hg.python.org/distutils2/file/2cec52b682a9/distutils2/_backport/sysconfig.cfg: L57-65: [nt] stdlib = {base}/Lib platstdlib = {base}/Lib purelib = {base}/Lib/site-packages platlib = {base}/Lib/site-packages include = {base}/Include platinclude = {base}/Include scripts = {base}/Scripts data = {base} L86-93: [nt_user] stdlib = {userbase}/Python{py_version_nodot} platstdlib = {userbase}/Python{py_version_nodot} purelib = {userbase}/Python{py_version_nodot}/site-packages platlib = {userbase}/Python{py_version_nodot}/site-packages include = {userbase}/Python{py_version_nodot}/Include scripts = {userbase}/Scripts data = {userbase} I am proposing that these change to: [nt] stdlib = {base}/lib platstdlib = {base}/lib purelib = {base}/lib/site-packages platlib = {base}/lib/site-packages include = {base}/include platinclude = {base}/include scripts = {base}/bin data = {base} [nt_user] stdlib = {userbase}/python{py_version_short} platstdlib = {userbase}/python{py_version_short} purelib = {userbase}/python{py_version_short}/site-packages platlib = {userbase}/python{py_version_short}/site-packages include = {userbase}/python{py_version_short}/include scripts = {userbase}/bin data = {userbase} All the other diuectories that Jim talked about would not be affected by this proposal. Does this make it clearer? Thanks, Van From anacrolix at gmail.com Wed Mar 14 17:11:56 2012 From: anacrolix at gmail.com (Matt Joiner) Date: Thu, 15 Mar 2012 00:11:56 +0800 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: <20120314132747.6c857a53@pitrou.net> References: <20120314101618.5cf1850f@pitrou.net> <20120314132747.6c857a53@pitrou.net> Message-ID: I have some observations regarding this: Victor's existing time.monotonic and time.wallclock make use of QueryPerformanceCounter, and CLOCK_MONOTONIC_RAW as possible. Both of these are hardware-based counters, their monotonicity is just a convenient property of the timer sources. 
Furthermore, time values can actually vary depending on the processor the call is served on. time.hardware()? time.monotonic_raw()? There are bug reports on Linux that CLOCK_MONOTONIC isn't always monotonic. This is why CLOCK_MONOTONIC_RAW was created. There's also the issue of time leaps (forward), which also isn't a problem with the latter form. time.monotonic(raw_only=False)? The real value of "monotonic" timers is that they don't leap backwards, and preferably don't leap forwards. Whether they are absolute is of no consequence. I would suggest that the API reflect this, and that more specific time values be obtained using the proper raw syscall wrapper (like time.clock_gettime) if required. time.relative(strict=False)? The ultimate use of the function name being established is for use in timeouts and relative timings. Where an option is present, it disallows fallbacks like CLOCK_MONOTONIC and other weaker forms: * time.hardware(fallback=True) -> reflects the source of the timing impeccably. alerts users to possible affinity issues * time.monotonic_raw() -> a bit linux specific... * time.relative(strict=False) -> matches the use case. a warning could be put regarding hardware timings * time.monotonic(raw_only=False) -> closest to the existing implementation. the keyword name i think is better. From julien at tayon.net Wed Mar 14 17:16:50 2012 From: julien at tayon.net (julien tayon) Date: Wed, 14 Mar 2012 17:16:50 +0100 Subject: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives In-Reply-To: References: Message-ID: Hello, 2012/3/13 Guido van Rossum : > On Mon, Mar 12, 2012 at 9:23 PM, Brian Curtin wrote: >> Downloads don't mean the code is good. Voting is gamed. I really don't >> think there's a good automated solution to tell us what the >> high-quality replacement projects are. > > Sure, these are imperfect metrics. But not having any metrics at all > is flawed too. 
Despite the huge flamewar we had 1-2 years ago about > PyPI comments, I think we should follow the lead of the many app > stores that pop up on the web -- users will recognize the pattern and > will tune their skepticism sensors as needed. > unittest and functional testing are objective metrics. If it passes a unittest and has the same API, then it is a legitimate replacement for the stdlib. If a benchmark is also given that can be considered **not biased**, and it is faster, it is pretty neat. (but why not contribute to the stdlib then?) Functional testing is however a little more tricky, subjective and interesting. Since stdlib replacements are mostly functionally equivalent (like requests) to one or more stdlib modules, that is what people are searching for. People willing to be considered compliant with some functionalities of a stdlib would have to give examples of porting from libA to libB plus the given *functional tests*. An interesting point may also be PEP compliance. (it is sometimes a tedious task when playing with SA to know if a Python package of a DB driver is DB-API 2.0 compliant). It would make PyPI even greater if package maintainers added this metadata (implements, functions_like, pep_compliance) in their setup.py, given they comply with the logic. And it would pretty much automate the search for alternatives to the stdlib. The huge problem is how to trust that maintainers are self-disciplined enough, willing, and have enough knowledge to tag their packages properly, plus what is the extra strain on code and infrastructure to automate this? Without this information we may become like senior Java developers whose greatest skills are not coding, but knowing, in a wide ecosystem of packages, which ones are relevant/reliable/compatible/stable. (needle in a haystack) Maybe the answer is not one of code but one of trend setting and signal-to-noise ratio on Python hubs.
(http://www.pythonmeme.com/, http://planet.python.org/, http://pypi.org (and still in a lesser way of classification)). Cheers, -- Julien Tayon From pje at telecommunity.com Wed Mar 14 17:17:06 2012 From: pje at telecommunity.com (PJ Eby) Date: Wed, 14 Mar 2012 12:17:06 -0400 Subject: [Python-Dev] SocketServer issues In-Reply-To: <20120314100226.04627743@pitrou.net> References: <20120314100226.04627743@pitrou.net> Message-ID: On Wed, Mar 14, 2012 at 5:02 AM, Antoine Pitrou wrote: > On Wed, 14 Mar 2012 04:26:16 +0000 > Kristján Valur Jónsson wrote: > > Hi there. > > I want to mention some issues I've had with the socketserver module, and > discuss if there's a way to make it nicer. > > So, for a long time we were able to create magic stackless mixin classes > for > > it, like ThreadingMixIn, and assuming we had the appropriate socket > > replacement library, be able to use it nicely using tasklets. > > I don't really think the ability to "create magic stackless mixin > classes" should be a driving principle for the stdlib. > But not needlessly duplicating functionality already elsewhere in the stdlib probably ought to be. ;-) > So, my first question is: Why not simply rely on the already built-in > timeout > > support in the socket module? > > In case you didn't notice, the built-in timeout support *also* uses > select(). > That's not really the point; the frameworks that implement nonblocking I/O by replacing the socket module (and Stackless is only one of many) won't be using that code. If SocketServer uses only the socket module's API, then those frameworks will be told about the timeout via the socket API, and can then implement it their own way. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From maker.py at gmail.com Wed Mar 14 17:22:23 2012 From: maker.py at gmail.com (=?UTF-8?Q?Michele_Orr=C3=B9?=) Date: Wed, 14 Mar 2012 17:22:23 +0100 Subject: [Python-Dev] [Issue1531415] Using PyErr_WarnEx on parsetok Message-ID: As pointed out by Sean Reifschneider in issue 1531415, I'm writing this mail mainly to ask for advice concerning Python's makefile. Currently, Parser/parsetok.c writes directly to stderr in case no more memory is available. So, it would be nice to use, instead of a raw printf, the functions provided by Python/_warnings.c (PyErr_NoMemory and/or PyErr_WarnEx). This, right now, leads to a circular dependency, as described here: http://bugs.python.org/msg154939 . So far I've seen some functions present both in Python/ and pgenmain.c : PyErr_Occurred(), Py_FatalError(const char *msg), Py_Exit(int). This means a dirty alternative could be to implement another function PyErr_WarnEx; but probably there is a better way to organize the makefile, because currently I'm using the entire $(PYTHON_OBJS) (seems needed by warnings.o). This is the first time I run into Python C code, so please be patient :) -- ? From guido at python.org Wed Mar 14 17:22:36 2012 From: guido at python.org (Guido van Rossum) Date: Wed, 14 Mar 2012 09:22:36 -0700 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: References: <20120314101618.5cf1850f@pitrou.net> <20120314132747.6c857a53@pitrou.net> Message-ID: I have a totally different observation. Presumably the primary use case for these timers is to measure real time intervals for the purpose of profiling various operations. For this purpose we want them to be as "steady" as possible: tick at a constant rate, don't jump forward or backward. (And they shouldn't invoke the concept of "CPU" time -- we already have time.clock() for that, besides it's often wrong, e.g. you might be measuring some sort of I/O performance.)
If this means that a second as measured by time.time() is sometimes not the same as a second measured by this timer (due to time.time() occasionally jumping due to clock adjustments), that's fine with me. If this means it's unreliable inside a VM, well, it seems that's a property of the underlying OS timer, and there's not much we can do about it except letting a knowledgeable user override the timer used. As for names, I like Jeffrey's idea of having "steady" in the name. --Guido On Wed, Mar 14, 2012 at 9:11 AM, Matt Joiner wrote: > I have some observations regarding this: > > Victor's existing time.monotonic and time.wallclock make use of > QueryPerformanceCounter, and CLOCK_MONOTONIC_RAW as possible. Both of > these are hardware-based counters, their monotonicity is just a > convenient property of the timer sources. Furthermore, time values can > actually vary depending on the processor the call is served on. > time.hardware()? time.monotonic_raw()? > > There are bug reports on Linux that CLOCK_MONOTONIC isn't always > monotonic. This is why CLOCK_MONOTONIC_RAW was created. There's also > the issue of time leaps (forward), which also isn't a problem with the > latter form. time.monotonic(raw_only=False)? > > The real value of "monotonic" timers is that they don't leap > backwards, and preferably don't leap forwards. Whether they are > absolute is of no consequence. I would suggest that the API reflect > this, and that more specific time values be obtained using the proper > raw syscall wrapper (like time.clock_gettime) if required. > time.relative(strict=False)? > > The ultimate use of the function name being established is for use in > timeouts and relative timings. > > Where an option is present, it disallows fallbacks like > CLOCK_MONOTONIC and other weaker forms: > * time.hardware(fallback=True) -> reflects the source of the timing > impeccably. alerts users to possible affinity issues > * time.monotonic_raw() -> a bit linux specific...
> * time.relative(strict=False) -> matches the use case. a warning > could be put regarding hardware timings > * time.monotonic(raw_only=False) -> closest to the existing > implementation. the keyword name i think is better. -- --Guido van Rossum (python.org/~guido) From Van.Lindberg at haynesboone.com Wed Mar 14 16:51:32 2012 From: Van.Lindberg at haynesboone.com (Lindberg, Van) Date: Wed, 14 Mar 2012 15:51:32 +0000 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F60B4BC.9040702@scottdial.com> References: <4F5FF662.4070201@v.loewis.de> <4F60B4BC.9040702@scottdial.com> Message-ID: <4F60BE83.3050102@gmail.com> On 3/14/2012 10:09 AM, Scott Dial wrote: > I think you are confusing two different configuration sections in > sysconfig.cfg: > > [nt] > stdlib = {base}/Lib > platstdlib = {base}/Lib > purelib = {base}/Lib/site-packages > platlib = {base}/Lib/site-packages > include = {base}/Include > platinclude = {base}/Include > scripts = {base}/Scripts > data = {base} > > [nt_user] > stdlib = {userbase}/Python{py_version_nodot} > platstdlib = {userbase}/Python{py_version_nodot} > purelib = {userbase}/Python{py_version_nodot}/site-packages > platlib = {userbase}/Python{py_version_nodot}/site-packages > include = {userbase}/Python{py_version_nodot}/Include > scripts = {userbase}/Scripts > data = {userbase} I was lumping them together, yes, but now note that I modify the proposal to maintain this distinction.
These would change to: [nt] stdlib = {base}/lib platstdlib = {base}/lib purelib = {base}/lib/site-packages platlib = {base}/lib/site-packages include = {base}/include platinclude = {base}/include scripts = {base}/bin data = {base} [nt_user] stdlib = {userbase}/python{py_version_short} platstdlib = {userbase}/python{py_version_short} purelib = {userbase}/python{py_version_short}/site-packages platlib = {userbase}/python{py_version_short}/site-packages include = {userbase}/python{py_version_short}/include scripts = {userbase}/bin data = {userbase} CIRCULAR 230 NOTICE: To ensure compliance with requirements imposed by U.S. Treasury Regulations, Haynes and Boone, LLP informs you that any U.S. tax advice contained in this communication (including any attachments) was not intended or written to be used, and cannot be used, for the purpose of (i) avoiding penalties under the Internal Revenue Code or (ii) promoting, marketing or recommending to another party any transaction or matter addressed herein. CONFIDENTIALITY NOTICE: This electronic mail transmission is confidential, may be privileged and should be read or retained only by the intended recipient. If you have received this transmission in error, please immediately notify the sender and delete it from your system. From Van.Lindberg at haynesboone.com Wed Mar 14 17:03:54 2012 From: Van.Lindberg at haynesboone.com (Lindberg, Van) Date: Wed, 14 Mar 2012 16:03:54 +0000 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F603B76.4050004@gmail.com> References: <4F603B76.4050004@gmail.com> Message-ID: <4F60C169.9030404@gmail.com> On 3/14/2012 1:32 AM, Mark Hammond wrote: > As per comments later in the thread, I'm -1 on including > "python{py_version_short}" in the lib directories for a number of > reasons; one further reason not outlined is that it would potentially > make running Python directly from a built tree difficult. For the same > reason, I'm also -1 on having that in the include dir. 
A built tree would look the same as always - the directories would only be moved (if at all) during installation. Thus, you will still be able to run python directly from a built installation. Also note that the py_version_short will not be in platform installs. >> scripts = {base/userbase}/bin > > We should note that this may cause pain for a number of projects - I've > seen quite a few projects that already assume "Scripts" on Windows - eg, > virtualenv and setuptools IIRC If you look at these projects, though, they *special case* Windows to account for the different layout. Removing this difference will allow these projects to remove their special-casing code. >- and also assume the executable is where > it currently lives - one example off the top of my head is the mozilla > "jetpack" project - see: > > https://github.com/mozilla/addon-sdk/blob/master/bin/activate.bat#L117 This code actually reinforces my point: First, this code would actually still work. The section ":FoundPython" sets the PATH to "%VIRTUAL_ENV%\bin;%PYTHONINSTALL%;%PATH%" (L80), which would still allow python.exe to be found and run. Second, look at that line again. Mozilla actually has edited this code so that the jetpack uses a cross-platform "bin" convention, just as I am proposing. Third, one element of this proposal is that there would be a key placed in the registry that points directly to the location of the python executable, making it trivial to locate programmatically on Windows. Thanks, Van
From anacrolix at gmail.com Wed Mar 14 17:26:09 2012 From: anacrolix at gmail.com (Matt Joiner) Date: Thu, 15 Mar 2012 00:26:09 +0800 Subject: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives In-Reply-To: <20120314103941.7cd7c9e6@pitrou.net> References: <20120313222103.163877e6@pitrou.net> <20120314103941.7cd7c9e6@pitrou.net> Message-ID: > Can you give a pointer to these one-liners? > Once a patch gets a month old or older, it tends to disappear from > everyone's radar unless you somehow "ping" on the tracker, or post a > message to the mailing-list. All of these can be verified with a few minutes of checking the described code paths. http://bugs.python.org/issue13839 http://bugs.python.org/issue13872 http://bugs.python.org/issue12684 http://bugs.python.org/issue13694 Thanks From solipsis at pitrou.net Wed Mar 14 17:28:06 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 14 Mar 2012 17:28:06 +0100 Subject: [Python-Dev] SocketServer issues In-Reply-To: References: <20120314100226.04627743@pitrou.net> Message-ID: <20120314172806.633f9f33@pitrou.net> On Wed, 14 Mar 2012 08:27:08 -0700 Guido van Rossum wrote: > Hopefully it doesn't use select if no timeout is set... No, it doesn't :-) Regards Antoine.
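The built-in socket timeout support being discussed can be seen in a few lines — a sketch using only the stdlib socket module: once settimeout() is set on a listening socket, accept() raises socket.timeout instead of blocking forever when no connection arrives:

```python
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(('127.0.0.1', 0))    # bind to any free port
srv.listen(1)
srv.settimeout(0.1)           # accept() now gives up after 0.1s
try:
    conn, addr = srv.accept() # no client ever connects, so this times out
except socket.timeout:
    print('accept timed out')
finally:
    srv.close()
```

Frameworks that replace the socket module see the timeout through this same API and can implement it their own way, without SocketServer calling select() itself.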
From solipsis at pitrou.net Wed Mar 14 17:29:47 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 14 Mar 2012 17:29:47 +0100 Subject: [Python-Dev] SocketServer issues In-Reply-To: References: <20120314100226.04627743@pitrou.net> Message-ID: <20120314172947.4f163e69@pitrou.net> On Wed, 14 Mar 2012 12:17:06 -0400 PJ Eby wrote: > > > So, my first question is: Why not simply rely on the already built-in > > timeout > > > support in the socket module? > > > > In case you didn't notice, the built-in timeout support *also* uses > > select(). > > > > That's not really the point; the frameworks that implement nonblocking I/O > by replacing the socket module (and Stackless is only one of many) won't be > using that code. Then they should also replace the select module. Again, I don't think SocketServer (or any other stdlib module) should be designed in this regard. Regards Antoine. From kristjan at ccpgames.com Wed Mar 14 17:42:29 2012 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Wed, 14 Mar 2012 16:42:29 +0000 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: References: <20120314101618.5cf1850f@pitrou.net> <20120314132747.6c857a53@pitrou.net> Message-ID: Yes, the intended use is relative timings and timeouts. I think we are complicating things far too much. 1) Do we really need a fallback on windows? Will QPC ever fail? 2) is it a problem for the intended use if we cannot absolutely guarantee that time won't ever tick backwards? IMHO, we shouldn't complicate the api, and make whatever best try we can in C. On windows I would do this (pseudocode) Static last_time = 0 If (QPC_works) time = QueryPerformanceCounter() else time = GetSystemTimeAsFileTime() if (time > last_time) last_time=time else time = last_time return time in other words: 1) use QPC. If the api indicates that it isn't available (does this ever happen in real life?) use a fallback of system time 2) enforce monotonicity with a static.
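The pseudocode above translates into a short runnable Python sketch (time.time() here merely stands in for the QPC/GetSystemTimeAsFileTime sources; the cached last value is what enforces the monotonic property):

```python
import time

_last_time = 0.0

def monotonic_time(clock=time.time):
    """Return a never-decreasing timestamp: cache the largest value seen."""
    global _last_time
    now = clock()
    if now > _last_time:
        _last_time = now
    # If the underlying clock stepped backwards, we repeat the old value.
    return _last_time
```

With this scheme a backward step in the underlying clock shows up as the timer standing still rather than running backwards, which is all the relative-timing and timeout use cases need.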
QPC, if the OS is buggy (it calls the CPU counter rather than the timer chip), can cause jitter because it is called on different cores. No options in the api. No nothing. We simply provide the best api possible and some hardware/software combos might be less accurate. K -----Original Message----- From: python-dev-bounces+kristjan=ccpgames.com at python.org [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On Behalf Of Matt Joiner Sent: 14. mars 2012 09:12 To: Antoine Pitrou; Victor Stinner; Guido van Rossum Cc: python-dev at python.org Subject: Re: [Python-Dev] Drop the new time.wallclock() function? I have some observations regarding this: Victor's existing time.monotonic and time.wallclock make use of QueryPerformanceCounter, and CLOCK_MONOTONIC_RAW as possible. Both of these are hardware-based counters, their monotonicity is just a convenient property of the timer sources. Furthermore, time values can actually vary depending on the processor the call is served on. time.hardware()? time.monotonic_raw()? There are bug reports on Linux that CLOCK_MONOTONIC isn't always monotonic. This is why CLOCK_MONOTONIC_RAW was created. There's also the issue of time leaps (forward), which also isn't a problem with the latter form. time.monotonic(raw_only=False)? The real value of "monotonic" timers is that they don't leap backwards, and preferably don't leap forwards. Whether they are absolute is of no consequence. I would suggest that the API reflect this, and that more specific time values be obtained using the proper raw syscall wrapper (like time.clock_gettime) if required. time.relative(strict=False)? The ultimate use of the function name being established is for use in timeouts and relative timings. Where an option is present, it disallows fallbacks like CLOCK_MONOTONIC and other weaker forms: * time.hardware(fallback=True) -> reflects the source of the timing impeccably.
alerts users to possible affinity issues * time.monotonic_raw() -> a bit linux specific... * time.relative(strict=False) -> matches the use case. a warning could be put regarding hardware timings * time.monotonic(raw_only=False) -> closest to the existing implementation. the keyword name i think is better. _______________________________________________ Python-Dev mailing list Python-Dev at python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/kristjan%40ccpgames.com From guido at python.org Wed Mar 14 17:44:07 2012 From: guido at python.org (Guido van Rossum) Date: Wed, 14 Mar 2012 09:44:07 -0700 Subject: [Python-Dev] SocketServer issues In-Reply-To: References: Message-ID: 2012/3/13 Kristján Valur Jónsson : > I want to mention some issues I've had with the socketserver module, and > discuss if there's a way to make it nicer. > > So, for a long time we were able to create magic stackless mixin classes for > it, like ThreadingMixIn, and assuming we had the appropriate socket > replacement library, be able to use it nicely using tasklets. > > Then, at some point, the run_forever loop was changed to support timeout > through the use of select.select() before every socket.accept() call. This > was very awkward because the whole concept of select() really goes contrary > to the approach of using microthreads, non-blocking IO and all that. I'm surprised -- surely a non-blocking framework should have no problem implementing select(), especially if it's for one file descriptor?
The select() call is normally needed to support the shutdown() feature, which is very useful. And also the overridable service_actions() method. Oh, there's another select() call, in handle_request(), that should also be skipped if timeout is None. At least, I *think* a select() with a timeout of None blocks forever or until the socket is ready or until it is interrupted; I think this can always be skipped, since the immediately following I/O call will block in exactly the same way. Unless the socket is set in non-blocking mode; we may have to have provisions to avoid breaking that situation too. > The way around this for me has been to do local modifications to the SocketServer > and just get rid of the select. I hope the above suggestion is sufficient? It's the best we can do while maintaining backward compatibility. This class has a lot of different features, and is designed to be subclassed, so it's hard to make changes that don't break anything. > So, my first question is: Why not simply rely on the already built-in > timeout support in the socket module? Setting the correct timeout value on > the accepting socket, will achieve the same thing. Of course, one then has > to reset the timeout value on the accepted socket, but this is minor. I don't think it's the same thing at all. If you set a timeout on the socket, the accept() or recvfrom() call in get_request() will raise an exception if no request comes in within the timeout (default 0.5 sec); with the timeout implemented in serve_forever(), get_request() and its callers won't have to worry about the timeout exception. > Second: Of late the SocketServer has grown additional features and > attributes. In particular, it now has two event objects, __shutdown_request > and __is_shut_down. > > Notice the double underscores. > > They make it impossible to subclass the SocketServer class to provide a > different implementation of run_forever(). Is there any good reason why > these attributes have been made "private" like this? Having just seen
Is there any good reason why > these attributes have been made ?private? like this?? Having just seen > Raymond?s talk on how to subclass right, this looks like the wrong way to > use the double leading underscores. Assuming you meant serve_forever(), I see no problem at all. If you override serve_forever() you also have to override shutdown(). That's all. They are marked private because they are involved in subtle invariants that are easily disturbed if users touch them. I could live with making them single-underscore protected, only to be used by knowledgeable subclasses. But not with making then public attributes. > So, two things really: > > The use of select.select in SocketServer makes it necessary to subclass it > to write a new version of run_forever() for those that wish to use a > non-blocking IO library instead of socket. And the presence of these private > attributes make it (theoretically) impossible to specialize run_forever in a > mix-in class. > > > > Any thoughs?? Is anyone interested in seeing how the timeouts can be done > without using select.select()?? And what do you think about removing the > double underscores from there and thus making serve_forever owerrideable? Let's see a patch (based on my concerns above) and then we can talk again. -- --Guido van Rossum (python.org/~guido) From solipsis at pitrou.net Wed Mar 14 17:40:19 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 14 Mar 2012 17:40:19 +0100 Subject: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives In-Reply-To: References: <20120313222103.163877e6@pitrou.net> <20120314103941.7cd7c9e6@pitrou.net> Message-ID: <20120314174019.5cead57b@pitrou.net> On Thu, 15 Mar 2012 00:26:09 +0800 Matt Joiner wrote: > > Can you give a pointer to these one-liners? > > Once a patch gets a month old or older, it tends to disappear from > > everyone's radar unless you somehow "ping" on the tracker, or post a > > message to the mailing-list. 
> > All of these can be verified with a few minutes of checking the > described code paths. > > http://bugs.python.org/issue13839 > http://bugs.python.org/issue13872 > http://bugs.python.org/issue12684 > http://bugs.python.org/issue13694 Thanks. You did get some comments and answers on some of these issues, though (especially on http://bugs.python.org/issue13872 ). Regards Antoine. From nadeem.vawda at gmail.com Wed Mar 14 17:47:24 2012 From: nadeem.vawda at gmail.com (Nadeem Vawda) Date: Wed, 14 Mar 2012 18:47:24 +0200 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: References: <20120314101618.5cf1850f@pitrou.net> <20120314132747.6c857a53@pitrou.net> Message-ID: A summary of the discussion so far, as I've understood it: - We should have *one* monotonic/steady timer function, using the sources described in Victor's original post. - By default, it should fall back to time.time if a better source is not available, but there should be a flag that can disable this fallback for users who really *need* a monotonic/steady time source. - Proposed names for the function: * monotonic * steady_clock * wallclock * realtime - Proposed names for the flag controlling fallback behavior: * strict (=False) * fallback (=True) * monotonic (=False) For the function name, I think monotonic() and steady_clock() convey the purpose of the function much better than the other two; the term "wallclock" is actively misleading, and "realtime" seems ambiguous. For the flag name, I'm -1 on "monotonic" -- it sounds like a flag to decide whether to use a monotonic time source always or never, while it actually decides between "always" and "sometimes". I think "strict" is nicer than "fallback", but I'm fine with either one. 
Cheers, Nadeem From tjreedy at udel.edu Wed Mar 14 17:47:21 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 14 Mar 2012 12:47:21 -0400 Subject: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives In-Reply-To: <4F606D57.80304@hotpy.org> References: <20120313222103.163877e6@pitrou.net> <20120314103941.7cd7c9e6@pitrou.net> <20120314095810.GA20325@sleipnir.bytereef.org> <4F606D57.80304@hotpy.org> Message-ID: On 3/14/2012 6:05 AM, Mark Shannon wrote: > But how do you find issues? It takes some practice. Since you patched core component dict, I tried All text: dict and Components: Interpreter Core. (Leave default Status: open as is.) 51 issues. Add Keyword: needs review. 0 issues. Whoops, seems not to be used. Change to Keyword: patch. 31 issues. That seems like a manageable number of titles to 'wade through', middle clicking ones that seem promising to open in a new tab. > I want to do some reviews, but I don't want to wade through issues on > components I know little or nothing about in order to find the ones I > can review. If you want more help, ask me here or privately. I am probably better at searching than fixing. > There does not seem to be a way to filter search results in the tracker. I am not sure what you mean as there are multiple search fields. The basic text box is, admittedly, limited in that it combines multiple words with AND, with no other choices. An OR search requires multiple searches. Exclusion is not possible that I know of, and perhaps that is what you meant. The other fields that seem most useful to me are Stage, Type, Components, and Status. 
-- Terry Jan Reedy From pje at telecommunity.com Wed Mar 14 18:03:56 2012 From: pje at telecommunity.com (PJ Eby) Date: Wed, 14 Mar 2012 13:03:56 -0400 Subject: [Python-Dev] SocketServer issues In-Reply-To: <20120314172947.4f163e69@pitrou.net> References: <20120314100226.04627743@pitrou.net> <20120314172947.4f163e69@pitrou.net> Message-ID: On Wed, Mar 14, 2012 at 12:29 PM, Antoine Pitrou wrote: > On Wed, 14 Mar 2012 12:17:06 -0400 > PJ Eby wrote: > > That's not really the point; the frameworks that implement nonblocking > I/O > > by replacing the socket module (and Stackless is only one of many) won't > be > > using that code. > > Then they should also replace the select module. > That actually sounds like a good point. ;-) I'm not the maintainer of any of those frameworks, but IIRC some of them *do* replace it. Perhaps this would solve Stackless's problem here too? -------------- next part -------------- An HTML attachment was scrubbed... URL: From anacrolix at gmail.com Wed Mar 14 18:08:39 2012 From: anacrolix at gmail.com (Matt Joiner) Date: Thu, 15 Mar 2012 01:08:39 +0800 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: References: <20120314101618.5cf1850f@pitrou.net> <20120314132747.6c857a53@pitrou.net> Message-ID: On Thu, Mar 15, 2012 at 12:22 AM, Guido van Rossum wrote: > I have a totally different observation. Presumably the primary use > case for these timers is to measure real time intervals for the > purpose of profiling various operations. For this purpose we want them > to be as "steady" as possible: tick at a constant rate, don't jump > forward or backward. (And they shouldn't invoke the concept of "CPU" > time -- we already have time.clock() for that, besides it's often > wrong, e.g. you might be measuring some sort of I/O performance.) 
If > this means that a second as measured by time.time() is sometimes not > the same as a second measured by this timer (due to time.time() > occasionally jumping due to clock adjustments), that's fine with me. > If this means it's unreliable inside a VM, well, it seems that's a > property of the underlying OS timer, and there's not much we can do > about it except letting a knowledgeable user override the timer used. > As for names, I like Jeffrey's idea of having "steady" in the name. In that case I'd suggest either time.hardware(strict=True), or time.steady(strict=True), since the only timers exposed natively that are both high resolution and steady are on the hardware. A warning about CPU affinity is also still wise methinks. From kristjan at ccpgames.com Wed Mar 14 18:09:39 2012 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Wed, 14 Mar 2012 17:09:39 +0000 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: References: <20120314101618.5cf1850f@pitrou.net> <20120314132747.6c857a53@pitrou.net> Message-ID: > - By default, it should fall back to time.time if a better source is > not available, but there should be a flag that can disable this > fallback for users who really *need* a monotonic/steady time source. As pointed out on a different thread, you don't need this "flag" since the code can easily enforce the monotonic property by maintaining a static value. This is how we worked around buggy implementations of QueryPerformanceCounter on windows (). K -----Original Message----- From: python-dev-bounces+kristjan=ccpgames.com at python.org [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On Behalf Of Nadeem Vawda Sent: 14. mars 2012 09:47 To: Guido van Rossum Cc: Antoine Pitrou; python-dev at python.org Subject: Re: [Python-Dev] Drop the new time.wallclock() function?
A summary of the discussion so far, as I've understood it:

- We should have *one* monotonic/steady timer function, using the sources described in Victor's original post.
- By default, it should fall back to time.time if a better source is not available, but there should be a flag that can disable this fallback for users who really *need* a monotonic/steady time source.
- Proposed names for the function:
  * monotonic
  * steady_clock
  * wallclock
  * realtime
- Proposed names for the flag controlling fallback behavior:
  * strict (=False)
  * fallback (=True)
  * monotonic (=False)

For the function name, I think monotonic() and steady_clock() convey the purpose of the function much better than the other two; the term "wallclock" is actively misleading, and "realtime" seems ambiguous. For the flag name, I'm -1 on "monotonic" -- it sounds like a flag to decide whether to use a monotonic time source always or never, while it actually decides between "always" and "sometimes". I think "strict" is nicer than "fallback", but I'm fine with either one. Cheers, Nadeem _______________________________________________ Python-Dev mailing list Python-Dev at python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/kristjan%40ccpgames.com From kristjan at ccpgames.com Wed Mar 14 17:59:47 2012 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Wed, 14 Mar 2012 16:59:47 +0000 Subject: [Python-Dev] SocketServer issues In-Reply-To: <20120314100226.04627743@pitrou.net> References: <20120314100226.04627743@pitrou.net> Message-ID: >I don't really think the ability to "create magic stackless mixin classes" should be a driving principle for the stdlib. > I would suggest using a proper non-blocking framework such as Twisted. There is a lot of code out there that uses SocketServer. It was originally designed to be easily extensible, with various mixins to control ultimate behavior.
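[Editor's note: the MixIn mechanism referred to here is quick to demonstrate. The sketch below uses the Python 3 module name (socketserver) and a made-up echo handler; it is purely illustrative and not part of the original mail. A stackless or gevent variant would plug in the same way, by overriding process_request().]

```python
import socketserver

class EchoHandler(socketserver.StreamRequestHandler):
    """Toy handler: echo a single line back to the client."""
    def handle(self):
        self.wfile.write(self.rfile.readline())

# A MixIn decides *how* each request is dispatched (thread, fork, tasklet, ...)
# by overriding process_request(); the server class supplies the transport.
class ThreadedEchoServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
    pass

server = ThreadedEchoServer(("127.0.0.1", 0), EchoHandler)  # port 0 = any free port
print(server.server_address)
server.server_close()
```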
It just seems that there have been some design decisions made recently that make this subclassability / extensibility more difficult, and those are the two changes I pointed out. The thing with the select.select wouldn't be so bad if I could simply override serve_forever, but that function can't be overridden because of the poor choice of adding __attributes to the class. And, you would run into the same kind of trouble if you wanted to create a TwistedMixIn, a geventMixIn, or what not. > In case you didn't notice, the built-in timeout support *also* uses select(). Yes, that's how the normal blocking framework supports timeout. Asynchronous frameworks do it differently, though. > Then they should also replace the select module. > Again, I don't think SocketServer (or any other stdlib module) should be designed in this regard. And so we do too, but now every socket accept requires two round trips through the event loop. Also, emulating select() is not a critical part of frameworks designed to help you avoid using it in the first place. The point of frameworks such as gevent, stackless, etc, is to let you write code with zillions of sockets without ever touching select. The quick and dirty emulated version I use uses a thread to make it non-blocking! It just seems odd to me that it was designed to use the "select" api to do timeouts, where timeouts are already part of the socket protocol and can be implemented more efficiently there. Anyway, I'm not talking about rewriting anything, I merely want to fix some small design problems that prevent SocketServer from being specialized. I'll submit a simple patch for review. K -----Original Message----- From: python-dev-bounces+kristjan=ccpgames.com at python.org [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On Behalf Of Antoine Pitrou Sent: 14. mars 2012 02:02 To: python-dev at python.org Subject: Re: [Python-Dev] SocketServer issues On Wed, 14 Mar 2012 04:26:16 +0000 Kristján Valur Jónsson wrote: > Hi there.
> I want to mention some issues I've had with the socketserver module, and discuss if there's a way to make it nicer. > So, for a long time we were able to create magic stackless mixin > classes for it, like ThreadingMixIn, and assuming we had the > appropriate socket replacement library, be able to use it nicely using tasklets. I don't really think the ability to "create magic stackless mixin classes" should be a driving principle for the stdlib. I would suggest using a proper non-blocking framework such as Twisted. > So, my first question is: Why not simply rely on the already built-in > timeout support in the socket module? In case you didn't notice, the built-in timeout support *also* uses select(). Regards Antoine. _______________________________________________ Python-Dev mailing list Python-Dev at python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/kristjan%40ccpgames.com From anacrolix at gmail.com Wed Mar 14 18:15:52 2012 From: anacrolix at gmail.com (Matt Joiner) Date: Thu, 15 Mar 2012 01:15:52 +0800 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: References: <20120314101618.5cf1850f@pitrou.net> <20120314132747.6c857a53@pitrou.net> Message-ID: FWIW the name is quite important, because these kinds of timings are quite important so I think it's worth the effort. > - By default, it should fall back to time.time if a better source is > not available, but there should be a flag that can disable this > fallback for users who really *need* a monotonic/steady time source. Agreed. As Guido mentioned, some platforms might not be able to access hardware timers, so falling back should be the default, lest unaware users trigger unexpected errors. > - Proposed names for the function: > * monotonic Doesn't indicate that the timing is also prevented from leaping forward. > * steady_clock I think the use of "clock" might suggest CPU time to a doc-skimming user.
"clock" is overloaded here. > For the flag name, I'm -1 on "monotonic" -- it sounds like a flag to > decide whether to use a monotonic time source always or never, while > it actually decides between "always" and "sometimes". I think "strict" > is nicer than "fallback", but I'm fine with either one. I agree, "strict" fits in with existing APIs. I think time.hardware() and time.steady() are still okay here. From guido at python.org Wed Mar 14 18:21:30 2012 From: guido at python.org (Guido van Rossum) Date: Wed, 14 Mar 2012 10:21:30 -0700 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: References: <20120314101618.5cf1850f@pitrou.net> <20120314132747.6c857a53@pitrou.net> Message-ID: +1 for steady(). On Wed, Mar 14, 2012 at 10:15 AM, Matt Joiner wrote: > FWIW the name is quite important, because these kinds of timings are > quite important so I think it's worth the effort. > >> - By default, it should fall back to time.time if a better source is >> not available, but there should be a flag that can disable this >> fallback for users who really *need* a monotonic/steady time source. > > Agreed. As Guido mentioned, some platforms might not be able to access > hardware timers, so falling back should be the default, lest unaware > users trigger unexpected errors. > >> - Proposed names for the function: >> * monotonic > > Doesn't indicate that the timing is also prevented from leaping forward. > >> * steady_clock > > I think the use of "clock" might suggest CPU time to a doc-skimming user.
-- --Guido van Rossum (python.org/~guido) From solipsis at pitrou.net Wed Mar 14 18:23:07 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 14 Mar 2012 18:23:07 +0100 Subject: [Python-Dev] SocketServer issues References: <20120314100226.04627743@pitrou.net> Message-ID: <20120314182307.4e4e9c86@pitrou.net> On Wed, 14 Mar 2012 16:59:47 +0000 Kristján Valur Jónsson wrote: > > It just seems odd to me that it was designed to use the "select" api to do timeouts, > where timeouts are already part of the socket protocol and can be implemented more > efficiently there. How is it more efficient if it uses the exact same system calls? And why are you worrying exactly? I don't understand why accept() would be critical for performance. Thanks Antoine. From andrew.svetlov at gmail.com Wed Mar 14 18:39:53 2012 From: andrew.svetlov at gmail.com (Andrew Svetlov) Date: Wed, 14 Mar 2012 10:39:53 -0700 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: References: <20120314101618.5cf1850f@pitrou.net> <20120314132747.6c857a53@pitrou.net> Message-ID: On Wed, Mar 14, 2012 at 10:21 AM, Guido van Rossum wrote: > +1 for steady(). > +1 From regebro at gmail.com Wed Mar 14 18:42:41 2012 From: regebro at gmail.com (Lennart Regebro) Date: Wed, 14 Mar 2012 18:42:41 +0100 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: <20120314101618.5cf1850f@pitrou.net> References: <20120314101618.5cf1850f@pitrou.net> Message-ID: On Wed, Mar 14, 2012 at 10:16, Antoine Pitrou wrote: > That's a rather awful name. time.time() is *the* real time. > > time.monotonic(fallback=False) would be a better API. I think calling the function "monotonic" isn't really a good name if it's not always monotonic.

time.monotonic(fallback=False)

Really just means

time.monotonic(monotonic=False)

And

time.monotonic(strict=True)

Really means

time.monotonic(i_really_mean_it=True)

This is potentially confusing.
Therefore

time.clock()
time.time()
time.realtime()
time.wallclock()

are all better options if there is a flag to switch whether it's monotonic or not. Since time.clock() apparently can mean different things on different platforms because of its use of the C-API, we can't use that. I'm not sure why time.time() would differ from time.realtime(). time.time() is by definition not monotonic, but time.time(monotonic=True) is maybe a possibility? //Lennart From tjreedy at udel.edu Wed Mar 14 18:55:27 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 14 Mar 2012 13:55:27 -0400 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: References: <4F5FF662.4070201@v.loewis.de> Message-ID: On 3/14/2012 12:10 PM, VanL wrote: > On 3/14/2012 10:56 AM, Terry Reedy wrote: >> Are you talking about 'install for all users' versus 'install for this >> user only'? I have always done the former as I see no point to the >> latter on my machine, even if another family member has an account. > > Yes, but some people are on corporate machines that only allow "install > for this user" installations. Ok. On such machines, system install (by vendor) == only 'base' install == only 'all user' install. >>> I am fine with keeping the distinction between >> > base installs (no py_version) >> >> I have no idea what this means. As far as I can remember, each >> installation of Python x.y (back to 1.3 for me, on DOS) has gone into a >> pythonxy (no dot) directory, with subdirectories much as Jim J. >> described. > > I am referring to the currently-existing install schemes 'nt' ('install > for all users') and 'nt-user' ('install for this user only'). The > *current* layouts are described at
The > *current* layouts are described at > http://hg.python.org/distutils2/file/2cec52b682a9/distutils2/_backport/sysconfig.cfg: > > L57-65: > [nt] > stdlib = {base}/Lib > platstdlib = {base}/Lib > purelib = {base}/Lib/site-packages > platlib = {base}/Lib/site-packages > include = {base}/Include > platinclude = {base}/Include > scripts = {base}/Scripts > data = {base} Is this from 2.x? Currently, in 3.x, Scripts is tucked inside Tools, so it seems to be scripts = {base}/Tools/Scripts > > L86-93: > [nt_user] > stdlib = {userbase}/Python{py_version_nodot} > platstdlib = {userbase}/Python{py_version_nodot} > purelib = {userbase}/Python{py_version_nodot}/site-packages > platlib = {userbase}/Python{py_version_nodot}/site-packages > include = {userbase}/Python{py_version_nodot}/Include > scripts = {userbase}/Scripts > data = {userbase} > > I am proposing that these change to: > > [nt] > stdlib = {base}/lib > platstdlib = {base}/lib > purelib = {base}/lib/site-packages > platlib = {base}/lib/site-packages > include = {base}/include > platinclude = {base}/include > scripts = {base}/bin > data = {base} > > [nt_user] > stdlib = {userbase}/python{py_version_short} > platstdlib = {userbase}/python{py_version_short} > purelib = {userbase}/python{py_version_short}/site-packages > platlib = {userbase}/python{py_version_short}/site-packages > include = {userbase}/python{py_version_short}/include > scripts = {userbase}/bin > data = {userbase} OK, now I see where 'base' and 'userbase' come from. This is an area I have ignored, so I only have a user view of the result. > All the other diuectories that Jim talked about would not be affected by > this proposal. > > Does this make it clearer? Now that we are speaking the same language, yes. Thank you. Lowercasing 'Include' is fine with me. The only question is how it affects tools in the field. 
Lowercasing 'Lib' would also be fine if 'libs' were changed to 'libraries' or 'headers' or perhaps even better, '_libs', as normal users never have any reason to look therein. Just something more easily distinguished from 'lib'. Same comment about tools. The present installed directory scheme is a hodgepodge of almost all caps, initial cap, and no cap. I would not mind more consistency. The only directory I regularly look in is Lib, so my main concern is that that be visually easy to find with my less than perfect vision. -- Terry Jan Reedy From jimjjewett at gmail.com Wed Mar 14 18:59:23 2012 From: jimjjewett at gmail.com (Jim J. Jewett) Date: Wed, 14 Mar 2012 10:59:23 -0700 (PDT) Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: Message-ID: <4f60dc7b.0ad1320a.7924.ffffb2bd@mx.google.com> In http://mail.python.org/pipermail/python-dev/2012-March/117617.html van.lindberg at gmail.com posted: > As noted earlier in the thread, I also change my proposal to maintain > the existing differences between system installs and user installs. [Wanted lower case, which should be irrelevant; sysconfig.get_python_inc already assumes lower case despite the configuration file.] [Wanted "bin" instead of "Scripts", even though they aren't binaries.] If there are to be any changes, I *am* tempted to at least harmonize the two install types, but to use the less redundant system form. If the user is deliberately trying to hide that it is version 33 (or even that it is python), then so be it; defaulting to redundant information is not an improvement. Set the base/userbase at install time, with defaults of base = %SystemDrive%\{py_version_nodot} userbase = %USERPROFILE%\Application Data\{py_version_nodot} usedbase = base for system installs; userbase for per-user installs. 
Then let the rest default to subdirectories; sysconfig.get_config_vars on windows explicitly doesn't provide as many variables as unix, just INCLUDEPY (which should default to {usedbase}/include) and LIBDEST and BINLIBDEST (both of which should default to {usedbase}/lib). And no, I'm not forgetting data or scripts. As best I can tell, sysconfig doesn't actually expose them, and there is no Scripts directory on my machine (except inside Tools). Perhaps some installers create it when they install their own extensions? -jJ -- If there are still threading problems with my replies, please email me with details, so that I can try to resolve them. -jJ From regebro at gmail.com Wed Mar 14 19:07:29 2012 From: regebro at gmail.com (Lennart Regebro) Date: Wed, 14 Mar 2012 19:07:29 +0100 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: References: <20120314101618.5cf1850f@pitrou.net> <20120314132747.6c857a53@pitrou.net> Message-ID: 2012/3/14 Kristján Valur Jónsson : >> - By default, it should fall back to time.time if a better source is >> not available, but there should be a flag that can disable this >> fallback for users who really *need* a monotonic/steady time source. > As pointed out on a different thread, you don't need this "flag" since the code can easily enforce the monotonic property by maintaining a static value. With this, I think time.steady() would be clear and nice. //Lennart From kristjan at ccpgames.com Wed Mar 14 19:37:05 2012 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Wed, 14 Mar 2012 18:37:05 +0000 Subject: [Python-Dev] SocketServer issues In-Reply-To: <20120314182307.4e4e9c86@pitrou.net> References: <20120314100226.04627743@pitrou.net> <20120314182307.4e4e9c86@pitrou.net> Message-ID: A different implementation (e.g. one using Windows IOCP) can do timeouts without using select (and must, since select does not work with IOCP).
So will a gevent-based implementation: it will time out the accept on each socket individually, not by calling select on each of them. The reason I'm fretting is latency. There is only one thread accepting connections. If it has to do an extra event loop dance for every socket that it accepts, that adds to the delay in getting a response from the server. Accept() is indeed critical for socket server performance. Maybe this is all just nonsense; still, it seems odd to jump through extra hoops to emulate functionality that is already supported by the socket spec, and can be done in the most appropriate way for each implementation. K -----Original Message----- From: python-dev-bounces+kristjan=ccpgames.com at python.org [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On Behalf Of Antoine Pitrou Sent: 14. mars 2012 10:23 To: python-dev at python.org Subject: Re: [Python-Dev] SocketServer issues On Wed, 14 Mar 2012 16:59:47 +0000 Kristján Valur Jónsson wrote: > > It just seems odd to me that it was designed to use the "select" api > to do timeouts, > where timeouts are already part of the socket protocol and can be implemented more efficiently there. How is it more efficient if it uses the exact same system calls? And why are you worrying exactly? I don't understand why accept() would be critical for performance. Thanks Antoine. From guido at python.org Wed Mar 14 19:43:32 2012 From: guido at python.org (Guido van Rossum) Date: Wed, 14 Mar 2012 11:43:32 -0700 Subject: [Python-Dev] SocketServer issues In-Reply-To: References: <20120314100226.04627743@pitrou.net> <20120314182307.4e4e9c86@pitrou.net> Message-ID: 2012/3/14 Kristján Valur Jónsson : > Maybe this is all just nonsense; still, it seems odd to jump through extra hoops to emulate functionality that is already supported by the socket spec, and can be done in the most appropriate way for each implementation. I thought I had already explained why setting the timeout on the socket is not the same.
-- --Guido van Rossum (python.org/~guido) From anacrolix at gmail.com Wed Mar 14 19:47:13 2012 From: anacrolix at gmail.com (Matt Joiner) Date: Thu, 15 Mar 2012 02:47:13 +0800 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: References: <20120314101618.5cf1850f@pitrou.net> <20120314132747.6c857a53@pitrou.net> Message-ID: I also can live with steady, with strict for the flag. -------------- next part -------------- An HTML attachment was scrubbed... URL: From g.brandl at gmx.net Wed Mar 14 19:52:30 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Wed, 14 Mar 2012 19:52:30 +0100 Subject: [Python-Dev] 2012 Language Summit Report In-Reply-To: References: Message-ID: On 14.03.2012 15:12, Brian Curtin wrote: > As with last year, I've put together a summary of the Python Language > Summit which took place last week at PyCon 2012. This was compiled > from my notes as well as those of Eric Snow and Senthil Kumaran, and I > think we got decent coverage of what was said throughout the day. > > http://blog.python.org/2012/03/2012-language-summit-report.html > > If you have questions or comments about discussions which occurred > there, please create a new thread for your topic. > > Feel free to contact me directly if I've left anything out or > misprinted anything. Thanks for the comprehensive report (I'm still reading). May I request for the future that you also paste a copy in the email to the group, for purposes of archiving and ease of discussing? (Just like we also post PEPs to python-dev for discussion even when they are already online.) 
cheers, Georg From v+python at g.nevcal.com Wed Mar 14 19:55:30 2012 From: v+python at g.nevcal.com (Glenn Linderman) Date: Wed, 14 Mar 2012 11:55:30 -0700 Subject: [Python-Dev] 2012 Language Summit Report In-Reply-To: References: Message-ID: <4F60E9A2.4090100@g.nevcal.com> On 3/14/2012 8:57 AM, Terry Reedy wrote: > On 3/14/2012 10:12 AM, Brian Curtin wrote: >> As with last year, I've put together a summary of the Python Language >> Summit which took place last week at PyCon 2012. This was compiled >> from my notes as well as those of Eric Snow and Senthil Kumaran, and I >> think we got decent coverage of what was said throughout the day. >> >> http://blog.python.org/2012/03/2012-language-summit-report.html > > Nicely done. Thank you. > Thanks. Almost feels like I was there! -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian at python.org Wed Mar 14 20:12:59 2012 From: brian at python.org (Brian Curtin) Date: Wed, 14 Mar 2012 14:12:59 -0500 Subject: [Python-Dev] 2012 Language Summit Report In-Reply-To: References: Message-ID: On Wed, Mar 14, 2012 at 13:52, Georg Brandl wrote: > Thanks for the comprehensive report (I'm still reading). ?May I request > for the future that you also paste a copy in the email to the group, for > purposes of archiving and ease of discussing? ?(Just like we also post > PEPs to python-dev for discussion even when they are already online.) Certainly -- good idea. I have a few updates and corrections to make this evening, then I'll get a copy of it posted. 
From g.brandl at gmx.net Wed Mar 14 20:33:27 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Wed, 14 Mar 2012 20:33:27 +0100 Subject: [Python-Dev] cpython: PEP 417: Adding unittest.mock In-Reply-To: References: Message-ID: On 14.03.2012 20:25, michael.foord wrote:

> http://hg.python.org/cpython/rev/2fda048ee32a
> changeset: 75632:2fda048ee32a
> user: Michael Foord
> date: Wed Mar 14 12:24:34 2012 -0700
> summary:
> PEP 417: Adding unittest.mock
>
> files:
> Lib/unittest/mock.py | 2151 ++++++++++
> Lib/unittest/test/__init__.py | 1 +
> Lib/unittest/test/testmock/__init__.py | 17 +
> Lib/unittest/test/testmock/support.py | 23 +
> Lib/unittest/test/testmock/testcallable.py | 159 +
> Lib/unittest/test/testmock/testhelpers.py | 835 +++
> Lib/unittest/test/testmock/testmagicmethods.py | 382 +
> Lib/unittest/test/testmock/testmock.py | 1258 +++++
> Lib/unittest/test/testmock/testpatch.py | 1652 +++++++
> Lib/unittest/test/testmock/testsentinel.py | 28 +
> Lib/unittest/test/testmock/testwith.py | 176 +
> 11 files changed, 6682 insertions(+), 0 deletions(-)

I hope we also get some Dock/ ;) Georg From kristjan at ccpgames.com Wed Mar 14 20:35:54 2012 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Wed, 14 Mar 2012 19:35:54 +0000 Subject: [Python-Dev] SocketServer issues In-Reply-To: References: <20120314100226.04627743@pitrou.net> <20120314182307.4e4e9c86@pitrou.net> Message-ID: Yes, setting a timeout and leaving it that way is not the same. But setting the timeout for _accept only_ is the "same" except one approach requires the check of a bool return, the other the handling of a socket.timeout exception. My point is, if sockets already have nice and well-defined timeout semantics, why not use them, or even improve them (perhaps with an optional timeout parameter to the accept call) rather than reimplement them with an explicit select.select() call? Anyway, I'll take another look at the problem and possibly submit a patch suggestion.
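[Editor's note: for readers skimming the thread, the two timeout mechanisms being argued over can be contrasted in a rough sketch. This is illustrative editorial code, not the stdlib implementation:]

```python
import select
import socket

def accept_with_select(listener, poll_interval):
    # The socketserver approach: poll the *listening* socket, then accept.
    ready, _, _ = select.select([listener], [], [], poll_interval)
    if not ready:
        return None  # timed out; a serve_forever loop can check shutdown flags here
    return listener.accept()

def accept_with_settimeout(listener, timeout):
    # The alternative suggested above: let the socket layer do the timeout,
    # turning it into an exception instead of an empty select() result.
    listener.settimeout(timeout)
    try:
        return listener.accept()
    except socket.timeout:
        return None
```

Both sketches give up after the interval when no connection is pending; the disagreement in the thread is about where the waiting happens and how easily a framework can substitute its own mechanism.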
Thanks. K -----Original Message----- From: gvanrossum at gmail.com [mailto:gvanrossum at gmail.com] On Behalf Of Guido van Rossum Sent: 14. mars 2012 11:44 To: Kristján Valur Jónsson Cc: Antoine Pitrou; python-dev at python.org Subject: Re: [Python-Dev] SocketServer issues 2012/3/14 Kristján Valur Jónsson : > Maybe this is all just nonsense; still, it seems odd to jump through extra hoops to emulate functionality that is already supported by the socket spec, and can be done in the most appropriate way for each implementation. I thought I had already explained why setting the timeout on the socket is not the same. -- --Guido van Rossum (python.org/~guido) From lukasz at langa.pl Wed Mar 14 20:49:55 2012 From: lukasz at langa.pl (=?iso-8859-2?Q?=A3ukasz_Langa?=) Date: Wed, 14 Mar 2012 12:49:55 -0700 Subject: [Python-Dev] cpython: PEP 417: Adding unittest.mock In-Reply-To: References: Message-ID: <38BE6358-8BA1-4080-BA27-78B4BCABDC57@langa.pl> Message written by Georg Brandl on 14 Mar 2012, at 12:33: > On 14.03.2012 20:25, michael.foord wrote:

>> http://hg.python.org/cpython/rev/2fda048ee32a
>> changeset: 75632:2fda048ee32a
>> user: Michael Foord
>> date: Wed Mar 14 12:24:34 2012 -0700
>> summary:
>> PEP 417: Adding unittest.mock
>>
>> files:
>> Lib/unittest/mock.py | 2151 ++++++++++
>> Lib/unittest/test/__init__.py | 1 +
>> Lib/unittest/test/testmock/__init__.py | 17 +
>> Lib/unittest/test/testmock/support.py | 23 +
>> Lib/unittest/test/testmock/testcallable.py | 159 +
>> Lib/unittest/test/testmock/testhelpers.py | 835 +++
>> Lib/unittest/test/testmock/testmagicmethods.py | 382 +
>> Lib/unittest/test/testmock/testmock.py | 1258 +++++
>> Lib/unittest/test/testmock/testpatch.py | 1652 +++++++
>> Lib/unittest/test/testmock/testsentinel.py | 28 +
>> Lib/unittest/test/testmock/testwith.py | 176 +
>> 11 files changed, 6682 insertions(+), 0 deletions(-)

> > I hope we also get some Dock/ ;) Mock the doc!
-- Best regards, Łukasz Langa Senior Systems Architecture Engineer IT Infrastructure Department Grupa Allegro Sp. z o.o. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjreedy at udel.edu Wed Mar 14 21:08:18 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 14 Mar 2012 16:08:18 -0400 Subject: [Python-Dev] [Python-checkins] cpython: PEP 417: Adding unittest.mock In-Reply-To: References: Message-ID: <4F60FAB2.7020207@udel.edu> On 3/14/2012 3:25 PM, michael.foord wrote:

> +# mock.py
> +# Test tools for mocking and patching.

Should there be a note here about restrictions on editing this file? I notice that there are things like

> +class OldStyleClass:
> +    pass
> +ClassType = type(OldStyleClass)

which are only present for running under Py2 and which would normally be removed for Py3. --- Terry Jan Reedy From fuzzyman at voidspace.org.uk Wed Mar 14 21:22:09 2012 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Wed, 14 Mar 2012 13:22:09 -0700 Subject: [Python-Dev] [Python-checkins] cpython: PEP 417: Adding unittest.mock In-Reply-To: <4F60FAB2.7020207@udel.edu> References: <4F60FAB2.7020207@udel.edu> Message-ID: <03503024-ECFB-462F-BF08-022BF93578D0@voidspace.org.uk> On 14 Mar 2012, at 13:08, Terry Reedy wrote: > On 3/14/2012 3:25 PM, michael.foord wrote: >> +# mock.py >> +# Test tools for mocking and patching. > > Should there be a note here about restrictions on editing this file? > I notice that there are things like > > > +class OldStyleClass: > > + pass > > +ClassType = type(OldStyleClass) > > which are only present for running under Py2 and which would normally be removed for Py3. Yeah, I removed as much of the Python 2 compatibility code and thought I'd got it all. Thanks for pointing it out. I'm maintaining a "clean" (no Python 2 compatibility code) version in the standard library. I'll be maintaining mock, so I'd like to be assigned any issues on it and at least talked to before changes are made.
I am maintaining a backport still, but the Python standard library version is the canonical version. All the best, Michael Foord > > --- > Terry Jan Reedy > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk > -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html From fuzzyman at voidspace.org.uk Wed Mar 14 21:27:43 2012 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Wed, 14 Mar 2012 13:27:43 -0700 Subject: [Python-Dev] cpython: PEP 417: Adding unittest.mock In-Reply-To: References: Message-ID: On 14 Mar 2012, at 12:33, Georg Brandl wrote: > On 14.03.2012 20:25, michael.foord wrote: >> http://hg.python.org/cpython/rev/2fda048ee32a >> changeset: 75632:2fda048ee32a >> user: Michael Foord >> date: Wed Mar 14 12:24:34 2012 -0700 >> summary: >> PEP 417: Adding unittest.mock >> >> files: >> Lib/unittest/mock.py | 2151 ++++++++++ >> Lib/unittest/test/__init__.py | 1 + >> Lib/unittest/test/testmock/__init__.py | 17 + >> Lib/unittest/test/testmock/support.py | 23 + >> Lib/unittest/test/testmock/testcallable.py | 159 + >> Lib/unittest/test/testmock/testhelpers.py | 835 +++ >> Lib/unittest/test/testmock/testmagicmethods.py | 382 + >> Lib/unittest/test/testmock/testmock.py | 1258 +++++ >> Lib/unittest/test/testmock/testpatch.py | 1652 +++++++ >> Lib/unittest/test/testmock/testsentinel.py | 28 + >> Lib/unittest/test/testmock/testwith.py | 176 + >> 11 files changed, 6682 insertions(+), 0 deletions(-) > > I hope we also get some Dock/ ;) I was thinking about providing documentation, but I thought maybe I'd leave it for you Georg. It's nice to get an outside perspective when documenting a new api.... 
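[Editor's note: for list readers who have not used the newly merged module, its core idiom fits in a few lines. A minimal illustration of the public API, not taken from the change itself:]

```python
from unittest import mock

m = mock.Mock()
m.method(1, key='value')                  # attributes spring into existence as callables
m.method.assert_called_once_with(1, key='value')
print(m.method.call_count)                # -> 1

# patch() swaps an object out for the duration of a block, then restores it.
with mock.patch('os.getcwd', return_value='/nowhere'):
    import os
    print(os.getcwd())                    # -> /nowhere
```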
I have issue 14295 to track adding mock and it won't be closed until the docs are completed. http://bugs.python.org/issue14295 On the topic of docs.... mock documentation is about eight pages long. My intention was to strip this down to just the api documentation, along with a link to the docs on my site for further examples and so on. I was encouraged here at the sprints to include the full documentation instead (minus the mock library comparison page and the front page can be cut down). So this is what I am now intending to include. It does mean the mock documentation will be "extensive". All the best, Michael > > Georg > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk > -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html From benjamin at python.org Wed Mar 14 21:30:25 2012 From: benjamin at python.org (Benjamin Peterson) Date: Wed, 14 Mar 2012 15:30:25 -0500 Subject: [Python-Dev] cpython: PEP 417: Adding unittest.mock In-Reply-To: References: Message-ID: 2012/3/14 Michael Foord : > On the topic of docs.... mock documentation is about eight pages long. My intention was to strip this down to just the api documentation, along with a link to the docs on my site for further examples and so on. I was encouraged here at the sprints to include the full documentation instead Yes, please do. 
-- Regards, Benjamin From tjreedy at udel.edu Wed Mar 14 21:46:29 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 14 Mar 2012 16:46:29 -0400 Subject: [Python-Dev] [Python-checkins] cpython: PEP 417: Adding unittest.mock In-Reply-To: <03503024-ECFB-462F-BF08-022BF93578D0@voidspace.org.uk> References: <4F60FAB2.7020207@udel.edu> <03503024-ECFB-462F-BF08-022BF93578D0@voidspace.org.uk> Message-ID: On 3/14/2012 4:22 PM, Michael Foord wrote: > > On 14 Mar 2012, at 13:08, Terry Reedy wrote: > >> On 3/14/2012 3:25 PM, michael.foord wrote: >>>> +# mock.py +# Test tools for mocking and patching. > >>> Should there be a note here about restrictions on editing this >>> file? I notice that there are things like >>> >>>> +class OldStyleClass: + pass +ClassType = type(OldStyleClass) >>> >>> which are only present for running under Py2 and which would >>> normally be removed for Py3. > > Yeah, I removed as much of the Python 2 compatibility code and > thought I'd got it all. Thanks for pointing it out. 2000 lines is a lot to check through. > > I'm maintaining a "clean" (no Python 2 compatibility code) version in > the standard library. Great. Here is something else, which is why I thought otherwise ;-).

+def _instance_callable(obj):
+    """Given an object, return True if the object is callable.
+    For classes, return True if instances would be callable."""
+    if not isinstance(obj, type):
+        # already an instance
+        return getattr(obj, '__call__', None) is not None
+
+    klass = obj
>>>+    # uses __bases__ instead of __mro__ so that we work with old style classes
+    if klass.__dict__.get('__call__') is not None:
+        return True
+
+    for base in klass.__bases__:
+        if _instance_callable(base):
+            return True
+    return False

If you want to leave the code as is, remove or revise the comment. > I'll be maintaining mock, so I'd like to be > assigned any issues on it and at least talked to before changes are > made.
I am maintaining a backport still, but the Python standard
> library version is the canonical version.

Add unittest.mock to devguide/experts.rst and yourself with * appended.

---
Searching for 'old', I also found

+def _must_skip(spec, entry, is_type):
+    if not isinstance(spec, type):
+        if entry in getattr(spec, '__dict__', {}):
+            # instance attribute - shouldn't skip
+            return False
>>>+        # can't use type because of old style classes
+        spec = spec.__class__
+    if not hasattr(spec, '__mro__'):
>>>+        # old style class: can't have descriptors anyway
+        return is_type

In testcallable.py

+    def test_patch_spec_callable_class(self):
+        class CallableX(X):
+            def __call__(self):
+                pass
+
+        class Sub(CallableX):
+            pass
+
+        class Multi(SomeClass, Sub):
+            pass
+
>>>+        class OldStyle:
+            def __call__(self):
+                pass
+
>>>+        class OldStyleSub(OldStyle):
+            pass
+
+        for arg in 'spec', 'spec_set':
>>>+            for Klass in CallableX, Sub, Multi, OldStyle, OldStyleSub:

This is the last.

--
Terry Jan Reedy

From g.brandl at gmx.net Wed Mar 14 21:52:44 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Wed, 14 Mar 2012 21:52:44 +0100 Subject: [Python-Dev] cpython: Issue #14200: Idle shell crash on printing non-BMP unicode character. In-Reply-To: References: Message-ID: On 14.03.2012 21:46, andrew.svetlov wrote:
> diff --git a/Lib/idlelib/rpc.py b/Lib/idlelib/rpc.py
> --- a/Lib/idlelib/rpc.py
> +++ b/Lib/idlelib/rpc.py
> @@ -196,8 +196,12 @@
>                 return ("ERROR", "Unsupported message type: %s" % how)
>         except SystemExit:
>             raise
> +       except KeyboardInterrupt:
> +           raise
>         except socket.error:
>             raise
> +       except Exception as ex:
> +           return ("CALLEXC", ex)
>         except:
>             msg = "*** Internal Error: rpc.py:SocketIO.localcall()\n\n"\
>                   " Object: %s \n Method: %s \n Args: %s\n"

It appears that this would be better written as except socket.error: raise except Exception as ex: return ("CALLEXC", ex) except: # BaseException, i.e.
SystemExit, KeyboardInterrupt, GeneratorExit raise Georg From nadeem.vawda at gmail.com Wed Mar 14 22:17:52 2012 From: nadeem.vawda at gmail.com (Nadeem Vawda) Date: Wed, 14 Mar 2012 23:17:52 +0200 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: References: <20120314101618.5cf1850f@pitrou.net> <20120314132747.6c857a53@pitrou.net> Message-ID: +1 for time.steady(strict=False). On Wed, Mar 14, 2012 at 7:09 PM, Kristján Valur Jónsson wrote: >> - By default, it should fall back to time.time if a better source is >> not available, but there should be a flag that can disable this >> fallback for users who really *need* a monotonic/steady time source. > As pointed out on a different thread, you don't need this "flag" since the code can easily enforce the monotonic property by maintaining a static value. > This is how we worked around buggy implementations of QueryPerformanceCounter on windows (). > K That's fine if you just need the clock to be monotonic, but it isn't any help if you also want to prevent it from jumping forward. Cheers, Nadeem From fuzzyman at voidspace.org.uk Wed Mar 14 22:41:52 2012 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Wed, 14 Mar 2012 14:41:52 -0700 Subject: [Python-Dev] [Python-checkins] cpython: PEP 417: Adding unittest.mock In-Reply-To: References: <4F60FAB2.7020207@udel.edu> <03503024-ECFB-462F-BF08-022BF93578D0@voidspace.org.uk> Message-ID: <5BA5E20A-C4B8-4A2A-B38B-A1286FC65247@voidspace.org.uk> On 14 Mar 2012, at 13:46, Terry Reedy wrote: > On 3/14/2012 4:22 PM, Michael Foord wrote: >> >> On 14 Mar 2012, at 13:08, Terry Reedy wrote: >> >>> On 3/14/2012 3:25 PM, michael.foord wrote: >>>> +# mock.py +# Test tools for mocking and patching. > >>> Should there be a note here about restrictions on editing this >>> file?
I notice that there are things like >>> >>>> +class OldStyleClass: + pass +ClassType = type(OldStyleClass) >>> >>> which are only present for running under Py2 and which would >>> normally be removed for Py3. >> >> >> Yeah, I removed as much of the Python 2 compatibility code and >> thought I'd got it all. Thanks for pointing it out. > > 2000 lines is a lot to check through. >> >> I'm maintaining a "clean" (no Python 2 compatibility code) version in >> the standard library. > > Great. Here is something else, which is why I thought otherwise ;-). > > +def _instance_callable(obj): > + """Given an object, return True if the object is callable. > + For classes, return True if instances would be callable.""" > + if not isinstance(obj, type): > + # already an instance > + return getattr(obj, '__call__', None) is not None > + > + klass = obj > + # uses __bases__ instead of __mro__ so that we work with > >>> old style classes > + if klass.__dict__.get('__call__') is not None: > + return True > + > + for base in klass.__bases__: > + if _instance_callable(base): > + return True > + return False > > If you want to leave the code as is, remove or revise the comment. Thanks very much for finding these, I'm pretty sure I've fixed all the ones you reported - and one more case where try...except...finally can now be used. All the best, Michael Foord > >> I'll be maintaining mock, so I'd like to be >> assigned any issues on it and at least talked to before changes are >> made. I am maintaining a backport still, but the Python standard >> library version is the canonical version. > > Add unittest.mock to devguide/experts.rst and yourself with * appended. 
[snip - the rest of Terry's message, quoted above in full....]

--
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html From mhammond at skippinet.com.au Wed Mar 14 23:39:53 2012 From: mhammond at skippinet.com.au (Mark Hammond) Date: Thu, 15 Mar 2012 09:39:53 +1100 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F60C169.9030404@gmail.com> References: <4F603B76.4050004@gmail.com> <4F60C169.9030404@gmail.com> Message-ID: <4F611E39.5070605@skippinet.com.au> On 15/03/2012 3:03 AM, Lindberg, Van wrote: > On 3/14/2012 1:32 AM, Mark Hammond wrote: >> As per comments later in the thread, I'm -1 on including >> "python{py_version_short}" in the lib directories for a number of >> reasons; one further reason not outlined is that it would potentially >> make running Python directly from a built tree difficult. For the same >> reason, I'm also -1 on having that in the include dir. > > A built tree would look the same as always - the directories would only > be moved (if at all) during installation. Thus, you will still be able > to run python directly from a built installation. So long as the location of the "lib" dir doesn't change, that is probably true. If lib was to change though, then PC/getpathp.c would need to change which is where my concern came from. >>> scripts = {base/userbase}/bin >> >> We should note that this may cause pain for a number of projects - I've >> seen quite a few projects that already assume "Scripts" on Windows - eg, >> virtualenv and setuptools IIRC > > If you look at these projects, though, they *special case* Windows to > account for the different layout. Removing this difference will allow > these projects to remove their special-casing code. I don't think that is true. One of the examples I offered was a .bat file - it wouldn't be possible to remove the .bat file with your proposal. The other example was the Windows specific launcher. 
Most things that need to locate the Python executable aren't actually Python code - once Python is running, locating the executable is as simple as sys.executable. So by their very nature, tools needing to locate Python will tend to be platform specific in the first place. Can you offer any examples of 3rd party tools which could unify code in this scheme, and particularly, where this scheme would cause them to have less code, not more? >> - and also assume the executable is where >> it currently lives - one example off the top of my head is the mozilla >> "jetpack" project - see: >> >> https://github.com/mozilla/addon-sdk/blob/master/bin/activate.bat#L117 > > This code actually reinforces my point: > > First, this code would actually still work. The section ":FoundPython" > sets the PATH to "%VIRTUAL_ENV%\bin;%PYTHONINSTALL%;%PATH%" (L80), which > would still allow python.exe to be found and run. > > Second, look at that line again. Mozilla actually has edited this code > so that the jetpack uses a cross-platform "bin" convention, just as I am > proposing. I think you misunderstand the .bat file - there is no python executable in the bin directory. The bat file is locating your already installed Python and attempting to use it. > Third, one element of this proposal is that there would be a key placed > in the registry that points directly to the location of the python > executable, making locating it trivial to locate programmatically on > Windows. That sounds reasonable, but it still causes breakage, and still causes extra code for tools needing to support earlier versions. Saying "hey, it's easy to fix" are just words to someone frustrated trying to get things working with a later version of Python. Don't get me wrong - the scheme you propose is how it should have been done in the first place, no question. My issue is the breakage this will cause versus the benefit. 
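(As an aside: the introspection Mark mentions, together with the install-scheme directories whose names this thread is debating, can be inspected from any running interpreter via the stdlib ``sysconfig`` module, available since Python 3.2. A sketch - the scheme keys printed here are sysconfig's own, not part of any proposal:)

```python
import sys
import sysconfig

# Once Python is running, locating the executable is trivial:
print(sys.executable)

# sysconfig exposes the per-platform install directories (Scripts vs. bin,
# Lib vs. lib, etc.) that the thread is discussing:
paths = sysconfig.get_paths()
for key in ("scripts", "include", "stdlib", "purelib"):
    print(key, "->", paths[key])
```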
My other questions still remain: who specifically will benefit from this, and what would be the cost to those beneficiaries of sticking with the existing scheme? The only benefit I've seen suggested is aesthetics, and while that is laudable, I don't think it is enough to justify breakage. Cheers, Mark From van.lindberg at gmail.com Thu Mar 15 00:15:55 2012 From: van.lindberg at gmail.com (VanL) Date: Wed, 14 Mar 2012 18:15:55 -0500 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F611E39.5070605@skippinet.com.au> References: <4F603B76.4050004@gmail.com> <4F60C169.9030404@gmail.com> <4F611E39.5070605@skippinet.com.au> Message-ID: <4F6126AB.7020309@gmail.com> On 3/14/2012 5:39 PM, Mark Hammond wrote: > Can you offer any examples of 3rd party tools which could unify code in > this scheme, and particularly, where this scheme would cause them to > have less code, not more? How about virtualenv: """ def path_locations(home_dir): """Return the path locations for the environment (where libraries are, where scripts go, etc)""" # XXX: We'd use distutils.sysconfig.get_python_inc/lib but its # prefix arg is broken: http://bugs.python.org/issue3386 if sys.platform == 'win32': [snip a bit about spaces in the path....] lib_dir = join(home_dir, 'Lib') inc_dir = join(home_dir, 'Include') bin_dir = join(home_dir, 'Scripts') if is_jython: lib_dir = join(home_dir, 'Lib') inc_dir = join(home_dir, 'Include') bin_dir = join(home_dir, 'bin') elif is_pypy: lib_dir = home_dir inc_dir = join(home_dir, 'include') bin_dir = join(home_dir, 'bin') elif sys.platform != 'win32': lib_dir = join(home_dir, 'lib', py_version) inc_dir = join(home_dir, 'include', py_version + abiflags) bin_dir = join(home_dir, 'bin') return home_dir, lib_dir, inc_dir, bin_dir """ > I think you misunderstand the .bat file - there is no python executable > in the bin directory. The bat file is locating your already installed > Python and attempting to use it. 
My only point here is that it would still find the already-installed Python (I think). > My other questions still remain: who specifically will benefit from > this, and what would be the cost to those beneficiaries of sticking with > the existing scheme? I will benefit, for one. My use case is that I do cross-platform development and deployment, and I occasionally want to put an entire environment in source control. Currently the case changing and Scripts/bin distinction make this a distinct pain, such that I go in and edit my Windows python installation in the way that I am describing right now. From my actual experience with this layout, pip, virtualenv, and pypm are the only three major packages that hard-code this logic and would need to be changed slightly. Thanks, Van From skippy.hammond at gmail.com Thu Mar 15 00:30:54 2012 From: skippy.hammond at gmail.com (Mark Hammond) Date: Thu, 15 Mar 2012 10:30:54 +1100 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F6126AB.7020309@gmail.com> References: <4F603B76.4050004@gmail.com> <4F60C169.9030404@gmail.com> <4F611E39.5070605@skippinet.com.au> <4F6126AB.7020309@gmail.com> Message-ID: <4F612A2E.9030805@gmail.com> [resending - original reply went only to Van] On 15/03/2012 10:15 AM, Lindberg, Van wrote: > On 3/14/2012 5:39 PM, Mark Hammond wrote: >> Can you offer any examples of 3rd party tools which could unify code >> in this scheme, and particularly, where this scheme would cause them >> to have less code, not more? > > How about virtualenv: > > """ > def path_locations(home_dir): > """Return the path locations for the environment (where libraries are, > where scripts go, etc)""" > # XXX: We'd use distutils.sysconfig.get_python_inc/lib but its > # prefix arg is broken: http://bugs.python.org/issue3386 > if sys.platform == 'win32': > [snip a bit about spaces in the path....] 
>     lib_dir = join(home_dir, 'Lib')
>     inc_dir = join(home_dir, 'Include')
>     bin_dir = join(home_dir, 'Scripts')
> if is_jython:
>     lib_dir = join(home_dir, 'Lib')
>     inc_dir = join(home_dir, 'Include')
>     bin_dir = join(home_dir, 'bin')
> elif is_pypy:
>     lib_dir = home_dir
>     inc_dir = join(home_dir, 'include')
>     bin_dir = join(home_dir, 'bin')
> elif sys.platform != 'win32':
>     lib_dir = join(home_dir, 'lib', py_version)
>     inc_dir = join(home_dir, 'include', py_version + abiflags)
>     bin_dir = join(home_dir, 'bin')
> return home_dir, lib_dir, inc_dir, bin_dir
> """

So what would that look like in your scheme? I'd expect you wind up with:

    if sys.platform == 'win32' and sys.version_info < (3, 4):
        ... existing layout
    else:
        ... new layout

So it actually ends up as slightly *more* code.

>
>
>> I think you misunderstand the .bat file - there is no python
>> executable in the bin directory. The bat file is locating your
>> already installed Python and attempting to use it.
>
> My only point here is that it would still find the already-installed
> Python (I think).

I'm fairly sure it would not - it doesn't look in %PYTHONINSTALL%\bin.

>
>> My other questions still remain: who specifically will benefit from
>> this, and what would be the cost to those beneficiaries of sticking
>> with the existing scheme?
>
> I will benefit, for one. My use case is that I do cross-platform
> development and deployment, and I occasionally want to put an entire
> environment in source control. Currently the case changing and
> Scripts/bin distinction make this a distinct pain, such that I go in
> and edit my Windows python installation in the way that I am
> describing right now.
>
> From my actual experience with this layout, pip, virtualenv, and
> pypm are the only three major packages that hard-code this logic and
> would need to be changed slightly.

So why not just standardize on that new layout for virtualenvs?
Mark From cs at zip.com.au Thu Mar 15 00:38:02 2012 From: cs at zip.com.au (Cameron Simpson) Date: Thu, 15 Mar 2012 10:38:02 +1100 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: References: Message-ID: <20120314233802.GA6931@cskk.homeip.net> On 13Mar2012 17:27, Guido van Rossum wrote: | I think wallclock() is an awkward name; in other contexts I've seen | "wall clock time" used to mean the time that a clock on the wall would | show, i.e. local time. This matches definition #1 of | http://www.catb.org/jargon/html/W/wall-time.html (while yours matches | #2 :-). I think this also. A "wallclock()" function that did not return real world elapsed time seconds would be misleading or at least disconcerting. | Maybe it could be called realtime()? "elapsedtime()?" It is getting a bit long though. Cheers, -- Cameron Simpson DoD#743 http://www.cskk.ezoshosting.com/cs/ "Shot my dog today." "Was he mad?" "Well, he weren't too damned pleased." - Rick Tilson, rtilson at Sun.COM From glyph at twistedmatrix.com Thu Mar 15 01:21:55 2012 From: glyph at twistedmatrix.com (Glyph Lefkowitz) Date: Wed, 14 Mar 2012 17:21:55 -0700 Subject: [Python-Dev] sharing sockets among processes on windows In-Reply-To: References: Message-ID: On Mar 13, 2012, at 5:27 PM, Kristj?n Valur J?nsson wrote: > Hi, > I?m interested in contributing a patch to duplicate sockets between processes on windows. > Tha api to do this is WSADuplicateSocket/WSASocket(), as already used by dup() in the _socketmodule.c > Here?s what I have: Just in case anyone is interested, we also have a ticket for this in Twisted: . It would be great to share code as much as possible. -glyph -------------- next part -------------- An HTML attachment was scrubbed... URL: From kristjan at ccpgames.com Thu Mar 15 01:28:08 2012 From: kristjan at ccpgames.com (=?utf-8?B?S3Jpc3Rqw6FuIFZhbHVyIErDs25zc29u?=) Date: Thu, 15 Mar 2012 00:28:08 +0000 Subject: [Python-Dev] Drop the new time.wallclock() function? 
In-Reply-To: References: <20120314101618.5cf1850f@pitrou.net> <20120314132747.6c857a53@pitrou.net> Message-ID: What does "jumping forward" mean? That's what happens with every clock at every time quantum. The only effect here is that this clock will be slightly noisy, i.e. its precision becomes worse. On average it is still correct. Look at the use cases for this function 1) to enable timeouts for certain operations, like acquiring locks: Jumping backwards is bad, because that may cause infinite wait time. But jumping forwards is ok, it may just mean that your lock times out a bit early 2) performance measurements: If you are running on a platform with a broken runtime clock, you are not likely to be running performance measurements. Really, I urge you to skip the "strict" keyword. It just adds confusion. Instead, let's just give the best monotonic clock we can do which doesn't move backwards. Let's just provide a "practical" real time clock with high resolution that is appropriate for providing timeout functionality and so won't jump backwards for the next 20 years. Let's simply point out to people that it may not be appropriate for high-precision timings on old and obsolete hardware and be done with it. K -----Original Message----- From: python-dev-bounces+kristjan=ccpgames.com at python.org [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On Behalf Of Nadeem Vawda Sent: 14. mars 2012 14:18 To: Matt Joiner Cc: Antoine Pitrou; Guido van Rossum; python-dev at python.org Subject: Re: [Python-Dev] Drop the new time.wallclock() function? +1 for time.steady(strict=False). On Wed, Mar 14, 2012 at 7:09 PM, Kristján Valur Jónsson wrote: >> - By default, it should fall back to time.time if a better source is >> not available, but there should be a flag that can disable this >> fallback for users who really *need* a monotonic/steady time source.
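(The kind of clock being argued for here - one that never moves backwards, whatever the underlying source does - can be built by clamping each reading against the last value returned. A sketch of the idea, not the actual time module implementation:)

```python
import time

_last = 0.0

def steady():
    """Best-effort steady clock: monotonic by construction.

    Even if the underlying source (time.time here, standing in for the
    fallback case) is stepped backwards by NTP or an administrator, the
    value returned never decreases.  It can still jump *forward* with
    the source, which is the remaining objection in this thread.
    """
    global _last
    now = time.time()
    if now > _last:
        _last = now
    return _last

readings = [steady() for _ in range(1000)]
```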
> As pointed out on a different thread, you don't need this "flag" since the code can easily enforce the monotonic property by maintaining a static value. > This is how we worked around buggy implementations of QueryPerformanceCounter on windows (). > K That's fine if you just need the clock to be monotonic, but it isn't any help if you also want to prevent it from jumping forward. Cheers, Nadeem From victor.stinner at gmail.com Thu Mar 15 01:29:35 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 15 Mar 2012 01:29:35 +0100 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: References: Message-ID: <4F6137EF.9000000@gmail.com> On 14/03/2012 00:57, Victor Stinner wrote: > I added two functions to the time module in Python 3.3: wallclock() > and monotonic(). (...) I merged the two functions into one function: time.steady(strict=False). time.steady() should be monotonic most of the time, but may use a fallback. time.steady(strict=True) fails with OSError or NotImplementedError if reading the monotonic clock failed or if no monotonic clock is available. I patched the queue and threading modules to use time.steady() instead of time.time(). The documentation may need clarification. http://docs.python.org/dev/library/time.html#time.steady Victor From kristjan at ccpgames.com Thu Mar 15 01:32:17 2012 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Thu, 15 Mar 2012 00:32:17 +0000 Subject: [Python-Dev] sharing sockets among processes on windows In-Reply-To: References: Message-ID: Great. I was about to write unittests for my patch, when I found out that I wanted to use multiprocessing to run them.
So, I decided that the tests belonged in there rather than test_socket.py. This is where I stumbled upon code that multiprocessing uses to transfer sockets for unix. I need to read that code and understand it and see if it can be persuaded to magically work on windows too. K From: Glyph Lefkowitz [mailto:glyph at twistedmatrix.com] Sent: 14. mars 2012 17:22 To: Kristján Valur Jónsson Cc: Python-Dev (python-dev at python.org) Subject: Re: [Python-Dev] sharing sockets among processes on windows On Mar 13, 2012, at 5:27 PM, Kristján Valur Jónsson wrote: Hi, I'm interested in contributing a patch to duplicate sockets between processes on windows. The API to do this is WSADuplicateSocket/WSASocket(), as already used by dup() in the _socketmodule.c Here's what I have: Just in case anyone is interested, we also have a ticket for this in Twisted: . It would be great to share code as much as possible. -glyph From kristjan at ccpgames.com Thu Mar 15 01:40:55 2012 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Thu, 15 Mar 2012 00:40:55 +0000 Subject: [Python-Dev] SocketServer issues In-Reply-To: References: <20120314100226.04627743@pitrou.net> <20120314182307.4e4e9c86@pitrou.net> Message-ID: Fyi: http://bugs.python.org/issue14307 -----Original Message----- From: python-dev-bounces+kristjan=ccpgames.com at python.org [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On Behalf Of Kristján Valur Jónsson Sent: 14. mars 2012 12:36 To: Guido van Rossum Cc: Antoine Pitrou; python-dev at python.org Subject: Re: [Python-Dev] SocketServer issues Yes, setting a timeout and leaving it that way is not the same. But setting the timeout for _accept only_ is the "same" except one approach requires the check of a bool return, the other the handling of a socket.timeout exception.
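(For reference, the two styles being contrasted in this thread - a timeout set on the listening socket versus an explicit select() - look roughly like this; the 0.2-second timeout is an arbitrary value for illustration, not the SocketServer code itself:)

```python
import select
import socket

srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # ephemeral port; nothing will connect to it
srv.listen(1)

# Style 1: socket timeout semantics -- handle a socket.timeout exception.
srv.settimeout(0.2)
try:
    conn, _addr = srv.accept()
    timed_out = False
except socket.timeout:
    timed_out = True
print("accept timed out:", timed_out)

# Style 2: explicit select() -- check a boolean result instead.
srv.settimeout(None)
ready, _, _ = select.select([srv], [], [], 0.2)
print("connection pending:", bool(ready))

srv.close()
```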
My point is, if sockets already have nice and well-defined timeout semantics, why not use them, or even improve them (perhaps with an optional timeout parameter to the accept call) rather than reimplement them with an explicit select.select() call? Anyway, I'll take another look at the problem and possibly submit a patch suggestion. Thanks. K -----Original Message----- From: gvanrossum at gmail.com [mailto:gvanrossum at gmail.com] On Behalf Of Guido van Rossum Sent: 14. mars 2012 11:44 To: Kristján Valur Jónsson Cc: Antoine Pitrou; python-dev at python.org Subject: Re: [Python-Dev] SocketServer issues 2012/3/14 Kristján Valur Jónsson : > Maybe this is all just nonsense, still it seems odd to jump through extra hoops to emulate a functionality that is already supported by the socket spec, and can be done in the most appropriate way for each implementation. I thought I had already explained why setting the timeout on the socket is not the same. -- --Guido van Rossum (python.org/~guido) From victor.stinner at gmail.com Thu Mar 15 01:56:40 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 15 Mar 2012 01:56:40 +0100 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: <4F6137EF.9000000@gmail.com> References: <4F6137EF.9000000@gmail.com> Message-ID: > I merged the two functions into one function: time.steady(strict=False). I opened the issue #14309 to deprecate time.clock(): http://bugs.python.org/issue14309 time.clock() is a different clock type depending on the OS (Windows vs UNIX) and so is confusing. You should now decide between time.time() and time.steady().
time.time():
 - known starting point, Epoch (1970.1.1)
 - may be updated by the system (and so go backward or forward)
 => display time to the user, compare/set file modification time

time.steady():
 - unknown starting point
 - more accurate than time.time()
 - should be monotonic (use strict=True if you want to be sure ;-))
 => benchmark, timeout

Victor

From kristjan at ccpgames.com Thu Mar 15 02:38:41 2012 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Thu, 15 Mar 2012 01:38:41 +0000 Subject: [Python-Dev] [Python-checkins] cpython: PEP 417: Adding unittest.mock In-Reply-To: <5BA5E20A-C4B8-4A2A-B38B-A1286FC65247@voidspace.org.uk> References: <4F60FAB2.7020207@udel.edu> <03503024-ECFB-462F-BF08-022BF93578D0@voidspace.org.uk> <5BA5E20A-C4B8-4A2A-B38B-A1286FC65247@voidspace.org.uk> Message-ID: Fyi: http://bugs.python.org/issue14310 -----Original Message----- From: python-dev-bounces+kristjan=ccpgames.com at python.org [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On Behalf Of Michael Foord Sent: 14. mars 2012 14:42 To: Terry Reedy Cc: python-dev at python.org Subject: Re: [Python-Dev] [Python-checkins] cpython: PEP 417: Adding unittest.mock On 14 Mar 2012, at 13:46, Terry Reedy wrote: > On 3/14/2012 4:22 PM, Michael Foord wrote: >> >> On 14 Mar 2012, at 13:08, Terry Reedy wrote: >> >>> On 3/14/2012 3:25 PM, michael.foord wrote: >>>> +# mock.py +# Test tools for mocking and patching. > >>> Should there be a note here about restrictions on editing this >>> file? I notice that there are things like >>> >>>> +class OldStyleClass: + pass +ClassType = type(OldStyleClass) >>> >>> which are only present for running under Py2 and which would >>> normally be removed for Py3. >> >> >> Yeah, I removed as much of the Python 2 compatibility code and >> thought I'd got it all. Thanks for pointing it out. > > 2000 lines is a lot to check through.
[snip - the remainder of the quoted exchange, reproduced in full above....] From anacrolix at gmail.com Thu Mar 15 02:58:50 2012 From: anacrolix at gmail.com (Matt Joiner) Date: Thu, 15 Mar 2012 09:58:50 +0800 Subject: [Python-Dev] Drop the new time.wallclock() function?
In-Reply-To: References: <4F6137EF.9000000@gmail.com> Message-ID: Victor, I think that steady can always be monotonic, there are time sources enough to ensure this on the platforms I am aware of. Strict in this sense refers to not being adjusted forward, i.e. CLOCK_MONOTONIC vs CLOCK_MONOTONIC_RAW. Non monotonicity of this call should be considered a bug. Strict would be used for profiling where forward leaps would disqualify the timing. -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian at python.org Thu Mar 15 04:46:49 2012 From: brian at python.org (Brian Curtin) Date: Wed, 14 Mar 2012 22:46:49 -0500 Subject: [Python-Dev] 2012 Language Summit Report (updated, included here) Message-ID: After a few comments and corrections, including one to post the report directly here...what follows below is the text of what was updated on the previously linked blog post[0]. Much of the changes were to add more detail from a few people. One correction lies in the importlib discussion, in that I previously mentioned the effect on explicit relative imports. This was incorrect: it should have said *implicit* relative imports. [0] http://blog.python.org/2012/03/2012-language-summit-report.html ============================= 2012 Language Summit Report ============================= This year's Language Summit took place on Wednesday March 7 in Santa Clara, CA before the start of `PyCon 2012`_. As with previous years, in attendance were members of the various Python VMs, packagers from various Linux distributions, and members of several community projects. The Namespace PEPs ================== The summit began with a discussion on PEPs `382`_ and `402`_, with Barry Warsaw leading much of the discussion. After some discussion, the decision was ultimately deferred with what appeared to be a want for parts of both PEPs. As of Monday at the PyCon sprints, both PEPs have been rejected (see the Rejection Notice at the top of each PEP). 
Martin von Loewis `posted to the import-sig list`_ that a resolution has been found and Eric Smith will draft a new PEP on the ideas agreed upon there. Effectively, PEP 382 has been outright rejected, while portions of PEP 402 will be accepted. ``importlib`` Status ==================== Brett Cannon announced that there is a completed and available branch of CPython using importlib at `http://hg.python.org/sandbox/bcannon/ `_. See the ``bootstrap_importlib`` named branch. Discussion began by outlining the only real existing issue, which lies in ``stat``'ing of directories. There's a minor backwards incompatibility issue with time granularity. However, everyone agreed that it's so unlikely to be of issue that it's not a showstopper and the work can move forward. Additionally, there was an optimization made around the ``stat`` calls, which was arrived at independently by each of Brett, Antoine Pitrou, and P.J. Eby. The topic of performance came up and Brett explained that the current pure-Python implementation is around 5% slower. Thomas Wouters exclaimed that 5% slower is actually really good, especially given some recent benchmark work he was doing showing that changing compilers sometimes shows a 5% difference in startup time. There was a shared feeling that 5% slower was not something to hold up integration of the code, which pushed discussion happily along. Brett went on to explain what the bootstrapping actually looks like, even asserting that the implementation finds what could be the first *real* use of frozen modules! Guido's first response was, "you mean to tell me that after 20 years we finally found a use for freezing code?" ``importlib._bootstrap`` is a frozen module containing the necessary builtins to operate, along with some re-implementations of a small number of functions. Some of the libraries included in the frozen module are ``warnings``, ``_os`` (select code from ``posix``), and ``marshal``. 
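The ``stat`` optimization mentioned above can be sketched in a few lines: cache each directory's listing and trust it until the directory's mtime changes, so probing for the various candidate filenames of a module costs one ``stat()`` per directory rather than one per candidate. This is also where the time-granularity caveat comes from -- on a filesystem with coarse mtime resolution, a file added within the same timestamp tick could be missed. The sketch below is an illustration of the idea only, not importlib's actual code, and the class name is invented:

```python
import os

class DirectoryCache:
    """Cache a directory's listing, re-reading it only when the
    directory's mtime changes.  An illustration of the stat-call
    optimization described above -- not importlib's actual code."""

    def __init__(self):
        self._cache = {}  # path -> (mtime, frozenset of entry names)

    def contents(self, path):
        mtime = os.stat(path).st_mtime
        cached = self._cache.get(path)
        if cached is not None and cached[0] == mtime:
            return cached[1]  # one stat(), no listdir()
        entries = frozenset(os.listdir(path))
        self._cache[path] = (mtime, entries)
        return entries

    def contains(self, path, name):
        # Checking for foo.py, foo/__init__.py, foo.so, ... now costs
        # a single stat() per directory instead of one per candidate.
        return name in self.contents(path)
```
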
Another compatibility issue was brought up, but again it was decided to be unworthy of halting progress. ``importlib`` does not support the negative level counts used by implicit relative imports, and it was agreed that it's acceptable to continue not supporting them. The future will likely result in a strip down of ``import.c``, as well as the exposure of numerous hooks and much of the ``importlib`` API. As for merging with the ``default`` branch, it was pretty universally agreed upon that this should happen for 3.3 and it should happen soon in order to get mileage on the implementation throughout the alpha and beta cycles. Since this will be happening shortly, Brett is going to follow up on python-dev with some cleanup details and look for reviews. Release Schedule PEPs ===================== Discussion on PEPs `407`_ and `413`_ followed the ``importlib`` talk. Like the namespace PEP discussion, several ideas were tossed around but the group didn't arrive at any conclusion on the acceptability of the PEPs. Immediately, the idea of splitting out the standard library to be on its own was resurrected, which could lend itself to both PEPs. Some questions remain, namely where the test suite would live. Additionally, there may need to be some distinction between the tests which cover standard libraries versus the tests which cover language features. The topic of versioning came up, with three distinctions needing to be made. We would seem to need a version of the language spec, a version of the implementation, and a version of the standard library. Many commenters mentioned that these PEPs make things too complicated. Additionally, there was a question about whether there are enough users who care about either of these changes being made.
Several of us stated that *we* could use the quicker releases, but with so many users stuck on old versions for one reason or another, it was unclear who would actually adopt the new releases. Thomas Wouters made a good point about the difficulty in lining up the so-called Python "LTS" releases with other Python consumers who do similar LTS-style releases. Ubuntu and their LTS schedule was a prime example, as well as the organizations who plan releases atop something like Ubuntu. Many of the Linux distribution packagers in attendance seemed to agree. One thing that seemed to have broad agreement was that shortening the standard library turnaround time would be a good thing for attracting new contributors. Few people are interested in writing new features that might not be released for over a year -- it's just not fun. Even with bug fixes, sometimes the duration can be seen as too long, to the point where users may end up just fixing our problems from within their own code if possible. Guido went on to make a comment about how we hope to avoid the mindset some have of "my package isn't accepted until it's in the standard library". The focus continues to be on projects being hosted on PyPI, being successful out in the wild, then vetted for acceptance in the standard library after maturity of the project and its APIs. It was suggested that perhaps speeding up bug fix releases could be a good move, but we would need to check with release managers to ensure they're on board and willing to expend the effort to produce more frequent releases. As with the new feature releases, we need to be sure there's an audience to take the new bug fixes. There was also some discussion about what have previously been called "sumo" releases. Given that some similar releases are already made by third-party vendors, the idea didn't seem to gain much traction.
Funding from the Python Software Foundation =========================================== PSF Chairman Steve Holden joined the group after lunch to mention that the foundation has resources available to assist development efforts, especially given the sponsorship success of this year's conference. While the foundation can't and won't dictate what should be coded up, they're open to proposals about the types of work to be funded. Steve and Jesse Noller were adamant about the support not only being for all Python implementations, but also for third-party projects. What's needed to begin funding for a project is a concrete proposal on what will be accomplished. They stressed that the money is ready and waiting -- proposals are the way to unlock it. Some ideas for how to use the funding came from Steve but also from around the room. One idea which started off the discussion was the idea of funding one-month sabbaticals. Then comes the issue of who might be available. Some suggested that freelance consultants in the development community might be the ones we should try to engage. Those with full-time employment may find it harder to acquire such a sabbatical, but the possibility is open to anyone. Another thought was potential funding of someone to do spurts of full-time effort on the bug tracker, ideally someone already involved in the triage effort. This type of funding would hope to put an end to the times when it takes three days to fix a bug and three years for the patch to be accepted. Some thought this might be a nice idea in the short term, but it could be tough work and burn out the individual(s) involved. If anyone is up for it, they're encouraged to propose the idea to the foundation. Along similar lines of tracker maintenance, Glyph Lefkowitz of the Twisted project had an idea to fund code reviews over code-writing efforts. 
Some thought this might be a good way to push forward the ``regex``/``re`` situation, given that the ``regex`` is very large and most felt that the only thing holding it back from some form of inclusion is an in-depth review. The ``cdecimal`` module was mentioned as another project that could use some review assistance. The code review funding is also an idea to push forward some third-party project's ports to Python 3, specifically including Twisted, which the group felt was an effort which should receive some of this funding. Along the way it was remarked that the `core-mentors`_ group has been a success in involving new contributors. Kudos to those involved with that list. ``virtualenv`` Inclusion ======================== In about two minutes, discussion on PEP `405`_ came and went. Carl Meyer mentioned that a reference implementation is available and is working pretty well. A look from the OSX maintainers would be beneficial, and both Ned Deily and Ronald Oussoren were in attendance. It seemed like one of the only things left in terms of the PEP was to find someone to make a declaration on it, and Thomas Wouters put his name out there if Nick Coghlan wasn't going to do it (update: Nick will be the PEP czar). PEP 397 Inclusion ================= Without much of a Windows representation at the summit, discussion was fairly quick, but it was pretty much agreed that PEP `397`_ was something we should accept. Brian Curtin spoke in favor of the PEP, as well as mentioning ongoing work on the Windows installer to optionally add the executable's directory to the Path. After discussion outside of the summit, it was additionally agreed upon that the launcher should be installed via the 3.3 Windows installer, while it can also live as a standalone installer for those not taking 3.3. 
Additionally, there needs to be some work done on the PEP to remove much of the low-level detail that is coupled too tightly with the implementation, e.g., explaining the location of the ``py.ini`` file. speed.python.org ================ After generous hardware donations, the http://speed.python.org site has gone live and is currently running PyPy benchmarks. We need to make a decision on what benchmarks can be used as well as what benchmarks *should* be used when it comes to creating a Python 3 suite. As we get implementations on Python 3 we'll want to scale back 2.7 testing and push forward with 3.x. The project suffers not from a technological problem but from a personnel problem, which was thought to be another area that funding could be used for. However, even if money is on the table, we still need to find someone with the time, the know-how, and the drive to complete the task. Ideally the starting task would be to get PyPy and CPython implementations running and comparing. After that, there are a number of infrastructure tasks in line. PEP 411 Inclusion ================= PEP `411`_ proposes the inclusion of provisional packages into the standard library. The recently discussed ``regex`` and ``ipaddr`` modules were used as examples of libraries to include under this PEP. How this inclusion should be implemented and denoted to users was the major discussion point. It was first suggested that documentation notes don't work -- we can't rely only on documentation to be the single notification point, especially for this type of code inclusion. Other thoughts were some type of flag on the library to specify its experimental status. Another thought was to emit a warning on import of a provisional library, but it's another thing that we'd likely want to silence by default in order to not affect user code, in the hopes that developers are running their test suite with warnings enabled.
However, as with other times we've gone down this path, we run the risk of developers just disabling warnings altogether if they become annoying. As has been suggested on python-dev, importing a provisional library from a special package, e.g., ``from __experimental__ import foo``, was pretty strongly discouraged. If the library gains a consistent API, it penalizes users once it moves from provisional status to being officially accepted. Aliasing just exacerbates the problem. The PEP boils down to being about process, and we need to be sure that libraries being included use the ability to change APIs very carefully. We also need to make people, especially the library author, aware of the need to be responsive to feedback and open to change as the code reaches a wider audience. Looking back, Jesse Noller suggested ``multiprocessing`` would have been a good candidate for something like what this PEP is suggesting. Around this time, it was suggested that Michael Foord's `mock`_ could gain some provisional inclusion within ``unittest``, perhaps as ``unittest.mock``. Instead, given ``mock``'s stable API and wide use among us, along with the need for a mocking library within our own test suite, it was agreed to just accept it directly into the standard library without any provisional status. While on the topic of ``regex``'s role within the PEP, Thomas Wouters suggested that ``regex`` be introduced into the standard library, bypassing any provisional status. From there, the previously known ``re`` module could be moved to the ``sre`` name, and there didn't appear to be any dissenting opinion there. It should also be noted to users of provisional libraries that the library maintainers would need to exercise extreme care and be very conservative in changing the APIs. The last thing we want to do is introduce a good library that becomes a moving target for its users.
Keyword Arguments on all builtin functions ========================================== As recently came up on the tracker, it was suggested that wider use of keyword arguments in our APIs would likely be a good thing. Gregory P. Smith suggested that we leave single-argument APIs alone, which was agreed upon. However, the overall change got some push back as "change for change's sake". In order to support this, the ``PyArg_ParseTuple`` function would need to do more work, and it's already known to be somewhat slow. Alternatively, ``PyArg_Parse`` is much faster, and the tuple version could take a thing or two from it regardless of any wide scale change to builtins. There does exist some potential break in compatibility when replacing a builtin function with a Python one, where positional-only arguments suddenly get a potentially conflicting name. It was widely agreed upon that we should avoid any blanket rules and keep changes to places where it makes sense rather than make wholesale changes. We also need to be mindful of documentation and doc strings being kept to match the actual keyword argument names as well as keep them in sync. OrderedDict was suggested as the container for keyword arguments, but Guido and Gregory were unsure of use-cases for that. Whether or not we use a traditional or ordered dictionary, it was suggested that we could possibly use a decorator to handle some of this. We could even go as far as exposing something like ``PyArg_ParseTuple`` as a Python-level function. PEP `362`_, a proposal for a function signature object, would help here and with decorators in general. It seems that all that's left with that PEP is another look and someone to declare on it. Porting to Python 3 =================== We moved on to talk about Python 3 porting, starting with the current strategies and how they're working out. 
Single-codebase porting is working better than expected for most of us, although ``except`` handling is a bit messy when supporting versions like 2.4. Having a lot of options, from 3to2 to 2to3, then the single codebase through parallel trees, is a really good thing. However, it's hard for us to choose a strategy for projects, so we don't, which is why most documentation tries to lay numerous strategies out there. It was suggested that documentation could stand to gain more examples of real-world porting examples, ideally pointing to changesets of these projects. The thought of our porting documentation gaining a cookbook-style approach seemed to get some agreement as a good idea. Hash Randomization ================== Release candidates are available to all branches receiving security fixes, and in the meantime, David Malcolm found and reported a security issue in the upstream ``expat`` project. However, since the upstream fix includes many other fixes at the same time, we should pick up only the security fix at this time and leave the bug fixes for the next bug fix release of the relevant branches. New ``dict`` Implementation =========================== Since the implementation makes sense and the tests pass, it was quickly agreed upon that Mark Shannon's PEP `412`_ should be accepted. As with other changes agreed upon in this summit, we'd like for the change to be pushed soon in order to get mileage on it throughout the alpha and beta cycles. With this acceptance comes commit access for Mark so that he can maintain the code. It was also remarked that the only user-visible difference that this implementation brings is a difference in sort ordering, but the recent hash randomization work makes this a moot point. New ``pickle`` Protocol ======================= PEP `3154`_, mentioned by Lukasz Langa, specifies a new pickle protocol -- version 4. Lukasz mentioned exception pickling in ``multiprocessing`` as being an issue, and Antoine solved it with this PEP. 
While qualified names provide some help, it was agreed upon that this PEP needs more attention. ---- If you have any questions or comments, please post to `python-dev`_. *Thanks to Eric Snow and Senthil Kumaran for contributing to this post.* .. _PyCon 2012: https://us.pycon.org/2012/ .. _362: http://www.python.org/dev/peps/pep-0362/ .. _382: http://www.python.org/dev/peps/pep-0382/ .. _397: http://www.python.org/dev/peps/pep-0397/ .. _402: http://www.python.org/dev/peps/pep-0402/ .. _405: http://www.python.org/dev/peps/pep-0405/ .. _407: http://www.python.org/dev/peps/pep-0407/ .. _411: http://www.python.org/dev/peps/pep-0411/ .. _412: http://www.python.org/dev/peps/pep-0412/ .. _413: http://www.python.org/dev/peps/pep-0413/ .. _3154: http://www.python.org/dev/peps/pep-3154/ .. _posted to the import-sig list: http://mail.python.org/pipermail/import-sig/2012-March/000421.html .. _core-mentors: http://pythonmentors.com/ .. _mock: http://www.voidspace.org.uk/python/mock/ .. _python-dev: http://mail.python.org/mailman/listinfo/python-dev From regebro at gmail.com Thu Mar 15 05:44:00 2012 From: regebro at gmail.com (Lennart Regebro) Date: Thu, 15 Mar 2012 05:44:00 +0100 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: References: <4F6137EF.9000000@gmail.com> Message-ID: On Thu, Mar 15, 2012 at 02:58, Matt Joiner wrote: > Victor, I think that steady can always be monotonic, there are time sources > enough to ensure this on the platforms I am aware of. Strict in this sense > refers to not being adjusted forward, i.e. CLOCK_MONOTONIC vs > CLOCK_MONOTONIC_RAW. > > Non monotonicity of this call should be considered a bug. Strict would be > used for profiling where forward leaps would disqualify the timing. This makes sense to me. //Lennart From p.f.moore at gmail.com Thu Mar 15 09:23:35 2012 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 15 Mar 2012 08:23:35 +0000 Subject: [Python-Dev] Drop the new time.wallclock() function? 
In-Reply-To: References: <4F6137EF.9000000@gmail.com> Message-ID: On 15 March 2012 01:58, Matt Joiner wrote: > Victor, I think that steady can always be monotonic, there are time sources > enough to ensure this on the platforms I am aware of. Strict in this sense > refers to not being adjusted forward, i.e. CLOCK_MONOTONIC vs > CLOCK_MONOTONIC_RAW. I agree - Kristján pointed out that you can ensure that backward jumps never occur by implementing a cache of the last value. > Non monotonicity of this call should be considered a bug. +1 > Strict would be used for profiling where forward leaps would disqualify the timing. I'm baffled as to how you even identify "forward leaps". In relation to what? A more accurate time source? I thought that by definition this was the most accurate time source we have! +1 on a simple time.steady() with guaranteed monotonicity and no flags to alter behaviour. Paul. From victor.stinner at gmail.com Thu Mar 15 10:29:26 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 15 Mar 2012 10:29:26 +0100 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: References: <4F6137EF.9000000@gmail.com> Message-ID: 2012/3/15 Matt Joiner : > Victor, I think that steady can always be monotonic, there are time sources > enough to ensure this on the platforms I am aware of. Strict in this sense > refers to not being adjusted forward, i.e. CLOCK_MONOTONIC vs > CLOCK_MONOTONIC_RAW. I don't think that CLOCK_MONOTONIC is available on all platforms. clock_gettime() and QueryPerformanceFrequency() can fail. In practice, it should not fail on modern OSes. But if the monotonic clock fails, Python should use another less stable clock but provide something. Otherwise, each project would have to implement its own fallback. Victor From anacrolix at gmail.com Thu Mar 15 11:06:44 2012 From: anacrolix at gmail.com (Matt Joiner) Date: Thu, 15 Mar 2012 18:06:44 +0800 Subject: [Python-Dev] Drop the new time.wallclock() function?
In-Reply-To: References: <4F6137EF.9000000@gmail.com> Message-ID: On Mar 15, 2012 4:23 PM, "Paul Moore" wrote: > > On 15 March 2012 01:58, Matt Joiner wrote: > > Victor, I think that steady can always be monotonic, there are time sources > > enough to ensure this on the platforms I am aware of. Strict in this sense > > refers to not being adjusted forward, i.e. CLOCK_MONOTONIC vs > > CLOCK_MONOTONIC_RAW. > > I agree - Kristján pointed out that you can ensure that backward jumps > never occur by implementing a cache of the last value. Without knowing more, either QPC was buggy on his platform, or he didn't account for processor affinity (QPC derives from a per-processor counter). > > > Non monotonicity of this call should be considered a bug. > > +1 > > > Strict would be used for profiling where forward leaps would disqualify the timing. > > I'm baffled as to how you even identify "forward leaps". In relation > to what? A more accurate time source? I thought that by definition > this was the most accurate time source we have! Monotonic clocks are not necessarily hardware based, and may be adjusted forward by NTP. > > +1 on a simple time.steady() with guaranteed monotonicity and no flags > to alter behaviour. > > Paul. I don't mind since I'll be using it for timeouts, but clearly the strongest possible guarantee should be made. If forward leaps are okay, then by definition the timer is monotonic but not steady. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From solipsis at pitrou.net Thu Mar 15 11:31:26 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 15 Mar 2012 11:31:26 +0100 Subject: [Python-Dev] cpython (3.1): - rename configure.in to configure.ac References: Message-ID: <20120315113126.49db0f5c@pitrou.net> On Wed, 14 Mar 2012 23:27:24 +0100 matthias.klose wrote: > http://hg.python.org/cpython/rev/55ab7a272f0a > changeset: 75659:55ab7a272f0a > branch: 3.1 > parent: 75199:df3b2b5db900 > user: Matthias Klose > date: Wed Mar 14 23:10:15 2012 +0100 > summary: > - rename configure.in to configure.ac > - change references from configure.in to configure.ac What's the rationale for this change? There doesn't seem to be an issue number. Regards Antoine. From p.f.moore at gmail.com Thu Mar 15 12:10:20 2012 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 15 Mar 2012 11:10:20 +0000 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: References: <4F6137EF.9000000@gmail.com> Message-ID: On 15 March 2012 10:06, Matt Joiner wrote: >> I'm baffled as to how you even identify "forward leaps". In relation >> to what? A more accurate time source? I thought that by definition >> this was the most accurate time source we have! > > Monotonic clocks are not necessarily hardware based, and may be adjusted > forward by NTP. I appreciate that. But I'm still unclear how you would tell that had happened as part of the implementation. One call to the OS returns 12345. The next returns 13345. Is that because 100 ticks have passed, or because the clock "leapt forward"? With no point of reference, how can you tell? But I agree, the key thing is just to have the strongest guarantee possible. Paul. From nadeem.vawda at gmail.com Thu Mar 15 13:12:39 2012 From: nadeem.vawda at gmail.com (Nadeem Vawda) Date: Thu, 15 Mar 2012 14:12:39 +0200 Subject: [Python-Dev] Drop the new time.wallclock() function? 
In-Reply-To: References: <4F6137EF.9000000@gmail.com> Message-ID: On Thu, Mar 15, 2012 at 1:10 PM, Paul Moore wrote: >> Monotonic clocks are not necessarily hardware based, and may be adjusted >> forward by NTP. > > I appreciate that. But I'm still unclear how you would tell that had > happened as part of the implementation. One call to the OS returns > 12345. The next returns 13345. Is that because 100 ticks have passed, > or because the clock "leapt forward"? With no point of reference, how > can you tell? The point (AIUI) is that you *can't* identify such adjustments (in the absence of some sort of log of NTP updates), so we should provide a mechanism that is guaranteed not to be affected by them. Cheers, Nadeem From p.f.moore at gmail.com Thu Mar 15 13:20:48 2012 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 15 Mar 2012 12:20:48 +0000 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: References: <4F6137EF.9000000@gmail.com> Message-ID: On 15 March 2012 12:12, Nadeem Vawda wrote: > On Thu, Mar 15, 2012 at 1:10 PM, Paul Moore wrote: >> I appreciate that. But I'm still unclear how you would tell that had >> happened as part of the implementation. One call to the OS returns >> 12345. The next returns 13345. Is that because 100 ticks have passed, >> or because the clock "leapt forward"? With no point of reference, how >> can you tell? > > The point (AIUI) is that you *can't* identify such adjustments (in the > absence of some sort of log of NTP updates), so we should provide a > mechanism that is guaranteed not to be affected by them. OK, I see (sort of). But if that is the case, what's the use case for the variation that *is* affected by them? The use cases I've seen mentioned are timeouts and performance testing, both of which don't want to see clock adjustments. 
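For the timeout use case, the point of an adjustment-free clock is that deadline arithmetic stays valid: compute the deadline once, then compare against the clock. A sketch of the lock-timeout pattern, written here with ``time.monotonic()`` -- the function this thread eventually produced for Python 3.3, which did not yet exist when the thread took place:

```python
import time

def acquire_with_timeout(lock, timeout):
    """Try to acquire `lock`, giving up after `timeout` seconds as
    measured by a clock that NTP adjustments cannot rewind."""
    deadline = time.monotonic() + timeout
    while True:
        if lock.acquire(False):  # non-blocking attempt
            return True
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            return False  # deadline passed
        time.sleep(min(remaining, 0.01))
```

Had the clock been adjustable, a backward step would silently extend the timeout -- the infinite-wait hazard mentioned earlier in the thread.
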
Note that when Victor started this thread, he said: > I prefer to keep only monotonic() because it is not affected by system > clock update and should help to fix issues on NTP update in functions > implementing a timeout. which seems to me to be exactly this point. So I guess I support Victor's original proposal. (Which is good, because he has thought about this issue far more than I have :-)) Paul. From doko at ubuntu.com Thu Mar 15 17:33:57 2012 From: doko at ubuntu.com (Matthias Klose) Date: Thu, 15 Mar 2012 17:33:57 +0100 Subject: [Python-Dev] cpython (3.1): - rename configure.in to configure.ac In-Reply-To: <20120315113126.49db0f5c@pitrou.net> References: <20120315113126.49db0f5c@pitrou.net> Message-ID: <4F6219F5.4060705@ubuntu.com> On 15.03.2012 11:31, Antoine Pitrou wrote: > On Wed, 14 Mar 2012 23:27:24 +0100 > matthias.klose wrote: >> http://hg.python.org/cpython/rev/55ab7a272f0a >> changeset: 75659:55ab7a272f0a >> branch: 3.1 >> parent: 75199:df3b2b5db900 >> user: Matthias Klose >> date: Wed Mar 14 23:10:15 2012 +0100 >> summary: >> - rename configure.in to configure.ac >> - change references from configure.in to configure.ac What's the rationale for this change? There doesn't seem to be an issue number. autoconf files up to 2.13 have the .in extension, autoconf files for 2.50 and newer the .ac extension. This change is a no-op, except when autoconf 2.13 is installed, in which case autoconf fails.
From g.brandl at gmx.net Thu Mar 15 18:49:10 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Thu, 15 Mar 2012 18:49:10 +0100 Subject: [Python-Dev] cpython (3.1): - rename configure.in to configure.ac In-Reply-To: <4F6219F5.4060705@ubuntu.com> References: <20120315113126.49db0f5c@pitrou.net> <4F6219F5.4060705@ubuntu.com> Message-ID: On 15.03.2012 17:33, Matthias Klose wrote: > On 15.03.2012 11:31, Antoine Pitrou wrote: >> On Wed, 14 Mar 2012 23:27:24 +0100 >> matthias.klose wrote: >>> http://hg.python.org/cpython/rev/55ab7a272f0a >>> changeset: 75659:55ab7a272f0a >>> branch: 3.1 >>> parent: 75199:df3b2b5db900 >>> user: Matthias Klose >>> date: Wed Mar 14 23:10:15 2012 +0100 >>> summary: >>> - rename configure.in to configure.ac >>> - change references from configure.in to configure.ac >> >> What's the rationale for this change? There doesn't seem to be an issue >> number. > > autoconf files up to 2.13 have the .in extension, autoconf files for 2.50 and > newer the .ac extension. This change is a no-op, except when autoconf 2.13 is > installed, in which case autoconf fails. Not sure it belongs in 3.1 though. Georg From benjamin at python.org Thu Mar 15 19:02:32 2012 From: benjamin at python.org (Benjamin Peterson) Date: Thu, 15 Mar 2012 13:02:32 -0500 Subject: [Python-Dev] cpython (3.1): - rename configure.in to configure.ac In-Reply-To: References: <20120315113126.49db0f5c@pitrou.net> <4F6219F5.4060705@ubuntu.com> Message-ID: 2012/3/15 Georg Brandl : > On 15.03.2012 17:33, Matthias Klose wrote: >> >> On 15.03.2012 11:31, Antoine Pitrou wrote: >>> >>> On Wed, 14 Mar 2012 23:27:24 +0100 >>> matthias.klose wrote: >>>> >>>> http://hg.python.org/cpython/rev/55ab7a272f0a >>>> changeset: 75659:55ab7a272f0a >>>> branch: 3.1 >>>> parent: 75199:df3b2b5db900 >>>> user: Matthias Klose >>>> date: Wed Mar 14 23:10:15 2012 +0100 >>>> summary: >>>> - rename configure.in to configure.ac >>>> - change references from configure.in to configure.ac >>> >>> >>> What's the rationale for this change? There doesn't seem to be an issue >>> number. >> >> >> autoconf files up to 2.13 have the .in extension, autoconf files for 2.50 >> and >> newer the .ac extension. This change is a no-op, except when autoconf 2.13 >> is installed, in which case autoconf fails. > > > Not sure it belongs in 3.1 though. I told him he could change 3.1 for ease of maintenance in the future. -- Regards, Benjamin From gcolgate at gmail.com Thu Mar 15 19:47:07 2012 From: gcolgate at gmail.com (Gil Colgate) Date: Thu, 15 Mar 2012 11:47:07 -0700 Subject: [Python-Dev] What letter should an UnsignedLongLong get Message-ID: We use a lot of UnsignedLongLongs in our program (ids) and have been parsing in PyArg_ParseTuple with 'K', which does not do error checking. I am planning to add a new type to our local build of python for parsing Unsigned Long Longs (64-bit numbers) that errors if the number has more than the correct number of bits. I am thinking to use the letter 'N' for this purpose, since l,k,K,U,u are all taken. Does anyone have any better ideas? -------------- next part -------------- An HTML attachment was scrubbed... URL: From benjamin at python.org Thu Mar 15 19:49:00 2012 From: benjamin at python.org (Benjamin Peterson) Date: Thu, 15 Mar 2012 13:49:00 -0500 Subject: [Python-Dev] What letter should an UnsignedLongLong get In-Reply-To: References: Message-ID: 2012/3/15 Gil Colgate : > We use a lot of UnsignedLongLongs in our program (ids) and have been parsing > in PyArg_ParseTuple with 'K', which does not do error checking. > I am planning to add a new type to our local build of python for parsing > Unsigned Long Longs (64-bit numbers) that errors if the number has more > than the correct number of bits. > > I am thinking to use the letter 'N' for this purpose, since l,k,K,U,u are > all taken.
Unfortunately, that would conflict with Py_BuildValue's 'N'. -- Regards, Benjamin From benjamin at python.org Thu Mar 15 19:56:01 2012 From: benjamin at python.org (Benjamin Peterson) Date: Thu, 15 Mar 2012 13:56:01 -0500 Subject: [Python-Dev] What letter should an UnsignedLongLong get In-Reply-To: References: Message-ID: 2012/3/15 Gil Colgate : > I must be using a different version of python, (3.2), I don't see that one > in use. Do you have a different suggestion? It's not used in PyArg_Parse*, but it is for Py_BuildValue. Adding it to PyArg_Parse could create confusion. > > On Thu, Mar 15, 2012 at 11:49 AM, Benjamin Peterson > wrote: >> >> 2012/3/15 Gil Colgate : >> > We use a lot of UnsignedLongLongs in our program (ids) and have been >> > parsing >> > in PyArg_ParseTuple with 'K', which does not do error checking. >> > I am planning to add a new type to our local build of python for parsing >> > Unsigned Long Longs (64-bit numbers) that errors if the number has more >> > than the correct number of bits. >> > >> > I am thinking to use the letter 'N' for this purpose, since l,k,K,U,u >> > are >> > all taken. >> >> Unfortunately, that would conflict with Py_BuildValue's 'N'. >> >> >> >> -- >> Regards, >> Benjamin > > -- Regards, Benjamin From nadeem.vawda at gmail.com Thu Mar 15 20:02:44 2012 From: nadeem.vawda at gmail.com (Nadeem Vawda) Date: Thu, 15 Mar 2012 21:02:44 +0200 Subject: [Python-Dev] What letter should an UnsignedLongLong get In-Reply-To: References: Message-ID: The lzma module ran into a similar issue with 32-bit unsigned ints. I worked around it by writing a custom converter function to use with the "O&" code. You can find the converter definition here: http://hg.python.org/cpython/file/default/Modules/_lzmamodule.c#l134 And an example usage here: http://hg.python.org/cpython/file/default/Modules/_lzmamodule.c#l261 Cheers, Nadeem From rowen at uw.edu Thu Mar 15 20:22:03 2012 From: rowen at uw.edu (Russell E.
Owen) Date: Thu, 15 Mar 2012 12:22:03 -0700 Subject: [Python-Dev] Drop the new time.wallclock() function? References: <20120314101618.5cf1850f@pitrou.net> <20120314132747.6c857a53@pitrou.net> Message-ID: In article , Kristján Valur Jónsson wrote: > What does "jumping forward" mean? That's what happens with every clock at > every time quantum. The only effect here is that this clock will be slightly > noisy, i.e. its precision becomes worse. On average it is still correct. > Look at the use cases for this function > 1) to enable timeouts for certain operations, like acquiring locks: > Jumping backwards is bad, because that may cause infinite wait time. But > jumping forwards is ok, it may just mean that your lock times out a bit early > 2) performance measurements: > If you are running on a platform with a broken runtime clock, you are not > likely to be running performance measurements. > > Really, I urge you to skip the "strict" keyword. It just adds confusion. > Instead, let's just give the best monotonic clock we can do which doesn't move > backwards. > Let's just provide a "practical" real time clock with high resolution that is > appropriate for providing timeout functionality and so won't jump backwards > for the next 20 years. Let's simply point out to people that it may not be > appropriate for high precision timings on old and obsolete hardware and be > done with it. I agree. I prefer the name time.monotonic with no flags. It will suit most use cases. I think supplying truly steady time is a low level hardware function (e.g. buy a GPS timer card) with a driver. -- Russell From anacrolix at gmail.com Thu Mar 15 20:55:16 2012 From: anacrolix at gmail.com (Matt Joiner) Date: Fri, 16 Mar 2012 03:55:16 +0800 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: References: <20120314101618.5cf1850f@pitrou.net> <20120314132747.6c857a53@pitrou.net> Message-ID: +1. I now prefer time.monotonic(), no flags.
It attempts to be as high precision as possible and guarantees never to jump backwards. Russell's comment is right, the only steady sources are from hardware, and these are too equipment and operating system specific. (For this call anyway). On Mar 16, 2012 3:23 AM, "Russell E. Owen" wrote: > In article > , > Kristján Valur Jónsson wrote: > > > What does "jumping forward" mean? That's what happens with every clock > at > > every time quantum. The only effect here is that this clock will be > slightly > > noisy, i.e. its precision becomes worse. On average it is still correct. > > Look at the use cases for this function > > 1) to enable timeouts for certain operations, like acquiring locks: > > Jumping backwards is bad, because that may cause infinite wait > time. But > > jumping forwards is ok, it may just mean that your lock times out a bit > early > > 2) performance measurements: > > If you are running on a platform with a broken runtime clock, you > are not > > likely to be running performance measurements. > > > > Really, I urge you to skip the "strict" keyword. It just adds confusion. > > Instead, let's just give the best monotonic clock we can do which doesn't > move > > backwards. > > Let's just provide a "practical" real time clock with high resolution > that is > > appropriate for providing timeout functionality and so won't jump > backwards > > for the next 20 years. Let's simply point out to people that it may not > be > > appropriate for high precision timings on old and obsolete hardware and > be > > done with it. > > I agree. I prefer the name time.monotonic with no flags. It will suit > most use cases. I think supplying truly steady time is a low level > hardware function (e.g. buy a GPS timer card) with a driver.
> > -- Russell > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gcolgate at gmail.com Thu Mar 15 20:59:39 2012 From: gcolgate at gmail.com (Gil Colgate) Date: Thu, 15 Mar 2012 12:59:39 -0700 Subject: [Python-Dev] What letter should an UnsignedLongLong get In-Reply-To: References: Message-ID: How about 'G'? (Giant, or perhaps ginormous, integer?) Then I could also map 'g' to the signed version (same as L) for consistency. On Thu, Mar 15, 2012 at 11:49 AM, Benjamin Peterson wrote: > 2012/3/15 Gil Colgate : > > We use a lot of UnsignedLongLongs in our program (ids) and have been > parsing > > in PyArg_ParseTuple with 'K', which does not do error checking. > > I am planning to add a new type to our local build of python for parsing > > Unsigned Long Longs (64 bit numbers) that errors if the number has more > > than the correct number of bits. > > > > I am thinking to use the letter 'N' for this purpose, since l,k,K,U,u are > > all taken. > > Unfortunately, that would conflict with Py_BuildValue's 'N'. > > > > -- > Regards, > Benjamin > -------------- next part -------------- An HTML attachment was scrubbed... URL: From benjamin at python.org Thu Mar 15 21:03:02 2012 From: benjamin at python.org (Benjamin Peterson) Date: Thu, 15 Mar 2012 15:03:02 -0500 Subject: [Python-Dev] What letter should an UnsignedLongLong get In-Reply-To: References: Message-ID: 2012/3/15 Gil Colgate : > How about 'G'? (Giant, or perhaps ginormous, integer?) > > > Then I could also map 'g' to the signed version (same as L) for consistency. Sounds okay to me.
-- Regards, Benjamin From storchaka at gmail.com Thu Mar 15 22:02:40 2012 From: storchaka at gmail.com (Serhiy Storchaka) Date: Thu, 15 Mar 2012 23:02:40 +0200 Subject: [Python-Dev] What letter should an UnsignedLongLong get In-Reply-To: References: Message-ID: 15.03.12 21:59, Gil Colgate wrote: > How about 'G'? (Giant, or perhaps ginormous, integer?) > > > Then I could also map 'g' to the signed version (same as L) for consistency. What about unsigned char, short, int, and long with overflow checking? From alexander.belopolsky at gmail.com Thu Mar 15 22:27:16 2012 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Thu, 15 Mar 2012 17:27:16 -0400 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: References: <20120314101618.5cf1850f@pitrou.net> <20120314132747.6c857a53@pitrou.net> Message-ID: On Thu, Mar 15, 2012 at 3:55 PM, Matt Joiner wrote: > +1. I now prefer time.monotonic(), no flags. Am I alone thinking that an adjective is an odd choice for a function name? I think monotonic_clock or monotonic_time would be a better option. From storchaka at gmail.com Thu Mar 15 22:31:05 2012 From: storchaka at gmail.com (Serhiy Storchaka) Date: Thu, 15 Mar 2012 23:31:05 +0200 Subject: [Python-Dev] What letter should an UnsignedLongLong get In-Reply-To: References: Message-ID: <4F625F99.6030007@gmail.com> 15.03.12 21:59, Gil Colgate wrote: > How about 'G'? (Giant, or perhaps ginormous, integer?) > > > Then I could also map 'g' to the signed version (same as L) for consistency. For consistency 'g' must be `unsigned long` with overflow checking. And how about 'M'? 'K', 'L', and 'M' are neighboring letters. From carl at oddbird.net Thu Mar 15 22:43:20 2012 From: carl at oddbird.net (Carl Meyer) Date: Thu, 15 Mar 2012 14:43:20 -0700 Subject: [Python-Dev] PEP 405 (built-in virtualenv) status Message-ID: <4F626278.7030701@oddbird.net> A brief status update on PEP 405 (built-in virtualenv) and the open issues: 1.
As mentioned in the updated version of the language summit notes, Nick Coghlan has agreed to pronounce on the PEP. 2. Ned Deily discovered at the PyCon sprints that the current reference implementation does not work with an OS X framework build of Python. We're still working to discover the reason for that and determine possible fixes. 3. If anyone knows of a pair of packages in which both need to build compiled extensions, and the compilation of the second depends on header files from the first, that would be helpful to me in testing the other open issue (installation of header files). (I thought numpy and scipy might fit this bill, but I'm currently not able to install numpy at all under Python 3 using pysetup, easy_install, or pip.) Thanks, Carl -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: OpenPGP digital signature URL: From van.lindberg at gmail.com Thu Mar 15 22:57:10 2012 From: van.lindberg at gmail.com (VanL) Date: Thu, 15 Mar 2012 16:57:10 -0500 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F612A2E.9030805@gmail.com> References: <4F603B76.4050004@gmail.com> <4F60C169.9030404@gmail.com> <4F611E39.5070605@skippinet.com.au> <4F6126AB.7020309@gmail.com> <4F612A2E.9030805@gmail.com> Message-ID: On 3/14/2012 6:30 PM, Mark Hammond wrote: > > So why not just standardize on that new layout for virtualenvs? That sounds like the worst of all worlds - keep all the existing special cases, and add one. The fact is that most code doesn't know about this, only installers or virtual environments. Most code just assumes that distutils does its thing correctly and that binaries are installed... wherever the binaries go. Again, I have experience with this, as I have edited my own install to do this for a couple of years. The breakage is minimal and it makes things much more consistent and easier to use for cross-platform development. 
Right now we are in front of the knee on major 3.x adoption - I would like to have things be standardized going forward, everywhere. Thanks, Van From carl at oddbird.net Thu Mar 15 23:11:58 2012 From: carl at oddbird.net (Carl Meyer) Date: Thu, 15 Mar 2012 15:11:58 -0700 Subject: [Python-Dev] PEP 405 (built-in virtualenv) status In-Reply-To: <4F626712.3030906@gmail.com> References: <4F626278.7030701@oddbird.net> <4F626712.3030906@gmail.com> Message-ID: <4F62692E.8040203@oddbird.net> On 03/15/2012 03:02 PM, Lindberg, Van wrote: > FYI, the location of the tcl/tk libraries does not appear to be set in > the virtualenv, even if tkinter is installed and working in the main > Python installation. As a result, tk-based apps will not run from a > virtualenv. Thanks for the report! I've added this to the list of open issues in the PEP and I'll look into it. Carl -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: OpenPGP digital signature URL: From tjreedy at udel.edu Thu Mar 15 23:45:25 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 15 Mar 2012 18:45:25 -0400 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: References: <20120314101618.5cf1850f@pitrou.net> <20120314132747.6c857a53@pitrou.net> Message-ID: On 3/15/2012 5:27 PM, Alexander Belopolsky wrote: > On Thu, Mar 15, 2012 at 3:55 PM, Matt Joiner wrote: >> +1. I now prefer time.monotonic(), no flags. > > Am I alone thinking that an adjective is an odd choice for a function > name? I would normally agree, but in this case, it is a function of a module whose short name names what the adjective is modifying. I expect that this will normally be called with the module name. > I think monotonic_clock or monotonic_time would be a better option. time.monotonic_time seems redundant.
-- Terry Jan Reedy From steve at pearwood.info Fri Mar 16 00:18:44 2012 From: steve at pearwood.info (Steven D'Aprano) Date: Fri, 16 Mar 2012 10:18:44 +1100 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: References: <20120314101618.5cf1850f@pitrou.net> <20120314132747.6c857a53@pitrou.net> Message-ID: <4F6278D4.4040002@pearwood.info> Terry Reedy wrote: > On 3/15/2012 5:27 PM, Alexander Belopolsky wrote: >> On Thu, Mar 15, 2012 at 3:55 PM, Matt Joiner wrote: >>> +1. I now prefer time.monotonic(), no flags. >> >> Am I alone thinking that an adjective is an odd choice for a function >> name? > > I would normally agree, but in this case, it is a function of a module > whose short name names what the adjective is modifying. I expect that > this will normally be called with the module name. > >> I think monotonic_clock or monotonic_time would be a better option. > > time.monotonic_time seems redundant. Agreed. Same applies to "steady_time", and "steady" on its own is weird. Steady what? While we're bike-shedding, I'll toss in another alternative. Early Apple Macintoshes had a system function that returned the time since last reboot measured in 1/60th of a second, called "the ticks". If I have understood correctly, the monotonic timer will have similar properties: guaranteed monotonic, as accurate as the hardware can provide, but not directly translatable to real (wall-clock) time. (Wall clocks sometimes go backwards.) The two functions are not quite identical: Mac "ticks" were 32-bit integers, not floating point numbers. But the use-cases seem to be the same. time.ticks() seems right as a name to me. It suggests a steady heartbeat ticking along, without making any suggestion that it returns "the time". 
-- Steven From skippy.hammond at gmail.com Fri Mar 16 00:19:50 2012 From: skippy.hammond at gmail.com (Mark Hammond) Date: Fri, 16 Mar 2012 10:19:50 +1100 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: References: <4F603B76.4050004@gmail.com> <4F60C169.9030404@gmail.com> <4F611E39.5070605@skippinet.com.au> <4F6126AB.7020309@gmail.com> <4F612A2E.9030805@gmail.com> Message-ID: <4F627916.7060705@gmail.com> On 16/03/2012 8:57 AM, VanL wrote: > On 3/14/2012 6:30 PM, Mark Hammond wrote: >> >> So why not just standardize on that new layout for virtualenvs? > > That sounds like the worst of all worlds - keep all the existing special > cases, and add one. I'm not so sure. My concern is that this *will* break external tools that attempt to locate the python executable from an installed directory. However, I'm not sure this requirement exists for virtual environments - such tools probably will not attempt to locate the executable in a virtual env as there is no standard place for a virtual env to live. So having a standard layout in the virtual envs still seems a win - we keep the inconsistency in the layout of the "installed" Python, but tools which work with virtualenvs still get a standardized layout. [At least I think that is your proposal - can you confirm that the directory layouts in your proposal exactly match the directory layouts in virtual envs on all other platforms? ie, that inconsistencies like the python{py_version_short} suffix will not remain?] Just to be completely clear, my current concern is only with the location of the executable in an installed Python. > The fact is that most code doesn't know about this, only installers or > virtual environments. Most code just assumes that distutils does its > thing correctly and that binaries are installed... wherever the binaries > go. Of course - but this raises 2 points: * I'm referring to *external* tools that launch Python. 
They obviously need to know where the binaries are to launch them. Eg, the PEP397 launcher; the (admittedly few) people who use the launcher would need to upgrade it to work under your scheme. Ditto *all* other such tools that locate and launch Python. * "most code" isn't a high enough bar. If we only considered such anecdotes, most backwards compatibility concerns would be moot. > Again, I have experience with this, as I have edited my own install to > do this for a couple of years. The breakage is minimal and it makes > things much more consistent and easier to use for cross-platform > development. All due respect, but I'm not sure that is a large enough sample to draw any conclusions from. I've offered 2 concrete examples of things that *will* break and I haven't looked for others. Also, I'm still yet to see what exactly becomes "easier" in your model? As you mention, most Python code will not care; distutils and other parts of the stdlib will "do the right thing" - and indeed, already do for Windows. So the proposal wants to change distutils and other parts of the stdlib even though "most code" won't notice. But the code that will notice will be broken! So I dispute it is "easier" for anyone; I agree it is more consistent, but given the *certainty* external tools will break, I refer you to the Zen of Python's thoughts on consistency ;) > Right now we are in front of the knee on major 3.x adoption - I would > like to have things be standardized going forth everywhere. It is a shame this wasn't done as part of py3k in the first place. But I assume you would be looking at Python 3.4 for this, right? So if people start working with Python 3.3 now and find this change in 3.4, we are still asking them to take the burden of supporting the multiple locations. I guess I'd be less concerned if we managed to get it into 3.3 and also recommended to people that they should ignore 3.2 and earlier when porting their tools/libraries to 3.x.
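Tools that are themselves written in Python can at least sidestep the layout question at runtime by asking sysconfig rather than hard-coding "Scripts" or "bin" — a minimal sketch:

```python
import sys
import sysconfig

# Ask the running interpreter where its scripts directory is, instead
# of hard-coding "Scripts" (Windows) or "bin" (POSIX).
scripts_dir = sysconfig.get_path("scripts")

# The interpreter binary itself is always available as sys.executable,
# wherever the install layout happens to put it.
print(scripts_dir)
print(sys.executable)
```

This doesn't help external (non-Python) tools like the PEP 397 launcher, of course, which is exactly the breakage being discussed.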
I think I've made all the points I can make in this discussion. As I mentioned at the start, I'm not quite -1 on the idea, so I'm not going to push this barrow any further (although I'm obviously happy to clarify anything I've said...) Cheers, Mark From carl at oddbird.net Fri Mar 16 00:48:30 2012 From: carl at oddbird.net (Carl Meyer) Date: Thu, 15 Mar 2012 16:48:30 -0700 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F627916.7060705@gmail.com> References: <4F603B76.4050004@gmail.com> <4F60C169.9030404@gmail.com> <4F611E39.5070605@skippinet.com.au> <4F6126AB.7020309@gmail.com> <4F612A2E.9030805@gmail.com> <4F627916.7060705@gmail.com> Message-ID: <4F627FCE.4040309@oddbird.net> On 03/15/2012 04:19 PM, Mark Hammond wrote: > On 16/03/2012 8:57 AM, VanL wrote: >> On 3/14/2012 6:30 PM, Mark Hammond wrote: >>> >>> So why not just standardize on that new layout for virtualenvs? >> >> That sounds like the worst of all worlds - keep all the existing special >> cases, and add one. > > I'm not so sure. My concern is that this *will* break external tools > that attempt to locate the python executable from an installed > directory. However, I'm not sure this requirement exists for virtual > environments - such tools probably will not attempt to locate the > executable in a virtual env as there is no standard place for a virtual > env to live. > > So having a standard layout in the virtual envs still seems a win - we > keep the inconsistency in the layout of the "installed" Python, but > tools which work with virtualenvs still get a standardized layout. The implementation of virtualenv (and especially PEP 405 pyvenv) is largely based around making sure that the internal layout of a virtualenv is identical to the layout of an installed Python on that same platform, to avoid any need to special-case virtualenvs in distutils.
The one exception to this is the location of the python binary itself in Windows virtualenvs; we do place it inside Scripts\ so that the virtualenv can be "activated" by adding only a single path to the shell PATH. But I would be opposed to any additional special-casing of the internal layout of a virtualenv that would require tools installing software inside virtualenv to use a different install scheme than when installing to a system Python. In other words, I would much rather that tools have to understand a different layout between Windows virtualenvs and Unixy virtualenvs (because most tools don't have to care anyway, distutils just takes care of it, and to the extent they do have to care, they have to adjust anyway in order to work with installed Pythons) than that they have to understand a different layout between virtualenv and non- on the same platform. To as great an extent as possible, tools shouldn't have to care whether they are dealing with a virtualenv. A consistent layout all around would certainly be nice, but I'm not venturing any opinion on whether it's worth the backwards incompatibility. Carl -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: OpenPGP digital signature URL: From skippy.hammond at gmail.com Fri Mar 16 01:10:08 2012 From: skippy.hammond at gmail.com (Mark Hammond) Date: Fri, 16 Mar 2012 11:10:08 +1100 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F627FCE.4040309@oddbird.net> References: <4F603B76.4050004@gmail.com> <4F60C169.9030404@gmail.com> <4F611E39.5070605@skippinet.com.au> <4F6126AB.7020309@gmail.com> <4F612A2E.9030805@gmail.com> <4F627916.7060705@gmail.com> <4F627FCE.4040309@oddbird.net> Message-ID: <4F6284E0.1090606@gmail.com> On 16/03/2012 10:48 AM, Carl Meyer wrote: ... 
> The implementation of virtualenv (and especially PEP 405 pyvenv) are > largely based around making sure that the internal layout of a > virtualenv is identical to the layout of an installed Python on that > same platform, to avoid any need to special-case virtualenvs in > distutils. The one exception to this is the location of the python > binary itself in Windows virtualenvs; we do place it inside Scripts\ so > that the virtualenv can be "activated" by adding only a single path to > the shell PATH. But I would be opposed to any additional special-casing > of the internal layout of a virtualenv ... Unless I misunderstand, that sounds like it should keep everyone happy; there already is a special case for the executable on Windows being in a different place in an installed layout vs a virtual-env layout. Changing this to use "bin" instead of "Scripts" makes the virtualenv more consistent across platforms and doesn't impose any additional special-casing for Windows (just slightly changes the existing special-case :) Thanks, Mark From carl at oddbird.net Fri Mar 16 01:12:54 2012 From: carl at oddbird.net (Carl Meyer) Date: Thu, 15 Mar 2012 17:12:54 -0700 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F6284E0.1090606@gmail.com> References: <4F603B76.4050004@gmail.com> <4F60C169.9030404@gmail.com> <4F611E39.5070605@skippinet.com.au> <4F6126AB.7020309@gmail.com> <4F612A2E.9030805@gmail.com> <4F627916.7060705@gmail.com> <4F627FCE.4040309@oddbird.net> <4F6284E0.1090606@gmail.com> Message-ID: <4F628586.8030308@oddbird.net> On 03/15/2012 05:10 PM, Mark Hammond wrote: > On 16/03/2012 10:48 AM, Carl Meyer wrote: >> The implementation of virtualenv (and especially PEP 405 pyvenv) are >> largely based around making sure that the internal layout of a >> virtualenv is identical to the layout of an installed Python on that >> same platform, to avoid any need to special-case virtualenvs in >> distutils. 
The one exception to this is the location >> of the python >> binary itself in Windows virtualenvs; we do place it inside Scripts\ so >> that the virtualenv can be "activated" by adding only a single path to >> the shell PATH. But I would be opposed to any additional special-casing >> of the internal layout of a virtualenv > ... > > Unless I misunderstand, that sounds like it should keep everyone happy; > there already is a special case for the executable on Windows being in a > different place in an installed layout vs a virtual-env layout. Changing > this to use "bin" instead of "Scripts" makes the virtualenv more > consistent across platforms and doesn't impose any additional > special-casing for Windows (just slightly changes the existing > special-case :) Changing the directory name is in fact a new and different (and much more invasive) special case, because distutils et al install scripts there, and that directory name is part of the distutils install scheme. Installers don't care where the Python binary is located, so moving it in with the other scripts has very little impact. Carl -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: OpenPGP digital signature URL: From anacrolix at gmail.com Fri Mar 16 01:50:09 2012 From: anacrolix at gmail.com (Matt Joiner) Date: Fri, 16 Mar 2012 08:50:09 +0800 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: <4F6278D4.4040002@pearwood.info> References: <20120314101618.5cf1850f@pitrou.net> <20120314132747.6c857a53@pitrou.net> <4F6278D4.4040002@pearwood.info> Message-ID: Windows also has this albeit coarse-grained and also 32-bit. I don't think ticks reflects the reason why using the timer is desirable. monotonic_time seems reasonable, there's no reason to persist short names when users can import it how they like.
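The "never jumps backwards" behaviour being discussed can be sketched in pure Python by clamping an ordinary clock — a toy illustration of the semantics, not the proposed C implementation:

```python
import time

_last = 0.0

def practical_monotonic():
    """A clock value guaranteed never to decrease within this process,
    even if the underlying system clock is stepped backwards."""
    global _last
    now = time.time()
    if now < _last:
        now = _last  # clamp: never report a backwards jump
    _last = now
    return now
```

Note this only papers over backwards steps; forward jumps pass straight through, which (as Kristján argues above) merely makes a timeout fire a bit early.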
On Mar 16, 2012 7:20 AM, "Steven D'Aprano" wrote: > Terry Reedy wrote: > >> On 3/15/2012 5:27 PM, Alexander Belopolsky wrote: >> >>> On Thu, Mar 15, 2012 at 3:55 PM, Matt Joiner >>> wrote: >>> >>>> +1. I now prefer time.monotonic(), no flags. >>>> >>> >>> Am I alone thinking that an adjective is an odd choice for a function >>> name? >>> >> >> I would normally agree, but in this case, it is a function of a module >> whose short name names what the adjective is modifying. I expect that this >> will normally be called with the module name. >> >> I think monotonic_clock or monotonic_time would be a better option. >>> >> >> time.monotonic_time seems redundant. >> > > Agreed. Same applies to "steady_time", and "steady" on its own is weird. > Steady what? > > While we're bike-shedding, I'll toss in another alternative. Early Apple > Macintoshes had a system function that returned the time since last reboot > measured in 1/60th of a second, called "the ticks". > > If I have understood correctly, the monotonic timer will have similar > properties: guaranteed monotonic, as accurate as the hardware can provide, > but not directly translatable to real (wall-clock) time. (Wall clocks > sometimes go backwards.) > > The two functions are not quite identical: Mac "ticks" were 32-bit > integers, not floating point numbers. But the use-cases seem to be the same. > > time.ticks() seems right as a name to me. It suggests a steady heartbeat > ticking along, without making any suggestion that it returns "the time". > > > > -- > Steven > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/ > anacrolix%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From tseaver at palladion.com Fri Mar 16 05:48:18 2012 From: tseaver at palladion.com (Tres Seaver) Date: Fri, 16 Mar 2012 00:48:18 -0400 Subject: [Python-Dev] Unpickling py2 str as py3 bytes (and vice versa) - implementation (issue #6784) In-Reply-To: References: <8DE609C2-0FF4-4412-B26E-B453C67EF0F0@voidspace.org.uk> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 03/13/2012 06:49 PM, Nick Coghlan wrote: > On Wed, Mar 14, 2012 at 8:08 AM, Guido van Rossum > wrote: >> If you can solve your problem with a suitably hacked Unpickler >> subclass that's fine with me, but I would personally use this >> opportunity to change the app to some other serialization format >> that is perhaps less general but more robust than pickle. I've been >> bitten by too many pickle-related problems to recommend pickle to >> anyone... > > It's fine for in-memory storage of (almost) arbitrary objects (I use > it to stash things in a memory backed sqlite DB via SQLAlchemy) and > for IPC, but yeah, for long-term cross-version persistent storage, > I'd be looking to something like JSON rather than pickle. Note the Zope ecosystem (including Plone) is an *enormous* installed base[1] using pickle for storage of data over many years and multiple versions of Python: until this point, it has always been possible to arrange for old pickles to work (e.g., by providing aliases for missing module names, etc.). [1] tens of thousands of Zope-based sites in production, including very high-profile ones: http://plone.org/support/sites Tres.
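The aliasing Tres describes (and Guido's "suitably hacked Unpickler subclass") usually comes down to overriding find_class — a sketch, with entirely made-up module names:

```python
import io
import pickle

# Hypothetical mapping from historical module paths to current ones.
RENAMES = {
    "myapp.oldmodels": "myapp.models",
}

class RenamingUnpickler(pickle.Unpickler):
    """Unpickler that aliases renamed/moved modules on the fly."""

    def find_class(self, module, name):
        module = RENAMES.get(module, module)
        return super().find_class(module, name)

def loads_compat(data):
    """Like pickle.loads(), but tolerant of old module paths."""
    return RenamingUnpickler(io.BytesIO(data)).load()
```

Pickles that reference only unrenamed modules round-trip unchanged; the mapping only kicks in when an old path is seen.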
- -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iEYEARECAAYFAk9ixhEACgkQ+gerLs4ltQ7hUwCfSdjbGnIIrNr6sxoztvb3pvx5 Ns0An1GmcYHClvsgx22bdru5Hl+G09nx =sm0/ -----END PGP SIGNATURE----- From tseaver at palladion.com Fri Mar 16 05:55:45 2012 From: tseaver at palladion.com (Tres Seaver) Date: Fri, 16 Mar 2012 00:55:45 -0400 Subject: [Python-Dev] PEP 405 (built-in virtualenv) status In-Reply-To: <4F626278.7030701@oddbird.net> References: <4F626278.7030701@oddbird.net> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 03/15/2012 05:43 PM, Carl Meyer wrote: > A brief status update on PEP 405 (built-in virtualenv) and the open > issues: > > 1. As mentioned in the updated version of the language summit notes, > Nick Coghlan has agreed to pronounce on the PEP. > > 2. Ned Deily discovered at the PyCon sprints that the current > reference implementation does not work with an OS X framework build of > Python. We're still working to discover the reason for that and > determine possible fixes. > > 3. If anyone knows of a pair of packages in which both need to build > compiled extensions, and the compilation of the second depends on > header files from the first, that would be helpful to me in testing > the other open issue (installation of header files). (I thought numpy > and scipy might fit this bill, but I'm currently not able to install > numpy at all under Python 3 using pysetup, easy_install, or pip.) ExtensionClass and Acquisition would fit the bill, except they aren't ported to Python3 (Acquisition needs the headers from ExtensionClass). Tres. 
- -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iEYEARECAAYFAk9ix9EACgkQ+gerLs4ltQ5HsgCdEFbb0utGPbBJ2059+KBbhkIB M2IAnjFNoJh1UKB76k6nd6nTMfo78s3Z =T6fh -----END PGP SIGNATURE----- From eliben at gmail.com Fri Mar 16 08:38:49 2012 From: eliben at gmail.com (Eli Bendersky) Date: Fri, 16 Mar 2012 09:38:49 +0200 Subject: [Python-Dev] Raising assertions on wrong element types in ElementTree Message-ID: Hi, [Terry suggested in http://bugs.python.org/issue13782 to raise this dilemma to python-dev. I concur.] The Element class in ElementTree (http://docs.python.org/py3k/library/xml.etree.elementtree.html) has some methods for adding new children: append, insert and extend. Currently the documentation states that extend raises AssertionError when something that's not an Element is being passed to it, and the others don't mention this case. There are a number of problems with this: 1. The behavior of append, insert and extend should be similar in this respect 2. AssertionError is not the customary error in such a case - TypeError is much more suitable 3. The C implementation of ElementTree actually raises TypeError in all these methods, by virtue of using PyArg_ParseTuple 4. The Python implementation (at least in 3.2) actually doesn't raise even AssertionError in extend - this was commented out The suggestion for 3.3 (where compatibility between the C and Python implementations gets even more important, since the C one is now being imported by default when available) is to raise TypeError in all 3 methods in the Python implementation, to match the C implementation, and to modify the documentation accordingly.
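Concretely, the change to the Python implementation amounts to one shared type check used by all three methods — a sketch of the idea, not the actual stdlib code:

```python
class Element:
    """Stripped-down stand-in for etree's Element, showing only the check."""

    def __init__(self, tag):
        self.tag = tag
        self._children = []

    @staticmethod
    def _check_element(e):
        # Mirror the C implementation: reject non-Elements with TypeError.
        if not isinstance(e, Element):
            raise TypeError("expected an Element, not %s" % type(e).__name__)

    def append(self, subelement):
        self._check_element(subelement)
        self._children.append(subelement)

    def insert(self, index, subelement):
        self._check_element(subelement)
        self._children.insert(index, subelement)

    def extend(self, elements):
        for e in elements:
            self._check_element(e)
            self._children.append(e)
```

With this, all three methods fail the same way on a non-Element, matching what PyArg_ParseTuple already does on the C side.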
There may appear to be a backwards compatibility issue here, since the doc of extend mentions raising AssertionError - but as said above, the doc is wrong, so no regressions in the code are to be expected. Does that sound reasonable (for 3.3)? Does it make sense to also fix this in 3.2/2.7? Or fix only the documentation? Or not touch them at all? Thanks in advance, Eli From stefan_ml at behnel.de Fri Mar 16 08:51:07 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Fri, 16 Mar 2012 08:51:07 +0100 Subject: [Python-Dev] Raising assertions on wrong element types in ElementTree In-Reply-To: References: Message-ID: Eli Bendersky, 16.03.2012 08:38: > The Element class in ElementTree > (http://docs.python.org/py3k/library/xml.etree.elementtree.html) has > some methods for adding new children: append, insert and extend. > Currently the documentation states that extend raises AssertionError > when something that's not an Element is being passed to it, and the > others don't mention this case. AssertionError is clearly the wrong thing to raise for user input.
+1 Stefan From p.f.moore at gmail.com Fri Mar 16 09:38:18 2012 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 16 Mar 2012 08:38:18 +0000 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F628586.8030308@oddbird.net> References: <4F603B76.4050004@gmail.com> <4F60C169.9030404@gmail.com> <4F611E39.5070605@skippinet.com.au> <4F6126AB.7020309@gmail.com> <4F612A2E.9030805@gmail.com> <4F627916.7060705@gmail.com> <4F627FCE.4040309@oddbird.net> <4F6284E0.1090606@gmail.com> <4F628586.8030308@oddbird.net> Message-ID: On 16 March 2012 00:12, Carl Meyer wrote: > Changing the directory name is in fact a new and different (and much > more invasive) special case, because distutils et al install scripts > there, and that directory name is part of the distutils install scheme. > Installers don't care where the Python binary is located, so moving it > in with the other scripts has very little impact. Two thoughts: 1. The incompatibilities between platforms are precisely the problem that sysconfig is designed to solve, aren't they? So tools in Python will either use sysconfig (and be correct regardless of layout) or should be encouraged to change to use sysconfig (so they are layout-independent). And tools *not* in Python will be platform-specific anyway (I assume no-one is writing Perl scripts to manipulate their Python installation :-)) 2. The differences in layout between an installed Python, uninstalled builds and virtualenvs, on the same platform, are more annoying in practice than any cross-platform differences (at least for me). But again, these are known issues that can be dealt with easily enough (trivially via sysconfig from within Python). If I were "tidying up", I would consider renaming Scripts to "bin" on Windows, and putting the Python executables in there (so there's only one directory to add to PATH, and it uses the common name "bin" rather than a name that implies that it doesn't contain exes).
But that offers no practical benefit, and as Mark says does break existing code, so I don't think it's worth it. If you can get Guido to lend you the time machine keys, I'd support putting it in from Python 1.5 onwards :-) Paul. From regebro at gmail.com Fri Mar 16 09:46:55 2012 From: regebro at gmail.com (Lennart Regebro) Date: Fri, 16 Mar 2012 09:46:55 +0100 Subject: [Python-Dev] PEP 405 (built-in virtualenv) status In-Reply-To: References: <4F626278.7030701@oddbird.net> Message-ID: On Fri, Mar 16, 2012 at 05:55, Tres Seaver wrote: > ExtensionClass and Acquisition would fit the bill, except they aren't > ported to Python3 (Acquisition needs the headers from ExtensionClass). And there were no plans to port them either, really. :-) Only Zope 2 uses them afaik? Or? //Lennart From ncoghlan at gmail.com Fri Mar 16 12:54:43 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 16 Mar 2012 21:54:43 +1000 Subject: [Python-Dev] cpython: PEP 417: Adding unittest.mock In-Reply-To: References: Message-ID: On Thu, Mar 15, 2012 at 6:27 AM, Michael Foord wrote: > On the topic of docs.... mock documentation is about eight pages long. My intention was to strip this down to just the api documentation, along with a link to the docs on my site for further examples and so on. I was encouraged here at the sprints to include the full documentation instead (minus the mock library comparison page and the front page can be cut down). So this is what I am now intending to include. It does mean the mock documentation will be "extensive". Don't forget you also have the option of splitting out a separate HOWTO tutorial section, leaving the main docs as a pure API reference. (I personally find that style easier to use than the ones which try to address both needs in the main module docs). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com |
Brisbane, Australia From tseaver at palladion.com Fri Mar 16 14:50:13 2012 From: tseaver at palladion.com (Tres Seaver) Date: Fri, 16 Mar 2012 09:50:13 -0400 Subject: [Python-Dev] PEP 405 (built-in virtualenv) status In-Reply-To: References: <4F626278.7030701@oddbird.net> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 03/16/2012 04:46 AM, Lennart Regebro wrote: > On Fri, Mar 16, 2012 at 05:55, Tres Seaver > wrote: >> ExtensionClass and Acquisition would fit the bill, except they >> aren't ported to Python3 (Acquisition needs the headers from >> ExtensionClass). > > And there were no plans to port them either, really. :-) Only Zope 2 > uses them afaik? Or? I don't know of plans to port them, or even how hard the port would be. The "headers needed" problem is a tricky one, and they do exercise it. Tres. - -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iEYEARECAAYFAk9jRRUACgkQ+gerLs4ltQ4cpwCgnLehMsKDV8BKMkix+ZitRnPA LHgAnRLZdjc7+I9/rkepO6iNXEBg7uQo =JmOT -----END PGP SIGNATURE----- From guido at python.org Fri Mar 16 15:57:15 2012 From: guido at python.org (Guido van Rossum) Date: Fri, 16 Mar 2012 07:57:15 -0700 Subject: [Python-Dev] Unpickling py2 str as py3 bytes (and vice versa) - implementation (issue #6784) In-Reply-To: References: <8DE609C2-0FF4-4412-B26E-B453C67EF0F0@voidspace.org.uk> Message-ID: On Thu, Mar 15, 2012 at 9:48 PM, Tres Seaver wrote: > On 03/13/2012 06:49 PM, Nick Coghlan wrote: >> On Wed, Mar 14, 2012 at 8:08 AM, Guido van Rossum >> wrote: >>> If you can solve your problem with a suitably hacked Unpickler >>> subclass that's fine with me, but I would personally use this >>> opportunity to change the app to some other serialization format >>> that is perhaps less 
general but more robust than pickle. I've been >>> bitten by too many pickle-related problems to recommend pickle to >>> anyone... >> >> It's fine for in-memory storage of (almost) arbitrary objects (I use >> it to stash things in a memory backed sqlite DB via SQLAlchemy) and >> for IPC, but yeah, for long-term cross-version persistent storage, >> I'd be looking to something like JSON rather than pickle. > > Note the Zope ecosystem (including Plone) is an *enormous* installed > base[1] using pickle for storage of data over many years and multiple > versions of Python: until this point, it has always been possible to > arrange for old pickles to work (e.g., by providing aliases for missing > module names, etc.). > > [1] tens of thousands of Zope-based sites in production, including very > high-profile ones: http://plone.org/support/sites Don't I know it. :-) So do you need any help porting to Python 3 or not? The OP didn't mention Zope. -- --Guido van Rossum (python.org/~guido) From van.lindberg at gmail.com Fri Mar 16 16:09:13 2012 From: van.lindberg at gmail.com (VanL) Date: Fri, 16 Mar 2012 10:09:13 -0500 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F627916.7060705@gmail.com> References: <4F603B76.4050004@gmail.com> <4F60C169.9030404@gmail.com> <4F611E39.5070605@skippinet.com.au> <4F6126AB.7020309@gmail.com> <4F612A2E.9030805@gmail.com> <4F627916.7060705@gmail.com> Message-ID: <4F635799.5050402@gmail.com> On 3/15/2012 6:19 PM, Mark Hammond wrote: > [At least I think that is your proposal - can you confirm that the > directory layouts in your proposal exactly match the directory > layouts in virtual envs on all other platforms? ie, that > inconsistencies like the python{py_version_short} suffix will not > remain?] Yes, that is the ideal. > Also, I'm still yet to see what exactly becomes "easier" in your > model?
As you mention, most Python code will not care; distutils and > other parts of the stdlib will "do the right thing" - and indeed, > already do for Windows. Again, I have stated my use case - cross platform development where the tools use the directory layout in some way, or where the environment should be checked into source control. > It is a shame this wasn't done as part of py3k in the first place. > But I assume you would be looking at Python 3.4 for this, right? No, I would like this for 3.3. From Van.Lindberg at haynesboone.com Fri Mar 16 16:08:33 2012 From: Van.Lindberg at haynesboone.com (Lindberg, Van) Date: Fri, 16 Mar 2012 15:08:33 +0000 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F628586.8030308@oddbird.net> References: <4F603B76.4050004@gmail.com> <4F60C169.9030404@gmail.com> <4F611E39.5070605@skippinet.com.au> <4F6126AB.7020309@gmail.com> <4F612A2E.9030805@gmail.com> <4F627916.7060705@gmail.com> <4F627FCE.4040309@oddbird.net> <4F6284E0.1090606@gmail.com> <4F628586.8030308@oddbird.net> Message-ID: <4F63576A.4080308@gmail.com> Carl - > Changing the directory name is in fact a new and different (and much > more invasive) special case, because distutils et al install scripts > there, and that directory name is part of the distutils install scheme. > Installers don't care where the Python binary is located, so moving it > in with the other scripts has very little impact. So would changing the distutils install scheme in 3.3 - as defined and declared by distutils - lead to a change in your code? Alternatively stated, do you independently figure out that your virtualenv is on Windows and then put things in Scripts, etc, or do you use sysconfig? If sysconfig gave you different (consistent) values across platforms, how would that affect your code? Thanks, Van CIRCULAR 230 NOTICE: To ensure compliance with requirements imposed by U.S. Treasury Regulations, Haynes and Boone, LLP informs you that any U.S.
tax advice contained in this communication (including any attachments) was not intended or written to be used, and cannot be used, for the purpose of (i) avoiding penalties under the Internal Revenue Code or (ii) promoting, marketing or recommending to another party any transaction or matter addressed herein. CONFIDENTIALITY NOTICE: This electronic mail transmission is confidential, may be privileged and should be read or retained only by the intended recipient. If you have received this transmission in error, please immediately notify the sender and delete it from your system. From rdmurray at bitdance.com Fri Mar 16 16:33:09 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Fri, 16 Mar 2012 11:33:09 -0400 Subject: [Python-Dev] Raising assertions on wrong element types in ElementTree In-Reply-To: References: Message-ID: <20120316153310.2E7412500F8@webabinitio.net> On Fri, 16 Mar 2012 09:38:49 +0200, Eli Bendersky wrote: > 1. The behavior of append, insert and extend should be similar in this respect > 2. AssertionError is not the customary error in such case - TypeError > is much more suitable > 3. The C implementation of ElementTree actually raises TypeError in > all these methods, by virtue of using PyArg_ParseTuple > 4. The Python implementation (at least in 3.2) actually doesn't raise > even AssertionError in extend - this was commented out > > The suggestion for 3.3 (where compatibility between the C and Python > implementations gets even more important, since the C one is now being > imported by default when available) is to raise TypeError in all 3 > methods in the Python implementation, to match the C implementation, > and to modify the documentation accordingly. > > There may appear to be a backwards compatibility issue here, since the doc > of extend mentions raising AssertionError - but as said above, the doc > is wrong, so no regressions in the code are to be expected. > > Does that sound reasonable (for 3.3)? Yes.
> Does it make sense to also fix this in 3.2/2.7? Or fix only the > documentation? Or not touch them at all? Our usual approach in cases like this is to not change it in the maint releases. Why risk breaking someone's code for no particular benefit? If you want some extra work you could add it as a deprecation warning, I suppose. --David From Van.Lindberg at haynesboone.com Fri Mar 16 16:17:18 2012 From: Van.Lindberg at haynesboone.com (Lindberg, Van) Date: Fri, 16 Mar 2012 15:17:18 +0000 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: References: <4F603B76.4050004@gmail.com> <4F60C169.9030404@gmail.com> <4F611E39.5070605@skippinet.com.au> <4F6126AB.7020309@gmail.com> <4F612A2E.9030805@gmail.com> <4F627916.7060705@gmail.com> <4F627FCE.4040309@oddbird.net> <4F6284E0.1090606@gmail.com> <4F628586.8030308@oddbird.net> Message-ID: <4F63597D.3020109@gmail.com> On 3/16/2012 3:38 AM, Paul Moore wrote: > On 16 March 2012 00:12, Carl Meyer wrote: >> Changing the directory name is in fact a new and different (and much >> more invasive) special case, because distutils et al install scripts >> there, and that directory name is part of the distutils install scheme. >> Installers don't care where the Python binary is located, so moving it >> in with the other scripts has very little impact. This is very interesting, as it seems to argue against Mark's point. If moving the Python binary is not an issue here, then would this change make it any more/less of an issue? > 1. The incompatibilities between platforms is precisely the problem > that sysconfig is designed to solve, isn't it? So tools in Python will > either use sysconfig (and be correct regardless of layout) or should > be encouraged to change to use sysconfig (so they are > layout-independent). Right. I want to change the default layout in sysconfig.cfg. > 2. 
The differences in layout between an installed Python, uninstalled > builds and virtualenvs, on the same platform, are more annoying in > practice than any cross-platform differences (at least for me). But > again, these are known issues that can be dealt with easily enough > (trivially via sysconfig from within Python). These differences are a major pain for me - and it doesn't make sense that they should need to be worked around each and every time. > If I were "tidying up", I would consider renaming Scripts to "bin" on > Windows, and putting the Python executables in there (so there's only > one directory to add to PATH, and it uses the common name "bin" rather > than a name that implies that it doesn't contain exes). But that > offers no practical benefit... This is not a "we should be consistent" argument - I know that would never fly. I do cross-platform dev all the time (develop on Windows and Mac, deploy on Linux) and so this bites me *every single time* I want to get a consistent layout between these three. That could be because I want my deployment environment to match my development environment(s), it could be because I need to introspect the layout to find some data, or because I want to check in an entire environment into source control. This is not purely aesthetics - this is an issue I deal with all the time. Thanks, Van CIRCULAR 230 NOTICE: To ensure compliance with requirements imposed by U.S. Treasury Regulations, Haynes and Boone, LLP informs you that any U.S. tax advice contained in this communication (including any attachments) was not intended or written to be used, and cannot be used, for the purpose of (i) avoiding penalties under the Internal Revenue Code or (ii) promoting, marketing or recommending to another party any transaction or matter addressed herein. CONFIDENTIALITY NOTICE: This electronic mail transmission is confidential, may be privileged and should be read or retained only by the intended recipient.
If you have received this transmission in error, please immediately notify the sender and delete it from your system. From merwok at netwok.org Fri Mar 16 16:51:29 2012 From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=) Date: Fri, 16 Mar 2012 16:51:29 +0100 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F63597D.3020109@gmail.com> References: <4F603B76.4050004@gmail.com> <4F60C169.9030404@gmail.com> <4F611E39.5070605@skippinet.com.au> <4F6126AB.7020309@gmail.com> <4F612A2E.9030805@gmail.com> <4F627916.7060705@gmail.com> <4F627FCE.4040309@oddbird.net> <4F6284E0.1090606@gmail.com> <4F628586.8030308@oddbird.net> <4F63597D.3020109@gmail.com> Message-ID: <4F636181.7050307@netwok.org> Hi, Le 16/03/2012 16:17, Lindberg, Van a écrit : > On 3/16/2012 3:38 AM, Paul Moore wrote: >> 1. The incompatibilities between platforms is precisely the problem >> that sysconfig is designed to solve, isn't it? So tools in Python will >> either use sysconfig (and be correct regardless of layout) or should >> be encouraged to change to use sysconfig (so they are >> layout-independent). > Right. I want to change the default layout in sysconfig.cfg. A few notes: - sysconfig was extracted from distutils to the top level in 2.7 and 3.2, but distutils does not use it (due to the Great Revert two years ago after it was decided at PyCon to freeze distutils and start distutils2); there are unfortunately a few subtle differences between the install schemes in sysconfig and distutils. So even if virtualenv uses sysconfig in 2.7/3.2+, I'm not sure that the venv's pip will install distutils-based projects in the right places. - packaging uses only sysconfig.cfg - I think a change to distutils install schemes in 3.3 would violate the freeze.
Regards From p.f.moore at gmail.com Fri Mar 16 16:53:21 2012 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 16 Mar 2012 15:53:21 +0000 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F63597D.3020109@gmail.com> References: <4F603B76.4050004@gmail.com> <4F60C169.9030404@gmail.com> <4F611E39.5070605@skippinet.com.au> <4F6126AB.7020309@gmail.com> <4F612A2E.9030805@gmail.com> <4F627916.7060705@gmail.com> <4F627FCE.4040309@oddbird.net> <4F6284E0.1090606@gmail.com> <4F628586.8030308@oddbird.net> <4F63597D.3020109@gmail.com> Message-ID: On 16 March 2012 15:17, Lindberg, Van wrote: > This is not a "we should be consistent" argument - I know that would > never fly. I do cross-platform dev all the time (develop on Windows and > Mac, deploy on Linux) and so this bites me *every single time* I want to > get a consistent layout between these three. That could be because I > want my deployment environment to match my development environment(s), > it could be because I need to introspect the layout to find some data, > or because I want to check in an entire environment into source control. The only way I can read this to make sense is that you somehow consider the Python installation as part of your development environment (you mentioned source control earlier in the thread - surely you don't manage your Python installation in source control - binaries, stdlib, etc?). I can't see why you would do this, and it certainly doesn't seem like a reasonable thing to do to me. Can you clarify? Paul. 
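The sysconfig-based lookup that Paul and Van refer to in this thread can be sketched as follows (a minimal illustration; the actual paths depend on the platform's install scheme):

```python
import sysconfig

# Ask the interpreter for its install layout instead of hard-coding
# platform-specific directory names like "Scripts" (Windows) or
# "bin" (POSIX).
paths = sysconfig.get_paths()

scripts_dir = paths["scripts"]   # where console scripts are installed
purelib_dir = paths["purelib"]   # site-packages for pure-Python code

print(scripts_dir)
print(purelib_dir)
```

A tool written this way keeps working whatever directory names the scheme uses, which is the layout-independence Paul argues for in his point 1.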
From raymond.hettinger at gmail.com Fri Mar 16 17:09:55 2012 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Fri, 16 Mar 2012 09:09:55 -0700 Subject: [Python-Dev] cpython: PEP 417: Adding unittest.mock In-Reply-To: References: Message-ID: <9EE751D8-EAA8-43D8-9C04-C2FC383CFC82@gmail.com> On Mar 16, 2012, at 4:54 AM, Nick Coghlan wrote: > Don't forgot you also have the option of splitting out a separate > HOWTO tutorial section, leaving the main docs as a pure API reference. > (I personally find that style easier to use than the ones which try to > address both needs in the main module docs). +1 The commingling of extensive examples with regular docs has made it difficult to lookup functionality in argparse for example. In contrast, the logging module's howtos were split-out to good effect. Raymond -------------- next part -------------- An HTML attachment was scrubbed... URL: From tseaver at palladion.com Fri Mar 16 17:20:27 2012 From: tseaver at palladion.com (Tres Seaver) Date: Fri, 16 Mar 2012 12:20:27 -0400 Subject: [Python-Dev] Unpickling py2 str as py3 bytes (and vice versa) - implementation (issue #6784) In-Reply-To: References: <8DE609C2-0FF4-4412-B26E-B453C67EF0F0@voidspace.org.uk> Message-ID: <4F63684B.9060402@palladion.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 03/16/2012 10:57 AM, Guido van Rossum wrote: > On Thu, Mar 15, 2012 at 9:48 PM, Tres Seaver > wrote: >> On 03/13/2012 06:49 PM, Nick Coghlan wrote: >>> On Wed, Mar 14, 2012 at 8:08 AM, Guido van Rossum >>> wrote: >>>> If you can solve your problem with a suitably hacked Unpickler >>>> subclass that's fine with me, but I would personally use this >>>> opportunity to change the app to some other serialization >>>> format that is perhaps less general but more robust than pickle. >>>> I've been bitten by too many pickle-related problems to >>>> recommend pickle to anyone... 
>>> >>> It's fine for in-memory storage of (almost) arbitrary objects (I >>> use it to stash things in a memory backed sqlite DB via >>> SQLAlchemy) and for IPC, but yeah, for long-term cross-version >>> persistent storage, I'd be looking to something like JSON rather >>> than pickle. >> >> Note the Zope ecosystem (including Plone) is an *enormous* >> installed base[1] using pickle for storage of data over many years >> and multiple versions of Python: until this point, it has always >> been possible to arrange for old pickles to work (e.g., by providing >> aliases for missing module names, etc.). >> >> [1] tens of thousands of Zope-based sites in production, including >> very high-profile ones: http://plone.org/support/sites > > Don't I know it. :-) > > So do you need any help porting to Python 3 or not? The OP didn't > mention Zope. ZODB is actually the biggest / most important non-ported item in the Zope ecosystem. We are close to a pure-Python version of persistent and its pickle cache, and have some work done toward pure-Python BTrees. Tres.
- -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iEYEARECAAYFAk9jaEsACgkQ+gerLs4ltQ4bQACfcRxaeRMLnmDRzWL2c537VLvC xsMAn2Cjql4Wvavr0MNyQxS58Af4EwMf =UT5J -----END PGP SIGNATURE----- From Van.Lindberg at haynesboone.com Fri Mar 16 17:22:49 2012 From: Van.Lindberg at haynesboone.com (Lindberg, Van) Date: Fri, 16 Mar 2012 16:22:49 +0000 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: References: <4F603B76.4050004@gmail.com> <4F60C169.9030404@gmail.com> <4F611E39.5070605@skippinet.com.au> <4F6126AB.7020309@gmail.com> <4F612A2E.9030805@gmail.com> <4F627916.7060705@gmail.com> <4F627FCE.4040309@oddbird.net> <4F6284E0.1090606@gmail.com> <4F628586.8030308@oddbird.net> <4F63597D.3020109@gmail.com> Message-ID: <4F6368D9.5050200@gmail.com> On 3/16/2012 10:53 AM, Paul Moore wrote: > The only way I can read this to make sense is that you somehow > consider the Python installation as part of your development > environment (you mentioned source control earlier in the thread - > surely you don't manage your Python installation in source control - > binaries, stdlib, etc?). I can't see why you would do this, and it > certainly doesn't seem like a reasonable thing to do to me. > > Can you clarify? I don't check in the python binary itself, nor the stdlib, but I *do* check in the whole "installation", including the binaries directory. I like having my deploy environment exactly match my develop environment. So if I do have an executable program, I put it in the binaries directory and check it in. My .hgignore includes "python", "python.exe", pip, easy_install, etc. - things that are "owned by the installation" - but it includes everything else.
As for the stdlib, I don't check that in, so that portion of the proposal (standardize lib naming) is nice to have, but not essential to me. For example, in the following environment:

env/
    bin/
        python
        pip
        easy_install
        my_script
    lib/
        [stuff]
    data/
        [stuff]
    src/
        my_package

I would include bin/my_script, src/, and data/ in my version control. This breaks cross-platform development if "bin" is named "Scripts". CIRCULAR 230 NOTICE: To ensure compliance with requirements imposed by U.S. Treasury Regulations, Haynes and Boone, LLP informs you that any U.S. tax advice contained in this communication (including any attachments) was not intended or written to be used, and cannot be used, for the purpose of (i) avoiding penalties under the Internal Revenue Code or (ii) promoting, marketing or recommending to another party any transaction or matter addressed herein. CONFIDENTIALITY NOTICE: This electronic mail transmission is confidential, may be privileged and should be read or retained only by the intended recipient. If you have received this transmission in error, please immediately notify the sender and delete it from your system.
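The per-platform divergence Van describes is what forces tools to special-case the scripts directory today; a minimal sketch of the branch a layout-aware tool currently ends up writing (directory names per the current installers):

```python
import os
import sys

def scripts_dir(prefix=sys.prefix):
    """Locate the interpreter's scripts directory under *prefix*.

    Windows installs use "Scripts" next to python.exe, while POSIX
    layouts use "bin" - exactly the asymmetry discussed above.
    """
    subdir = "Scripts" if os.name == "nt" else "bin"
    return os.path.join(prefix, subdir)

print(scripts_dir())
```

With a single cross-platform layout, this branch (and every copy of it in third-party tools) would simply go away.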
From v+python at g.nevcal.com Fri Mar 16 17:57:01 2012 From: v+python at g.nevcal.com (Glenn Linderman) Date: Fri, 16 Mar 2012 09:57:01 -0700 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F6368D9.5050200@gmail.com> References: <4F603B76.4050004@gmail.com> <4F60C169.9030404@gmail.com> <4F611E39.5070605@skippinet.com.au> <4F6126AB.7020309@gmail.com> <4F612A2E.9030805@gmail.com> <4F627916.7060705@gmail.com> <4F627FCE.4040309@oddbird.net> <4F6284E0.1090606@gmail.com> <4F628586.8030308@oddbird.net> <4F63597D.3020109@gmail.com> <4F6368D9.5050200@gmail.com> Message-ID: <4F6370DD.1000207@g.nevcal.com> On 3/16/2012 9:22 AM, Lindberg, Van wrote: > On 3/16/2012 10:53 AM, Paul Moore wrote: >> > The only way I can read this to make sense is that you somehow >> > consider the Python installation as part of your development >> > environment (you mentioned source control earlier in the thread - >> > surely you don't manage your Python installation in source control - >> > binaries, stdlib, etc?). I can't see why you would do this, and it >> > certainly doesn't seem like a reasonable thing to do to me. >> > >> > Can you clarify? > I don't check in the python binary itself, nor the stdlib, but I *do* > check in the whole "installation", including the binaries directory. > > I like having my deploy environment exactly match my develop > environment.
So I think I'm finally beginning to see the underlying reason why Van is desiring this consistency: It is not that he wants to check in his installation of Python, but that he wants to check in his installation of his packages and scripts into a source control environment, and then be able to check out that source control environment into an installation of Python on another machine of a different architecture. In an environment where a source control system is pervasive and well used, this would be an effective deployment alternative to developing a packaging/distribution solution using distutils, distutils2, packaging, easy_install, eggs, or peanuts, or any other such scheme. But! Source control environments don't lend themselves to being used for anything except exact replication of file and directory structure, so when the different architectures have different directory structures, this deployment technique cannot easily work... except, as Van has discussed, by tweaking the development machine's environment to match that of the deployment machines... and that only works in the case where the deployment happens to only one architecture, and the development machine can be tweaked to match... but deploying to multiple machines having different architectures and directory structures would be impossible using the source control deployment technique, because of the different directory structures. If Van stated this goal in this thread, I missed it, and I think it is the missing link in the discussions. If I'm wrong, apologies for the noise. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From status at bugs.python.org Fri Mar 16 18:07:37 2012 From: status at bugs.python.org (Python tracker) Date: Fri, 16 Mar 2012 18:07:37 +0100 (CET) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20120316170737.4A0331DF08@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2012-03-09 - 2012-03-16) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. Issues counts and deltas: open 3337 (+19) closed 22779 (+75) total 26116 (+94) Open issues with patches: 1423 Issues opened (69) ================== #10484: http.server.is_cgi fails to handle CGI URLs containing PATH_IN http://bugs.python.org/issue10484 reopened by v+python #12568: Add functions to get the width in columns of a character http://bugs.python.org/issue12568 reopened by pitrou #14202: The docs of xml.dom.pulldom are almost nonexistent http://bugs.python.org/issue14202 reopened by eric.araujo #14243: NamedTemporaryFile unusable under Windows http://bugs.python.org/issue14243 opened by dabrahams #14245: float rounding examples in FAQ are outdated http://bugs.python.org/issue14245 opened by zbysz #14249: unicodeobject.c: aliasing warnings http://bugs.python.org/issue14249 opened by skrah #14250: for string patterns regex.flags is never equal to 0 http://bugs.python.org/issue14250 opened by py.user #14254: IDLE - shell restart during readline does not reset readline http://bugs.python.org/issue14254 opened by serwy #14255: tempfile.gettempdir() didn't return the path with correct case http://bugs.python.org/issue14255 opened by ??????.??? 
#14260: re.groupindex available for modification and continues to work http://bugs.python.org/issue14260 opened by py.user #14261: Cleanup in smtpd module http://bugs.python.org/issue14261 opened by maker #14262: Allow using decimals as arguments to `timedelta` http://bugs.python.org/issue14262 opened by cool-RR #14263: switch_index_if_fails fails on py2 http://bugs.python.org/issue14263 opened by tarek #14264: Comparison bug in distutils2.version http://bugs.python.org/issue14264 opened by tarek #14265: Fully qualified test name in failure output http://bugs.python.org/issue14265 opened by michael.foord #14266: pyunit script as shorthand for python -m unittest http://bugs.python.org/issue14266 opened by michael.foord #14268: _move_file is broken because of a bad mock http://bugs.python.org/issue14268 opened by tarek #14269: SMTPD server does not enforce client starting mail transaction http://bugs.python.org/issue14269 opened by fruitnuke #14270: Can't install a project in a specific directory http://bugs.python.org/issue14270 opened by mlhamel #14273: distutils2: logging handler not properly initialized http://bugs.python.org/issue14273 opened by tarek #14274: pysetup does not look at requires.txt http://bugs.python.org/issue14274 opened by tarek #14275: pysetup create doesn't handle install requirements http://bugs.python.org/issue14275 opened by janjaapdriessen #14276: installing latest version of a project http://bugs.python.org/issue14276 opened by janjaapdriessen #14277: time.monotonic docstring does not mention the time unit return http://bugs.python.org/issue14277 opened by nicholas.riley #14279: packaging.pypi should support flat directories of distribution http://bugs.python.org/issue14279 opened by j1m #14280: packaging.pypi should not require checksums http://bugs.python.org/issue14280 opened by j1m #14285: Traceback wrong on ImportError while executing a package http://bugs.python.org/issue14285 opened by ms4py #14286: xxlimited.obj: unresolved 
external symbol __imp__PyObject_New http://bugs.python.org/issue14286 opened by skrah #14287: sys.stdin.readline and KeyboardInterrupt on windows http://bugs.python.org/issue14287 opened by miwa #14288: Make iterators pickleable http://bugs.python.org/issue14288 opened by krisvale #14290: Importing script as module causes ImportError with pickle.load http://bugs.python.org/issue14290 opened by rj3d #14292: OS X installer build script doesn't set $CXX, so it ends up as http://bugs.python.org/issue14292 opened by nicholas.riley #14293: Message methods delegated via __getattr__ inaccessible using s http://bugs.python.org/issue14293 opened by Brian.Jones #14294: Requirements are not properly copied into metatdata of dist-in http://bugs.python.org/issue14294 opened by Preston #14295: PEP 417: adding mock module http://bugs.python.org/issue14295 opened by michael.foord #14296: Compilation error on CentOS 5.8 http://bugs.python.org/issue14296 opened by Alzakath #14297: Custom string formatter doesn't work like builtin str.format http://bugs.python.org/issue14297 opened by PaulMcMillan #14299: OS X installer build script: permissions not ensured http://bugs.python.org/issue14299 opened by nicholas.riley #14300: dup_socket() on Windows should use WSA_FLAG_OVERLAPPED http://bugs.python.org/issue14300 opened by sbt #14301: xmlrpc client transport and threading problem http://bugs.python.org/issue14301 opened by kees #14302: Move python.exe to bin/ http://bugs.python.org/issue14302 opened by brian.curtin #14303: Incorrect documentation for socket.py on linux http://bugs.python.org/issue14303 opened by Shane.Hansen #14304: Implement utf-8-bmp codec http://bugs.python.org/issue14304 opened by asvetlov #14306: try/except block is both efficient and expensive? 
http://bugs.python.org/issue14306 opened by tshepang #14307: Make subclassing SocketServer simpler for non-blocking framewo http://bugs.python.org/issue14307 opened by krisvale #14308: '_DummyThread' object has no attribute '_Thread__block' http://bugs.python.org/issue14308 opened by Dustin.Kirkland #14309: Deprecate time.clock() http://bugs.python.org/issue14309 opened by haypo #14310: Socket duplication for windows http://bugs.python.org/issue14310 opened by krisvale #14311: ConfigParser does not parse utf-8 files with BOM bytes http://bugs.python.org/issue14311 opened by Sean.Wang #14313: zipfile does not unpack files from archive (files extracted ha http://bugs.python.org/issue14313 opened by fidoman #14314: logging smtp handler (and test) timeout issue http://bugs.python.org/issue14314 opened by r.david.murray #14315: zipfile.ZipFile() unable to open zip File http://bugs.python.org/issue14315 opened by pleed #14316: Broken link in grammar.rst http://bugs.python.org/issue14316 opened by berker.peksag #14318: clarify "may not" in time.steady docs http://bugs.python.org/issue14318 opened by Jim.Jewett #14319: cleanup index switching mechanism on packaging.pypi http://bugs.python.org/issue14319 opened by alexis #14322: More test coverage for hmac http://bugs.python.org/issue14322 opened by packetslave #14323: Normalize math precision in RGB/YIQ conversion http://bugs.python.org/issue14323 opened by packetslave #14324: Do not rely on AC_RUN_IFELSE tests in the configury http://bugs.python.org/issue14324 opened by doko #14325: Stop using the garbage collector to manage the lifetime of the http://bugs.python.org/issue14325 opened by exarkun #14326: IDLE - allow shell to support different locales http://bugs.python.org/issue14326 opened by serwy #14327: replace use of uname in the configury with macros set by AC_CA http://bugs.python.org/issue14327 opened by doko #14328: Add keyword-only parameter support to PyArg_ParseTupleAndKeywo http://bugs.python.org/issue14328 
opened by larry #14329: proxy_bypass_macosx_sysconf does not handle singel ip addresse http://bugs.python.org/issue14329 opened by Serge.Droz #14330: do not use the host python for cross builds http://bugs.python.org/issue14330 opened by doko #14331: Python/import.c uses a lot of stack space due to MAXPATHLEN http://bugs.python.org/issue14331 opened by gregory.p.smith #14332: difflib.ndiff appears to ignore linejunk argument http://bugs.python.org/issue14332 opened by patena #14333: queue unittest errors http://bugs.python.org/issue14333 opened by anacrolix #14335: Reimplement multiprocessing's ForkingPickler using dispatch_ta http://bugs.python.org/issue14335 opened by sbt #1648923: HP-UX: -lcurses missing for readline.so http://bugs.python.org/issue1648923 reopened by r.david.murray Most recent 15 issues with no replies (15) ========================================== #14335: Reimplement multiprocessing's ForkingPickler using dispatch_ta http://bugs.python.org/issue14335 #14333: queue unittest errors http://bugs.python.org/issue14333 #14329: proxy_bypass_macosx_sysconf does not handle singel ip addresse http://bugs.python.org/issue14329 #14326: IDLE - allow shell to support different locales http://bugs.python.org/issue14326 #14323: Normalize math precision in RGB/YIQ conversion http://bugs.python.org/issue14323 #14319: cleanup index switching mechanism on packaging.pypi http://bugs.python.org/issue14319 #14318: clarify "may not" in time.steady docs http://bugs.python.org/issue14318 #14315: zipfile.ZipFile() unable to open zip File http://bugs.python.org/issue14315 #14304: Implement utf-8-bmp codec http://bugs.python.org/issue14304 #14303: Incorrect documentation for socket.py on linux http://bugs.python.org/issue14303 #14302: Move python.exe to bin/ http://bugs.python.org/issue14302 #14301: xmlrpc client transport and threading problem http://bugs.python.org/issue14301 #14299: OS X installer build script: permissions not ensured http://bugs.python.org/issue14299 
#14297: Custom string formatter doesn't work like builtin str.format http://bugs.python.org/issue14297 #14293: Message methods delegated via __getattr__ inaccessible using s http://bugs.python.org/issue14293 Most recent 15 issues waiting for review (15) ============================================= #14335: Reimplement multiprocessing's ForkingPickler using dispatch_ta http://bugs.python.org/issue14335 #14331: Python/import.c uses a lot of stack space due to MAXPATHLEN http://bugs.python.org/issue14331 #14330: do not use the host python for cross builds http://bugs.python.org/issue14330 #14329: proxy_bypass_macosx_sysconf does not handle singel ip addresse http://bugs.python.org/issue14329 #14328: Add keyword-only parameter support to PyArg_ParseTupleAndKeywo http://bugs.python.org/issue14328 #14327: replace use of uname in the configury with macros set by AC_CA http://bugs.python.org/issue14327 #14325: Stop using the garbage collector to manage the lifetime of the http://bugs.python.org/issue14325 #14324: Do not rely on AC_RUN_IFELSE tests in the configury http://bugs.python.org/issue14324 #14323: Normalize math precision in RGB/YIQ conversion http://bugs.python.org/issue14323 #14322: More test coverage for hmac http://bugs.python.org/issue14322 #14310: Socket duplication for windows http://bugs.python.org/issue14310 #14308: '_DummyThread' object has no attribute '_Thread__block' http://bugs.python.org/issue14308 #14307: Make subclassing SocketServer simpler for non-blocking framewo http://bugs.python.org/issue14307 #14300: dup_socket() on Windows should use WSA_FLAG_OVERLAPPED http://bugs.python.org/issue14300 #14299: OS X installer build script: permissions not ensured http://bugs.python.org/issue14299 Top 10 most discussed issues (10) ================================= #14127: add st_*time_ns fileds to os.stat(), add ns keyword to os.*uti http://bugs.python.org/issue14127 21 msgs #14200: Idle shell crash on printing non-BMP unicode character 
http://bugs.python.org/issue14200 20 msgs #12568: Add functions to get the width in columns of a character http://bugs.python.org/issue12568 15 msgs #5758: fileinput.hook_compressed returning bytes from gz file http://bugs.python.org/issue5758 13 msgs #8739: Update to smtpd.py to RFC 5321 http://bugs.python.org/issue8739 12 msgs #14202: The docs of xml.dom.pulldom are almost nonexistent http://bugs.python.org/issue14202 11 msgs #14245: float rounding examples in FAQ are outdated http://bugs.python.org/issue14245 10 msgs #10050: urllib.request still has old 2.x urllib primitives http://bugs.python.org/issue10050 9 msgs #10484: http.server.is_cgi fails to handle CGI URLs containing PATH_IN http://bugs.python.org/issue10484 9 msgs #2377: Replace __import__ w/ importlib.__import__ http://bugs.python.org/issue2377 8 msgs Issues closed (73) ================== #2486: Recode (parts of) decimal module in C http://bugs.python.org/issue2486 closed by mark.dickinson #2843: New methods for existing Tkinter widgets http://bugs.python.org/issue2843 closed by loewis #3835: tkinter goes into an infinite loop (pydoc.gui) http://bugs.python.org/issue3835 closed by loewis #4345: Implement nb_nonzero for PyTclObject http://bugs.python.org/issue4345 closed by loewis #8247: Can't Import Tkinter http://bugs.python.org/issue8247 closed by loewis #8942: __path__ attribute of modules loaded by zipimporter is unteste http://bugs.python.org/issue8942 closed by r.david.murray #8963: test_urllibnet failure http://bugs.python.org/issue8963 closed by orsenthil #9079: Make gettimeofday available in time module http://bugs.python.org/issue9079 closed by haypo #9257: cElementTree iterparse requires events as bytes; ElementTree u http://bugs.python.org/issue9257 closed by eli.bendersky #9574: allow whitespace around central '+' in complex constructor http://bugs.python.org/issue9574 closed by python-dev #10148: st_mtime differs after shutil.copy2 http://bugs.python.org/issue10148 closed by haypo 
#10522: test_telnet exception http://bugs.python.org/issue10522 closed by jackdied #10543: Test discovery (unittest) does not work with jython http://bugs.python.org/issue10543 closed by michael.foord #11082: ValueError: Content-Length should be specified http://bugs.python.org/issue11082 closed by orsenthil #11199: urllib hangs when closing connection http://bugs.python.org/issue11199 closed by python-dev #11261: urlopen breaks when data parameter is used. http://bugs.python.org/issue11261 closed by python-dev #12758: time.time() returns local time instead of UTC http://bugs.python.org/issue12758 closed by r.david.murray #12818: email.utils.formataddr incorrectly quotes parens inside quoted http://bugs.python.org/issue12818 closed by r.david.murray #13394: Patch to increase aifc lib test coverage http://bugs.python.org/issue13394 closed by ezio.melotti #13450: add assertions to implement the intent in ''.format_map test http://bugs.python.org/issue13450 closed by python-dev #13703: Hash collision security issue http://bugs.python.org/issue13703 closed by gregory.p.smith #13709: Capitalization mistakes in the documentation for ctypes http://bugs.python.org/issue13709 closed by eli.bendersky #13839: -m pstats should combine all the profiles given as arguments http://bugs.python.org/issue13839 closed by pitrou #13842: Cannot pickle Ellipsis or NotImplemented http://bugs.python.org/issue13842 closed by lukasz.langa #13964: os.utimensat() and os.futimes() should accept (sec, nsec), dro http://bugs.python.org/issue13964 closed by haypo #14062: UTF-8 Email Subject problem http://bugs.python.org/issue14062 closed by r.david.murray #14104: Implement time.monotonic() on Mac OS X http://bugs.python.org/issue14104 closed by haypo #14163: tkinter: problems with hello doc example http://bugs.python.org/issue14163 closed by asvetlov #14169: compiler.compile fails on "if" statement in attached file http://bugs.python.org/issue14169 closed by terry.reedy #14179: Test coverage for 
lib/re.py http://bugs.python.org/issue14179 closed by ezio.melotti #14180: Factorize code to convert int/float to time_t, timeval or time http://bugs.python.org/issue14180 closed by python-dev #14184: test_recursion_limit fails on OS X when compiled with clang http://bugs.python.org/issue14184 closed by ned.deily #14186: Link to PEP 3107 in "def" part of Language Reference http://bugs.python.org/issue14186 closed by python-dev #14207: ElementTree.ParseError - needs documentation and consistent C& http://bugs.python.org/issue14207 closed by eli.bendersky #14210: add filename completion to pdb http://bugs.python.org/issue14210 closed by python-dev #14230: Delegating generator is not always visible to debugging tools http://bugs.python.org/issue14230 closed by python-dev #14232: obmalloc: mmap() returns MAP_FAILED on error, not 0 http://bugs.python.org/issue14232 closed by python-dev #14234: CVE-2012-0876 (hash table collisions CPU usage DoS) for embedd http://bugs.python.org/issue14234 closed by gregory.p.smith #14237: Special sequences \A and \Z don't work in character set [] http://bugs.python.org/issue14237 closed by georg.brandl #14238: python shouldn't need username in passwd database http://bugs.python.org/issue14238 closed by eric.araujo #14239: Uninitialised variable in _PyObject_GenericSetAttrWithDict http://bugs.python.org/issue14239 closed by benjamin.peterson #14242: Make subprocess.Popen aware of $SHELL http://bugs.python.org/issue14242 closed by gregory.p.smith #14244: No information about behaviour with groups in pattern in the d http://bugs.python.org/issue14244 closed by python-dev #14246: Accelerated ETree XMLParser cannot handle io.StringIO http://bugs.python.org/issue14246 closed by eli.bendersky #14247: "in" operator doesn't return boolean http://bugs.python.org/issue14247 closed by georg.brandl #14248: Typo in "What???s New In Python 3.3": "comparaison" http://bugs.python.org/issue14248 closed by python-dev #14251: HTMLParser decode issue 
http://bugs.python.org/issue14251 closed by ezio.melotti #14252: subprocess.Popen.terminate() inconsistent behavior on Windows http://bugs.python.org/issue14252 closed by pitrou #14253: print() encodes characters to native system encoding http://bugs.python.org/issue14253 closed by loewis #14256: test_logging fails if zlib is not present http://bugs.python.org/issue14256 closed by python-dev #14257: minor error in glossary wording regarding __hash__ http://bugs.python.org/issue14257 closed by python-dev #14258: Better explain re.LOCALE and re.UNICODE for \S and \W http://bugs.python.org/issue14258 closed by python-dev #14259: re.finditer() doesn't accept keyword arguments http://bugs.python.org/issue14259 closed by python-dev #14267: TimedRotatingFileHandler chooses wrong file name due to daylig http://bugs.python.org/issue14267 closed by python-dev #14271: remove setup.py from the doc http://bugs.python.org/issue14271 closed by eric.araujo #14272: ast.c: windows compile error http://bugs.python.org/issue14272 closed by skrah #14278: email.utils.localtime throws exception if time.daylight is Fal http://bugs.python.org/issue14278 closed by r.david.murray #14281: Add unit test for cgi.escape method http://bugs.python.org/issue14281 closed by python-dev #14282: lib2to3.fixer_util.touch_import('__future__', ...) 
can lead to http://bugs.python.org/issue14282 closed by benjamin.peterson #14283: match.pos describes that match object has methods search() and http://bugs.python.org/issue14283 closed by python-dev #14284: unicodeobject error on macosx in build process http://bugs.python.org/issue14284 closed by ezio.melotti #14289: Prebuilt CHM file on Docs download page http://bugs.python.org/issue14289 closed by python-dev #14291: Regression in Python3 of email handling of unicode strings in http://bugs.python.org/issue14291 closed by r.david.murray #14298: account for dict randomization in Design & History FAQ http://bugs.python.org/issue14298 closed by python-dev #14305: fix typos http://bugs.python.org/issue14305 closed by python-dev #14312: Convert PEP 7 to reStructuredText for readability purposes http://bugs.python.org/issue14312 closed by georg.brandl #14317: index.simple module lacking in distutils2 http://bugs.python.org/issue14317 closed by alexis #14320: set.add can return boolean indicate newly added item http://bugs.python.org/issue14320 closed by rhettinger #14321: Do not run pgen during the build if files are up to date http://bugs.python.org/issue14321 closed by doko #14334: Crash: getattr(type, '__getattribute__')(type, type) http://bugs.python.org/issue14334 closed by benjamin.peterson #989712: Support using Tk without a mainloop http://bugs.python.org/issue989712 closed by asvetlov #1006238: cross compile patch http://bugs.python.org/issue1006238 closed by doko #1178863: Variable.__init__ uses self.set(), blocking specialization http://bugs.python.org/issue1178863 closed by loewis From van.lindberg at gmail.com Fri Mar 16 18:25:10 2012 From: van.lindberg at gmail.com (VanL) Date: Fri, 16 Mar 2012 12:25:10 -0500 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F6370DD.1000207@g.nevcal.com> References: <4F603B76.4050004@gmail.com> <4F60C169.9030404@gmail.com> <4F611E39.5070605@skippinet.com.au> <4F6126AB.7020309@gmail.com> 
<4F612A2E.9030805@gmail.com> <4F627916.7060705@gmail.com> <4F627FCE.4040309@oddbird.net> <4F6284E0.1090606@gmail.com> <4F628586.8030308@oddbird.net> <4F63597D.3020109@gmail.com> <4F6368D9.5050200@gmail.com> <4F6370DD.1000207@g.nevcal.com> Message-ID: <4F637776.6040404@gmail.com> On 3/16/2012 11:57 AM, Glenn Linderman wrote: > So I think I'm finally beginning to see the underlying reason why Van is > desiring this consistency: It is not that he wants to check in his > installation of Python, but that he wants to check in his installation > of his packages and scripts into a source control environment, and then > be able to check out that source control environment into an > installation of Python on another machine of a different architecture. > In an environment where a source control system is pervasive and well > used, this would be an effective deployment alternative to developing a > packaging/distribution solution using distutils, distutels2, packaging, > easy_install, eggs, or peanuts, or any other such scheme. > > But! > > Source control environments don't lend themselves to being used for > anything except exact replication of file and directory structure, so > when the different architectures have different directory structures, > this deployment technique cannot easily work.... except, as Van has > discussed, by tweaking the development machine's environment to match > that of the deployment machines... and that only works in the case where > the deployment happens to only one architecture, and the development > machine can be tweaked to match... but deploying to multiple machine > having different architectures and directory structures would be > impossible using the source control deployment technique, because of the > different directory structures. This is exactly correct. 
From guido at python.org Fri Mar 16 20:11:30 2012 From: guido at python.org (Guido van Rossum) Date: Fri, 16 Mar 2012 12:11:30 -0700 Subject: [Python-Dev] Unpickling py2 str as py3 bytes (and vice versa) - implementation (issue #6784) In-Reply-To: <4F63684B.9060402@palladion.com> References: <8DE609C2-0FF4-4412-B26E-B453C67EF0F0@voidspace.org.uk> <4F63684B.9060402@palladion.com> Message-ID: On Fri, Mar 16, 2012 at 9:20 AM, Tres Seaver wrote: > On 03/16/2012 10:57 AM, Guido van Rossum wrote: >> On Thu, Mar 15, 2012 at 9:48 PM, Tres Seaver >> wrote: >>> On 03/13/2012 06:49 PM, Nick Coghlan wrote: >>>> On Wed, Mar 14, 2012 at 8:08 AM, Guido van Rossum >>>> wrote: >>>>> If you can solve your problem with a suitably hacked Unpickler >>>>> subclass that's fine with me, but I would personally use this >>>>> opportunity to change the app to some other serialization >>>>> format that is perhaps less general but more robust than pickle. 
>>>>> I've been bitten by too many pickle-related problems to >>>>> recommend pickle to anyone... >>>> >>>> It's fine for in-memory storage of (almost) arbitrary objects (I >>>> use it to stash things in a memory backed sqlite DB via >>>> SQLAlchemy) and for IPC, but yeah, for long-term cross-version >>>> persistent storage, I'd be looking to something like JSON rather >>>> than pickle. >>> >>> Note the Zope ecosystem (including Plone) is an *enormous* >>> installed base[1] using pickle for storage of data over many years >>> and multiple versions of Python: until this point, it has always >>> been possible to arrange for old pickles to work (e.g., by providing >>> aliases for missing module names, etc.). >>> >>> [1] tens of thousands of Zope-based sites in production, including >>> very high-profile ones: http://plone.org/support/sites >> >> Don't I know it. :-) >> >> So do you need any help porting to Python 3 or not? The OP didn't >> mention Zope. > > ZODB is actually the biggest / most important non-ported item in the > Zope ecosystem. We are close to a pure-Python version of persistent and > its pickle cache, and have some work done toward pure-Python BTrees. I take that as meaning "no, we don't need help, it's all under control." 
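[Editorial aside: the in-memory vs long-term trade-off Nick describes can be made concrete; a minimal sketch, illustrative only and not code from the thread:]

```python
# Why JSON tends to survive cross-version storage better than pickle:
# JSON serializes plain data as language-neutral text, while a pickle
# stream embeds Python-specific opcodes and class/module references
# that can break as the environment evolves.
import json
import pickle

record = {"name": "example", "tags": ["a", "b"], "count": 3}

as_json = json.dumps(record, sort_keys=True)   # readable, portable text
as_pickle = pickle.dumps(record)               # opaque, Python-only bytes

# Both round-trip today; only the JSON form stays readable and loadable
# from any language or future version.
assert json.loads(as_json) == record
assert pickle.loads(as_pickle) == record
```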
-- --Guido van Rossum (python.org/~guido) From tjreedy at udel.edu Fri Mar 16 20:28:18 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Fri, 16 Mar 2012 15:28:18 -0400 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F6368D9.5050200@gmail.com> References: <4F603B76.4050004@gmail.com> <4F60C169.9030404@gmail.com> <4F611E39.5070605@skippinet.com.au> <4F6126AB.7020309@gmail.com> <4F612A2E.9030805@gmail.com> <4F627916.7060705@gmail.com> <4F627FCE.4040309@oddbird.net> <4F6284E0.1090606@gmail.com> <4F628586.8030308@oddbird.net> <4F63597D.3020109@gmail.com> <4F6368D9.5050200@gmail.com> Message-ID: On 3/16/2012 12:22 PM, Lindberg, Van wrote:

> env/
>   bin/
>     python
>     pip
>     easy_install
>     my_script

In http://bugs.python.org/issue14302 Brian Curtin claims "After talks at PyCon with several people, python.exe will live in C:\Python33\bin rather than C:\Python33 to come more in line with the Unix layout. This will also simplify another issue with the Path option for the 3.3 installer as well as packaging's target directory for top-level scripts (used to be Scripts/, will be bin/)." -- Terry Jan Reedy From tjreedy at udel.edu Fri Mar 16 20:49:33 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Fri, 16 Mar 2012 15:49:33 -0400 Subject: [Python-Dev] Raising assertions on wrong element types in ElementTree In-Reply-To: <20120316153310.2E7412500F8@webabinitio.net> References: <20120316153310.2E7412500F8@webabinitio.net> Message-ID: On 3/16/2012 11:33 AM, R. David Murray wrote: > On Fri, 16 Mar 2012 09:38:49 +0200, Eli Bendersky wrote: >> 1. The behavior of append, insert and extend should be similar in this respect >> 2. AssertionError is not the customary error in such case - TypeError >> is much more suitable >> 3. The C implementation of ElementTree actually raises TypeError in >> all these methods, by virtue of using PyArg_ParseTuple >> 4. 
The Python implementation (at least in 3.2) actually doesn't raise >> even AssertionError in extend - this was commented out >> >> The suggestion for 3.3 (where compatibility between the C and Python >> implementations gets even more important, since the C one is now being >> imported by default when available) is to raise TypeError in all 3 >> methods in the Python implementation, to match the C implementation, >> and to modify the documentation accordingly. >> >> There may appear to be a backwards compatibility issue here, since the doc >> of extend mentions raising AssertionError - but as said above, the doc >> is wrong, so no regressions in the code are to be expected. >> >> Does that sound reasonable (for 3.3)? > > Yes. Third yes. >> Does it make sense to also fix this in 3.2/2.7? Or fix only the >> documentation? Or not touch them at all? I have no opinion about 2.7 as I have not checked what it currently says and does. In the 3.2 docs, we should remove the erroneous assertion about AssertionError. I think it would be good to also say that CElementTree raises TypeError for erroneous input but ElementTree does not, and that the latter mistake is fixed in 3.3. Messy reality makes for messy docs. > Our usual approach in cases like this is to not change it in the maint > releases. Why risk breaking someone's code for no particular benefit? > If you want some extra work you could add it as a deprecation warning, > I suppose. The deprecation warning would be that ignoring the error is deprecated ;-). I think this would be a good idea since it would only appear when someone is checking for how to change their code for 3.3. -- Terry Jan Reedy From rdmurray at bitdance.com Fri Mar 16 21:02:01 2012 From: rdmurray at bitdance.com (R. 
David Murray) Date: Fri, 16 Mar 2012 16:02:01 -0400 Subject: [Python-Dev] Raising assertions on wrong element types in ElementTree In-Reply-To: References: <20120316153310.2E7412500F8@webabinitio.net> Message-ID: <20120316200202.AE12D2500F8@webabinitio.net> On Fri, 16 Mar 2012 15:49:33 -0400, Terry Reedy wrote: > On 3/16/2012 11:33 AM, R. David Murray wrote: > > On Fri, 16 Mar 2012 09:38:49 +0200, Eli Bendersky wrote: > >> 1. The behavior of append, insert and extend should be similar in this respect > >> 2. AssertionError is not the customary error in such case - TypeError > >> is much more suitable > >> 3. The C implementation of ElementTree actually raises TypeError in > >> all these methods, by virtue of using PyArg_ParseTuple > >> 4. The Python implementation (at least in 3.2) actually doesn't raise > >> even AssertionError in extend - this was commented out > > > Our usual approach in cases like this is to not change it in the maint > > releases. Why risk breaking someone's code for no particular benefit? > > If you want some extra work you could add it as a deprecation warning, > > I suppose. > > The deprecation warning would be that ignoring the error is deprecated > ;-). I think this would be a good idea since it would only appear when > someone is checking for how to change their code for 3.3. Yes :). But concretely the deprecation warning is that if anyone has code that for some reason *works* with the python version of ElementTree while passing in a non-Element (duck typing?), that will no longer be allowed in 3.3. So it does seem worthwhile to do that. 
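[Editorial aside: the behaviour being standardized on is easy to observe directly; a minimal sketch follows. The exact error message varies between the C accelerator and the pure-Python implementation, and across versions; only the exception type is the point here.]

```python
# Sketch of the behaviour under discussion: append() should reject a
# non-Element with TypeError (as the C implementation already does via
# PyArg_ParseTuple), rather than AssertionError or silent acceptance.
import xml.etree.ElementTree as ET

root = ET.Element("root")
root.append(ET.Element("child"))   # a real Element: accepted

try:
    root.append("not-an-element")  # wrong type
except TypeError as exc:
    print("rejected with TypeError:", exc)
```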
--David From carl at oddbird.net Fri Mar 16 21:22:03 2012 From: carl at oddbird.net (Carl Meyer) Date: Fri, 16 Mar 2012 13:22:03 -0700 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F63576A.4080308@gmail.com> References: <4F603B76.4050004@gmail.com> <4F60C169.9030404@gmail.com> <4F611E39.5070605@skippinet.com.au> <4F6126AB.7020309@gmail.com> <4F612A2E.9030805@gmail.com> <4F627916.7060705@gmail.com> <4F627FCE.4040309@oddbird.net> <4F6284E0.1090606@gmail.com> <4F628586.8030308@oddbird.net> <4F63576A.4080308@gmail.com> Message-ID: <4F63A0EB.3060706@oddbird.net> Hi Van, On 03/16/2012 08:08 AM, Lindberg, Van wrote: >> Changing the directory name is in fact a new and different (and much >> more invasive) special case, because distutils et al install scripts >> there, and that directory name is part of the distutils install scheme. >> Installers don't care where the Python binary is located, so moving it >> in with the other scripts has very little impact. > > So would changing the distutils install scheme in 3.3 - as defined and > declared by distutils - lead to a change in your code? > > Alternatively stated, do you independently figure out that your > virtualenv is on Windows and then put things in Scripts, etc, or do you > use sysconfig? If sysconfig gave you different (consistent) values > across platforms, how would that affect your code? Both virtualenv and PEP 405 pyvenv figure out the platform at venv-creation time, and hard-code certain information about the correct layout for that platform (Scripts vs bin, as well as lib/pythonx.x vs Lib), so the internal layout of the venv matches the system layout on that platform. The key fact is that there is then no special-casing necessary when code runs within the virtualenv (particularly installer code); the same install scheme that would work in the system Python will also Just Work in the virtualenv. 
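[Editorial aside: the venv-creation-time detection Carl describes amounts to roughly the following. This is a simplified sketch, not virtualenv's or PEP 405's actual code; the real logic also handles Lib vs lib/pythonX.Y for the standard library, include directories, and more.]

```python
# Simplified sketch of choosing the platform-native layout once, at
# environment-creation time, so installer code running inside the
# environment needs no special cases afterwards.
import sys

def venv_layout(py_version="3.3"):
    if sys.platform == "win32":
        # Mirrors the Windows system layout: scripts in Scripts\,
        # pure-Python packages in Lib\site-packages.
        return {"scripts": "Scripts", "purelib": "Lib/site-packages"}
    # Mirrors the POSIX system layout: scripts in bin/, packages in
    # lib/pythonX.Y/site-packages.
    return {"scripts": "bin",
            "purelib": "lib/python%s/site-packages" % py_version}

print(venv_layout())
```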
I'm not concerned about changes to distutils/sysconfig install schemes to make them more compatible across platforms from the POV of virtualenv; we can easily update the current platform-detection code to do the right thing depending on both platform and Python version. I do share Éric's concern about whether distutils' legacy install schemes would be updated or not, and how this would affect backwards compatibility for older installer code, but that's pretty much orthogonal to virtualenv/pyvenv. I don't want to make the internal layout of a virtualenv differ from the system Python layout on the same platform, which (IIUC) was Mark's proposal. Hope that clarifies, Carl -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: OpenPGP digital signature URL: From valhallasw at arctus.nl Fri Mar 16 22:19:10 2012 From: valhallasw at arctus.nl (Merlijn van Deen) Date: Fri, 16 Mar 2012 22:19:10 +0100 Subject: [Python-Dev] Unpickling py2 str as py3 bytes (and vice versa) - implementation (issue #6784) In-Reply-To: References: <8DE609C2-0FF4-4412-B26E-B453C67EF0F0@voidspace.org.uk> Message-ID: Hi Guido, Let me start with thanking you for your long reply. It has clarified some points to me, but I am still not certain about some others. I hope I can clarify why I'm confused about this issue in the following. First of all, let me clarify that I wrote my original mail not as 'the guy who wants to serialize stuff' but as 'the guy who wonders what the best way to implement it in python is'. Of course, 'not' is a reasonable answer to that question. On 13 March 2012 23:08, Guido van Rossum wrote: > That was probably written before Python 3. Python 3 also dropped the > long-term backwards compatibilities for the language and stdlib. I am > certainly fine with adding a warning to the docs that this guarantee > does not apply to the Python 2/3 boundary.
But I don't think we should > map 8-bit str instances from Python 2 to bytes in Python 3. Yes, backwards compatibility was dropped, but the current pickle module tries to work around this by using a module mapping [1] and aids in loading 8-bit str instances by asking for an encoding [2]. Last, but not least, we can /write/ old version pickles, for which the same module mapping is used, but in reverse. As such, the module suggests in many ways that it should be possible to interchange pickles between python 2 and python 3. > My snipe was mostly in reference to the many other things that can go > wrong with pickled data as your environment evolves (...) I understand your point. However, my interpretation of this issue always was 'if you only pickle built-in types, you'll be fine' - which is apparently wrong. Essentially - my point is this: considering the pickle module is already using several compatibility tricks and considering I am not the only one who would like to read binary data from a pickle in python 3 - even though it might not be the 'right' way to do it - what is there /against/ adding the possibility? Last but not least, this is what people are now doing instead: [3] s = pickle.load(f, encoding='latin1') b = s.encode('latin1') print(zlib.decompress(b)) Which hurts my eyes. In any case - again, thanks for taking the time to respond.
I hope I somewhat clarified why I was/am somewhat confused on the issue, and the reasons why I think that it is still a good idea ;-) Best, Merlijn [1] http://hg.python.org/cpython/file/8b2668e60aef/Lib/_compat_pickle.py [2] http://docs.python.org/dev/library/pickle.html#module-interface [3] http://stackoverflow.com/questions/4281619/unpicking-data-pickled-in-python-2-5-in-python-3-1-then-uncompressing-with-zlib From guido at python.org Sat Mar 17 00:57:34 2012 From: guido at python.org (Guido van Rossum) Date: Fri, 16 Mar 2012 16:57:34 -0700 Subject: [Python-Dev] Unpickling py2 str as py3 bytes (and vice versa) - implementation (issue #6784) In-Reply-To: References: <8DE609C2-0FF4-4412-B26E-B453C67EF0F0@voidspace.org.uk> Message-ID: OK, how about using encoding=bytes (yes, the type object!)? Or 'bytes' ? --Guido van Rossum (sent from Android phone) On Mar 16, 2012 2:19 PM, "Merlijn van Deen" wrote: > Hi Guido, > > Let me start with thanking you for your long reply. It has clarified > some points to me, but I am still not certain about some others. I > hope I can clarify why I'm confused about this issue in the following. > > First of all, let me clarify that I wrote my original mail not as 'the > guy who wants to serialize stuff' but as 'the guy who wonders what the > best way to implement it in python is'. Of course, 'not' is a > reasonable answer to that question. > > On 13 March 2012 23:08, Guido van Rossum wrote: > > That was probably written before Python 3. Python 3 also dropped the > > long-term backwards compatibilities for the language and stdlib. I am > > certainly fine with adding a warning to the docs that this guarantee > > does not apply to the Python 2/3 boundary. But I don't think we should > > map 8-bit str instances from Python 2 to bytes in Python 3. 
> > Yes, backwards compatibility was dropped, but the current pickle > module tries to work around this by using a module mapping [1] and > aids in loading 8-bit str instances by asking for an encoding [2]. > Last, but not least, we can /write/ old version pickles, for which > the same module mapping is used, but in reverse. As such, the module > suggests in many ways that it should be possible to interchange > pickles between python 2 and python 3. > > > My snipe was mostly in reference to the many other things that can go > > wrong with pickled data as your environment evolves (...) > I understand your point. However, my interpretation of this issue > always was 'if you only pickle built-in types, you'll be fine' - which > is apparently wrong. > > > Essentially - my point is this: considering the pickle module is > already using several compatibility tricks and considering I am not > the only one who would like to read binary data from a pickle in > python 3 - even though it might not be the 'right' way to do it - what > is there /against/ adding the possibility? > > Last but not least, this is what people are now doing instead: [1] > s = pickle.load(f, encoding='latin1') > b = s.encode('latin1') > print(zlib.decompress(b)) > > Which hurts my eyes. > > In any case - again, thanks for taking the time to respond. I hope I > somewhat clarified why I was/am somewhat confused on the issue, and > the reasons why I think that it is still a good idea ;-) > > Best, > Merlijn > > [1] http://hg.python.org/cpython/file/8b2668e60aef/Lib/_compat_pickle.py > [2] http://docs.python.org/dev/library/pickle.html#module-interface > [3] > http://stackoverflow.com/questions/4281619/unpicking-data-pickled-in-python-2-5-in-python-3-1-then-uncompressing-with-zlib > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mhammond at skippinet.com.au Sat Mar 17 01:43:11 2012 From: mhammond at skippinet.com.au (Mark Hammond) Date: Sat, 17 Mar 2012 11:43:11 +1100 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F63A0EB.3060706@oddbird.net> References: <4F603B76.4050004@gmail.com> <4F60C169.9030404@gmail.com> <4F611E39.5070605@skippinet.com.au> <4F6126AB.7020309@gmail.com> <4F612A2E.9030805@gmail.com> <4F627916.7060705@gmail.com> <4F627FCE.4040309@oddbird.net> <4F6284E0.1090606@gmail.com> <4F628586.8030308@oddbird.net> <4F63576A.4080308@gmail.com> <4F63A0EB.3060706@oddbird.net> Message-ID: <4F63DE1F.2040605@skippinet.com.au> On 17/03/2012 7:22 AM, Carl Meyer wrote: ... > I don't want to make the internal layout of a virtualenv differ from the > system Python layout on the same platform, which (IIUC) was Mark's proposal. Just to be clear, I made that suggestion in an effort to keep both myself and Van - that the Python executable would remain in the same place for installed Pythons in the interests of b/w compat, but change it in the virtual env in an effort to keep Van happy when working in such environments. I now fully concede that was a dumb idea :) Mark From skippy.hammond at gmail.com Sat Mar 17 01:53:07 2012 From: skippy.hammond at gmail.com (Mark Hammond) Date: Sat, 17 Mar 2012 11:53:07 +1100 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: References: Message-ID: <4F63E073.8070607@gmail.com> On 14/03/2012 6:43 AM, VanL wrote: > Following up on conversations at PyCon, I want to bring up one of my > personal hobby horses for change in 3.3: Fix install layout on Windows, > with a side order of making the PATH work better. ... For the sake of brain-storming, how about this: * All executables and scripts go into the root of the Python install. This directory is largely empty now - it is mainly a container for other directories. 
This would solve the problem of needing 2 directories on the PATH and mean existing tools which locate the executable would work fine. * If cross-platform consistency was desired, then we could consider making other platforms match this. However, if there are issues which might prevent this happening for other platforms (eg, the risk of breaking other 3rd party tools, conventions on the platform, etc) then it might be worth conceding these considerations apply equally to the Windows installs and we just live with this platform difference. Mark From brian at python.org Sat Mar 17 02:07:19 2012 From: brian at python.org (Brian Curtin) Date: Fri, 16 Mar 2012 20:07:19 -0500 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F63E073.8070607@gmail.com> References: <4F63E073.8070607@gmail.com> Message-ID: On Fri, Mar 16, 2012 at 19:53, Mark Hammond wrote: > For the sake of brain-storming, how about this: > > * All executables and scripts go into the root of the Python install. This > directory is largely empty now - it is mainly a container for other > directories. This would solve the problem of needing 2 directories on the > PATH and mean existing tools which locate the executable would work fine. How are existing tools locating the executable which would break with a change to bin? As I posted on the tracker, the way which pops in my mind would be to look for "C:\\Python%d%d" % (x, y) but that's already pretty broken. The people I talked to at PyCon about this were Dino from Microsoft and he nudged the guy next to him to ask the same question (I seem to remember this guy worked for an IDE) -- both of them just wanted to be sure they can still find python.exe's location via the registry, which will be fine. I think we'll add a key to go along with InstallPath - BinaryPath probably makes sense. > * If cross-platform consistency was desired, then we could consider making > other platforms match this.
However, if there are issues which might > prevent this happening for other platforms (eg, the risk of breaking other > 3rd party tools, conventions on the platform, etc) then it might be worth > conceding these considerations apply equally to the Windows installs and we > just live with this platform difference. I don't think we're going to defeat the Unix army with their fleets of distro packagers and torch wielding purists. If anyone's going to move, my money's on Windows. From mhammond at skippinet.com.au Sat Mar 17 02:12:11 2012 From: mhammond at skippinet.com.au (Mark Hammond) Date: Sat, 17 Mar 2012 12:12:11 +1100 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: References: <4F63E073.8070607@gmail.com> Message-ID: <4F63E4EB.2050301@skippinet.com.au> On 17/03/2012 12:07 PM, Brian Curtin wrote: > On Fri, Mar 16, 2012 at 19:53, Mark Hammond wrote: >> For the sake of brain-storming, how about this: >> >> * All executables and scripts go into the root of the Python install. This >> directory is largely empty now - it is mainly a container for other >> directories. This would solve the problem of needing 2 directories on the >> PATH and mean existing tools which locate the executable would work fine. > > How are existing tools locating the executable which would break with > a change to bin? As I posted on the tracker, the way which pops in my > mind would be to look for "C:\\Python%d%d" % (x, y) but that's already > pretty broken. As I just replied in the tracker :) They typically look up the InstallPath key in the registry and look for python.exe there - see the link to that activate.bat file I posted early in the thread. > The people I talked to at PyCon about this were Dino > from Microsoft and he nudged the guy next to him to ask the same > question (I seem to remember this guy worked for an IDE) -- both of > them just wanted to be sure they can still find python.exe's location > via the registry, which will be fine.
I think we'll add a key to go > along with InstallPath - BinaryPath probably makes sense. While I wouldn't object to that, it would seem redundant - if the whole point of this is to standardize the locations, then looking for "bin/python.exe" relative to the existing InstallPath key should also be reliable and hopefully permanent. At the risk of repeating myself too many times, my concern is with 3rd party tools who (a) will break with the new scheme and need to be updated and (b) even after updating will still need the burden of supporting both the old and the new schemes. I simply don't see the benefit that makes this worthwhile. >> * If cross-platform consistency was desired, then we could consider making >> other platforms match this. However, if there are issues which might >> prevent this happening for other platforms (eg, the risk of breaking other >> 3rd party tools, conventions on the platform, etc) then it might be worth >> conceding these considerations apply equally to the Windows installs and we >> just live with this platform difference. > > I don't think we're going to defeat the Unix army with their fleets of > distro packagers and torch wielding purists. If anyone's going to > move, my money's on Windows. Right - but why? Who wins? Where is the evidence of the pain this has caused people over the last 18 years or so since Windows has been doing this? Mark From carl at oddbird.net Sat Mar 17 02:25:45 2012 From: carl at oddbird.net (Carl Meyer) Date: Fri, 16 Mar 2012 18:25:45 -0700 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F63E073.8070607@gmail.com> References: <4F63E073.8070607@gmail.com> Message-ID: <4F63E819.3040108@oddbird.net> Hi Mark, On 03/16/2012 05:53 PM, Mark Hammond wrote: > * All executables and scripts go into the root of the Python install. > This directory is largely empty now - it is mainly a container for other > directories.
This would solve the problem of needing 2 directories on > the PATH and mean existing tools which locate the executable would work > fine. I hate to seem like I'm piling on now after panning your last brainstorm :-), but... this would be quite problematic for virtualenv users, many of whom do rely on the fact that the virtualenv "stuff" is confined to within a limited set of known subdirectories, and they can overlay a virtualenv and their own project code with just a few virtualenv directories vcs-ignored. I would prefer either the status quo or the proposed cross-platform harmonization. Carl -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: OpenPGP digital signature URL: From v+python at g.nevcal.com Sat Mar 17 02:36:12 2012 From: v+python at g.nevcal.com (Glenn Linderman) Date: Fri, 16 Mar 2012 18:36:12 -0700 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F63E819.3040108@oddbird.net> References: <4F63E073.8070607@gmail.com> <4F63E819.3040108@oddbird.net> Message-ID: <4F63EA8C.1060007@g.nevcal.com> On 3/16/2012 6:25 PM, Carl Meyer wrote: > Hi Mark, > > On 03/16/2012 05:53 PM, Mark Hammond wrote: >> * All executables and scripts go into the root of the Python install. >> This directory is largely empty now - it is mainly a container for other >> directories. This would solve the problem of needing 2 directories on >> the PATH and mean existing tools which locate the executable would work >> fine. > I hate to seem like I'm piling on now after panning your last brainstorm > :-), but... this would be quite problematic for virtualenv users, many > of whom do rely on the fact that the virtualenv "stuff" is confined to > within a limited set of known subdirectories, and they can overlay a > virtualenv and their own project code with just a few virtualenv > directories vcs-ignored. 
> > I would prefer either the status quo or the proposed cross-platform > harmonization. Yes, it seems fruitless to make directory structure changes without achieving cross-platform consistency. -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrew.svetlov at gmail.com Sat Mar 17 03:30:31 2012 From: andrew.svetlov at gmail.com (Andrew Svetlov) Date: Fri, 16 Mar 2012 19:30:31 -0700 Subject: [Python-Dev] lzma Message-ID: When I build python from sources I have no lzma support (module _lzma cannot be built). There are lzma packages installed in my Ubuntu 11.10 box: lzma, lzma-dev and lzma-sources. I can see lib files (headers are also can be found in linux-headers): andrew at tiktaalik ~/p/cpython> locate lzma.so /usr/lib/liblzma.so.2 /usr/lib/liblzma.so.2.0.0 I can live with that but if somebody can point me the way to build python with lzma support please do. -- Thanks, Andrew Svetlov From senthil at uthcode.com Sat Mar 17 03:38:18 2012 From: senthil at uthcode.com (Senthil Kumaran) Date: Fri, 16 Mar 2012 19:38:18 -0700 Subject: [Python-Dev] lzma In-Reply-To: References: Message-ID: <20120317023818.GA1977@mathmagic> On Fri, Mar 16, 2012 at 07:30:31PM -0700, Andrew Svetlov wrote: > When I build python from sources I have no lzma support (module _lzma > cannot be built). > I have liblzma-dev, liblzma2 and lzma packages installed on ubuntu. I am able to build and import lzma module. Thanks, Senthil From andrew.svetlov at gmail.com Sat Mar 17 03:44:44 2012 From: andrew.svetlov at gmail.com (Andrew Svetlov) Date: Fri, 16 Mar 2012 19:44:44 -0700 Subject: [Python-Dev] lzma In-Reply-To: <20120317023818.GA1977@mathmagic> References: <20120317023818.GA1977@mathmagic> Message-ID: liblzma-dev has solved my problem. Thank you, Senthil. 
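(A quick smoke test to confirm the freshly built module actually works — an illustrative snippet, not from the original mail:)

```python
import lzma

# Round-trip some data through the newly built _lzma module; an
# ImportError here means the liblzma development headers were missing
# at build time.
payload = b"hello, xz! " * 100
compressed = lzma.compress(payload)
assert lzma.decompress(compressed) == payload
print("lzma OK: %d -> %d bytes" % (len(payload), len(compressed)))
```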
On Fri, Mar 16, 2012 at 7:38 PM, Senthil Kumaran wrote: > > On Fri, Mar 16, 2012 at 07:30:31PM -0700, Andrew Svetlov wrote: >> When I build python from sources I have no lzma support (module _lzma >> cannot be built). >> > > I have liblzma-dev, liblzma2 and lzma packages installed on ubuntu. I > am able to build and import lzma module. > > Thanks, > Senthil > > -- Thanks, Andrew Svetlov From skippy.hammond at gmail.com Sat Mar 17 06:28:57 2012 From: skippy.hammond at gmail.com (Mark Hammond) Date: Sat, 17 Mar 2012 16:28:57 +1100 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F63E819.3040108@oddbird.net> References: <4F63E073.8070607@gmail.com> <4F63E819.3040108@oddbird.net> Message-ID: <4F642119.9060804@gmail.com> On 17/03/2012 12:25 PM, Carl Meyer wrote: > Hi Mark, > > On 03/16/2012 05:53 PM, Mark Hammond wrote: >> * All executables and scripts go into the root of the Python install. >> This directory is largely empty now - it is mainly a container for other >> directories. This would solve the problem of needing 2 directories on >> the PATH and mean existing tools which locate the executable would work >> fine. > > I hate to seem like I'm piling on now after panning your last brainstorm > :-), but... this would be quite problematic for virtualenv users, many > of whom do rely on the fact that the virtualenv "stuff" is confined to > within a limited set of known subdirectories, and they can overlay a > virtualenv and their own project code with just a few virtualenv > directories vcs-ignored. > > I would prefer either the status quo or the proposed cross-platform > harmonization. Yeah, fair enough.
I should have indicated it was 1/2 tongue-in-cheek, but figured it worth throwing it out there anyway :) OTOH, the part that wasn't tongue-in-cheek was the part that said "why not change the other platforms instead of windows" (then wait for the inevitable replies), then "so those same reasons apply to Windows too" - eg "fleets of torch wielding windows admins" :) Breaking the few tools I'm concerned about vs asking Van etc to continue taking the pain he feels isn't going to mean the end of the world for any of us. So given the stakes in this particular discussion aren't that high, I'll try and summarize the thread over the next few days (or someone can beat me to it if they prefer) and we can ask someone semi-impartial to make a decision. I'd be happy to nominate MvL if he feels so inclined (even though I haven't asked him). Cheers, Mark From valhallasw at arctus.nl Sat Mar 17 10:14:07 2012 From: valhallasw at arctus.nl (Merlijn van Deen) Date: Sat, 17 Mar 2012 10:14:07 +0100 Subject: [Python-Dev] Unpickling py2 str as py3 bytes (and vice versa) - implementation (issue #6784) In-Reply-To: References: <8DE609C2-0FF4-4412-B26E-B453C67EF0F0@voidspace.org.uk> Message-ID: On 17 March 2012 00:57, Guido van Rossum wrote: > OK, how about using encoding=bytes (yes, the type object!)? Or 'bytes' ? encoding=bytes makes (at least intuitive) sense to me; encoding='bytes' would imply there is an encoding with name 'bytes' that somehow does b''.decode('bytes')=b'', and would disallow anyone to create a new encoding with the name 'bytes'. I'll take a look at rewriting my patch later this weekend, and I'll also give the documentation (both the 'consider not using pickle for long time storage' and the docs for this setting) some thought.
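(For concreteness, a sketch of what this looks like end to end — illustrative, not from the original mail. The byte string below is what Python 2's pickle.dumps('abc', protocol=2) emits, and encoding='bytes' is the spelling under discussion, which is what was eventually adopted:)

```python
import pickle

# Protocol-2 pickle of the Python 2 str 'abc' (SHORT_BINSTRING opcode),
# hand-written here since we are running under Python 3.
py2_pickle = b'\x80\x02U\x03abcq\x00.'

# Today's eye-hurting workaround: decode as latin-1, then re-encode.
s = pickle.loads(py2_pickle, encoding='latin1')
assert s.encode('latin1') == b'abc'

# The proposed spelling: map Python 2 str directly to Python 3 bytes.
b = pickle.loads(py2_pickle, encoding='bytes')
assert b == b'abc'
```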
Best, Merlijn From stefan_ml at behnel.de Sat Mar 17 10:43:43 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sat, 17 Mar 2012 10:43:43 +0100 Subject: [Python-Dev] Unpickling py2 str as py3 bytes (and vice versa) - implementation (issue #6784) In-Reply-To: References: <8DE609C2-0FF4-4412-B26E-B453C67EF0F0@voidspace.org.uk> Message-ID: Guido van Rossum, 17.03.2012 00:57: > OK, how about using encoding=bytes (yes, the type object!)? Or 'bytes' ? In lxml, there was an "encoding=unicode" option that would let the XML/HTML/text serialisation function return a Unicode string. This was eventually deprecated in favour of "encoding='unicode'" when ElementTree gained this feature as well some years later. Arguably, this was because there no longer was a unicode type in the then existing Py3, but ... Anyway, given that there is at least some precedent, I'd prefer the name "bytes" over the bare bytes type. Regarding possible naming conflicts, I don't see any sense in calling an actual encoding "bytes" that does anything but returning bare bytes in a bytes object, as is the case here. Stefan From p.f.moore at gmail.com Sat Mar 17 12:57:00 2012 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 17 Mar 2012 11:57:00 +0000 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F642119.9060804@gmail.com> References: <4F63E073.8070607@gmail.com> <4F63E819.3040108@oddbird.net> <4F642119.9060804@gmail.com> Message-ID: On 17 March 2012 05:28, Mark Hammond wrote: >> I hate to seem like I'm piling on now after panning your last brainstorm >> :-), but... this would be quite problematic for virtualenv users, many >> of whom do rely on the fact that the virtualenv "stuff" is confined to >> within a limited set of known subdirectories, and they can overlay a >> virtualenv and their own project code with just a few virtualenv >> directories vcs-ignored. >> >> I would prefer either the status quo or the proposed cross-platform >> harmonization.
I work purely Windows-only, and I have a few scripts that manage virtualenvs for myself (for example, sort of a personal virtualenv-wrapper for Powershell - see https://bitbucket.org/pmoore/poshpy for a work-in-progress version). They have special casing for the differences in layout between standard installs, build directories, and virtualenvs. Changing the layout would cause these tools to need to change. In theory, putting python.exe/pythonw.exe into "Scripts" would simplify them (no need to cater for the cases where I need to put 2 directories on PATH), and changing Scripts -> bin would be trivial. But in practice, it would mean that I need to check (somehow) the Python version and adjust the layout used accordingly. As there is no way of knowing the Python version without running Python, this is too slow to be practical. So while the changes are in theory harmless in isolation (except the library locations - changing those *would* cause pain) the need to support multiple versions would make this a major issue for me. So, I prefer the status quo. If necessary, I can live with the change to rename scripts as bin and put the Python executables in there (some cost, but some small benefit as well) but I oppose changing the library locations (all cost, no gain for me). All of this presupposes that both the standard installer *and* virtualenv change. I suspect that having virtualenv respect the old layout for 3.2 and earlier, and the new one for 3.3+, could be messy, though, so that's not guaranteed, I guess... > Breaking the few tools I'm concerned about vs asking Van etc to continue > taking the pain he feels isn't going to mean the end of the world for any of > us. Agreed.
I can't say my pain is any more important than Van's, but the same applies the other way round :-) > So given the stakes in this particular discussion aren't that high, > I'll try and summarize the thread over the next few days (or someone can > beat me to it if they prefer) and we can ask someone semi-impartial to make > a decision. I'd be happy to nominate MvL if he feels so inclined (even > though I haven't asked him). Sounds reasonable. I'd suggest that Van should probably report any other examples where someone would benefit from this change - at the moment unless I've misread the thread, it seems like he's the only example of someone who would gain. Paul. From storchaka at gmail.com Sat Mar 17 15:07:27 2012 From: storchaka at gmail.com (Serhiy Storchaka) Date: Sat, 17 Mar 2012 16:07:27 +0200 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: References: <4F63E073.8070607@gmail.com> <4F63E819.3040108@oddbird.net> <4F642119.9060804@gmail.com> Message-ID: 17.03.12 13:57, Paul Moore wrote: > As there is no > way of knowing the Python version without running Python, this is too > slow to be practical. Cold start: $ time python3 --version Python 3.1.2 real 0m0.073s user 0m0.004s sys 0m0.004s Hot start: $ time python3 --version Python 3.1.2 real 0m0.007s user 0m0.004s sys 0m0.004s From valhallasw at arctus.nl Sat Mar 17 15:20:54 2012 From: valhallasw at arctus.nl (Merlijn van Deen) Date: Sat, 17 Mar 2012 15:20:54 +0100 Subject: [Python-Dev] Unpickling py2 str as py3 bytes (and vice versa) - implementation (issue #6784) In-Reply-To: References: <8DE609C2-0FF4-4412-B26E-B453C67EF0F0@voidspace.org.uk> Message-ID: On 17 March 2012 10:43, Stefan Behnel wrote: > In lxml, there was an "encoding=unicode" option that would let the > XML/HTML/text serialisation function return a Unicode string. This was > eventually deprecated in favour of "encoding='unicode'" when ElementTree > gained this feature as well some years later.
That's this issue: http://bugs.python.org/issue8047 The thread also suggests the options encoding=False and encoding=None Considering ET uses encoding="unicode" to signal 'don't encode', I agree with you that using encoding="bytes" to signal 'don't encode' would be sensible. However, ET /also/ allows encoding=str. What about allowing both encoding="bytes" /and/ encoding=bytes? Best, Merlijn From stefan_ml at behnel.de Sat Mar 17 15:28:17 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sat, 17 Mar 2012 15:28:17 +0100 Subject: [Python-Dev] Unpickling py2 str as py3 bytes (and vice versa) - implementation (issue #6784) In-Reply-To: References: <8DE609C2-0FF4-4412-B26E-B453C67EF0F0@voidspace.org.uk> Message-ID: Merlijn van Deen, 17.03.2012 15:20: > On 17 March 2012 10:43, Stefan Behnel wrote: >> In lxml, there was an "encoding=unicode" option that would let the >> XML/HTML/text serialisation function return a Unicode string. This was >> eventually deprecated in favour of "encoding='unicode'" when ElementTree >> gained this feature as well some years later. > > That's this issue: http://bugs.python.org/issue8047 > > The thread also suggests the options > encoding=False > and > encoding=None > > Considering ET uses encoding="unicode" to signal 'don't encode', I > agree with you that using encoding="bytes" to signal 'don't encode' > would be sensible. However, ET /also/ allows encoding=str. What about > allowing both encoding="bytes" /and/ encoding=bytes? It doesn't read well for the unicode type any more because it's gone in Py3 (and "encoding=str" just looks weird). It's less awkward for the bytes type. However, why should there be two ways to do it?
Stefan From p.f.moore at gmail.com Sat Mar 17 15:50:24 2012 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 17 Mar 2012 14:50:24 +0000 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: References: <4F63E073.8070607@gmail.com> <4F63E819.3040108@oddbird.net> <4F642119.9060804@gmail.com> Message-ID: On 17 March 2012 14:07, Serhiy Storchaka wrote: > 17.03.12 13:57, Paul Moore wrote: > >> As there is no >> way of knowing the Python version without running Python, this is too >> slow to be practical. > > > Cold start: > $ time python3 --version > Python 3.1.2 > > real 0m0.073s > user 0m0.004s > sys 0m0.004s > > Hot start: > $ time python3 --version > Python 3.1.2 > > real 0m0.007s > user 0m0.004s > sys 0m0.004s Blame it on Windows or my overloaded PC if you must :-) but I get perceptible delays on a cold start at times. Plus, I'd probably need to do this in code that enumerates all installed Pythons and virtualenvs - that could be 10 installations. I've never tried it in anger, so I could be worrying over nothing, but to an extent that's my point - I don't *need* to know the version unless I have to have version-specific code to define the layout. And if I were starting Python up, I'd probably be better just importing sys and sysconfig, and using sys.executable and sysconfig.get_path('scripts'). And there's the chicken-and-egg problem - if I don't know the version, I don't know where python.exe is, so how can I run it to find the version? Meh. None of this is a real issue. It's just some extra messy coding. But Van's point is that this proposal gives him less hard coding. Beyond pointing out that it gives me more, I don't have much to add. Paul.
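(The sys/sysconfig introspection mentioned above, sketched — an illustrative snippet, not from the original mail. From inside a running interpreter there is no chicken-and-egg problem:)

```python
import sys
import sysconfig

# Once an interpreter is running, its own location, version, and script
# directory are directly available -- no registry or PATH guessing.
print("interpreter:", sys.executable)
print("version:    ", "%d.%d" % sys.version_info[:2])
print("scripts dir:", sysconfig.get_path("scripts"))
```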
From guido at python.org Sat Mar 17 16:00:21 2012 From: guido at python.org (Guido van Rossum) Date: Sat, 17 Mar 2012 08:00:21 -0700 Subject: [Python-Dev] Unpickling py2 str as py3 bytes (and vice versa) - implementation (issue #6784) In-Reply-To: References: <8DE609C2-0FF4-4412-B26E-B453C67EF0F0@voidspace.org.uk> Message-ID: One reason to use 'bytes' instead of bytes is that it is a string that can be specified e.g. in a config file. On Sat, Mar 17, 2012 at 7:28 AM, Stefan Behnel wrote: > Merlijn van Deen, 17.03.2012 15:20: >> On 17 March 2012 10:43, Stefan Behnel wrote: >>> In lxml, there was an "encoding=unicode" option that would let the >>> XML/HTML/text serialisation function return a Unicode string. This was >>> eventually deprecated in favour of "encoding='unicode'" when ElementTree >>> gained this feature as well some years later. >> >> That's this issue: http://bugs.python.org/issue8047 >> >> The thread also suggests the options >> encoding=False >> and >> encoding=None >> >> Considering ET uses encoding="unicode" to signal 'don't encode', I >> agree with you that using encoding="bytes" to signal 'don't encode' >> would be sensible. However, ET /also/ allows encoding=str. What about >> allowing both encoding="bytes" /and/ encoding=bytes? > > It doesn't read well for the unicode type any more because it's gone in Py3 > (and "encoding=str" just looks weird). It's less awkward for the bytes type. > > However, why should there be two ways to do it?
> > Stefan > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org -- --Guido van Rossum (python.org/~guido) From storchaka at gmail.com Sat Mar 17 16:28:09 2012 From: storchaka at gmail.com (Serhiy Storchaka) Date: Sat, 17 Mar 2012 17:28:09 +0200 Subject: [Python-Dev] Unpickling py2 str as py3 bytes (and vice versa) - implementation (issue #6784) In-Reply-To: References: <8DE609C2-0FF4-4412-B26E-B453C67EF0F0@voidspace.org.uk> Message-ID: 17.03.12 17:00, Guido van Rossum wrote: > One reason to use 'bytes' instead of bytes is that it is a string that > can be specified e.g. in a config file. Thus, there are no reasons to use bytes instead of 'bytes'. From g.brandl at gmx.net Sat Mar 17 16:49:37 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Sat, 17 Mar 2012 16:49:37 +0100 Subject: [Python-Dev] cpython: Issue #10278: Add an optional strict argument to time.steady(), False by default In-Reply-To: References: Message-ID: On 03/15/2012 01:17 AM, victor.stinner wrote: > http://hg.python.org/cpython/rev/27441e0d6a75 > changeset: 75672:27441e0d6a75 > user: Victor Stinner > date: Thu Mar 15 01:17:09 2012 +0100 > summary: > Issue #10278: Add an optional strict argument to time.steady(), False by default > > files: > Doc/library/time.rst | 7 +++- > Lib/test/test_time.py | 10 +++++ > Modules/timemodule.c | 58 +++++++++++++++++++++--------- > 3 files changed, 57 insertions(+), 18 deletions(-) > > > diff --git a/Doc/library/time.rst b/Doc/library/time.rst > --- a/Doc/library/time.rst > +++ b/Doc/library/time.rst > @@ -226,7 +226,7 @@ > The earliest date for which it can generate a time is platform-dependent. > > > -.. function:: steady() > +.. function:: steady(strict=False) > > .. index:: > single: benchmarking > @@ -236,6 +236,11 @@ > adjusted.
The reference point of the returned value is undefined so only the > difference of consecutive calls is valid. > > + If available, a monotonic clock is used. By default, if *strict* is False, > + the function falls back to another clock if the monotonic clock failed or is > + not available. If *strict* is True, raise an :exc:`OSError` on error or > + :exc:`NotImplementedError` if no monotonic clock is available. This is not clear to me. Why wouldn't it raise OSError on error even with strict=False? Please clarify which exception is raised in which case. Georg From valhallasw at arctus.nl Sat Mar 17 17:09:39 2012 From: valhallasw at arctus.nl (Merlijn van Deen) Date: Sat, 17 Mar 2012 17:09:39 +0100 Subject: [Python-Dev] Unpickling py2 str as py3 bytes (and vice versa) - implementation (issue #6784) In-Reply-To: References: <8DE609C2-0FF4-4412-B26E-B453C67EF0F0@voidspace.org.uk> Message-ID: On 17 March 2012 16:28, Serhiy Storchaka wrote: > Thus, there are no reasons to use bytes instead of 'bytes'. Aesthetics ;-) I've implemented the encoding="bytes" version [1]. Thank you all for your input! Merlijn [1] http://bugs.python.org/issue6784#msg156166 From g.brandl at gmx.net Sat Mar 17 17:46:42 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Sat, 17 Mar 2012 17:46:42 +0100 Subject: [Python-Dev] cpython (3.2): 3.2 explain json.dumps for non-string keys in dicts. closes issue6566. Patch In-Reply-To: References: Message-ID: On 03/17/2012 08:41 AM, senthil.kumaran wrote: > http://hg.python.org/cpython/rev/613919591a05 > changeset: 75778:613919591a05 > branch: 3.2 > parent: 75771:32d3ecacdabf > user: Senthil Kumaran > date: Sat Mar 17 00:40:34 2012 -0700 > summary: > 3.2 explain json.dumps for non-string keys in dicts. closes issue6566. 
Patch contributed by Kirubakaran Athmanathan > > files: > Doc/library/json.rst | 8 ++++++++ > 1 files changed, 8 insertions(+), 0 deletions(-) > > > diff --git a/Doc/library/json.rst b/Doc/library/json.rst > --- a/Doc/library/json.rst > +++ b/Doc/library/json.rst > @@ -168,6 +168,14 @@ > so trying to serialize multiple objects with repeated calls to > :func:`dump` using the same *fp* will result in an invalid JSON file. > > + .. note:: > + > + Keys in key/value pairs of JSON are always of the type :class:`str`. When > + a dictionary is converted into JSON, all the keys of the dictionary are > + coerced to strings. As a result of this, if a dictionary is converted > + into JSON and then back into a dictionary, the dictionary may not equal > + the original one. That is, ``loads(dumps(x)) != x`` if x has non-string > + keys. > > .. function:: load(fp, cls=None, object_hook=None, parse_float=None, parse_int=None, parse_constant=None, object_pairs_hook=None, **kw) This is just a minor nitpick, and it absolutely is not specific to you, Senthil: please try to keep the rst file structuring with newlines intact. In particular, I place two blank lines between top-level function/class descriptions because single blank lines already occur so often in rst markup. When you add paragraphs, please try to keep the blank lines.
Georg From merwok at netwok.org Sat Mar 17 18:20:31 2012 From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=) Date: Sat, 17 Mar 2012 13:20:31 -0400 Subject: [Python-Dev] [Python-checkins] cpython (3.2): this stuff will actually be new in 3.2.4 In-Reply-To: References: Message-ID: <4F64C7DF.7000506@netwok.org> Hi, > changeset: 509b222679e8 > branch: 3.2 > user: Benjamin Peterson > date: Wed Mar 07 18:49:43 2012 -0600 > summary: > this stuff will actually be new in 3.2.4 > diff --git a/Misc/NEWS b/Misc/NEWS > --- a/Misc/NEWS > +++ b/Misc/NEWS > @@ -2,6 +2,57 @@ > Python News > +++++++++++ > > +What's New in Python 3.2.4 > +========================== > + > +*Release date: XX-XX-XXXX* Thanks for sorting this out. There is however at least one mistake: > +- Issue #6884: Fix long-standing bugs with MANIFEST.in parsing in distutils > + on Windows. I'm quite sure Georg transplanted that commit to the 3.2.3 release clone, and hope you did likewise for 2.7.3. Regards From g.brandl at gmx.net Sat Mar 17 18:38:21 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Sat, 17 Mar 2012 18:38:21 +0100 Subject: [Python-Dev] [Python-checkins] cpython (3.2): this stuff will actually be new in 3.2.4 In-Reply-To: <4F64C7DF.7000506@netwok.org> References: <4F64C7DF.7000506@netwok.org> Message-ID: On 03/17/2012 06:20 PM, Éric Araujo wrote: > Hi, > >> changeset: 509b222679e8 >> branch: 3.2 >> user: Benjamin Peterson >> date: Wed Mar 07 18:49:43 2012 -0600 >> summary: >> this stuff will actually be new in 3.2.4 > >> diff --git a/Misc/NEWS b/Misc/NEWS >> --- a/Misc/NEWS >> +++ b/Misc/NEWS >> @@ -2,6 +2,57 @@ >> Python News >> +++++++++++ >> >> +What's New in Python 3.2.4 >> +========================== >> + >> +*Release date: XX-XX-XXXX* > > Thanks for sorting this out. There is however at least one mistake: > >> +- Issue #6884: Fix long-standing bugs with MANIFEST.in parsing in distutils >> + on Windows.
> > I'm quite sure Georg transplanted that commit to the 3.2.3 release > clone, and hope you did likewise for 2.7.3. Fixed, thanks. Georg From tjreedy at udel.edu Sat Mar 17 20:16:25 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Sat, 17 Mar 2012 15:16:25 -0400 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: References: <4F63E073.8070607@gmail.com> <4F63E819.3040108@oddbird.net> <4F642119.9060804@gmail.com> Message-ID: On 3/17/2012 10:50 AM, Paul Moore wrote: > Meh. None of this is a real issue. It's just some extra messy coding. > But Van's point is that this proposal gives him less hard coding. > Beyond pointing out that it gives me more, I don't have much to add. I suspect a case could be made that harmonization now will benefit multiple people in, say, 5 years, especially if by then one only supported 3.3+ while supporting multiple platforms. It would be the same rationale as that for 3.0, and especially the bytes to unicode change for text. (As I remember, we are only 3 years in on that one ;-). I leave it to Van to actually explain and make the argument.
-- Terry Jan Reedy From fuzzyman at voidspace.org.uk Sat Mar 17 21:47:47 2012 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Sat, 17 Mar 2012 13:47:47 -0700 Subject: [Python-Dev] cpython: Issue #10278: Add an optional strict argument to time.steady(), False by default In-Reply-To: References: Message-ID: On 17 Mar 2012, at 08:49, Georg Brandl wrote: > On 03/15/2012 01:17 AM, victor.stinner wrote: >> http://hg.python.org/cpython/rev/27441e0d6a75 >> changeset: 75672:27441e0d6a75 >> user: Victor Stinner >> date: Thu Mar 15 01:17:09 2012 +0100 >> summary: >> Issue #10278: Add an optional strict argument to time.steady(), False by default >> >> files: >> Doc/library/time.rst | 7 +++- >> Lib/test/test_time.py | 10 +++++ >> Modules/timemodule.c | 58 +++++++++++++++++++++--------- >> 3 files changed, 57 insertions(+), 18 deletions(-) >> >> >> diff --git a/Doc/library/time.rst b/Doc/library/time.rst >> --- a/Doc/library/time.rst >> +++ b/Doc/library/time.rst >> @@ -226,7 +226,7 @@ >> The earliest date for which it can generate a time is platform-dependent. >> >> >> -.. function:: steady() >> +.. function:: steady(strict=False) >> >> .. index:: >> single: benchmarking >> @@ -236,6 +236,11 @@ >> adjusted. The reference point of the returned value is undefined so only the >> difference of consecutive calls is valid. >> >> + If available, a monotonic clock is used. By default, if *strict* is False, >> + the function falls back to another clock if the monotonic clock failed or is >> + not available. If *strict* is True, raise an :exc:`OSError` on error or >> + :exc:`NotImplementedError` if no monotonic clock is available. > > This is not clear to me. Why wouldn't it raise OSError on error even with > strict=False? Please clarify which exception is raised in which case. It seems clear to me. It doesn't raise exceptions when strict=False because it falls back to a non-monotonic clock. 
If strict is True and a non-monotonic clock is not available it raises OSError or NotImplementedError. Michael > > Georg > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk > -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html From g.brandl at gmx.net Sat Mar 17 23:04:46 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Sat, 17 Mar 2012 23:04:46 +0100 Subject: [Python-Dev] cpython: Issue #10278: Add an optional strict argument to time.steady(), False by default In-Reply-To: References: Message-ID: On 03/17/2012 09:47 PM, Michael Foord wrote: > > On 17 Mar 2012, at 08:49, Georg Brandl wrote: > >> On 03/15/2012 01:17 AM, victor.stinner wrote: >>> http://hg.python.org/cpython/rev/27441e0d6a75 changeset: >>> 75672:27441e0d6a75 user: Victor Stinner >>> date: Thu Mar 15 01:17:09 2012 +0100 >>> summary: Issue #10278: Add an optional strict argument to time.steady(), >>> False by default >>> >>> files: Doc/library/time.rst | 7 +++- Lib/test/test_time.py | 10 >>> +++++ Modules/timemodule.c | 58 +++++++++++++++++++++--------- 3 files >>> changed, 57 insertions(+), 18 deletions(-) >>> >>> >>> diff --git a/Doc/library/time.rst b/Doc/library/time.rst --- >>> a/Doc/library/time.rst +++ b/Doc/library/time.rst @@ -226,7 +226,7 @@ The >>> earliest date for which it can generate a time is platform-dependent. >>> >>> >>> -.. function:: steady() +.. function:: steady(strict=False) >>> >>> .. index:: single: benchmarking @@ -236,6 +236,11 @@ adjusted. The >>> reference point of the returned value is undefined so only the difference >>> of consecutive calls is valid. 
>>> >>> + If available, a monotonic clock is used. By default, if *strict* is >>> False, + the function falls back to another clock if the monotonic >>> clock failed or is + not available. If *strict* is True, raise an >>> :exc:`OSError` on error or + :exc:`NotImplementedError` if no monotonic >>> clock is available. >> >> This is not clear to me. Why wouldn't it raise OSError on error even with >> strict=False? Please clarify which exception is raised in which case. > > It seems clear to me. It doesn't raise exceptions when strict=False because > it falls back to a non-monotonic clock. If strict is True and a non-monotonic > clock is not available it raises OSError or NotImplementedError. So errors are ignored when strict is false? Georg From eric at trueblade.com Sat Mar 17 23:07:33 2012 From: eric at trueblade.com (Eric V. Smith) Date: Sat, 17 Mar 2012 18:07:33 -0400 Subject: [Python-Dev] cpython: Issue #10278: Add an optional strict argument to time.steady(), False by default In-Reply-To: References: Message-ID: <4F650B25.4060901@trueblade.com> On 3/17/2012 4:47 PM, Michael Foord wrote: > > On 17 Mar 2012, at 08:49, Georg Brandl wrote: > >> On 03/15/2012 01:17 AM, victor.stinner wrote: >>> + If available, a monotonic clock is used. By default, if *strict* is False, >>> + the function falls back to another clock if the monotonic clock failed or is >>> + not available. If *strict* is True, raise an :exc:`OSError` on error or >>> + :exc:`NotImplementedError` if no monotonic clock is available. >> >> This is not clear to me. Why wouldn't it raise OSError on error even with >> strict=False? Please clarify which exception is raised in which case. > > It seems clear to me. It doesn't raise exceptions when strict=False because > it falls back to a non-monotonic clock. If strict is True and a non- > monotonic clock is not available it raises OSError or NotImplementedError. I have to agree with Georg. 
Looking at the code, it appears OSError can be raised with both strict=True and strict=False (since floattime() can raise OSError). The text needs to make it clear OSError can always be raised. I also think "By default, if strict is False" confuses things. If there's a default behavior with strict=False, what's the non-default behavior? I suggest dropping "By default". Eric. From fuzzyman at voidspace.org.uk Sat Mar 17 23:16:51 2012 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Sat, 17 Mar 2012 15:16:51 -0700 Subject: [Python-Dev] cpython: Issue #10278: Add an optional strict argument to time.steady(), False by default In-Reply-To: References: Message-ID: On 17 Mar 2012, at 15:04, Georg Brandl wrote: > On 03/17/2012 09:47 PM, Michael Foord wrote: >> >> On 17 Mar 2012, at 08:49, Georg Brandl wrote: >> >>> On 03/15/2012 01:17 AM, victor.stinner wrote: >>>> http://hg.python.org/cpython/rev/27441e0d6a75 changeset: >>>> 75672:27441e0d6a75 user: Victor Stinner >>>> date: Thu Mar 15 01:17:09 2012 +0100 >>>> summary: Issue #10278: Add an optional strict argument to time.steady(), >>>> False by default >>>> >>>> files: Doc/library/time.rst | 7 +++- Lib/test/test_time.py | 10 >>>> +++++ Modules/timemodule.c | 58 +++++++++++++++++++++--------- 3 files >>>> changed, 57 insertions(+), 18 deletions(-) >>>> >>>> >>>> diff --git a/Doc/library/time.rst b/Doc/library/time.rst --- >>>> a/Doc/library/time.rst +++ b/Doc/library/time.rst @@ -226,7 +226,7 @@ The >>>> earliest date for which it can generate a time is platform-dependent. >>>> >>>> >>>> -.. function:: steady() +.. function:: steady(strict=False) >>>> >>>> .. index:: single: benchmarking @@ -236,6 +236,11 @@ adjusted. The >>>> reference point of the returned value is undefined so only the difference >>>> of consecutive calls is valid. >>>> >>>> + If available, a monotonic clock is used. 
By default, if *strict* is >>>> False, + the function falls back to another clock if the monotonic >>>> clock failed or is + not available. If *strict* is True, raise an >>>> :exc:`OSError` on error or + :exc:`NotImplementedError` if no monotonic >>>> clock is available. >>> >>> This is not clear to me. Why wouldn't it raise OSError on error even with >>> strict=False? Please clarify which exception is raised in which case. >> >> It seems clear to me. It doesn't raise exceptions when strict=False because >> it falls back to a non-monotonic clock. If strict is True and a non-monotonic >> clock is not available it raises OSError or NotImplementedError. > > So errors are ignored when strict is false? Well, as described in the documentation an error in finding a monotonic clock causes the function to fallback to a different clock. So you could interpret that as either errors are ignored, or it isn't an error in the first place. I don't see how the following is ambiguous, but you're obviously having difficulty with it. Perhaps you can suggest another wording. if *strict* is False, the function falls back to another clock if the monotonic clock failed or is not available. The note from Eric notwithstanding though. Michael > > Georg > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk > -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. 
-- the sqlite blessing http://www.sqlite.org/different.html From barry at python.org Sat Mar 17 23:43:16 2012 From: barry at python.org (Barry Warsaw) Date: Sat, 17 Mar 2012 18:43:16 -0400 Subject: [Python-Dev] [Python-checkins] cpython (2.6): Added tag v2.6.8rc2 for changeset bd9e1a02e3e3 In-Reply-To: References: Message-ID: <20120317184316.7245fcce@limelight.wooz.org> On Mar 17, 2012, at 11:34 PM, barry.warsaw wrote: >http://hg.python.org/cpython/rev/6144a0748a95 >changeset: 75809:6144a0748a95 >branch: 2.6 >parent: 75806:bd9e1a02e3e3 >user: Barry Warsaw >date: Sat Mar 17 18:34:05 2012 -0400 >summary: > Added tag v2.6.8rc2 for changeset bd9e1a02e3e3 > >files: > .hgtags | 1 + > 1 files changed, 1 insertions(+), 0 deletions(-) > > >diff --git a/.hgtags b/.hgtags >--- a/.hgtags >+++ b/.hgtags >@@ -140,3 +140,4 @@ > caab08cd2b3eb5a6f78479b2513b65d36c754f41 v2.6.8rc1 > 5356b6c7fd66664679f9bd71f7cd085239934e43 v2.6.8rc1 > 1d1b7b9fad48bd0dc60dc8a06cca4459ef273127 v2.6.8rc2 >+bd9e1a02e3e329fa7b6da06113090a401909c4ea v2.6.8rc2 I don't know why Mercurial does this. Here's what *I* did: * After I committed the last of my PEP 101 changes to the 2.6 branch, I did release.py --tag 2.6.8rc2. It sure looks like that added the tag. * Then I switched to the 2.7 branch, pulled and merged the changes from 2.6, `hg revert -ar .` then marked all conflicts as resolved `hg resolve -am`. AFAICT, that's the way to null merge 2.6 to 2.7. * Then I pushed my changes. * Switching back to my 2.6 branch, I then tried to `release -export 2.6.8rc2` and got an error that v2.6.8rc2 tag wasn't found. At this point I *retagged* for 2.6.8rc2, switch to default and pushed. This makes me a little wary about what's actually in the 2.6.8rc2 tarball, but I'm building and testing it now, and from visual inspection it *looks* okay, so I'm inclined to chalk this up to either a Mercurial wart, or my boneheaded use of it. 
Cheers, -Barry From barry at python.org Sun Mar 18 00:13:18 2012 From: barry at python.org (Barry Warsaw) Date: Sat, 17 Mar 2012 19:13:18 -0400 Subject: [Python-Dev] [Python-checkins] cpython (2.6): Added tag v2.6.8rc2 for changeset bd9e1a02e3e3 In-Reply-To: References: <20120317184316.7245fcce@limelight.wooz.org> Message-ID: <20120317191318.29f88bc2@limelight.wooz.org> On Mar 18, 2012, at 12:03 AM, Georg Brandl wrote: >I'm afraid it's the latter: tags are entries in .hgtags. So when you >completely null-merge your 2.6 changes into 2.7, you are basically removing >the tag from the 2.7 branch. And since to find tags, Mercurial looks in the >.hgtags files of all active branch heads, you are basically hiding the tag >when you merge 2.6 into 2.7, at which point it becomes an inactive branch >head. D'oh. Okay, so leave the .hgtags file alone when null merging 2.6->2.7. Actually, that probably applies to all forward merges, so I think this would be useful information for either the devguide or PEP 101, or both. I updated the latter right at the --tag step. Thanks, -Barry From barry at python.org Sun Mar 18 05:13:10 2012 From: barry at python.org (Barry Warsaw) Date: Sun, 18 Mar 2012 00:13:10 -0400 Subject: [Python-Dev] [Python-checkins] cpython (2.6): Added tag v2.6.8rc2 for changeset bd9e1a02e3e3 In-Reply-To: <4F651AF8.8030700@netwok.org> References: <20120317184316.7245fcce@limelight.wooz.org> <4F651AF8.8030700@netwok.org> Message-ID: <20120318001310.1ce284b5@limelight.wooz.org> On Mar 17, 2012, at 07:15 PM, Éric Araujo wrote: >Note that duplicate entries in .hgtags (when a tag was redone) should >not be "cleaned up": the presence of the old changeset hash greatly >helps conflict resolution. (If someone pulled the repo with the old tag >and later pulls and updates, then they don't have to find out which hash >is the right tag, they just accept all changes from the updated file >into their local file.)
But if someone wants to grab the 2.6.8rc2 tag, which changeset do they get? I guess the last one... maybe? >This problem in the future can be avoided by merging all changesets from >X.Y to X.Y+1, not null-merging, unless I misunderstand something. Except in this case, there's probably not much useful in the 2.6.8 changes that are appropriate for 2.7. -Barry From g.brandl at gmx.net Sun Mar 18 07:29:46 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Sun, 18 Mar 2012 07:29:46 +0100 Subject: [Python-Dev] cpython (2.6): Added tag v2.6.8rc2 for changeset bd9e1a02e3e3 In-Reply-To: <4F651AF8.8030700@netwok.org> References: <20120317184316.7245fcce@limelight.wooz.org> <4F651AF8.8030700@netwok.org> Message-ID: On 03/18/2012 12:15 AM, Éric Araujo wrote: > Hi, > > Le 17/03/2012 19:03, Georg Brandl a écrit : >> On 03/17/2012 11:43 PM, Barry Warsaw wrote: >> I'm afraid it's the latter: tags are entries in .hgtags. So when you completely >> null-merge your 2.6 changes into 2.7, you are basically removing the tag from >> the 2.7 branch. And since to find tags, Mercurial looks in the .hgtags files >> of all active branch heads, you are basically hiding the tag when you merge >> 2.6 into 2.7, at which point it becomes an inactive branch head. > > The plus side to this concept of tags as entries in a file is that it's > trivial to add the missing 2.6 tags in the 2.7 branch. > > Note that duplicate entries in .hgtags (when a tag was redone) should > not be "cleaned up": the presence of the old changeset hash greatly > helps conflict resolution. (If someone pulled the repo with the old tag > and later pulls and updates, then they don't have to find out which hash > is the right tag, they just accept all changes from the updated file > into their local file.) I don't understand that argument: especially when there is no change in the tree between the two tags.
Georg From ncoghlan at gmail.com Sun Mar 18 14:49:25 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 18 Mar 2012 23:49:25 +1000 Subject: [Python-Dev] [Python-checkins] cpython (2.7): Closes #14306: clarify expensiveness of try-except and update code snippet In-Reply-To: References: Message-ID: On Sun, Mar 18, 2012 at 1:58 AM, georg.brandl wrote: > +catching an exception is expensive. In versions of Python prior to 2.0 it was > +common to use this idiom:: Actually, given the "prior to 2.0" caveat, "mydict.has_key(key)" is right: the "key in mydict" version was only added in 2.2. This answer probably needs more improvements than just modernising the example that is already there. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From fuzzyman at voidspace.org.uk Sun Mar 18 19:44:28 2012 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Sun, 18 Mar 2012 18:44:28 +0000 Subject: [Python-Dev] cpython: PEP 417: Adding unittest.mock In-Reply-To: References: Message-ID: <6E1A6E69-F862-43F7-8C8D-94CBFD0B19AA@voidspace.org.uk> On 16 Mar 2012, at 11:54, Nick Coghlan wrote: > On Thu, Mar 15, 2012 at 6:27 AM, Michael Foord > wrote: >> On the topic of docs.... mock documentation is about eight pages long. My intention was to strip this down to just the api documentation, along with a link to the docs on my site for further examples and so on. I was encouraged here at the sprints to include the full documentation instead (minus the mock library comparison page and the front page can be cut down). So this is what I am now intending to include. It does mean the mock documentation will be "extensive".
> The docs are already organised as API docs and then simple and advanced HOWTO style sections. There are minimal examples inline with the API docs and separate paragraphs of explanations below on particular topics. Feel free to critique the mock docs as they are, or wait until I have committed modified versions. Michael > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html From martin at v.loewis.de Mon Mar 19 02:19:20 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 19 Mar 2012 02:19:20 +0100 Subject: [Python-Dev] cpython: PEP 417: Adding unittest.mock In-Reply-To: <9EE751D8-EAA8-43D8-9C04-C2FC383CFC82@gmail.com> References: <9EE751D8-EAA8-43D8-9C04-C2FC383CFC82@gmail.com> Message-ID: <4F668998.7090706@v.loewis.de> > The commingling of extensive examples with regular docs has > made it difficult to lookup functionality in argparse for example. I have now come to think that this should be considered a subordinate use case. The primary use case of the documentation should be copy-paste style examples. At least, that's the feedback I always get for the Python documentation (typically contrasting it with the PHP documentation, where the specification-style portion is typically ignored by readers, which then move on to the user-contributed examples). Regards, Martin From benjamin at python.org Mon Mar 19 02:43:56 2012 From: benjamin at python.org (Benjamin Peterson) Date: Sun, 18 Mar 2012 20:43:56 -0500 Subject: [Python-Dev] [RELEASED] Second release candidates for Python 2.6.8, 2.7.3, 3.1.5, and 3.2.3 Message-ID: We're chuffed to announce the immediate availability of the second release candidates for Python 2.6.8, 2.7.3, 3.1.5, and 3.2.3. 
The only change from the first release candidates is the patching of an additional security hole. The security issue fixed in the second release candidates is in the expat XML parsing library. expat had the same hash security issue detailed below as Python's core types. The hashing algorithm used in the expat library is now randomized. A more thorough explanation of the "hash attack" security hole follows. The main impetus for these releases is fixing a security issue in Python's hash based types, dict and set, as described below. Python 2.7.3 and 3.2.3 include the security patch and the normal set of bug fixes. Since Python 2.6 and 3.1 are maintained only for security issues, 2.6.8 and 3.1.5 contain only various security patches. The security issue exploits Python's dict and set implementations. Carefully crafted input can lead to extremely long computation times and denials of service. [1] Python dict and set types use hash tables to provide amortized constant time operations. Hash tables require a well-distributed hash function to spread data evenly across the hash table. The security issue is that an attacker could compute thousands of keys with colliding hashes; this causes quadratic algorithmic complexity when the hash table is constructed. To alleviate the problem, the new releases add randomization to the hashing of Python's string types (bytes/str in Python 3 and str/unicode in Python 2), datetime.date, and datetime.datetime. This prevents an attacker from computing colliding keys of these types without access to the Python process. Hash randomization causes the iteration order of dicts and sets to be unpredictable and differ across Python runs. Python has never guaranteed iteration order of keys in a dict or set, and applications are advised to never rely on it. Historically, dict iteration order has not changed very often across releases and has always remained consistent between successive executions of Python. 
Thus, some existing applications may be relying on dict or set ordering. Because of this and the fact that many Python applications which don't accept untrusted input are not vulnerable to this attack, in all stable Python releases mentioned here, HASH RANDOMIZATION IS DISABLED BY DEFAULT. There are two ways to enable it. The -R command-line option can be passed to the python executable. It can also be enabled by setting an environment variable PYTHONHASHSEED to "random". (Other values are accepted, too; pass -h to python for complete description.) More details about the issue and the patch can be found in the oCERT advisory [1] and the Python bug tracker [2]. These releases are release candidates and thus not recommended for production use. Please test your applications and libraries with them, and report any bugs you encounter. We are especially interested in any buggy behavior observed using hash randomization. Excepting major calamity, final versions should appear after several weeks. Downloads are at http://python.org/download/releases/2.6.8/ http://python.org/download/releases/2.7.3/ http://python.org/download/releases/3.1.5/ http://python.org/download/releases/3.2.3/ Please test these candidates and report bugs to http://bugs.python.org/ With regards, The Python release team Barry Warsaw (2.6), Georg Brandl (3.2), Benjamin Peterson (2.7 and 3.1) [1] http://www.ocert.org/advisories/ocert-2011-003.html [2] http://bugs.python.org/issue13703 From pmoody at google.com Mon Mar 19 03:44:18 2012 From: pmoody at google.com (Peter Moody) Date: Sun, 18 Mar 2012 19:44:18 -0700 Subject: [Python-Dev] PEP czar for PEP 3144?
In-Reply-To: References: Message-ID: On Mon, Mar 12, 2012 at 9:15 AM, Peter Moody wrote: >>> - iterable APIs should consistently produce iterators (leaving users >>> free to wrap list() around the calls if they want the concrete >>> realisation) I might've missed earlier discussion somewhere, but can someone point me at an example of an iterable API in ipaddr/ipaddress where an iterator isn't consistently produced? Cheers, peter -- Peter Moody Google 1.650.253.7306 Security Engineer pgp:0xC3410038 From ncoghlan at gmail.com Mon Mar 19 04:04:48 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 19 Mar 2012 13:04:48 +1000 Subject: [Python-Dev] PEP czar for PEP 3144? In-Reply-To: References: Message-ID: On Mon, Mar 19, 2012 at 12:44 PM, Peter Moody wrote: > On Mon, Mar 12, 2012 at 9:15 AM, Peter Moody wrote: > >>>> - iterable APIs should consistently produce iterators (leaving users >>>> free to wrap list() around the calls if they want the concrete >>>> realisation) > > I might've missed earlier discussion somewhere, but can someone point > me at an example of an iterable API in ipaddr/ipaddress where an > iterator isn't consistently produced? There was at least one that I recall, now to find it again... And searching for "list" in the PEP 3144 branch source highlights subnet() vs iter_subnets() as the main problem child: https://code.google.com/p/ipaddr-py/source/browse/branches/3144/ipaddress.py#1004 A single "subnets()" method that produced the iterator would seem to make more sense (with a "list()" call wrapped around it when the consumer really wants a concrete list). There are a few other cases that produce a list that are less clearcut. I *think* summarising the address range could be converted to an iterator, since the "networks" accumulation list doesn't get referenced by the summarising algorithm.
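For illustration, a single generator-style "subnets()" of the kind suggested above might look something like this — a simplified, hypothetical sketch working on integer network addresses, not the actual ipaddress implementation:

```python
# Hypothetical sketch of a single generator-based subnets() API, as
# opposed to a subnet()/iter_subnets() pair.  Names and arithmetic are
# deliberately simplified; this is not the ipaddress code.
def subnets(network_int, prefixlen, new_prefix, address_bits=32):
    """Yield the integer network addresses obtained by splitting a
    network into pieces of *new_prefix* length."""
    if new_prefix < prefixlen:
        raise ValueError("new prefix must be longer than the old one")
    step = 1 << (address_bits - new_prefix)     # addresses per subnet
    count = 1 << (new_prefix - prefixlen)       # number of subnets
    for i in range(count):
        yield network_int + i * step

# Callers wanting a concrete list simply wrap the iterator:
# list(subnets(0xC0A80000, 24, 26)) gives the four /26 networks
# contained in 192.168.0.0/24.
```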
Similarly, there doesn't appear to be a compelling reason for "address_exclude" to produce a concrete list (I also noticed a couple of "assert True == False" statements in that method for "this should never happen" code branches. An explicit "raise AssertionError" is a better way to handle such cases, so the code remains present even under -O and -OO) Collapsing the address list has to build the result list anyway to actually handle the deduplication part of its job, so returning a concrete list makes sense in that case. Cheers, Nick. -- Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia From ncoghlan at gmail.com Mon Mar 19 04:24:12 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 19 Mar 2012 13:24:12 +1000 Subject: [Python-Dev] cpython: PEP 417: Adding unittest.mock In-Reply-To: <4F668998.7090706@v.loewis.de> References: <9EE751D8-EAA8-43D8-9C04-C2FC383CFC82@gmail.com> <4F668998.7090706@v.loewis.de> Message-ID: On Mon, Mar 19, 2012 at 11:19 AM, "Martin v. Löwis" wrote: >> The commingling of extensive examples with regular docs has >> made it difficult to look up functionality in argparse for example. > > I have now come to think that this should be considered a subordinate > use case. The primary use case of the documentation should be copy-paste > style examples. At least, that's the feedback I always get for the > Python documentation (typically contrasting it with the PHP > documentation, where the specification-style portion is typically > ignored by readers, which then move on to the user-contributed > examples). That's why the 3.2 logging docs are such a good model to follow. They have *4* pieces: - the formal API reference ("What features does logging offer?"
and "Exactly how does the X API work, again?") - a "quick start" tutorial ("What's the bare minimum I need to know to get started with the logging module?") - an "advanced" tutorial ("What are some other cool things the logging infrastructure lets me do?") - a "cookbook" section ("How do I achieve task Y with the logging module?") The first of those is in the standard library *reference*, with clear pointers directly to the other 3 (which live in the "HOWTO" section of the docs). Different audiences have different needs. If you just want to get something working quickly and aren't interested in understanding the details right now, then "cargo cult programming" can be a good way to go and "cookbook" style docs are a great resource for that. If you're just trying to remember a precise incantation for something you already know the module can do, then you want a formal reference that spells out the API details. Tutorials land somewhere in between - trying to teach people enough about the module that they can make more effective use of both the formal API reference (when figuring things out from scratch) and the cookbook examples (when trying to accomplish a common task without caring too much about the details of how and why it works). As much as I like argparse, the existing docs don't do a great job of advertising its capabilities, since they're currently a mixture of tutorial-and-reference-and-cookbook that means they don't excel at serving any of the possible audiences. (I've posted a few suggestions on the issue tracker for specific changes I think would help improve the situation). The key point though is that there are multiple reasons people look up documentation, and the appropriate structure varies based on the reason someone is reading the docs at all. Cheers, Nick. -- Nick Coghlan   |   ncoghlan at gmail.com   |
Brisbane, Australia From ncdave4life at gmail.com Mon Mar 19 05:13:55 2012 From: ncdave4life at gmail.com (ncdave4life) Date: Sun, 18 Mar 2012 21:13:55 -0700 (PDT) Subject: [Python-Dev] inspect.py change for pygame Message-ID: <1332130435675-4631993.post@n6.nabble.com> I noticed that pydoc doesn't work for pygame under python 3.2.1 for Win32: NotImplementedError: scrap module not available (ImportError: No module named scrap) I made a small patch to inspect.py to solve the problem (I just added a try/except around the failing statement in ismethoddescriptor). Here's the diff: http://www.burtonsys.com/python32/inspect.diff With that patch, pydoc works with pygame, and reports just a few issues: *scrap* = *sndarray* = *surfarray* = Sorry, I'm a newbie to python-dev, so please forgive my ignorance, but what do I need to do to get this fix (or something similar) into a future release? -- View this message in context: http://python.6.n6.nabble.com/inspect-py-change-for-pygame-tp4631993p4631993.html Sent from the Python - python-dev mailing list archive at Nabble.com. From brian at python.org Mon Mar 19 05:19:03 2012 From: brian at python.org (Brian Curtin) Date: Sun, 18 Mar 2012 23:19:03 -0500 Subject: [Python-Dev] inspect.py change for pygame In-Reply-To: <1332130435675-4631993.post@n6.nabble.com> References: <1332130435675-4631993.post@n6.nabble.com> Message-ID: On Sun, Mar 18, 2012 at 23:13, ncdave4life wrote: > I noticed that pydoc doesn't work for pygame under python 3.2.1 for Win32: > > NotImplementedError: scrap module not available (ImportError: No module > named scrap) > > I made a small patch to inspect.py to solve the problem (I just added a > try/except around the failing statement in ismethoddescriptor).
Here's the > diff: > http://www.burtonsys.com/python32/inspect.diff > > With that patch, pydoc works with pygame, and reports just a few issues: > *scrap* = > *sndarray* = > *surfarray* = > > Sorry, I'm a newbie to python-dev, so please forgive my ignorance, but what > do I need to do to get this fix (or something similar) into a future release? Patches to fix Python should be posted to http://bugs.python.org/. From there they'll be classified, reviewed, and if all is well, committed. It's much easier for patches to be tracked on there instead of email. From ncdave4life at gmail.com Mon Mar 19 05:23:01 2012 From: ncdave4life at gmail.com (ncdave4life) Date: Sun, 18 Mar 2012 21:23:01 -0700 (PDT) Subject: [Python-Dev] inspect.py change for pygame In-Reply-To: References: <1332130435675-4631993.post@n6.nabble.com> Message-ID: Thank you, Brian! On Mon, Mar 19, 2012 at 12:20 AM, Brian Curtin [via Python] < ml-node+s6n4632000h78 at n6.nabble.com> wrote: > On Sun, Mar 18, 2012 at 23:13, ncdave4life <[hidden email]> > wrote: > > ...Sorry, I'm a newbie to python-dev, so please forgive my ignorance, > but what > > do I need to do to get this fix (or something similar) into a future > release? > > Patches to fix Python should be posted to http://bugs.python.org/. > From there they'll be classified, reviewed, and if all is well, > committed. It's much easier for patches to be tracked on there instead > of email. > -- View this message in context: http://python.6.n6.nabble.com/inspect-py-change-for-pygame-tp4631993p4632010.html Sent from the Python - python-dev mailing list archive at Nabble.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kristjan at ccpgames.com Mon Mar 19 10:07:59 2012 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Mon, 19 Mar 2012 09:07:59 +0000 Subject: [Python-Dev] Drop the new time.wallclock() function?
In-Reply-To: References: <4F6137EF.9000000@gmail.com> Message-ID: Do you really want to add an obscure Boolean flag to the function just so that python can warn you that perhaps your platform is so old and so weird that Python can't guarantee that the performance measurements are to a certain _undefined_ quality? Please note that the function makes no claims to the resolution or precision of the timer involved. Only that it moves only forward. It is therefore completely and utterly redundant to add a "strict" value, because we would only behave "strictly" according to an _undefined specification_. K -----Original Message----- From: python-dev-bounces+kristjan=ccpgames.com at python.org [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On Behalf Of Lennart Regebro Sent: 15. mars 2012 04:44 To: Matt Joiner Cc: Python Dev Subject: Re: [Python-Dev] Drop the new time.wallclock() function? On Thu, Mar 15, 2012 at 02:58, Matt Joiner wrote: > Victor, I think that steady can always be monotonic, there are time > sources enough to ensure this on the platforms I am aware of. Strict > in this sense refers to not being adjusted forward, i.e. > CLOCK_MONOTONIC vs CLOCK_MONOTONIC_RAW. > > Non monotonicity of this call should be considered a bug. Strict would > be used for profiling where forward leaps would disqualify the timing. This makes sense to me.
//Lennart _______________________________________________ Python-Dev mailing list Python-Dev at python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/kristjan%40ccpgames.com From kristjan at ccpgames.com Mon Mar 19 10:26:30 2012 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Mon, 19 Mar 2012 09:26:30 +0000 Subject: [Python-Dev] PEP 405 (built-in virtualenv) status In-Reply-To: <4F62692E.8040203@oddbird.net> References: <4F626278.7030701@oddbird.net> <4F626712.3030906@gmail.com> <4F62692E.8040203@oddbird.net> Message-ID: Hi Carl. I'm very interested in this work. At CCP we work heavily with virtual environments. Except that we don't use virtualenv because it is just a pain in the neck. We like to be able to run virtual python environments of various types as they arrive checked out of source control repositories, without actually "installing" anything. For some background, please see: http://blog.ccpgames.com/kristjan/2010/10/09/using-an-isolated-python-exe/. It's a rather quick read, actually. The main issue for us is: How to prevent your local python.exe from reading environment variables and running some global site.py? There are a number of points raised in the above blog, please take a look at the "Musings" at the end. Best regards, Kristján -----Original Message----- From: python-dev-bounces+kristjan=ccpgames.com at python.org [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On Behalf Of Carl Meyer Sent: 15. mars 2012 22:12 To: python-dev Subject: Re: [Python-Dev] PEP 405 (built-in virtualenv) status On 03/15/2012 03:02 PM, Lindberg, Van wrote: > FYI, the location of the tcl/tk libraries does not appear to be set in > the virtualenv, even if tkinter is installed and working in the main > Python installation. As a result, tk-based apps will not run from a > virtualenv. Thanks for the report!
I've added this to the list of open issues in the PEP and I'll look into it. Carl From stefan at bytereef.org Mon Mar 19 13:26:37 2012 From: stefan at bytereef.org (Stefan Krah) Date: Mon, 19 Mar 2012 13:26:37 +0100 Subject: [Python-Dev] svn.python.org and buildbots down Message-ID: <20120319122637.GA28176@sleipnir.bytereef.org> Hello, you might be aware of it already. In case not, it appears that svn.python.org and the buildbots are down. Stefan Krah From victor.stinner at gmail.com Mon Mar 19 13:31:00 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Mon, 19 Mar 2012 13:31:00 +0100 Subject: [Python-Dev] cpython: Issue #10278: Add an optional strict argument to time.steady(), False by default In-Reply-To: <4F650B25.4060901@trueblade.com> References: <4F650B25.4060901@trueblade.com> Message-ID: > I have to agree with Georg. Looking at the code, it appears OSError can > be raised with both strict=True and strict=False (since floattime() can > raise OSError). This is an old bug in floattime(): I opened the issue #14368 to remove the unused exception. In practice, it never happens (or it is *very* unlikely today). IMO it's a bug in floattime(). > I also think "By default, if strict is False" confuses things. I agree, I replaced it by "By default,". Victor From victor.stinner at gmail.com Mon Mar 19 13:35:49 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Mon, 19 Mar 2012 13:35:49 +0100 Subject: [Python-Dev] cpython: Issue #10278: Add an optional strict argument to time.steady(), False by default In-Reply-To: References: Message-ID: >>> This is not clear to me. ?Why wouldn't it raise OSError on error even with >>> strict=False? ?Please clarify which exception is raised in which case. >> >> It seems clear to me. It doesn't raise exceptions when strict=False because >> it falls back to a non-monotonic clock. If strict is True and a non-monotonic >> clock is not available it raises OSError or NotImplementedError. 
> So errors are ignored when strict is false? Said differently: time.steady(strict=True) is always monotonic (*), whereas time.steady() may or may not be monotonic, depending on what is available. time.steady() is a best-effort steady clock. (*) time.steady(strict=True) relies on the OS monotonic clock. If the OS provides a "not really monotonic" clock, Python cannot do better. For example, clock_gettime(CLOCK_MONOTONIC) speed can be adjusted by NTP on Linux. Python tries to use clock_gettime(CLOCK_MONOTONIC_RAW) which doesn't have this issue. Victor From solipsis at pitrou.net Mon Mar 19 14:25:39 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 19 Mar 2012 14:25:39 +0100 Subject: [Python-Dev] svn.python.org and buildbots down References: <20120319122637.GA28176@sleipnir.bytereef.org> Message-ID: <20120319142539.7e83c3fb@pitrou.net> On Mon, 19 Mar 2012 13:26:37 +0100 Stefan Krah wrote: > Hello, > > you might be aware of it already. In case not, it appears that svn.python.org > and the buildbots are down. The buildbots should be back now. As for svn.python.org, is anyone using it? (I don't know how to restart it) Regards Antoine. From stefan at bytereef.org Mon Mar 19 14:45:37 2012 From: stefan at bytereef.org (Stefan Krah) Date: Mon, 19 Mar 2012 14:45:37 +0100 Subject: [Python-Dev] svn.python.org and buildbots down In-Reply-To: <20120319142539.7e83c3fb@pitrou.net> References: <20120319122637.GA28176@sleipnir.bytereef.org> <20120319142539.7e83c3fb@pitrou.net> Message-ID: <20120319134537.GA28530@sleipnir.bytereef.org> Antoine Pitrou wrote: > The buildbots should be back now. As for svn.python.org, is anyone > using it? (I don't know how to restart it) Thanks! I'm using svn.python.org for the automated sphinx checkout in Doc/ (make html) and sometimes to dig through pre-hg history. But don't bother to find out how to restart it just for me. I presume Martin knows the setup and will do it later.
Stefan Krah From brett at python.org Mon Mar 19 15:56:02 2012 From: brett at python.org (Brett Cannon) Date: Mon, 19 Mar 2012 10:56:02 -0400 Subject: [Python-Dev] [Python-checkins] cpython: Issue #14347: Update Misc/README list of files. In-Reply-To: References: Message-ID: The two files that were added back in should probably just disappear (README.AIX and README.coverity). Anyone disagree? On Sat, Mar 17, 2012 at 13:52, ned.deily wrote: > http://hg.python.org/cpython/rev/65a0a6fab127 > changeset: 75797:65a0a6fab127 > user: Ned Deily > date: Sat Mar 17 10:52:08 2012 -0700 > summary: > Issue #14347: Update Misc/README list of files. > (Initial patch by Dionysios Kalofonos) > > files: > Misc/README | 3 ++- > 1 files changed, 2 insertions(+), 1 deletions(-) > > > diff --git a/Misc/README b/Misc/README > --- a/Misc/README > +++ b/Misc/README > @@ -8,7 +8,6 @@ > ---------------- > > ACKS Acknowledgements > -build.sh Script to build and test latest Python from the > repository > gdbinit Handy stuff to put in your .gdbinit file, if you > use gdb > HISTORY News from previous releases -- oldest last > indent.pro GNU indent profile approximating my C style > @@ -19,6 +18,8 @@ > python.pc.in Package configuration info template for > pkg-config > python-wing*.wpr Wing IDE project file > README The file you're reading now > +README.AIX Information about using Python on AIX > +README.coverity Information about running Coverity's Prevent on > Python > README.valgrind Information for Valgrind users, see > valgrind-python.supp > RPM (Old) tools to build RPMs > svnmap.txt Map of old SVN revs and branches to hg changeset > ids > > -- > Repository URL: http://hg.python.org/cpython > > _______________________________________________ > Python-checkins mailing list > Python-checkins at python.org > http://mail.python.org/mailman/listinfo/python-checkins > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From solipsis at pitrou.net Mon Mar 19 15:59:20 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 19 Mar 2012 15:59:20 +0100 Subject: [Python-Dev] cpython: Issue #14347: Update Misc/README list of files. References: Message-ID: <20120319155920.46561fdf@pitrou.net> On Mon, 19 Mar 2012 10:56:02 -0400 Brett Cannon wrote: > The two files that were added back in should probably just disappear > (README.AIX and README.coverity). Anyone disagree? README.AIX was recently updated in http://bugs.python.org/issue10709. Regards Antoine. From tshepang at gmail.com Mon Mar 19 16:25:44 2012 From: tshepang at gmail.com (Tshepang Lekhonkhobe) Date: Mon, 19 Mar 2012 17:25:44 +0200 Subject: [Python-Dev] regarding HTML mail Message-ID: apology: I searched for a few minutes and could not find a code of conduct regarding HTML mail. Can we have some guideline to allow only plain text emails, so as to avoid cases like http://mail.python.org/pipermail/docs/2012-March/007999.html, where you are forced to scroll horizontally in order to read the text. From nad at acm.org Mon Mar 19 18:20:39 2012 From: nad at acm.org (Ned Deily) Date: Mon, 19 Mar 2012 10:20:39 -0700 Subject: [Python-Dev] svn.python.org and buildbots down References: <20120319122637.GA28176@sleipnir.bytereef.org> <20120319142539.7e83c3fb@pitrou.net> Message-ID: In article <20120319142539.7e83c3fb at pitrou.net>, Antoine Pitrou wrote: > [...] As for svn.python.org, is anyone > using it? The repo for the website (www.python.org) is maintained there. -- Ned Deily, nad at acm.org From v+python at g.nevcal.com Mon Mar 19 18:29:12 2012 From: v+python at g.nevcal.com (Glenn Linderman) Date: Mon, 19 Mar 2012 10:29:12 -0700 Subject: [Python-Dev] PEP 405 (built-in virtualenv) status In-Reply-To: References: <4F626278.7030701@oddbird.net> <4F626712.3030906@gmail.com> <4F62692E.8040203@oddbird.net> Message-ID: <4F676CE8.6070107@g.nevcal.com> On 3/19/2012 2:26 AM, Kristján Valur Jónsson wrote: > Hi Carl.
> I'm very interested in this work. > At CCP we work heavily with virtual environments. Except that we don't use virtualenv because it is just a pain in the neck. We like to be able to run virtual python environments of various types as they arrive checked out of source control repositories, without actually "installing" anything. > For some background, please see: http://blog.ccpgames.com/kristjan/2010/10/09/using-an-isolated-python-exe/. It's a rather quick read, actually. > > The main issue for us is: How to prevent your local python.exe from reading environment variables and running some global site.py? > > There are a number of points raised in the above blog, please take a look at the "Musings" at the end. > > Best regards, > > Kristján I found that a very interesting reverse-engineering of what needs to be done to isolate multiple pythons on a machine. I concur that this is a feature that would be good to: 1) at least document the behavior well 2) preferably make an extensible feature, along the lines that Kristján suggests There are likely some bootstrapping issues, but I find the idea that the difference between an embedded Python and an installed Python and a built-but-not-installed Python being conceptually isolated to the python.exe and/or site.py rather than python.dll to be a clever concept; of course, where the code lives is less relevant than the conditions under which it is invoked; I doubt the size of the code is the issue regarding where it lives. -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Mon Mar 19 19:20:46 2012 From: barry at python.org (Barry Warsaw) Date: Mon, 19 Mar 2012 14:20:46 -0400 Subject: [Python-Dev] regarding HTML mail In-Reply-To: References: Message-ID: <20120319142046.5a2ce86d@resist.wooz.org> On Mar 19, 2012, at 05:25 PM, Tshepang Lekhonkhobe wrote: >apology: I searched for a few minutes and could not find a code of >conduct regarding HTML mail.
I'm not sure it's written down anywhere, but in general we strongly discourage HTML email for the lists @python.org >Can we have some guideline to allow only plain text emails, so as to >avoid cases like >http://mail.python.org/pipermail/docs/2012-March/007999.html, where >you are forced to scroll horizontally in order to read the text. docs is a different mailing list than python-dev, but still neither list is doing any content filtering. We could always enable that if we wanted to get strict about it. In this case, this isn't html email, it's likely this bug: https://bugs.launchpad.net/mailman/+bug/558294 Cheers, -Barry From jnoller at gmail.com Mon Mar 19 19:52:01 2012 From: jnoller at gmail.com (Jesse Noller) Date: Mon, 19 Mar 2012 14:52:01 -0400 Subject: [Python-Dev] regarding HTML mail In-Reply-To: <20120319142046.5a2ce86d@resist.wooz.org> References: <20120319142046.5a2ce86d@resist.wooz.org> Message-ID: I'd like to discuss top-posting. On Monday, March 19, 2012 at 2:20 PM, Barry Warsaw wrote: > On Mar 19, 2012, at 05:25 PM, Tshepang Lekhonkhobe wrote: > > > apology: I searched for a few minutes and could not find a code of > > conduct regarding HTML mail. > > > > I'm not sure it's written down anywhere, but in general we strongly discourage > HTML email for the lists @python.org (http://python.org) > > > Can we have some guideline to allow only plain text emails, so as to > > avoid cases like > > http://mail.python.org/pipermail/docs/2012-March/007999.html, where > > you are forced to scroll horizontally in order to read the text. > > > > docs is a different mailing list than python-dev, but still neither list is > doing any content filtering. We could always enable that if we wanted to get > strict about it. 
> > In this case, this isn't html email, it's likely this bug: > > https://bugs.launchpad.net/mailman/+bug/558294 > > Cheers, > -Barry > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org (mailto:Python-Dev at python.org) > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/jnoller%40gmail.com From pje at telecommunity.com Mon Mar 19 19:53:47 2012 From: pje at telecommunity.com (PJ Eby) Date: Mon, 19 Mar 2012 14:53:47 -0400 Subject: [Python-Dev] svn.python.org and buildbots down In-Reply-To: References: <20120319122637.GA28176@sleipnir.bytereef.org> <20120319142539.7e83c3fb@pitrou.net> Message-ID: On Mar 19, 2012 1:20 PM, "Ned Deily" wrote: > > In article <20120319142539.7e83c3fb at pitrou.net>, > Antoine Pitrou wrote: > > [...] As for svn.python.org, is anyone > > using it? > > The repo for the website (www.python.org) is maintained there. It's also still setuptools' official home, though I've been doing some work recently on migrating it to hg. -------------- next part -------------- An HTML attachment was scrubbed... URL: From v+python at g.nevcal.com Mon Mar 19 19:56:41 2012 From: v+python at g.nevcal.com (Glenn Linderman) Date: Mon, 19 Mar 2012 11:56:41 -0700 Subject: [Python-Dev] regarding HTML mail In-Reply-To: References: <20120319142046.5a2ce86d@resist.wooz.org> Message-ID: <4F678169.6030906@g.nevcal.com> On 3/19/2012 11:52 AM, Jesse Noller wrote: > I'd like to discuss top-posting. Somewhere else, please. Oh, that was your point :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimjjewett at gmail.com Mon Mar 19 20:12:20 2012 From: jimjjewett at gmail.com (Jim J. 
Jewett) Date: Mon, 19 Mar 2012 12:12:20 -0700 (PDT) Subject: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives In-Reply-To: <4F5FDE77.7030601@pearwood.info> Message-ID: <4f678514.e855320a.12cb.ffffe24d@mx.google.com> In http://mail.python.org/pipermail/python-dev/2012-March/117570.html Steven D'Aprano posted: > "Need" is awfully strong. I don't believe it is the responsibility > of the standard library to be judge and reviewer of third party > packages that it doesn't control. It is, however, user-friendly to indicate when the stdlib selections are particularly likely to be for reasons other than "A bunch of experts believe this is the best way to do this." CPython's documentation is (de facto) the documentation for python in general, and pointing people towards other resources (particularly PyPI itself) is quite reasonable. Many modules are in the stdlib in part because they are an *acceptable* way of doing something, and the "best" ways are either changing too quickly or are so complicated that it doesn't make sense to burden the *standard* library for specialist needs. In those cases, I do think the documentation should say so. Specific examples: http://docs.python.org/library/numeric.html quite reasonably has subsections only for what ships with Python. But I think the introductory paragraph could stand to have an extra sentence explaining why and when people should look beyond the standard library, such as: Applications centered around mathematics may benefit from specialist 3rd party libraries, such as numpy < http://pypi.python.org/pypi/numpy/ >, gmpy < http://pypi.python.org/pypi/gmpy >, and scipy < http://pypi.python.org/pypi/scipy >. I would add a similar sentence to the web section, or the internet protocols section if web is still not broken out separately.
http://docs.python.org/dev/library/internet.html Note that some web conventions are still evolving too quickly for convenient encapsulation in a stable library. Many applications will therefore prefer functional replacements from third parties, such as requests or httplib2, or frameworks such as Django and Zope. www-related products can be found by browsing PyPI for top internet subtopic www/http. < http://pypi.python.org/pypi?:action=browse&c=319&c=326 > [I think that searching by classifier -- which first requires browse, and can't be reached from the list of classifiers -- could be improved.] > Should we recommend wxPython over Pyjamas or PyGUI or PyGtk? Actually, I think the existing http://docs.python.org/library/othergui.html does a pretty good job; I would not object to adding mentions of other tools as well, but a wiki reference is probably sufficient. -jJ -- If there are still threading problems with my replies, please email me with details, so that I can try to resolve them. -jJ From carl at oddbird.net Mon Mar 19 20:18:40 2012 From: carl at oddbird.net (Carl Meyer) Date: Mon, 19 Mar 2012 13:18:40 -0600 Subject: [Python-Dev] PEP 405 (built-in virtualenv) status In-Reply-To: References: <4F626278.7030701@oddbird.net> <4F626712.3030906@gmail.com> <4F62692E.8040203@oddbird.net> Message-ID: <4F678690.3000600@oddbird.net> Hello Kristján, On 03/19/2012 03:26 AM, Kristján Valur Jónsson wrote: > Hi Carl. I'm very interested in this work. At CCP we work heavily > with virtual environments. Except that we don't use virtualenv > because it is just a pain in the neck. We like to be able to run > virtual python environments of various types as they arrive checked > out of source control repositories, without actually "installing" > anything. For some background, please see: > http://blog.ccpgames.com/kristjan/2010/10/09/using-an-isolated-python-exe/.
> > The main issue for us is: How to prevent your local python.exe from > reading environment variables and running some global site.py? > > There are a number of points raised in the above blog, please take a > look at the "Musings" at the end. Thanks, I read the blog post. I think there are some useful points there; I too find the startup sys.path behavior of Python a bit more complex and magical than I'd prefer (I'm sure it's grown organically over the years to address a variety of different needs and concerns, not all of which I understand or am even aware of). I think there's one important (albeit odd and magical) bit of Python's current behavior that you are missing in your blog post. All of the initial sys.path directories are constructed relative to sys.prefix and sys.exec_prefix, and those values in turn are determined (if PYTHONHOME is not set), by walking up the filesystem tree from the location of the Python binary, looking for the existence of a file at the relative path "lib/pythonX.X/os.py" (or "Lib/os.py" on Windows). Python takes the existence of this file to mean that it's found the standard library, and sets sys.prefix accordingly. Thus, you can achieve reliable full isolation from any installed Python, with no need for environment variables, simply by placing a file (it can even be empty) at that relative location from the location of your Python binary. You will still get some default paths added on sys.path, but they will all be relative to your Python binary and thus presumably under your control; nothing from any other location will be on sys.path. I doubt you will find this solution satisfyingly elegant, but you might nonetheless find it practically useful. The essence of PEP 405 is simply to provide a less magical way to achieve this same result, by locating a "pyvenv.cfg" file next to (or one directory up from) the Python binary. 
The bulk of the work in PEP 405 is aimed towards a rather different goal from yours - to be able to share an installed Python's copy of the standard library among a number of virtual environments. This is the purpose of the "home" key in pyvenv.cfg and the added sys.base_prefix (which point to the Python installation whose standard library will be used). I think this serves a valuable and common use case, but I wonder if your use case couldn't also be served with a minor tweak to PEP 405. Currently it ignores a pyvenv.cfg file with no "home" key; instead, it could set sys.prefix and sys.base_prefix both to the location of that pyvenv.cfg. For most purposes, this would result in a broken Python (no standard library), but it might help you? Beyond that possible tweak, while I certainly wouldn't oppose any effort to clean up / document / make-optional Python's startup sys.path-setting behavior, I think it's mostly orthogonal to PEP 405, and I don't think it would be helpful to expand the scope of PEP 405 to include that effort. Carl -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: OpenPGP digital signature URL: From ethan at stoneleaf.us Mon Mar 19 20:37:19 2012 From: ethan at stoneleaf.us (Ethan Furman) Date: Mon, 19 Mar 2012 12:37:19 -0700 Subject: [Python-Dev] PEP czar for PEP 3144? In-Reply-To: References: Message-ID: <4F678AEF.4080002@stoneleaf.us> Nick Coghlan wrote: > Collapsing the address list has to build the result list anyway to > actually handle the deduplication part of its job, so returning a > concrete list makes sense in that case. Having only one function return a list instead of an iterator seems questionable. Depending on the code it could either keep track of what it has returned so far in a set and avoid duplication that way; or, just return an `iter(listobject)` instead of `listobject`. 
~Ethan~ From guido at python.org Mon Mar 19 20:55:11 2012 From: guido at python.org (Guido van Rossum) Date: Mon, 19 Mar 2012 12:55:11 -0700 Subject: [Python-Dev] PEP czar for PEP 3144? In-Reply-To: <4F678AEF.4080002@stoneleaf.us> References: <4F678AEF.4080002@stoneleaf.us> Message-ID: On Mon, Mar 19, 2012 at 12:37 PM, Ethan Furman wrote: > Nick Coghlan wrote: >> >> Collapsing the address list has to build the result list anyway to >> actually handle the deduplication part of its job, so returning a >> concrete list makes sense in that case. > > > Having only one function return a list instead of an iterator seems > questionable. > > Depending on the code it could either keep track of what it has returned so > far in a set and avoid duplication that way; or, just return an > `iter(listobject)` instead of `listobject`. I know I'm lacking context, but is the list ever expected to be huge? If not, what's wrong with always returning a list? -- --Guido van Rossum (python.org/~guido) From ethan at stoneleaf.us Mon Mar 19 21:13:56 2012 From: ethan at stoneleaf.us (Ethan Furman) Date: Mon, 19 Mar 2012 13:13:56 -0700 Subject: [Python-Dev] PEP czar for PEP 3144? In-Reply-To: References: <4F678AEF.4080002@stoneleaf.us> Message-ID: <4F679384.4040602@stoneleaf.us> Guido van Rossum wrote: > On Mon, Mar 19, 2012 at 12:37 PM, Ethan Furman wrote: >> Nick Coghlan wrote: >>> Collapsing the address list has to build the result list anyway to >>> actually handle the deduplication part of its job, so returning a >>> concrete list makes sense in that case. >> >> Having only one function return a list instead of an iterator seems >> questionable. >> >> Depending on the code it could either keep track of what it has returned so >> far in a set and avoid duplication that way; or, just return an >> `iter(listobject)` instead of `listobject`. > > I know I'm lacking context, but is the list ever expected to be huge? > If not, what's wrong with always returning a list? 
Nothing wrong in and of itself. It just seems to me that if we have several functions that deal with ip addresses/networks/etc, and all but one return iterators, that one is going to be a pain... 'Which one returns a list again? Oh yeah, that one.' Granted it's mostly a stylistic preference for consistency. ~Ethan~ From ethan at stoneleaf.us Mon Mar 19 21:42:57 2012 From: ethan at stoneleaf.us (Ethan Furman) Date: Mon, 19 Mar 2012 13:42:57 -0700 Subject: [Python-Dev] PEP 405 (built-in virtualenv) status In-Reply-To: <4F678690.3000600@oddbird.net> References: <4F626278.7030701@oddbird.net> <4F626712.3030906@gmail.com> <4F62692E.8040203@oddbird.net> <4F678690.3000600@oddbird.net> Message-ID: <4F679A51.4070309@stoneleaf.us> Carl Meyer wrote: > The bulk of the work in PEP 405 is aimed towards a rather different goal > from yours - to be able to share an installed Python's copy of the > standard library among a number of virtual environments. This is the > purpose of the "home" key in pyvenv.cfg and the added sys.base_prefix > (which point to the Python installation whose standard library will be > used). I think this serves a valuable and common use case, but I wonder > if your use case couldn't also be served with a minor tweak to PEP 405. > Currently it ignores a pyvenv.cfg file with no "home" key; instead, it > could set sys.prefix and sys.base_prefix both to the location of that > pyvenv.cfg. For most purposes, this would result in a broken Python (no > standard library), but it might help you? Instead of no home key, how about an empty home key? Explicit being better, and all that. 
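A rough sketch of the prefix handling under discussion (editorial illustration: the "home" key semantics come from PEP 405, but `apply_pyvenv_cfg` is a hypothetical helper, and the missing-key behaviour shown is the tweak floated in this thread, not settled spec):

```python
def apply_pyvenv_cfg(env_dir, cfg):
    """Derive (sys.prefix, sys.base_prefix) values from a parsed pyvenv.cfg.

    Per PEP 405, the "home" key names the base Python installation whose
    standard library the environment shares.  The tweak discussed in this
    thread: a missing (or empty) "home" key makes the environment its own
    base -- isolated, with no shared standard library.
    """
    home = cfg.get("home", "")
    if home:
        return env_dir, home      # venv prefix, base installation prefix
    return env_dir, env_dir       # isolated: both prefixes point at the env
```

With `{"home": "/usr/local/bin"}` this yields a stdlib-sharing environment; with no "home" key at all, both prefixes collapse onto the environment directory.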
~Ethan~ From jimjjewett at gmail.com Mon Mar 19 22:01:19 2012 From: jimjjewett at gmail.com (Jim Jewett) Date: Mon, 19 Mar 2012 17:01:19 -0400 Subject: [Python-Dev] [Python-checkins] cpython (2.7): Fixes Issue 14234: fix for the previous commit, keep compilation when In-Reply-To: References: Message-ID: Does this mean that if Python is updated before expat, python will compile out the expat randomization, and therefore not use if even after expat is updated? -jJ On Thu, Mar 15, 2012 at 2:01 PM, benjamin.peterson wrote: > http://hg.python.org/cpython/rev/ada6bfbeceb8 > changeset: ? 75699:ada6bfbeceb8 > branch: ? ? ?2.7 > user: ? ? ? ?Gregory P. Smith > date: ? ? ? ?Wed Mar 14 18:12:23 2012 -0700 > summary: > ?Fixes Issue 14234: fix for the previous commit, keep compilation when > using --with-system-expat working when the system expat does not have > salted hash support. > > files: > ?Modules/expat/expat.h | ?2 ++ > ?Modules/pyexpat.c ? ? | ?5 +++++ > ?2 files changed, 7 insertions(+), 0 deletions(-) > > > diff --git a/Modules/expat/expat.h b/Modules/expat/expat.h > --- a/Modules/expat/expat.h > +++ b/Modules/expat/expat.h > @@ -892,6 +892,8 @@ > ?XML_SetHashSalt(XML_Parser parser, > ? ? ? ? ? ? ? ? unsigned long hash_salt); > > +#define XML_HAS_SET_HASH_SALT ?/* Python Only: Defined for pyexpat.c. */ > + > ?/* If XML_Parse or XML_ParseBuffer have returned XML_STATUS_ERROR, then > ? ?XML_GetErrorCode returns information about the error. > ?*/ > diff --git a/Modules/pyexpat.c b/Modules/pyexpat.c > --- a/Modules/pyexpat.c > +++ b/Modules/pyexpat.c > @@ -1302,8 +1302,13 @@ > ? ? else { > ? ? ? ? self->itself = XML_ParserCreate(encoding); > ? ? } > +#if ((XML_MAJOR_VERSION >= 2) && (XML_MINOR_VERSION >= 1)) || defined(XML_HAS_SET_HASH_SALT) > + ? ?/* This feature was added upstream in libexpat 2.1.0. ?Our expat copy > + ? ? * has a backport of this feature where we also define XML_HAS_SET_HASH_SALT > + ? ? * to indicate that we can still use it. */ > ? ? 
XML_SetHashSalt(self->itself, > ? ? ? ? ? ? ? ? ? ? (unsigned long)_Py_HashSecret.prefix); > +#endif > ? ? self->intern = intern; > ? ? Py_XINCREF(self->intern); > ?#ifdef Py_TPFLAGS_HAVE_GC > > -- > Repository URL: http://hg.python.org/cpython > > _______________________________________________ > Python-checkins mailing list > Python-checkins at python.org > http://mail.python.org/mailman/listinfo/python-checkins > From benjamin at python.org Mon Mar 19 22:03:37 2012 From: benjamin at python.org (Benjamin Peterson) Date: Mon, 19 Mar 2012 16:03:37 -0500 Subject: [Python-Dev] [Python-checkins] cpython (2.7): Fixes Issue 14234: fix for the previous commit, keep compilation when In-Reply-To: References: Message-ID: 2012/3/19 Jim Jewett : > Does this mean that if Python is updated before expat, python will > compile out the expat randomization, and therefore not use if even > after expat is updated? If you're using --with-system-expat -- Regards, Benjamin From guido at python.org Mon Mar 19 22:21:50 2012 From: guido at python.org (Guido van Rossum) Date: Mon, 19 Mar 2012 14:21:50 -0700 Subject: [Python-Dev] PEP czar for PEP 3144? In-Reply-To: <4F679384.4040602@stoneleaf.us> References: <4F678AEF.4080002@stoneleaf.us> <4F679384.4040602@stoneleaf.us> Message-ID: On Mon, Mar 19, 2012 at 1:13 PM, Ethan Furman wrote: > Guido van Rossum wrote: >> >> On Mon, Mar 19, 2012 at 12:37 PM, Ethan Furman wrote: >>> >>> Nick Coghlan wrote: >>>> >>>> Collapsing the address list has to build the result list anyway to >>>> actually handle the deduplication part of its job, so returning a >>>> concrete list makes sense in that case. >>> >>> >>> Having only one function return a list instead of an iterator seems >>> questionable. >>> >>> Depending on the code it could either keep track of what it has returned >>> so >>> far in a set and avoid duplication that way; or, just return an >>> `iter(listobject)` instead of `listobject`. 
>> >> >> I know I'm lacking context, but is the list ever expected to be huge? >> If not, what's wrong with always returning a list? > > > Nothing wrong in and of itself. ?It just seems to me that if we have several > functions that deal with ip addresses/networks/etc, and all but one return > iterators, that one is going to be a pain... 'Which one returns a list > again? Oh yeah, that one.' It depends on whether they really are easy to confuse. If they are, indeed that feels like poor API design. But sometimes the only time two things seem confusingly similar is when you have not actually tried to use them. A naming convention often helps too. > Granted it's mostly a stylistic preference for consistency. And remember that consistency is good in moderation, but if it becomes a goal in itself you may have a problem. -- --Guido van Rossum (python.org/~guido) From martin at v.loewis.de Mon Mar 19 22:23:32 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 19 Mar 2012 22:23:32 +0100 Subject: [Python-Dev] svn.python.org and buildbots down In-Reply-To: <20120319134537.GA28530@sleipnir.bytereef.org> References: <20120319122637.GA28176@sleipnir.bytereef.org> <20120319142539.7e83c3fb@pitrou.net> <20120319134537.GA28530@sleipnir.bytereef.org> Message-ID: <4F67A3D4.20707@v.loewis.de> > But don't bother to find out how to restart it just for me. I presume > Martin knows the setup and will do it later. It seems to be working fine now, and I didn't do anything. Thomas rebooted the system for hardware inspection at 15:02 (and brought it back up at 15:18), so most likely, it started as part of the regular boot process (as it should have done on the previous reboot as well). Regards, Martin P.S. FWIW, the hardware inspection didn't reveal any hardware problems, so it remains unclear what is causing the outages. 
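To make the iterator-versus-list point in the PEP 3144 subthread above concrete, here is a standalone sketch (editorial -- `iter_subnet_indices` is a made-up name, not the ipaddr API) of why laziness matters once a /96 expands into 2**32 /128 networks:

```python
import itertools

def iter_subnet_indices(prefixlen, new_prefixlen):
    """Lazily yield one index per subnet when splitting a prefix.

    Splitting a /96 into /128s produces 2**32 results; a generator lets
    callers take a handful without materializing billions of objects,
    while list(...) stays a one-liner for small expansions.
    """
    if new_prefixlen < prefixlen:
        raise ValueError("new prefix must not be shorter than the original")
    for index in range(2 ** (new_prefixlen - prefixlen)):
        yield index  # a real implementation would yield network objects

# Touches only the first three of the 4294967296 potential results:
first_three = list(itertools.islice(iter_subnet_indices(96, 128), 3))
```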
From pmoody at google.com Mon Mar 19 22:58:12 2012 From: pmoody at google.com (Peter Moody) Date: Mon, 19 Mar 2012 14:58:12 -0700 Subject: [Python-Dev] PEP czar for PEP 3144? In-Reply-To: References: <4F678AEF.4080002@stoneleaf.us> Message-ID: On Mon, Mar 19, 2012 at 12:55 PM, Guido van Rossum wrote: > On Mon, Mar 19, 2012 at 12:37 PM, Ethan Furman wrote: >> Nick Coghlan wrote: >>> >>> Collapsing the address list has to build the result list anyway to >>> actually handle the deduplication part of its job, so returning a >>> concrete list makes sense in that case. >> >> >> Having only one function return a list instead of an iterator seems >> questionable. >> >> Depending on the code it could either keep track of what it has returned so >> far in a set and avoid duplication that way; or, just return an >> `iter(listobject)` instead of `listobject`. > > I know I'm lacking context, but is the list ever expected to be huge? > If not, what's wrong with always returning a list? It's possible to return massive lists, (eg, returning the 4+ billion /128 subnets in /96 or something even larger, but I don't think that's very common). I've generally tried to avoid confusion by having 'iter' in the iterating methods, but if more of the methods return iterators, maybe I need to rethink that? > -- > --Guido van Rossum (python.org/~guido) > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/pmoody%40google.com -- Peter Moody? ? ? Google? ? 1.650.253.7306 Security Engineer? pgp:0xC3410038 From guido at python.org Mon Mar 19 23:04:52 2012 From: guido at python.org (Guido van Rossum) Date: Mon, 19 Mar 2012 15:04:52 -0700 Subject: [Python-Dev] PEP czar for PEP 3144? 
In-Reply-To: References: <4F678AEF.4080002@stoneleaf.us> Message-ID: On Mon, Mar 19, 2012 at 2:58 PM, Peter Moody wrote: > On Mon, Mar 19, 2012 at 12:55 PM, Guido van Rossum wrote: >> On Mon, Mar 19, 2012 at 12:37 PM, Ethan Furman wrote: >>> Nick Coghlan wrote: >>>> >>>> Collapsing the address list has to build the result list anyway to >>>> actually handle the deduplication part of its job, so returning a >>>> concrete list makes sense in that case. >>> >>> >>> Having only one function return a list instead of an iterator seems >>> questionable. >>> >>> Depending on the code it could either keep track of what it has returned so >>> far in a set and avoid duplication that way; or, just return an >>> `iter(listobject)` instead of `listobject`. >> >> I know I'm lacking context, but is the list ever expected to be huge? >> If not, what's wrong with always returning a list? > > It's possible to return massive lists, (eg, returning the 4+ billion > /128 subnets in /96 or something even larger, but I don't think that's > very common). I've generally tried to avoid confusion by having 'iter' > in the iterating methods, but if more of the methods return iterators, > maybe I need to rethink that? I personally like having 'iter' in the name (e.g. iterkeys() -- note that we dropped this in Py3k because it's no longer an iterator, it's a dict view now. But I don't want to promote that style for ipaddr.py. -- --Guido van Rossum (python.org/~guido) From ethan at stoneleaf.us Mon Mar 19 22:50:22 2012 From: ethan at stoneleaf.us (Ethan Furman) Date: Mon, 19 Mar 2012 14:50:22 -0700 Subject: [Python-Dev] PEP czar for PEP 3144? In-Reply-To: References: <4F678AEF.4080002@stoneleaf.us> <4F679384.4040602@stoneleaf.us> Message-ID: <4F67AA1E.6070907@stoneleaf.us> Guido van Rossum wrote: > On Mon, Mar 19, 2012 at 1:13 PM, Ethan Furman wrote: >> Nothing wrong in and of itself. 
It just seems to me that if we have several >> functions that deal with ip addresses/networks/etc, and all but one return >> iterators, that one is going to be a pain... 'Which one returns a list >> again? Oh yeah, that one.' > > It depends on whether they really are easy to confuse. If they are, > indeed that feels like poor API design. But sometimes the only time > two things seem confusingly similar is when you have not actually > tried to use them. Heh -- true, I have not tried to use them (yet) -- just offering another viewpoint. ;) >> Granted it's mostly a stylistic preference for consistency. > > And remember that consistency is good in moderation, but if it becomes > a goal in itself you may have a problem. While I agree that consistency as a goal in and of itself is not good, I consider it more important than 'moderation' implies; in my own code I try to only be inconsistent when there is a good reason to be. To me, "it's already a list" isn't a good reason -- yes, that's easier for the library author, but is it easier for the library user? What is the library user gaining by having a list returned instead of an iterator? Of course, the flip-side also holds: what is the library user losing by getting an iterator when a list was available? When we weigh the pros and cons, and it comes down to a smidgeon of performance in trade for consistency [1], I would vote for consistency. ~Ethan~ [1] I'm assuming that 'iter(some_list)' is a quick operation. From tjreedy at udel.edu Mon Mar 19 23:44:44 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Mon, 19 Mar 2012 18:44:44 -0400 Subject: [Python-Dev] PEP czar for PEP 3144?
In-Reply-To: References: <4F678AEF.4080002@stoneleaf.us> Message-ID: On 3/19/2012 6:04 PM, Guido van Rossum wrote: > On Mon, Mar 19, 2012 at 2:58 PM, Peter Moody wrote: >> On Mon, Mar 19, 2012 at 12:55 PM, Guido van Rossum wrote: >>> On Mon, Mar 19, 2012 at 12:37 PM, Ethan Furman wrote: >>>> Nick Coghlan wrote: >>>>> >>>>> Collapsing the address list has to build the result list anyway to >>>>> actually handle the deduplication part of its job, so returning a >>>>> concrete list makes sense in that case. >>>> >>>> >>>> Having only one function return a list instead of an iterator seems >>>> questionable. >>>> >>>> Depending on the code it could either keep track of what it has returned so >>>> far in a set and avoid duplication that way; or, just return an >>>> `iter(listobject)` instead of `listobject`. >>> >>> I know I'm lacking context, but is the list ever expected to be huge? >>> If not, what's wrong with always returning a list? >> >> It's possible to return massive lists, (eg, returning the 4+ billion >> /128 subnets in /96 or something even larger, but I don't think that's >> very common). I've generally tried to avoid confusion by having 'iter' >> in the iterating methods, but if more of the methods return iterators, >> maybe I need to rethink that? > > I personally like having 'iter' in the name (e.g. iterkeys() -- note > that we dropped this in Py3k because it's no longer an iterator, it's > a dict view now. But I don't want to promote that style for ipaddr.py. I am not sure which way you are pointing, but the general default in 3.x is to return iterators: range, zip, enumerate, map, filter, reversed, open (file objects), as well as the dict methods. I am quite happy to be rid of the 'iter' prefix on the latter. This is aside from itertools. The main exceptions I can think of are str.split and sorted. For sorted, a list *must* be constructed anyway, so might as well return it. This apparently matches the case under consideration.
If name differentiation is wanted, call it xxxlist. -- Terry Jan Reedy From tjreedy at udel.edu Mon Mar 19 23:47:45 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Mon, 19 Mar 2012 18:47:45 -0400 Subject: [Python-Dev] svn.python.org and buildbots down In-Reply-To: <20120319142539.7e83c3fb@pitrou.net> References: <20120319122637.GA28176@sleipnir.bytereef.org> <20120319142539.7e83c3fb@pitrou.net> Message-ID: On 3/19/2012 9:25 AM, Antoine Pitrou wrote: > The buildbots should be back now. As for svn.python.org, is anyone > using it? Last I knew, some files there are required to fully build Python on Windows. I would be happy if that has or were to change. -- Terry Jan Reedy From guido at python.org Tue Mar 20 00:34:17 2012 From: guido at python.org (Guido van Rossum) Date: Mon, 19 Mar 2012 16:34:17 -0700 Subject: [Python-Dev] PEP czar for PEP 3144? In-Reply-To: References: <4F678AEF.4080002@stoneleaf.us> Message-ID: On Mon, Mar 19, 2012 at 3:44 PM, Terry Reedy wrote: > I am not sure which way you are pointing, but the general default in 3.x is > to return iterators: range, zip, enumerate, map, filter, reversed, open > (file objects), as well at the dict methods. Actually as I tried to say, the dict methods (keys() etc.) DON'T return iterators. They return "views" which are iterable. Anyway, I also tried to imply that it matters if the number of list items would ever be huge. It seems that is indeed possible (even if not likely) so I think iterators are useful. > I am quite happy to be rid of > the 'iter' prefix on the latter. This is aside from itertools. The main > exceptions I can think of are str.split and sorted. For sorted, a list > *must* be constructed anyway, so might as well return it. This apparently > matches the case under consideration. If name differentiation is wanted, > call it xxxlist. Agreed, ideally you don't need to know or it'll be obvious from the name without an explicit 'list' or 'iter'. 
-- --Guido van Rossum (python.org/~guido) From stephen at xemacs.org Tue Mar 20 01:43:10 2012 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Tue, 20 Mar 2012 09:43:10 +0900 Subject: [Python-Dev] PEP czar for PEP 3144? In-Reply-To: References: <4F678AEF.4080002@stoneleaf.us> Message-ID: On Tue, Mar 20, 2012 at 8:34 AM, Guido van Rossum wrote: > Anyway, I also tried to imply that it matters if the number of list > items would ever be huge. It seems that is indeed possible (even if > not likely) so I think iterators are useful. But according to Nick's post, there's some sort of uniquification that is done, and the algorithm currently used computes the whole list anyway. I suppose that one could do the uniquification lazily, or find some other way to avoid that computation. Is it worth it to optimize an unlikely case? From ncoghlan at gmail.com Tue Mar 20 02:19:01 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 20 Mar 2012 11:19:01 +1000 Subject: [Python-Dev] PEP czar for PEP 3144? In-Reply-To: References: <4F678AEF.4080002@stoneleaf.us> Message-ID: On Tue, Mar 20, 2012 at 10:43 AM, Stephen J. Turnbull wrote: > But according to Nick's post, there's some sort of uniquification that > is done, and the algorithm currently used computes the whole list anyway. > > I suppose that one could do the uniquification lazily, or find some other > way to avoid that computation. Is it worth it to optimize an unlikely case? Yeah, the only place where I thought retaining the list output made particular sense was "collapse_address_list". I have no problem with that operation expecting a real sequence as input and producing an actual list as output, since the entire (deduplicated) sequence will eventually end up in memory for checking purposes anyway, even if the result was an iterator rather than a list and it already has "list" in its name.
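The deduplication Nick mentions can be sketched as follows (editorial toy: the real collapse_address_list also merges adjacent networks, which this sketch skips; it shows only the point that dedup keeps the whole result in memory anyway):

```python
def dedup_preserving_order(addresses):
    """Order-preserving deduplication that returns a concrete list.

    Every value already emitted has to be remembered in order to filter
    later duplicates, so the complete result is held in memory anyway --
    at that point, returning a list rather than an iterator is free.
    """
    seen = set()
    collapsed = []
    for address in addresses:
        if address not in seen:
            seen.add(address)
            collapsed.append(address)
    return collapsed
```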
The other instances I noticed should all just be a matter of replacing "output.append(value)" calls with "yield value" instead, so it seems sensible to standardise on a Py3k style iterators-instead-of-lists API for the standard library version of the module. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From steve at pearwood.info Tue Mar 20 03:51:37 2012 From: steve at pearwood.info (Steven D'Aprano) Date: Tue, 20 Mar 2012 13:51:37 +1100 Subject: [Python-Dev] PEP czar for PEP 3144? In-Reply-To: <4F67AA1E.6070907@stoneleaf.us> References: <4F678AEF.4080002@stoneleaf.us> <4F679384.4040602@stoneleaf.us> <4F67AA1E.6070907@stoneleaf.us> Message-ID: <20120320025137.GA28460@ando> On Mon, Mar 19, 2012 at 02:50:22PM -0700, Ethan Furman wrote: > Guido van Rossum wrote: [...] > >And remember that consistency is good in moderation, but if it becomes > >a goal in itself you may have a problem. > > While I agree that consistency as a goal in and of itself is not good, I > consider it more important than 'moderation' implies; in my own code I > try to only be inconsistent when there is a good reason to be. I think we're probably in violent agreement, but I would put it this way: Consistency for its own sake *is* good, since consistency makes it easier for people to reason about the behaviour of functions on the basis that they are similar to other functions. But it is not the *only* good, and it is legitimate to trade-off one good for another good as needed. > To me, "it's already a list" isn't a good reason -- yes, that's easier > for the library author, but is it easier for the library user? What is > the library user gaining by having a list returned instead of an iterator? I guess this discussion really hinges on which of these two positions you take: 1. The function naturally returns a list, should we compromise that simplicity by returning an iterator to be consistent with the other related/similar functions in the library? 2. 
These related/similar functions naturally return iterators, should we compromise that consistency by allowing one of them to return a list as it simplifies the implementation? > Of course, the flip-side also holds: what is the library user losing by > getting an iterator when a list was available? > > When we way the pros and cons, and it comes down to a smidgeon of > performance in trade for consistency [1], I would vote for consistency. I lean that way as well. > ~Ethan~ > > [1] I'm assuming that 'iter(some_list)' is a quick operation. For very small lists, it's about half as expensive as creating the list in the first place: steve at runes:~$ python3.2 -m timeit -s "x = (1,2,3)" "list(x)" 1000000 loops, best of 3: 0.396 usec per loop steve at runes:~$ python3.2 -m timeit -s "x = (1,2,3)" "iter(list(x))" 1000000 loops, best of 3: 0.614 usec per loop For large lists, it's approximately free: steve at runes:~$ python3.2 -m timeit -s "x = (1,2,3)*10000" "list(x)" 10000 loops, best of 3: 111 usec per loop steve at runes:~$ python3.2 -m timeit -s "x = (1,2,3)*10000" "iter(list(x))" 10000 loops, best of 3: 111 usec per loop On the other hand, turning the list iterator into a list again is probably not quite so cheap. -- Steven From jimjjewett at gmail.com Tue Mar 20 04:51:35 2012 From: jimjjewett at gmail.com (Jim J. Jewett) Date: Mon, 19 Mar 2012 20:51:35 -0700 (PDT) Subject: [Python-Dev] Issue #10278 -- why not just an attribute? In-Reply-To: Message-ID: <4f67fec7.65aa320a.62e4.0780@mx.google.com> In http://mail.python.org/pipermail/python-dev/2012-March/117762.html Georg Brandl posted: >> + If available, a monotonic clock is used. By default, if *strict* is False, >> + the function falls back to another clock if the monotonic clock failed or is >> + not available. If *strict* is True, raise an :exc:`OSError` on error or >> + :exc:`NotImplementedError` if no monotonic clock is available. > This is not clear to me. 
Why wouldn't it raise OSError on error even with > strict=False? Please clarify which exception is raised in which case. Passing strict as an argument seems like overkill since it will always be meaningless on some (most?) platforms. Why not just use a function attribute? Those few users who do care can check the value of time.steady.monotonic before calling time.steady(); exceptions raised will always be whatever the clock actually raises. -jJ -- If there are still threading problems with my replies, please email me with details, so that I can try to resolve them. -jJ From steve at pearwood.info Tue Mar 20 07:54:57 2012 From: steve at pearwood.info (Steven D'Aprano) Date: Tue, 20 Mar 2012 17:54:57 +1100 Subject: [Python-Dev] cpython: Issue #10278: Add an optional strict argument to time.steady(), False by default In-Reply-To: References: Message-ID: <20120320065456.GC28460@ando> On Mon, Mar 19, 2012 at 01:35:49PM +0100, Victor Stinner wrote: > Said differently: time.steady(strict=True) is always monotonic (*), > whereas time.steady() may or may not be monotonic, depending on what > is available. > > time.steady() is a best-effort steady clock. > > (*) time.steady(strict=True) relies on the OS monotonic clock. If the > OS provides a "not really monotonic" clock, Python cannot do better. I don't think that is true. Surely Python can guarantee that the clock will never go backwards by caching the last value. A sketch of an implementation:

def monotonic(_last=[None]):
    t = system_clock() # best effort, but sometimes goes backwards
    if _last[0] is not None:
        t = max(t, _last[0])
    _last[0] = t
    return t

Overhead if done in Python may be excessive, in which case do it in C. Unless I've missed something, that guarantees monotonicity -- it may not advance from one call to the next, but it will never go backwards. There's probably even a cleverer implementation that will not repeat the same value more than twice in a row.
I leave that as an exercise :) As far as I can tell, "steady" is a misnomer. We can't guarantee that the timer will tick at a steady rate. That will depend on the quality of the hardware clock. -- Steven From tshepang at gmail.com Tue Mar 20 08:24:17 2012 From: tshepang at gmail.com (Tshepang Lekhonkhobe) Date: Tue, 20 Mar 2012 09:24:17 +0200 Subject: [Python-Dev] regarding HTML mail In-Reply-To: <20120319142046.5a2ce86d@resist.wooz.org> References: <20120319142046.5a2ce86d@resist.wooz.org> Message-ID: On Mon, Mar 19, 2012 at 20:20, Barry Warsaw wrote: >>Can we have some guideline to allow only plain text emails, so as to >>avoid cases like >>http://mail.python.org/pipermail/docs/2012-March/007999.html, where >>you are forced to scroll horizontally in order to read the text. > > docs is a different mailing list than python-dev, but still neither list is > doing any content filtering. ?We could always enable that if we wanted to get > strict about it. > > In this case, this isn't html email, it's likely this bug: > > https://bugs.launchpad.net/mailman/+bug/558294 Maybe it is, but it's strange that my reply to that mail was nicely done: http://mail.python.org/pipermail/docs/2012-March/008001.html. From anacrolix at gmail.com Tue Mar 20 08:33:51 2012 From: anacrolix at gmail.com (Matt Joiner) Date: Tue, 20 Mar 2012 15:33:51 +0800 Subject: [Python-Dev] cpython: Issue #10278: Add an optional strict argument to time.steady(), False by default In-Reply-To: <20120320065456.GC28460@ando> References: <20120320065456.GC28460@ando> Message-ID: I believe we should make a monotonic_time method that assures monotonicity and be done with it. Forward steadiness can not be guaranteed. No parameters. On Mar 20, 2012 2:56 PM, "Steven D'Aprano" wrote: > On Mon, Mar 19, 2012 at 01:35:49PM +0100, Victor Stinner wrote: > > > Said differently: time.steady(strict=True) is always monotonic (*), > > whereas time.steady() may or may not be monotonic, depending on what > > is avaiable. 
> > > > time.steady() is a best-effort steady clock. > > > > (*) time.steady(strict=True) relies on the OS monotonic clock. If the > > OS provides a "not really monotonic" clock, Python cannot do better. > > I don't think that is true. Surely Python can guarantee that the clock > will never go backwards by caching the last value. A sketch of an > implementation: > > def monotonic(_last=[None]): > t = system_clock() # best effort, but sometimes goes backwards > if _last[0] is not None: > t = max(t, _last[0]) > _last[0] = t > return t > > Overhead if done in Python may be excessive, in which case do it in C. > > Unless I've missed something, that guarantees monotonicity -- it may not > advance from one call to the next, but it will never go backwards. > > There's probably even a cleverer implementation that will not repeat the > same value more than twice in a row. I leave that as an exercise :) > > As far as I can tell, "steady" is a misnomer. We can't guarantee that > the timer will tick at a steady rate. That will depend on the quality of > the hardware clock. > > > -- > Steven > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From glyph at twistedmatrix.com Tue Mar 20 09:43:44 2012 From: glyph at twistedmatrix.com (Glyph) Date: Tue, 20 Mar 2012 04:43:44 -0400 Subject: [Python-Dev] cpython: Issue #10278: Add an optional strict argument to time.steady(), False by default In-Reply-To: References: <20120320065456.GC28460@ando> Message-ID: On Mar 20, 2012, at 3:33 AM, Matt Joiner wrote: > I believe we should make a monotonic_time method that assures monotonicity and be done with it. Forward steadiness can not be guaranteed. No parameters. 
> I think this discussion has veered off a bit into the overly-theoretical. Python cannot really "guarantee" anything here; alternately, it guarantees everything, since if you don't like what Python gives you you can always get your money back :). It's the OS's job to guarantee things. We can all agree that a monotonic clock of some sort is useful. However, maybe my application wants CLOCK_MONOTONIC and maybe it wants CLOCK_MONOTONIC_RAW. Sometimes I want GetTickCount64 and sometimes I want QueryUnbiasedInterruptTime. While these distinctions are probably useless to most applications, they may be of interest to some, and Python really shouldn't make it unduly difficult to get at them. -glyph -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg.ewing at canterbury.ac.nz Tue Mar 20 06:09:52 2012 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Tue, 20 Mar 2012 18:09:52 +1300 Subject: [Python-Dev] PEP czar for PEP 3144? In-Reply-To: References: <4F678AEF.4080002@stoneleaf.us> Message-ID: <4F681120.90502@canterbury.ac.nz> Guido van Rossum wrote: > I personally like having 'iter' in the name (e.g. iterkeys() -- note > that we dropped this in Py3k because it's no longer an iterator, it's > a dict view now. But I don't want to promote that style for ipaddr.py. +1 from me too on having all methods that return iterators clearly indicating so. It's an important distinction, and it can be very confusing if some methods of an API return iterators and others don't with no easy way of remembering which is which. -- Greg From victor.stinner at gmail.com Tue Mar 20 10:25:13 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 20 Mar 2012 10:25:13 +0100 Subject: [Python-Dev] Issue #10278 -- why not just an attribute? In-Reply-To: <4f67fec7.65aa320a.62e4.0780@mx.google.com> References: <4f67fec7.65aa320a.62e4.0780@mx.google.com> Message-ID: 2012/3/20 Jim J. 
Jewett : > > > In http://mail.python.org/pipermail/python-dev/2012-March/117762.html > Georg Brandl posted: > >>> + If available, a monotonic clock is used. By default, if *strict* is False, >>> + the function falls back to another clock if the monotonic clock failed or is >>> + not available. If *strict* is True, raise an :exc:`OSError` on error or >>> + :exc:`NotImplementedError` if no monotonic clock is available. > >> This is not clear to me. Why wouldn't it raise OSError on error even with >> strict=False? Please clarify which exception is raised in which case. > > Passing strict as an argument seems like overkill since it will always > be meaningless on some (most?) platforms. Why not just use a function > attribute? Those few users who do care can check the value of > time.steady.monotonic before calling time.steady(); exceptions raised > will always be whatever the clock actually raises. The clock is chosen at runtime. You might use a different clock at each call. In most cases, Python should choose a clock at the first call and reuse it for next calls. For example, on Linux the following clocks are tested:
- clock_gettime(CLOCK_MONOTONIC_RAW)
- clock_gettime(CLOCK_MONOTONIC)
- gettimeofday()
- ftime()
Victor From victor.stinner at gmail.com Tue Mar 20 10:33:11 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 20 Mar 2012 10:33:11 +0100 Subject: [Python-Dev] cpython: Issue #10278: Add an optional strict argument to time.steady(), False by default In-Reply-To: References: <20120320065456.GC28460@ando> Message-ID: > I think this discussion has veered off a bit into the overly-theoretical. > Python cannot really "guarantee" anything here That's why the function name was changed from time.monotonic() to time.steady(strict=True). If you want to change something, you should change the documentation to list OS limitations.
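The runtime fallback Victor describes above can be sketched like this (editorial: the candidate order matches his Linux list, but the availability probe is simulated with a plain set rather than real OS calls):

```python
def choose_steady_clock(available, strict=False):
    """Return the first usable clock from the preferred fallback order.

    `available` is the set of clock APIs the platform supports; a real
    implementation would discover this by attempting each call once and
    would cache the winner for subsequent time.steady() calls.
    """
    candidates = [
        ("clock_gettime(CLOCK_MONOTONIC_RAW)", True),
        ("clock_gettime(CLOCK_MONOTONIC)", True),
        ("gettimeofday()", False),
        ("ftime()", False),
    ]
    for name, is_monotonic in candidates:
        if name in available:
            if strict and not is_monotonic:
                break  # strict mode refuses non-monotonic fallbacks
            return name
    raise NotImplementedError("no suitable monotonic clock available")
```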
> It's the OS's job to guarantee things Correct, most Python modules exposing OS functions are thin wrappers and don't add any magic. When we need a higher level API, we write a new module: like shutil enhancing the os module. Victor From skippy.hammond at gmail.com Tue Mar 20 11:48:27 2012 From: skippy.hammond at gmail.com (Mark Hammond) Date: Tue, 20 Mar 2012 21:48:27 +1100 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: References: Message-ID: <4F68607B.7060307@gmail.com> For those who missed it, in http://bugs.python.org/issue14302, Martin recently commented: """ After more discussion, it appears that this change is too incompatible to be done in a single release. Therefore, I propose a long-term change into this direction, with the actual change not happening until 3.5. """ While I'm still unclear on the actual benefits of this, Martin's approach strikes a reasonable compromise so I withdraw my objections. Thanks, Mark From rdmurray at bitdance.com Tue Mar 20 13:32:10 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Tue, 20 Mar 2012 08:32:10 -0400 Subject: [Python-Dev] cpython: Issue #10278: Add an optional strict argument to time.steady(), False by default In-Reply-To: References: <20120320065456.GC28460@ando> Message-ID: <20120320123211.AA9F52500E7@webabinitio.net> On Tue, 20 Mar 2012 04:43:44 -0400, Glyph wrote: > > On Mar 20, 2012, at 3:33 AM, Matt Joiner wrote: > > > I believe we should make a monotonic_time method that assures monotonicity and be done with it. Forward steadiness can not be guaranteed. No parameters. > > > > I think this discussion has veered off a bit into the overly-theoretical. Python cannot really "guarantee" anything here; alternately, it guarantees everything, since if you don't like what Python gives you you can always get your money back :). It's the OS's job to guarantee things. We can all agree that a monotonic clock of some sort is useful. 
> > However, maybe my application wants CLOCK_MONOTONIC and maybe it wants CLOCK_MONOTONIC_RAW. Sometimes I want GetTickCount64 and sometimes I want QueryUnbiasedInterruptTime. While these distinctions are probably useless to most applications, they may be of interest to some, and Python really shouldn't make it unduly difficult to get at them. Something like: time.steady(require_clock=None) where require_clock can be any of BEST, CLOCK_MONOTONIC, CLOCK_MONOTONIC_RAW, GetTickCount64, QueryUnbiasedInterruptTime, etc? Then None would mean it is allowable to use time.time and the cache-the-last-time-returned algorithm, and BEST would be Victor's current 'strict=True'. And if you require a Linux clock on Windows or vice-versa, on your own head be it :) --David From victor.stinner at gmail.com Tue Mar 20 13:32:43 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 20 Mar 2012 13:32:43 +0100 Subject: [Python-Dev] pysandbox 1.5 released Message-ID: pysandbox is a Python sandbox. By default, untrusted code executed in the sandbox cannot modify the environment (write a file, use print or import a module). But you can configure the sandbox to choose exactly which features are allowed or not, e.g. import sys module and read /etc/issue file. http://pypi.python.org/pypi/pysandbox https://github.com/haypo/pysandbox/ Main changes since pysandbox 1.0.3: - More modules and functions are allowed: math, random and time modules, and the compile() builtin function for example - Drop the timeout feature: it was not effective on CPU intensive functions implemented in C - (Read the ChangeLog to see all changes.) pysandbox has known limitations: - it is unable to limit memory or CPU - it does not protect against bugs (e.g. crash) or vulnerabilities in CPython - dict methods able to modify a dict (e.g.
dict.update) are disabled to protect the sandbox namespace, but dict[key]=value is still accepted. It is recommended to run untrusted code in a subprocess to work around these limitations. pysandbox doesn't provide a helper yet. pysandbox is used by an IRC bot (fschfsch) to evaluate a Python expression. The bot uses fork() and setrlimit() to limit memory and to implement a timeout. https://github.com/haypo/pysandbox/wiki/fschfsch -- The limitation on dict methods is required to deny the modification of the __builtins__ dictionary. I proposed PEP 416 (frozendict) but Guido van Rossum is going to reject it. I don't see how to fix this limitation without modifying CPython. http://www.python.org/dev/peps/pep-0416/ Victor From ethan at stoneleaf.us Tue Mar 20 13:23:32 2012 From: ethan at stoneleaf.us (Ethan Furman) Date: Tue, 20 Mar 2012 05:23:32 -0700 Subject: [Python-Dev] PEP czar for PEP 3144? In-Reply-To: <4F681120.90502@canterbury.ac.nz> References: <4F678AEF.4080002@stoneleaf.us> <4F681120.90502@canterbury.ac.nz> Message-ID: <4F6876C4.7050808@stoneleaf.us> Greg Ewing wrote: > Guido van Rossum wrote: > >> I personally like having 'iter' in the name (e.g. iterkeys() -- note >> that we dropped this in Py3k because it's no longer an iterator, it's >> a dict view now. But I don't want to promote that style for ipaddr.py. > > +1 from me too on having all methods that return iterators > clearly indicating so. It's an important distinction, and > it can be very confusing if some methods of an API return > iterators and others don't with no easy way of remembering > which is which. With the prevalence of iterators in Python 3 [1], the easy way is to have the API default to iterators, drop 'iter' from the names, and use 'list' in the names to signal the oddball cases where a list is returned instead.
~Ethan~ [1] http://mail.python.org/pipermail/python-dev/2012-March/117815.html From Van.Lindberg at haynesboone.com Tue Mar 20 15:08:10 2012 From: Van.Lindberg at haynesboone.com (Lindberg, Van) Date: Tue, 20 Mar 2012 14:08:10 +0000 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F68607B.7060307@gmail.com> References: <4F68607B.7060307@gmail.com> Message-ID: <4F688F48.9010101@gmail.com> On 3/20/2012 5:48 AM, Mark Hammond wrote: > While I'm still unclear on the actual benefits of this, Martin's > approach strikes a reasonable compromise so I withdraw my objections. Ok. I was out of town and so could not respond to most of the latest discussion. A question for you Mark, Paul, (and anyone else): Éric correctly points out that there are actually two distinct changes proposed here: 1. Moving the Python binary 2. Changing from "Scripts" to "bin" So far, the primary resistance seems to be to item #1 - moving the python binary. There have been a few people who have noted that #2 will require some code to change (i.e. Paul), but I don't see lots of resistance. Am I reading you correctly? Thanks, Van CIRCULAR 230 NOTICE: To ensure compliance with requirements imposed by U.S. Treasury Regulations, Haynes and Boone, LLP informs you that any U.S. tax advice contained in this communication (including any attachments) was not intended or written to be used, and cannot be used, for the purpose of (i) avoiding penalties under the Internal Revenue Code or (ii) promoting, marketing or recommending to another party any transaction or matter addressed herein. CONFIDENTIALITY NOTICE: This electronic mail transmission is confidential, may be privileged and should be read or retained only by the intended recipient. If you have received this transmission in error, please immediately notify the sender and delete it from your system.
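[Editor's note: change #2 mostly bites tools that hard-code the directory name. Code that asks `sysconfig` for the layout instead keeps working whichever name a given install uses; a minimal sketch, with a helper name of my own invention rather than anything proposed in the thread:]

```python
import sysconfig

def scripts_dir(scheme=None):
    """Directory console scripts are installed into for this interpreter."""
    # sysconfig knows the platform's install scheme, so this returns
    # ".../Scripts" or ".../bin" without the caller caring which name
    # the installer chose.
    if scheme is None:
        return sysconfig.get_path("scripts")
    return sysconfig.get_path("scripts", scheme)

print(scripts_dir())
```

A tool written this way would be unaffected by a rename of "Scripts" to "bin", since only the scheme data inside sysconfig would change.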
From mail at timgolden.me.uk Tue Mar 20 15:26:59 2012 From: mail at timgolden.me.uk (Tim Golden) Date: Tue, 20 Mar 2012 14:26:59 +0000 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F688F48.9010101@gmail.com> References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> Message-ID: <4F6893B3.7090600@timgolden.me.uk> On 20/03/2012 14:08, VanL wrote: > On 3/20/2012 5:48 AM, Mark Hammond wrote: >> While I'm still unclear on the actual benefits of this, Martin's >> approach strikes a reasonable compromise so I withdraw my objections. > > > Ok. I was out of town and so could not respond to most of the latest > discussion. > > A question for you Mark, Paul, (and anyone else): Éric correctly points > out that there are actually two distinct changes proposed here: > > 1. Moving the Python binary > 2.
Changing from "Scripts" to "bin" > > So far, the primary resistance seems to be to item #1 - moving the > python binary. There have been a few people who have noted that #2 will > require some code to change (i.e. Paul), but I don't see lots of > resistance. Speaking for myself, I think that's true. At present I tend to add /scripts to my path and I can just as easily add /bin (for whatever version I'm running most often on that machine). TJG From van.lindberg at gmail.com Tue Mar 20 15:27:51 2012 From: van.lindberg at gmail.com (VanL) Date: Tue, 20 Mar 2012 09:27:51 -0500 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F68607B.7060307@gmail.com> References: <4F68607B.7060307@gmail.com> Message-ID: Germane to this discussion, I reached out for feedback. Most people didn't care about the issue, or were slightly inclined to have it be uniform across platforms. As Terry mentioned, I think that long-term uniformity will benefit everybody down the line, and that is the way to go. The most interesting feedback, though, related to moving the Python exe and placing it on the PATH. I got one argument back that I thought was persuasive here: We want things to 'just work.' Specifically, the following sequence of events should not require any fiddling on Windows: 1. Install python. 2. Open up a shell and run "python" 3. Use pip or easy_install to install regetron (a package that installs an executable file). 4. Run regetron. For step #2, the python exe needs to be on the PATH. For steps 3 and 4, the binaries directory needs to be on the PATH. In hearing from a couple people who teach python to beginners, this is a substantial hurdle - the first thing they need to do is to edit their environment to add these directories to the PATH. This is orthogonal to the Scripts/bin issue, but I thought it should be brought up. 
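[Editor's note: the "just works" checklist above boils down to two directories being visible on PATH — the interpreter's own directory for step 2 and the scripts directory for steps 3 and 4. A hedged sketch of how an installer or a diagnostic script could verify that:]

```python
import os
import sys
import sysconfig

def on_path(directory):
    """True if *directory* appears as an entry in the PATH variable."""
    wanted = os.path.normcase(os.path.normpath(directory))
    for entry in os.environ.get("PATH", "").split(os.pathsep):
        if entry and os.path.normcase(os.path.normpath(entry)) == wanted:
            return True
    return False

interp_dir = os.path.dirname(sys.executable)   # needed for step 2
scripts = sysconfig.get_path("scripts")        # needed for steps 3 and 4
print("interpreter dir on PATH:", on_path(interp_dir))
print("scripts dir on PATH:", on_path(scripts))
```

On a typical Unix install both checks pass trivially, which is exactly the asymmetry with Windows that the thread is about.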
From martin at v.loewis.de Tue Mar 20 16:52:44 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 20 Mar 2012 16:52:44 +0100 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: References: <4F68607B.7060307@gmail.com> Message-ID: <4F68A7CC.4000606@v.loewis.de> > In hearing from a couple people who teach python to beginners, this is a > substantial hurdle - the first thing they need to do is to edit their > environment to add these directories to the PATH. This is something I never understood. On Windows, it's custom to launch programs from the start menu, and Python is easy enough to find on the start menu (e.g. by typing "Python"). Why do people want to launch it by opening a shell window, then typing python? In any case, I have given up my resistance to the feature request for automatic path fiddling several years ago, and was since waiting for a contribution of a patch that makes it happen. Regards, Martin From yselivanov.ml at gmail.com Tue Mar 20 16:57:48 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Tue, 20 Mar 2012 11:57:48 -0400 Subject: [Python-Dev] pysandbox 1.5 released In-Reply-To: References: Message-ID: <0D8D9E6B-75DD-4925-BE5A-C60F7C6CF271@gmail.com> Well, I really hope that the PEP regarding frozendict will be accepted. Especially due to the fact that the required changes are small. With the recent projects like clojure-py, blog posts like http://goo.gl/bFB5x (Python becomes a platform), your pysandbox, it became clear that people start evaluating Python on a different level. And for developing custom languages, deeply experimenting with coroutines and who knows what else, frozendict is a missing concept in python's immutable types structure. On 2012-03-20, at 8:32 AM, Victor Stinner wrote: > pysandbox is a Python sandbox. By default, untrusted code executed in > the sandbox cannot modify the environment (write a file, use print or > import a module). 
But you can configure the sandbox to choose exactly > which features are allowed or not, e.g. import sys module and read > /etc/issue file. > > http://pypi.python.org/pypi/pysandbox > https://github.com/haypo/pysandbox/ > > Main changes since pysandbox 1.0.3: > > - More modules and functions are allowed: math, random and time > modules, and the compile() builtin function for example > - Drop the timeout feature: it was not effective on CPU intensive > functions implemented in C > - (Read the ChangeLog to see all changes.) > > pysandbox has known limitations: > > - it is unable to limit memory or CPU > - it does not protect against bugs (e.g. crash) or vulnerabilities in CPython > - dict methods able to modify a dict (e.g. dict.update) are disabled > to protect the sandbox namespace, but dict[key]=value is still > accepted > > It is recommanded to run untrusted code in a subprocess to workaround > these limitations. pysandbox doesn't provide an helper yet. > > pysandbox is used by an IRC bot (fschfsch) to evaluate a Python > expression. The bot uses fork() and setrlimit() to limit memory and to > implement a timeout. > > https://github.com/haypo/pysandbox/wiki/fschfsch > > -- > > The limitation on dict methods is required to deny the modification of > the __builtins__ dictionary. I proposed the PEP 416 (frozendict) but > Guido van Rossum is going to reject it. I don't see how to fix this > limitation without modifying CPython. 
> > http://www.python.org/dev/peps/pep-0416/ > > Victor > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/yselivanov.ml%40gmail.com From van.lindberg at gmail.com Tue Mar 20 17:02:27 2012 From: van.lindberg at gmail.com (VanL) Date: Tue, 20 Mar 2012 11:02:27 -0500 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F68A7CC.4000606@v.loewis.de> References: <4F68607B.7060307@gmail.com> <4F68A7CC.4000606@v.loewis.de> Message-ID: <4F68AA13.9030004@gmail.com> On 3/20/2012 10:52 AM, "Martin v. Löwis" wrote: >> In hearing from a couple people who teach python to beginners, this is a >> substantial hurdle - the first thing they need to do is to edit their >> environment to add these directories to the PATH. > This is something I never understood. On Windows, it's custom to launch > programs from the start menu, and Python is easy enough to find on the > start menu (e.g. by typing "Python"). Why do people want to launch it by > opening a shell window, then typing python? Because the workflow you suggest is broken when you are developing with Python. Assume that you are iteratively building up a program in Python. You aren't sure if it is right yet, so you want to get it into python to test it and see the output. There are three ways to do this. 1. Run python from the start menu. - Import sys, fiddle with sys.path to add my module, import/run my module, do my tests. When you exit /hard error out, the python window disappears. 2. Double-click the .py file - Runs the file, but then disappears immediately (unless you put in something like input/raw_input just to keep the window open) - and if it errors out, you never see the traceback - it disappears too fast. 3. Get a shell and run python.
This requires cd'ing to the directory where my .py file is, but then I run/import it and I see the information. To repeat the process, either type python again or just press arrow-up. 4. (Not relevant here) - do it in an IDE that does #3 for you. #3 is the only reasonable way to do development if you are not in an IDE. Thanks, Van From brian at python.org Tue Mar 20 17:04:53 2012 From: brian at python.org (Brian Curtin) Date: Tue, 20 Mar 2012 11:04:53 -0500 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F68A7CC.4000606@v.loewis.de> References: <4F68607B.7060307@gmail.com> <4F68A7CC.4000606@v.loewis.de> Message-ID: On Tue, Mar 20, 2012 at 10:52, "Martin v. Löwis" wrote: >> In hearing from a couple people who teach python to beginners, this is a >> substantial hurdle - the first thing they need to do is to edit their >> environment to add these directories to the PATH. > > This is something I never understood. On Windows, it's custom to launch > programs from the start menu, and Python is easy enough to find on the > start menu (e.g. by typing "Python"). Why do people want to launch it by > opening a shell window, then typing python? I've never thought about doing it otherwise. If I want to run the C:\Users\brian\example\sample.py script, I'd open a CMD and move to the example directory and execute the sample script. The class of about 60 people I taught a few years back at a previous employer all did the same thing without me specifying. Everyone was used to working in the command line for other tasks, from using other languages to running our products, so it was natural to them to run it that way. > In any case, I have given up my resistance to the feature request for > automatic path fiddling several years ago, and was since waiting for > a contribution of a patch that makes it happen. I'm working on the changes we discussed at PyCon.
http://bugs.python.org/issue3561 has a version of the patch doing it the old way - I hope to have the new way figured out soon. From martin at v.loewis.de Tue Mar 20 17:19:55 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 20 Mar 2012 17:19:55 +0100 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F68AA13.9030004@gmail.com> References: <4F68607B.7060307@gmail.com> <4F68A7CC.4000606@v.loewis.de> <4F68AA13.9030004@gmail.com> Message-ID: <4F68AE2B.1010507@v.loewis.de> > 1. Run python from the start menu. > - Import sys, fiddle with sys.path to add my module, import/run my > module, do my tests. When you exit /hard error out, the python window > disappears. > > 2. Double-click the .py file > - Runs the file, but then disappears immediately (unless you put in > something like input/raw_input just to keep the window open) - and if it > errors out, you never see the traceback - it disappears too fast. > > 3. Get a shell and run python. > This requires cd'ing to the directory where my .py file is, but then I > run/import it and I see the information. To repeat the process, either > type python again or just press arrow-up. > > 4. (Not relevant here) - do it in an IDE that does #3 for you. > > #3 is the only reasonable way to do development if you are not in an IDE. No - there is a version #3a: 3.a) Get a shell and run the script: CD into the directory, then directly run foo.py, without prefixing it with python.exe. This doesn't require any changes to the path, and is shorter in usage than having the path specified. With PEP 397, you will be able to run "py foo.py" without path modification, and it will even get the correct Python version (which neither the path manipulation nor the file association could achieve).
Regards, Martin From van.lindberg at gmail.com Tue Mar 20 17:24:33 2012 From: van.lindberg at gmail.com (VanL) Date: Tue, 20 Mar 2012 11:24:33 -0500 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F68AE2B.1010507@v.loewis.de> References: <4F68607B.7060307@gmail.com> <4F68A7CC.4000606@v.loewis.de> <4F68AA13.9030004@gmail.com> <4F68AE2B.1010507@v.loewis.de> Message-ID: <4F68AF41.2040502@gmail.com> On 3/20/2012 11:19 AM, "Martin v. Löwis" wrote: > No - there is a version #3a: 3.a) Get a shell and run the script: CD > into the directory, then directly run foo.py, without prefixing it > with python.exe. This doesn't require any changes to the path, and is > shorter in usage than having the path specified. Fair enough - but notice: 1) You are still in the shell, instead of running from the start menu; and 2) what if you want to import it and test a function with various inputs? You either implement a request/response in a __main__ block, or type "python" and then import foo. From tjreedy at udel.edu Tue Mar 20 17:24:31 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Tue, 20 Mar 2012 12:24:31 -0400 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F68AA13.9030004@gmail.com> References: <4F68607B.7060307@gmail.com> <4F68A7CC.4000606@v.loewis.de> <4F68AA13.9030004@gmail.com> Message-ID: On 3/20/2012 12:02 PM, VanL wrote: > On 3/20/2012 10:52 AM, "Martin v. Löwis" wrote: >>> In hearing from a couple people who teach python to beginners, this is a >>> substantial hurdle - the first thing they need to do is to edit their >>> environment to add these directories to the PATH. >> This is something I never understood. On Windows, it's custom to launch >> programs from the start menu, and Python is easy enough to find on the >> start menu (e.g. by typing "Python"). Why do people want to launch it by >> opening a shell window, then typing python?
Perhaps as the number of *nix users increases, the number of (*nix & windows) developer/users increases. > 3. Get a shell and run python. > This requires cd'ing to the directory where my .py file is, but then I > run/import it and I see the information. When IDLE crashes, running it from a cmd window is the only way to get a traceback to help diagnose the problem. -- Terry Jan Reedy From van.lindberg at gmail.com Tue Mar 20 17:28:04 2012 From: van.lindberg at gmail.com (VanL) Date: Tue, 20 Mar 2012 11:28:04 -0500 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F68AE2B.1010507@v.loewis.de> References: <4F68607B.7060307@gmail.com> <4F68A7CC.4000606@v.loewis.de> <4F68AA13.9030004@gmail.com> <4F68AE2B.1010507@v.loewis.de> Message-ID: <4F68B014.20906@gmail.com> On 3/20/2012 11:19 AM, "Martin v. Löwis" wrote: > No - there is a version #3a: 3.a) Get a shell and run the script: CD > into the directory, then directly run foo.py, without prefixing it > with python.exe. This doesn't require any changes to the path, and is > shorter in usage than having the path specified. With PEP 397, you > will be able to run "py foo.py" without path modification, and it will > get the correct Python version even (which neither the path > manipulation nor the file association could achieve). There is also one more scenario, assuming that your project includes other libraries. You can run setup.py directly in your example, but what about pip or easy_install? Both of those require the binaries directory to also be on the PATH - requiring at least a little PATH manipulation.
Thanks, Van From p.f.moore at gmail.com Tue Mar 20 19:19:26 2012 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 20 Mar 2012 18:19:26 +0000 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F688F48.9010101@gmail.com> References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> Message-ID: On 20 March 2012 14:08, Lindberg, Van wrote: > On 3/20/2012 5:48 AM, Mark Hammond wrote: >> While I'm still unclear on the actual benefits of this, Martin's >> approach strikes a reasonable compromise so I withdraw my objections. > > > Ok. I was out of town and so could not respond to most of the latest > discussion. > > A question for you Mark, Paul, (and anyone else): Éric correctly points > out that there are actually two distinct changes proposed here: > > 1. Moving the Python binary > 2. Changing from "Scripts" to "bin" > > So far, the primary resistance seems to be to item #1 - moving the > python binary. There have been a few people who have noted that #2 will > require some code to change (i.e. Paul), but I don't see lots of resistance. > > Am I reading you correctly? Somewhat. I don't really object to #1, but mildly object to #2. I also note that the proposals round the Lib directory seem to have disappeared. I assume those have been dropped - they were the ones I did object to. I also note that I'm assuming virtualenv will change to match whatever the Python version it's referencing does. I don't see how you can guarantee that, but if there are discrepancies between virtualenvs and installed Pythons, my level of objection goes up a little more. Martin's suggestion of an intermediate registry entry to ease transition doesn't help me. So I don't care about that. See a later message for my comments on PATH as it affects this discussion, though. Paul.
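[Editor's note: Paul's virtualenv worry — code not knowing which environment it is living in — is at least detectable at runtime. A best-effort probe: `sys.real_prefix` is what the third-party virtualenv of that era set, while `sys.base_prefix` comes from the then-draft PEP 405 style venvs; neither attribute is guaranteed on every interpreter, hence the fallbacks.]

```python
import sys

def in_virtual_env():
    """Best-effort: is this interpreter running inside a virtual env?"""
    if hasattr(sys, "real_prefix"):
        return True                          # classic virtualenv
    base = getattr(sys, "base_prefix", sys.prefix)
    return sys.prefix != base                # PEP 405 style venv

print("virtual env active:", in_virtual_env())
```

Layout-sensitive tooling could branch on a check like this rather than assuming one install scheme.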
From martin at v.loewis.de Tue Mar 20 19:17:05 2012 From: martin at v.loewis.de (martin at v.loewis.de) Date: Tue, 20 Mar 2012 19:17:05 +0100 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: References: <4F68607B.7060307@gmail.com> <4F68A7CC.4000606@v.loewis.de> <4F68AA13.9030004@gmail.com> Message-ID: <20120320191705.Horde.N5RmG7uWis5PaMmhOTiH9RA@webmail.df.eu> > When IDLE crashes, running it from a cmd window is the only way to > get a traceback to help diagnose the problem. Certainly. In this case, there is no PATH issue, though: you have to CD into the Python installation, anyway, to start IDLE - and there you have python.exe in the current directory. Plus, you can still launch Lib\idlelib\idle.py without prefixing it with python.exe. Regards, Martin From carl at oddbird.net Tue Mar 20 19:35:09 2012 From: carl at oddbird.net (Carl Meyer) Date: Tue, 20 Mar 2012 12:35:09 -0600 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> Message-ID: <4F68CDDD.6080000@oddbird.net> On 03/20/2012 12:19 PM, Paul Moore wrote: > I also note that I'm assuming virtualenv will change to match whatever > the Python version it's referencing does. I don't see how you can > guarantee that, but if there are discrepancies between virtualenvs and > installed Pythons, my level of objection goes up a little more. Virtualenv will follow whatever Python does, yes. Carl -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: OpenPGP digital signature URL: From valhallasw at arctus.nl Tue Mar 20 19:50:42 2012 From: valhallasw at arctus.nl (Merlijn van Deen) Date: Tue, 20 Mar 2012 19:50:42 +0100 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: References: Message-ID: On 13 March 2012 20:43, VanL wrote: > Following up on conversations at PyCon, I want to bring up one of my > personal hobby horses for change in 3.3: Fix install layout on Windows, with > a side order of making the PATH work better. As this is being considered an 'incompatible change' on the bug tracker item [1] in any case, I'd like to mention that this might also be a convenient moment to re-think the default install location. After all, software is supposed to be installed in %programfiles% on windows, not in c:\. I asked a question about this on IRC, to which the response was that there were two main reasons to install python in c:\pythonxy: 1 - issues due to spaces ('Program Files') or non-ascii characters in the path ('Fişiere Program' on a Romanian windows). These issues are supposed to be fixed by now (?). 2 - issues due to permissions - installing python / packages in %programfiles% may require administrator rights. Historical note: in python 1.5 the install location was changed to \program files\.., but in python 1.6/2.0 it was changed (back?) to \pythonxy. [2 @ 10618, 10850, 13804] [1] http://bugs.python.org/issue14302 [2] http://hg.python.org/cpython/file/a5add01e96be/Misc/HISTORY From p.f.moore at gmail.com Tue Mar 20 19:56:58 2012 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 20 Mar 2012 18:56:58 +0000 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: References: <4F68607B.7060307@gmail.com> Message-ID: On 20 March 2012 14:27, VanL wrote: > Germane to this discussion, I reached out for feedback.
Most people didn't > care about the issue, or were slightly inclined to have it be uniform across > platforms. > > As Terry mentioned, I think that long-term uniformity will benefit everybody > down the line, and that is the way to go. > > The most interesting feedback, though, related to moving the Python exe and > placing it on the PATH. I got one argument back that I thought was > persuasive here: We want things to 'just work.' Specifically, the following > sequence of events should not require any fiddling on Windows: > > 1. Install python. > 2. Open up a shell and run "python" > 3. Use pip or easy_install to install regetron (a package that installs an > executable file). > 4. Run regetron. > > For step #2, the python exe needs to be on the PATH. For steps 3 and 4, the > binaries directory needs to be on the PATH. This is covered (better, IMO) by PEP 397 - Python Launcher for Windows. Step 2, just run "py". If you prefer a particular version, run "py -2" or "py -3" or "py -3.2". Adding python.exe to PATH actually makes this message *worse* as it confuses the issue. In a post-PEP 397 world, I would say that we should be telling Windows users *not* to run python.exe at all. (Unless they are using virtualenvs - maybe PEP 397 could do with an extra option to run the Python from the currently active virtualenv, but that's a side issue). If we do put python.exe on PATH (whether it's in bin or not), we have to debate how to handle people having multiple versions of python on their machine. In a post-PEP 397 world, no Python is "the machine default" - .py files are associated with py.exe, not python.exe, so we have to consider the following 3 commands being run from a shell prompt: 1. myprog.py 2. py myprog.py 3. python myprog.py 1 and 2 will always do the same thing. However, 3 could easily do something completely different, if the Python in the #! line differs from the one found on PATH. 
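[Editor's note: the reason commands 1 and 2 always agree is that both go through the launcher's shebang dispatch rather than PATH lookup. As a rough illustration only — the real PEP 397 rules are considerably richer — the version request can be read straight off the script's first line:]

```python
import re

def requested_version(first_line):
    """Return '2', '3', '3.2', ... from a #! line, or None if unversioned."""
    m = re.match(r"#!.*?\bpython(\d(?:\.\d+)?)?\s*$", first_line.strip())
    if m is None:
        return None        # not a python shebang at all
    return m.group(1)      # group is None for a bare "python": any version

print(requested_version("#!/usr/bin/env python3"))
print(requested_version("#!/usr/bin/python2.7"))
print(requested_version("import os"))
```

A launcher working this way picks the interpreter from the script itself, so the result is independent of whatever `python` happens to shadow on PATH — exactly the property command 3 lacks.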
To me, this implies that it's better for (3) to need explicit user action (setting PATH) if it's to do anything other than give an error. But maybe that's just me. I've been hit too often by confusion caused by *not* remembering this fact. Note: I am using Vinay's py.exe all the time these days, so my comments about a "post-PEP 397 world" are from my direct experience. For your steps 3 and 4, there is certainly user intervention required as things stand. It would indeed be nice if "regetron" just worked as expected. But I'd argue a few points here: 1. On Windows, if regetron was not explicitly an application for working with my Python installation (like pip, easy_install, nose, etc) then I'd prefer it to be packaged as a standalone application using cx_Freeze or py2exe. I've had too many "applications" break because I accidentally uninstalled a dependency from my Python installation to want anything that is an end-user application installed into my Python scripts/bin directory. 2. If regetron is not an end-user application, it should really be getting installed in, and run from, a virtualenv. And in that case, activating the right virtualenv is part of the workflow. And that sets up PATH as needed, so there's no problem here. The problem with your example is that it depends on the package/executable. I looked at regetron (I thought it was a made up example at first!) and it seems clear to me that it should either be packaged up with py2exe/cx_Freeze, or (if it's sufficiently version-independent) sit outside of Python's installation tree. I can't see any reason why I'd expect a "regetron" command to work or not depending on which copy of Python on my PC I have active. But other applications may differ, I guess. I concede that the picture is much simpler when people only ever have a single version of Python on their machine. 
So for that case alone, maybe the "Make this Python the default" option in the installer should add the bin directory (or Scripts and the root, under the current layout) to the PATH. But equally, if the installer detects another copy of Python already installed, it should probably warn the user loudly that it's important to understand the implications of setting "make this Python the default", and should not set it by default (this may be the current behaviour, I don't know). I have no idea what proportion of Python users on Windows have multiple versions installed. I also have no idea how many such users work on the command line. My guess would be "not that many" and "not that many of the first group" respectively... But there are big groups like scientists and web developers who could sway these figures a lot. > In hearing from a couple people who teach python to beginners, this is a > substantial hurdle - the first thing they need to do is to edit their > environment to add these directories to the PATH. I'd be curious as to how much PEP 397's py.exe would have helped those people. But yes, it's an issue. Although someone at some point will have to introduce those beginners to the question of Python 2 vs Python 3, and PATH pain will hit them then, anyway. > This is orthogonal to the Scripts/bin issue, but I thought it should be > brought up. Agreed (both that it's orthogonal and that it should be discussed). Once Python 2 is dead and gone, these issues will be a lot simpler - but I don't think that's going to be for a few years yet. Paul. 
From van.lindberg at gmail.com Tue Mar 20 20:02:02 2012 From: van.lindberg at gmail.com (VanL) Date: Tue, 20 Mar 2012 14:02:02 -0500 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: References: Message-ID: On 3/20/2012 1:50 PM, Merlijn van Deen wrote: > On 13 March 2012 20:43, VanL wrote: >> Following up on conversations at PyCon, I want to bring up one of my >> personal hobby horses for change in 3.3: Fix install layout on Windows, with >> a side order of making the PATH work better. > > As this is being considered an 'incompatible change' on the bug > tracker item [1] in any case, I'd like to mention that this might also > be a convenient moment to re-think the default install location. After > all, software is supposed to be installed in %programfiles% on > windows, not in c:\. I don't particularly care about this issue, as I always install to my own location (c:\lib\python\X.Y), but I don't think that the default location of the install should be confounded with this issue - or should I say these issues, because we already have two. From van.lindberg at gmail.com Tue Mar 20 20:22:55 2012 From: van.lindberg at gmail.com (VanL) Date: Tue, 20 Mar 2012 14:22:55 -0500 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: References: <4F68607B.7060307@gmail.com> Message-ID: <4F68D90F.1080504@gmail.com> On 3/20/2012 1:56 PM, Paul Moore wrote: > This is covered (better, IMO) by PEP 397 - Python Launcher for > Windows. Step 2, just run "py". If you prefer a particular version, > run "py -2" or "py -3" or "py -3.2". Interesting. I haven't played around with that at all, so can't comment. I will have to try it out. > Adding python.exe to PATH actually makes this message *worse* as it > confuses the issue. In a post-PEP 397 world, I would say that we > should be telling Windows users *not* to run python.exe at all. 
> (Unless they are using virtualenvs - maybe PEP 397 could do with an > extra option to run the Python from the currently active virtualenv, > but that's a side issue). I think that having the PATH manipulation be optional might address this issue. I also think that the PEP 397 launcher should respect virtualenvs - at least the built-in pyvenvs - or else there will be lots of confusion. > For your steps 3 and 4, there is certainly user intervention > required as things stand. It would indeed be nice if "regetron" just > worked as expected. But I'd argue a few points here: > > 1. On Windows, if regetron was not explicitly an application for > working with my Python installation (like pip, easy_install, nose, > etc) then I'd prefer it to be packaged as a standalone application > using cx_Freeze or py2exe. I've had too many "applications" break > because I accidentally uninstalled a dependency from my Python > installation to want anything that is an end-user application > installed into my Python scripts/bin directory. Maybe so - and I would probably agree that for any packaged application, bundling it into its own environment (or at least its own virtualenv) is the best practice. But what about pip, easy_install, nose, cython, pygments, PIL, etc, that do this and are meant to be associated with a particular python version? Substitute "nose" for "regetron" if you want, and there is still a problem. > The problem with your example is that it depends on the > package/executable. I looked at regetron (I thought it was a made up > example at first!) ...! I got the name from the feedback I received. I thought it was made up too. > I have no idea what proportion of Python users on Windows have > multiple versions installed. I also have no idea how many such users > work on the command line. My guess would be "not that many" and "not > that many of the first group" respectively... 
But there are big > groups like scientists and web developers who could sway these > figures a lot. There are a number of casual users that probably only have one version installed, but every python user/dev on windows that I know has one python that they consider to be "python," and everything else needs to be launched with a suffix (e.g., python26.exe). This is usually put earlier on the PATH so that it gets picked up first. For example, right now I have 2.6, 2.7, 3.2, jython, and pypy all installed, and I have "python" pointing to 2.7. > I'd be curious as to how much PEP 397's py.exe would have helped > those people. But yes, it's an issue. Although someone at some point > will have to introduce those beginners to the question of Python 2 vs > Python 3, and PATH pain will hit them then, anyway. I would imagine that it would help steps 1 and 2, but 3 and 4 would be problematic (how can you pip install something using py?) unless you were in a virtualenv, and then (unless py respected the virtualenv) the whole thing would be problematic, because there wouldn't be one clear way to do it. From Van.Lindberg at haynesboone.com Tue Mar 20 20:35:12 2012 From: Van.Lindberg at haynesboone.com (Lindberg, Van) Date: Tue, 20 Mar 2012 19:35:12 +0000 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> Message-ID: <4F68DBEF.9070105@gmail.com> On 3/20/2012 1:19 PM, Paul Moore wrote: > Somewhat. I don't really object to #1, but mildly object to #2. I also > note that the proposals round the Lib directory seem to have > disappeared. I assume those have been dropped - they were the ones I > did object to. They are of secondary importance to me, and I would be mostly ok to drop them - but I would like to understand your objection. I would like to know if you would object to user lib installs matching the system install. 
I.e., would it cause problems with you if it were just 'lib' everywhere, with no 'lib/python{version}'? It sounded like adding the version directory was the issue. Thanks, Van CIRCULAR 230 NOTICE: To ensure compliance with requirements imposed by U.S. Treasury Regulations, Haynes and Boone, LLP informs you that any U.S. tax advice contained in this communication (including any attachments) was not intended or written to be used, and cannot be used, for the purpose of (i) avoiding penalties under the Internal Revenue Code or (ii) promoting, marketing or recommending to another party any transaction or matter addressed herein. CONFIDENTIALITY NOTICE: This electronic mail transmission is confidential, may be privileged and should be read or retained only by the intended recipient. If you have received this transmission in error, please immediately notify the sender and delete it from your system. From benjamin at python.org Tue Mar 20 21:09:04 2012 From: benjamin at python.org (Benjamin Peterson) Date: Tue, 20 Mar 2012 16:09:04 -0400 Subject: [Python-Dev] [Python-checkins] cpython: Issue #14328: Add keyword-only parameters to PyArg_ParseTupleAndKeywords. In-Reply-To: References: Message-ID: 2012/3/20 larry.hastings : > http://hg.python.org/cpython/rev/052779d34945 > changeset: 75842:052779d34945 > parent: 75839:1c0058991740 > user: Larry Hastings > date: Tue Mar 20 20:06:16 2012 +0000 > summary: > Issue #14328: Add keyword-only parameters to PyArg_ParseTupleAndKeywords. > > They're optional-only for now (unlike in pure Python) but that's all > I needed. The syntax can easily be relaxed if we want to support > required keyword-only arguments for extension types in the future. > > files: > Doc/c-api/arg.rst | 9 +++ > Lib/test/test_getargs2.py | 74 ++++++++++++++++++++++++++- > Modules/_testcapimodule.c | 20 ++++++- > Python/getargs.c
| 34 ++++++++++++- > 4 files changed, 134 insertions(+), 3 deletions(-) Forgot about Misc/NEWS? -- Regards, Benjamin From p.f.moore at gmail.com Tue Mar 20 21:15:49 2012 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 20 Mar 2012 20:15:49 +0000 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F68DBEF.9070105@gmail.com> References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> <4F68DBEF.9070105@gmail.com> Message-ID: On 20 March 2012 19:35, Lindberg, Van wrote: > I would like to know if you would object to user lib installs matching > the system install. I.e., would it cause problems with you if it were > just 'lib' everywhere, with no 'lib/python{version}'? It sounded like > adding the version directory was the issue. User lib installs don't bother me as I don't use them. But yes, it's the version directory that bothers me. So if you're proposing simply making the user lib install match the system install, both being just "lib", then that's fine. I was somewhat confused about what you were proposing, that's all. Paul. From p.f.moore at gmail.com Tue Mar 20 21:31:00 2012 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 20 Mar 2012 20:31:00 +0000 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F68D90F.1080504@gmail.com> References: <4F68607B.7060307@gmail.com> <4F68D90F.1080504@gmail.com> Message-ID: On 20 March 2012 19:22, VanL wrote: > There are a number of casual users that probably only have one version > installed, but every python user/dev on windows that I know has one python > that they consider to be "python," and everything else needs to be launched > with a suffix (e.g., python26.exe). This is usually put earlier on the PATH > so that it gets picked up first. For example, right now I have 2.6, 2.7, > 3.2, jython, and pypy all installed, and I have "python" pointing to 2.7. But no Python I am aware of *has* a suffixed version (python26.exe).
Renaming/copying is (in my view) a far more invasive change than simply modifying PATH (and it doesn't help the whole nose/regetron situation either). Serious question: Given a brand new PC, if you were installing Python 2.7, 3.2, 3.3a1, jython, and pypy, what would you do (beyond simply running 5 installers) to get your environment set up the way you want? For me, I'd 1. Install the Python launcher (only until 3.3 includes it) 2. Edit py.ini to tailor py.exe to my preferred defaults for Python and Python3. 3. Install my powershell module which allows me to switch which Python is on PATH Done. (That doesn't cater for pypy or jython, as I don't use them. But I'd probably use a couple of aliases for the rare uses I'd make of them) >> I'd be curious as to how much PEP 397's py.exe would have helped >> those people. But yes, it's an issue. Although someone at some point >> will have to introduce those beginners to the question of Python 2 vs >> Python 3, and PATH pain will hit them then, anyway. > > I would imagine that it would help steps 1 and 2, but 3 and 4 would be > problematic (how can you pip install something using py?) unless you were in > a virtualenv, and then (unless py respected the virtualenv) the whole thing > would be problematic, because there wouldn't be one clear way to do it. There isn't one clear way right now. And adding one particular version to PATH only helps if you only *have* one version. My current preference is as follows: 1. If you only ever have one Python on your machine, add it (and its scripts dir) to PATH and be done with it. Unfortunately, we're in the throes of the Python 2-3 transition, and not many people can manage with the one-Python restriction (I certainly can't). Also the Python installer can't detect if that's what you want. 2. Otherwise, use virtualenvs for anything that isn't being packaged up as a standalone environment. Activate as needed. 3. To access your system python(s) use py.exe with a version flag if needed.
Never (or nearly never) install packages in the system Python. 4. To run scripts, use #! lines and the py.exe association (and set PATHEXT if you want) to associate the precise Python you want with the script. I have to say, I've recently discovered virtualenv, so the above is the opinion of a newly-converted zealot - so take with a pinch of salt :-) Paul. From van.lindberg at gmail.com Tue Mar 20 22:40:08 2012 From: van.lindberg at gmail.com (VanL) Date: Tue, 20 Mar 2012 16:40:08 -0500 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> <4F68DBEF.9070105@gmail.com> Message-ID: On 3/20/2012 3:15 PM, Paul Moore wrote: > On 20 March 2012 19:35, Lindberg, Van wrote: >> I would like to know if you would object to user lib installs matching >> the system install. I.e., would it cause problems with you if it were >> just 'lib' everywhere, with no 'lib/python{version}'? It sounded like >> adding the version directory was the issue. > > User lib installs don't bother me as I don't use them. But yes, it's > the version directory that bothers me. > > So if you're proposing simply making the user lib install match the > system install, both being just "lib", then that's fine. I was > somewhat confused about what you were proposing, that's all. I was originally going to make it match posix-user installs, but just plain posix (no version directory) is just fine with me too.
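As a point of reference, the directory names each platform currently uses can be dumped from `sysconfig`; the posix `lib/pythonX.Y` versus Windows `Lib`/`Scripts` split being debated shows up directly in the output. This is an inspection sketch only, not part of the proposal:

```python
import sysconfig

# Show where the active install scheme puts pure-Python modules,
# C extensions, scripts, and headers -- the directories this thread
# is arguing about renaming and unifying across platforms.
for name in ("purelib", "platlib", "scripts", "include"):
    print("%-8s %s" % (name, sysconfig.get_path(name)))
```

On a posix build this prints versioned `lib/pythonX.Y` paths and a `bin` scripts directory; a Windows install reports a versionless `Lib` and the `Scripts` directory that the proposal would rename.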
From v+python at g.nevcal.com Tue Mar 20 22:33:33 2012 From: v+python at g.nevcal.com (Glenn Linderman) Date: Tue, 20 Mar 2012 14:33:33 -0700 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: References: Message-ID: <4F68F7AD.5060502@g.nevcal.com> On 3/20/2012 11:50 AM, Merlijn van Deen wrote: > As this is being considered an 'incompatible change' on the bug > tracker item [1] in any case, I'd like to mention that this might also > be a convenient moment to re-think the default install location. After > all, software is supposed to be installed in %programfiles% on > windows, not in c:\. > > I asked a question about this on IRC, to which the response was that > there were two main reasons to install python in c:\pythonxy: > > 1 - issues due to spaces ('Program Files') or non-ascii characters in > the path ('Fișiere Program' on a Romanian windows). These issues are > supposed to be fixed by now (?). > 2 - issues due to permissions - installing python / packages in > %programfiles% may require administrator rights. I also wondered about %programfiles%, and had heard of issue #1, and would hope that it is not a real issue in modern times, but haven't attempted to test to determine otherwise. However, in the first quoted paragraph there is an incorrect statement... the last sentence is simply not true. While software that is installed "for everyone" on the computer is supposed to be installed in %programfiles%, software that is installed for "user only" need not be, and in fact, it is recommended (at least by installer software I've used) that the alternate path is (XP) C:\Documents and Settings\\Local Settings\Application Data or (7) C:\Users\\AppData\Local (I think, I haven't found certain documentation about this). Or is it even possible to install something for "user only" anymore? I haven't been involved with installers lately (have been doing portable apps, no install needed).
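The per-user locations Glenn lists can be resolved from the environment; a rough sketch follows. A real installer should query the Windows shell API (SHGetFolderPath with CSIDL_LOCAL_APPDATA) rather than environment variables, so the names below are illustrative assumptions only:

```python
import os

def per_user_install_root():
    """Best-guess per-user application-data directory.

    Illustrative only: real installers ask the shell API, and the
    XP-era fallback path here is an assumption.
    """
    # Vista/7: %LOCALAPPDATA% -> C:\Users\<user>\AppData\Local
    local = os.environ.get("LOCALAPPDATA")
    if local:
        return local
    # XP fallback: %USERPROFILE%\Local Settings\Application Data
    profile = os.environ.get("USERPROFILE", "")
    return os.path.join(profile, "Local Settings", "Application Data")
```

A per-user install rooted here would sidestep the elevation problems discussed later in the thread, at the cost of one Python tree per user account.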
Certainly the "program files (x86)" business adds an extra wrinkle to it, somehow, on 64 bit machines, and I'm not hitting the right sites on my Google searches to discover anything about that, so that's why I'm wondering if it has been deprecated. Speaking of which, it would be nice to have "Portable Python" be part of the standard repertoire of packages available. From Van.Lindberg at haynesboone.com Tue Mar 20 22:54:16 2012 From: Van.Lindberg at haynesboone.com (Lindberg, Van) Date: Tue, 20 Mar 2012 21:54:16 +0000 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: References: <4F68607B.7060307@gmail.com> <4F68D90F.1080504@gmail.com> Message-ID: <4F68FC88.7060408@gmail.com> On 3/20/2012 3:31 PM, Paul Moore wrote: > Serious question: Given a brand new PC, if you were installing Python > 2.7, 3.2, 3.3a1, jython, and pypy, what would you do (beyond simply > running 5 installers) to get your environment set up the way you want? I install each python in its own directory: C:/lib/python/2.7 C:/lib/python/3.2 C:/lib/python/3.3 C:/lib/jython C:/lib/pypy Jython and Pypy get their own directories because they can have different version compatibilities. I then edit my distutils.command.install and patch pip/virtualenv so that all my directories are 'bin'/'lib'/'include'. I have never used the py.exe runner, but I then choose whichever Python is my default (right now 2.7, but hoping that I will be able to switch during the 3.3 timeframe) and that gets put on the PATH, along with its 'bin' directory. The other root dirs/bin directories get put on the PATH after the default Python. I don't remember whether I did it or whether it is installed that way, but I have a python2.6.exe and a pythonw.2.6.exe, etc, and all the individual installers include both a pip and a pip-2.6 version (or whatever that install has).
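One low-tech way to carry out that check is to scan PATH for the suffixed names; a small sketch (it matches filenames by pattern only, executes nothing, and the `pythonX.Y` naming convention is assumed rather than guaranteed by any installer):

```python
import os
import re

def pythons_on_path(path=None):
    """List python executables (python.exe, python2.6.exe, pythonw.exe, ...)
    found on a PATH-style string. Illustrative sketch only."""
    found = []
    pattern = re.compile(r"^pythonw?(\d\.\d)?(\.exe)?$", re.IGNORECASE)
    for d in (path or os.environ.get("PATH", "")).split(os.pathsep):
        try:
            names = os.listdir(d)
        except OSError:
            continue  # stale or unreadable PATH entry
        found.extend(os.path.join(d, n) for n in names if pattern.match(n))
    return found
```

Running this would show, per directory, whether an installer actually shipped the version-suffixed binaries or whether they were created by hand and then forgotten.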
I honestly don't remember - I would love to have someone else check. With this setup, I get my default choice anytime I type "python" and a specific interpreter version when I specify it. Same with installers, etc. I then install virtualenv and virtualenvwrapper-powershell and do all of my development out of virtualenvs. Occasionally I will install something to the system python if it is a pain to compile and I am installing a binary version from somewhere, but I generally try to keep the system python(s) clean. From mhammond at skippinet.com.au Tue Mar 20 22:49:42 2012 From: mhammond at skippinet.com.au (Mark Hammond) Date: Wed, 21 Mar 2012 08:49:42 +1100 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F688F48.9010101@gmail.com> References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> Message-ID: <4F68FB76.7010303@skippinet.com.au> On 21/03/2012 1:08 AM, Lindberg, Van wrote: > On 3/20/2012 5:48 AM, Mark Hammond wrote: >> While I'm still unclear on the actual benefits of this, Martin's >> approach strikes a reasonable compromise so I withdraw my objections. > > Ok. I was out of town and so could not respond to most of the latest > discussion.
> > A question for you Mark, Paul, (and anyone else): Éric correctly points > out that there are actually two distinct changes proposed here: > > 1. Moving the Python binary > 2. Changing from "Scripts" to "bin" > > So far, the primary resistance seems to be to item #1 - moving the > python binary. There have been a few people who have noted that #2 will > require some code to change (i.e. Paul), but I don't see lots of resistance. > > Am I reading you correctly? Well - as Paul implies, there are 2 distinct changes being proposed, but in 2 different environments. For an installed Python: If it has to move, it may as well move to somewhere consistent with other platforms. IOW, moving to "bin" seems preferable to moving to Scripts. My initial objection was to moving it at all in an installed Python. For a virtual env, we are talking about moving it *from* Scripts to bin, which may cause some people different problems. However, that isn't the concern I was expressing and I'd hate to see virtual envs remain inconsistent with an installed Python after this effort. So I'm assuming that: * The executable (and DLL) are moved to a "bin" directory in an installed Python. * distutils etc will change to install all "scripts" (or executables generated from scripts) into that same directory. IOW, "Scripts" would die. * A virtual-env would have an almost identical layout to an installed Python. Cheers, Mark From van.lindberg at gmail.com Tue Mar 20 23:00:07 2012 From: van.lindberg at gmail.com (VanL) Date: Tue, 20 Mar 2012 17:00:07 -0500 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F68FB76.7010303@skippinet.com.au> References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> <4F68FB76.7010303@skippinet.com.au> Message-ID: <4F68FDE7.40505@gmail.com> On 3/20/2012 4:49 PM, Mark Hammond wrote: > > So I'm assuming that: > * The executable (and DLL) are moved to a "bin" directory in an > installed Python.
> * distutils etc will change to install all "scripts" (or executables > generated from scripts) into that same directory. IOW, "Scripts" > would die. > * A virtual-env would have an almost identical layout to an installed > Python. Yes. I would make your point #3 stronger - I would like a virtualenv to have an identical layout to an installed python, at least with reference to the names of directories and the location of binaries. The base directory would be the only difference. Thanks, Van From p.f.moore at gmail.com Tue Mar 20 23:07:06 2012 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 20 Mar 2012 22:07:06 +0000 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F68FDE7.40505@gmail.com> References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> <4F68FB76.7010303@skippinet.com.au> <4F68FDE7.40505@gmail.com> Message-ID: On 20 March 2012 22:00, VanL wrote: > On 3/20/2012 4:49 PM, Mark Hammond wrote: >> >> So I'm assuming that: >> * The executable (and DLL) are moved to a "bin" directory in an installed >> Python. >> * distutils etc will change to install all "scripts" (or executables >> generated from scripts) into that same directory. IOW, "Scripts" would die. It's worth remembering Éric's point - distutils is frozen and changes are in theory not allowed. This part of the proposal is not possible without an exception to that ruling. Personally, I don't see how making this change could be a problem, but I'm definitely not an expert. If distutils doesn't change, bdist_wininst installers built using distutils rather than packaging will do the wrong thing with regard to this change. End users won't be able to tell how an installer has been built. Paul.
From skippy.hammond at gmail.com Tue Mar 20 23:09:38 2012 From: skippy.hammond at gmail.com (Mark Hammond) Date: Wed, 21 Mar 2012 09:09:38 +1100 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: References: Message-ID: <4F690022.9080002@gmail.com> On 21/03/2012 5:50 AM, Merlijn van Deen wrote: > On 13 March 2012 20:43, VanL wrote: >> Following up on conversations at PyCon, I want to bring up one of my >> personal hobby horses for change in 3.3: Fix install layout on Windows, with >> a side order of making the PATH work better. > > As this is being considered an 'incompatible change' on the bug > tracker item [1] in any case, I'd like to mention that this might also > be a convenient moment to re-think the default install location. After > all, software is supposed to be installed in %programfiles% on > windows, not in c:\. > > I asked a question about this on IRC, to which the response was that > there were two main reasons to install python in c:\pythonxy: > > 1 - issues due to spaces ('Program Files') or non-ascii characters in > the path ('Fișiere Program' on a Romanian windows). These issues are > supposed to be fixed by now (?). > 2 - issues due to permissions - installing python / packages in > %programfiles% may require administrator rights. Apart from personal preference (ie, I prefer the status quo here), the second issue is a bit of a killer. Even an administrator can not write to Program Files unless they are using an "elevated" process (ie, explicitly use "Run as Administrator" and confirm the prompt). This means that any installer wanting to write .py files into the Python install must be elevated, and any Python process wanting to generate .pyc files must also be elevated. So even if an installer does arrange elevation, unless that installer also compiles all .pyc and .pyo files at install time, Python would fail to generate the .pyc files on first use.
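The failure Mark predicts boils down to the interpreter lacking write access to the directory holding a module's source. A rough sketch of that check follows; it tests plain directory writability only, ignoring Windows ACL subtleties and the various ways bytecode writing can be disabled:

```python
import os

def can_cache_bytecode(module):
    """Return True if a .pyc could be written alongside *module*'s source.

    Illustrative only: checks directory writability, which is how an
    unelevated process would fail under Program Files.
    """
    source = getattr(module, "__file__", None)
    if source is None:  # built-in or frozen module: nothing to cache
        return False
    return os.access(os.path.dirname(os.path.abspath(source)), os.W_OK)
```

Run against the stdlib of a Program Files install from a non-elevated prompt, every module would report False, which is the silent everything-recompiles-every-time mode described next.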
While Python will probably fail silently and still continue to work, it will have a significant performance impact. Mark From g.brandl at gmx.net Tue Mar 20 23:38:53 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Tue, 20 Mar 2012 23:38:53 +0100 Subject: [Python-Dev] Playing with a new theme for the docs Message-ID: Hi all, recently I've grown a bit tired of seeing our default Sphinx theme, especially as so many other projects use it. I decided to play around with something "clean" this time, and this is the result: http://www.python.org/~gbrandl/build/html/ The corresponding sandbox repo is at http://hg.python.org/sandbox/doc-theme/#doc-theme Let me know what you think, or play around and send some improvements. (The collapsible sidebar is not adapted to it yet, but will definitely be integrated before I consider applying a new theme to the real docs.) Thanks, Georg From guido at python.org Tue Mar 20 23:45:12 2012 From: guido at python.org (Guido van Rossum) Date: Tue, 20 Mar 2012 15:45:12 -0700 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: References: Message-ID: Nice and clean, but looks too similar to newer Google properties... Also I see that (like Google) you're falling for the fallacy of using less contrast. From an accessibility perspective that's questionable -- and I don't mean the legally blind, just people like myself whose eyes are getting a bit older. This also means I don't particularly like adding background color (no matter how light) to text samples. --Guido On Tue, Mar 20, 2012 at 3:38 PM, Georg Brandl wrote: > Hi all, > > recently I've grown a bit tired of seeing our default Sphinx theme, > especially as so many other projects use it. 
I decided to play around > with something "clean" this time, and this is the result: > > http://www.python.org/~gbrandl/build/html/ > > The corresponding sandbox repo is at > > http://hg.python.org/sandbox/doc-theme/#doc-theme > > Let me know what you think, or play around and send some improvements. > (The collapsible sidebar is not adapted to it yet, but will definitely > be integrated before I consider applying a new theme to the real docs.) -- --Guido van Rossum (python.org/~guido) From rdmurray at bitdance.com Tue Mar 20 23:45:43 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Tue, 20 Mar 2012 18:45:43 -0400 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F690022.9080002@gmail.com> References: <4F690022.9080002@gmail.com> Message-ID: <20120320224544.903B0250075@webabinitio.net> On Wed, 21 Mar 2012 09:09:38 +1100, Mark Hammond wrote: > On 21/03/2012 5:50 AM, Merlijn van Deen wrote: > > I asked a question about this on IRC, to which the response was that > > there were two main reasons to install python in c:\pythonxy: > > > > 1 - issues due to spaces ('Program Files') or non-ascii characters in > > the path ('Fișiere Program' on a Romanian windows). These issues are > > supposed to be fixed by now (?). > > 2 - issues due to permissions - installing python / packages in > > %programfiles% may require administrator rights. > > Apart from personal preference (ie, I prefer the status quo here), the > second issue is a bit of a killer. Even an administrator can not write > to Program Files unless they are using an "elevated" process (ie, > explicitly use "Run as Administrator" and confirm the prompt). > > This means that any installer wanting to write .py files into the Python > install must be elevated, and any Python process wanting to generate > .pyc files must also be elevated.
So even if an installer does arrange > elevation, unless that installer also compiles all .pyc and .pyo files > at install time, Python would fail to generate the .pyc files on first > use. While Python will probably fail silently and still continue to > work, it will have a significant performance impact. So windows requires admin privileges to install to Program Files, but not to install to '/'? How novel. (You can perhaps tell that I'm not a windows user). My understanding, though, is that Python does make a distinction between a system install of Python and a per-user one, so I don't think your objection really applies. That said, there is an open bug in the tracker about the insecurity of a system install of python (exactly that the files are writable by anyone). So that would have to be solved first. I'd say this is definitely a separate issue from Van's discussion, and the *only* reason one might want to tie them together at all is "well, we're changing the directory layout anyway". --David From jxo6948 at rit.edu Tue Mar 20 23:55:46 2012 From: jxo6948 at rit.edu (John O'Connor) Date: Tue, 20 Mar 2012 18:55:46 -0400 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: References: Message-ID: On Tue, Mar 20, 2012 at 6:38 PM, Georg Brandl wrote: > recently I've grown a bit tired of seeing our default Sphinx theme, > especially as so many other projects use it. I think regardless of the chosen style, giving the Python 3 docs a different look and feel also has a psychological benefit that might further encourage users to consider moving to Python 3. It could be a bit of a wake-up call. From rdmurray at bitdance.com Wed Mar 21 00:17:40 2012 From: rdmurray at bitdance.com (R.
David Murray) Date: Tue, 20 Mar 2012 19:17:40 -0400 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: References: Message-ID: <20120320231741.E7E93250075@webabinitio.net> On Tue, 20 Mar 2012 23:38:53 +0100, Georg Brandl wrote: > Hi all, > > recently I've grown a bit tired of seeing our default Sphinx theme, > especially as so many other projects use it. I decided to play around > with something "clean" this time, and this is the result: > > http://www.python.org/~gbrandl/build/html/ The font looks better in my browser, but otherwise I prefer the current style. The biggest thing I don't like about the new style is the fact that the content is not set off from the chrome by shading. Having it shaded makes it easier for my eye to ignore it and just focus on the content. Hey, maybe you could make the sidebar only appear if the browser supports javascript? Then I'd never have to see it, and that I would consider "clean" :) Thanks for working on improving things. --David From skippy.hammond at gmail.com Wed Mar 21 00:25:34 2012 From: skippy.hammond at gmail.com (Mark Hammond) Date: Wed, 21 Mar 2012 10:25:34 +1100 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <20120320224544.903B0250075@webabinitio.net> References: <4F690022.9080002@gmail.com> <20120320224544.903B0250075@webabinitio.net> Message-ID: <4F6911EE.1030400@gmail.com> On 21/03/2012 9:45 AM, R. David Murray wrote: > On Wed, 21 Mar 2012 09:09:38 +1100, Mark Hammond wrote: >> On 21/03/2012 5:50 AM, Merlijn van Deen wrote: >>> I asked a question about this on IRC, to which the response was that >>> there were two main reasons to install python in c:\pythonxy: >>> >>> 1 - issues due to spaces ('Program Files') or non-ascii characters in >>> the path ('Fișiere Program' on a Romanian windows). These issues are >>> supposed to be fixed by now (?).
>>> 2 - issues due to permissions - installing python / packages in >>> %programfiles% may require administrator rights. >> >> Apart from personal preference (ie, I prefer the status quo here), the >> second issue is a bit of a killer. Even an administrator can not write >> to Program Files unless they are using an "elevated" process (ie, >> explicitly use "Run as Administrator" and confirm the prompt. >> >> This means that any installer wanting to write .py files into the Python >> install must be elevated, and any Python process wanting to generate >> .pyc files must also be elevated. So even if an installer does arrange >> elevation, unless that installer also compiles all .pyc and .pyo files >> at install time, Python would fail to generate the .pyc files on first >> use. While Python will probably fail silently and still continue to >> work, it will have a significant performance impact. > > So windows requires admin privileges to install to Program Files, but > not to install to '/'? How novel. (You can perhaps tell that I'm > not a windoows user). My understanding, though, is that Python > does make a distinction between a system install of Python and > a per-user one, so I don't think your objection really applies. I think it does. Consider I've installed Python as a "system install". Now I want to install some other package - ideally that installer will request elevation - all well and good - the .py files are installed. However, next time I want to run Python, it will fail to generate the .pyc files - even though I'm an administrator. I would need to explicitly tell Python to execute "as administrator" (or run it from an already elevated command-prompt) to have things work as expected. Thus, the "usual" case would be that Python is unable to update any files in its install directory. 
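A minimal sketch of the install-time compilation Mark alludes to, using the stdlib compileall module (the helper name and error handling are illustrative, not taken from any actual installer):

```python
# Hypothetical installer step (paths and names are illustrative): byte-
# compile an installed tree while the installer still holds elevated
# rights, so Python never needs to write .pyc files into a read-only
# "Program Files" directory on first use.
import compileall

def precompile(install_dir):
    # compile_dir walks the tree and byte-compiles every .py file it
    # finds; it returns a false value if any file failed to compile.
    if not compileall.compile_dir(install_dir, quiet=1):
        raise RuntimeError("byte-compilation failed; .pyc files would "
                           "instead be generated lazily (and silently "
                           "skipped) at first use")
```

An installer that runs something like this while still elevated leaves nothing for Python itself to write on first use, sidestepping the silent-failure case described above.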
If Python installed for a single user didn't install into Program Files (which it probably couldn't do without an administrator providing credentials anyway) then it wouldn't be a problem - but then we have multiple possible default install locations, which sounds like more trouble than it is worth... > That said, there is an open bug in the tracker about the insecurity > of a system install of python (exactly that the files are writable > by anyone). So that would have to be solved first. I'd say this > is definitely a separate issue from Van's discussion, and the *only* > reason one might want to tie them together at all is "well, we're > changing the directory layout anyway". Agreed. Mark From v+python at g.nevcal.com Wed Mar 21 00:35:32 2012 From: v+python at g.nevcal.com (Glenn Linderman) Date: Tue, 20 Mar 2012 16:35:32 -0700 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F6911EE.1030400@gmail.com> References: <4F690022.9080002@gmail.com> <20120320224544.903B0250075@webabinitio.net> <4F6911EE.1030400@gmail.com> Message-ID: <4F691444.9040607@g.nevcal.com> On 3/20/2012 4:25 PM, Mark Hammond wrote: > I think it does. Consider I've installed Python as a "system > install". Now I want to install some other package - ideally that > installer will request elevation - all well and good - the .py files > are installed. However, next time I want to run Python, it will fail > to generate the .pyc files - even though I'm an administrator. I > would need to explicitly tell Python to execute "as administrator" (or > run it from an already elevated command-prompt) to have things work as > expected. Thus, the "usual" case would be that Python is unable to > update any files in its install directory. 
> > If Python installed for a single user didn't install into Program > Files (which it probably couldn't do without an administrator > providing credentials anyway) then it wouldn't be a problem - but then > we have multiple possible default install locations, which sounds like > more trouble than it is worth... > >> That said, there is an open bug in the tracker about the insecurity >> of a system install of python (exactly that the files are writable >> by anyone). So that would have to be solved first. I'd say this >> is definitely a separate issue from Van's discussion, and the *only* >> reason one might want to tie them together at all is "well, we're >> changing the directory layout anyway". Indeed, the single user "place" isn't a single place, unless you consider the per-user $APPDATA environment variable sufficient to determine it (or the Windows API that returns the initial boot up value of $APPDATA/ %APPDATA%, which is the preferred technique for code). But it does solve the security problem (stuff in APPDATA is accessible only to a single login by default). So that might be justification for putting it there, for single users. For multi-user installs, %PROGRAMFILES% is appropriate, but, like I've heard some Linux distributions do, *.pyc might have to be prebuilt and installed along with Python (or generated during install, instead of waiting for first use). -------------- next part -------------- An HTML attachment was scrubbed... URL: From tseaver at palladion.com Wed Mar 21 00:44:43 2012 From: tseaver at palladion.com (Tres Seaver) Date: Tue, 20 Mar 2012 19:44:43 -0400 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: References: Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 03/20/2012 06:45 PM, Guido van Rossum wrote: > Nice and clean, but looks too similar to newer Google properties... > Also I see that (like Google) you're falling for the fallacy of using > less contrast. 
From an accessibility perspective that's questionable > -- and I don't mean the legally blind, just people like myself whose > eyes are getting a bit older. This also means I don't particularly > like adding background color (no matter how light) to text samples. +1. Even making comments low-contrast defeats their purpose (italic works fine for that). Tres. - -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iEYEARECAAYFAk9pFmsACgkQ+gerLs4ltQ7fpwCeOY5p2HnqotHrWrN5vqsHfcsl 2EYAn3cnlemVO/RKavU3SC4w5b+q66S6 =Oryl -----END PGP SIGNATURE----- From solipsis at pitrou.net Wed Mar 21 01:37:17 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 21 Mar 2012 01:37:17 +0100 Subject: [Python-Dev] Playing with a new theme for the docs References: Message-ID: <20120321013717.1126e14f@pitrou.net> Hi Georg, On Tue, 20 Mar 2012 23:38:53 +0100 Georg Brandl wrote: > Hi all, > > recently I've grown a bit tired of seeing our default Sphinx theme, > especially as so many other projects use it. I decided to play around > with something "clean" this time, and this is the result: > > http://www.python.org/~gbrandl/build/html/ Not enough colours, and/or not enough visual cues for page structure. cheers Antoine. From cs at zip.com.au Wed Mar 21 01:51:35 2012 From: cs at zip.com.au (Cameron Simpson) Date: Wed, 21 Mar 2012 11:51:35 +1100 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: References: Message-ID: <20120321005134.GA4058@cskk.homeip.net> On 20Mar2012 15:45, Guido van Rossum wrote: | Nice and clean, but looks too similar to newer Google properties... | Also I see that (like Google) you're falling for the fallacy of using | less contrast.
From an accessibility perspective that's questionable | -- and I don't mean the legally blind, just people like myself whose | eyes are getting a bit older. This also means I don't particularly | like adding background color (no matter how light) to text samples. Conversely, I like the text samples slightly shaded; I find a bare rectangle on the perimeter of a DIV just a tad more like noise, whereas a very slightly shaded block makes it very clear to me. I know it is a PITA, but how hard is it to make a tiny tiny CSS control block somewhere so a user can tune the style in coarse ways (i.e. tweak the properties of the class for shaded blocks)? I think the font choice in the new style is better; cleaner, less noisy, like a sans serif font versus a serifed font. So much so that I thought the new style used annoyingly more whitespace, but putting them side by side shows the new style to be more compact. Win win! One thing that bothers me about both styles is the fixed width text versus proportional size difference. Let me say in advance that I'm viewing both in Firefox on a Mac. To take an example, in the argparse module the opening sentence says "The argparse module". For me the word "argparse" is distinctly shorter in vertical height, which is a bit jarring. (the difference is smaller in the new style.) Is there a way to specify fonts that keeps this height attribute the same? Example screen shots (just those three words): http://dl.dropbox.com/u/2607515/screenshots/argparse-new1.png http://dl.dropbox.com/u/2607515/screenshots/argparse-old1.png Cheers, -- Cameron Simpson DoD#743 http://www.cskk.ezoshosting.com/cs/ That's just the sort of bloody stupid name they would choose. 
- Reginald Mitchell, designer of the Spitfire From raymond.hettinger at gmail.com Wed Mar 21 01:57:27 2012 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Tue, 20 Mar 2012 17:57:27 -0700 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: <20120321013717.1126e14f@pitrou.net> References: <20120321013717.1126e14f@pitrou.net> Message-ID: <21CFF6A5-DB47-455B-A7AF-39344739F597@gmail.com> On Mar 20, 2012, at 5:37 PM, Antoine Pitrou wrote: > Georg Brandl wrote: >> Hi all, >> >> recently I've grown a bit tired of seeing our default Sphinx theme, >> especially as so many other projects use it. I decided to play around >> with something "clean" this time, and this is the result: >> >> http://www.python.org/~gbrandl/build/html/ > > Not enough colours, and/or not enough visual cues for page structure. > > cheers > > Antoine. Like Antoine, I'm having a hard time navigating the page. For me, the current theme is *much* better. Raymond -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjreedy at udel.edu Wed Mar 21 02:39:41 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Tue, 20 Mar 2012 21:39:41 -0400 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: References: Message-ID: On 3/20/2012 6:38 PM, Georg Brandl wrote: The current green on the front page is too heavy. Otherwise I prefer the old. I like the color on the index chart of the builtin-functions page. You un-bolded most (not all) of the entries and they are definitely too thin now. You unbolded the blue elsewhere and it is definitely harder for me to read. My eyesight does not correct to 20/20 and I have trouble reading many things, but the current docs work pretty well for me.
-- Terry Jan Reedy From van.lindberg at gmail.com Wed Mar 21 02:40:58 2012 From: van.lindberg at gmail.com (VanL) Date: Tue, 20 Mar 2012 20:40:58 -0500 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> <4F68FB76.7010303@skippinet.com.au> <4F68FDE7.40505@gmail.com> Message-ID: On Tuesday, March 20, 2012 at 5:07 PM, Paul Moore wrote: > It's worth remembering ?ric's point - distutils is frozen and changes > are in theory not allowed. This part of the proposal is not possible > without an exception to that ruling. Personally, I don't see how > making this change could be a problem, but I'm definitely not an > expert. > > If distutils doesn't change, bdist_wininst installers built using > distutils rather than packaging will do the wrong thing with regard to > this change. End users won't be able to tell how an installer has been > built. > > This is a good point. Who can make this call - Guido, or someone else? -------------- next part -------------- An HTML attachment was scrubbed... URL: From ned at nedbatchelder.com Wed Mar 21 02:58:57 2012 From: ned at nedbatchelder.com (Ned Batchelder) Date: Tue, 20 Mar 2012 21:58:57 -0400 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: References: Message-ID: <4F6935E1.2030309@nedbatchelder.com> On 3/20/2012 6:38 PM, Georg Brandl wrote: > Let me know what you think, or play around and send some improvements. > (The collapsible sidebar is not adapted to it yet, but will definitely > be integrated before I consider applying a new theme to the real docs.) Not to add to the chorus of tweakers, but if I could change just one thing about the current theme, it would be to remove full justification of the text. In text like ours with frequent long expressions, URLs, and the like, full justification is just an invitation to mangle the spacing of a paragraph. 
The paragraphs are also quite short and often interrupted by samples, lists, headings, and so on, losing the design advantage of a clean right edge anyway. Books, magazines, and newspapers look good with full justification, web pages do not. Can we switch to left-justified instead? --Ned. > Thanks, > Georg > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/ned%40nedbatchelder.com > From merwok at netwok.org Wed Mar 21 04:41:47 2012 From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=) Date: Tue, 20 Mar 2012 23:41:47 -0400 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> <4F68FB76.7010303@skippinet.com.au> <4F68FDE7.40505@gmail.com> Message-ID: <4F694DFB.7050400@netwok.org> Hi, On 20/03/2012 21:40, VanL wrote: > On Tuesday, March 20, 2012 at 5:07 PM, Paul Moore wrote: >> It's worth remembering Éric's point - distutils is frozen and changes >> are in theory not allowed. This part of the proposal is not possible >> without an exception to that ruling. Personally, I don't see how >> making this change could be a problem, but I'm definitely not an >> expert. >> >> If distutils doesn't change, bdist_wininst installers built using >> distutils rather than packaging will do the wrong thing with regard to >> this change. End users won't be able to tell how an installer has been >> built. > > This is a good point. Who can make this call - Guido, or someone else? From the top of my head the developers with the most experience about Windows deployment are Martin v. Löwis, Mark Hammond and Marc-André Lemburg (not sure about the Windows part for MAL, but he maintains a library that extends distutils and has been broken in the past). I think their approval is required for this kind of huge change.
The point of the distutils freeze (i.e. feature moratorium) is that we just can't know what complicated things people are doing with undocumented internals, because distutils appeared unmaintained and under-documented for years and people had to work with and around it; since the start of the distutils2 project we can Just Say No to improvements and features in distutils. "I don't see what could possibly go wrong" is a classic line in both horror movies and distutils development. Renaming Scripts to bin on Windows would have effects on some tools we know and surely on many tools we don't know. We don't want to see again people who use or extend distutils come with torches and pitchforks because internals were changed and we have to revert. So in my opinion, to decide to go ahead with the change we need strong +1s from the developers I named above and an endorsement by Tarek, or if he can't participate in the discussion, Guido. As a footnote, distutils is already broken in 3.3. Now we give users or system administrators the possibility to edit the install schemes at will in sysconfig.cfg, but distutils hard-codes the old scheme. I tend to think it should be fixed, to make the distutils-packaging transition/cohabitation possible. Regards From greg.ewing at canterbury.ac.nz Wed Mar 21 06:03:26 2012 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Wed, 21 Mar 2012 18:03:26 +1300 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <20120320224544.903B0250075@webabinitio.net> References: <4F690022.9080002@gmail.com> <20120320224544.903B0250075@webabinitio.net> Message-ID: <4F69611E.7090400@canterbury.ac.nz> R. David Murray wrote: > My understanding, though, is that Python > does make a distinction between a system install of Python and > a per-user one, so I don't think your objection really applies.
Seems to me that for Python at least, the important distinction is not so much where the files are placed, but whether the registry entries are made machine-wide or user-local. -- Greg From g.brandl at gmx.net Wed Mar 21 06:57:34 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Wed, 21 Mar 2012 06:57:34 +0100 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: References: Message-ID: On 20.03.2012 23:45, Guido van Rossum wrote: > Nice and clean, but looks too similar to newer Google properties... > Also I see that (like Google) you're falling for the fallacy of using > less contrast. From an accessibility perspective that's questionable > -- and I don't mean the legally blind, just people like myself whose > eyes are getting a bit older. This also means I don't particularly > like adding background color (no matter how light) to text samples. Well, to be fair, the current theme also has a lot of shading, and the text in the sidebar is at lower contrast too. But I can see that the main text should remain at as high contrast as possible. Georg From g.brandl at gmx.net Wed Mar 21 06:58:21 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Wed, 21 Mar 2012 06:58:21 +0100 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: <20120320231741.E7E93250075@webabinitio.net> References: <20120320231741.E7E93250075@webabinitio.net> Message-ID: On 21.03.2012 00:17, R. David Murray wrote: > On Tue, 20 Mar 2012 23:38:53 +0100, Georg Brandl wrote: >> Hi all, >> >> recently I've grown a bit tired of seeing our default Sphinx theme, >> especially as so many other projects use it. I decided to play around >> with something "clean" this time, and this is the result: >> >> http://www.python.org/~gbrandl/build/html/ > > The font looks better in my browser, but otherwise I prefer the current > style. The biggest thing I don't like about the new style is the fact > that the content is not set off from the chrome by shading. 
Having it > shaded makes it easier for my eye to ignore it and just focus on > the content. Not sure what "the unshaded chrome" is -- only the header bar, since the sidebar is shaded already? Georg From g.brandl at gmx.net Wed Mar 21 07:00:36 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Wed, 21 Mar 2012 07:00:36 +0100 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: <21CFF6A5-DB47-455B-A7AF-39344739F597@gmail.com> References: <20120321013717.1126e14f@pitrou.net> <21CFF6A5-DB47-455B-A7AF-39344739F597@gmail.com> Message-ID: On 21.03.2012 01:57, Raymond Hettinger wrote: > > On Mar 20, 2012, at 5:37 PM, Antoine Pitrou wrote: > >> Georg Brandl > wrote: >>> Hi all, >>> >>> recently I've grown a bit tired of seeing our default Sphinx theme, >>> especially as so many other projects use it. I decided to play around >>> with something "clean" this time, and this is the result: >>> >>> http://www.python.org/~gbrandl/build/html/ >> >> Not enough colours, and/or not enough visual cues for page structure. >> >> cheers >> >> Antoine. > > Like Antoine, I'm having a hard time navigating the page. > For me, the current theme is *much* better. OK, that seems to be the main point people make... let me see if I can come up with a better compromise. Georg From martin at v.loewis.de Wed Mar 21 08:02:15 2012 From: martin at v.loewis.de (martin at v.loewis.de) Date: Wed, 21 Mar 2012 08:02:15 +0100 Subject: [Python-Dev] GSoC 2012: Python Core Participation? Message-ID: <20120321080215.Horde.F0tnBFNNcXdPaXz3hMjTO8A@webmail.df.eu> I'm wondering whether Python Core should participate in GSoC 2012 or not, as core contributors have shown little interest in acting as mentors in the past. If you are a core committer and volunteer as GSoC mentor for 2012, please let me know by Friday (March 23rd). 
Regards, Martin From dirkjan at ochtman.nl Wed Mar 21 09:25:59 2012 From: dirkjan at ochtman.nl (Dirkjan Ochtman) Date: Wed, 21 Mar 2012 09:25:59 +0100 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: References: <20120321013717.1126e14f@pitrou.net> <21CFF6A5-DB47-455B-A7AF-39344739F597@gmail.com> Message-ID: On Wed, Mar 21, 2012 at 07:00, Georg Brandl wrote: > OK, that seems to be the main point people make... let me see if I can > come up with a better compromise. Would it be possible to limit the width of the page? On my 1920px monitor, the lines get awfully long, making them harder to read. Cheers, Dirkjan From anacrolix at gmail.com Wed Mar 21 10:31:41 2012 From: anacrolix at gmail.com (Matt Joiner) Date: Wed, 21 Mar 2012 17:31:41 +0800 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: References: <20120321013717.1126e14f@pitrou.net> <21CFF6A5-DB47-455B-A7AF-39344739F597@gmail.com> Message-ID: Turn your monitor portrait or make the window smaller :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From tartley at tartley.com Wed Mar 21 10:33:13 2012 From: tartley at tartley.com (Jonathan Hartley) Date: Wed, 21 Mar 2012 09:33:13 +0000 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: References: <20120321013717.1126e14f@pitrou.net> <21CFF6A5-DB47-455B-A7AF-39344739F597@gmail.com> Message-ID: <4F69A059.2030601@tartley.com> On 21/03/2012 08:25, Dirkjan Ochtman wrote: > On Wed, Mar 21, 2012 at 07:00, Georg Brandl wrote: >> OK, that seems to be the main point people make... let me see if I can >> come up with a better compromise. > Would it be possible to limit the width of the page? On my 1920px > monitor, the lines get awfully long, making them harder to read. I realise this is bikeshedding by now, but FWIW, please don't. If people want shorter lines, they can narrow their browser, without forcing that preference on the rest of us. 
> Cheers, > > Dirkjan > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/tartley%40tartley.com > -- Jonathan Hartley tartley at tartley.com http://tartley.com Made of meat. +44 7737 062 225 twitter/skype: tartley From phd at phdru.name Wed Mar 21 11:16:21 2012 From: phd at phdru.name (Oleg Broytman) Date: Wed, 21 Mar 2012 14:16:21 +0400 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: <4F69A059.2030601@tartley.com> References: <20120321013717.1126e14f@pitrou.net> <21CFF6A5-DB47-455B-A7AF-39344739F597@gmail.com> <4F69A059.2030601@tartley.com> Message-ID: <20120321101621.GA5612@iskra.aviel.ru> On Wed, Mar 21, 2012 at 09:33:13AM +0000, Jonathan Hartley wrote: > On 21/03/2012 08:25, Dirkjan Ochtman wrote: > >On Wed, Mar 21, 2012 at 07:00, Georg Brandl wrote: > >>OK, that seems to be the main point people make... let me see if I can > >>come up with a better compromise. > >Would it be possible to limit the width of the page? On my 1920px > >monitor, the lines get awfully long, making them harder to read. > I realise this is bikeshedding by now, but FWIW, please don't. If > people want shorter lines, they can narrow their browser, without > forcing that preference on the rest of us. Seconded. My display is 1920x1200 but I use very large fonts and I'm satisfied with line lengths. Oleg. -- Oleg Broytman http://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. 
From ben+python at benfinney.id.au Wed Mar 21 11:48:25 2012 From: ben+python at benfinney.id.au (Ben Finney) Date: Wed, 21 Mar 2012 21:48:25 +1100 Subject: [Python-Dev] Playing with a new theme for the docs References: <20120321013717.1126e14f@pitrou.net> <21CFF6A5-DB47-455B-A7AF-39344739F597@gmail.com> Message-ID: <874ntikz5y.fsf@benfinney.id.au> Dirkjan Ochtman writes: > On Wed, Mar 21, 2012 at 07:00, Georg Brandl wrote: > > OK, that seems to be the main point people make... let me see if I > > can come up with a better compromise. > > Would it be possible to limit the width of the page? On my 1920px > monitor, the lines get awfully long, making them harder to read. ?1. Please, web designers, don't presume to know what width the viewer wants. We can change the window size if that's what we want. -- \ ?I hope some animal never bores a hole in my head and lays its | `\ eggs in my brain, because later you might think you're having a | _o__) good idea but it's just eggs hatching.? ?Jack Handey | Ben Finney From solipsis at pitrou.net Wed Mar 21 12:09:49 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 21 Mar 2012 12:09:49 +0100 Subject: [Python-Dev] Playing with a new theme for the docs References: Message-ID: <20120321120949.31c790dc@pitrou.net> On Tue, 20 Mar 2012 21:39:41 -0400 Terry Reedy wrote: > On 3/20/2012 6:38 PM, Georg Brandl wrote: > > The current green on the front page is too heavy. Green? hmm... you mean blue, right? :) Antoine. From solipsis at pitrou.net Wed Mar 21 12:10:19 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 21 Mar 2012 12:10:19 +0100 Subject: [Python-Dev] Playing with a new theme for the docs References: <4F6935E1.2030309@nedbatchelder.com> Message-ID: <20120321121019.6e95e192@pitrou.net> On Tue, 20 Mar 2012 21:58:57 -0400 Ned Batchelder wrote: > On 3/20/2012 6:38 PM, Georg Brandl wrote: > > Let me know what you think, or play around and send some improvements. 
> > (The collapsible sidebar is not adapted to it yet, but will definitely > > be integrated before I consider applying a new theme to the real docs.) > Not to add to the chorus of tweakers, but if I could change just one > thing about the current theme, it would be to remove full justification > of the text. Ow, I hate non-justified text myself :( Bikeshedding Antoine. From chris at simplistix.co.uk Wed Mar 21 11:55:20 2012 From: chris at simplistix.co.uk (Chris Withers) Date: Wed, 21 Mar 2012 10:55:20 +0000 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: <4F69A059.2030601@tartley.com> References: <20120321013717.1126e14f@pitrou.net> <21CFF6A5-DB47-455B-A7AF-39344739F597@gmail.com> <4F69A059.2030601@tartley.com> Message-ID: <4F69B398.10708@simplistix.co.uk> On 21/03/2012 09:33, Jonathan Hartley wrote: > On 21/03/2012 08:25, Dirkjan Ochtman wrote: >> On Wed, Mar 21, 2012 at 07:00, Georg Brandl wrote: >>> OK, that seems to be the main point people make... let me see if I can >>> come up with a better compromise. >> Would it be possible to limit the width of the page? On my 1920px >> monitor, the lines get awfully long, making them harder to read. > I realise this is bikeshedding by now, but FWIW, please don't. If people > want shorter lines, they can narrow their browser, without forcing that > preference on the rest of us. 
+ sys.maxint Chris -- Simplistix - Content Management, Batch Processing & Python Consulting - http://www.simplistix.co.uk From ned at nedbatchelder.com Wed Mar 21 13:38:50 2012 From: ned at nedbatchelder.com (Ned Batchelder) Date: Wed, 21 Mar 2012 08:38:50 -0400 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: <20120321101621.GA5612@iskra.aviel.ru> References: <20120321013717.1126e14f@pitrou.net> <21CFF6A5-DB47-455B-A7AF-39344739F597@gmail.com> <4F69A059.2030601@tartley.com> <20120321101621.GA5612@iskra.aviel.ru> Message-ID: <4F69CBDA.8010405@nedbatchelder.com> On 3/21/2012 6:16 AM, Oleg Broytman wrote: > On Wed, Mar 21, 2012 at 09:33:13AM +0000, Jonathan Hartley wrote: >> On 21/03/2012 08:25, Dirkjan Ochtman wrote: >>> On Wed, Mar 21, 2012 at 07:00, Georg Brandl wrote: >>>> OK, that seems to be the main point people make... let me see if I can >>>> come up with a better compromise. >>> Would it be possible to limit the width of the page? On my 1920px >>> monitor, the lines get awfully long, making them harder to read. >> I realise this is bikeshedding by now, but FWIW, please don't. If >> people want shorter lines, they can narrow their browser, without >> forcing that preference on the rest of us. > Seconded. My display is 1920x1200 but I use very large fonts and I'm > satisfied with line lengths. The best thing to do is to set a max-width in ems, say 50em. This leaves the text at a reasonable width, but adapts naturally for people with larger fonts. --Ned. > > Oleg. From rdmurray at bitdance.com Wed Mar 21 14:03:48 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Wed, 21 Mar 2012 09:03:48 -0400 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: References: <20120320231741.E7E93250075@webabinitio.net> Message-ID: <20120321130348.B00992500E3@webabinitio.net> On Wed, 21 Mar 2012 06:58:21 +0100, Georg Brandl wrote: > On 21.03.2012 00:17, R. 
David Murray wrote: > > On Tue, 20 Mar 2012 23:38:53 +0100, Georg Brandl wrote: > >> Hi all, > >> > >> recently I've grown a bit tired of seeing our default Sphinx theme, > >> especially as so many other projects use it. I decided to play around > >> with something "clean" this time, and this is the result: > >> > >> http://www.python.org/~gbrandl/build/html/ > > > > The font looks better in my browser, but otherwise I prefer the current > > style. The biggest thing I don't like about the new style is the fact > > that the content is not set off from the chrome by shading. Having it > > shaded makes it easier for my eye to ignore it and just focus on > > the content. > > Not sure what "the unshaded chrome" is -- only the header bar, since the > sidebar is shaded already? Header bar and footer. But I also like the fact that the current site shades the sidebar all the way down (and darker, though obviously we have contrast issues from some folks that don't bother me). Otherwise that whitespace on the left just looks...wrong. But that last is considerably less important. --David From storchaka at gmail.com Wed Mar 21 14:48:00 2012 From: storchaka at gmail.com (Serhiy Storchaka) Date: Wed, 21 Mar 2012 15:48:00 +0200 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: <4F69CBDA.8010405@nedbatchelder.com> References: <20120321013717.1126e14f@pitrou.net> <21CFF6A5-DB47-455B-A7AF-39344739F597@gmail.com> <4F69A059.2030601@tartley.com> <20120321101621.GA5612@iskra.aviel.ru> <4F69CBDA.8010405@nedbatchelder.com> Message-ID: 21.03.12 14:38, Ned Batchelder wrote: > The best thing to do is to set a max-width in ems, say 50em. This leaves > the text at a reasonable width, but adapts naturally for people with > larger fonts. It's good for books, magazines, and newspapers, but not for a technical site.
;) From kristjan at ccpgames.com Wed Mar 21 14:35:50 2012 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Wed, 21 Mar 2012 13:35:50 +0000 Subject: [Python-Dev] PEP 405 (built-in virtualenv) status In-Reply-To: <4F678690.3000600@oddbird.net> References: <4F626278.7030701@oddbird.net> <4F626712.3030906@gmail.com> <4F62692E.8040203@oddbird.net> <4F678690.3000600@oddbird.net> Message-ID: > -----Original Message----- > From: Carl Meyer [mailto:carl at oddbird.net] > Sent: 19. mars 2012 19:19 > To: Kristján Valur Jónsson > Cc: Python-Dev (python-dev at python.org) > Subject: Re: [Python-Dev] PEP 405 (built-in virtualenv) status > > Hello Kristján, > I think there's one important (albeit odd and magical) bit of Python's current > behavior that you are missing in your blog post. All of the initial sys.path > directories are constructed relative to sys.prefix and sys.exec_prefix, and > those values in turn are determined (if PYTHONHOME is not set), by walking > up the filesystem tree from the location of the Python binary, looking for the > existence of a file at the relative path "lib/pythonX.X/os.py" (or "Lib/os.py" > on Windows). Python takes the existence of this file to mean that it's found > the standard library, and sets sys.prefix accordingly. Thus, you can achieve > reliable full isolation from any installed Python, with no need for > environment variables, simply by placing a file (it can even be empty) at that > relative location from the location of your Python binary. You will still get > some default paths added on sys.path, but they will all be relative to your > Python binary and thus presumably under your control; nothing from any > other location will be on sys.path. I doubt you will find this solution > satisfyingly elegant, but you might nonetheless find it practically useful. > Right. Thanks for explaining this.
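The landmark search Carl describes can be sketched roughly like this (illustrative only; the real logic lives in CPython's getpath.c, uses "Lib/os.py" as the landmark on Windows, and is bypassed when PYTHONHOME is set):

```python
# Walk up from the interpreter binary until a directory containing
# lib/pythonX.Y/os.py is found; that directory is treated as sys.prefix.
import os

def guess_prefix(executable, version):
    d = os.path.dirname(os.path.abspath(executable))
    while True:
        landmark = os.path.join(d, "lib", "python" + version, "os.py")
        if os.path.isfile(landmark):
            return d                   # found the standard library
        parent = os.path.dirname(d)
        if parent == d:                # reached the filesystem root
            return None                # caller falls back to a default
        d = parent
```

Placing even an empty os.py at that relative location next to your own binary is what makes the isolation trick Carl mentions work.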
Although it would appear that Python also has a mechanism for detecting that it is being run from a build environment, and it ignores PYTHONHOME in that case too. > > Beyond that possible tweak, while I certainly wouldn't oppose any effort to > clean up / document / make-optional Python's startup sys.path-setting > behavior, I think it's mostly orthogonal to PEP 405, and I don't think it would > be helpful to expand the scope of PEP 405 to include that effort. Well, it sounds as if this PEP can definitely be used as the basis for work to completely customize the startup behaviour. In my case, it would be desirable to be able to completely ignore any PYTHONHOME environment variable (and any others). I'd also like to be able to manually set up the sys.path. Perhaps we can set things up so that one key (ignore_env) will cause the environment variables to be ignored, and then an empty home key will set the sys.path to point to the directory of the .cfg file. Presumably, this would then cause a site.py found at that place to be executed, and one could code whatever extra logic one wants into that file. Possibly a "site" key in the .cfg file would achieve the same goal, allowing the user to call this setup file whatever he wants. With something like this in place, the built-in behaviour of python.exe of realizing that it is running from a "build" environment and in that case ignoring PYTHONPATH and setting a special sys.path could all be removed from being hardcoded and instead be coded in some buildsite.py in the cpython root folder. Kristján From storchaka at gmail.com Wed Mar 21 14:44:24 2012 From: storchaka at gmail.com (Serhiy Storchaka) Date: Wed, 21 Mar 2012 15:44:24 +0200 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: <4F6935E1.2030309@nedbatchelder.com> References: <4F6935E1.2030309@nedbatchelder.com> Message-ID: 21.03.12 03:58, Ned Batchelder wrote: > Books, magazines, and newspapers look good with full justification, web > pages do not.
> Can we switch to left-justified instead? You can add the line p {text-align: left !important} to your browser's custom stylesheet. If you are using Firefox or Chrome (Chromium), there are extensions (Stylish) that allow you to apply the style to a particular site. From ned at nedbatchelder.com Wed Mar 21 15:18:31 2012 From: ned at nedbatchelder.com (Ned Batchelder) Date: Wed, 21 Mar 2012 10:18:31 -0400 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: References: <4F6935E1.2030309@nedbatchelder.com> Message-ID: <4F69E337.7010501@nedbatchelder.com> On 3/21/2012 9:44 AM, Serhiy Storchaka wrote: > 21.03.12 03:58, Ned Batchelder wrote: >> Books, magazines, and newspapers look good with full justification, web >> pages do not. Can we switch to left-justified instead? > > You can add line > > p {text-align: left !important} > > to your browser custom stylesheet. > > If you are using Firefox or Chrome (Chromium), for them there are > extensions (Stylish) that allow to apply the style to the particular > site. > Any of the tweaks people are suggesting could be applied individually using this technique. We could just as easily choose to make the site left-justified, and let the full-justification fans use custom stylesheets to get it. The challenge for the maintainer of the docs site is to choose a good design that most people will see. We're bound to disagree on what that design should be, and I suggest that probably none of us are designer enough to come up with the best one. Perhaps we could find an interested designer to help? --Ned.
> _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/ned%40nedbatchelder.com From Van.Lindberg at haynesboone.com Wed Mar 21 15:22:04 2012 From: Van.Lindberg at haynesboone.com (Lindberg, Van) Date: Wed, 21 Mar 2012 14:22:04 +0000 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F694DFB.7050400@netwok.org> References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> <4F68FB76.7010303@skippinet.com.au> <4F68FDE7.40505@gmail.com> <4F694DFB.7050400@netwok.org> Message-ID: <4F69E40C.6050901@gmail.com> Mark, MAL, Martin, Tarek, Could you comment on this? This is in the context of changing the name of the 'Scripts' directory on Windows to 'bin'. Éric brings up the point (explained more below) that if we make this change, packages made/installed with the new packaging infrastructure and those made/installed with bdist_wininst and the old (frozen) distutils will be inconsistent. The reason why is that the old distutils has a hard-coded dict in distutils.command.install that would point to the old locations. If we were to make this change in sysconfig.cfg, we would probably want to make a corresponding change in the INSTALL_SCHEMES dict in distutils.command.install. More context: On 3/20/2012 10:41 PM, Éric Araujo wrote: > Le 20/03/2012 21:40, VanL a écrit : >> On Tuesday, March 20, 2012 at 5:07 PM, Paul Moore wrote: >>> It's worth remembering Éric's point - distutils is frozen and changes >>> are in theory not allowed. This part of the proposal is not possible >>> without an exception to that ruling. Personally, I don't see how >>> making this change could be a problem, but I'm definitely not an >>> expert. >>> >>> If distutils doesn't change, bdist_wininst installers built using >>> distutils rather than packaging will do the wrong thing with regard to >>> this change.
End users won't be able to tell how an installer has been >>> built. Looking at the code in bdist_wininst, it loops over the keys in the INSTALL_SCHEMES dict to find the correct locations. If the hard-coded dict were changed, then the installer would 'just work' with the right location - and this matches my experience having made this sort of change. When I change the INSTALL_SCHEMES dict, things get installed according to the new scheme without difficulty using the standard tools. The only time when something is trouble is if it does its own install routine and hard-codes 'Scripts' as the name of the install directory - and I have only seen that in PyPM a couple versions ago. > From the top of my head the developers with the most experience about > Windows deployment are Martin v. Löwis, Mark Hammond and Marc-André > Lemburg (not sure about the Windows part for MAL, but he maintains a > library that extends distutils and has been broken in the past). I > think their approval is required for this kind of huge change. Note the above - this is why I would like your comment. > The point of the distutils freeze (i.e. feature moratorium) is that we > just can't know what complicated things people are doing with > undocumented internals, because distutils appeared unmaintained and > under-documented for years and people had to work with and around it; > since the start of the distutils2 project we can Just Say No to > improvements and features in distutils. "I don't see what could > possibly go wrong" is a classic line in both horror movies and distutils > development. > > Renaming Scripts to bin on Windows would have effects on some tools we > know and surely on many tools we don't know. We don't want to see again > people who use or extend distutils come with torches and pitchforks > because internals were changed and we have to revert.
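Van's description of the scheme lookup can be illustrated with a small sketch (a cut-down stand-in for the INSTALL_SCHEMES table in distutils.command.install — the real dict has more schemes and keys, and the expansion is done by the install command itself, not by this hypothetical helper):

```python
import string

# Abbreviated stand-in for distutils.command.install.INSTALL_SCHEMES;
# only two schemes and two keys each are shown here.
INSTALL_SCHEMES = {
    "unix_prefix": {
        "purelib": "$base/lib/python$py_version_short/site-packages",
        "scripts": "$base/bin",
    },
    "nt": {
        "purelib": "$base/Lib/site-packages",
        "scripts": "$base/Scripts",  # the entry the proposal would change to "$base/bin"
    },
}

def expand_scheme(scheme, base, py_version_short="3.3"):
    """Expand a scheme's $-templates into concrete directories,
    roughly the way the install command substitutes them."""
    subst = {"base": base, "py_version_short": py_version_short}
    return {key: string.Template(template).substitute(subst)
            for key, template in INSTALL_SCHEMES[scheme].items()}

print(expand_scheme("nt", "C:/Python33")["scripts"])          # C:/Python33/Scripts
print(expand_scheme("unix_prefix", "/usr/local")["scripts"])  # /usr/local/bin
```

Tools that iterate over these keys, as Van says bdist_wininst does, would pick up a rename automatically; only code that hard-codes the string 'Scripts' would break.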
So in my opinion, > to decide to go ahead with the change we need strong +1s from the > developers I named above and an endorsement by Tarek, or if he can't > participate in the discussion, Guido. > > As a footnote, distutils is already broken in 3.3. Now we give users or > system administrators the possibility to edit the install schemes at > will in sysconfig.cfg, but distutils hard-codes the old scheme. I tend > to think it should be fixed, to make the distutils-packaging > transition/cohabitation possible. Any comment? Thanks, Van CIRCULAR 230 NOTICE: To ensure compliance with requirements imposed by U.S. Treasury Regulations, Haynes and Boone, LLP informs you that any U.S. tax advice contained in this communication (including any attachments) was not intended or written to be used, and cannot be used, for the purpose of (i) avoiding penalties under the Internal Revenue Code or (ii) promoting, marketing or recommending to another party any transaction or matter addressed herein. CONFIDENTIALITY NOTICE: This electronic mail transmission is confidential, may be privileged and should be read or retained only by the intended recipient. If you have received this transmission in error, please immediately notify the sender and delete it from your system. From mal at egenix.com Wed Mar 21 15:54:21 2012 From: mal at egenix.com (M.-A. Lemburg) Date: Wed, 21 Mar 2012 15:54:21 +0100 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F69E40C.6050901@gmail.com> References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> <4F68FB76.7010303@skippinet.com.au> <4F68FDE7.40505@gmail.com> <4F694DFB.7050400@netwok.org> <4F69E40C.6050901@gmail.com> Message-ID: <4F69EB9D.1060701@egenix.com> Lindberg, Van wrote: > Mark, MAL, Martin, Tarek, > > Could you comment on this? > > This is in the context of changing the name of the 'Scripts' directory > on windows to 'bin'.
Éric brings up the point (explained more below) > that if we make this change, packages made/installed with the new packaging > infrastructure and those made/installed with bdist_wininst and the old > (frozen) distutils will be inconsistent. > > The reason why is that the old distutils has a hard-coded dict in > distutils.command.install that would point to the old locations. If we > were to make this change in sysconfig.cfg, we would probably want to > make a corresponding change in the INSTALL_SCHEMES dict in > distutils.command.install. I'm not sure I understand the point in making that change. Could you expand on the advantage of using "bin" instead of "Scripts"? Note that distutils just provides defaults for these installation locations. All of them can be overridden using command line arguments to the install command. FWIW: I've dropped support for bdist_wininst in mxSetup.py since bdist_msi provides much better system integration. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Mar 21 2012) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ 2012-04-03: Python Meeting Duesseldorf 13 days to go ::: Try our new mxODBC.Connect Python Database Interface for free ! :::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math.
Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ From lrekucki at gmail.com Wed Mar 21 16:06:11 2012 From: lrekucki at gmail.com (=?UTF-8?Q?=C5=81ukasz_Rekucki?=) Date: Wed, 21 Mar 2012 16:06:11 +0100 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: <4F69CBDA.8010405@nedbatchelder.com> References: <20120321013717.1126e14f@pitrou.net> <21CFF6A5-DB47-455B-A7AF-39344739F597@gmail.com> <4F69A059.2030601@tartley.com> <20120321101621.GA5612@iskra.aviel.ru> <4F69CBDA.8010405@nedbatchelder.com> Message-ID: On 21 March 2012 13:38, Ned Batchelder wrote: > On 3/21/2012 6:16 AM, Oleg Broytman wrote: >> On Wed, Mar 21, 2012 at 09:33:13AM +0000, Jonathan Hartley wrote: >>> On 21/03/2012 08:25, Dirkjan Ochtman wrote: >>>> On Wed, Mar 21, 2012 at 07:00, Georg Brandl wrote: >>>>> OK, that seems to be the main point people make... let me see if I can >>>>> come up with a better compromise. >>>> Would it be possible to limit the width of the page? On my 1920px >>>> monitor, the lines get awfully long, making them harder to read. >>> I realise this is bikeshedding by now, but FWIW, please don't. If >>> people want shorter lines, they can narrow their browser, without >>> forcing that preference on the rest of us. >> Seconded. My display is 1920x1200 but I use very large fonts and I'm >> satisfied with line lengths. > The best thing to do is to set a max-width in ems, say 50em. This leaves the > text at a reasonable width, but adapts naturally for people with larger > fonts. > > --Ned. FYI, the current paragraph font size on docs.python.org is 16px, while for http://www.python.org/~gbrandl/build/html/ it's 13px, so increasing that should help readability :) You can use @media queries to adjust it to screen resolution, which should solve the problem with long lines.
-- Łukasz Rekucki From storchaka at gmail.com Wed Mar 21 16:18:19 2012 From: storchaka at gmail.com (Serhiy Storchaka) Date: Wed, 21 Mar 2012 17:18:19 +0200 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: <4F69E337.7010501@nedbatchelder.com> References: <4F6935E1.2030309@nedbatchelder.com> <4F69E337.7010501@nedbatchelder.com> Message-ID: 21.03.12 16:18, Ned Batchelder wrote: > We could just as easily choose to make the site > left-justified, and let the full-justification fans use custom > stylesheets to get it. I find justified text convenient and pleasant for the eyes. Many people hate left-aligned text. I think the best approach would be to use left-aligned text at small window widths (640px and less), where the drawbacks of justified text become obvious, and justified text at larger widths. > Perhaps we could find an interested designer to help? Isn't Georg Brandl a designer? The proposed design looks professional to me and is not worse than the designs of large corporations (though there are some defects). The current design is also very good. The optimum is, I suppose, somewhere in the middle. From yselivanov.ml at gmail.com Wed Mar 21 16:19:31 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Wed, 21 Mar 2012 11:19:31 -0400 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: References: <20120321013717.1126e14f@pitrou.net> <21CFF6A5-DB47-455B-A7AF-39344739F597@gmail.com> <4F69A059.2030601@tartley.com> <20120321101621.GA5612@iskra.aviel.ru> <4F69CBDA.8010405@nedbatchelder.com> Message-ID: <84AF9CD0-14EE-4BBA-8156-8CD79A0E9270@gmail.com> On 2012-03-21, at 11:06 AM, Łukasz Rekucki wrote: > FYI, the current paragraph font size on docs.python.org is 16px, while > for http://www.python.org/~gbrandl/build/html/ it's 13px, so > increasing that should help readability :) You can use @media queries > to adjust it to screen resolution, which should solve the problem with > long lines. +1.
It's much harder to read text in the new design. I would also make links a bit darker. - Yury From guido at python.org Wed Mar 21 17:00:55 2012 From: guido at python.org (Guido van Rossum) Date: Wed, 21 Mar 2012 09:00:55 -0700 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: <4F69CBDA.8010405@nedbatchelder.com> References: <20120321013717.1126e14f@pitrou.net> <21CFF6A5-DB47-455B-A7AF-39344739F597@gmail.com> <4F69A059.2030601@tartley.com> <20120321101621.GA5612@iskra.aviel.ru> <4F69CBDA.8010405@nedbatchelder.com> Message-ID: On Mar 21, 2012 5:44 AM, "Ned Batchelder" wrote: > The best thing to do is to set a max-width in ems, say 50em. This leaves the text at a reasonable width, but adapts naturally for people with larger fonts. Please, no, not even this "improved" version of coddling. If you're formatting e.g. a newspaper or a book, by all means (though I still think the user should be given ultimate control -- and I don't mean editing the CSS using the browser's development tools :-). But when reading docs there are all sorts of reasons why I might want to stretch the window to maximum width and nothing's more frustrating than a website that forces clipping, folding or a horizontal scroll bar even when I make the window wide enough. And sometimes I just don't care that much about reading the text, but having more things visible at once (vertically) is worth it. (Can you see why I invented a whitespace-sensitive language? I have a whitespace-sensitive brain. :-) -- --Guido van Rossum (python.org/~guido) From phd at phdru.name Wed Mar 21 17:08:57 2012 From: phd at phdru.name (Oleg Broytman) Date: Wed, 21 Mar 2012 20:08:57 +0400 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: References: Message-ID: <20120321160857.GA14426@iskra.aviel.ru> On Tue, Mar 20, 2012 at 11:38:53PM +0100, Georg Brandl wrote: > recently I've grown a bit tired of seeing our default Sphinx theme, > especially as so many other projects use it. 
I decided to play around > with something "clean" this time, and this is the result: > > http://www.python.org/~gbrandl/build/html/ Looks very nice! A few notes, if you don't mind. 1. I'd prefer a little bit bigger fonts. 2. IWBN IMHO to extend the grayish background of the navigation bar at the left to the bottom of the page. White space below short boxes looks strange for me. 3. A lot of small adjacent code snippets with a different background make my eyes hurt. See for example the note number 5 at http://www.python.org/~gbrandl/build/html/library/stdtypes.html#sequence-types-str-bytes-bytearray-list-tuple-range I'd like inline code snippets to have the same background. Bold font and/or a different foreground color would be better, in my opinion. Oleg. -- Oleg Broytman http://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. From guido at python.org Wed Mar 21 17:11:54 2012 From: guido at python.org (Guido van Rossum) Date: Wed, 21 Mar 2012 09:11:54 -0700 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: <4F69E337.7010501@nedbatchelder.com> References: <4F6935E1.2030309@nedbatchelder.com> <4F69E337.7010501@nedbatchelder.com> Message-ID: On Wed, Mar 21, 2012 at 7:18 AM, Ned Batchelder wrote: > The challenge for the maintainer of the docs site is to choose a good design > that most people will see. ?We're bound to disagree on what that design > should be, and I suggest that probably none of us are designer enough to > come up with the best one. ?Perhaps we could find an interested designer to > help? I've come to the conclusion that "good design" is not so much a matter of finding the "best" of anything (font, spacing rules, colors, icons, artowork, etc.). Good design is highly subjective to fashion, and the people who are recognized to be the best designers are more often than not just those with a strong enough opinion to push their creative ideas through. 
Then other designers, who are not quite as good but still have a nose for the latest fashion, copy their ideas and for a while anything that hasn't been redesigned looks "old-fashioned". (Before you say something about limitations of old technology, note how often designers go back to older styles and manage to make them look fashionable again.) If you want something that attracts attention through controversy, get one of those initial thought leaders. If you want something that looks "current" today but which will probably be out of style next year, use one of the style-following designers. If you want something that is maximally useful, get a scientist with an ounce of style sense to do your design... Oh hey, Georg *is* a scientist! And he's got more than an ounce of style. So just let him do it and let's not try to micromanage things. (I had to speak up about the low contrast because Georg has young eyes and may not realize that this issue exists for older Pythonistas.) -- --Guido van Rossum (python.org/~guido) From storchaka at gmail.com Wed Mar 21 18:04:23 2012 From: storchaka at gmail.com (Serhiy Storchaka) Date: Wed, 21 Mar 2012 19:04:23 +0200 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: References: Message-ID: If I can get my five cents, I will tell about my impressions. I really liked the background of allocated blocks (such as notes and code snippets) has become less diverse (but still visible). The border around these blocks have become more accurate and more pleasant to emphasize blocks. It is very good that the sidebar is no longer confused look. And everything looks quite nice. But the font is a little bit small for my eyes (on the contrary current theme font a little bit big). This leads to too long (in characters) lines. Less obvious was the structure of the document (due to decrease the font size of the header and the removal of the dividing lines). 
I would like to that the background color of ".note tt" has become a little lighter and quieter. From storchaka at gmail.com Wed Mar 21 18:09:30 2012 From: storchaka at gmail.com (Serhiy Storchaka) Date: Wed, 21 Mar 2012 19:09:30 +0200 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: References: <20120321013717.1126e14f@pitrou.net> <21CFF6A5-DB47-455B-A7AF-39344739F597@gmail.com> <4F69A059.2030601@tartley.com> <20120321101621.GA5612@iskra.aviel.ru> <4F69CBDA.8010405@nedbatchelder.com> Message-ID: 21.03.12 18:00, Guido van Rossum ???????(??): > (Can you see why I invented a whitespace-sensitive language? I have a > whitespace-sensitive brain. :-) It should be added to favorite quotes. From tjreedy at udel.edu Wed Mar 21 18:31:40 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 21 Mar 2012 13:31:40 -0400 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: <20120321120949.31c790dc@pitrou.net> References: <20120321120949.31c790dc@pitrou.net> Message-ID: On 3/21/2012 7:09 AM, Antoine Pitrou wrote: > On Tue, 20 Mar 2012 21:39:41 -0400 > Terry Reedy wrote: >> On 3/20/2012 6:38 PM, Georg Brandl wrote: >> >> The current green on the front page is too heavy. > > Green? > hmm... you mean blue, right? > :) Yeh, a muddy slightly greenish blue. I would prefer what I call a real blue, as in the logo, or the quoted text above on Thunderbird. -- Terry Jan Reedy From ned at nedbatchelder.com Wed Mar 21 19:40:04 2012 From: ned at nedbatchelder.com (Ned Batchelder) Date: Wed, 21 Mar 2012 14:40:04 -0400 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: References: Message-ID: <4F6A2084.90508@nedbatchelder.com> On 3/21/2012 1:04 PM, Serhiy Storchaka wrote: > If I can get my five cents, I will tell about my impressions. I really > liked the background of allocated blocks (such as notes and code > snippets) has become less diverse (but still visible). 
The border > around these blocks have become more accurate and more pleasant to > emphasize blocks. It is very good that the sidebar is no longer > confused look. And everything looks quite nice. But the font is a > little bit small for my eyes (on the contrary current theme font a > little bit big). This leads to too long (in characters) lines. Less > obvious was the structure of the document (due to decrease the font > size of the header and the removal of the dividing lines). > You can use Ctrl-+ to increase the size of the text, and modern browsers remember that for the next time you visit the site. --Ned. > I would like to that the background color of ".note tt" has become a > little lighter and quieter. > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/ned%40nedbatchelder.com > From guido at python.org Wed Mar 21 19:46:36 2012 From: guido at python.org (Guido van Rossum) Date: Wed, 21 Mar 2012 11:46:36 -0700 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: <4F6A2084.90508@nedbatchelder.com> References: <4F6A2084.90508@nedbatchelder.com> Message-ID: On Wed, Mar 21, 2012 at 11:40 AM, Ned Batchelder wrote: > You can use Ctrl-+ to increase the size of the text, and modern browsers > remember that for the next time you visit the site. That doesn't mean the web designer shouldn't think at least twice before specifying a smaller font than the browser default. 
-- --Guido van Rossum (python.org/~guido) From fdrake at acm.org Wed Mar 21 20:06:17 2012 From: fdrake at acm.org (Fred Drake) Date: Wed, 21 Mar 2012 15:06:17 -0400 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: References: <4F6A2084.90508@nedbatchelder.com> Message-ID: On Wed, Mar 21, 2012 at 2:46 PM, Guido van Rossum wrote: > That doesn't mean the web designer shouldn't think at least twice > before specifying a smaller font than the browser default. Yet 90% of designers (or more) insist on making text insanely small, commonly specifying the size in pixles or (if we're lucky) points. Not sure there's any lesson to be learned from this, aside from designers really having it out for anyone who needs to read. -Fred -- Fred L. Drake, Jr.? ? "A person who won't read has no advantage over one who can't read." ?? --Samuel Langhorne Clemens From phd at phdru.name Wed Mar 21 20:10:34 2012 From: phd at phdru.name (Oleg Broytman) Date: Wed, 21 Mar 2012 23:10:34 +0400 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: <4F6A2084.90508@nedbatchelder.com> References: <4F6A2084.90508@nedbatchelder.com> Message-ID: <20120321191034.GA18727@iskra.aviel.ru> On Wed, Mar 21, 2012 at 02:40:04PM -0400, Ned Batchelder wrote: > You can use Ctrl-+ to increase the size of the text, and modern > browsers remember that for the next time you visit the site. Browsers usually remember the setting for the entire site, not only documentation. Oleg. -- Oleg Broytman http://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. 
From ned at nedbatchelder.com Wed Mar 21 20:12:17 2012 From: ned at nedbatchelder.com (Ned Batchelder) Date: Wed, 21 Mar 2012 15:12:17 -0400 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: References: <4F6A2084.90508@nedbatchelder.com> Message-ID: <4F6A2811.5080806@nedbatchelder.com> On 3/21/2012 2:46 PM, Guido van Rossum wrote: > On Wed, Mar 21, 2012 at 11:40 AM, Ned Batchelder wrote: >> You can use Ctrl-+ to increase the size of the text, and modern browsers >> remember that for the next time you visit the site. > That doesn't mean the web designer shouldn't think at least twice > before specifying a smaller font than the browser default. > Yes, sorry, that was exactly my point earlier in this thread. I was being a bit snarky with Serhiy. Seems the standard here is for people to request their personal favorite tweaks, and then tell others that they can use browser customizations to get what they want. Guido, you encouraged us to use science, but only after describing my science-based maximum line-length suggestion as "coddling," then said we should let Georg get on with it, but only after reiterating your personal favorite tweak (which I happen to agree with). There's no way a committee (which this thread effectively is) will come up with a good design. Everyone will dislike something about it. I think it would be interesting to use the power of the web to provide docs whose style could be adjusted a few ways to make people happy, but that is probably more than anyone is willing to volunteer for, I know I can't step up to do it. Personally, I think two Python projects that have focused on docs and done a good job of it are Django and readthedocs.org. Perhaps we could follow their lead? --Ned. 
From ned at nedbatchelder.com Wed Mar 21 20:13:39 2012 From: ned at nedbatchelder.com (Ned Batchelder) Date: Wed, 21 Mar 2012 15:13:39 -0400 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: References: <4F6A2084.90508@nedbatchelder.com> Message-ID: <4F6A2863.1080200@nedbatchelder.com> On 3/21/2012 3:06 PM, Fred Drake wrote: > On Wed, Mar 21, 2012 at 2:46 PM, Guido van Rossum wrote: >> That doesn't mean the web designer shouldn't think at least twice >> before specifying a smaller font than the browser default. > Yet 90% of designers (or more) insist on making text insanely small, commonly > specifying the size in pixles or (if we're lucky) points. > > Not sure there's any lesson to be learned from this, aside from designers > really having it out for anyone who needs to read. There are bad designers, or more to the point, designers who favor the overall look of the page at the expense of the utility of the page. That doesn't mean all designers are bad, or that "design" is bad. Don't throw out the baby with the bathwater. --Ned. > > -Fred > From pje at telecommunity.com Wed Mar 21 20:38:49 2012 From: pje at telecommunity.com (PJ Eby) Date: Wed, 21 Mar 2012 15:38:49 -0400 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: References: <20120321013717.1126e14f@pitrou.net> <21CFF6A5-DB47-455B-A7AF-39344739F597@gmail.com> <4F69A059.2030601@tartley.com> <20120321101621.GA5612@iskra.aviel.ru> <4F69CBDA.8010405@nedbatchelder.com> Message-ID: On Mar 21, 2012 12:00 PM, "Guido van Rossum" wrote: > > On Mar 21, 2012 5:44 AM, "Ned Batchelder" wrote: > > The best thing to do is to set a max-width in ems, say 50em. This leaves the text at a reasonable width, but adapts naturally for people with larger fonts. > > Please, no, not even this "improved" version of coddling. If you're > formatting e.g. 
a newspaper or a book, by all means (though I still > think the user should be given ultimate control -- and I don't mean > editing the CSS using the browser's development tools :-). But when > reading docs there are all sorts of reasons why I might want to > stretch the window to maximum width and nothing's more frustrating > than a website that forces clipping, folding or a horizontal scroll > bar even when I make the window wide enough. Well, the only thing that's more frustrating than that is having to resize my window to make the text readable, and then *still* having to scroll horizontally for the wide bits, or have to alternate sizes of the window. Just because flowing text paragraphs are set to a moderate max-width, that doesn't mean that code samples, tables, etc. all have to be the *same* max-width, or have any max-width at all. That is, keeping flowing text readable is not incompatible with having arbitrarily-wide code, tables, etc. (Text width is an ergonomic consideration as much as font size and color: too wide in absolute characters, and the eye has to hunt up and down to find where to start reading the next line.) -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Wed Mar 21 20:39:18 2012 From: guido at python.org (Guido van Rossum) Date: Wed, 21 Mar 2012 12:39:18 -0700 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: <4F6A2811.5080806@nedbatchelder.com> References: <4F6A2084.90508@nedbatchelder.com> <4F6A2811.5080806@nedbatchelder.com> Message-ID: On Wed, Mar 21, 2012 at 12:12 PM, Ned Batchelder wrote: > On 3/21/2012 2:46 PM, Guido van Rossum wrote: >> >> On Wed, Mar 21, 2012 at 11:40 AM, Ned Batchelder >> ?wrote: >>> >>> You can use Ctrl-+ to increase the size of the text, and modern browsers >>> remember that for the next time you visit the site. 
>> >> That doesn't mean the web designer shouldn't think at least twice >> before specifying a smaller font than the browser default. >> > Yes, sorry, that was exactly my point earlier in this thread. I was being a > bit snarky with Serhiy. Seems the standard here is for people to request > their personal favorite tweaks, and then tell others that they can use > browser customizations to get what they want. > > Guido, you encouraged us to use science, but only after describing my > science-based maximum line-length suggestion as "coddling," then said we > should let Georg get on with it, but only after reiterating your personal > favorite tweak (which I happen to agree with). I have a fair number of strong usability gripes about current (and past :-) design trends, but I know I can't design a decent looking website myself if my life depended on it. > There's no way a committee (which this thread effectively is) will come up > with a good design. Everyone will dislike something about it. I think it > would be interesting to use the power of the web to provide docs whose style > could be adjusted a few ways to make people happy, but that is probably more > than anyone is willing to volunteer for; I know I can't step up to do it. I think it's fine to have a bunch of folks submit their pet peeves (and argue them to the death :-) to the design czar and then let the czar (i.e. Georg) decide. > Personally, I think two Python projects that have focused on docs and done a > good job of it are Django and readthedocs.org. Perhaps we could follow > their lead? I think they are actually more trend-followers, and they seem to make a bunch of the mistakes I've fulminated against here. But again, I'll leave it to Georg.
-- --Guido van Rossum (python.org/~guido) From ned at nedbatchelder.com Wed Mar 21 20:59:59 2012 From: ned at nedbatchelder.com (Ned Batchelder) Date: Wed, 21 Mar 2012 15:59:59 -0400 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: <4F6A2FF6.5030207@stoneleaf.us> References: <4F6935E1.2030309@nedbatchelder.com> <4F69E337.7010501@nedbatchelder.com> <4F6A2FF6.5030207@stoneleaf.us> Message-ID: <4F6A333F.6060501@nedbatchelder.com> On 3/21/2012 3:45 PM, Ethan Furman wrote: > Guido van Rossum wrote: >> On Wed, Mar 21, 2012 at 7:18 AM, Ned Batchelder >> wrote: >>> The challenge for the maintainer of the docs site is to choose a >>> good design >>> that most people will see. We're bound to disagree on what that design >>> should be, and I suggest that probably none of us are designer >>> enough to >>> come up with the best one. Perhaps we could find an interested >>> designer to >>> help? >> >> I've come to the conclusion that "good design" is not so much a matter >> of finding the "best" of anything (font, spacing rules, colors, icons, >> artwork, etc.). Good design is highly subjective to fashion, and the >> people who are recognized to be the best designers are more often than >> not just those with a strong enough opinion to push their creative >> ideas through. Then other designers, who are not quite as good but >> still have a nose for the latest fashion, copy their ideas and for a
If you want something that >> is maximally useful, get a scientist with an ounce of style sense to do >> your design... Oh hey, Georg *is* a scientist! And he's got more than >> an ounce of style. So just let him do it and let's not try to >> micromanage things. (I had to speak up about the low contrast because >> Georg has young eyes and may not realize that this issue exists for >> older Pythonistas.) > > +1000 > Deriding the entire discipline of design because some of its practitioners are hacks is like pointing at PHP kiddies as the reason why you don't need "software architects." Yes, we could make the mistake of over-designing it, and that would be a mistake. The science you seek is something that designers are well-versed in. --Ned. From ethan at stoneleaf.us Wed Mar 21 20:45:58 2012 From: ethan at stoneleaf.us (Ethan Furman) Date: Wed, 21 Mar 2012 12:45:58 -0700 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: References: <4F6935E1.2030309@nedbatchelder.com> <4F69E337.7010501@nedbatchelder.com> Message-ID: <4F6A2FF6.5030207@stoneleaf.us> Guido van Rossum wrote: > On Wed, Mar 21, 2012 at 7:18 AM, Ned Batchelder wrote: >> The challenge for the maintainer of the docs site is to choose a good design >> that most people will see. We're bound to disagree on what that design >> should be, and I suggest that probably none of us are designer enough to >> come up with the best one. Perhaps we could find an interested designer to >> help? > > I've come to the conclusion that "good design" is not so much a matter > of finding the "best" of anything (font, spacing rules, colors, icons, > artwork, etc.). Good design is highly subjective to fashion, and the > people who are recognized to be the best designers are more often than > not just those with a strong enough opinion to push their creative > ideas through.
Then other designers, who are not quite as good but > still have a nose for the latest fashion, copy their ideas and for a > while anything that hasn't been redesigned looks "old-fashioned". > > (Before you say something about limitations of old technology, note > how often designers go back to older styles and manage to make them > look fashionable again.) > > If you want something that attracts attention through controversy, get > one of those initial thought leaders. If you want something that looks > "current" today but which will probably be out of style next year, use > one of the style-following designers. If you want something that is > maximally useful, get a scientist with an ounce of style sense to do > your design... Oh hey, Georg *is* a scientist! And he's got more than > an ounce of style. So just let him do it and let's not try to > micromanage things. (I had to speak up about the low contrast because > Georg has young eyes and may not realize that this issue exists for > older Pythonistas.) +1000 From tseaver at palladion.com Wed Mar 21 21:37:37 2012 From: tseaver at palladion.com (Tres Seaver) Date: Wed, 21 Mar 2012 16:37:37 -0400 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: <4F6A2863.1080200@nedbatchelder.com> References: <4F6A2084.90508@nedbatchelder.com> <4F6A2863.1080200@nedbatchelder.com> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 03/21/2012 03:13 PM, Ned Batchelder wrote: > On 3/21/2012 3:06 PM, Fred Drake wrote: >> On Wed, Mar 21, 2012 at 2:46 PM, Guido van Rossum >> wrote: >>> That doesn't mean the web designer shouldn't think at least twice >>> before specifying a smaller font than the browser default. >> Yet 90% of designers (or more) insist on making text insanely small, >> commonly specifying the size in pixels or (if we're lucky) points. >> >> Not sure there's any lesson to be learned from this, aside from >> designers really having it out for anyone who needs to read.
> There are bad designers, or more to the point, designers who favor the > overall look of the page at the expense of the utility of the page. > That doesn't mean all designers are bad, or that "design" is bad. > Don't throw out the baby with the bathwater. Designers who care more about utility / accessibility more than their hipster karma seem to be a tiny minority in the current web world (without even counting "web designers" who think a Photoshop document is the final deliverable). Tres. - -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iEYEARECAAYFAk9qPBEACgkQ+gerLs4ltQ5fMwCcD8cHLDch/cIlBpVY4htmlDN4 fzQAmgNUVVn+uByZRBI22TB7ETdkLzmP =ZHdF -----END PGP SIGNATURE----- From fdrake at acm.org Wed Mar 21 21:38:14 2012 From: fdrake at acm.org (Fred Drake) Date: Wed, 21 Mar 2012 16:38:14 -0400 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: <4F6A2863.1080200@nedbatchelder.com> References: <4F6A2084.90508@nedbatchelder.com> <4F6A2863.1080200@nedbatchelder.com> Message-ID: On Wed, Mar 21, 2012 at 3:13 PM, Ned Batchelder wrote: > There are bad designers, or more to the point, designers who favor the > overall look of the page at the expense of the utility of the page. ?That > doesn't mean all designers are bad, or that "design" is bad. ?Don't throw > out the baby with the bathwater. I get that. I'm not bad-mouthing actual design, and there are definitely good designers out there. It's unfortunate they're so seriously outnumbered. -Fred -- Fred L. Drake, Jr.? ? "A person who won't read has no advantage over one who can't read." ?? 
--Samuel Langhorne Clemens From ned at nedbatchelder.com Wed Mar 21 21:44:56 2012 From: ned at nedbatchelder.com (Ned Batchelder) Date: Wed, 21 Mar 2012 16:44:56 -0400 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: References: <4F6A2084.90508@nedbatchelder.com> <4F6A2863.1080200@nedbatchelder.com> Message-ID: <4F6A3DC8.3010804@nedbatchelder.com> On 3/21/2012 4:38 PM, Fred Drake wrote: > On Wed, Mar 21, 2012 at 3:13 PM, Ned Batchelder wrote: >> There are bad designers, or more to the point, designers who favor the >> overall look of the page at the expense of the utility of the page. That >> doesn't mean all designers are bad, or that "design" is bad. Don't throw >> out the baby with the bathwater. > I get that. I'm not bad-mouthing actual design, and there are definitely > good designers out there. > > It's unfortunate they're so seriously outnumbered. Yeah, just like software architects... :-( --Ned. > > -Fred > From rdmurray at bitdance.com Wed Mar 21 21:48:14 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Wed, 21 Mar 2012 16:48:14 -0400 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: References: <4F6A2084.90508@nedbatchelder.com> <4F6A2811.5080806@nedbatchelder.com> Message-ID: <20120321204815.DF4682500E3@webabinitio.net> On Wed, 21 Mar 2012 12:39:18 -0700, Guido van Rossum wrote: > On Wed, Mar 21, 2012 at 12:12 PM, Ned Batchelder wrote: > > Personally, I think two Python projects that have focused on docs and done a > > good job of it are Django and readthedocs.org. Perhaps we could follow > > their lead? > > I think they are actually more trend-followers, and they seem to make a > bunch of the mistakes I've fulminated against here. But again, I'll > leave it to Georg. I'm pretty sure they are following the trend set by Python/Georg...it comes with Sphinx, after all, and looks pretty good in general.
--David From bradallen137 at gmail.com Wed Mar 21 21:38:50 2012 From: bradallen137 at gmail.com (Brad Allen) Date: Wed, 21 Mar 2012 15:38:50 -0500 Subject: [Python-Dev] Issue 13524: subprocess on Windows Message-ID: I tripped over this one trying to make one of our Python applications at work Windows compatible. We had no idea that a magic 'SystemRoot' environment variable would be required, and it was causing issues for pyzmq. It might be nice to reflect the findings of this email thread on the subprocess documentation page: http://docs.python.org/library/subprocess.html Currently the docs mention this: "Note If specified, env must provide any variables required for the program to execute. On Windows, in order to run a side-by-side assembly the specified env must include a valid SystemRoot." How about rewording that to: "Note If specified, env must provide any variables required for the program to execute. On Windows, a valid SystemRoot environment variable is required for some Python libraries such as the 'random' module. Also, in order to run a side-by-side assembly the specified env must include a valid SystemRoot." From victor.stinner at gmail.com Wed Mar 21 23:22:51 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Wed, 21 Mar 2012 23:22:51 +0100 Subject: [Python-Dev] [Python-checkins] cpython: Issue #7652: Integrate the decimal floating point libmpdec library to speed In-Reply-To: References: Message-ID: >> http://hg.python.org/cpython/rev/7355550d5357 >> changeset: 75850:7355550d5357 >> user: Stefan Krah >> date: Wed Mar 21 18:25:23 2012 +0100 >> summary: >> Issue #7652: Integrate the decimal floating point libmpdec library to speed >> up the decimal module. Performance gains of the new C implementation are >> between 12x and 80x, depending on the application. Congrats Stefan! And thanks for the huge chunk of code.
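A minimal sketch of the env handling Brad describes above: when an explicit env is handed to subprocess, anything the child needs must be copied over by hand, and on Windows that notably includes SystemRoot. The helper name below is made up for illustration; only os, subprocess, and sys are real.

```python
import os
import subprocess
import sys

def minimal_env(extra=None):
    # Build an explicit, minimal environment instead of inheriting
    # everything.  On Windows, SystemRoot must be carried over by hand
    # or side-by-side assemblies (and os.urandom, which the random
    # module relies on) can fail in the child process.
    env = dict(extra or {})
    env.setdefault("PATH", os.environ.get("PATH", ""))
    if os.name == "nt" and "SystemRoot" in os.environ:
        env.setdefault("SystemRoot", os.environ["SystemRoot"])
    return env

# The child imports the random module, which is exactly the kind of
# import that broke for pyzmq when SystemRoot was missing.
out = subprocess.check_output(
    [sys.executable, "-c", "import random; print('ok')"],
    env=minimal_env(),
)
print(out.decode().strip())
```

On POSIX the SystemRoot branch is simply skipped, so the same helper works on both platforms.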
Victor From greg.ewing at canterbury.ac.nz Wed Mar 21 23:28:26 2012 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 22 Mar 2012 11:28:26 +1300 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: <4F69E337.7010501@nedbatchelder.com> References: <4F6935E1.2030309@nedbatchelder.com> <4F69E337.7010501@nedbatchelder.com> Message-ID: <4F6A560A.6050207@canterbury.ac.nz> Ned Batchelder wrote: > Any of the tweaks people are suggesting could be applied individually > using this technique. We could just as easily choose to make the site > left-justified, and let the full-justification fans use custom > stylesheets to get it. Is it really necessary for the site to specify the justification at all? Why not leave it to the browser and whatever customisation the user chooses to make? -- Greg From skippy.hammond at gmail.com Wed Mar 21 23:43:05 2012 From: skippy.hammond at gmail.com (Mark Hammond) Date: Thu, 22 Mar 2012 09:43:05 +1100 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F69E40C.6050901@gmail.com> References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> <4F68FB76.7010303@skippinet.com.au> <4F68FDE7.40505@gmail.com> <4F694DFB.7050400@netwok.org> <4F69E40C.6050901@gmail.com> Message-ID: <4F6A5979.3000801@gmail.com> On 22/03/2012 1:22 AM, Lindberg, Van wrote: > Mark, MAL, Martin, Tarek, > > Could you comment on this? Eric is correct - tools will be broken by this change. However, people seem willing to push forward on this and accept such breakage as the necessary cost. MAL, in his followup, asks what the advantages are of such a change. I've actually been asking for the same thing in this thread and the only real answer I've got is "consistency". So while I share MAL's concerns, people seem willing to push forward on this anyway, without the benefits having been explained. IOW, this isn't the decision I would make, but I think I've already made that point a number of times in this thread. 
Beyond that, there doesn't seem much for me to add... Mark From songofacandy at gmail.com Wed Mar 21 23:45:40 2012 From: songofacandy at gmail.com (INADA Naoki) Date: Thu, 22 Mar 2012 07:45:40 +0900 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: References: Message-ID: +10 for new design. +1 for respecting default font size rather than "div.body {font-size: smaller;}" Users loving smaller font can set their browser's default font size. On Wed, Mar 21, 2012 at 7:38 AM, Georg Brandl wrote: > Hi all, > > recently I've grown a bit tired of seeing our default Sphinx theme, > especially as so many other projects use it. I decided to play around > with something "clean" this time, and this is the result: > > http://www.python.org/~gbrandl/build/html/ > > The corresponding sandbox repo is at > > http://hg.python.org/sandbox/doc-theme/#doc-theme > > Let me know what you think, or play around and send some improvements. > (The collapsible sidebar is not adapted to it yet, but will definitely > be integrated before I consider applying a new theme to the real docs.) > > Thanks, > Georg > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/songofacandy%40gmail.com -- INADA Naoki
From p.f.moore at gmail.com Thu Mar 22 00:03:08 2012 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 21 Mar 2012 23:03:08 +0000 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: <4F6A5979.3000801@gmail.com> References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> <4F68FB76.7010303@skippinet.com.au> <4F68FDE7.40505@gmail.com> <4F694DFB.7050400@netwok.org> <4F69E40C.6050901@gmail.com> <4F6A5979.3000801@gmail.com> Message-ID: On 21 March 2012 22:43, Mark Hammond wrote: > On 22/03/2012 1:22 AM, Lindberg, Van wrote: >> >> Mark, MAL, Martin, Tarek, >> >> Could you comment on this? > > > Eric is correct - tools will be broken by this change. However, people seem > willing to push forward on this and accept such breakage as the necessary > cost. > > MAL, in his followup, asks what the advantages are of such a change. I've > actually been asking for the same thing in this thread and the only real > answer I've got is "consistency". So while I share MAL's concerns, people > seem willing to push forward on this anyway, without the benefits having > been explained. > > IOW, this isn't the decision I would make, but I think I've already made > that point a number of times in this thread. Beyond that, there doesn't > seem much for me to add... I agree on all points here. I don't understand quite why backward compatibility is being treated so lightly here. But equally, I've made my points and have little further to add. One thought though - maybe this should need a PEP at least, to document the proposal and record the various arguments made in this thread? Paul. From doboy0 at gmail.com Thu Mar 22 00:39:48 2012 From: doboy0 at gmail.com (Huan Do) Date: Wed, 21 Mar 2012 16:39:48 -0700 Subject: [Python-Dev] New PEP Message-ID: *Hi, I am a graduating Berkeley student that loves python and would like to propose an enhancement to python. My proposal introduces a concept of slicing generator.
For instance, if one does x[:] it returns a list which is a copy of x. Sometimes programmers would want to iterate over a slice of x, but they do not like the overhead of constructing another list. Instead we can create a similar operator that returns a generator. My proposed syntax is x(:). The programmers are of course able to set lower, upper, and step size like the following. x(1::-1) This would make code much cleaner in a lot of instances. For one example, let's say we have a very large list x and we want to sum all the numbers but the last 20, and we only want to loop through the even indices. We would have to do something like this. sum(x[:-20:2]) or we can do a workaround to save space for time and do something like this. sum( value for i, value in enumerate(x) if i < len(x) - 20 and not i % 2 ) But with my proposal we are able to do the following. sum(x(:-20:2)) Which affords space without sacrificing expressiveness. For another example, let's say we have a problem where we want to check that a condition is true for every pairwise element in a list x. def allfriends(x): for i in range(len(x)): for j in range(i+1, len(x)): if not friends(x[i], x[j]): return False return True A more pythonic way is to actually loop through the values instead of the indices like this. def allfriends(x): for i, a in enumerate(x): for j, b in enumerate(x[i+1:]): if not friends(a, b): return False return True This however brings a lot of overhead because we have to construct a new list for every slice call. With my proposal we are able to do this. def allfriends(x): for i, a in enumerate(x): for j, b in enumerate(x(i+1:)): if not friends(a, b): return False return True This proposal however goes against one heuristic in the zen of python, namely "Special cases aren't special enough to break the rules." The way that the proposal breaks this rule is because the syntax x(:) uses function call syntax but would be a special case here.
I chose to use parentheses because I wanted this operation to be analogous to the generator syntax in list comprehensions.

               List              Generators
Comprehension  [ x for x in L ]  ( x for x in L )
Slicing        L[a:b:c]          L(a:b:c)

Tell me what you guys think. Thanks!* -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Thu Mar 22 00:49:26 2012 From: guido at python.org (Guido van Rossum) Date: Wed, 21 Mar 2012 16:49:26 -0700 Subject: [Python-Dev] PEP 416: Add a frozendict builtin type In-Reply-To: <4F5B30A4.6080306@gmail.com> References: <1330541549.7844.69.camel@surprise> <4F5B30A4.6080306@gmail.com> Message-ID: To close the loop, I've rejected the PEP, adding the following rejection notice: """ I'm rejecting this PEP. A number of reasons (not exhaustive): * According to Raymond Hettinger, use of frozendict is low. Those that do use it tend to use it as a hint only, such as declaring global or class-level "constants": they aren't really immutable, since anyone can still assign to the name. * There are existing idioms for avoiding mutable default values. * The potential of optimizing code using frozendict in PyPy is unsure; a lot of other things would have to change first. The same holds for compile-time lookups in general. * Multiple threads can agree by convention not to mutate a shared dict, there's no great need for enforcement. Multiple processes can't share dicts. * Adding a security sandbox written in Python, even with a limited scope, is frowned upon by many, due to the inherent difficulty with ever proving that the sandbox is actually secure. Because of this we won't be adding one to the stdlib any time soon, so this use case falls outside the scope of a PEP. On the other hand, exposing the existing read-only dict proxy as a built-in type sounds good to me. (It would need to be changed to allow calling the constructor.)
""" -- --Guido van Rossum (python.org/~guido) From victor.stinner at gmail.com Thu Mar 22 00:51:03 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 22 Mar 2012 00:51:03 +0100 Subject: [Python-Dev] PEP 416: Add a frozendict builtin type In-Reply-To: References: <1330541549.7844.69.camel@surprise> <4F5B30A4.6080306@gmail.com> Message-ID: 2012/3/22 Guido van Rossum : > To close the loop, I've rejected the PEP, adding the following rejection notice: > > """ > I'm rejecting this PEP. (...) Hum, you may want to specify who "I" is in the PEP. Victor From animelovin at gmail.com Thu Mar 22 00:55:43 2012 From: animelovin at gmail.com (Etienne Robillard) Date: Wed, 21 Mar 2012 19:55:43 -0400 Subject: [Python-Dev] New PEP In-Reply-To: References: Message-ID: <4F6A6A7F.9070202@gmail.com> On 03/21/2012 07:39 PM, Huan Do wrote: > *Hi, > > I am a graduating Berkeley student that loves python and would like to > propose an enhancement to python. My proposal introduces a concept of > slicing generator. For instance, if one does x[:] it returns a list > which is a copy of x. Sometimes programmers would want to iterate over a > slice of x, but they do not like the overhead of constructing another > list. Instead we can create a similar operator that returns a generator. > My proposed syntax is x(:). The programmers are of course able to set > lower, upper, and step size like the following. > > x(1::-1) > > > This would make code much cleaner in a lot of instances, one example > let's say we have a very large list x and we want to sum all the numbers > but the last 20, and we only want to loop through the even indices. > > We would have to do something like this. > > sum(x[:-20:2]) > > > or we can do a workaround to save space for time and do something like this. > > sum( value for i, value in enumerate(x) if i < len(x) - 20 and not i % 2 ) > > > But with my proposal we are able to do the following. > > sum(x(:-20:2)) > > > Which affords space without sacrificing expressiveness.
> For another example let's say we have a problem where we want to check that a > condition is true for every pairwise element in a list x. > > def allfriends(x): > > for i in range(len(x)): > > for j in range(i+1, len(x)): > > if not friends(x[i], x[j]): > > return False > > return True > > > A more pythonic way is to actually loop through the values instead of > the indices like this. > > def allfriends(x): > > for i, a in enumerate(x): > > for j, b in enumerate(x[i+1:]): > > if not friends(a, b): > > return False > > return True > > > This however brings a lot of overhead because we have to construct a new > list for every slice call. With my proposal we are able to do this. > > def allfriends(x): > > for i, a in enumerate(x): > > for j, b in enumerate(x(i+1:)): > > if not friends(a, b): > > return False > > return True > > > This proposal however goes against one heuristic in the zen of python, > namely "Special cases aren't special enough to break the rules." The way > that the proposal breaks this rule is because the syntax x(:) uses > function call syntax but would be a special case here. I chose to use > parentheses because I wanted this operation to be analogous to the > generator syntax in list comprehensions. > > List Generators > Comprehension [ x for x in L ] ( x for x in L ) > Slicing L[a:b:c] L(a:b:c) > > > > Tell me what you guys think. > > Thanks!* > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/animelovin%40gmail.com Hi, I'm not sure I get it. Assuming your PEP is accepted, what should happen then to the lambda op and standard function calls? Or is this merely another case of metaprogramming, which obviously should not be confused with languages such as lisp?
Thank you, Etienne From victor.stinner at gmail.com Thu Mar 22 00:57:55 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 22 Mar 2012 00:57:55 +0100 Subject: [Python-Dev] New PEP In-Reply-To: References: Message-ID: > My proposed syntax is x(:) Changing the Python syntax is not a good start. You can already experiment with your idea using the slice() type. > We would have to do something like this. > sum(x[:-20:2]) Do you know the itertools module? It looks like itertools.islice(). Victor From doboy0 at gmail.com Thu Mar 22 01:28:17 2012 From: doboy0 at gmail.com (Huan Do) Date: Wed, 21 Mar 2012 17:28:17 -0700 Subject: [Python-Dev] New PEP In-Reply-To: <4F6A6DB2.9030103@stoneleaf.us> References: <4F6A6DB2.9030103@stoneleaf.us> Message-ID: @Ethan Furman each call to x(:) would return a different iterator, so both sides will have their own information about where they are. Also it is the case that checking for equality of generators does not make the generators expand out, so checking for equality becomes checking whether they are the same generator object. The following example shows this Python3 >>> (x for x in range(10)) == (x for x in range(10)) False @Etienne "lambda" is a keyword and would get captured by the lexer, so this should conflict with adding the grammar that would make this work. This is different than function calls because currently arguments of function calls cannot have ":", causing `x(:)` to be a syntax error. The grammar that would have to be added would be mutually exclusive from current functionality. @Victor I was not completely familiar with itertools but itertools.islice() seems to have the functionality that I propose. It is great that there already exists a solution that does not change python's syntax. Unless anyone wants to pursue this proposal I will drop it next week.
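To make Victor's islice() pointer concrete, here is a rough equivalent of the two examples from the proposal. One wrinkle worth noting: islice() does not accept negative indices, so the -20 bound has to be spelled with len(). The names come from the proposal, not from any real API.

```python
from itertools import islice

# sum(x[:-20:2]) without materializing the intermediate list:
x = list(range(100))
lazy_total = sum(islice(x, 0, len(x) - 20, 2))
assert lazy_total == sum(x[:-20:2])

# The allfriends() example, iterating the tail lazily instead of
# copying it with x[i+1:] on every outer pass.  friends() is passed
# in here so the sketch is self-contained.
def allfriends(x, friends):
    for i, a in enumerate(x):
        for b in islice(x, i + 1, None):
            if not friends(a, b):
                return False
    return True

print(allfriends([2, 4, 6], lambda a, b: (a + b) % 2 == 0))  # True
```

islice() also cannot express x(1::-1): negative steps are not supported, so reverse iteration still needs reversed() or a real slice.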
Thanks for your feedback guys On Wed, Mar 21, 2012 at 5:09 PM, Ethan Furman wrote: > Huan Do wrote: > >> *Hi, >> >> >> I am a graduating Berkeley student that loves python and would like to >> propose an enhancement to python. My proposal introduces a concept of >> slicing generator. For instance, if one does x[:] it returns a list which >> is a copy of x. Sometimes programmers would want to iterate over a slice of >> x, but they do not like the overhead of constructing another list. Instead >> we can create a similar operator that returns a generator. My proposed >> syntax is x(:). The programmers are of course able to set lower, upper, and >> step size like the following. >> >> x(1::-1) >> > > The biggest problem with your proposal is that generators don't remember > what they have already yielded, so > > x(:) != x(:) # first time gets everything, second time gets nothing > > ~Ethan~ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephen at xemacs.org Thu Mar 22 01:42:22 2012 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Thu, 22 Mar 2012 09:42:22 +0900 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> <4F68FB76.7010303@skippinet.com.au> <4F68FDE7.40505@gmail.com> <4F694DFB.7050400@netwok.org> <4F69E40C.6050901@gmail.com> <4F6A5979.3000801@gmail.com> Message-ID: Cleaning up the absurd CC line.... On Thu, Mar 22, 2012 at 8:03 AM, Paul Moore wrote: > I agree on all points here. I don't understand quite why backward > compatibility is being treated so lightly here. But equally, I've made > my points and have little further to add. As a non-Windows user who occasionally is the only one available to help Windows users do something (other than install Linux and learn to live free), consistency would be nice. I often have trouble finding the right advice for Windows, even if I feel like looking for it. 
Dunno if that's a common or important use case, though. From ethan at stoneleaf.us Thu Mar 22 01:09:22 2012 From: ethan at stoneleaf.us (Ethan Furman) Date: Wed, 21 Mar 2012 17:09:22 -0700 Subject: [Python-Dev] New PEP In-Reply-To: References: Message-ID: <4F6A6DB2.9030103@stoneleaf.us> Huan Do wrote: > *Hi, > > I am a graduating Berkeley student that loves python and would like to > propose an enhancement to python. My proposal introduces a concept of > slicing generator. For instance, if one does x[:] it returns a list > which is a copy of x. Sometimes programmers would want to iterate over a > slice of x, but they do not like the overhead of constructing another > list. Instead we can create a similar operator that returns a generator. > My proposed syntax is x(:). The programmers are of course able to set > lower, upper, and step size like the following. > > x(1::-1) The biggest problem with your proposal is that generators don't remember what they have already yielded, so x(:) != x(:) # first time gets everything, second time gets nothing ~Ethan~ From ncoghlan at gmail.com Thu Mar 22 02:15:21 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 22 Mar 2012 11:15:21 +1000 Subject: [Python-Dev] New PEP In-Reply-To: References: <4F6A6DB2.9030103@stoneleaf.us> Message-ID: On Thu, Mar 22, 2012 at 10:28 AM, Huan Do wrote: > I was not completely familiar with itertools but itertools.islice() seems to > have the functionality that I propose. It is great that there already exists > a solution that does not change python's syntax. Unless anyone wants to > pursue this proposal I will drop it next week. Just as a further follow-up on the recommended approach for making suggestions: for initial concepts like this one, the "python-ideas" mailing list is the preferred venue. It's intended for initial validation and refinement of suggestions to see if they're a reasonable topic for the main development list.
Many ideas don't make it past the python-ideas stage (either because they
have too many problems, they get redirected to a third-party PyPI
project, or existing alternatives are pointed out, as happened in this
case).

Regards,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From victor.stinner at gmail.com  Thu Mar 22 02:53:32 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Thu, 22 Mar 2012 02:53:32 +0100
Subject: [Python-Dev] PEP 416: Add a frozendict builtin type
In-Reply-To:
References: <1330541549.7844.69.camel@surprise> <4F5B30A4.6080306@gmail.com>
Message-ID:

> On the other hand, exposing the existing read-only dict proxy as a
> built-in type sounds good to me. (It would need to be changed to
> allow calling the constructor.)

I wrote a small patch to implement this request:
http://bugs.python.org/issue14386

I also opened the following issue to support other types than dict for
__builtins__:
http://bugs.python.org/issue14385

This issue is directly related to pysandbox, but it may help other
purposes.

Victor

From merwok at netwok.org  Thu Mar 22 03:37:34 2012
From: merwok at netwok.org (Éric Araujo)
Date: Wed, 21 Mar 2012 22:37:34 -0400
Subject: [Python-Dev] GSoC 2012: Python Core Participation?
In-Reply-To: <20120321080215.Horde.F0tnBFNNcXdPaXz3hMjTO8A@webmail.df.eu>
References: <20120321080215.Horde.F0tnBFNNcXdPaXz3hMjTO8A@webmail.df.eu>
Message-ID: <4F6A906E.3060704@netwok.org>

Good evening,

> If you are a core committer and volunteer as GSoC
> mentor for 2012, please let me know by Friday
> (March 23rd).

There are a number of interesting things to implement in packaging, and
at least one student who has expressed interest, but unfortunately I am
presently unable to say if I'll have the time to mentor. If other core
developers would like to act as mentors, as happened last year, I will
be available for questions and reviews.
Regards

From merwok at netwok.org  Thu Mar 22 04:04:54 2012
From: merwok at netwok.org (Éric Araujo)
Date: Wed, 21 Mar 2012 23:04:54 -0400
Subject: [Python-Dev] [RELEASED] Python 3.3.0 alpha 1
In-Reply-To:
References: <4F54711E.2020006@python.org> <4f5668fa.a81e340a.358a.4fa6@mx.google.com>
Message-ID: <4F6A96D6.1050208@netwok.org>

On 06/03/2012 15:31, Giampaolo Rodolà wrote:
> That's why I once proposed to include whatsnew.rst changes every time
> a new feature is added/committed.
> Assigning that effort to the release manager or whoever is supposed to
> take care of this is both impractical and prone to forgetfulness.

Well, it's the call of the whatsnew author. I think amk wrote the
original instructions at the top of each whatsnew file, which explain
that NEWS is the primary location for logging changes and whatsnew is
composed from that file. If Raymond or the new whatsnew author wants to
change the rules so that important changes are noted in whatsnew in
addition to NEWS, nothing prevents that.

Cheers

From mail at timgolden.me.uk  Thu Mar 22 07:08:24 2012
From: mail at timgolden.me.uk (Tim Golden)
Date: Thu, 22 Mar 2012 06:08:24 +0000
Subject: [Python-Dev] Python install layout and the PATH on win32
In-Reply-To:
References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> <4F68FB76.7010303@skippinet.com.au> <4F68FDE7.40505@gmail.com> <4F694DFB.7050400@netwok.org> <4F69E40C.6050901@gmail.com> <4F6A5979.3000801@gmail.com>
Message-ID: <4F6AC1D8.30604@timgolden.me.uk>

On 21/03/2012 23:03, Paul Moore wrote:
> On 21 March 2012 22:43, Mark Hammond wrote:
>> On 22/03/2012 1:22 AM, Lindberg, Van wrote:
>>> Mark, MAL, Martin, Tarek,
>>>
>>> Could you comment on this?
>>
>> Eric is correct - tools will be broken by this change. However, people
>> seem willing to push forward on this and accept such breakage as the
>> necessary cost.
>>
>> MAL, in his followup, asks what the advantages are of such a change. I've
>> actually been asking for the same thing in this thread and the only real
>> answer I've got is "consistency". So while I share MAL's concerns, people
>> seem willing to push forward on this anyway, without the benefits having
>> been explained.
>>
>> IOW, this isn't the decision I would make, but I think I've already made
>> that point a number of times in this thread. Beyond that, there doesn't
>> seem much for me to add...
>
> I agree on all points here. I don't understand quite why backward
> compatibility is being treated so lightly here. But equally, I've made
> my points and have little further to add.

Well, I've gone through (and deleted) three draft contributions to the
ideas proposed here over the last week or so. In short, I'm with Paul &
Mark. The OP seems far more casual towards breakage than would be the
case if, e.g., code were involved.

If this had been proposed for Python 3k I'd have said: go for it - why
not? But for this to drop in now means, as others have said, that I'll
have to adjust various small tools which assume the location of
python.exe according to the (minor) version I'm running. I can certainly
cope with the change without too much difficulty, but I'm afraid it does
smack of a too foolish consistency.

And it's not as though I've seen crowds of people chiming in with a
me-too! The only person strongly supporting the change (as distinct from
not opposing it) is VanL, who appears to need it for his particular
setup.

In short, I'm -1, but I'm not going to storm off in a huff if it goes
ahead, merely be a little bewildered at why this was needed by anyone
else and exactly what real-world problem it's solving for thousands of
Windows Python users.
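For what it's worth, the assumption those tools bake in looks roughly
like the following (find_python_exe is a made-up name for illustration,
not any real API); under the proposal, every such helper grows an extra
candidate path:

```python
import os

def find_python_exe(install_dir):
    """Locate python.exe under the old layout, or the proposed one."""
    # Old layout: python.exe sits at the root of the install,
    # e.g. C:\Python32\python.exe.
    # Proposed layout: it moves into the binaries directory,
    # e.g. C:\Python32\bin\python.exe.
    for relative in ("python.exe", os.path.join("bin", "python.exe")):
        candidate = os.path.join(install_dir, relative)
        if os.path.isfile(candidate):
            return candidate
    return None
```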
TJG

From g.brandl at gmx.net  Thu Mar 22 07:48:44 2012
From: g.brandl at gmx.net (Georg Brandl)
Date: Thu, 22 Mar 2012 07:48:44 +0100
Subject: [Python-Dev] Playing with a new theme for the docs
In-Reply-To:
References: <4F6A2084.90508@nedbatchelder.com> <4F6A2811.5080806@nedbatchelder.com>
Message-ID:

On 21.03.2012 20:39, Guido van Rossum wrote:
>> Guido, you encouraged us to use science, but only after describing my
>> science-based maximum line-length suggestion as "coddling," then said we
>> should let Georg get on with it, but only after reiterating your personal
>> favorite tweak (which I happen to agree with).
>
> I have a fair number of strong usability gripes about current (and
> past :-) design trends, but I know I can't design a decent-looking
> website myself if my life depended on it.
>
>> There's no way a committee (which this thread effectively is) will come
>> up with a good design. Everyone will dislike something about it. I think
>> it would be interesting to use the power of the web to provide docs
>> whose style could be adjusted a few ways to make people happy, but that
>> is probably more than anyone is willing to volunteer for; I know I can't
>> step up to do it.
>
> I think it's fine to have a bunch of folks submit their pet peeves
> (and argue them to the death :-) to the design czar and then let the
> czar (i.e. Georg) decide.
>
>> Personally, I think two Python projects that have focused on docs and
>> done a good job of it are Django and readthedocs.org. Perhaps we could
>> follow their lead?
>
> I think they are actually more trend-followers, and they seem to make a
> bunch of the mistakes I've fulminated against here. But again, I'll
> leave it to Georg.

Thanks for the vote of confidence.
I'll know what to consider for the next iteration thanks to the lively
participation here :)

Georg

From regebro at gmail.com  Thu Mar 22 09:46:29 2012
From: regebro at gmail.com (Lennart Regebro)
Date: Thu, 22 Mar 2012 09:46:29 +0100
Subject: [Python-Dev] New PEP
In-Reply-To:
References:
Message-ID:

On Thu, Mar 22, 2012 at 00:39, Huan Do wrote:
> Tell me what you guys think.

I don't really want to add more things to the language, so I hate to say
this: It makes sense to me. However, the syntax is very close to the
syntax for function annotations. But that's when defining, and this is
when calling, so it might work anyway; I don't have the knowledge
necessary to know.

So put it up on python-ideas. I'm not on that list, but people who know
more about this are, so they can tell you if this is feasible or not and
if it is a good idea or not.

//Lennart

From victor.stinner at gmail.com  Thu Mar 22 13:44:00 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Thu, 22 Mar 2012 13:44:00 +0100
Subject: [Python-Dev] dictproxy for other mapping types than dict
Message-ID:

Hi,

I created the following issue to expose the dictproxy as a builtin type:
http://bugs.python.org/issue14386

It would be interesting to accept any mapping type, not only dict. The
dictproxy implementation supports any mapping - even list or tuple - but
I don't want to support sequences, because a missing key would raise an
IndexError instead of a KeyError.

My problem is how to check the type in C.
issubclass(collections.ChainMap, collections.abc.Sequence) is False,
which is the expected result, so I need to implement this check in C.
The "PyMapping_Check(obj) && !PySequence_Check(obj)" check fails on
ChainMap: type.__new__() fills the tp_as_sequence->sq_item slot because
ChainMap has a __getitem__ method.

Do you have an idea how to implement such a test?
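In pure Python, the check I have in mind is roughly the following (just a
sketch of the intent, with a made-up helper name; the C version would
have to do the equivalent ABC checks itself):

```python
from collections import ChainMap
from collections.abc import Mapping, Sequence

def is_acceptable_mapping(obj):
    # Accept real mappings (dict, ChainMap, ...) but reject sequences,
    # because a missing "key" in a list or tuple raises IndexError
    # rather than KeyError.
    return isinstance(obj, Mapping) and not isinstance(obj, Sequence)
```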
Victor

From stefan at bytereef.org  Thu Mar 22 13:45:55 2012
From: stefan at bytereef.org (Stefan Krah)
Date: Thu, 22 Mar 2012 13:45:55 +0100
Subject: [Python-Dev] [Python-checkins] cpython: Issue #7652: Integrate the decimal floating point libmpdec library to speed
In-Reply-To:
References:
Message-ID: <20120322124555.GA12503@sleipnir.bytereef.org>

Victor Stinner wrote:
> >> Issue #7652: Integrate the decimal floating point libmpdec library to speed
> >> up the decimal module. Performance gains of the new C implementation are
> >> between 12x and 80x, depending on the application.
>
> Congrats Stefan! And thanks for the huge chunk of code.

Thanks, much appreciated. I'll take the opportunity to thank you in
return for the gigantic amount of work you've done on Python in the
past year!

Stefan Krah

From dirkjan at ochtman.nl  Thu Mar 22 13:58:36 2012
From: dirkjan at ochtman.nl (Dirkjan Ochtman)
Date: Thu, 22 Mar 2012 13:58:36 +0100
Subject: [Python-Dev] [Python-checkins] cpython: Issue #7652: Integrate the decimal floating point libmpdec library to speed
In-Reply-To:
References:
Message-ID:

On Wed, Mar 21, 2012 at 23:22, Victor Stinner wrote:
>>> http://hg.python.org/cpython/rev/7355550d5357
>>> changeset:   75850:7355550d5357
>>> user:        Stefan Krah
>>> date:        Wed Mar 21 18:25:23 2012 +0100
>>> summary:
>>>   Issue #7652: Integrate the decimal floating point libmpdec library to speed
>>> up the decimal module. Performance gains of the new C implementation are
>>> between 12x and 80x, depending on the application.

As a Python user, this looks really cool, thanks! As a packager, is the
libmpdec library used elsewhere? For Gentoo, we generally prefer to
package libraries separately and have Python depend on them. From the
site, it seems like you more or less wrote libmpdec for use in Python,
but if it's general-purpose and actually used in other software, it
would be nice if Python grew a configure option to make it use the
system libmpdec.
Cheers,

Dirkjan

From van.lindberg at gmail.com  Thu Mar 22 15:17:00 2012
From: van.lindberg at gmail.com (VanL)
Date: Thu, 22 Mar 2012 09:17:00 -0500
Subject: [Python-Dev] Python install layout and the PATH on win32 (Rationale part 1: Regularizing the layout)
In-Reply-To: <4F69EB9D.1060701@egenix.com>
References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> <4F68FB76.7010303@skippinet.com.au> <4F68FDE7.40505@gmail.com> <4F694DFB.7050400@netwok.org> <4F69E40C.6050901@gmail.com> <4F69EB9D.1060701@egenix.com>
Message-ID: <4F6B345C.1020406@gmail.com>

As this has been brought up a couple of times in this subthread, I
figured that I would lay out the rationale here.

There are two proposals on the table: 1) Regularize the install layout,
and 2) move the python binary to the binaries directory. This email will
deal with the first, and a second email will deal with the second.

1) Regularizing the install layout:

One of Python's strengths is its cross-platform appeal. Carefully
written Python programs are frequently portable between operating
systems and Python implementations with very few changes. Over the
years, substantial effort has been put into maintaining platform parity
and providing consistent interfaces to available functionality, even
when different underlying implementations are necessary (such as with
ntpath and posixpath).

One place where Python is unnecessarily different, however, is in the
layout and organization of the Python environment. This is most visible
in the name of the directory for binaries on the Windows platform
("Scripts") versus the name of the directory for binaries on every other
platform ("bin"), but a full listing of the layouts shows substantial
differences in layout and capitalization across platforms. Sometimes the
include directory is capitalized ("Include"), and sometimes not; and the
python version may or may not be included in the path to the standard
library.
This may seem like a harmless inconsistency, and if that were all it
was, I wouldn't care. (That said, cross-platform consistency is its own
good.) But it becomes a real pain when combined with tools like
virtualenv or the new pyvenv to create cross-platform development
environments.

In particular, I regularly do development on both Windows and a Mac, and
then deploy on Linux. I do this in virtualenvs, so that I have a
controlled and regular environment. I keep them in sync using source
control.

The problem comes when I have executable scripts that I want to include
in my dvcs - I can't have them in the obvious place - the binaries
directory - because *the name of the directory changes when you move
between platforms.* More concretely, I can't hg add "Scripts/runner.py"
on my Windows environment (where it is put in the PATH by virtualenv)
and then do a pull on Mac or Linux and have it end up properly in
"bin/runner.py", which is the correct PATH for those platforms.

This applies any time there are executable scripts that you want to
manage using source control across platforms. Django projects regularly
have these, and I suspect we will be seeing more of this with the new
"project" support in virtualenvwrapper.

While a few people have wondered why I would want this -- hopefully
answered above -- I have not heard any opposition to this part of the
proposal.

This first proposal is just to make the names of the directories match
across platforms. There are six keys defined in the installer files
(sysconfig.cfg and distutils.command.install): 'stdlib', 'purelib',
'platlib', 'headers', 'scripts', and 'data'.
Currently on Windows, there are two different layouts defined:

 'nt': {
   'stdlib': '{base}/Lib',
   'platstdlib': '{base}/Lib',
   'purelib': '{base}/Lib/site-packages',
   'platlib': '{base}/Lib/site-packages',
   'include': '{base}/Include',
   'platinclude': '{base}/Include',
   'scripts': '{base}/Scripts',
   'data'   : '{base}',
   },

 'nt_user': {
   'stdlib': '{userbase}/Python{py_version_nodot}',
   'platstdlib': '{userbase}/Python{py_version_nodot}',
   'purelib': '{userbase}/Python{py_version_nodot}/site-packages',
   'platlib': '{userbase}/Python{py_version_nodot}/site-packages',
   'include': '{userbase}/Python{py_version_nodot}/Include',
   'scripts': '{userbase}/Scripts',
   'data'   : '{userbase}',
   },

The proposal is to make all the layouts change to:

 'nt': {
   'stdlib': '{base}/lib',
   'platstdlib': '{base}/lib',
   'purelib': '{base}/lib/site-packages',
   'platlib': '{base}/lib/site-packages',
   'include': '{base}/include',
   'platinclude': '{base}/include',
   'scripts': '{base}/bin',
   'data'   : '{base}',
   },

The change here is that 'Scripts' will change to 'bin' and the
capitalization will be removed. Also, "user installs" of Python will
have the same internal layout as "system installs" of Python. This will
also, not coincidentally, match the install layout for posix, at least
with regard to the 'bin', 'lib', and 'include' directories.

Again, I have not heard *anyone* objecting to this part of the proposal
as it is laid out here. (Paul had a concern with the lib directory
earlier, but he said he was ok with the above).

Please let me know if you have any problems or concerns with this part 1.
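(As an aside, code that needs the scripts directory should not be
hard-coding 'Scripts' or 'bin' at all; the portable spelling below keeps
working whichever name is eventually chosen:)

```python
import sysconfig

# Ask sysconfig for the 'scripts' key instead of hard-coding the
# directory name; this resolves correctly on every platform.
scripts_dir = sysconfig.get_path("scripts")
print(scripts_dir)

# The templates for a specific scheme can be inspected the same way;
# under the current 'nt' scheme this ends with "Scripts".
nt_scripts = sysconfig.get_paths("nt")["scripts"]
print(nt_scripts)
```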
Thanks,
Van

From van.lindberg at gmail.com  Thu Mar 22 15:47:10 2012
From: van.lindberg at gmail.com (VanL)
Date: Thu, 22 Mar 2012 09:47:10 -0500
Subject: [Python-Dev] Python install layout and the PATH on win32 (Rationale part 2: Moving the python.exe)
In-Reply-To: <4F69EB9D.1060701@egenix.com>
References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> <4F68FB76.7010303@skippinet.com.au> <4F68FDE7.40505@gmail.com> <4F694DFB.7050400@netwok.org> <4F69E40C.6050901@gmail.com> <4F69EB9D.1060701@egenix.com>
Message-ID: <4F6B3B6E.2000402@gmail.com>

[PART 2: Moving the python binary]

There are two proposals on the table: 1) Regularize the install layout,
and 2) move the python binary to the binaries directory. This email
deals with the second issue exclusively. This has been the more
contentious issue.

2) Moving the Python exe:

A regular complaint of those new to Python on Windows (and new to
programming generally) has been that one of the first things they need
to do is edit the PATH to allow Python to be run. In particular, this is
the difficult sequence:

1. Install python.
2. Open up a shell and run "python".
3. Use pip or easy_install to install regetron (a package that installs
   an executable file).
4. Run regetron.

For step #2, the python exe needs to be on the PATH. For steps 3 and 4,
the binaries directory needs to be on the PATH. Currently, neither of
these is true, so the path needs to be edited to include both the python
root (where python.exe is, for step 2) and the "Scripts" (hopefully soon
"bin") directory where "pip" and "regetron" are (for steps 3 and 4). You
can substitute "nose," "cython," or other packages for "regetron" as
well.

MvL asked why anyone would want to run python directly from a cmd shell
instead of running it from the start menu.
There are two immediate responses to that: 1) observed behavior is that
people prefer to run "python" from the cmd shell when developing (as
observed by various people teaching Python, including Brian Curtin in
this thread), and 2) running python or python programs from the shell is
sometimes the only way to get a proper traceback when developing, making
it a better way to work.

The proposal here is to move python.exe into the binaries directory
(whatever it is called) and add an option to the Windows installer to
add that one directory to the PATH on install (and clean up the PATH on
uninstall). A new registry key would be added pointing to the location
of the python binary (wherever it is). Brian Curtin suggested this part
of the proposal and has implemented it in a branch. MvL suggested a
gradual transition to this over a three-release period.

Open Issues:

The PEP 397 Installer: As pointed out by Paul Moore, it may not matter
once PEP 397 lands whether python.exe is on the PATH or not - and it may
be better if it is not. As he put it:

"""If we do put python.exe on PATH (whether it's in bin or not), we have
to debate how to handle people having multiple versions of python on
their machine. In a post-PEP 397 world, no Python is "the machine
default" - .py files are associated with py.exe, not python.exe, so we
have to consider the following 3 commands being run from a shell prompt:

1. myprog.py
2. py myprog.py
3. python myprog.py

1 and 2 will always do the same thing. However, 3 could easily do
something completely different, if the Python in the #! line differs
from the one found on PATH. To me, this implies that it's better for (3)
to need explicit user action (setting PATH) if it's to do anything other
than give an error. But maybe that's just me.
I've been hit too often by confusion caused by *not* remembering this
fact."""

One possible response here is that the moving of the python.exe binary
and the setting of the PATH would be tied to an unchecked-by-default
installer option, making an explicit user choice necessary to invoke the
new functionality.

Breakage of existing tools: Mark Hammond, Paul Moore, and Tim Golden
have all said that they have existing tools that would break and would
need to be adjusted to match the new location of python.exe, because
that location is assumed to be at the root of the python install.

A related issue is that this portion of the proposal has met with some
resistance, but not much support, here on Python-dev. The reason for
that is selection bias: those who are on Python-dev are much more likely
to have tools that do advanced things with Python, such as introspect on
the location of the binary, and are also much more likely to be
comfortable with things like editing the PATH on Windows. In contrast,
the people who have trouble with this issue are those who are newest to
Python and programming generally - those for whom editing the PATH is a
challenge and who are likely to be confused by the distinction between
python.exe and a python program - and why, even after they add python to
the PATH, the python program is not directly executable.

From pete.alex.harris at gmail.com  Thu Mar 22 12:26:49 2012
From: pete.alex.harris at gmail.com (Peter Harris)
Date: Thu, 22 Mar 2012 11:26:49 +0000
Subject: [Python-Dev] Python-Dev Digest, Vol 104, Issue 79
In-Reply-To:
References:
Message-ID:

> On 03/21/2012 07:39 PM, Huan Do wrote:
> > Hi,
> >
> > I am a graduating Berkeley student that loves python and would like to
> > propose an enhancement to python. My proposal introduces a concept of
> > slicing generator. For instance, if one does x[:] it returns a list
> > which is a copy of x. Sometimes programmers would want to iterate over
> > a slice of x, but they do not like the overhead of constructing
> > another list. Instead we can create a similar operator that returns a
> > generator. My proposed syntax is x(:). The programmers are of course
> > able to set lower, upper, and step size like the following.
> >
> > x(1::-1)

-1 on the syntax. And have you looked at itertools.islice?

From glyph at twistedmatrix.com  Thu Mar 22 18:12:32 2012
From: glyph at twistedmatrix.com (Glyph Lefkowitz)
Date: Thu, 22 Mar 2012 13:12:32 -0400
Subject: [Python-Dev] Playing with a new theme for the docs
In-Reply-To: <4F6A560A.6050207@canterbury.ac.nz>
References: <4F6935E1.2030309@nedbatchelder.com> <4F69E337.7010501@nedbatchelder.com> <4F6A560A.6050207@canterbury.ac.nz>
Message-ID: <5260192D-7EF1-4021-AB50-FD2021566CFF@twistedmatrix.com>

On Mar 21, 2012, at 6:28 PM, Greg Ewing wrote:
> Ned Batchelder wrote:
>> Any of the tweaks people are suggesting could be applied individually
>> using this technique. We could just as easily choose to make the site
>> left-justified, and let the full-justification fans use custom
>> stylesheets to get it.
>
> Is it really necessary for the site to specify the justification
> at all? Why not leave it to the browser and whatever customisation
> the user chooses to make?

It's design. It's complicated.

Maybe yes, if you look at research related to default usage patterns,
saccade distance, reading speed, and retention latency. Maybe no, if you
look at research related to fixation/focus time, eye strain, and
non-linear access patterns. Maybe maybe, if you look at the subjective
aesthetic of the page according to various criteria, like "does it look
like a newspaper" and "do I have to resize my browser every time I visit
a new site to get a decent width for reading".
As has been said several times previously in this thread, it's best to
leave this up to a design czar who will at least make some decisions
that will make some people happy. I'm fairly certain it's not possible
to create a design that's optimal for all readers in all cases.

-glyph

From steve at pearwood.info  Thu Mar 22 18:02:43 2012
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 23 Mar 2012 04:02:43 +1100
Subject: [Python-Dev] Playing with a new theme for the docs
In-Reply-To:
References: <4F6A2084.90508@nedbatchelder.com> <4F6A2863.1080200@nedbatchelder.com>
Message-ID: <4F6B5B33.9020406@pearwood.info>

Fred Drake wrote:
> On Wed, Mar 21, 2012 at 3:13 PM, Ned Batchelder wrote:
>> There are bad designers, or more to the point, designers who favor the
>> overall look of the page at the expense of the utility of the page. That
>> doesn't mean all designers are bad, or that "design" is bad. Don't throw
>> out the baby with the bathwater.
>
> I get that. I'm not bad-mouthing actual design, and there are definitely
> good designers out there.
>
> It's unfortunate they're so seriously outnumbered.

As they say, the 99% who are lousy designers give the rest a bad name.
*wink*

My first impression of this page:

http://www.python.org/~gbrandl/build/html/index.html

was that the grey side-bar gives the page a somber, perhaps even dreary,
look. First impressions count, and I'm afraid that first look didn't
work for me. But clicking through to other pages with more text improved
my feelings. A big +1 on the pale green shading of code blocks.

The basic design seems good to me. I'd prefer a serif font for blocks of
text, while keeping sans serif for the headings, but that's a mild
preference.

Looking forward to seeing the next iteration.
-- Steven From p.f.moore at gmail.com Thu Mar 22 18:59:34 2012 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 22 Mar 2012 17:59:34 +0000 Subject: [Python-Dev] Python install layout and the PATH on win32 (Rationale part 1: Regularizing the layout) In-Reply-To: <4F6B345C.1020406@gmail.com> References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> <4F68FB76.7010303@skippinet.com.au> <4F68FDE7.40505@gmail.com> <4F694DFB.7050400@netwok.org> <4F69E40C.6050901@gmail.com> <4F69EB9D.1060701@egenix.com> <4F6B345C.1020406@gmail.com> Message-ID: On 22 March 2012 14:17, VanL wrote: > As this has been brought up a couple times in this subthread, I figured that > I would lay out the rationale here. I'm repeating myself here after I promised not to. My apologies, but I don't think this posting captures the debate completely. One reason I suggested a PEP is to better ensure that the arguments both pro and con were captured, as that is a key part of the PEP process. > One place where Python is unnecessarily different, however, is in > the layout and organization of the Python environment. This is most > visible in the name of the directory for binaries on the Windows platform > ("Scripts") versus the name of the directory for binaries on every other > platform ("bin"), First of all, this difference is almost entirely *invisible*. Apart from possibly setting PATH (once!) users should not be digging around in the Python installation directory. Certainly on Windows, it is a very unusual application that expects users to even care how the application is laid out internally. And I suspect that is also true on Unix and Mac. Secondly, the layouts are not as similar as you claim here, if I understand things correctly. on Unix Python is installed in /usr/local/bin so there isn't even a "Python installation directory" there. And Macs use some sort of Mac-specific bundle technology as I understand it. 
To be honest, I don't think that there's a lot of similarity even with your proposed changes. > but a full listing of the layouts shows > substantial differences in layout and capitalization across platforms. That's true, but largely incidental. And given that (a) Windows is case insensitive and (b) the capitalisation, although inconsistent, follows platform standards (Unix all lowercase, Windows capitalised) it makes little practical difference. > Sometimes the include is capitalized ("Include"), and sometimes not; and > the python version may or may not be included in the path to the > standard library or not. Given that on Windows the Python library is usually in something like C:\Python32\Lib whereas on Unix it's in /usr/lib/python3.2 (I think), the difference is reasonable because the Python *base* location (C:\Python32 on Windows vs /usr on Unix) is version specific in one case but not the other. To keep the correspondence complete, you should be suggesting installing in /python32 on Unix (which I doubt would gain much support :-)) > This may seem like a harmless inconsistency, and if that were all it was, I > wouldn't care. (That said, cross-platform consistency is its own good). But > it becomes a real pain when combined with tools like virtualenv or the new > pyvenv to create cross-platform development environments. The issue with virtualenv and pyvenv may be more significant. But you're only mentioning those incidentally. As a straw-man suggestion, why can virtualenv not be changed to maintain a platform-neutral layout in spite of what core Python does? This is a straw-man because I know the answer, but can we get the point out in the open - it's related to how distutils installs code, and that in turn hits you straight up against the distutils freeze. 
If distutils' behaviour is the issue here, then argue for more
flexibility in packaging, and use that extra flexibility to argue for
changes to virtualenv and pyvenv to maintain a standard cross-platform
layout. Breaking the Python installation layout isn't the only option
here, and I'd like to see a clear analysis of the tradeoffs. (I also get
a sense of undue haste - "we can change the Python layout for 3.3, but
changing packaging and virtualenv is a much longer process"...)

> In particular, I regularly do development on both Windows and a Mac, and
> then deploy on Linux. I do this in virtualenvs, so that I have a controlled
> and regular environment. I keep them in sync using source control.
>
> The problem comes when I have executable scripts that I want to include in
> my dvcs - I can't have them in the obvious place - the binaries directory -
> because *the name of the directory changes when you move between platforms.*
> More concretely, I can't hg add "Scripts/runner.py" on my Windows
> environment (where it is put in the PATH by virtualenv) and then do a pull
> on Mac or Linux and have it end up properly in "bin/runner.py" which is the
> correct PATH for those platforms.

This presupposes that your development workflow - developing in place in
the virtualenv itself - is "the obvious approach". From what I've seen
of tools like virtualenvwrapper, you're not alone in this. And I'm
pretty new to using virtualenv, so I wouldn't like to claim to be any
expert. But can I respectfully suggest that other ways of working
wouldn't hit these issues?

What I do is develop my project in a project-specific directory, just as
I would if I were using the system Python. And I have an activated
virtualenv *located somewhere else* that I install required third-party
modules in, etc. I then do a standard "pip install" to install and test
my project.
This adds an "install to test" cycle (or you use something like
setuptools' "deploy" techniques, which I know nothing about myself), but
in exchange you are independent of the virtualenv layout (as pip install
puts your scripts in the appropriate Scripts/bin directory). Also, by
doing this you test your installation process, which you don't if you
develop in place.

Again, you may not like this way of working, and that's fine. But can
you acknowledge (and document) that "change your way of working" is
another alternative to "change Python"?

> This applies anytime there are executable scripts that you want to manage
> using source control across platforms. Django projects regularly have
> these, and I suspect we will be seeing more of this with the new "project"
> support in virtualenvwrapper.

... if you develop inside the virtualenv.

> While a few people have wondered why I would want this -- hopefully
> answered above -- I have not heard any opposition to this part of the
> proposal.

See above. There has been opposition from a number of people. It's
relatively mild, simply because it's a niche area and doesn't cause huge
pain, but it's there. And you seem (based on the above analysis) to be
overstating the benefits, so the summary here is weighted heavily in
favour of change.

Also, you have made no attempt that I've seen to address the question of
why this is important enough to break backward compatibility. Maybe it
is - but why? Backward compatibility is a very strong rule, and should
be broken only with good justification. Consistency, and "it makes my
way of working easier", really shouldn't be sufficient.

Has anyone checked whether this will affect people like Enthought and
ActiveState who distribute their own versions of Python? Is
ActiveState's PPM tool affected?

> This first proposal is just to make the names of the directories match
> across platforms. There are six keys defined in the installer files
> (sysconfig.cfg and distutils.command.install): 'stdlib', 'purelib',
> 'platlib', 'headers', 'scripts', and 'data'.
> There are six keys defined in the installer files
> (sysconfig.cfg and distutils.command.install): 'stdlib', 'purelib',
> 'platlib', 'headers', 'scripts', and 'data'.
>
> Currently on Windows, there are two different layouts defined:
>
>   'nt': {
>     'stdlib': '{base}/Lib',
>     'platstdlib': '{base}/Lib',
>     'purelib': '{base}/Lib/site-packages',
>     'platlib': '{base}/Lib/site-packages',
>     'include': '{base}/Include',
>     'platinclude': '{base}/Include',
>     'scripts': '{base}/Scripts',
>     'data': '{base}',
>     },
>
>   'nt_user': {
>     'stdlib': '{userbase}/Python{py_version_nodot}',
>     'platstdlib': '{userbase}/Python{py_version_nodot}',
>     'purelib': '{userbase}/Python{py_version_nodot}/site-packages',
>     'platlib': '{userbase}/Python{py_version_nodot}/site-packages',
>     'include': '{userbase}/Python{py_version_nodot}/Include',
>     'scripts': '{userbase}/Scripts',
>     'data': '{userbase}',
>     },
>
> The proposal is to make all the layouts change to:
>
>   'nt': {
>     'stdlib': '{base}/lib',
>     'platstdlib': '{base}/lib',
>     'purelib': '{base}/lib/site-packages',
>     'platlib': '{base}/lib/site-packages',
>     'include': '{base}/include',
>     'platinclude': '{base}/include',
>     'scripts': '{base}/bin',
>     'data': '{base}',
>     },
>
> The change here is that 'Scripts' will change to 'bin' and the
> capitalization will be removed. Also, "user installs" of Python will have
> the same internal layout as "system installs" of Python. This will also, not
> coincidentally, match the install layout for posix, at least with regard to
> the 'bin', 'lib', and 'include' directories.

Note - that is not "Regularizing the layout". You have not made any changes to OS/2 (which matches Windows at the moment). And it doesn't match Posix at all. I won't copy a large chunk of sysconfig.py in here, but I'm tempted to, because your proposal as described really doesn't match the reality of sysconfig._INSTALL_SCHEMES.
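The install schemes under discussion can be inspected from a running interpreter rather than by reading sysconfig.py by hand. A minimal sketch (the scheme names and path templates vary across Python versions and platforms, so the printed values are illustrative only):

```python
import sysconfig

# All install schemes known to this interpreter.
print(sysconfig.get_scheme_names())

# Unexpanded templates show the 'Scripts' vs 'bin' split directly;
# the 'nt' scheme is defined in the scheme table on every platform,
# not just Windows, so this runs anywhere.
nt = sysconfig.get_paths(scheme='nt', expand=False)
posix = sysconfig.get_paths(scheme='posix_prefix', expand=False)
print(nt['scripts'])     # e.g. '{base}/Scripts'
print(posix['scripts'])  # e.g. '{base}/bin'
```

With expand=True (the default) the same call substitutes {base} and friends with the concrete paths for the running interpreter.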
I'd encourage people to go and read sysconfig.py for details. There really isn't much consistency at the moment, and fiddling with Windows but nothing else doesn't regularise anything. > Again, I have not heard *anyone* objecting to this part of the proposal as > it is laid out here. (Paul had a concern with the lib directory earlier, but > he said he was ok with the above). That's somewhat odd, as I did hear a number of concerns. But it was certainly not easy to tell which related to which part of the proposal. And all of the objections were mild, mostly because it's not a huge practical issue either way. > Please let me know if you have any problems or concerns with this part 1. Personally, my main concerns are around procedure and policy. The more the discussion goes on, the more I feel that there should be a PEP to capture the details of the debate clearly. Too much is getting lost in the noise. And I think you should provide a clear statement of why this issue is important enough to justify violating the backward compatibility policies. As Mark said (I think it was Mark...) if this had been proposed for 3.0, it would have been fine. Now we're at 3.2 with 3.3 close to release, and it just seems too late to be worth the risk. One plus point about your posting this separately. It's made me think through the issue in a bit more detail, and I'm now a solid -1 on the proposal. Paul. From rowen at uw.edu Thu Mar 22 19:49:54 2012 From: rowen at uw.edu (Russell E. Owen) Date: Thu, 22 Mar 2012 11:49:54 -0700 Subject: [Python-Dev] Playing with a new theme for the docs References: <4F6A2084.90508@nedbatchelder.com> <4F6A2863.1080200@nedbatchelder.com> <4F6B5B33.9020406@pearwood.info> Message-ID: In article <4F6B5B33.9020406 at pearwood.info>, Steven D'Aprano wrote: >... > My first impression of this page: > > http://www.python.org/~gbrandl/build/html/index.html > > was that the grey side-bar gives the page a somber, perhaps even dreary, > look. 
> First impressions count, and I'm afraid that first look didn't work for me. > > But clicking through onto other pages with more text improved my feelings. A > big +1 on the pale green shading of code blocks. > > The basic design seems good to me. I'd prefer a serif font for blocks of > text, > while keeping sans serif for the headings, but that's a mild preference. > > Looking forward to seeing the next iteration. I like the overall design, but one thing that seems to be missing is an overview of what Python is (hence what the page is about). Naturally we don't need that, but a one-line overview with a link to more information would be helpful. -- Russell From van.lindberg at gmail.com Thu Mar 22 19:57:17 2012 From: van.lindberg at gmail.com (VanL) Date: Thu, 22 Mar 2012 13:57:17 -0500 Subject: [Python-Dev] Python install layout and the PATH on win32 (Rationale part 1: Regularizing the layout) In-Reply-To: References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> <4F68FB76.7010303@skippinet.com.au> <4F68FDE7.40505@gmail.com> <4F694DFB.7050400@netwok.org> <4F69E40C.6050901@gmail.com> <4F69EB9D.1060701@egenix.com> <4F6B345C.1020406@gmail.com> Message-ID: <4F6B760D.3010004@gmail.com> Hi Paul, To start with, I appreciate your comments, and it is worth having both sides expressed. On 3/22/2012 12:59 PM, Paul Moore wrote: > I'm repeating myself here after I promised not to. My apologies, but I > don't think this posting captures the debate completely. One reason I > suggested a PEP is to better ensure that the arguments both pro and > con were captured, as that is a key part of the PEP process. I would be happy to write up a PEP. > First of all, this difference is almost entirely *invisible*. Apart > from possibly setting PATH (once!) users should not be digging around > in the Python installation directory. Certainly on Windows, it is a > very unusual application that expects users to even care how the > application is laid out internally.
And I suspect that is also true on > Unix and Mac. This is a good point; it is mostly visible in the virtualenvs. If it only changed in virtualenvs, I would be happy. The policy, though, is that the virtualenv follows the platform policy. > Secondly, the layouts are not as similar as you claim here, if I > understand things correctly. on Unix Python is installed in > /usr/local/bin so there isn't even a "Python installation directory" > there. And Macs use some sort of Mac-specific bundle technology as I > understand it. To be honest, I don't think that there's a lot of > similarity even with your proposed changes. I was summarizing here because, frankly, there are hardly any OS/2 users, so it would be mostly Windows users affected by this change. Also as noted, I suggest that all platforms standardize on bin, lib, and include, just as I laid out. That said, while I think that the above is a good idea, my personal ambitions are more modest: If the names of the top-level directories only were changed to 'bin', 'lib', and 'include' - never mind differences under 'lib' - I would be happy. In fact, even the one change of 'Scripts' to 'bin' everywhere would get 90% of my uses. > The issue with virtualenv and pyvenv may be more significant. But > you're only mentioning those incidentally. I am approaching it from the platform level because of the policy that virtualenvs match the platform install layout. If instead virtualenv layouts were standardized, that would end up making me just as happy. > (I also get a sense of undue haste - > "we can change the Python layout for 3.3, but changing packaging and > virtualenv is a much longer process"...) Honestly, I didn't expect that much resistance. None of the people I talked to in person even cared, or if they did, they thought that consistency was a benefit. But now that virtualenvs are going in in 3.3, I see this as the last good chance to change this. 
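The difference that is "mostly visible in the virtualenvs" is easy to demonstrate with the venv module that pyvenv builds on. A minimal sketch (assumes Python 3.4+ for the with_pip parameter, which is disabled here so that no network access is needed):

```python
import os
import tempfile
import venv

# Create a bare environment and look at its top-level layout.
env_dir = os.path.join(tempfile.mkdtemp(), 'env')
venv.create(env_dir, with_pip=False)

# The scripts directory is named 'Scripts' on Windows and 'bin'
# everywhere else -- exactly the inconsistency under discussion.
bin_name = 'Scripts' if os.name == 'nt' else 'bin'
print(sorted(os.listdir(env_dir)))
```

A tool that hard-codes one of the two names therefore needs a platform check to find the environment's executables.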
> This presupposes that your development workflow - developing in place > in the virtualenv itself - is "the obvious approach". From what I've > seen of tools like virtualenvwrapper, you're not alone in this. [...] But can > you acknowledge (and document) that "change your way of working" is > another alternative to "change Python". > Acknowledged. What you say is true - but people wanted to know what the benefit would be. I laid out my concrete use-case as a rationale. And as you note, I am not alone in this type of development. Sure, I care here because it affects my style of development, and there are other styles that have other benefits (and tradeoffs). I don't see that this part of the proposal would negatively affect those styles. > >> While a few people have wondered why I would want this -- hopefully answered >> above -- I have not heard any opposition to this part of the proposal. > See above. There has been opposition from a number of people. It's > relatively mild, simply because it's a niche area and doesn't cause > huge pain, but it's there. And you seem (based on the above analysis) > to be overstating the benefits, so the summary here is weighted > heavily in favour of change. If I have misrepresented anyone, I am sorry - but to the best of my understanding, no one had (prior to you, right now) objected to *this part* of the proposal. Mark, at least, specified that his concern was with the moving of the python binary and that he didn't care about this part. I believe Tim indicated that too, but perhaps I have on my rose-colored glasses and misunderstood him. > Also, you have made no attempt that I've seen to address the question > of why this is important enough to break backward compatibility. Maybe > it is - but why? Backward compatibility is a very strong rule, and > should be broken only with good justification. Consistency, and "it > makes my way of working easier" really shouldn't be sufficient. In general, yes, I agree with you. 
However, the break with backwards compatibility is, as you point out, minor, and there is a benefit to consistency - especially given virtualenv-centric development. > Has anyone checked whether this will affect people like Enthought and > ActiveState who distribute their own versions of Python? Is > ActiveState's PPM tool affected? I have been running like this for several years across multiple Python versions, so I have experience with the "breakage" from this part of the proposal. I have found four packages that would need to be updated: Pip, virtualenv, PyPM, and Egginst would need 1-2 line patches. I have these patches, and I would/could provide them. Generally these tools have something like:

    if platform == 'win32':
        bin_dir = 'Scripts'
    else:
        bin_dir = 'bin'

The patches just remove the special casing - bin_dir just gets set to 'bin'. > Note - that is not "Regularizing the layout". You have not made any > changes to OS/2 (which matches Windows at the moment). And it doesn't > match Posix at all. See my 'summarizing' content above - layouts should match. I also didn't want to post chunks of sysconfig.py. You also missed distutils.command.install, which is subtly different. The OS X framework is already posixy inside, and virtualenvs on Mac OS X follow the posix-user layout. It is true that some Linux distributions place lib, include, etc. in the system-wide directories. However, for altinstalls that are confined to a directory (i.e., that have a 'layout' in the sense that I am describing), my proposal would prohibit people from having multiple versions installed on top of each other, but they would clobber each other anyway with the current setup. > >> Again, I have not heard *anyone* objecting to this part of the proposal as >> it is laid out here. (Paul had a concern with the lib directory earlier, but >> he said he was ok with the above). > That's somewhat odd, as I did hear a number of concerns.
But it was > certainly not easy to tell which related to which part of the > proposal. And all of the objections were mild, mostly because it's not > a huge practical issue either way. True. And again, I tried not to misrepresent anyone, but I have not heard anyone (including you) who would actually have code broken by *this* change. Do you have code? I really, really would like to know. I have not found anything that needs a patch except for the four packages above. > Personally, my main concerns are around procedure and policy. The more > the discussion goes on, the more I feel that there should be a PEP to > capture the details of the debate clearly. Too much is getting lost in > the noise. And I think you should provide a clear statement of why > this issue is important enough to justify violating the backward > compatibility policies. As Mark said (I think it was Mark...) if this > had been proposed for 3.0, it would have been fine. Now we're at 3.2 > with 3.3 close to release, and it just seems too late to be worth the > risk. One plus point about your posting this separately. It's made me > think through the issue in a bit more detail, and I'm now a solid -1 > on the proposal. I have been trying at various PyCons and in various conversations to move this for years. No one cares. The current urgency is driven by pyvenv - changes now will be much, much easier than changes later. Again, I am happy to write a PEP. If I were to summarize (on this issue only): 1. The current backwards compatibility hit is minimal; I would be happy to contact and provide patches to the four packages I have found (and anyone else who wants one). Backwards compatibility in the future will probably be harder to deal with. 2. There are advantages to cross-platform consistency and to virtualenv-based development. I believe that these will grow in the future. 3. Most people won't care. To the extent that people notice, I think they will appreciate the consistency. 
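The "1-2 line patches" described above amount to deleting a platform check. A hedged sketch of the before-and-after shape (the function names are illustrative; this is not the actual pip or virtualenv source):

```python
import os
import sys
import sysconfig

def bin_dir_before(platform=sys.platform):
    # The special-casing Van describes finding in these tools:
    if platform == 'win32':
        return 'Scripts'
    else:
        return 'bin'

def bin_dir_after():
    # Asking the interpreter for its own layout, instead of
    # hard-coding the name, would survive a rename like the one
    # proposed (and works for either layout today):
    return os.path.basename(sysconfig.get_path('scripts'))

print(bin_dir_before(), bin_dir_after())
```

Under the proposal, bin_dir_before collapses to a constant 'bin'; the sysconfig-based version needs no patch at all.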
From rowen at uw.edu Thu Mar 22 20:05:00 2012 From: rowen at uw.edu (Russell E. Owen) Date: Thu, 22 Mar 2012 12:05:00 -0700 Subject: [Python-Dev] Playing with a new theme for the docs References: <4F6A2084.90508@nedbatchelder.com> <4F6A2863.1080200@nedbatchelder.com> <4F6B5B33.9020406@pearwood.info> Message-ID: In article , "Russell E. Owen" wrote: > In article <4F6B5B33.9020406 at pearwood.info>, > Steven D'Aprano wrote: > > >... > > My first impression of this page: > > > > http://www.python.org/~gbrandl/build/html/index.html > > > > was that the grey side-bar gives the page a somber, perhaps even dreary, > > look. > > First impressions count, and I'm afraid that first look didn't work for me. > > > > But clicking through onto other pages with more text improved my feelings. > > A > > big +1 on the pale green shading of code blocks. > > > > The basic design seems good to me. I'd prefer a serif font for blocks of > > text, > > while keeping sans serif for the headings, but that's a mild preference. > > > > Looking forward to seeing the next iteration. > > I like the overall design, but one thing seems to be missing is an > overview of what Python is (hence what the page is about). Naturally we > don't need that, but a one-line overview with a link to more information > would be helpful. > > -- Russell I'm afraid my last sentence was incoherent. I meant to say: Naturally we, as Python users, don't need that; but a one-line overview with a link to more information would be helpful to others who are not familiar with the language. -- Russell From glyph at twistedmatrix.com Thu Mar 22 20:35:19 2012 From: glyph at twistedmatrix.com (Glyph Lefkowitz) Date: Thu, 22 Mar 2012 15:35:19 -0400 Subject: [Python-Dev] Issue 13524: subprocess on Windows In-Reply-To: References: Message-ID: <61913F1B-1DFC-4EA5-AF29-25BBF24DA4DA@twistedmatrix.com> On Mar 21, 2012, at 4:38 PM, Brad Allen wrote: > I tripped over this one trying to make one of our Python at work > Windows compatible. 
We had no idea that a magic 'SystemRoot' > environment variable would be required, and it was causing issues for > pyzmq. > > It might be nice to reflect the findings of this email thread on the > subprocess documentation page: > > http://docs.python.org/library/subprocess.html > > Currently the docs mention this: > > "Note If specified, env must provide any variables required for the > program to execute. On Windows, in order to run a side-by-side > assembly the specified env must include a valid SystemRoot." > > How about rewording that to: > > "Note If specified, env must provide any variables required for the > program to execute. On Windows, a valid SystemRoot environment > variable is required for some Python libraries such as the 'random' > module. Also, in order to run a side-by-side assembly the specified > env must include a valid SystemRoot." Also, in order to execute in any installation environment where libraries are found in non-default locations, you will need to set LD_LIBRARY_PATH. Oh, and you will also need to set $PATH on UNIX so that libraries can find their helper programs and %PATH% on Windows so that any compiled dynamically-loadable modules and/or DLLs can be loaded. And by the way you will also need to relay DYLD_LIBRARY_PATH if you did a UNIX-style build on OS X, not LD_LIBRARY_PATH. Don't forget that you probably also need PYTHONPATH to make sure any subprocess environments can import the same modules as their parent. Not to mention SSH_AUTH_SOCK if your application requires access to _remote_ process spawning, rather than just local. Oh and DISPLAY in case your subprocesses need GUI support from an X11 program (which sometimes you need just to initialize certain libraries which don't actually do anything with a GUI). Oh and __CF_USER_TEXT_ENCODING is important sometimes too, don't forget that. 
And if your subprocess is in Perl or Ruby or Java you may need a couple dozen other variables which your deployment environment has set for you too. Did I mention CFLAGS or LC_ALL yet? Let me tell you a story about this one HP/UX machine... Ahem. Bottom line: it seems like screwing with the process spawning environment to make it minimal is a good idea for simplicity, for security, and for modularity. But take it from me, it isn't. I guarantee you that you don't actually know what is in your operating system's environment, and initializing it is a complicated many-step dance which some vendor or sysadmin or product integrator figured out how to do much better than your hapless Python program can. %SystemRoot% is just the tip of a very big, very nasty iceberg. Better not to keep refining why exactly it's required, or someone will eventually be adding a new variable (starting with %APPDATA% and %HOMEPATH%) that can magically cause your subprocess not to spawn properly to this page every six months for eternity. If you're spawning processes as a regular user, you should just take the environment you're given, perhaps with a few specific light additions whose meaning you understand. If you're spawning a process as an administrator or root, you should probably initialize the environment for the user you want to spawn that process as using an OS-specific mechanism like login(1). (Sorry that I don't know the Windows equivalent.) -glyph From g.brandl at gmx.net Thu Mar 22 20:45:17 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Thu, 22 Mar 2012 20:45:17 +0100 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: References: <4F6A2084.90508@nedbatchelder.com> <4F6A2863.1080200@nedbatchelder.com> <4F6B5B33.9020406@pearwood.info> Message-ID: On 22.03.2012 20:05, Russell E. Owen wrote: >> I like the overall design, but one thing seems to be missing is an >> overview of what Python is (hence what the page is about). 
Naturally we >> don't need that, but a one-line overview with a link to more information >> would be helpful. >> >> -- Russell > > > I'm afraid my last sentence was incoherent. I meant to say: > > Naturally we, as Python users, don't need that; but a one-line overview > with a link to more information would be helpful to others who are not > familiar with the language. Hi Russell, note that the page is not supposed to replace python.org, but is just a new styling of the Python documentation, docs.python.org, where it is kind of assumed that you know what Python is... Georg From g.brandl at gmx.net Thu Mar 22 20:46:36 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Thu, 22 Mar 2012 20:46:36 +0100 Subject: [Python-Dev] [Python-checkins] cpython: Issue #7652: Integrate the decimal floating point libmpdec library to speed In-Reply-To: References: Message-ID: On 21.03.2012 23:22, Victor Stinner wrote: >>> http://hg.python.org/cpython/rev/7355550d5357 >>> changeset: 75850:7355550d5357 >>> user: Stefan Krah >>> date: Wed Mar 21 18:25:23 2012 +0100 >>> summary: >>> Issue #7652: Integrate the decimal floating point libmpdec library to speed >>> up the decimal module. Performance gains of the new C implementation are >>> between 12x and 80x, depending on the application. > > Congrats Stefan! And thanks for the huge chunk of code. Seconded. This is the kind of stuff that will make 3.3 the most awesomest 3.x release ever (and hopefully convince people that it does make sense to port)... 
cheers, Georg From v+python at g.nevcal.com Thu Mar 22 20:50:09 2012 From: v+python at g.nevcal.com (Glenn Linderman) Date: Thu, 22 Mar 2012 12:50:09 -0700 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: <4F6B5B33.9020406@pearwood.info> References: <4F6A2084.90508@nedbatchelder.com> <4F6A2863.1080200@nedbatchelder.com> <4F6B5B33.9020406@pearwood.info> Message-ID: <4F6B8271.3030901@g.nevcal.com> On 3/22/2012 10:02 AM, Steven D'Aprano wrote: > > As they say, the 99% who are lousy designers give the rest a bad name. > > *wink* :) > My first impression of this page: > > http://www.python.org/~gbrandl/build/html/index.html > > was that the grey side-bar gives the page a somber, perhaps even > dreary, look. First impressions count, and I'm afraid that first look > didn't work for me. The dark sidebar continued down the whole page, and made it clearer why that space was being wasted at the bottom of long pages... I had never noticed, until doing the side-by-side comparison, that it was possible to collapse the TOC sidebar, but this seems to be true only in the old layout. After looking at both a while, my suggestions would be:

1. Preserve the collapsability of the TOC, but possibly enhance its recognizability with an X in the upper right of the TOC sidebar, as well as the << in the middle.

2. Make the header fixed, so that the bread crumb trail at the top is available even after scrolling way down a long page.

3. Make the sidebar separately scrollable, so that it stays visible when scrolling down in the text. This would make it much easier to jump from section to section, if the TOC didn't get lost in the process.

I have no particular preferences for colors or background colors, as long as they are reasonably legible. I do have a preference for serif fonts, especially if the font gets small. Can anyone point me to any legibility studies that show any font being more legible, more easily readable, than Times Roman?
(And yes, I know a lot of people that dislike Times Roman, and none of them ever have.) From rosuav at gmail.com Thu Mar 22 21:57:18 2012 From: rosuav at gmail.com (Chris Angelico) Date: Fri, 23 Mar 2012 07:57:18 +1100 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: <4F6B8271.3030901@g.nevcal.com> References: <4F6A2084.90508@nedbatchelder.com> <4F6A2863.1080200@nedbatchelder.com> <4F6B5B33.9020406@pearwood.info> <4F6B8271.3030901@g.nevcal.com> Message-ID: On Fri, Mar 23, 2012 at 6:50 AM, Glenn Linderman wrote: > 3. Make the sidebar separately scrollable, so that it stays visible when > scrolling down in the text. This would make it much easier to jump from > section to section, if the TOC didn't get lost in the process. -1. The downside of separate scrolling is that you lose the ability to scroll-wheel the main docs if your mouse is over the TOC. I'd rather it stay as a single page, so it doesn't matter where my pointer is when I scroll. But I realise I'm bikeshedding this a bit. Bottom line: I'll use Python, and docs.python.org, regardless of the design and layout... so I'll let the expert(s) decide this one. ChrisA From tjreedy at udel.edu Thu Mar 22 22:13:27 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 22 Mar 2012 17:13:27 -0400 Subject: [Python-Dev] PendingDeprecationWarning Message-ID: My impression is that the original reason for PendingDeprecationWarning versus DeprecationWarning was to be off by default until the last release before removal. But having DeprecationWarnings on by default was found to be too obnoxious and it too is off by default. So do we still need PendingDeprecationWarnings? My impression is that it is mostly not used, as it is a nuisance to remember to change from one to the other. The deprecation message can always indicate the planned removal time.
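For readers weighing the two categories: both are direct subclasses of Warning (PendingDeprecationWarning is not a subclass of DeprecationWarning), and both are filtered out by default in current releases. A minimal illustration:

```python
import warnings

# Both categories sit directly under Warning in the exception
# hierarchy; one is not a specialization of the other.
assert issubclass(DeprecationWarning, Warning)
assert issubclass(PendingDeprecationWarning, Warning)
assert not issubclass(PendingDeprecationWarning, DeprecationWarning)

# Neither is shown by default; force both visible to observe them.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    warnings.warn('old API, going away', DeprecationWarning)
    warnings.warn('will be deprecated later', PendingDeprecationWarning)

print([w.category.__name__ for w in caught])
# ['DeprecationWarning', 'PendingDeprecationWarning']
```

Since the default filters treat them the same, the distinction only surfaces for users who opt in with -W or warnings.simplefilter.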
I searched the Developer's Guide for both deprecation and DeprecationWarning and found nothing. -- Terry Jan Reedy From guido at python.org Thu Mar 22 22:14:54 2012 From: guido at python.org (Guido van Rossum) Date: Thu, 22 Mar 2012 14:14:54 -0700 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: References: <4F6A2084.90508@nedbatchelder.com> <4F6A2863.1080200@nedbatchelder.com> <4F6B5B33.9020406@pearwood.info> <4F6B8271.3030901@g.nevcal.com> Message-ID: Georg, please start a new thread when you have a new design for review. I'm muting this one... -- --Guido van Rossum (python.org/~guido) From rdmurray at bitdance.com Thu Mar 22 22:18:35 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Thu, 22 Mar 2012 17:18:35 -0400 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: References: <4F6A2084.90508@nedbatchelder.com> <4F6A2863.1080200@nedbatchelder.com> <4F6B5B33.9020406@pearwood.info> <4F6B8271.3030901@g.nevcal.com> Message-ID: <20120322211850.512872500E3@webabinitio.net> On Fri, 23 Mar 2012 07:57:18 +1100, Chris Angelico wrote: > On Fri, Mar 23, 2012 at 6:50 AM, Glenn Linderman wrote: > > 3. Make the sidebar separately scrollable, so that it stays visible when > > scrolling down in the text. This would make it much easier to jump from > > section to section, if the TOC didn't get lost in the process. > > -1. The downside of separate scrolling is that you lose the ability to > scroll-wheel the main docs if your mouse is over the TOC. I'd rather > it stay as a single page, so it doesn't matter where my pointer is > when I scroll. I agree, and I don't use a mousewheel much. I use pentadactyl, and in that case sometimes the keyboard focus ends up in the TOC and then the scrolling keys scroll the wrong thing (or appear to do nothing, since such things are generally shorter than my screen...) and I can never remember the key sequence to change the focus, because *most* sites I don't have that problem with.
For whatever that's worth :) --David From ethan at stoneleaf.us Thu Mar 22 22:18:56 2012 From: ethan at stoneleaf.us (Ethan Furman) Date: Thu, 22 Mar 2012 14:18:56 -0700 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: <4F6B8271.3030901@g.nevcal.com> References: <4F6A2084.90508@nedbatchelder.com> <4F6A2863.1080200@nedbatchelder.com> <4F6B5B33.9020406@pearwood.info> <4F6B8271.3030901@g.nevcal.com> Message-ID: <4F6B9740.1060302@stoneleaf.us> Glenn Linderman wrote: > After looking at both a while, my suggestions would be: > > 1. Preserve the collapsability of the TOC, but possibly enhance its > recognizability with an X in the upper right of the TOC sidebar, as well > as the << in the middle. > > 2. Make the header fixed, so that the bread crumb trail at the top is > available even after scrolling way down a long page. > > 3. Make the sidebar separately scrollable, so that it stays visible when > scrolling down in the text. This would make it much easier to jump from > section to section, if the TOC didn't get lost in the process. +1 +1 and, of course, +1 Having to go clear back to the top is a pain, and I never knew `til now that the sidebar was collapsible. ~Ethan~ From amauryfa at gmail.com Thu Mar 22 22:44:44 2012 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Thu, 22 Mar 2012 22:44:44 +0100 Subject: [Python-Dev] [Python-checkins] cpython: Issue #7652: Integrate the decimal floating point libmpdec library to speed In-Reply-To: References: Message-ID: 2012/3/22 Georg Brandl : >> Congrats Stefan! And thanks for the huge chunk of code. > > Seconded. This is the kind of stuff that will make 3.3 the most awesomest > 3.x release ever (and hopefully convince people that it does make sense to > port)... On the other hand, porting PyPy to 3.3 will be more work ;-) Fortunately the libmpdec directory should be reusable as is. Nice work!
-- Amaury Forgeot d'Arc From lukasz at langa.pl Thu Mar 22 22:49:11 2012 From: lukasz at langa.pl (=?iso-8859-2?Q?=A3ukasz_Langa?=) Date: Thu, 22 Mar 2012 22:49:11 +0100 Subject: [Python-Dev] PendingDeprecationWarning In-Reply-To: References: Message-ID: Message written by Terry Reedy on 22 Mar 2012 at 22:13: > My impression is that the original reason for PendingDeprecationWarning versus DeprecationWarning was to be off by default until the last release before removal. But having DeprecationWarnings on by default was found to be too obnoxious and it too is off by default. So do we still need PendingDeprecationWarnings? It is also my understanding that DeprecationWarnings have been effectively made to behave like PendingDeprecationWarnings. This makes the latter redundant. Should we raise DeprecationWarnings upon usage of PendingDeprecationWarnings? ;) -- Best regards, Łukasz Langa Senior Systems Architecture Engineer IT Infrastructure Department Grupa Allegro Sp. z o.o. From lukasz at langa.pl Thu Mar 22 22:44:44 2012 From: lukasz at langa.pl (=?iso-8859-2?Q?=A3ukasz_Langa?=) Date: Thu, 22 Mar 2012 22:44:44 +0100 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: <4F6B9740.1060302@stoneleaf.us> References: <4F6A2084.90508@nedbatchelder.com> <4F6A2863.1080200@nedbatchelder.com> <4F6B5B33.9020406@pearwood.info> <4F6B8271.3030901@g.nevcal.com> <4F6B9740.1060302@stoneleaf.us> Message-ID: Message written by Ethan Furman on 22 Mar 2012 at 22:18: > Glenn Linderman wrote: >> After looking at both a while, my suggestions would be: >> 1. Preserve the collapsability of the TOC, but possibly enhance its recognizability with an X in the upper right of the TOC sidebar, as well as the << in the middle. >> 2. Make the header fixed, so that the bread crumb trail at the top is available even after scrolling way down a long page. >> 3.
Make the sidebar separately scrollable, so that it stays visible when scrolling down in the text. This would make it much easier to jump from section to section, if the TOC didn't get lost in the process. > > +1 > +1 > and, of course, > +1 Something like that: http://packages.python.org/lck.django/lck.django.tags.models.html (breadcrumbs are at the bottom) -- Best regards, Łukasz Langa Senior Systems Architecture Engineer IT Infrastructure Department Grupa Allegro Sp. z o.o. From greg.ewing at canterbury.ac.nz Thu Mar 22 23:56:10 2012 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 23 Mar 2012 11:56:10 +1300 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: <4F6B5B33.9020406@pearwood.info> References: <4F6A2084.90508@nedbatchelder.com> <4F6A2863.1080200@nedbatchelder.com> <4F6B5B33.9020406@pearwood.info> Message-ID: <4F6BAE0A.4080108@canterbury.ac.nz> Can we please get rid of the sidebar, or at least provide a way of turning it off? I don't think it's anywhere near useful enough to be worth the space it takes up. You can only use it when you're scrolled to the top of the page, otherwise it's just a useless empty space. Also, I often want to put the documentation side by side with the code I'm working on, and having about a quarter to a third of the horizontal space taken up with junk makes that much more awkward than it needs to be. A table of contents as a separate page is a lot more usable for me. I can keep it open in a browser tab and switch to it when I want to look at it. Most of the time I don't want to look at it and don't want it taking up space on the page. Also I agree about the grey text being suboptimal. Deliberately throwing away contrast, especially for the main body text, is insane.
-- Greg From mhammond at skippinet.com.au Thu Mar 22 23:56:27 2012 From: mhammond at skippinet.com.au (Mark Hammond) Date: Fri, 23 Mar 2012 09:56:27 +1100 Subject: [Python-Dev] Python install layout and the PATH on win32 (Rationale part 2: Moving the python.exe) In-Reply-To: <4F6B3B6E.2000402@gmail.com> References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> <4F68FB76.7010303@skippinet.com.au> <4F68FDE7.40505@gmail.com> <4F694DFB.7050400@netwok.org> <4F69E40C.6050901@gmail.com> <4F69EB9D.1060701@egenix.com> <4F6B3B6E.2000402@gmail.com> Message-ID: <4F6BAE1B.8080005@skippinet.com.au> I'm responding to both of Van's recent messages in one: On 23/03/2012 1:47 AM, VanL wrote: > [PART 2: Moving the python binary] ... > A regular complaint of those new to Python on windows (and new to > programming generally) has been that one of the first things that > they need to do is to edit the PATH to allow Python to be run. In > particular, this is the difficult sequence: > > 1. Install python. 2. Open up a shell and run "python" 3. Use pip or > easy_install to install regetron (a package that installs an > executable file). 4. Run regetron. ... > One possible response here is that the moving of the python.exe > binary and the setting of the PATH would be tied to an > unchecked-by-default installer option, making an explicit user choice > needed to invoke the new functionality. Given an off-by-default setting, I fail to see how it fixes your "difficult sequence" above. What would the instructions above now say? That the user should re-install Python ensuring to set that checkbox? Cover both cases, including how the user can tell if it is on the PATH and how to fix it otherwise? Something else? 
> Breakage of existing tools: Mark Hammond, Paul Moore, and Tim Golden > have all expressed that they have existing tools that would break > and would need to be adjusted to match the new location of the > python.exe, because that location is assumed to be at the root of the > python install. > > A related issue is that this portion of the proposal has met with > some resistance, but not much support here on Python-dev. The reason > for that is selection bias: Those who are on Python-dev are much more > likely to have tools that do advanced things with Python, such as > introspect on the location of the binary, and are also much more > likely to be comfortable with things like editing the PATH on > windows. In contrast, the people that have trouble with this issue > are those that are newest to Python and programming generally - those > for whom editing the PATH is a challenge and who are likely to be > confused by the distinction between python.exe and a python program - > and why, even after they add python to the path, the python program > is not directly executable. Here you are referring to the PATH, but that isn't really where the objections are. I would claim a selection bias on Python-dev, where subscribers are less likely to use Windows regularly for development and therefore less likely to have developed or use tools for finding and launching Python. IMO, the lack of objections on Python-dev to renaming the binary directory is the same reason you aren't seeing overwhelming *support* for the change either. Without the perspective of being regular Windows users, people are happy to agree "consistent is better". All other things being equal, I'd agree too. Really, we have just one anecdote from you about your process and as Paul says, no attempt to outline other alternatives. For example, couldn't your "activate.bat" add both Scripts *and* bin to the PATH whereas your "activate.sh" adds just "bin"?
> I have been running like this for several years across multiple > Python versions, so I have experience with the "breakage" from this > part of the proposal. I have found four packages that would need to > be updated: Pip, virtualenv, PyPM, and Egginst would need 1-2 line > patches. With all due respect, I find this disingenuous. Your lack of experience with the tools that are out there doesn't mean they don't exist and I've already offered a couple of examples. I certainly can't claim to know what most of them are; I expect that I am underestimating them. IMO, your list is a fraction of the tools impacted. > I have these patches, I would/could provide them. Generally these tools have something like: > > if platform == 'win32': > bin_dir = 'Scripts' > else: > bin_dir = 'bin' > > The patches just remove the special casing - bin_dir just gets set to 'bin'. So none of those tools need to work with previous Python versions? But even if what you say is strictly true, I don't think a reasonable response to "but what about backwards compatibility and tool breakage" is "the breakage is simple and the fix is trivial" - the bar has never been that low for changes to the language itself. I don't see why tooling around the language should be held to any lesser standard. So my summary of the situation is: * There has been *exactly one* concrete case listed that would benefit from this, and I believe that one case can be mitigated by you having 2 directories on the PATH in Windows and one on other platforms. * You yourself listed 4 tools that would need to change to support this. I've listed a further 2, and Paul and Tim both indicated they would be impacted. ActiveState and Enthought haven't been canvassed. I suspect this is the tip of the iceberg - although I concede it is probably a relatively small iceberg :) Like Tim, I won't sulk if you can convince people to make this change anyway, but IMO it is completely clear the costs outweigh the benefits.
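The platform special case quoted above can already be avoided in pure Python by asking the interpreter's own install scheme where scripts live; a minimal sketch using `sysconfig` (available since Python 3.2):

```python
import os
import sysconfig

# Instead of hard-coding bin_dir = 'Scripts' on win32 and 'bin'
# elsewhere, query the per-platform install scheme directly.
scripts_dir = sysconfig.get_path("scripts")
print(scripts_dir)  # e.g. /usr/local/bin on POSIX, ...\Scripts on Windows

# "scripts" is one of the standard path names in every scheme,
# so this lookup works regardless of the eventual directory name.
assert "scripts" in sysconfig.get_path_names()
assert os.path.isabs(scripts_dir)
```

A tool written this way would keep working whether the directory is called Scripts or bin, which is part of why the hard-coded check keeps coming up in this thread.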
Thus, if it were my decision to make, it would not happen. Paul and Tim have the same view best I can tell. I think it would be a huge shame if it happens even in the face of these pragmatic objections. Cheers, Mark From van.lindberg at gmail.com Fri Mar 23 00:15:53 2012 From: van.lindberg at gmail.com (VanL) Date: Thu, 22 Mar 2012 18:15:53 -0500 Subject: [Python-Dev] Python install layout and the PATH on win32 (Rationale part 2: Moving the python.exe) In-Reply-To: <4F6BAE1B.8080005@skippinet.com.au> References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> <4F68FB76.7010303@skippinet.com.au> <4F68FDE7.40505@gmail.com> <4F694DFB.7050400@netwok.org> <4F69E40C.6050901@gmail.com> <4F69EB9D.1060701@egenix.com> <4F6B3B6E.2000402@gmail.com> <4F6BAE1B.8080005@skippinet.com.au> Message-ID: Another use case was just pointed out to me: making things consistent with buildout. Given a similar use case (create repeatable cross platform environments), they create and use a 'bin' directory for executable files. From ncoghlan at gmail.com Fri Mar 23 00:37:55 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 23 Mar 2012 09:37:55 +1000 Subject: [Python-Dev] Python install layout and the PATH on win32 (Rationale part 1: Regularizing the layout) In-Reply-To: <4F6B345C.1020406@gmail.com> References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> <4F68FB76.7010303@skippinet.com.au> <4F68FDE7.40505@gmail.com> <4F694DFB.7050400@netwok.org> <4F69E40C.6050901@gmail.com> <4F69EB9D.1060701@egenix.com> <4F6B345C.1020406@gmail.com> Message-ID: (resending, only sent to Van the first time) FWIW, I avoid the directory naming problems Van describes entirely by including my "scripts" in the source package and running them with the "-m" switch. So "python -m pulpdist.manage_site", for example, is PulpDist's Django administration client wrapper. 
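Nick's `-m` approach relies on the switch putting the current directory (rather than a scripts directory) on `sys.path`; a minimal sketch of such an entry-point module, with package and module names invented for illustration (this is not PulpDist's actual code):

```python
# Hypothetical file mypkg/manage_site.py, run as:
#   python -m mypkg.manage_site <args>
# The -m switch imports the module as __main__ with the working
# directory on sys.path, so the same command works from a source
# checkout or from an installed copy.
import sys

def main(argv=None):
    if argv is None:
        argv = sys.argv[1:]
    print("managing site with args:", argv)
    return 0

if __name__ == "__main__":
    main()
```

Because no script file is installed anywhere, the Scripts-vs-bin question simply never arises for tools structured this way.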
I do it that way mainly to get sys.path right automatically, but it avoids several other installed vs checked out differences too. -- Sent from my phone, thus the relative brevity :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ethan at stoneleaf.us Fri Mar 23 00:26:30 2012 From: ethan at stoneleaf.us (Ethan Furman) Date: Thu, 22 Mar 2012 16:26:30 -0700 Subject: [Python-Dev] Python install layout and the PATH on win32 (Rationale part 2: Moving the python.exe) In-Reply-To: <4F6BAE1B.8080005@skippinet.com.au> References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> <4F68FB76.7010303@skippinet.com.au> <4F68FDE7.40505@gmail.com> <4F694DFB.7050400@netwok.org> <4F69E40C.6050901@gmail.com> <4F69EB9D.1060701@egenix.com> <4F6B3B6E.2000402@gmail.com> <4F6BAE1B.8080005@skippinet.com.au> Message-ID: <4F6BB526.7070205@stoneleaf.us> Given the cost of the change, and the advent of the PEP-397 Launcher, I also vote -1. ~Ethan~ From stephen at xemacs.org Fri Mar 23 00:59:28 2012 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Fri, 23 Mar 2012 00:59:28 +0100 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: <4F6BAE0A.4080108@canterbury.ac.nz> References: <4F6A2084.90508@nedbatchelder.com> <4F6A2863.1080200@nedbatchelder.com> <4F6B5B33.9020406@pearwood.info> <4F6BAE0A.4080108@canterbury.ac.nz> Message-ID: On Thu, Mar 22, 2012 at 11:56 PM, Greg Ewing wrote: > Can we please get rid of the sidebar, or at least provide > a way of turning it off? I don't think it's anywhere > near useful enough to be worth the space it takes up. +1. It seems to mostly duplicate the headline next/previous buttons already duplicated in the footer, it doesn't give you the whole TOC, and the whole TOC is already present in many nodes.
The "Search bar" is a standard feature of most headers (and sometimes footers), and I like the "Report a Bug" link because it confirms to the reader that Python developers actually care what they (readers) think. I guess there is enough room for both of those in the header even after subtractiing the horizontal space for the sidebar. > A table of contents as a separate page is a lot more > usable for me. I agree, but with emphasis on the *for me* part. I suspect this is a personal preference. > Also I agree about the grey text being suboptimal. +1 for black text or a perceptibly darker grey. > Deliberately throwing away contrast, especially for > the main body text, is insane. It does *look* nice, though. Overall, very nice job, Georg! From brian at python.org Fri Mar 23 02:24:19 2012 From: brian at python.org (Brian Curtin) Date: Thu, 22 Mar 2012 20:24:19 -0500 Subject: [Python-Dev] Python install layout and the PATH on win32 (Rationale part 1: Regularizing the layout) In-Reply-To: References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> <4F68FB76.7010303@skippinet.com.au> <4F68FDE7.40505@gmail.com> <4F694DFB.7050400@netwok.org> <4F69E40C.6050901@gmail.com> <4F69EB9D.1060701@egenix.com> <4F6B345C.1020406@gmail.com> Message-ID: On Thu, Mar 22, 2012 at 12:59, Paul Moore wrote: > Note - that is not "Regularizing the layout". You have not made any > changes to OS/2 (which matches Windows at the moment). I think that would be a wasted effort with OS/2 entering "unsupported" mode in 3.3, and OS/2 specific code being removed in 3.4. 
From brian at python.org Fri Mar 23 03:26:36 2012 From: brian at python.org (Brian Curtin) Date: Thu, 22 Mar 2012 21:26:36 -0500 Subject: [Python-Dev] Python install layout and the PATH on win32 (Rationale part 1: Regularizing the layout) In-Reply-To: <4F6B760D.3010004@gmail.com> References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> <4F68FB76.7010303@skippinet.com.au> <4F68FDE7.40505@gmail.com> <4F694DFB.7050400@netwok.org> <4F69E40C.6050901@gmail.com> <4F69EB9D.1060701@egenix.com> <4F6B345C.1020406@gmail.com> <4F6B760D.3010004@gmail.com> Message-ID: On Thu, Mar 22, 2012 at 13:57, VanL wrote: > Honestly, I didn't expect that much resistance. None of the people I talked > to in person even cared, or if they did, they thought that consistency was a > benefit. But now that virtualenvs are going in in 3.3, I see this as the > last good chance to change this. I was one of these people, first finding out just about the Scripts/bin change, and my thought was JFDI. The rest of it seems fine to me - I say let's go for it. >> Personally, my main concerns are around procedure and policy. The more the >> discussion goes on, the more I feel that there should be a PEP to capture >> the details of the debate clearly. Too much is getting lost in the noise. >> And I think you should provide a clear statement of why this issue is >> important enough to justify violating the backward compatibility policies. >> As Mark said (I think it was Mark...) if this had been proposed for 3.0, it >> would have been fine. Now we're at 3.2 with 3.3 close to release, and it >> just seems too late to be worth the risk. One plus point about your posting >> this separately. It's made me think through the issue in a bit more detail, >> and I'm now a solid -1 on the proposal. > > > I have been trying at various PyCons and in various conversations to move > this for years. No one cares. The current urgency is driven by pyvenv - > changes now will be much, much easier than changes later. 
> > Again, I am happy to write a PEP. If I were to summarize (on this issue > only): > > 1. The current backwards compatibility hit is minimal; I would be happy to > contact and provide patches to the four packages I have found (and anyone > else who wants one). Backwards compatibility in the future will probably be > harder to deal with. > 2. There are advantages to cross-platform consistency and to > virtualenv-based development. I believe that these will grow in the future. > 3. Most people won't care. To the extent that people notice, I think they > will appreciate the consistency. The virtualenv point, to me, is a strong one. I think we have an opportunity right now to make an adjustment, otherwise we're locked in again. From brian at python.org Fri Mar 23 04:20:12 2012 From: brian at python.org (Brian Curtin) Date: Thu, 22 Mar 2012 22:20:12 -0500 Subject: [Python-Dev] Python install layout and the PATH on win32 (Rationale part 2: Moving the python.exe) In-Reply-To: <4F6B3B6E.2000402@gmail.com> References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> <4F68FB76.7010303@skippinet.com.au> <4F68FDE7.40505@gmail.com> <4F694DFB.7050400@netwok.org> <4F69E40C.6050901@gmail.com> <4F69EB9D.1060701@egenix.com> <4F6B3B6E.2000402@gmail.com> Message-ID: 2012/3/22 VanL : > Open Issues: > > """If we do put python.exe on PATH (whether it's in bin or not), we have > to debate how to handle people having multiple versions of python on > their machine. In a post-PEP 397 world, no Python is "the machine > default" - .py files are associated with py.exe, not python.exe, so we > have to consider the following 3 commands being run from a shell > prompt: > > 1. myprog.py > 2. py myprog.py > 3. python myprog.py > > 1 and 2 will always do the same thing. However, 3 could easily do > something completely different, if the Python in the #! line differs > from the one found on PATH.
To me, this implies that it's better for > (3) to need explicit user action (setting PATH) if it's to do anything > other than give an error. But maybe that's just me. I've been hit too > often by confusion caused by *not* remembering this fact.""" I'm not sure how widely used #1 is. I can't remember coming across any bug reports or posts around the web where the example command line just uses the Python chosen by the file association. I would suspect it's especially rare in the current time when many people are running a lot of versions of Python. Right now I have 2.6, 2.7, 3.1, 3.2, and 3.3, all installed in some different order, and I couldn't tell you which of those I installed the latest bugfix release for. That last one wins the race when it comes to file associations, and I've never paid attention to the installer option. #3 *will* require explicit user action - the Path setting is off by default. For as much as it's an advanced feature, it's really helpful to beginners. If you just want to type in "python" and have it work, the Path option is great. That's not to say the launcher isn't *also* a good thing. If you're a first timer and install Python 3.3 and want to run a tutorial - add Python to the path, type "python", and you're on your way. If you're an advanced user and you want to write and run code on Python 3.3, do the same. If you're even more advanced and are doing multi-version work, the launcher is a helpful alternative. > One possible response here is that the moving of the python.exe binary and > the setting of the PATH would be tied to an unchecked-by-default installer > option, making an explicit user choice needed to invoke the new > functionality. I ended up typing out the above while missing this paragraph...but, bingo. 
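The behavior being debated here hinges on how the PEP 397 launcher reads a script's `#!` line; a toy model of that dispatch (deliberately simplified, not the launcher's real code) shows why commands 1 and 2 agree while 3 can diverge:

```python
import re

def pick_version(first_line):
    """Toy model of the PEP 397 shebang dispatch: map a script's first
    line to a version request, falling back to the configured default.
    The real launcher handles more shebang forms than this sketch."""
    m = re.match(r"#!.*?python(\d(?:\.\d)?)?\s*$", first_line)
    if m is None:
        return "default"            # no usable shebang line
    return m.group(1) or "default"  # bare "python" also means the default

print(pick_version("#!/usr/bin/env python3.2"))  # -> 3.2
print(pick_version("#!python2"))                 # -> 2
print(pick_version("print(1)"))                  # -> default
```

Running `myprog.py` or `py myprog.py` goes through this dispatch, whereas `python myprog.py` uses whichever python.exe PATH happens to find, ignoring the shebang entirely.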
> Breakage of existing tools: Mark Hammond, Paul Moore, and Tim Golden have > all expressed that they have existing tools that would break and would need > to be adjusted to match the new location of the python.exe, because that > location is assumed to be at the root of the python install. Isn't the proposed "BinaryDir" registry key helpful here? It's not like we're telling people to fend for themselves -- we'll tell you where it's at. > A related issue is that this portion of the proposal has met with some > resistance, but not much support here on Python-dev. The reason for that is > selection bias: Those who are on Python-dev are much more likely to have > tools that do advanced things with Python, such as introspect on the > location of the binary, and are also much more likely to be comfortable with > things like editing the PATH on windows. In contrast, the people that have > trouble with this issue are those that are newest to Python and programming > generally - those for whom editing the PATH is a challenge and whom are > likely to be confused by the distinction between python.exe and a python > program - and why, even after they add python to the path, the python > program is not directly executable. I still don't really get how this portion of the proposal, the python.exe move to bin, is holding people up. If you're using the launcher, the change is invisible. If you're using a setup where bin is on the Path, the change is invisible. File associations? Invisible. If you're typing out the full path, you have to type "bin" in the middle -- this kind of sucks but I think we'll live. I get that tools could be affected. I had two IDE makers at PyCon immediately throw up red flags to this change. I think one of them was about to charge the stage during my talk. When it was mentioned that we could point them to the proper location, they breathed a sigh of relief and said "cool, do it". 
If a registry key pointing you to python.exe (rather, the directory) right now in Python < 3.3 works, why doesn't another one pointing you to python.exe in Python >= 3.3 work? From brian at python.org Fri Mar 23 04:23:39 2012 From: brian at python.org (Brian Curtin) Date: Thu, 22 Mar 2012 22:23:39 -0500 Subject: [Python-Dev] Python install layout and the PATH on win32 (Rationale part 2: Moving the python.exe) In-Reply-To: <4F6BB526.7070205@stoneleaf.us> References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> <4F68FB76.7010303@skippinet.com.au> <4F68FDE7.40505@gmail.com> <4F694DFB.7050400@netwok.org> <4F69E40C.6050901@gmail.com> <4F69EB9D.1060701@egenix.com> <4F6B3B6E.2000402@gmail.com> <4F6BAE1B.8080005@skippinet.com.au> <4F6BB526.7070205@stoneleaf.us> Message-ID: On Thu, Mar 22, 2012 at 18:26, Ethan Furman wrote: > Given the cost of the change, and the advent of the PEP-397 Launcher, I also > vote -1. Can you provide some justification other than a number? It's a pretty cheap change and the launcher solves somewhat of a different problem. From mhammond at skippinet.com.au Fri Mar 23 04:39:25 2012 From: mhammond at skippinet.com.au (Mark Hammond) Date: Fri, 23 Mar 2012 14:39:25 +1100 Subject: [Python-Dev] Python install layout and the PATH on win32 (Rationale part 2: Moving the python.exe) In-Reply-To: References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> <4F68FB76.7010303@skippinet.com.au> <4F68FDE7.40505@gmail.com> <4F694DFB.7050400@netwok.org> <4F69E40C.6050901@gmail.com> <4F69EB9D.1060701@egenix.com> <4F6B3B6E.2000402@gmail.com> Message-ID: <4F6BF06D.10106@skippinet.com.au> [snipped some CCs] On 23/03/2012 2:20 PM, Brian Curtin wrote: ... > I get that tools could be affected. I had two IDE makers at PyCon > immediately throw up red flags to this change. I think one of them was > about to charge the stage during my talk. 
When it was mentioned that > we could point them to the proper location, they breathed a sigh of > relief and said "cool, do it". If a registry key pointing you to > python.exe (rather, the directory) right now in Python < 3.3 works, > why doesn't another one pointing you to python.exe in Python >= 3.3 > work? It will work. The fact MvL is proposing the conservative approach of landing this in 3.5+ and having 3.3+ include the *new* registry key means I'm willing to reluctantly accept it rather than aggressively oppose it. Tools then have a chance to adapt to the new key. If the proposal moved any faster, existing tools which only use the old key would break without warning. The fact they need to change at all is unfortunate, but the timescale proposed means we can at least say we warned them. Mark From ncoghlan at gmail.com Fri Mar 23 04:48:30 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 23 Mar 2012 13:48:30 +1000 Subject: [Python-Dev] Setting up a RHEL6 buildbot Message-ID: I'm looking into getting a RHEL6 system set up to add to the buildbot fleet. The info already on the wiki [1] is pretty helpful, but does anyone have any suggestions on appropriate CPU/memory/disk allocations? Cheers, Nick. [1] http://wiki.python.org/moin/BuildBot -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From anacrolix at gmail.com Fri Mar 23 06:40:05 2012 From: anacrolix at gmail.com (Matt Joiner) Date: Fri, 23 Mar 2012 13:40:05 +0800 Subject: [Python-Dev] Setting up a RHEL6 buildbot In-Reply-To: References: Message-ID: The 24-core machine at my last workplace could configure and make the tip in 45 seconds from a clean checkout. Lots of cores? :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimjjewett at gmail.com Fri Mar 23 06:42:53 2012 From: jimjjewett at gmail.com (Jim J.
Jewett) Date: Thu, 22 Mar 2012 22:42:53 -0700 (PDT) Subject: [Python-Dev] Python install layout and the PATH on win32 (Rationale part 1: Regularizing the layout) In-Reply-To: <4F6B760D.3010004@gmail.com> Message-ID: <4f6c0d5d.c3d6e00a.2469.ffffde70@mx.google.com> In http://mail.python.org/pipermail/python-dev/2012-March/117953.html VanL wrote: > Paul Moore wrote: >> First of all, this difference is almost entirely *invisible*. Apart >> from possibly setting PATH (once!) users should not be digging around >> in the Python installation directory. Certainly on Windows, it is a >> very unusual application that expects users to even care how the >> application is laid out internally. And I suspect that is also true on >> Unix and Mac. >> Secondly, the layouts are not as similar as you claim here. In fact, of the 8 built-in layout schemes, the only two that are consistent are nt (which you propose to change) and os2. Of the 64 possible values, there are 26 unique values. So I suspect the right answer is just to make it easier for a user to set those 8 values at installation time; in your case, you can tell users what values to use when setting up their virtual environment. (And if you can't do that, then you can add files to your own directories, and if you can't do that, then you can add a post-install hook that renames your directories or moves your files according to the installed values.) > In fact, even the one change > of 'Scripts' to 'bin' everywhere would get 90% of my uses. So is this something that you can control in your recommended virtual environment? Or at least something that you can do with the same script that checks everything out of source control and adds *something* to the path? -jJ -- If there are still threading problems with my replies, please email me with details, so that I can try to resolve them.
-jJ From stefan at bytereef.org Fri Mar 23 09:03:15 2012 From: stefan at bytereef.org (Stefan Krah) Date: Fri, 23 Mar 2012 09:03:15 +0100 Subject: [Python-Dev] Setting up a RHEL6 buildbot In-Reply-To: References: Message-ID: <20120323080315.GA16995@sleipnir.bytereef.org> Nick Coghlan wrote: > I'm looking into getting a RHEL6 system set up to add to the buildbot > fleet. The info already on the wiki [1] is pretty helpful, but does > anyone have any suggestions on appropriate CPU/memory/disk > allocations? The Fedora bot has been running ultra-stable under qemu with 512MB allocated for the VM. I've never limited CPU or disk space. On an i7 quad core with 8GB of memory, I've been running two buildbot VMs, four deccheck processes at 100% CPU and a web server without any kind of noticeable performance degradation. Stefan Krah From p.f.moore at gmail.com Fri Mar 23 09:10:13 2012 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 23 Mar 2012 08:10:13 +0000 Subject: [Python-Dev] Python install layout and the PATH on win32 (Rationale part 2: Moving the python.exe) In-Reply-To: References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> <4F68FB76.7010303@skippinet.com.au> <4F68FDE7.40505@gmail.com> <4F694DFB.7050400@netwok.org> <4F69E40C.6050901@gmail.com> <4F69EB9D.1060701@egenix.com> <4F6B3B6E.2000402@gmail.com> Message-ID: On 23 March 2012 03:20, Brian Curtin wrote: >> Breakage of existing tools: Mark Hammond, Paul Moore, and Tim Golden have >> all expressed that they have existing tools that would break and would need >> to be adjusted to match the new location of the python.exe, because that >> location is assumed to be at the root of the python install. > > Isn't the proposed "BinaryDir" registry key helpful here? It's not > like we're telling people to fend for themselves -- we'll tell you > where it's at. It won't help me much. I either check a key and fall back on the old method, or check in bin and fall back on the old method. No major difference.
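The key-then-fallback lookup Paul describes can be sketched in a few lines; this is a hedged illustration, not any tool's actual code — `InstallPath` is the registry key tools have long used, the proposed `BinaryDir` key would simply be one more lookup before it, and the final fallback here just reuses the running interpreter:

```python
import os
import sys

def find_python(version="3.3"):
    """Sketch: locate a python.exe via the registry on Windows,
    falling back to the interpreter that is running this code."""
    try:
        import winreg
    except ImportError:
        winreg = None  # not on Windows
    if winreg is not None:
        subkey = r"Software\Python\PythonCore\%s\InstallPath" % version
        try:
            # The unnamed value of InstallPath is the install directory.
            install_dir = winreg.QueryValue(winreg.HKEY_LOCAL_MACHINE, subkey)
            return os.path.join(install_dir, "python.exe")
        except OSError:
            pass  # key missing for this version: fall through
    return sys.executable

print(find_python())
```

Whether the extra lookup is worth doing over just probing the filesystem is exactly the judgment call Paul makes above.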
The key is slightly worse, as I'm already looking in the filesystem, so why open a registry key, but it's mostly irrelevant. > I still don't really get how this portion of the proposal, the > python.exe move to bin, is holding people up. If you're using the > launcher, the change is invisible. If you're using a setup where bin > is on the Path, the change is invisible. File associations? Invisible. > If you're typing out the full path, you have to type "bin" in the > middle -- this kind of sucks but I think we'll live. Agreed, it's irrelevant for end users. It's only going to affect tools. Paul. From stefan at bytereef.org Fri Mar 23 09:26:31 2012 From: stefan at bytereef.org (Stefan Krah) Date: Fri, 23 Mar 2012 09:26:31 +0100 Subject: [Python-Dev] [Python-checkins] cpython: Issue #7652: Integrate the decimal floating point libmpdec library to speed In-Reply-To: References: Message-ID: <20120323082631.GA17050@sleipnir.bytereef.org> Dirkjan Ochtman wrote: > As a packager, is the libmpdec library used elsewhere? For Gentoo, we > generally prefer to package libraries separately and have Python > depend on them. From the site, it seems like you more or less wrote > libmpdec for usage in Python, but if it's general-purpose and actually > used in other software, it would be nice if Python grew a configure > option to make it use the system libmpdec. libmpdec was actually written before the module. It's general purpose and it fully implements the specification. I'm only aware of in-house usage. Someone has tried to submit a libmpdec package to OpenSUSE, but it was rejected with the claim that there already exists a package with the name. I think the claim is false: There is a libmpcdec package, where "cdec" presumably stands for "codec". 
I'll add the --with-system-libmpdec option with the caveat that changes will probably make it first into the libmpdec shipped with Python, see also: http://bugs.python.org/issue7652#msg155744 On the bright side, I don't expect many changes, since the specification is stable. Stefan Krah From dirkjan at ochtman.nl Fri Mar 23 09:41:37 2012 From: dirkjan at ochtman.nl (Dirkjan Ochtman) Date: Fri, 23 Mar 2012 09:41:37 +0100 Subject: [Python-Dev] [Python-checkins] cpython: Issue #7652: Integrate the decimal floating point libmpdec library to speed In-Reply-To: <20120323082631.GA17050@sleipnir.bytereef.org> References: <20120323082631.GA17050@sleipnir.bytereef.org> Message-ID: On Fri, Mar 23, 2012 at 09:26, Stefan Krah wrote: > I'll add the --with-system-libmpdec option with the caveat that > changes will probably make it first into the libmpdec shipped > with Python, see also: > > http://bugs.python.org/issue7652#msg155744 Sounds good, thanks! Cheers, Dirkjan From stefan at bytereef.org Fri Mar 23 10:22:55 2012 From: stefan at bytereef.org (Stefan Krah) Date: Fri, 23 Mar 2012 10:22:55 +0100 Subject: [Python-Dev] [Python-checkins] cpython: Issue #7652: Integrate the decimal floating point libmpdec library to speed In-Reply-To: References: Message-ID: <20120323092255.GA17205@sleipnir.bytereef.org> Georg Brandl wrote: > >>> Issue #7652: Integrate the decimal floating point libmpdec library to speed > >>> up the decimal module. Performance gains of the new C implementation are > >>> between 12x and 80x, depending on the application. > > > > Congrats Stefan! And thanks for the huge chunk of code. > > Seconded. This is the kind of stuff that will make 3.3 the most awesomest > 3.x release ever (and hopefully convince people that it does make sense to > port)... Thanks! For cdecimal specifically I have the impression that 3.x is already used in the financial community, where web framework dependencies aren't an issue. 
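The kind of workload behind the quoted speedup figures is easy to reproduce; a small timing sketch (the 12x-80x numbers come from the changelog entry — the absolute time below depends entirely on the machine and on whether the C implementation is compiled in):

```python
import timeit
from decimal import Decimal, getcontext

getcontext().prec = 28  # the default decimal precision

def bench():
    # A money-ish workload: repeated divisions and additions.
    total = Decimal(0)
    for i in range(1000):
        total += Decimal(i) / Decimal(7)
    return total

elapsed = timeit.timeit(bench, number=10)
print("10 x 1000 divide-and-add operations: %.4f s" % elapsed)
```

Comparing the elapsed time between a pure-Python decimal and the libmpdec-backed one is how per-application speedup factors like these are measured.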
On the web side, there seems to be a huge interest in speeding up database accesses, so let me evangelize again: Database applications using decimal will run 12x faster in 3.3. Stefan Krah From victor.stinner at gmail.com Fri Mar 23 10:28:44 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Fri, 23 Mar 2012 10:28:44 +0100 Subject: [Python-Dev] [Python-checkins] cpython: Issue #7652: Integrate the decimal floating point libmpdec library to speed In-Reply-To: <20120323092255.GA17205@sleipnir.bytereef.org> References: <20120323092255.GA17205@sleipnir.bytereef.org> Message-ID: By the way, how much faster is cdecimal? 72x or 80x? http://docs.python.org/dev/whatsnew/3.3.html#decimal Victor 2012/3/23 Stefan Krah : > Georg Brandl wrote: >> >>> Issue #7652: Integrate the decimal floating point libmpdec library to speed >> >>> up the decimal module. Performance gains of the new C implementation are >> >>> between 12x and 80x, depending on the application. >> > >> > Congrats Stefan! And thanks for the huge chunk of code. >> >> Seconded. This is the kind of stuff that will make 3.3 the most awesomest >> 3.x release ever (and hopefully convince people that it does make sense to >> port)... > > Thanks! For cdecimal specifically I have the impression that 3.x is already > used in the financial community, where web framework dependencies aren't an > issue. > > On the web side, there seems to be a huge interest in speeding up database > accesses, so let me evangelize again: Database applications using decimal > will run 12x faster in 3.3.
> > > Stefan Krah > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/victor.stinner%40gmail.com From stefan at bytereef.org Fri Mar 23 10:30:33 2012 From: stefan at bytereef.org (Stefan Krah) Date: Fri, 23 Mar 2012 10:30:33 +0100 Subject: [Python-Dev] [Python-checkins] cpython: Issue #7652: Integrate the decimal floating point libmpdec library to speed In-Reply-To: References: Message-ID: <20120323093033.GB17205@sleipnir.bytereef.org> Amaury Forgeot d'Arc wrote: > > Seconded. This is the kind of stuff that will make 3.3 the most awesomest > > 3.x release ever (and hopefully convince people that it does make sense to > > port)... > > On the other hand, porting PyPy to 3.3 will be more work ;-) We've got to keep you on your toes, don't we? :) > Fortunately the libmpdec directory should be reusable as is. > Nice work! Thanks, also for helping out with the MutableMapping context. Stefan Krah From eliben at gmail.com Fri Mar 23 10:51:10 2012 From: eliben at gmail.com (Eli Bendersky) Date: Fri, 23 Mar 2012 11:51:10 +0200 Subject: [Python-Dev] PEP 411 - request for pronouncement Message-ID: Hello, PEP 411 -- Provisional packages in the Python standard library has been updated with all accumulated feedback from list discussions. Here it is: http://www.python.org/dev/peps/pep-0411/ (the text is also pasted at the bottom of this email). The PEP received mostly positive feedback. The only undecided point is where to specify that the package is provisional. Currently the PEP mandates specifying it in the documentation and in the docstring. Other suggestions were to put it in the code, either as a __provisional__ attribute on the module, or collect all such modules in a single sys.provisional list.
According to http://blog.python.org/2012/03/2012-language-summit-report.html, the PEP was discussed in the language summit and overall viewed positively, although no final decision has been reached. ISTM a decision needs to be taken, which is why I request pronouncement, with a recommendation on the requirement the PEP should make of provisional modules (process details). Eli -------------------------------- PEP: 411 Title: Provisional packages in the Python standard library Version: $Revision$ Last-Modified: $Date$ Author: Nick Coghlan , Eli Bendersky Status: Draft Type: Informational Content-Type: text/x-rst Created: 2012-02-10 Python-Version: 3.3 Post-History: 2012-02-10 Abstract ======== The process of including a new package into the Python standard library is hindered by the API lock-in and promise of backward compatibility implied by a package being formally part of Python. This PEP describes a methodology for marking a standard library package "provisional" for the period of a single minor release. A provisional package may have its API modified prior to "graduating" into a "stable" state. On one hand, this state provides the package with the benefits of being formally part of the Python distribution. On the other hand, the core development team explicitly states that no promises are made with regards to the stability of the package's API, which may change for the next release. While it is considered an unlikely outcome, such packages may even be removed from the standard library without a deprecation period if the concerns regarding their API or maintenance prove well-founded. Proposal - a documented provisional state ========================================= Whenever the Python core development team decides that a new package should be included into the standard library, but isn't entirely sure about whether the package's API is optimal, the package can be included and marked as "provisional".
In the next minor release, the package may either be "graduated" into a normal "stable" state in the standard library, remain in provisional state, or be rejected and removed entirely from the Python source tree. If the package ends up graduating into the stable state after being provisional, its API may be changed according to accumulated feedback. The core development team explicitly makes no guarantees about API stability and backward compatibility of provisional packages. Marking a package provisional ----------------------------- A package will be marked provisional by a notice in its documentation page and its docstring. The following paragraph will be added as a note at the top of the documentation page: The package has been included in the standard library on a provisional basis. Backwards incompatible changes (up to and including removal of the package) may occur if deemed necessary by the core developers. The phrase "provisional basis" will then be a link to the glossary term "provisional package", defined as: A provisional package is one which has been deliberately excluded from the standard library's normal backwards compatibility guarantees. While major changes to such packages are not expected, as long as they are marked provisional, backwards incompatible changes (up to and including removal of the package) may occur if deemed necessary by core developers. Such changes will not be made gratuitously - they will occur only if serious flaws are uncovered that were missed prior to the inclusion of the package. This process allows the standard library to continue to evolve over time, without locking in problematic design errors for extended periods of time. See PEP 411 for more details. The following will be added to the start of the package's docstring: The API of this package is currently provisional. Refer to the documentation for details.
Moving a package from the provisional to the stable state simply implies removing these notes from its documentation page and docstring. Which packages should go through the provisional state ------------------------------------------------------ We expect most packages proposed for addition into the Python standard library to go through a minor release in the provisional state. There may, however, be some exceptions, such as packages that use a pre-defined API (for example ``lzma``, which generally follows the API of the existing ``bz2`` package), or packages with an API that has wide acceptance in the Python development community. In any case, packages that are proposed to be added to the standard library, whether via the provisional state or directly, must fulfill the acceptance conditions set by PEP 2. Criteria for "graduation" ------------------------- In principle, most provisional packages should eventually graduate to the stable standard library. Some reasons for not graduating are: * The package may prove to be unstable or fragile, without sufficient developer support to maintain it. * A much better alternative package may be found during the preview release. Essentially, the decision will be made by the core developers on a per-case basis. The point to emphasize here is that a package's inclusion in the standard library as "provisional" in some release does not guarantee it will continue being part of Python in the next release. Rationale ========= Benefits for the core development team -------------------------------------- Currently, the core developers are really reluctant to add new interfaces to the standard library. This is because as soon as they're published in a release, API design mistakes get locked in due to backward compatibility concerns. 
By gating all major API additions through some kind of a provisional mechanism for a full release, we get one full release cycle of community feedback before we lock in the APIs with our standard backward compatibility guarantee. We can also start integrating provisional packages with the rest of the standard library early, so long as we make it clear to packagers that the provisional packages should not be considered optional. The only difference between provisional APIs and the rest of the standard library is that provisional APIs are explicitly exempted from the usual backward compatibility guarantees. Benefits for end users ---------------------- For future end users, the broadest benefit lies in a better "out-of-the-box" experience - rather than being told "oh, the standard library tools for task X are horrible, download this 3rd party library instead", those superior tools are more likely to be just an import away. For environments where developers are required to conduct due diligence on their upstream dependencies (severely harming the cost-effectiveness of, or even ruling out entirely, much of the material on PyPI), the key benefit lies in ensuring that all packages in the provisional state are clearly under python-dev's aegis from at least the following perspectives: * Licensing: Redistributed by the PSF under a Contributor Licensing Agreement. * Documentation: The documentation of the package is published and organized via the standard Python documentation tools (i.e. ReST source, output generated with Sphinx and published on http://docs.python.org). * Testing: The package test suites are run on the python.org buildbot fleet and results published via http://www.python.org/dev/buildbot. * Issue management: Bugs and feature requests are handled on http://bugs.python.org * Source control: The master repository for the software is published on http://hg.python.org.
Candidates for provisional inclusion into the standard library ============================================================== For Python 3.3, there are a number of clear current candidates: * ``regex`` (http://pypi.python.org/pypi/regex) - approved by Guido [#]_. * ``daemon`` (PEP 3143) * ``ipaddr`` (PEP 3144) Other possible future use cases include: * Improved HTTP modules (e.g. ``requests``) * HTML 5 parsing support (e.g. ``html5lib``) * Improved URL/URI/IRI parsing * A standard image API (PEP 368) * Improved encapsulation of import state (PEP 406) * Standard event loop API (PEP 3153) * A binary version of WSGI for Python 3 (e.g. PEP 444) * Generic function support (e.g. ``simplegeneric``) Rejected alternatives and variations ==================================== See PEP 408. References ========== .. [#] http://mail.python.org/pipermail/python-dev/2012-January/115962.html Copyright ========= This document has been placed in the public domain. .. Local Variables: mode: indented-text indent-tabs-mode: nil sentence-end-double-space: t fill-column: 70 coding: utf-8 End: From stefan at bytereef.org Fri Mar 23 11:40:05 2012 From: stefan at bytereef.org (Stefan Krah) Date: Fri, 23 Mar 2012 11:40:05 +0100 Subject: [Python-Dev] [Python-checkins] cpython: Issue #7652: Integrate the decimal floating point libmpdec library to speed In-Reply-To: References: <20120323092255.GA17205@sleipnir.bytereef.org> Message-ID: <20120323104005.GA17581@sleipnir.bytereef.org> Victor Stinner wrote: > By the way, how much faster is cdecimal? 72x or 80x? > http://docs.python.org/dev/whatsnew/3.3.html#decimal It really depends on the precision. Also, the performance of decimal.py depends on many other things in the Python tree, so it easily changes +-10%. Currently, decimal.py seems to be 10% faster than in 3.2, maybe because of the new string representation. The 80x is a ballpark figure for the maximum expected speedup for standard numerical floating point applications. 
factorial(1000) is 219x faster in _decimal, and with increasing precision the difference gets larger and larger. For huge numbers _decimal is also faster than int: factorial(1000000): _decimal, calculation time: 6.844487905502319 _decimal, tostr(): 0.033592939376831055 int, calculation time: 17.96010398864746 int, tostr(): ... still running ... Stefan Krah From mal at egenix.com Fri Mar 23 11:40:47 2012 From: mal at egenix.com (M.-A. Lemburg) Date: Fri, 23 Mar 2012 11:40:47 +0100 Subject: [Python-Dev] Python install layout and the PATH on win32 (Rationale part 1: Regularizing the layout) In-Reply-To: <4F6B345C.1020406@gmail.com> References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> <4F68FB76.7010303@skippinet.com.au> <4F68FDE7.40505@gmail.com> <4F694DFB.7050400@netwok.org> <4F69E40C.6050901@gmail.com> <4F69EB9D.1060701@egenix.com> <4F6B345C.1020406@gmail.com> Message-ID: <4F6C532F.7070208@egenix.com> VanL wrote: > As this has been brought up a couple times in this subthread, I figured that I would lay out the > rationale here. > > There are two proposals on the table: 1) Regularize the install layout, and 2) move the python > binary to the binaries directory. This email will deal with the first, and a second email will deal > with the second. > > 1) Regularizing the install layout: > > One of Python's strengths is its cross-platform appeal. Carefully- > written Python programs are frequently portable between operating > systems and Python implementations with very few changes. Over the > years, substantial effort has been put into maintaining platform > parity and providing consistent interfaces to available functionality, > even when different underlying implementations are necessary (such > as with ntpath and posixpath). > > One place where Python is unnecessarily different, however, is in > the layout and organization of the Python environment. 
This is most > visible in the name of the directory for binaries on the Windows platform ("Scripts") versus the > name of the directory for binaries on every other platform ("bin"), but a full listing of the > layouts shows > substantial differences in layout and capitalization across platforms. > Sometimes the include is capitalized ("Include"), and sometimes not; and > the python version may or may not be included in the path to the > standard library or not. > > This may seem like a harmless inconsistency, and if that were all it was, I wouldn't care. (That > said, cross-platform consistency is its own good). But it becomes a real pain when combined with > tools like virtualenv or the new pyvenv to create cross-platform development environments. > > In particular, I regularly do development on both Windows and a Mac, and then deploy on Linux. I do > this in virtualenvs, so that I have a controlled and regular environment. I keep them in sync using > source control. > > The problem comes when I have executable scripts that I want to include in my dvcs - I can't have it > in the obvious place - the binaries directory - because *the name of the directory changes when you > move between platforms.* More concretely, I can't hg add "Scripts/runner.py" on my windows > environment (where it is put in the PATH by virtualenv) and then do a pull on Mac or Linux and have > it end up properly in "bin/runner.py" which is the correct PATH for those platforms. > > This applies anytime there are executable scripts that you want to manage using source control > across platforms. Django projects regularly have these, and I suspect we will be seeing more of this > with the new "project" support in virtualenvwrapper. > > While a few people have wondered why I would want this -- hopefully answered above -- I have not > heard any opposition to this part of the proposal. > > This first proposal is just to make the names of the directories match across platforms.
There are > six keys defined in the installer files (sysconfig.cfg and distutils.command.install): 'stdlib', > 'purelib', 'platlib', 'headers', 'scripts', and 'data'. > > Currently on Windows, there are two different layouts defined: > > 'nt': { > 'stdlib': '{base}/Lib', > 'platstdlib': '{base}/Lib', > 'purelib': '{base}/Lib/site-packages', > 'platlib': '{base}/Lib/site-packages', > 'include': '{base}/Include', > 'platinclude': '{base}/Include', > 'scripts': '{base}/Scripts', > 'data' : '{base}', > }, > > 'nt_user': { > 'stdlib': '{userbase}/Python{py_version_nodot}', > 'platstdlib': '{userbase}/Python{py_version_nodot}', > 'purelib': '{userbase}/Python{py_version_nodot}/site-packages', > 'platlib': '{userbase}/Python{py_version_nodot}/site-packages', > 'include': '{userbase}/Python{py_version_nodot}/Include', > 'scripts': '{userbase}/Scripts', > 'data' : '{userbase}', > }, > > > The proposal is to make all the layouts change to: > > 'nt': { > 'stdlib': '{base}/lib', > 'platstdlib': '{base}/lib', > 'purelib': '{base}/lib/site-packages', > 'platlib': '{base}/lib/site-packages', > 'include': '{base}/include', > 'platinclude': '{base}/include', > 'scripts': '{base}/bin', > 'data' : '{base}', > }, > > The change here is that 'Scripts' will change to 'bin' and the capitalization will be removed. Also, > "user installs" of Python will have the same internal layout as "system installs" of Python. This > will also, not coincidentally, match the install layout for posix, at least with regard to the > 'bin', 'lib', and 'include' directories. > > Again, I have not heard *anyone* objecting to this part of the proposal as it is laid out here. > (Paul had a concern with the lib directory earlier, but he said he was ok with the above). > > Please let me know if you have any problems or concerns with this part 1. Since userbase will usually be a single directory in the home dir of a user, the above would lose the possibility to support multiple Python versions in that directory.
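An aside on the layouts quoted above: tools that hardcode "Scripts" versus "bin" can instead ask the stdlib for the per-scheme path. This is a minimal sketch using the real ``sysconfig.get_path()`` API (available since Python 3.2); the exact directories printed will of course differ per installation.

```python
# Query the install layout instead of hardcoding 'Scripts' vs 'bin'.
# sysconfig.get_path() expands the same {base}/... templates shown above.
import sysconfig

# The scripts directory for the running interpreter's default scheme:
print(sysconfig.get_path("scripts"))

# The named schemes under discussion can also be inspected explicitly:
for scheme in ("nt", "posix_prefix"):
    print(scheme, sysconfig.get_path("scripts", scheme))
```

A cross-platform tool that resolved the directory this way would be unaffected by whichever name the proposal settles on.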
-- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Mar 23 2012) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ 2012-04-03: Python Meeting Duesseldorf 11 days to go ::: Try our new mxODBC.Connect Python Database Interface for free ! :::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ From mhammond at skippinet.com.au Fri Mar 23 12:03:02 2012 From: mhammond at skippinet.com.au (Mark Hammond) Date: Fri, 23 Mar 2012 22:03:02 +1100 Subject: [Python-Dev] Python install layout and the PATH on win32 (Rationale part 2: Moving the python.exe) In-Reply-To: References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> <4F68FB76.7010303@skippinet.com.au> <4F68FDE7.40505@gmail.com> <4F694DFB.7050400@netwok.org> <4F69E40C.6050901@gmail.com> <4F69EB9D.1060701@egenix.com> <4F6B3B6E.2000402@gmail.com> Message-ID: <4F6C5866.2010902@skippinet.com.au> On 23/03/2012 7:10 PM, Paul Moore wrote: > On 23 March 2012 03:20, Brian Curtin wrote: >>> Breakage of existing tools: Mark Hammond, Paul Moore, and Tim Golden have >>> all expressed that they have existing tools that would break and would need >>> to be adjusted to match the new location of the python.exe, because that >>> location is assumed to be at the root of the python install. >> >> Isn't the proposed "BinaryDir" registry key helpful here? It's not >> like we're telling people to fend for themselves -- we'll tell you >> where it's at. > > It won't help me much. I either check a key and fall back on the old > method, or check in bin and fall back on the old method. No major > difference. 
The key is slightly worse, as I'm already looking in the > filesystem, so why open a registry key, but it's mostly irrelevant. That's a really good point. On reflection, the 2 tools I've been using as examples are already sniffing around the file-system relative to the install path, looking in the root and the PCBuild directories. The simplest approach for these tools to take is to simply sniff the bin directory too - so they are unlikely to refer to the BinaryDir key at all. Mark From solipsis at pitrou.net Fri Mar 23 12:10:19 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 23 Mar 2012 12:10:19 +0100 Subject: [Python-Dev] [Python-checkins] cpython: Issue #7652: Integrate the decimal floating point libmpdec library to speed References: <20120323092255.GA17205@sleipnir.bytereef.org> Message-ID: <20120323121019.580f381b@pitrou.net> On Fri, 23 Mar 2012 10:22:55 +0100 Stefan Krah wrote: > Georg Brandl wrote: > > >>> Issue #7652: Integrate the decimal floating point libmpdec library to speed > > >>> up the decimal module. Performance gains of the new C implementation are > > >>> between 12x and 80x, depending on the application. > > > > > > Congrats Stefan! And thanks for the huge chunk of code. > > > > Seconded. This is the kind of stuff that will make 3.3 the most awesomest > > 3.x release ever (and hopefully convince people that it does make sense to > > port)... > > Thanks! For cdecimal specifically I have the impression that 3.x is already > used in the financial community, where web framework dependencies aren't an > issue. > > On the web side, there seems to be a huge interest in speeding up database > accesses, so let me evangelize again: Database applications using decimal > will run 12x faster in 3.3. Are you sure it isn't 12.5x ? Regards Antoine. 
From solipsis at pitrou.net Fri Mar 23 12:12:05 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 23 Mar 2012 12:12:05 +0100 Subject: [Python-Dev] PendingDeprecationWarning References: Message-ID: <20120323121205.6c8d8508@pitrou.net> On Thu, 22 Mar 2012 17:13:27 -0400 Terry Reedy wrote: > My impression is that the original reason for PendingDeprecationWarning > versus DeprecationWarning was to be off by default until the last > release before removal. But having DeprecationWarnings on by default was > found to be too obnoxious and it too is off by default. So do we still > need PendingDeprecationWarnings? My impression is that it is mostly not > used, as it is a nuisance to remember to change from one to the other. > The deprecation message can always indicate the planned removal time. I > searched the Developer's Guide for both deprecation and > DeprecationWarning and found nothing. Warnings are not only for us, they are for third-party libraries. In this case it seems removing PendingDeprecationWarning would be a pointless nuisance. Regards Antoine. From solipsis at pitrou.net Fri Mar 23 12:13:49 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 23 Mar 2012 12:13:49 +0100 Subject: [Python-Dev] Setting up a RHEL6 buildbot References: Message-ID: <20120323121349.63abf8ea@pitrou.net> On Fri, 23 Mar 2012 13:48:30 +1000 Nick Coghlan wrote: > I'm looking into getting a RHEL6 system set up to add to the buildbot > fleet. The info already on the wiki [1] is pretty helpful, but does > anyone have any suggestions on appropriate CPU/memory/disk > allocations? One or two cores is enough, unless you want to allow for multiple builds in parallel (in which case do say so :-)). You'll need quite a bit of disk space though. At least 20 or 30 GB to err on the safe side, IMO. Regards Antoine. 
From stefan at bytereef.org Fri Mar 23 12:47:02 2012 From: stefan at bytereef.org (Stefan Krah) Date: Fri, 23 Mar 2012 12:47:02 +0100 Subject: [Python-Dev] [Python-checkins] cpython: Issue #7652: Integrate the decimal floating point libmpdec library to speed In-Reply-To: <20120323121019.580f381b@pitrou.net> References: <20120323092255.GA17205@sleipnir.bytereef.org> <20120323121019.580f381b@pitrou.net> Message-ID: <20120323114702.GA17859@sleipnir.bytereef.org> Antoine Pitrou wrote: > > On the web side, there seems to be a huge interest in speeding up database > > accesses, so let me evangelize again: Database applications using decimal > > will run 12x faster in 3.3. > > Are you sure it isn't 12.5x ? Well, that was marketing for 3.3. Stefan Krah From p.f.moore at gmail.com Fri Mar 23 14:37:42 2012 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 23 Mar 2012 13:37:42 +0000 Subject: [Python-Dev] Python install layout and the PATH on win32 (Rationale part 2: Moving the python.exe) In-Reply-To: References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> <4F68FB76.7010303@skippinet.com.au> <4F68FDE7.40505@gmail.com> <4F694DFB.7050400@netwok.org> <4F69E40C.6050901@gmail.com> <4F69EB9D.1060701@egenix.com> <4F6B3B6E.2000402@gmail.com> <4F6BAE1B.8080005@skippinet.com.au> Message-ID: On 22 March 2012 23:15, VanL wrote: > Another use case was just pointed out to me: making things consistent with buildout. Given a similar use > case (create repeatable cross platform environments), they create and use a 'bin' directory for executable files. Another problem case: cx_Freeze. This currently breaks when installed in a virtualenv, as it locates the "Scripts" directory by appending "Scripts" to the directory of the python executable. So the proposed change *will* break cx_Freeze. The Scripts->bin change will also break it. Paul. PS Yes, I need to report the existing bug. The point remains, however...
From pje at telecommunity.com Fri Mar 23 17:39:41 2012 From: pje at telecommunity.com (PJ Eby) Date: Fri, 23 Mar 2012 12:39:41 -0400 Subject: [Python-Dev] Python install layout and the PATH on win32 (Rationale part 1: Regularizing the layout) In-Reply-To: <4F6B760D.3010004@gmail.com> References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> <4F68FB76.7010303@skippinet.com.au> <4F68FDE7.40505@gmail.com> <4F694DFB.7050400@netwok.org> <4F69E40C.6050901@gmail.com> <4F69EB9D.1060701@egenix.com> <4F6B345C.1020406@gmail.com> <4F6B760D.3010004@gmail.com> Message-ID: On Mar 22, 2012 2:57 PM, "VanL" wrote: > That said, while I think that the above is a good idea, my personal ambitions are more modest: If the names of the top-level directories only were changed to 'bin', 'lib', and 'include' - never mind differences under 'lib' - I would be happy. In fact, even the one change of 'Scripts' to 'bin' everywhere would get 90% of my uses. Why don't you just install your scripts to 'bin' everywhere then, and add the bin directory to the path on Windows? Distutils allows you to customize your install target for scripts, as does setuptools. Why do you need *Python's own default* to change, to support your preference for using a bin directory? Even if you are using tools that don't use distutils' configuration settings for these directories, why not simply fix those tools so that they do? From zooko at zooko.com Fri Mar 23 17:55:17 2012 From: zooko at zooko.com (Zooko Wilcox-O'Hearn) Date: Fri, 23 Mar 2012 10:55:17 -0600 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: <4F6137EF.9000000@gmail.com> References: <4F6137EF.9000000@gmail.com> Message-ID: > I merged the two functions into one function: time.steady(strict=False). > > time.steady() should be monotonic most of the time, but may use a fallback.
> > time.steady(strict=True) fails with OSError or NotImplementedError if > reading the monotonic clock failed or if no monotonic clock is available. If someone wants time.steady(strict=False), then why don't they just continue to use time.time()? I want time.steady(strict=True), and I'm glad you're providing it and I'm willing to use it this way, although it is slightly annoying because "time.steady(strict=True)" really means "time.steady(i_really_mean_it=True)". Else, I would have used "time.time()". I am aware of a large number of use cases for a steady clock (event scheduling, profiling, timeouts), and a large number of uses cases for a "NTP-respecting wall clock" clock (calendaring, displaying to a user, timestamping). I'm not aware of any use case for "steady if implemented, else wall-clock", and it sounds like a mistake to me. Regards, Zooko From status at bugs.python.org Fri Mar 23 18:07:38 2012 From: status at bugs.python.org (Python tracker) Date: Fri, 23 Mar 2012 18:07:38 +0100 (CET) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20120323170738.C93321D080@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2012-03-16 - 2012-03-23) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. 
Issues counts and deltas: open 3346 ( +9) closed 22829 (+50) total 26175 (+59) Open issues with patches: 1434 Issues opened (42) ================== #5301: add mimetype for image/vnd.microsoft.icon (patch) http://bugs.python.org/issue5301 reopened by eric.araujo #10340: asyncore doesn't properly handle EINVAL on OSX http://bugs.python.org/issue10340 reopened by r.david.murray #14336: Difference between pickle implementations for function objects http://bugs.python.org/issue14336 opened by sbt #14338: Document how to forward POST data on redirects http://bugs.python.org/issue14338 opened by beerNuts #14339: Optimizing bin, oct and hex http://bugs.python.org/issue14339 opened by storchaka #14340: Update embedded copy of expat - fix security & crash issues http://bugs.python.org/issue14340 opened by gregory.p.smith #14341: sporadic (?) test_urllib2 failures http://bugs.python.org/issue14341 opened by pitrou #14345: Document socket.SOL_SOCKET http://bugs.python.org/issue14345 opened by techtonik #14349: The documentation of 'dis' doesn't describe MAKE_FUNCTION corr http://bugs.python.org/issue14349 opened by eli.bendersky #14350: Strange Exception from copying an iterable http://bugs.python.org/issue14350 opened by Jakob.Bowyer #14352: Distutils2: add logging message to report successful installat http://bugs.python.org/issue14352 opened by agronholm #14353: Proper gettext support in locale module http://bugs.python.org/issue14353 opened by melflynn #14354: Crash in _ctypes_alloc_callback http://bugs.python.org/issue14354 opened by ogre #14356: Distutils2 ignores site-local configuration http://bugs.python.org/issue14356 opened by agronholm #14357: Distutils2 does not work with virtualenv http://bugs.python.org/issue14357 opened by agronholm #14360: email.encoders.encode_quopri doesn't work with python 3.2 http://bugs.python.org/issue14360 opened by mitya57 #14361: No link to issue tracker on Python home page http://bugs.python.org/issue14361 opened by stevenjd #14362: 
No mention of collections.ChainMap in What's New for 3.3 http://bugs.python.org/issue14362 opened by stevenjd #14364: Argparse incorrectly handles '--' http://bugs.python.org/issue14364 opened by maker #14365: argparse: subparsers, argument abbreviations and ambiguous opt http://bugs.python.org/issue14365 opened by jakub #14366: Supporting bzip2 and lzma compression in zip files http://bugs.python.org/issue14366 opened by storchaka #14367: try/except block in ismethoddescriptor() in inspect.py, so tha http://bugs.python.org/issue14367 opened by ncdave4life #14368: floattime() should not raise an exception http://bugs.python.org/issue14368 opened by haypo #14369: make __closure__ writable http://bugs.python.org/issue14369 opened by Yury.Selivanov #14371: Add support for bzip2 compression to the zipfile module http://bugs.python.org/issue14371 opened by storchaka #14372: Fix all invalid usage of borrowed references http://bugs.python.org/issue14372 opened by haypo #14373: C implementation of functools.lru_cache http://bugs.python.org/issue14373 opened by anacrolix #14374: Compiling Python 2.7.2 on HP11i PA-RISC ends with segmentation http://bugs.python.org/issue14374 opened by donchen #14375: Add socketserver running property http://bugs.python.org/issue14375 opened by giampaolo.rodola #14376: sys.exit documents argument as "integer" but actually requires http://bugs.python.org/issue14376 opened by Gareth.Rees #14377: Modify serializer for xml.etree.ElementTree to allow forcing t http://bugs.python.org/issue14377 opened by adpoliak #14379: Several traceback docs improvements http://bugs.python.org/issue14379 opened by techtonik #14381: Intern certain integral floats for memory savings and performa http://bugs.python.org/issue14381 opened by krisvale #14383: Generalize the use of _Py_IDENTIFIER in ceval.c and typeobject http://bugs.python.org/issue14383 opened by haypo #14385: Support other types than dict for __builtins__ http://bugs.python.org/issue14385 opened by 
haypo

#14386: Expose dictproxy as a public type
     http://bugs.python.org/issue14386  opened by haypo
#14387: Include\accu.h incompatible with Windows.h
     http://bugs.python.org/issue14387  opened by jeffr at livedata.com
#14390: Tkinter single-threaded deadlock
     http://bugs.python.org/issue14390  opened by jcbollinger
#14391: misc TYPO in argparse.Action docstring
     http://bugs.python.org/issue14391  opened by shima__shima
#14392: type=bool doesn't raise error in argparse.Action
     http://bugs.python.org/issue14392  opened by shima__shima
#14393: Incorporate Guide to Magic Methods?
     http://bugs.python.org/issue14393  opened by djc
#14394: missing links on performance claims of cdecimal
     http://bugs.python.org/issue14394  opened by tshepang

Most recent 15 issues with no replies (15)
==========================================

#14392: type=bool doesn't raise error in argparse.Action
     http://bugs.python.org/issue14392
#14391: misc TYPO in argparse.Action docstring
     http://bugs.python.org/issue14391
#14390: Tkinter single-threaded deadlock
     http://bugs.python.org/issue14390
#14379: Several traceback docs improvements
     http://bugs.python.org/issue14379
#14375: Add socketserver running property
     http://bugs.python.org/issue14375
#14368: floattime() should not raise an exception
     http://bugs.python.org/issue14368
#14345: Document socket.SOL_SOCKET
     http://bugs.python.org/issue14345
#14341: sporadic (?) test_urllib2 failures
     http://bugs.python.org/issue14341
#14339: Optimizing bin, oct and hex
     http://bugs.python.org/issue14339
#14336: Difference between pickle implementations for function objects
     http://bugs.python.org/issue14336
#14329: proxy_bypass_macosx_sysconf does not handle singel ip addresse
     http://bugs.python.org/issue14329
#14326: IDLE - allow shell to support different locales
     http://bugs.python.org/issue14326
#14319: cleanup index switching mechanism on packaging.pypi
     http://bugs.python.org/issue14319
#14304: Implement utf-8-bmp codec
     http://bugs.python.org/issue14304
#14303: Incorrect documentation for socket.py on linux
     http://bugs.python.org/issue14303

Most recent 15 issues waiting for review (15)
=============================================

#14392: type=bool doesn't raise error in argparse.Action
     http://bugs.python.org/issue14392
#14391: misc TYPO in argparse.Action docstring
     http://bugs.python.org/issue14391
#14387: Include\accu.h incompatible with Windows.h
     http://bugs.python.org/issue14387
#14386: Expose dictproxy as a public type
     http://bugs.python.org/issue14386
#14385: Support other types than dict for __builtins__
     http://bugs.python.org/issue14385
#14383: Generalize the use of _Py_IDENTIFIER in ceval.c and typeobject
     http://bugs.python.org/issue14383
#14381: Intern certain integral floats for memory savings and performa
     http://bugs.python.org/issue14381
#14377: Modify serializer for xml.etree.ElementTree to allow forcing t
     http://bugs.python.org/issue14377
#14375: Add socketserver running property
     http://bugs.python.org/issue14375
#14373: C implementation of functools.lru_cache
     http://bugs.python.org/issue14373
#14371: Add support for bzip2 compression to the zipfile module
     http://bugs.python.org/issue14371
#14369: make __closure__ writable
     http://bugs.python.org/issue14369
#14368: floattime() should not raise an exception
     http://bugs.python.org/issue14368
#14367: try/except block in ismethoddescriptor() in inspect.py, so tha
     http://bugs.python.org/issue14367
#14366: Supporting bzip2 and lzma compression in zip files
     http://bugs.python.org/issue14366

Top 10 most discussed issues (10)
=================================

#14387: Include\accu.h incompatible with Windows.h
     http://bugs.python.org/issue14387  17 msgs
#14228: It is impossible to catch sigint on startup in python code
     http://bugs.python.org/issue14228  15 msgs
#14302: Move python.exe to bin/
     http://bugs.python.org/issue14302  15 msgs
#14034: Add argparse howto
     http://bugs.python.org/issue14034  13 msgs
#14361: No link to issue tracker on Python home page
     http://bugs.python.org/issue14361  13 msgs
#14331: Python/import.c uses a lot of stack space due to MAXPATHLEN
     http://bugs.python.org/issue14331  10 msgs
#14371: Add support for bzip2 compression to the zipfile module
     http://bugs.python.org/issue14371  10 msgs
#14381: Intern certain integral floats for memory savings and performa
     http://bugs.python.org/issue14381  10 msgs
#10340: asyncore doesn't properly handle EINVAL on OSX
     http://bugs.python.org/issue10340  9 msgs
#13922: argparse handling multiple "--" in args improperly
     http://bugs.python.org/issue13922  9 msgs

Issues closed (46)
==================

#1676: Fork/exec issues with Tk 8.5/Python 2.5.1 on OS X
     http://bugs.python.org/issue1676  closed by ned.deily
#3573: IDLE hangs when passing invalid command line args (directory(i
     http://bugs.python.org/issue3573  closed by asvetlov
#4652: IDLE does not work with Unicode
     http://bugs.python.org/issue4652  closed by asvetlov
#7738: IDLE hang when tooltip comes up in Linux
     http://bugs.python.org/issue7738  closed by ned.deily
#7997: http://www.python.org/dev/faq/ doesn't seem to explain how to
     http://bugs.python.org/issue7997  closed by rosslagerwall
#9408: curses: Link against libncursesw instead of libncurses
     http://bugs.python.org/issue9408  closed by haypo
#10538: PyArg_ParseTuple("s*") does not always incref object
     http://bugs.python.org/issue10538  closed by krisvale
#12757: undefined name in doctest.py
     http://bugs.python.org/issue12757  closed by r.david.murray
#12788: test_email fails with -R
     http://bugs.python.org/issue12788  closed by r.david.murray
#13009: Remove documentation in distutils2 repo
     http://bugs.python.org/issue13009  closed by eric.araujo
#13325: no address in the representation of asyncore dispatcher after
     http://bugs.python.org/issue13325  closed by giampaolo.rodola
#13694: asynchronous connect in asyncore.dispatcher does not set addr
     http://bugs.python.org/issue13694  closed by giampaolo.rodola
#13782: xml.etree.ElementTree: Element.append doesn't type-check its a
     http://bugs.python.org/issue13782  closed by eli.bendersky
#14115: 2.7.3rc and 3.2.3rc hang on test_asynchat and test_asyncore on
     http://bugs.python.org/issue14115  closed by r.david.murray
#14204: Support for the NPN extension to TLS/SSL
     http://bugs.python.org/issue14204  closed by pitrou
#14250: for string patterns regex.flags is never equal to 0
     http://bugs.python.org/issue14250  closed by python-dev
#14269: SMTPD server does not enforce client starting mail transaction
     http://bugs.python.org/issue14269  closed by r.david.murray
#14277: time.monotonic docstring does not mention the time unit return
     http://bugs.python.org/issue14277  closed by haypo
#14296: Compilation error on CentOS 5.8
     http://bugs.python.org/issue14296  closed by neologix
#14297: Custom string formatter doesn't work like builtin str.format
     http://bugs.python.org/issue14297  closed by terry.reedy
#14306: try/except block is both efficient and expensive?
     http://bugs.python.org/issue14306  closed by python-dev
#14311: ConfigParser does not parse utf-8 files with BOM bytes
     http://bugs.python.org/issue14311  closed by lukasz.langa
#14328: Add keyword-only parameter support to PyArg_ParseTupleAndKeywo
     http://bugs.python.org/issue14328  closed by larry
#14333: queue unittest errors
     http://bugs.python.org/issue14333  closed by r.david.murray
#14335: Reimplement multiprocessing's ForkingPickler using dispatch_ta
     http://bugs.python.org/issue14335  closed by pitrou
#14337: Recent refleaks
     http://bugs.python.org/issue14337  closed by benjamin.peterson
#14342: In re's examples the example with recursion doesn't work
     http://bugs.python.org/issue14342  closed by python-dev
#14343: In re's examples the example with re.split() shadows builtin i
     http://bugs.python.org/issue14343  closed by python-dev
#14346: Typos in Mac/README
     http://bugs.python.org/issue14346  closed by ned.deily
#14347: Update Misc/README
     http://bugs.python.org/issue14347  closed by ned.deily
#14348: Minor whitespace changes in base64 module
     http://bugs.python.org/issue14348  closed by eric.araujo
#14351: Script error in 3.2.3rc1 Windows doc
     http://bugs.python.org/issue14351  closed by georg.brandl
#14355: imp module docs should omit references to init_frozen
     http://bugs.python.org/issue14355  closed by r.david.murray
#14358: test_os failing with errno 61: No Data Available
     http://bugs.python.org/issue14358  closed by python-dev
#14359: _posixsubprocess.o compilation error on CentOS 5.8
     http://bugs.python.org/issue14359  closed by rosslagerwall
#14363: Can't build Python 3.3a1 on Centos 5
     http://bugs.python.org/issue14363  closed by skrah
#14370: list.extend() called on an iterator of the list itself leads t
     http://bugs.python.org/issue14370  closed by rhettinger
#14378: __future__ imports fail when compiling from python ast
     http://bugs.python.org/issue14378  closed by python-dev
#14380: MIMEText should default to utf8 charset if input text contains
     http://bugs.python.org/issue14380  closed by r.david.murray
#14382: test_unittest crashes loading 'unittest.test.testmock' when ru
     http://bugs.python.org/issue14382  closed by ned.deily
#14384: Add "default" kw argument to operator.itemgetter and operator.
     http://bugs.python.org/issue14384  closed by r.david.murray
#14388: configparser.py traceback
     http://bugs.python.org/issue14388  closed by r.david.murray
#14389: Mishandling of large numbers
     http://bugs.python.org/issue14389  closed by r.david.murray
#1222721: tk + setlocale problems...
     http://bugs.python.org/issue1222721  closed by terry.reedy
#1752252: tkFileDialog closes Python when used
     http://bugs.python.org/issue1752252  closed by terry.reedy
#14344: repr of email policies is wrong
     http://bugs.python.org/issue14344  closed by r.david.murray

From victor.stinner at gmail.com  Fri Mar 23 18:27:50 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Fri, 23 Mar 2012 18:27:50 +0100
Subject: [Python-Dev] Drop the new time.wallclock() function?
In-Reply-To:
References: <4F6137EF.9000000@gmail.com>
Message-ID:

> I want time.steady(strict=True), and I'm glad you're providing it and
> I'm willing to use it this way, although it is slightly annoying
> because "time.steady(strict=True)" really means
> "time.steady(i_really_mean_it=True)". Else, I would have used
> "time.time()".
>
> I am aware of a large number of use cases for a steady clock (event
> scheduling, profiling, timeouts), and a large number of uses cases for
> a "NTP-respecting wall clock" clock (calendaring, displaying to a
> user, timestamping). I'm not aware of any use case for "steady if
> implemented, else wall-clock", and it sounds like a mistake to me.

time.steady(strict=False) is what you need to implement timeout.

If you use time.steady(strict=True) for timeout, it means that you
cannot use select, threads, etc. if your platform doesn't provide
monotonic clock, whereas it works "well" (except the issue of adjusted
time) with Python < 3.3.
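The timeout pattern described here can be sketched as a deadline loop against a steady clock. This is an illustration only, not the stdlib code under discussion; it uses the `time.monotonic` spelling (the name the thread later converges on, available from Python 3.3) where the proposed `time.steady(strict=True)` would go.

```python
import time

def wait_until(predicate, timeout):
    """Poll `predicate` until it returns True or `timeout` seconds pass.

    A monotonic clock keeps the timeout honest: an NTP or manual clock
    step cannot stretch or shrink the wait, which is exactly the risk
    with a wall-clock fallback.
    """
    deadline = time.monotonic() + timeout
    while not predicate():
        if time.monotonic() >= deadline:
            return False
        time.sleep(0.01)
    return True

print(wait_until(lambda: True, 0.5))    # -> True (predicate already holds)
print(wait_until(lambda: False, 0.05))  # -> False (deadline expires)
```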
Victor

From bradallen137 at gmail.com  Fri Mar 23 18:26:22 2012
From: bradallen137 at gmail.com (Brad Allen)
Date: Fri, 23 Mar 2012 12:26:22 -0500
Subject: [Python-Dev] Issue 13524: subprocess on Windows
In-Reply-To: <61913F1B-1DFC-4EA5-AF29-25BBF24DA4DA@twistedmatrix.com>
References: <61913F1B-1DFC-4EA5-AF29-25BBF24DA4DA@twistedmatrix.com>
Message-ID:

On Thu, Mar 22, 2012 at 2:35 PM, Glyph Lefkowitz wrote:
> Also, in order to execute in any installation environment where
> libraries are found in non-default locations, you will need to set
> LD_LIBRARY_PATH. Oh, and you will also need to set $PATH on UNIX so
> that libraries can find their helper programs and %PATH% on Windows so
> that any compiled dynamically-loadable modules and/or DLLs can be
> loaded. And by the way you will also need to relay DYLD_LIBRARY_PATH
> if you did a UNIX-style build on OS X, not LD_LIBRARY_PATH. Don't
> forget that you probably also need PYTHONPATH to make sure any
> subprocess environments can import the same modules as their parent.
> Not to mention SSH_AUTH_SOCK if your application requires access to
> _remote_ process spawning, rather than just local. Oh and DISPLAY in
> case your subprocesses need GUI support from an X11 program (which
> sometimes you need just to initialize certain libraries which don't
> actually do anything with a GUI). Oh and __CF_USER_TEXT_ENCODING is
> important sometimes too, don't forget that. And if your subprocess is
> in Perl or Ruby or Java you may need a couple dozen other variables
> which your deployment environment has set for you too. Did I mention
> CFLAGS or LC_ALL yet? Let me tell you a story about this one HP/UX
> machine...
>
> Ahem.
>
> Bottom line: it seems like screwing with the process spawning
> environment to make it minimal is a good idea for simplicity, for
> security, and for modularity. But take it from me, it isn't. I
> guarantee you that you don't actually know what is in your operating
> system's environment, and initializing it is a complicated many-step
> dance which some vendor or sysadmin or product integrator figured out
> how to do much better than your hapless Python program can.
>
> %SystemRoot% is just the tip of a very big, very nasty iceberg. Better
> not to keep refining why exactly it's required, or someone will
> eventually be adding a new variable (starting with %APPDATA% and
> %HOMEPATH%) that can magically cause your subprocess not to spawn
> properly to this page every six months for eternity. If you're
> spawning processes as a regular user, you should just take the
> environment you're given, perhaps with a few specific light additions
> whose meaning you understand. If you're spawning a process as an
> administrator or root, you should probably initialize the environment
> for the user you want to spawn that process as using an OS-specific
> mechanism like login(1). (Sorry that I don't know the Windows
> equivalent.)

Thanks, Glyph. In that case maybe the Python subprocess docs need not
single out SystemRoot, but instead plaster a big warning around the use
of the 'env' parameter. Here is what the docs currently state for the
Popen constructor 'env' parameter:

> If env is not None, it must be a mapping that defines the environment
> variables for the new process; these are used instead of inheriting
> the current process' environment, which is the default behavior.
>
> Note: If specified, env must provide any variables required for the
> program to execute. On Windows, in order to run a side-by-side
> assembly the specified env must include a valid SystemRoot.

The "Note" section could instead state something like: "In most cases,
the child process will need many of the same environment variables as
the current process. Usually the safest course of action is to build
the env dict to contain all the same keys and values from os.environ.
For example...
"

From van.lindberg at gmail.com  Fri Mar 23 18:37:36 2012
From: van.lindberg at gmail.com (VanL)
Date: Fri, 23 Mar 2012 12:37:36 -0500
Subject: [Python-Dev] Python install layout and the PATH on win32 (Rationale part 1: Regularizing the layout)
In-Reply-To:
References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> <4F68FB76.7010303@skippinet.com.au> <4F68FDE7.40505@gmail.com> <4F694DFB.7050400@netwok.org> <4F69E40C.6050901@gmail.com> <4F69EB9D.1060701@egenix.com> <4F6B345C.1020406@gmail.com> <4F6B760D.3010004@gmail.com>
Message-ID: <4D5CC12CFE8E455FBD5BD49A3A295A1C@gmail.com>

On Friday, March 23, 2012 at 11:39 AM, PJ Eby wrote:
> Even if you are using tools that don't use distutils' configuration
> settings for these directories, why not simply fix those tools so that
> they do?

That's what I do currently - I set things to bin and patch Python and
the tools so that they work. However, I have considered this to be a
little bit of a wart anyway for a long time - even before I adopted my
current method of working - because it is a pointless inconsistency.

The fact that it makes virtual environments consistent across
platforms, together with pyvenv going into 3.3, gave me enough of a
push to elevate this from private annoyance to "should fix."

Thanks,

Van
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From stephen at xemacs.org  Fri Mar 23 18:51:35 2012
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Fri, 23 Mar 2012 18:51:35 +0100
Subject: [Python-Dev] Python install layout and the PATH on win32 (Rationale part 2: Moving the python.exe)
In-Reply-To:
References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> <4F68FB76.7010303@skippinet.com.au> <4F68FDE7.40505@gmail.com> <4F694DFB.7050400@netwok.org> <4F69E40C.6050901@gmail.com> <4F69EB9D.1060701@egenix.com> <4F6B3B6E.2000402@gmail.com> <4F6BAE1B.8080005@skippinet.com.au>
Message-ID:

On Fri, Mar 23, 2012 at 2:37 PM, Paul Moore wrote:
> Another problem case: cx_Freeze. This currently breaks when installed
> in a virtualenv, as it locates the "Scripts" directory by appending
> "Scripts" to the directory of the python executable.
>
> So the proposed change *will* break cx_Freeze. The Scripts->bin change
> will also break it.
>
> Paul.
>
> PS Yes, I need to report the existing bug. The point remains, however...

This seems to me to be evidence that the things that will be broken are
in need of fixing anyway.<0.5 wink/> virtualenv is something that
should Just Work in most, if not all, cases.

From jimjjewett at gmail.com  Fri Mar 23 19:03:20 2012
From: jimjjewett at gmail.com (Jim Jewett)
Date: Fri, 23 Mar 2012 14:03:20 -0400
Subject: [Python-Dev] [Python-checkins] cpython (3.2): attempt to fix asyncore buildbot failure
In-Reply-To:
References:
Message-ID:

What does this verify?

My assumption from the name (test_quick_connect) and the context (an
asynchronous server) is that it is verifying the server can handle a
certain level of load. Refusing the sockets should then be a failure,
or at least a skipped test.

Would the below fail even if asyncore.loop were taken out of the
threading.Thread target altogether?

On Fri, Mar 23, 2012 at 10:10 AM, giampaolo.rodola wrote:
> http://hg.python.org/cpython/rev/2db4e916245a
> changeset:   75901:2db4e916245a
> branch:      3.2
> parent:      75897:b97964af7299
> user:        Giampaolo Rodola'
> date:        Fri Mar 23 15:07:07 2012 +0100
> summary:
>   attempt to fix asyncore buildbot failure
>
> files:
>   Lib/test/test_asyncore.py |  10 +++++++---
>   1 files changed, 7 insertions(+), 3 deletions(-)
>
>
> diff --git a/Lib/test/test_asyncore.py b/Lib/test/test_asyncore.py
> --- a/Lib/test/test_asyncore.py
> +++ b/Lib/test/test_asyncore.py
> @@ -741,11 +741,15 @@
>
>          for x in range(20):
>              s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
> +            s.settimeout(.2)
>              s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
>                           struct.pack('ii', 1, 0))
> -            s.connect(server.address)
> -            s.close()
> -
> +            try:
> +                s.connect(server.address)
> +            except socket.error:
> +                pass
> +            finally:
> +                s.close()
>
>  class TestAPI_UseSelect(BaseTestAPI):
>      use_poll = False
>
> --
> Repository URL: http://hg.python.org/cpython
>
> _______________________________________________
> Python-checkins mailing list
> Python-checkins at python.org
> http://mail.python.org/mailman/listinfo/python-checkins
>

From pje at telecommunity.com  Fri Mar 23 19:35:38 2012
From: pje at telecommunity.com (PJ Eby)
Date: Fri, 23 Mar 2012 14:35:38 -0400
Subject: [Python-Dev] Python install layout and the PATH on win32 (Rationale part 1: Regularizing the layout)
In-Reply-To: <4D5CC12CFE8E455FBD5BD49A3A295A1C@gmail.com>
References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> <4F68FB76.7010303@skippinet.com.au> <4F68FDE7.40505@gmail.com> <4F694DFB.7050400@netwok.org> <4F69E40C.6050901@gmail.com> <4F69EB9D.1060701@egenix.com> <4F6B345C.1020406@gmail.com> <4F6B760D.3010004@gmail.com> <4D5CC12CFE8E455FBD5BD49A3A295A1C@gmail.com>
Message-ID:

On Mar 23, 2012 1:37 PM, "VanL" wrote:
>
> On Friday, March 23, 2012 at 11:39 AM, PJ Eby wrote:
>> Even if you are using tools that don't use distutils' configuration
>> settings for these directories, why not
>> simply fix those tools so that they do?
>
> That's what I do currently - I set things to bin and patch Python and
> the tools so that they work.

Patch *Python*? Where? Are you talking about the
distutils/distutils.cfg file? My point here is that AFAIK, Python
already supports your desired layout - so your use case doesn't provide
much of an argument in favor of making it the default.

> However, I have considered this to be a little bit of a wart anyway
> for a long time - even before I adopted my current method of working -
> because it is a pointless inconsistency.

In other words, that's the real reason - which, as it's already been
pointed out, is not much of an argument in favor of changing it.
Consistency with previous Python releases seems a far more *useful*
consistency to maintain than cross-platform consistency, which is only
of relevance to cross-platform developers -- at best only a subset of
the Windows developer audience. Worse, changing it means that tools
have to grow version-specific code, not just platform-specific code.

> The fact that it makes virtual environments consistent across
> platforms,

Not really seeing a point. Home directory layouts, "develop" installs,
.pth files, -m scripts... there are *zillions* of ways to develop code
in cross-platform directory layouts, including the one you're using
now.

Tool developers are going "meh" about your proposal because it doesn't
actually solve any problems for them: they still have to support the
old layout, and if their code already uses distutils' facilities for
obtaining paths, there's nothing they gain from the change. IOW, the
only person who gains from the consistency is someone who wants their
virtualenvs to look the same and check them into source. I'm really not
seeing this as being a big enough group to be worth inconveniencing
other people for, vs. telling them to add bin/ to PATH on Windows and
edit a distutils config file.
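For readers unfamiliar with the mechanism being alluded to, a distutils configuration file can redirect install locations per user or per installation. This fragment is illustrative only - the paths are examples, not a recommendation; `$base` is expanded by distutils to the installation base:

```ini
; ~/.pydistutils.cfg (Unix) or pydistutils.cfg in the home directory (Windows)
[install]
install-scripts = $base/bin
install-data = $base/share
```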
At best, this might be deserving of a FAQ entry on how to set up cross
platform development environments.

AFAICT, virtualenvs are overkill for most development anyway. If you're
not using distutils except to install dependencies, then configure
distutils to install scripts and libraries to the same directory, and
then do all your development in that directory. Presto! You now have a
cross-platform "virtualenv". Want the scripts on your path? Add that
directory to your path... or if on Windows, don't bother, since the
current directory is usually on the path. (In fact, if you're only
using easy_install to install your dependencies, you don't even need to
edit the distutils configuration, just use "-md targetdir".)

The entire virtualenv concept was originally introduced as a way for
non-root *nix users to have private site-packages directories with .pth
support, in order to be able to install eggs -- a use case which was
then solved by user-specific site directories in Python 2.6, and the
addition of the site.py hacks in easy_install (to allow any directory
to be a virtualenv as far as easy_install was concerned).

Virtualenv seems to have caught on for a variety of other uses than
that, but AFAIK, that's only because it's the most *visible* solution
for those uses. Just dumping things in a directory adjacent to the
corresponding scripts is the original virtualenv, and it still works
just dandy -- most people just don't *know* this. (And again, if there
are tools out there that *don't* support single-directory virtualenvs,
the best answer is probably to fix them.)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From carl at oddbird.net  Fri Mar 23 20:53:05 2012
From: carl at oddbird.net (Carl Meyer)
Date: Fri, 23 Mar 2012 13:53:05 -0600
Subject: [Python-Dev] Python install layout and the PATH on win32 (Rationale part 1: Regularizing the layout)
In-Reply-To:
References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> <4F68FB76.7010303@skippinet.com.au> <4F68FDE7.40505@gmail.com> <4F694DFB.7050400@netwok.org> <4F69E40C.6050901@gmail.com> <4F69EB9D.1060701@egenix.com> <4F6B345C.1020406@gmail.com> <4F6B760D.3010004@gmail.com> <4D5CC12CFE8E455FBD5BD49A3A295A1C@gmail.com>
Message-ID: <4F6CD4A1.6030301@oddbird.net>

Hi PJ,

On 03/23/2012 12:35 PM, PJ Eby wrote:
> AFAICT, virtualenvs are overkill for most development anyway. If
> you're not using distutils except to install dependencies, then
> configure distutils to install scripts and libraries to the same
> directory, and then do all your development in that directory. Presto!
> You now have a cross-platform "virtualenv".

Creating and using a virtualenv is, in practice, _easier_ than any of
those alternatives, so it's hard to see it as "overkill." Not to
mention that the "isolation from system site-packages" feature is quite
popular (the outpouring of gratitude when virtualenv went
isolated-by-default a few months ago was astonishing), and AFAIK none
of your alternative proposals support that at all.

Carl
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 198 bytes
Desc: OpenPGP digital signature
URL:

From van.lindberg at gmail.com  Fri Mar 23 21:19:08 2012
From: van.lindberg at gmail.com (VanL)
Date: Fri, 23 Mar 2012 15:19:08 -0500
Subject: [Python-Dev] Python install layout and the PATH on win32 (Rationale part 1: Regularizing the layout)
In-Reply-To:
References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> <4F68FB76.7010303@skippinet.com.au> <4F68FDE7.40505@gmail.com> <4F694DFB.7050400@netwok.org> <4F69E40C.6050901@gmail.com> <4F69EB9D.1060701@egenix.com> <4F6B345C.1020406@gmail.com> <4F6B760D.3010004@gmail.com> <4D5CC12CFE8E455FBD5BD49A3A295A1C@gmail.com>
Message-ID:

On Friday, March 23, 2012 at 1:35 PM, PJ Eby wrote:
> Tool developers are going "meh" about your proposal because it doesn't
> actually solve any problems for them: they still have to support the
> old layout, and if their code already uses distutils' facilities for
> obtaining paths, there's nothing they gain from the change.

Three notes.

First, distutils.cfg doesn't always work because it is centered around
the idea of set paths that are the same each time - which doesn't
always work with virtualenvs. Further, a number of tools find that it
doesn't work (haven't seen it myself, but look at the comments in
pypm's installer). So yes, I patch Python.

Second, most installer tools don't follow distutils.cfg. Even if that
helps for python setup.py install, the other tools are still broken
when you want to specify a layout. That is why changing the defaults is
the only effective way to make this change - because the defaults drive
what is actually implemented. I know, because I have looked at and
patched these tools to make them work.

Third, there are some tool makers going meh - because you are right,
this is not a problem they have. This is a problem associated with
using those tools. And regardless of there being other ways to do it,
including your 'dump it in a directory' method, development in
virtualenvs is convenient, widespread, and on the rise. Given that
pyvenv will go into 3.3, it will be the 'one obvious way to do it' -
making going-forward cross-platform compatibility a positive good.
Again I note the example of buildout.

And fourth, (because nobody expects the Spanish Inquisition), isn't the
gratuitous difference a (small but) obvious wart? Does anybody
positively like 'Scripts'? The most common comment I have received when
talking to people off this list has been, 'yeah, that was always sort
of weird.'
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nas at arctrix.com  Fri Mar 23 21:21:37 2012
From: nas at arctrix.com (Neil Schemenauer)
Date: Fri, 23 Mar 2012 20:21:37 +0000 (UTC)
Subject: [Python-Dev] Issue #10278 -- why not just an attribute?
References: <4f67fec7.65aa320a.62e4.0780@mx.google.com>
Message-ID:

Jim J. Jewett wrote:
> Passing strict as an argument seems like overkill since it will always
> be meaningless on some (most?) platforms.

A keyword argument that gets passed as a constant in the caller is
usually poor API. Why not have two different functions?

Neil

From glyph at twistedmatrix.com  Fri Mar 23 21:40:34 2012
From: glyph at twistedmatrix.com (Glyph)
Date: Fri, 23 Mar 2012 16:40:34 -0400
Subject: [Python-Dev] Drop the new time.wallclock() function?
In-Reply-To:
References: <4F6137EF.9000000@gmail.com>
Message-ID:

On Mar 23, 2012, at 12:55 PM, Zooko Wilcox-O'Hearn wrote:
>> I merged the two functions into one function: time.steady(strict=False).
>>
>> time.steady() should be monotonic most of the time, but may use a fallback.
>>
>> time.steady(strict=True) fails with OSError or NotImplementedError if
>> reading the monotonic clock failed or if no monotonic clock is available.
>
> If someone wants time.steady(strict=False), then why don't they just
> continue to use time.time()?
>
> I want time.steady(strict=True), and I'm glad you're providing it and
> I'm willing to use it this way, although it is slightly annoying
> because "time.steady(strict=True)" really means
> "time.steady(i_really_mean_it=True)". Else, I would have used
> "time.time()".
>
> I am aware of a large number of use cases for a steady clock (event
> scheduling, profiling, timeouts), and a large number of uses cases for
> a "NTP-respecting wall clock" clock (calendaring, displaying to a
> user, timestamping). I'm not aware of any use case for "steady if
> implemented, else wall-clock", and it sounds like a mistake to me.

I think I've lost the thread of this discussion. Is that really what
"strict=False" was supposed to mean?

I am aware of use-cases which want to respect slew, but reject steps.
The local clock might not be all that reliable, and slew actually keeps
it closer to "real" time. My understanding was that strict=True was
something like CLOCK_MONOTONIC_RAW and strict=False was just
CLOCK_MONOTONIC.

I am increasingly thinking that the first order of business here should
be to expose the platform-specific mechanisms directly, then try to
come up with a unifying abstraction in the time module later. It's hard
enough to understand the substantially dense verbiage around all of
these different timers on their respective platforms; understanding
which one exactly Python is swaddling up in a portability layer seems
bound to generate confusion. Not to mention that you run into awesome
little edge cases like this:

https://github.com/ThomasHabets/monotonic_clock/blob/master/src/monotonic_win32.c#L62

which means sometimes you really really need to know exactly which
clock is getting used, if you want to make it work right (unless Python
is going to ship with all of these workarounds on day 1).

(I still object to the "time.steady" naming, because this is what
people in the make-believe land of C++ call it.
The people who live in the real world of C and POSIX all refer to it as
"monotonic". And even the C++ purists slip up sometimes, c.f.: "Class
std::chrono::steady_clock represents a monotonic clock.")

If there really are some applications for which the desired behavior is
'monotonic clock, but if you can't get one, nevermind, wallclock is
good enough', it strikes me that this should be as explicit as
possible, it should not be the default, and if

    try:
        value = time.steady()
    except OhWowYourComputerReallyHasProblems:
        value = time.time()

is generally considered too onerous, it should be spelled
time.steady(fallback=time.time).

-glyph
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From glyph at twistedmatrix.com  Fri Mar 23 21:46:53 2012
From: glyph at twistedmatrix.com (Glyph)
Date: Fri, 23 Mar 2012 16:46:53 -0400
Subject: [Python-Dev] Issue 13524: subprocess on Windows
In-Reply-To: <61913F1B-1DFC-4EA5-AF29-25BBF24DA4DA@twistedmatrix.com>
References: <61913F1B-1DFC-4EA5-AF29-25BBF24DA4DA@twistedmatrix.com>
Message-ID:

On Mar 23, 2012, at 1:26 PM, Brad Allen wrote:
> Thanks, Glyph. In that case maybe the Python subprocess docs need not
> single out SystemRoot, but instead plaster a big warning around the
> use of the 'env' parameter.

I agree. I'm glad that my bitter experience here might be useful to
someone in the future - all those late nights trying desperately to get
my unit tests to run on some newly configured, slightly weird buildbot
didn't go to waste :).

> The "Note" section could instead state something like: "In most cases,
> the child process will need many of the same environment variables as
> the current process. Usually the safest course of action is to build
> the env dict to contain all the same keys and values from os.environ.
> For example... "

I think including all the examples might be overstating the case. It is
probably best to say that other operating systems, vendors, and
integration tools may set necessary environment variables that there is
no way for you to be aware of in advance, unless you are an expert
sysadmin on every platform where you expect your code to run, and that
many of these variables are required for libraries to function
properly, both libraries bundled with Python and those from third
parties.

-glyph
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From p.f.moore at gmail.com  Fri Mar 23 22:11:40 2012
From: p.f.moore at gmail.com (Paul Moore)
Date: Fri, 23 Mar 2012 21:11:40 +0000
Subject: [Python-Dev] Drop the new time.wallclock() function?
In-Reply-To:
References: <4F6137EF.9000000@gmail.com>
Message-ID:

On 23 March 2012 20:40, Glyph wrote:
> I am increasingly thinking that the first order of business here
> should be to expose the platform-specific mechanisms directly, then
> try to come up with a unifying abstraction in the time module later.

+1.
Paul

From bradallen137 at gmail.com  Fri Mar 23 22:31:12 2012
From: bradallen137 at gmail.com (Brad Allen)
Date: Fri, 23 Mar 2012 16:31:12 -0500
Subject: [Python-Dev] Issue 13524: subprocess on Windows
In-Reply-To:
References: <61913F1B-1DFC-4EA5-AF29-25BBF24DA4DA@twistedmatrix.com>
Message-ID:

On Fri, Mar 23, 2012 at 3:46 PM, Glyph wrote:
> On Mar 23, 2012, at 1:26 PM, Brad Allen wrote:
>
> Thanks, Glyph. In that case maybe the Python subprocess docs need not
> single out SystemRoot, but instead plaster a big warning around the
> use of the 'env' parameter.
>
> I agree. I'm glad that my bitter experience here might be useful to
> someone in the future - all those late nights trying desperately to
> get my unit tests to run on some newly configured, slightly weird
> buildbot didn't go to waste :).

Ok, I'll open a ticket on the bugtracker for this over the weekend.
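The advice under discussion - start from os.environ and override only what you control, rather than constructing env from scratch - can be sketched like this. MYAPP_MODE is a made-up variable for illustration; the sketch is not proposed documentation wording.

```python
import os
import subprocess
import sys

# Inherit everything the parent process has (SystemRoot, PATH,
# LD_LIBRARY_PATH, and whatever else the platform quietly depends on),
# then add or override only the keys you understand.
env = os.environ.copy()
env["MYAPP_MODE"] = "batch"  # hypothetical application-specific variable

out = subprocess.check_output(
    [sys.executable, "-c", "import os; print(os.environ['MYAPP_MODE'])"],
    env=env,
)
print(out.decode().strip())  # -> batch
```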
From yselivanov.ml at gmail.com  Fri Mar 23 23:54:42 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Fri, 23 Mar 2012 18:54:42 -0400
Subject: [Python-Dev] Drop the new time.wallclock() function?
In-Reply-To:
References: <4F6137EF.9000000@gmail.com>
Message-ID: <47D9CBF0-CCA5-4CBB-8899-BAC2027C4F19@gmail.com>

On 2012-03-23, at 4:40 PM, Glyph wrote:
> (I still object to the "time.steady" naming, because this is what
> people in the make-believe land of C++ call it. The people who live in
> the real world of C and POSIX all refer to it as "monotonic". And even
> the C++ purists slip up sometimes, c.f.: "Class
> std::chrono::steady_clock represents a monotonic clock.")

+1. I also think that the function should be called 'monotonic' and
simply fail with OSError on platforms that don't support such clocks.
The 'strict' argument is non-intuitive.

- Yury

From yselivanov.ml at gmail.com  Fri Mar 23 23:56:08 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Fri, 23 Mar 2012 18:56:08 -0400
Subject: [Python-Dev] Drop the new time.wallclock() function?
In-Reply-To:
References: <4F6137EF.9000000@gmail.com>
Message-ID: <79DA2DE9-5285-489E-B252-26D897DC022E@gmail.com>

On 2012-03-23, at 1:27 PM, Victor Stinner wrote:
>> I want time.steady(strict=True), and I'm glad you're providing it and
>> I'm willing to use it this way, although it is slightly annoying
>> because "time.steady(strict=True)" really means
>> "time.steady(i_really_mean_it=True)". Else, I would have used
>> "time.time()".
>>
>> I am aware of a large number of use cases for a steady clock (event
>> scheduling, profiling, timeouts), and a large number of uses cases for
>
> If you use time.steady(strict=True) for timeout, it means that you
> cannot use select, threads, etc. if your platform doesn't provide
> monotonic clock, whereas it works "well" (except the issue of adjusted
> time) with Python < 3.3.

Why can't I use select & threads? You mean that if a platform does not
support monotonic clocks it also does not support threads and select sys
call?

- Yury

From anacrolix at gmail.com  Sat Mar 24 00:06:46 2012
From: anacrolix at gmail.com (Matt Joiner)
Date: Sat, 24 Mar 2012 07:06:46 +0800
Subject: [Python-Dev] Drop the new time.wallclock() function?
In-Reply-To: <47D9CBF0-CCA5-4CBB-8899-BAC2027C4F19@gmail.com>
References: <4F6137EF.9000000@gmail.com>
	<47D9CBF0-CCA5-4CBB-8899-BAC2027C4F19@gmail.com>
Message-ID: 

Yes, call it what it is. monotonic or monotonic_time, because that's why
I'm using it. No flags.

I've followed this thread throughout, and I'm still not sure if "steady"
gives the real guarantees it claims. It's trying to be too much. Existing
bugs complain about backward jumps and demand a clock that doesn't do
this. The function should guarantee monotonicity only, and not get
overcomplicated.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From victor.stinner at gmail.com  Sat Mar 24 00:07:45 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Sat, 24 Mar 2012 00:07:45 +0100
Subject: [Python-Dev] Drop the new time.wallclock() function?
In-Reply-To: <79DA2DE9-5285-489E-B252-26D897DC022E@gmail.com>
References: <4F6137EF.9000000@gmail.com>
	<79DA2DE9-5285-489E-B252-26D897DC022E@gmail.com>
Message-ID: 

2012/3/23 Yury Selivanov :
> Why can't I use select & threads? You mean that if a platform does not
> support monotonic clocks it also does not support threads and select sys
> call?

Python 3.3 now uses time.steady(strict=False) in the threading and
queue modules.
If we replace it by time.steady(strict=True), you may get an error if
your platform doesn't provide a monotonic clock, and so you cannot use
these modules.

Victor

From yselivanov.ml at gmail.com  Sat Mar 24 00:21:57 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Fri, 23 Mar 2012 19:21:57 -0400
Subject: [Python-Dev] Drop the new time.wallclock() function?
In-Reply-To: 
References: <4F6137EF.9000000@gmail.com>
	<79DA2DE9-5285-489E-B252-26D897DC022E@gmail.com>
Message-ID: <9FA0018E-08D0-48A4-8D9F-B97225973C87@gmail.com>

On 2012-03-23, at 7:07 PM, Victor Stinner wrote:
> 2012/3/23 Yury Selivanov :
>> Why can't I use select & threads? You mean that if a platform does not
>> support monotonic clocks it also does not support threads and select sys
>> call?
>
> Python 3.3 now uses time.steady(strict=False) in the threading and
> queue modules. If we replace it by time.steady(strict=True), you may
> get an error if your platform doesn't provide a monotonic clock and so
> you cannot use these modules.

Why won't this work?

try:
    from time import monotonic as _time
except ImportError:
    from time import time as _time

OR (if we decide to fail on the first call, instead of ImportError)

import time
try:
    time.monotonic()
except OSError:
    _time = time.time
else:
    _time = time.monotonic

And then just use '_time' in your code?

What's the deal with the 'strict' kwarg?

I really like how it currently works with epoll, for instance. It either
exists in the 'select' module, or it doesn't, if the host OS lacks
support for it. I think it should be the same for 'time.monotonic'.

- Yury

From victor.stinner at gmail.com  Sat Mar 24 00:25:41 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Sat, 24 Mar 2012 00:25:41 +0100
Subject: [Python-Dev] Rename time.steady(strict=True) to time.monotonic()?
Message-ID: 

Hi,

time.steady(strict=True) looks to be confusing for most people, some
of them don't understand the purpose of the flag and others don't like
a flag changing the behaviour of the function.
I propose to replace time.steady(strict=True) by time.monotonic().
That would avoid the need of an ugly NotImplementedError: if the OS
has no monotonic clock, time.monotonic() will just not exist.

So we will have:

- time.time(): realtime, can be adjusted by the system administrator
(manually) or automatically by NTP
- time.clock(): monotonic clock on Windows, CPU time on UNIX
- time.monotonic(): monotonic clock, its speed may or may not be
adjusted by NTP but it only goes forward, may raise an OSError
- time.steady(): monotonic clock or the realtime clock, depending on
what is available on the platform (use monotonic in priority). may be
adjusted by NTP or the system administrator, may go backward.

time.steady() is something like:

try:
    return time.monotonic()
except (NotImplementedError, OSError):
    return time.time()

time.time(), time.clock(), time.steady() are always available, whereas
time.monotonic() will not be available on some platforms.

Victor

From brian at python.org  Sat Mar 24 00:28:28 2012
From: brian at python.org (Brian Curtin)
Date: Fri, 23 Mar 2012 18:28:28 -0500
Subject: [Python-Dev] Rename time.steady(strict=True) to time.monotonic()?
In-Reply-To: 
References: 
Message-ID: 

On Mar 23, 2012 6:25 PM, "Victor Stinner" wrote:
>
> Hi,
>
> time.steady(strict=True) looks to be confusing for most people, some
> of them don't understand the purpose of the flag and others don't like
> a flag changing the behaviour of the function.
>
> I propose to replace time.steady(strict=True) by time.monotonic().
> That would avoid the need of an ugly NotImplementedError: if the OS
> has no monotonic clock, time.monotonic() will just not exist.
>
> So we will have:
>
> - time.time(): realtime, can be adjusted by the system administrator
> (manually) or automatically by NTP
> - time.clock(): monotonic clock on Windows, CPU time on UNIX
> - time.monotonic(): monotonic clock, its speed may or may not be
> adjusted by NTP but it only goes forward, may raise an OSError
> - time.steady(): monotonic clock or the realtime clock, depending on
> what is available on the platform (use monotonic in priority). may be
> adjusted by NTP or the system administrator, may go backward.
>
> time.steady() is something like:
>
> try:
>     return time.monotonic()
> except (NotImplementedError, OSError):
>     return time.time()
>
> time.time(), time.clock(), time.steady() are always available, whereas
> time.monotonic() will not be available on some platforms.
>
> Victor

This seems like it should have been a PEP, or maybe should become a PEP.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From victor.stinner at gmail.com  Sat Mar 24 00:36:31 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Sat, 24 Mar 2012 00:36:31 +0100
Subject: [Python-Dev] Rename time.steady(strict=True) to time.monotonic()?
In-Reply-To: 
References: 
Message-ID: 

> This seems like it should have been a PEP, or maybe should become a PEP.

I replaced time.wallclock() by time.steady(strict=False) and
time.monotonic() by time.steady(strict=True). This change solved the
naming issue of time.wallclock(), but it was a bad idea to merge the
monotonic() feature into time.steady(). It looks like everybody
agrees, am I wrong?

Victor

From yselivanov.ml at gmail.com  Sat Mar 24 00:38:10 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Fri, 23 Mar 2012 19:38:10 -0400
Subject: [Python-Dev] Rename time.steady(strict=True) to time.monotonic()?
In-Reply-To: 
References: 
Message-ID: <4FB90B27-4778-4FE9-B93B-887A730BCBEC@gmail.com>

On 2012-03-23, at 7:28 PM, Brian Curtin wrote:
> This seems like it should have been a PEP, or maybe should become a PEP.

Why? AFAIK Victor just proposes to add two new functions: monotonic() and
steady(). time() and clock() already exist and won't be changed.

- Yury

From yselivanov.ml at gmail.com  Sat Mar 24 00:42:17 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Fri, 23 Mar 2012 19:42:17 -0400
Subject: [Python-Dev] Rename time.steady(strict=True) to time.monotonic()?
In-Reply-To: 
References: 
Message-ID: <61C7F677-7F04-4FD6-AC66-4595374F402A@gmail.com>

On 2012-03-23, at 7:25 PM, Victor Stinner wrote:
> - time.steady(): monotonic clock or the realtime clock, depending on
> what is available on the platform (use monotonic in priority). may be
> adjusted by NTP or the system administrator, may go backward.
>
> time.steady() is something like:
>
> try:
>     return time.monotonic()
> except (NotImplementedError, OSError):
>     return time.time()

Is the use of weak monotonic time so wide-spread in the stdlib that we
need the 'steady()' function? If it's just two modules then it's not
worth adding it.

- Yury

From victor.stinner at gmail.com  Sat Mar 24 00:51:39 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Sat, 24 Mar 2012 00:51:39 +0100
Subject: [Python-Dev] Rename time.steady(strict=True) to time.monotonic()?
In-Reply-To: <61C7F677-7F04-4FD6-AC66-4595374F402A@gmail.com>
References: 
	<61C7F677-7F04-4FD6-AC66-4595374F402A@gmail.com>
Message-ID: 

>> time.steady() is something like:
>>
>> try:
>>     return time.monotonic()
>> except (NotImplementedError, OSError):
>>     return time.time()
>
> Is the use of weak monotonic time so wide-spread in the stdlib that we
> need the 'steady()' function? If it's just two modules then it's not
> worth adding it.

The Python standard library is not written to be used by Python itself,
but by others.
The try/except is a common pattern when applications use a monotonic
clock. I suppose that almost all applications use this try/except
pattern. I don't see a use case that requires a truly monotonic clock.

Victor

From steve at pearwood.info  Sat Mar 24 01:02:36 2012
From: steve at pearwood.info (Steven D'Aprano)
Date: Sat, 24 Mar 2012 11:02:36 +1100
Subject: [Python-Dev] Rename time.steady(strict=True) to time.monotonic()?
In-Reply-To: 
References: 
Message-ID: <4F6D0F1C.1080404@pearwood.info>

Victor Stinner wrote:
[...]
> So we will have:
>
> - time.time(): realtime, can be adjusted by the system administrator
> (manually) or automatically by NTP
> - time.clock(): monotonic clock on Windows, CPU time on UNIX
> - time.monotonic(): monotonic clock, its speed may or may not be
> adjusted by NTP but it only goes forward, may raise an OSError

This all sounds good to me. +1 up to this point.

Question: under what circumstances will monotonic() exist but raise
OSError?

> - time.steady(): monotonic clock or the realtime clock, depending on
> what is available on the platform (use monotonic in priority). may be
> adjusted by NTP or the system administrator, may go backward.

What makes this "steady", given that it can be adjusted and it can go
backwards? Doesn't sound steady to me.

Is steady() merely a convenience function to avoid the user having
to write something like this?

try:
    mytimer = time.monotonic
except AttributeError:
    mytimer = time.time

or inline:

> try:
>     return time.monotonic()
> except (NotImplementedError, OSError):
>     return time.time()

Should that be (AttributeError, OSError) instead?

-- Steven

From steve at pearwood.info  Sat Mar 24 01:16:10 2012
From: steve at pearwood.info (Steven D'Aprano)
Date: Sat, 24 Mar 2012 11:16:10 +1100
Subject: [Python-Dev] Rename time.steady(strict=True) to time.monotonic()?
In-Reply-To: 
References: 
Message-ID: <4F6D124A.3000607@pearwood.info>

Victor Stinner wrote:
> - time.clock(): monotonic clock on Windows, CPU time on UNIX

Actually, I think that is not correct. Or at least *was* not correct in
2006.

http://bytes.com/topic/python/answers/527849-time-clock-going-backwards

-- Steven

From victor.stinner at gmail.com  Sat Mar 24 01:25:42 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Sat, 24 Mar 2012 01:25:42 +0100
Subject: [Python-Dev] Rename time.steady(strict=True) to time.monotonic()?
In-Reply-To: <4F6D0F1C.1080404@pearwood.info>
References: 
	<4F6D0F1C.1080404@pearwood.info>
Message-ID: 

> Question: under what circumstances will monotonic() exist but raise OSError?

On Windows, OSError is raised if QueryPerformanceFrequency fails.
Extract of the Microsoft doc: "If the function fails, the return value
is zero. To get extended error information, call GetLastError. For
example, if the installed hardware does not support a high-resolution
performance counter, the function fails."

On UNIX, OSError is raised if clock_gettime(CLOCK_MONOTONIC) fails.
Extract of the clock_gettime() doc: "ERRORS EINVAL The clk_id specified
is not supported on this system." It may occur if the libc exposes
CLOCK_MONOTONIC but the kernel doesn't support it. I don't know if it
can occur in practice.

>> - time.steady(): monotonic clock or the realtime clock, depending on
>> what is available on the platform (use monotonic in priority). may be
>> adjusted by NTP or the system administrator, may go backward.
>
> What makes this "steady", given that it can be adjusted and it can go
> backwards? Doesn't sound steady to me.

In practice, it will be monotonic in most cases. The "steady" name is
used instead of "monotonic" because it may not be monotonic in other
cases.

> Is steady() merely a convenience function to avoid the user having
> to write something like this?

steady() remembers if the last call to monotonic failed or not.
The real implementation is closer to something like: def steady(): if not steady.has_monotonic: return time.time() try: return time.monotonic() except (AttributeError, OSError): steady.has_monotonic = False return time.time() steady.has_monotonic = True Victor From victor.stinner at gmail.com Sat Mar 24 01:45:37 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Sat, 24 Mar 2012 01:45:37 +0100 Subject: [Python-Dev] Rename time.steady(strict=True) to time.monotonic()? In-Reply-To: <4F6D124A.3000607@pearwood.info> References: <4F6D124A.3000607@pearwood.info> Message-ID: >> - time.clock(): monotonic clock on Windows, CPU time on UNIX > > > Actually, I think that is not correct. Or at least *was* not correct in > 2006. > > http://bytes.com/topic/python/answers/527849-time-clock-going-backwards Oh, I was not aware of this issue. Do you suggest to not use QueryPerformanceCounter() on Windows to implement a monotonic clock? The python-monotonic-time project uses GetTickCount64(), or GetTickCount(), on Windows. GetTickCount64() was added to Windows Seven / Server 2008. GetTickCount() overflows after 49 days. QueryPerformanceCounter() has a better resolution than GetTickCount[64](). Victor From greg.ewing at canterbury.ac.nz Sat Mar 24 02:16:17 2012 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sat, 24 Mar 2012 14:16:17 +1300 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: <5260192D-7EF1-4021-AB50-FD2021566CFF@twistedmatrix.com> References: <4F6935E1.2030309@nedbatchelder.com> <4F69E337.7010501@nedbatchelder.com> <4F6A560A.6050207@canterbury.ac.nz> <5260192D-7EF1-4021-AB50-FD2021566CFF@twistedmatrix.com> Message-ID: <4F6D2061.3080804@canterbury.ac.nz> Glyph Lefkowitz wrote: > "do I have to resize my browser every time I visit a new site to get a > decent width for reading". 
If all sites left the width to the browser, then I would be able to make my browser window a width that is comfortable for me with my chosen font size and leave it that way. The only time a site forces me to resize my window is when it thinks it has a better idea than me how wide the text should be. I prefer sites that don't try to control the layout of everything. When using a site that leaves most of it up to my browser, I never find myself wishing that the designer had specified something more tightly. However, I do often find myself wishing that the designer *hadn't* overridden the width, or the font size, or the text colour, or decided that I shouldn't be allowed to know whether I've visited links before, etc. etc. A web page is not a printed page. It is not rendered at a fixed size and viewed in its entirety at once. It needs to be flexible, able to be rendered in whatever size space is available or the user wants to devote to it. Browsers are very good at doing that -- unless the designer defeats them by fixing something that is better left flexible. -- Greg From steve at pearwood.info Sat Mar 24 03:08:58 2012 From: steve at pearwood.info (Steven D'Aprano) Date: Sat, 24 Mar 2012 13:08:58 +1100 Subject: [Python-Dev] Rename time.steady(strict=True) to time.monotonic()? In-Reply-To: References: <4F6D0F1C.1080404@pearwood.info> Message-ID: <4F6D2CBA.10205@pearwood.info> Victor Stinner wrote: >> Is steady() merely a convenience function to avoid the user having >> to write something like this? > > steady() remembers if the last call to monotonic failed or not. The > real implementation is closer to something like: > > def steady(): > if not steady.has_monotonic: > return time.time() > try: > return time.monotonic() > except (AttributeError, OSError): > steady.has_monotonic = False > return time.time() > steady.has_monotonic = True Does this mean that there are circumstances where monotonic will work for a while, but then fail? 
Otherwise, we would only need to check monotonic once, when the time module is first loaded, rather than every time it is called. Instead of the above: # global to the time module try: monotonic() except (NameError, OSError): steady = time else: steady = monotonic Are there failure modes where monotonic can recover? That is, it works for a while, then raises OSError, then works again on the next call. If so, steady will stop using monotonic and never try it again. Is that deliberate? -- Steven From janzert at janzert.com Sat Mar 24 03:12:51 2012 From: janzert at janzert.com (Janzert) Date: Fri, 23 Mar 2012 22:12:51 -0400 Subject: [Python-Dev] Rename time.steady(strict=True) to time.monotonic()? In-Reply-To: References: Message-ID: On 3/23/2012 7:25 PM, Victor Stinner wrote: [snip] > - time.monotonic(): monotonic clock, its speed may or may not be > adjusted by NTP but it only goes forward, may raise an OSError > - time.steady(): monotonic clock or the realtime clock, depending on > what is available on the platform (use monotonic in priority). may be > adjusted by NTP or the system administrator, may go backward. > > time.steady() is something like: > > try: > return time.monotonic() > except (NotImplementError, OSError): > return time.time() > I am surprised that a clock with the name time.steady() has a looser definition than one called time.monotonic(). To my mind a steady clock is by definition monotonic but a monotonic one may or may not be steady. Janzert From steve at pearwood.info Sat Mar 24 03:26:15 2012 From: steve at pearwood.info (Steven D'Aprano) Date: Sat, 24 Mar 2012 13:26:15 +1100 Subject: [Python-Dev] Rename time.steady(strict=True) to time.monotonic()? In-Reply-To: References: <4F6D124A.3000607@pearwood.info> Message-ID: <4F6D30C7.6010209@pearwood.info> Victor Stinner wrote: >>> - time.clock(): monotonic clock on Windows, CPU time on UNIX >> >> Actually, I think that is not correct. Or at least *was* not correct in >> 2006. 
>> >> http://bytes.com/topic/python/answers/527849-time-clock-going-backwards > > Oh, I was not aware of this issue. Do you suggest to not use > QueryPerformanceCounter() on Windows to implement a monotonic clock? I do not have an opinion on the best way to implement monotonic to guarantee that it actually is monotonic. -- Steven From jimjjewett at gmail.com Sat Mar 24 03:36:30 2012 From: jimjjewett at gmail.com (Jim J. Jewett) Date: Fri, 23 Mar 2012 19:36:30 -0700 (PDT) Subject: [Python-Dev] Rename time.steady(strict=True) to time.monotonic()? In-Reply-To: <4F6D0F1C.1080404@pearwood.info> Message-ID: <4f6d332e.e938b60a.3825.ffffe8f5@mx.google.com> In http://mail.python.org/pipermail/python-dev/2012-March/118024.html Steven D'Aprano wrote: > What makes this "steady", given that it can be adjusted > and it can go backwards? It is best-effort for steady, but putting "best" in the name would be an attractive nuisance. > Is steady() merely a convenience function to avoid the user > having to write something like this? > try: > mytimer = time.monotonic > except AttributeError: > mytimer = time.time That would still be worth doing. But I think the main point is that the clock *should* be monotonic, and *should* be as precise as possible. Given that it returns seconds elapsed (since an undefined start), perhaps it should be time.seconds() or even time.counter() -jJ -- If there are still threading problems with my replies, please email me with details, so that I can try to resolve them. 
-jJ

From pje at telecommunity.com  Sat Mar 24 04:21:18 2012
From: pje at telecommunity.com (PJ Eby)
Date: Fri, 23 Mar 2012 23:21:18 -0400
Subject: [Python-Dev] Python install layout and the PATH on win32 (Rationale part 1: Regularizing the layout)
In-Reply-To: 
References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com>
	<4F68FB76.7010303@skippinet.com.au> <4F68FDE7.40505@gmail.com>
	<4F694DFB.7050400@netwok.org> <4F69E40C.6050901@gmail.com>
	<4F69EB9D.1060701@egenix.com> <4F6B345C.1020406@gmail.com>
	<4F6B760D.3010004@gmail.com>
	<4D5CC12CFE8E455FBD5BD49A3A295A1C@gmail.com>
Message-ID: 

On Mar 23, 2012 4:19 PM, "VanL" wrote:
>
> Three notes. First, distutils.cfg doesn't always work because it is
centered around the idea of set paths that are the same each time - which
doesn't always work with virtualenvs.

And the virtualenv doesn't contain its own copy of distutils.cfg?

> Second, most installer tools don't follow distutils.cfg. Even if that
helps for python setup.py install, the other tools are still broken when
you want to specify a layout.

So, we should change Python to fix the broken tools that don't follow
documented standards for configuring installation locations?

If the tools are that broken, aren't they going to break even *harder*
when you change the paths for Windows?

> And fourth, (because nobody expects the Spanish Inquisition), isn't the
gratuitous difference a (small but) obvious wart?

It's hardly the only wart we keep around for backwards compatibility. If
it's going to change, it needs a proper transition period at the least.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From pje at telecommunity.com Sat Mar 24 04:22:04 2012 From: pje at telecommunity.com (PJ Eby) Date: Fri, 23 Mar 2012 23:22:04 -0400 Subject: [Python-Dev] Python install layout and the PATH on win32 (Rationale part 1: Regularizing the layout) Message-ID: On Mar 23, 2012 3:53 PM, "Carl Meyer" wrote: > > Hi PJ, > > On 03/23/2012 12:35 PM, PJ Eby wrote: > > AFAICT, virtualenvs are overkill for most development anyway. If you're > > not using distutils except to install dependencies, then configure > > distutils to install scripts and libraries to the same directory, and > > then do all your development in that directory. Presto! You now have a > > cross-platform "virtualenv". Want the scripts on your path? Add that > > directory to your path... or if on Windows, don't bother, since the > > current directory is usually on the path. (In fact, if you're only > > using easy_install to install your dependencies, you don't even need to > > edit the distutils configuration, just use "-md targetdir".) > > Creating and using a virtualenv is, in practice, _easier_ than any of > those alternatives, Really? As I said, I've never seen the need to try, since just installing stuff to a directory on PYTHONPATH seems quite easy enough for me. > that the "isolation from system site-packages" feature is quite popular > (the outpouring of gratitude when virtualenv went isolated-by-default a > few months ago was astonishing), and AFAIK none of your alternative > proposals support that at all. What is this isolation for, exactly? If you don't want site-packages on your path, why not use python -S? (Sure, nobody knows about these things, but surely that's a documentation problem, not a tooling problem.) 
Don't get me wrong, I don't have any deep objection to virtualenvs, I've just never seen the *point* (outside of the scenarios I mentioned), and thus don't see what great advantage will be had by rearranging layouts to make them shareable across platforms, when "throw stuff in a directory" seems perfectly serviceable for that use case already. Tools that *don't* support "just throw it in a directory" as a deployment option are IMO unpythonic -- practicality beats purity, after all. ;-) -------------- next part -------------- An HTML attachment was scrubbed... URL: From pje at telecommunity.com Sat Mar 24 04:30:07 2012 From: pje at telecommunity.com (PJ Eby) Date: Fri, 23 Mar 2012 23:30:07 -0400 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: <4F6D2061.3080804@canterbury.ac.nz> References: <4F6935E1.2030309@nedbatchelder.com> <4F69E337.7010501@nedbatchelder.com> <4F6A560A.6050207@canterbury.ac.nz> <5260192D-7EF1-4021-AB50-FD2021566CFF@twistedmatrix.com> <4F6D2061.3080804@canterbury.ac.nz> Message-ID: On Mar 23, 2012 9:16 PM, "Greg Ewing" wrote: > > Glyph Lefkowitz wrote: > >> "do I have to resize my browser every time I visit a new site to get a decent width for reading". > > > If all sites left the width to the browser, then I would > be able to make my browser window a width that is comfortable > for me with my chosen font size and leave it that way. > The only time a site forces me to resize my window is when > it thinks it has a better idea than me how wide the text > should be. Weird - I have the exact *opposite* problem, where I have to resize my window because somebody *didn't* set their text max-width sanely (to a reasonable value based on ems instead of pixels), and I have nearly 1920 pixels of raw text spanning my screen. Bloody impossible to read that way. But I guess this is going to turn into one of those vi vs. emacs holy war things... (Personally, I prefer jEdit, or nano if absolutely forced to edit in a terminal. 
Heretical, I know. To the comfy chair with me!)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From van.lindberg at gmail.com  Sat Mar 24 04:37:15 2012
From: van.lindberg at gmail.com (VanL)
Date: Fri, 23 Mar 2012 22:37:15 -0500
Subject: [Python-Dev] Python install layout and the PATH on win32 (Rationale part 1: Regularizing the layout)
In-Reply-To: 
References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com>
	<4F68FB76.7010303@skippinet.com.au> <4F68FDE7.40505@gmail.com>
	<4F694DFB.7050400@netwok.org> <4F69E40C.6050901@gmail.com>
	<4F69EB9D.1060701@egenix.com> <4F6B345C.1020406@gmail.com>
	<4F6B760D.3010004@gmail.com>
	<4D5CC12CFE8E455FBD5BD49A3A295A1C@gmail.com>
Message-ID: 

On Mar 23, 2012 10:21 PM, "PJ Eby" wrote:
>
>
> On Mar 23, 2012 4:19 PM, "VanL" wrote:
> >
> > Three notes. First, distutils.cfg doesn't always work because it is
centered around the idea of set paths that are the same each time - which
doesn't always work with virtualenvs.
>
> And the virtualenv doesn't contain its own copy of distutils.cfg?

It can, but a new one. Virtualenvs don't carry over the distutils.cfg from
the main installation. Thus, using distutils.cfg in the virtualenv would
require editing the .cfg for every new virtualenv - and it still wouldn't
work all the time, for the other reasons discussed.

> > Second, most installer tools don't follow distutils.cfg. Even if that
helps for python setup.py install, the other tools are still broken when
you want to specify a layout.
>
> So, we should change Python to fix the broken tools that don't follow
documented standards for configuring installation locations?

If the documented functions don't work for the use cases, there is nothing
else. Again, see the comments in PyPM.

> > If the tools are that broken, aren't they going to break even *harder*
when you change the paths for Windows?

If people substitute one hard-coded value for another, does cross-platform
consistency help?
And if that focuses attention on the new packaging APIs and the correct
way to do it, isn't that even better?

> > And fourth, (because nobody expects the Spanish Inquisition), isn't the
gratuitous difference a (small but) obvious wart?
>
> It's hardly the only wart we keep around for backwards compatibility. If
it's going to change, it needs a proper transition period at the least.

Already proposed: a transition over three releases, starting as an
off-by-default option in 3.3.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From brian at python.org  Sat Mar 24 04:38:19 2012
From: brian at python.org (Brian Curtin)
Date: Fri, 23 Mar 2012 22:38:19 -0500
Subject: [Python-Dev] Rename time.steady(strict=True) to time.monotonic()?
In-Reply-To: <4FB90B27-4778-4FE9-B93B-887A730BCBEC@gmail.com>
References: 
	<4FB90B27-4778-4FE9-B93B-887A730BCBEC@gmail.com>
Message-ID: 

On Fri, Mar 23, 2012 at 18:38, Yury Selivanov wrote:
> On 2012-03-23, at 7:28 PM, Brian Curtin wrote:
>> This seems like it should have been a PEP, or maybe should become a PEP.
>
> Why? AFAIK Victor just proposes to add two new functions: monotonic() and
> steady().

We just previously had "Drop time.monotonic() function, rename
time.wallclock() to time.steady()" checked in a few weeks ago, and now
we're renaming a variation on time.steady to time.monotonic? What's
the next move?

I'm not paying close attention here, but there's a lot of movement
going on. Figuring out the API before we get too deep is probably a
good idea.
From greg.ewing at canterbury.ac.nz Sat Mar 24 06:32:02 2012 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sat, 24 Mar 2012 18:32:02 +1300 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: References: <4F6935E1.2030309@nedbatchelder.com> <4F69E337.7010501@nedbatchelder.com> <4F6A560A.6050207@canterbury.ac.nz> <5260192D-7EF1-4021-AB50-FD2021566CFF@twistedmatrix.com> <4F6D2061.3080804@canterbury.ac.nz> Message-ID: <4F6D5C52.1030802@canterbury.ac.nz> PJ Eby wrote: > Weird - I have the exact *opposite* problem, where I have to resize my > window because somebody *didn't* set their text max-width sanely (to a > reasonable value based on ems instead of pixels), and I have nearly 1920 > pixels of raw text spanning my screen. If you don't want 1920-pixel-wide text, why make your browser window that large? -- Greg From jyasskin at gmail.com Sat Mar 24 06:37:45 2012 From: jyasskin at gmail.com (Jeffrey Yasskin) Date: Fri, 23 Mar 2012 22:37:45 -0700 Subject: [Python-Dev] Rename time.steady(strict=True) to time.monotonic()? In-Reply-To: References: Message-ID: On Fri, Mar 23, 2012 at 4:25 PM, Victor Stinner wrote: > Hi, > > time.steady(strict=True) looks to be confusing for most people, some > of them don't understand the purpose of the flag and others don't like > a flag changing the behaviour of the function. > > I propose to replace time.steady(strict=True) by time.monotonic(). > That would avoid the need of an ugly NotImplementedError: if the OS > has no monotonic clock, time.monotonic() will just not exist. 
> > So we will have: > > - time.time(): realtime, can be adjusted by the system administrator > (manually) or automatically by NTP > - time.clock(): monotonic clock on Windows, CPU time on UNIX > - time.monotonic(): monotonic clock, its speed may or may not be > adjusted by NTP but it only goes forward, may raise an OSError > - time.steady(): monotonic clock or the realtime clock, depending on > what is available on the platform (use monotonic in priority). may be > adjusted by NTP or the system administrator, may go backward. Please don't use the word "steady" for something different from what C++ means by it. C++'s term means "may not be adjusted at all, even by NTP; proceeds at as close to the rate of real time as the system can manage" (paraphrased). If the consensus in the Python community is that a C++-style "steady" clock is unnecessary, then feel free not to define it. If the consensus is that "monotonic" already means everything C++ means by "steady", that's fine with me too. I mentioned it because I thought it might be worth looking at what other languages were doing in this space, not because I thought it was a nice word that you should attach your own definitions to. Jeffrey From greg at krypto.org Sat Mar 24 08:15:00 2012 From: greg at krypto.org (Gregory P. Smith) Date: Sat, 24 Mar 2012 00:15:00 -0700 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: References: Message-ID: On Tue, Mar 20, 2012 at 3:55 PM, John O'Connor wrote: > On Tue, Mar 20, 2012 at 6:38 PM, Georg Brandl wrote: > > recently I've grown a bit tired of seeing our default Sphinx theme, > > especially as so many other projects use it. > > I think regardless of the chosen style, giving the Python 3 docs a > different look and feel also has a psychological benefit that might > further encourage users to consider moving to Python 3. It could be a > bit of a wake-up call. 
> +3 Of course you do realize that the only possible outcome of this thread which is *literally* about painting the docs bike shed is to have a row of dynamic "change my css theme" buttons somewhere with one for each person that has piped up in this thread. Which would lead to a stateful docs web server with cookie preferences on which css to default to for each and every viewer. This doesn't end well... ;) Good luck (and thanks for trying, I like seeing the new styles!) -gps -------------- next part -------------- An HTML attachment was scrubbed... URL: From regebro at gmail.com Sat Mar 24 09:20:40 2012 From: regebro at gmail.com (Lennart Regebro) Date: Sat, 24 Mar 2012 09:20:40 +0100 Subject: [Python-Dev] Rename time.steady(strict=True) to time.monotonic()? In-Reply-To: References: Message-ID: On Sat, Mar 24, 2012 at 00:36, Victor Stinner wrote: >> This seems like it should have been a PEP, or maybe should become a PEP. > > I replaced time.wallclock() by time.steady(strict=False) and > time.monotonic() by time.steady(strict=True). This change solved the > naming issue of time.wallclock(), but it was a bad idea to merge > monotonic() feature into time.steady(). It looks like everybody > agrees, am I wrong? Yes. As mentioned time.steady(i_mean_it=True) or time.steady(no_not_really=True) doesn't make any sense. Merging the methods may very well make sense, but it should then return a best case and have no flags. I think, as it has been hard to reach an agreement on this that the proposal to only make "stupid" functions that expose the system API's are the correct thing to do at the moment. > - time.time(): realtime, can be adjusted by the system administrator > (manually) or automatically by NTP Sure. > - time.clock(): monotonic clock on Windows, CPU time on UNIX This is for historical reasons, right, because this is what it is now? Would there be a problem in making time.clock() monotonic on Unix as well, if it exists? 
> - time.monotonic(): monotonic clock, its speed may or may not be > adjusted by NTP but it only goes forward, may raise an OSError > if the OS > has no monotonic clock, time.monotonic() will just not exist. Works for me, > - time.steady(): monotonic clock or the realtime clock, depending on > what is available on the platform (use monotonic in priority). may be > adjusted by NTP or the system administrator, may go backward. So it's time.may_or_may_not_be_steady() I don't mind the function, but the name should not be steady(). Its implementation is also so trivial that those who want a monotonic clock if it exists, but a normal clock otherwise, can simply just do > try: >     return time.monotonic() > except (NotImplementedError, OSError): >     return time.time() themselves. //Lennart From solipsis at pitrou.net Sat Mar 24 09:39:28 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 24 Mar 2012 09:39:28 +0100 Subject: [Python-Dev] Rename time.steady(strict=True) to time.monotonic()? References: <4FB90B27-4778-4FE9-B93B-887A730BCBEC@gmail.com> Message-ID: <20120324093928.26640db9@pitrou.net> On Fri, 23 Mar 2012 22:38:19 -0500 Brian Curtin wrote: > On Fri, Mar 23, 2012 at 18:38, Yury Selivanov wrote: > > On 2012-03-23, at 7:28 PM, Brian Curtin wrote: > >> This seems like it should have been a PEP, or maybe should become a PEP. > > > > Why? AFAIK Victor just proposes to add two new functions: monotonic() and > > steady(). > > We just previously had "Drop time.monotonic() function, rename > time.wallclock() to time.steady()" checked in a few weeks ago, and now > we're renaming a variation on time.steady to time.monotonic? What's > the next move? > > I'm not paying close attention here but there's a lot of movement > going on. Figuring out the API before we get too deep is probably a > good idea. Agreed with Brian. Obviously the API needs further discussion, judging by Victor's own multiple changes of mind on the subject. Regards Antoine.
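The fallback Lennart sketches above can be written out as a small helper. This is only a sketch: the name `best_clock` is made up for illustration, and it assumes — following Victor's description — that the proposed `time.monotonic()` may simply not exist on platforms without a monotonic clock (hence the `AttributeError` case alongside `OSError`):

```python
import time

def best_clock():
    """Return a timestamp from the best clock available (sketch).

    Prefer the monotonic clock, which never goes backward, and fall
    back to the adjustable real-time clock when no monotonic clock
    exists on this platform.
    """
    try:
        return time.monotonic()
    except (AttributeError, OSError):
        # AttributeError: time.monotonic() does not exist on this
        # platform; OSError: the underlying OS clock call failed.
        return time.time()
```

This is essentially where the thread ended up: PEP 418 added time.monotonic() in Python 3.3 with exactly the "may not exist" behaviour, and since Python 3.5 it is available on all supported platforms, making the fallback needed only on older versions.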
From victor.stinner at gmail.com Sat Mar 24 11:35:43 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Sat, 24 Mar 2012 11:35:43 +0100 Subject: [Python-Dev] Rename time.steady(strict=True) to time.monotonic()? In-Reply-To: <4F6D2CBA.10205@pearwood.info> References: <4F6D0F1C.1080404@pearwood.info> <4F6D2CBA.10205@pearwood.info> Message-ID: > Does this mean that there are circumstances where monotonic will work for a > while, but then fail? No. time.monotonic() always works or always fails. If monotonic() failed, steady() doesn't call it again. > Otherwise, we would only need to check monotonic once, when the time module > is first loaded, rather than every time it is called. Instead of the above: > > # global to the time module > try: >     monotonic() > except (NameError, OSError): >     steady = time > else: >     steady = monotonic I implemented steady differently to avoid the need to call monotonic at Python startup. Calling monotonic at startup would be an extra useless system call. Victor From victor.stinner at gmail.com Sat Mar 24 11:37:07 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Sat, 24 Mar 2012 11:37:07 +0100 Subject: [Python-Dev] Rename time.steady(strict=True) to time.monotonic()? In-Reply-To: References: Message-ID: >> - time.monotonic(): monotonic clock, its speed may or may not be >> adjusted by NTP but it only goes forward, may raise an OSError >> - time.steady(): monotonic clock or the realtime clock, depending on >> what is available on the platform (use monotonic in priority). may be >> adjusted by NTP or the system administrator, may go backward. >> > > I am surprised that a clock with the name time.steady() has a looser > definition than one called time.monotonic(). To my mind a steady clock is by definition monotonic but a monotonic one may or may not be steady. Do you suggest another name?
Victor From victor.stinner at gmail.com Sat Mar 24 11:45:29 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Sat, 24 Mar 2012 11:45:29 +0100 Subject: [Python-Dev] Rename time.steady(strict=True) to time.monotonic()? In-Reply-To: <4F6D30C7.6010209@pearwood.info> References: <4F6D124A.3000607@pearwood.info> <4F6D30C7.6010209@pearwood.info> Message-ID: >> Oh, I was not aware of this issue. Do you suggest to not use >> QueryPerformanceCounter() on Windows to implement a monotonic clock? > > > I do not have an opinion on the best way to implement monotonic to guarantee > that it actually is monotonic. I opened an issue: http://bugs.python.org/issue14397 Victor From regebro at gmail.com Sat Mar 24 12:53:39 2012 From: regebro at gmail.com (Lennart Regebro) Date: Sat, 24 Mar 2012 12:53:39 +0100 Subject: [Python-Dev] PEP 411 - request for pronouncement In-Reply-To: References: Message-ID: On Fri, Mar 23, 2012 at 10:51, Eli Bendersky wrote: > The PEP received mostly positive feedback. The only undecided point is > where to specify that the package is provisional. Currently the PEP > mandates to specify it in the documentation and in the docstring. > Other suggestions were to put it in the code, either as a > __provisional__ attribute on the module, or collect all such modules > in a single sys.provisional list. I'm not sure what the usecase is for checking in code if a module is provisional or not. It doesn't seem useful, and risks being unmaintained, especially when the flag is on the module itself. 
//Lennart From ncoghlan at gmail.com Sat Mar 24 13:19:27 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 24 Mar 2012 22:19:27 +1000 Subject: [Python-Dev] Python install layout and the PATH on win32 (Rationale part 1: Regularizing the layout) In-Reply-To: References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> <4F68FB76.7010303@skippinet.com.au> <4F68FDE7.40505@gmail.com> <4F694DFB.7050400@netwok.org> <4F69E40C.6050901@gmail.com> <4F69EB9D.1060701@egenix.com> <4F6B345C.1020406@gmail.com> <4F6B760D.3010004@gmail.com> <4D5CC12CFE8E455FBD5BD49A3A295A1C@gmail.com> Message-ID: On Sat, Mar 24, 2012 at 4:35 AM, PJ Eby wrote: > Just dumping things in a directory adjacent to the corresponding scripts is > the original virtualenv, and it still works just dandy -- most people just > don't *know* this.? (And again, if there are tools out there that *don't* > support single-directory virtualenvs, the best answer is probably to fix > them.) Not to mention that CPython gained native support for that layout in 2.6 via __main__.py files (although I stuffed up and forgot to add it to the What's New before the release). I'll chime in on the -1 side here as well. If you want *easy* cross-platform execution of __main__, use the -m switch. I'm obviously biased, since I'm the original author and primary maintainer of that switch, but it just makes all these problems with cross-platform questions and running from an installed copy vs running from source *go away*. Indeed, avoiding such cross-platform inconsistencies with regards to the location of stdlib modules was one of the major arguments in favour of adding the original incarnation of the switch way back in Python 2.4. To run the main clients (one for repo management, one for Django site management) in my current work project, I use "python -m pulpdist.manage_repos" and "python -m pulpdist.manage_site". 
It works from a source checkout (so long as I cd into src/ first), on an installed version, in a virtualenv, anywhere. I can easily run it on a different Python version just by changing the version I invoke. The commands would probably also work on at least Mac OS X and maybe even Windows (although I've never actually tried either of those, since PulpDist is targeted specifically at Linux systems). I may get around to installing at least the repo management client as a real script some day (since it will be more convenient for system administrators that way), but direct execution will never be the main way of executing it from a source checkout. So Van's proposal still smacks too much to me of change for change's sake. If you want an execution mechanism that is completely consistent across platforms (including virtual environments), then "-m" already exists. For direct execution, the proposal trades cross-version inconsistencies for cross-platform consistency. When we *already have* a consistent cross-platform mechanism in -m, the inevitable disruption involved in changing the Windows layout seems very hard to justify. Regards, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? 
Brisbane, Australia From brian at python.org Sat Mar 24 16:39:09 2012 From: brian at python.org (Brian Curtin) Date: Sat, 24 Mar 2012 10:39:09 -0500 Subject: [Python-Dev] Python install layout and the PATH on win32 (Rationale part 1: Regularizing the layout) In-Reply-To: References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> <4F68FB76.7010303@skippinet.com.au> <4F68FDE7.40505@gmail.com> <4F694DFB.7050400@netwok.org> <4F69E40C.6050901@gmail.com> <4F69EB9D.1060701@egenix.com> <4F6B345C.1020406@gmail.com> <4F6B760D.3010004@gmail.com> <4D5CC12CFE8E455FBD5BD49A3A295A1C@gmail.com> Message-ID: On Sat, Mar 24, 2012 at 07:19, Nick Coghlan wrote: > On Sat, Mar 24, 2012 at 4:35 AM, PJ Eby wrote: >> Just dumping things in a directory adjacent to the corresponding scripts is >> the original virtualenv, and it still works just dandy -- most people just >> don't *know* this.? (And again, if there are tools out there that *don't* >> support single-directory virtualenvs, the best answer is probably to fix >> them.) > > Not to mention that CPython gained native support for that layout in > 2.6 via __main__.py files (although I stuffed up and forgot to add it > to the What's New before the release). > > I'll chime in on the -1 side here as well. If you want *easy* > cross-platform execution of __main__, use the -m switch. I love the -m option but what does it have to do with unifying the install layout? One is about executing __main__ and one is about a directory structure. > Indeed, avoiding such cross-platform inconsistencies with > regards to the location of stdlib modules was one of the major > arguments in favour of adding the original incarnation of the switch > way back in Python 2.4. Ok, so it is about directory structure, but about the standard library. Since part of this proposal is about Scripts vs. bin, how does -m help you there? From stephen at xemacs.org Sat Mar 24 16:39:38 2012 From: stephen at xemacs.org (Stephen J. 
Turnbull) Date: Sat, 24 Mar 2012 16:39:38 +0100 Subject: [Python-Dev] Rename time.steady(strict=True) to time.monotonic()? In-Reply-To: References: <4FB90B27-4778-4FE9-B93B-887A730BCBEC@gmail.com> Message-ID: On Sat, Mar 24, 2012 at 4:38 AM, Brian Curtin wrote: > On Fri, Mar 23, 2012 at 18:38, Yury Selivanov wrote: >> On 2012-03-23, at 7:28 PM, Brian Curtin wrote: >>> This seems like it should have been a PEP, or maybe should become a PEP. >> >> Why? ?AFAIK Victor just proposes to add two new functions: monotonic() and >> steady(). Need for PEPs is not determined by volume of content, but by amount of controversy and lack of clarity. Isn't it obvious that there's quite a bit of disagreement about the definitions of "monotonic" and "steady", and about whether these functions should be what they say they are or "best effort", and so on? +1 for a PEP. > We just previously had "Drop time.monotonic() function, rename > time.wallclock() to time.steady()" checked in a few weeks ago, and now > we're renaming a variation on time.steady to time.monotonic? What's > the next move? > > I'm not paying close attention here but there's a lot of movement > going on. Figuring out the API before we get too deep is probably a > good idea. I have been following the thread but don't have the technical knowledge to be sure what's going on. What I have decided is that I won't be using any function named time.steady() or time.monotonic() because neither one seems likely to guarantee the property it's named for, and by the time I have a use case (I don't have one now, I'm just an habitual lurker) I'll have forgotten the conclusion of this thread, but not the deep feelings of FUD. To get me on board (not that there's any particular reason you should care, but just in case), you're going to need to respect EIBTI. By that I mean that a monotonic clock is monotonic, and if not available at instantiation, an Exception will be raised. Similarly for a steady clock. 
There is no such thing as "best effort" here for clocks with these names. The default clock should be best effort. If that is for some reason "expensive", then there should be a "time.windup_clock()" to provide an unreliable resource- conserving clock. FWIW, I understand that (1) A monotonic clock is one that never goes backwards. If precision allows, it should always go forwards (ie, repeated calls should always produce strictly larger time values). (2) A steady clock is strictly monotonic, and when a discrepancy against "true time" is detected (however that might happen), it slews its visible clock until the discrepancy is eliminated, so that one clock second always means something "close" to one second. From martin at v.loewis.de Sat Mar 24 17:57:21 2012 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Sat, 24 Mar 2012 17:57:21 +0100 Subject: [Python-Dev] Rename time.steady(strict=True) to time.monotonic()? In-Reply-To: References: <61C7F677-7F04-4FD6-AC66-4595374F402A@gmail.com> Message-ID: <4F6DFCF1.9000407@v.loewis.de> > I don't see what is the use case requiring a is truly monotonic clock. A clock that is purely monotonic may not be useful. However, people typically imply that it will have a certain minimum progress (seconds advanced/real seconds passed). Then you can use it for timeouts. 
Regards, Martin From pje at telecommunity.com Sun Mar 25 00:34:31 2012 From: pje at telecommunity.com (PJ Eby) Date: Sat, 24 Mar 2012 19:34:31 -0400 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: <4F6D5C52.1030802@canterbury.ac.nz> References: <4F6935E1.2030309@nedbatchelder.com> <4F69E337.7010501@nedbatchelder.com> <4F6A560A.6050207@canterbury.ac.nz> <5260192D-7EF1-4021-AB50-FD2021566CFF@twistedmatrix.com> <4F6D2061.3080804@canterbury.ac.nz> <4F6D5C52.1030802@canterbury.ac.nz> Message-ID: On Sat, Mar 24, 2012 at 1:32 AM, Greg Ewing wrote: > PJ Eby wrote: > > Weird - I have the exact *opposite* problem, where I have to resize my >> window because somebody *didn't* set their text max-width sanely (to a >> reasonable value based on ems instead of pixels), and I have nearly 1920 >> pixels of raw text spanning my screen. >> > > If you don't want 1920-pixel-wide text, why make your > browser window that large? > Not every tab in my browser is text for reading; some are apps that need the extra horizontal space. That is, they have more than one column of text or data -- but no individual column spans anywhere near the full width. (Google Docs, for example, shows me two pages at a time when I'm reading a PDF.) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ncoghlan at gmail.com Sun Mar 25 00:46:28 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 25 Mar 2012 09:46:28 +1000 Subject: [Python-Dev] Python install layout and the PATH on win32 (Rationale part 1: Regularizing the layout) In-Reply-To: References: <4F68607B.7060307@gmail.com> <4F688F48.9010101@gmail.com> <4F68FB76.7010303@skippinet.com.au> <4F68FDE7.40505@gmail.com> <4F694DFB.7050400@netwok.org> <4F69E40C.6050901@gmail.com> <4F69EB9D.1060701@egenix.com> <4F6B345C.1020406@gmail.com> <4F6B760D.3010004@gmail.com> <4D5CC12CFE8E455FBD5BD49A3A295A1C@gmail.com> Message-ID: By dodging the issue entirely - anything I might want to regularly run from a source checkout I execute with -m. It gets sys.path right automatically and I don't need to care about platform specific executable naming conventions. -- Sent from my phone, thus the relative brevity :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben+python at benfinney.id.au Sun Mar 25 01:41:42 2012 From: ben+python at benfinney.id.au (Ben Finney) Date: Sun, 25 Mar 2012 11:41:42 +1100 Subject: [Python-Dev] Playing with a new theme for the docs References: <4F6935E1.2030309@nedbatchelder.com> <4F69E337.7010501@nedbatchelder.com> <4F6A560A.6050207@canterbury.ac.nz> <5260192D-7EF1-4021-AB50-FD2021566CFF@twistedmatrix.com> <4F6D2061.3080804@canterbury.ac.nz> <4F6D5C52.1030802@canterbury.ac.nz> Message-ID: <87y5qpikah.fsf@benfinney.id.au> PJ Eby writes: > On Sat, Mar 24, 2012 at 1:32 AM, Greg Ewing wrote: > > > If you don't want 1920-pixel-wide text, why make your browser window > > that large? > > Not every tab in my browser is text for reading; some are apps that > need the extra horizontal space. So, again, why make your browser window *for reading text* that large? You have control over how large your window size is, and if you have purposes so different that they demand different widths, then you can easily make different-width windows. 
Everyone has different needs for how large the text should be and how much of it should go across the window. Every one of us is in a minority when it comes to those needs; that's exactly what a configuration setting is good for. It's madness to expect web designers to hobble the flexibility of a web page to cater preferentially for one minority over others. -- \ "Come on Milhouse, there's no such thing as a soul! It's just | `\ something they made up to scare kids, like the Boogie Man or | _o__) Michael Jackson." --Bart, _The Simpsons_ | Ben Finney From v+python at g.nevcal.com Sun Mar 25 03:35:10 2012 From: v+python at g.nevcal.com (Glenn Linderman) Date: Sat, 24 Mar 2012 18:35:10 -0700 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: <87y5qpikah.fsf@benfinney.id.au> References: <4F6935E1.2030309@nedbatchelder.com> <4F69E337.7010501@nedbatchelder.com> <4F6A560A.6050207@canterbury.ac.nz> <5260192D-7EF1-4021-AB50-FD2021566CFF@twistedmatrix.com> <4F6D2061.3080804@canterbury.ac.nz> <4F6D5C52.1030802@canterbury.ac.nz> <87y5qpikah.fsf@benfinney.id.au> Message-ID: <4F6E764E.5070603@g.nevcal.com> On 3/24/2012 5:41 PM, Ben Finney wrote: > It's madness to expect web designers to hobble the flexibility of a web > page to cater preferentially for one minority over others. But largely, the 99% that makes the rest of them look bad, do, in fact, do exactly that. -------------- next part -------------- An HTML attachment was scrubbed... URL: From g.brandl at gmx.net Sun Mar 25 08:34:44 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Sun, 25 Mar 2012 08:34:44 +0200 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 Message-ID: Here's another try, mainly with default browser font size, more contrast and collapsible sidebar again: http://www.python.org/~gbrandl/build/html2/ I've also added a little questionable gimmick to the sidebar (when you collapse it and expand it again, the content is shown at your current scroll location).
Have fun! Georg From stephen at xemacs.org Sun Mar 25 08:56:53 2012 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Sun, 25 Mar 2012 08:56:53 +0200 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: <87y5qpikah.fsf@benfinney.id.au> References: <4F6935E1.2030309@nedbatchelder.com> <4F69E337.7010501@nedbatchelder.com> <4F6A560A.6050207@canterbury.ac.nz> <5260192D-7EF1-4021-AB50-FD2021566CFF@twistedmatrix.com> <4F6D2061.3080804@canterbury.ac.nz> <4F6D5C52.1030802@canterbury.ac.nz> <87y5qpikah.fsf@benfinney.id.au> Message-ID: On Sun, Mar 25, 2012 at 1:41 AM, Ben Finney wrote: > PJ Eby writes: >> Not every tab in my browser is text for reading; some are apps that >> need the extra horizontal space. > > So, again, why make your browser window *for reading text* that large? Because he prefers controlling the content viewed by selecting tabs rather than selecting windows, no doubt. But since he's arguing the other end in the directory layout thread (where he says there are many special ways to invoke Python so that having different layouts on different platforms is easy to work around), I can't give much weight to his preference here. Anyway, CSS is supposed to allow the user to impose such constraints herself, so Philip "should" do so with a local style, rather than ask designers to do it globally. > It's madness to expect web designers to hobble the flexibility of a web > page to cater preferentially for one minority over others. No, as Glenn points out, designers (I wouldn't call them *web* designers since they clearly have no intention of taking advantage of the power of the web in design, even if they incorporate links in their pages!) frequently do exactly that. (The minority of one in question being the designer himself!) So it's rational to expect it. :-( However, I believe that CSS also gives us the power to undo such bloodymindedness, though I've never gone to the trouble of learning how. 
Steve From ben+python at benfinney.id.au Sun Mar 25 09:19:38 2012 From: ben+python at benfinney.id.au (Ben Finney) Date: Sun, 25 Mar 2012 18:19:38 +1100 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 References: Message-ID: <87fwcxi1v9.fsf@benfinney.id.au> Georg Brandl writes: > Here's another try, mainly with default browser font size, more > contrast and collapsible sidebar again: > > http://www.python.org/~gbrandl/build/html2/ Great! You've improved it nicely. I especially like that you have done the collapsible sidebar with graceful degradation: the content is quite accessible without ECMAScript. Can you make the link colors (in the body and sidebar) follow the usual conventions: use a blue colour for unvisited links, and a purple colour for visited links so it's more obvious where links are and where the reader has already been. -- \ "I distrust those people who know so well what God wants them | `\ to do to their fellows, because it always coincides with their | _o__) own desires." --Susan Brownell Anthony, 1896 | Ben Finney From eliben at gmail.com Sun Mar 25 09:26:02 2012 From: eliben at gmail.com (Eli Bendersky) Date: Sun, 25 Mar 2012 09:26:02 +0200 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: References: Message-ID: On Sun, Mar 25, 2012 at 08:34, Georg Brandl wrote: > Here's another try, mainly with default browser font size, more contrast and > collapsible sidebar again: > > http://www.python.org/~gbrandl/build/html2/ > > I've also added a little questionable gimmick to the sidebar (when you collapse > it and expand it again, the content is shown at your current scroll location). > Nice and clean.
+1 Eli From v+python at g.nevcal.com Sun Mar 25 09:46:18 2012 From: v+python at g.nevcal.com (Glenn Linderman) Date: Sun, 25 Mar 2012 00:46:18 -0700 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: References: Message-ID: <4F6ECD4A.6000301@g.nevcal.com> On 3/24/2012 11:34 PM, Georg Brandl wrote: > I've also added a little questionable gimmick to the sidebar (when you collapse > it and expand it again, the content is shown at your current scroll location). It would be educational to see how you pulled that trick! I will look if I get time. However, in playing with it, it has the definite disadvantage of forcing the user to position/click the mouse twice, if the goal is not to collapse the sidebar, but simply to make the content visible. Might there be an additional way to move the content, perhaps by a click in the blank portions of the sidebar (above the top or below the bottom of the content, in the shaded area), that it would bring the content to view? The position chosen for the content could happily be the same position you choose when doing the collapse/expand dance, I have no quibble with that. -------------- next part -------------- An HTML attachment was scrubbed... URL: From __peter__ at web.de Sun Mar 25 10:06:15 2012 From: __peter__ at web.de (Peter Otten) Date: Sun, 25 Mar 2012 10:06:15 +0200 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 References: Message-ID: Georg Brandl wrote: > Here's another try, mainly with default browser font size, more contrast > and collapsible sidebar again: > > http://www.python.org/~gbrandl/build/html2/ Nice! Lightweight and readable. >From the bikeshedding department: * Inlined code doesn't need the gray background. The bold font makes it stand out enough. * Instead of the box consider italics or another color for [New in ...] text. * Nobody is going to switch off the prompts for interactive sessions. 
* Maybe the Next/Previous Page headers on the left could link to the respective page. * Short descriptions in the module index don't need italics. * The disambiguation in the index table could use a different style instead of the parentheses. From storchaka at gmail.com Sun Mar 25 10:11:53 2012 From: storchaka at gmail.com (Serhiy Storchaka) Date: Sun, 25 Mar 2012 11:11:53 +0300 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: References: Message-ID: 25.03.12 09:34, Georg Brandl wrote: > Here's another try, mainly with default browser font size, more contrast and > collapsible sidebar again: Maybe reducing the line-height would be worth it now too? > I've also added a little questionable gimmick to the sidebar (when you collapse > it and expand it again, the content is shown at your current scroll location). What if we move the search field to the header and the footer? There is a lot of free space. "Report a Bug" and "Show Source" can also be moved to the footer, if they fit. The footer height is too big now; I think you can reduce the copyright and technical information from 4 to 2 lines. From storchaka at gmail.com Sun Mar 25 10:44:18 2012 From: storchaka at gmail.com (Serhiy Storchaka) Date: Sun, 25 Mar 2012 11:44:18 +0300 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: References: Message-ID: 25.03.12 11:06, Peter Otten wrote: > * Inlined code doesn't need the gray background. The bold font makes it > stand out enough. I believe that the gray background is good, but it should be made lighter. > * Instead of the box consider italics or another color for [New in ...] > text. Yes, the border around "New in" and "Changed in" does not look good. Maybe a very light colored background with no border, or underlined italics, would look better. > * Maybe the Next/Previous Page headers on the left could link to the > respective page. Do you mean next/previous links in header/footer?
I totally agree with your other comments, including the admiration of the current version of the design. From andrew.svetlov at gmail.com Sun Mar 25 10:53:20 2012 From: andrew.svetlov at gmail.com (Andrew Svetlov) Date: Sun, 25 Mar 2012 11:53:20 +0300 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: References: Message-ID: I would like to always see the "Quick search" widget without scrolling the page to the top. Is it possible? Or maybe you can embed some keyboard shortcut for a quick jump to the search input box? On Sun, Mar 25, 2012 at 11:44 AM, Serhiy Storchaka wrote: > 25.03.12 11:06, Peter Otten wrote: > >> * Inlined code doesn't need the gray background. The bold font makes it >> stand out enough. > > > I believe that the gray background is good, but it should make it lighter. > > >> * Instead of the box consider italics or another color for [New in ...] >> text. > > > Yes, the border around "New in" and "Changed in" looks not good-looking. > Maybe a very light colored background with no border or underlined italic > will look better. > > >> * Maybe the Next/Previous Page headers on the left could link to the >> respective page. > > > Do you mean next/previous links in header/footer? > > I totally agree with your other comments, including the admiration of the > current version of the design. > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/andrew.svetlov%40gmail.com -- Thanks, Andrew Svetlov From stephen at xemacs.org Sun Mar 25 10:54:51 2012 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Sun, 25 Mar 2012 10:54:51 +0200 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: References: Message-ID: In the header next to "Python v3.3a1 documentation" there is a "?" symbol, which suggests something can be expanded.
Knowing that there are many versions of the documentation, I thought it might bring up a menu of versions. But clicking does nothing. Is that intentional? I guess it's supposed to mean "go to top" but that wasn't obvious to me. I think the clickable areas in the header and footer should be indicated with the usual coloring (either the scheme you currently use, or perhaps as Ben suggests blue and purple as in "traditional" HTML documents). I agree that what you do looks nice and is sufficiently functional once you realize it, but I've seen a lot of research that indicates that up to 60% of users can't find all the links on a page unless they're explicitly marked. (In one focus group 4 of 14 users never found a menu that took up 40% of the area of the page!) My first impression of the "questionable feature" that the sidebar is aligned with the scroll position when expanded is that it's useful. It looks pretty good without CSS, too! On Sun, Mar 25, 2012 at 8:34 AM, Georg Brandl wrote: > Here's another try, mainly with default browser font size, more contrast and > collapsible sidebar again: > > http://www.python.org/~gbrandl/build/html2/ > > I've also added a little questionable gimmick to the sidebar (when you collapse > it and expand it again, the content is shown at your current scroll location). > > Have fun! 
Georg > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/stephen%40xemacs.org From storchaka at gmail.com Sun Mar 25 10:56:36 2012 From: storchaka at gmail.com (Serhiy Storchaka) Date: Sun, 25 Mar 2012 11:56:36 +0300 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: References: Message-ID: 25.03.12 09:34, Georg Brandl wrote: > I've also added a little questionable gimmick to the sidebar (when you collapse > it and expand it again, the content is shown at your current scroll location). I'm not sure if this is possible, or how good it would look, but I have one crazy idea. What if we transform the sidebar into a collapsible floating box in the upper right corner? From stefan at bytereef.org Sun Mar 25 11:04:22 2012 From: stefan at bytereef.org (Stefan Krah) Date: Sun, 25 Mar 2012 11:04:22 +0200 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: References: Message-ID: <20120325090422.GA11869@sleipnir.bytereef.org> Andrew Svetlov wrote: > I like to always see "Quick search" widget without scrolling page to > top. Is it possible? Do you mean a fixed search box like this one? http://coq.inria.fr/documentation Please don't do this, I find scrolling exceptionally distracting in the presence of fixed elements. Stefan Krah From andrew.svetlov at gmail.com Sun Mar 25 11:10:21 2012 From: andrew.svetlov at gmail.com (Andrew Svetlov) Date: Sun, 25 Mar 2012 12:10:21 +0300 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: <20120325090422.GA11869@sleipnir.bytereef.org> References: <20120325090422.GA11869@sleipnir.bytereef.org> Message-ID: On Sun, Mar 25, 2012 at 12:04 PM, Stefan Krah wrote: > Andrew Svetlov wrote: >> I like to always see "Quick search" widget without scrolling page to >> top.
Is it possible? > > Do you mean a fixed search box like this one? > > http://coq.inria.fr/documentation > No. You are right, it's distracting. Maybe a narrow persistent line with a search box at the top would be better. But just jumping to the search box with a shortcut is good enough for me too. > > Please don't do this, I find scrolling exceptionally distracting in the > presence of fixed elements. > > > > Stefan Krah > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/andrew.svetlov%40gmail.com -- Thanks, Andrew Svetlov From hs at ox.cx Sun Mar 25 11:39:12 2012 From: hs at ox.cx (Hynek Schlawack) Date: Sun, 25 Mar 2012 11:39:12 +0200 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: References: Message-ID: <29667EB3-1BCE-41A6-90FB-4C4BD5BE4FA4@ox.cx> Hi Georg, On 25.03.2012 at 08:34, Georg Brandl wrote: > Here's another try, mainly with default browser font size, more contrast and > collapsible sidebar again: > > http://www.python.org/~gbrandl/build/html2/ I really like it! Only one nitpick: if a header follows a "seealso" box, the vertical rhythm is slightly broken, as in https://skitch.com/hyneks/8c6j8/ Minor detail but should be easy to fix. :) Cheers, Hynek From peck at us.ibm.com Sun Mar 25 12:00:58 2012 From: peck at us.ibm.com (Jon K Peck) Date: Sun, 25 Mar 2012 04:00:58 -0600 Subject: [Python-Dev] AUTO: Jon K Peck is out of the office (returning 03/30/2012) Message-ID: I am out of the office until 03/30/2012. I will be out of the office through Friday, March 30. I expect to have some email access but may be delayed in responding. Note: This is an automated response to your message "Python-Dev Digest, Vol 104, Issue 91" sent on 03/25/2012 1:19:50. This is the only notification you will receive while this person is away.
From __peter__ at web.de Sun Mar 25 12:37:38 2012 From: __peter__ at web.de (Peter Otten) Date: Sun, 25 Mar 2012 12:37:38 +0200 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 References: Message-ID: Serhiy Storchaka wrote: >> * Maybe the Next/Previous Page headers on the left could link to the >> respective page. > > Do you mean next/previous links in header/footer? No, I mean the two sections in the sidebar on the left, below "Table of Contents". From stephen at xemacs.org Sun Mar 25 13:09:23 2012 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Sun, 25 Mar 2012 13:09:23 +0200 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: <20120325090422.GA11869@sleipnir.bytereef.org> References: <20120325090422.GA11869@sleipnir.bytereef.org> Message-ID: On Sun, Mar 25, 2012 at 11:04 AM, Stefan Krah wrote: > Do you mean a fixed search box like this one? > > http://coq.inria.fr/documentation > > Please don't do this, I find scrolling exceptionally distracting in the > presence of fixed elements. Does it bother you when the header is fixed and contains the search box? I prefer that arrangement, anyway. From phd at phdru.name Sun Mar 25 13:36:57 2012 From: phd at phdru.name (Oleg Broytman) Date: Sun, 25 Mar 2012 15:36:57 +0400 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: References: Message-ID: <20120325113657.GA13240@iskra.aviel.ru> On Sun, Mar 25, 2012 at 08:34:44AM +0200, Georg Brandl wrote: > http://www.python.org/~gbrandl/build/html2/ Perfect! I like it! Oleg. -- Oleg Broytman http://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. 
From anacrolix at gmail.com Sun Mar 25 13:44:09 2012 From: anacrolix at gmail.com (Matt Joiner) Date: Sun, 25 Mar 2012 19:44:09 +0800 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: <20120325113657.GA13240@iskra.aviel.ru> References: <20120325113657.GA13240@iskra.aviel.ru> Message-ID: Is nice yes?! When I small the nav bar, then embiggen it again, the text centers vertically. It's in the wrong place. The new theme is very minimal, perhaps a new color should be chosen. We've done green, what about orange, brown or blue? -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Sun Mar 25 14:07:52 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 25 Mar 2012 14:07:52 +0200 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 References: Message-ID: <20120325140752.7f783801@pitrou.net> On Sun, 25 Mar 2012 08:34:44 +0200 Georg Brandl wrote: > Here's another try, mainly with default browser font size, more contrast and > collapsible sidebar again: > > http://www.python.org/~gbrandl/build/html2/ > > I've also added a little questionable gimmick to the sidebar (when you collapse > it and expand it again, the content is shown at your current scroll location). The gimmick is buggy (when you collapse then expand it in the middle, and then scroll up, the sidebar content disappears after scrolling), and in the end quite confusing. Also I think there should be some jquery animation when collapsing/expanding. I think the "New in version..." and "Changed in version..." styles stand out in the wrong kind of way (as in "drawn by a 8-year old with a pen and ruler", perhaps). Perhaps you want some coloured background instead? (or, better, an icon - by the way, warnings also scream for an icon IMO) Otherwise, not sure what problem this new theme solves, but it looks ok. Regards Antoine. 
From stefan at bytereef.org Sun Mar 25 14:34:07 2012 From: stefan at bytereef.org (Stefan Krah) Date: Sun, 25 Mar 2012 14:34:07 +0200 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: References: <20120325090422.GA11869@sleipnir.bytereef.org> Message-ID: <20120325123407.GA12989@sleipnir.bytereef.org> Stephen J. Turnbull wrote: > > Do you mean a fixed search box like this one? > > > > http://coq.inria.fr/documentation > > > > Please don't do this, I find scrolling exceptionally distracting in the > > presence of fixed elements. > > Does it bother you when the header is fixed and contains > the search box? I prefer that arrangement, anyway. Do you have an example website? In general fixed elements distract me greatly. This also applies to the '<<' element in the collapsible sidebar. When I'm scrolling, it's almost the center of my attention (when I should be focusing on the text). Perhaps users can discover the collapsible sidebar without the '<<' hint? Or let it move up like in the existing version? Stefan Krah From stephen at xemacs.org Sun Mar 25 14:47:21 2012 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Sun, 25 Mar 2012 14:47:21 +0200 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: <20120325123407.GA12989@sleipnir.bytereef.org> References: <20120325090422.GA11869@sleipnir.bytereef.org> <20120325123407.GA12989@sleipnir.bytereef.org> Message-ID: On Sun, Mar 25, 2012 at 2:34 PM, Stefan Krah wrote: > Stephen J. Turnbull wrote: >> Does it bother you when the header is fixed and contains >> the search box? I prefer that arrangement, anyway. > > Do you have an example website? Not with just a header. http://turnbull.sk.tsukuba.ac.jp/Teach/IntroSES/ is a (very primitive and not stylistically improved in years) example of a frame-based layout that I use for some of my classes. I would put a search field in the top frame (if I had one.
:-) But I suppose you would find the fixed sidebar distracting. > In general fixed elements distract me > greatly. This also applies to the '<<' element in the collapsible > sidebar. When I'm scrolling, it's almost the center of my attention > (when I should be focusing on the text). I suspect you're unusual in that, but I guess it just is going to bug you no matter what, and I personally don't have *that* strong a preference either way. From murman at gmail.com Sun Mar 25 15:23:16 2012 From: murman at gmail.com (Michael Urman) Date: Sun, 25 Mar 2012 08:23:16 -0500 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: <20120325140752.7f783801@pitrou.net> References: <20120325140752.7f783801@pitrou.net> Message-ID: On Sun, Mar 25, 2012 at 07:07, Antoine Pitrou wrote: >> >> I've also added a little questionable gimmick to the sidebar (when you collapse >> it and expand it again, the content is shown at your current scroll location). > > The gimmick is buggy (when you collapse then expand it in the middle, > and then scroll up, the sidebar content disappears after scrolling), > and in the end quite confusing. It also seems not to handle window resizes very well right now. It appears to choose the height for the vertical bar when shown, and then when the text next to it reflows to a new length, the bar can become longer or shorter than necessary. On the one hand this makes it hard to get the sidebar content to show at the bottom of the page; on the other, I believe it mitigates potential problems if sidebar content is too long for the window size. 
-- Michael Urman From ned at nedbatchelder.com Sun Mar 25 17:26:26 2012 From: ned at nedbatchelder.com (Ned Batchelder) Date: Sun, 25 Mar 2012 11:26:26 -0400 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: References: Message-ID: <4F6F3922.2020202@nedbatchelder.com> On 3/25/2012 2:34 AM, Georg Brandl wrote: > Here's another try, mainly with default browser font size, more contrast and > collapsible sidebar again: > > http://www.python.org/~gbrandl/build/html2/ Georg, thanks so much for taking on this thankless task with grace and skill. It can't be easy dealing with the death by a thousand tweaks, and I know I've contributed to the flurry. Nowhere on the page is a simple link to the front page of python.org. Perhaps the traditional upper-left corner could get a bread-crumb before "Python v3.3a1 documentation" that simply links to python.org. Maybe, use the word Python that is already there: [Python] → [v3.3a1 documentation]. People do arrive at doc pages via search engines, and connecting the docs up to the rest of the site would be a good thing. Speaking of links to other pages, the doc front page, under "Other resources" lists Guido's Essays and New-style Classes second and third.
> I've also added a little questionable gimmick to the sidebar (when you collapse > it and expand it again, the content is shown at your current scroll location). I especially like using dynamic elements on a page to adapt to a reader's needs. I have some other ideas that I'll try to cobble together. --Ned. > Have fun! > Georg > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/ned%40nedbatchelder.com > From mail at timgolden.me.uk Sun Mar 25 17:36:44 2012 From: mail at timgolden.me.uk (Tim Golden) Date: Sun, 25 Mar 2012 16:36:44 +0100 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: <4F6F3922.2020202@nedbatchelder.com> References: <4F6F3922.2020202@nedbatchelder.com> Message-ID: <4F6F3B8C.5030802@timgolden.me.uk> On 25/03/2012 16:26, Ned Batchelder wrote: > Georg, thanks so much for taking on this thankless task with grace and > skill. It can't be easy dealing with the death by a thousand tweaks Seconded. I'm constantly edified by the way in which people in the community respond to even quite abrupt criticism in a constructive, open and often humorous manner. (As I've said before, I'm also impressed by the way in which people are prepared to come back and apologise / acknowledge that they had a moment of jerkiness). TJG From tjreedy at udel.edu Sun Mar 25 17:54:10 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Sun, 25 Mar 2012 11:54:10 -0400 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: References: Message-ID: On 3/25/2012 2:34 AM, Georg Brandl wrote: > Here's another try, mainly with default browser font size, more contrast Untrue. You still changed the high contrast dark blue to the same low contrast light blue for builtin names, etc. 
What problem do you think you are trying to solve by making the doc difficult and even PAINFUL for me to read? - a lot more than 1 -- Terry Jan Reedy From stefan at bytereef.org Sun Mar 25 18:05:03 2012 From: stefan at bytereef.org (Stefan Krah) Date: Sun, 25 Mar 2012 18:05:03 +0200 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: References: <20120325090422.GA11869@sleipnir.bytereef.org> <20120325123407.GA12989@sleipnir.bytereef.org> Message-ID: <20120325160503.GA13935@sleipnir.bytereef.org> Stephen J. Turnbull wrote: > Not with just a header. http://turnbull.sk.tsukuba.ac.jp/Teach/IntroSES/ > is a (very primitive and not stylistically improved in years) example > of a frame-based layout that I use some of my classes. I would > put a search field in the top frame (if I had one. :-) But I suppose you > would find the fixed sidebar distracting. No, if the whole sidebar is fixed I don't mind (though I'm not a fan of frames). The top frame takes up space though. > > In general fixed elements distract me > > greatly. This also applies to the '<<' element in the collapsible > > sidebar. When I'm scrolling, it's almost the center of my attention > > (when I should be focusing on the text). > > I suspect you're unusual in that, but I guess it just is going > to bug you no matter what, and I personally don't have > *that* strong a preference either way. Maybe. It's hard to determine. It's just that I don't see fixed search boxes or fixed elements like '<<' on big name websites (who may or may not have usability departments). So it is at least possible that such features have always been controversial. 
Stefan Krah From g.brandl at gmx.net Sun Mar 25 18:15:20 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Sun, 25 Mar 2012 18:15:20 +0200 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: <87fwcxi1v9.fsf@benfinney.id.au> References: <87fwcxi1v9.fsf@benfinney.id.au> Message-ID: On 25.03.2012 09:19, Ben Finney wrote: > Georg Brandl writes: > >> Here's another try, mainly with default browser font size, more >> contrast and collapsible sidebar again: >> >> http://www.python.org/~gbrandl/build/html2/ > > Great! You've improved it nicely. I especially like that you have > done the > collapsible sidebar with graceful degradation: the content is quite > accessible without ECMAscript. > > Can you make the link colors (in the body and sidebar) follow the usual > conventions: use a blue colour for unvisited links, and a purple colour > for visited links so > it's more obvious where links are and where the reader has already been. Thanks. Same colors for visited and unvisited links is indeed an oversight on my part. I'll put that in the final version. Georg From g.brandl at gmx.net Sun Mar 25 18:17:14 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Sun, 25 Mar 2012 18:17:14 +0200 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: References: Message-ID: On 25.03.2012 10:06, Peter Otten wrote: > Georg Brandl wrote: > >> Here's another try, mainly with default browser font size, more contrast >> and collapsible sidebar again: >> >> http://www.python.org/~gbrandl/build/html2/ > > Nice! Lightweight and readable. > > From the bikeshedding department: > > * Inlined code doesn't need the gray background. The bold font makes it > stand out enough. > * Instead of the box consider italics or another color for [New in ...] > text. Yes, I'll revert to italics as most people don't seem to like the colored boxes. > * Nobody is going to switch off the prompts for interactive sessions.
You'll laugh, but that was a pretty often-wished feature so that copy-paste gets easier. It'll certainly stay. > * Maybe the Next/Previous Page headers on the left could link to the > respective page. I see no reason since the links below already do. > * Short descriptions in the module index don't need italics. > * The disambiguation in the index table could use a different style instead > of the parentheses. These two would need to be changed in Sphinx. Georg From g.brandl at gmx.net Sun Mar 25 18:23:45 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Sun, 25 Mar 2012 18:23:45 +0200 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: <4F6F3922.2020202@nedbatchelder.com> References: <4F6F3922.2020202@nedbatchelder.com> Message-ID: On 25.03.2012 17:26, Ned Batchelder wrote: > On 3/25/2012 2:34 AM, Georg Brandl wrote: >> Here's another try, mainly with default browser font size, more contrast and >> collapsible sidebar again: >> >> http://www.python.org/~gbrandl/build/html2/ > Georg, thanks so much for taking on this thankless task with grace and > skill. It can't be easy dealing with the death by a thousand tweaks, > and I know I've contributed to the flurry. > > Nowhere on the page is a simple link to the front page of python.org. > Perhaps the traditional upper-left corner could get a bread-crumb before > "Python v3.3a1 documentation" that simply links to python.org. Maybe, > use the word Python that is already there: [Python] ? [v3.3a1 > documentation]. People do arrive at doc pages via search engines, and > connecting the docs up to the rest of the site would be a good thing. Indeed. I'm trying to tweak that right now. > Speaking of links to other pages, the doc front page, under "Other > resources" lists Guido's Essays and New-style Classes second and third. 
> These each point to extremely outdated material ("Unifying types and > classes in 2.2", and "Unfortunately, new-style classes have not yet been > integrated into Python's standard documention." ??). Another, "Other > Doc Collections," points to an empty apache-style directory listing > :-(. These links should be removed if we don't want to keep those > sections of the site up-to-date. I know this is not strictly part of > the redesign, but I just noticed it and thought I would throw it out there. That would be best to capture in a bugs.python.org issue, I think. > I agree about the outlined style for "New" notices, and the red for > deprecation is extremely alarming! :) Changed. > I'll make one last plea for not justifying short paragraphs full of > unbreakable elements, but I know I am in the minority. :) >> I've also added a little questionable gimmick to the sidebar (when you collapse >> it and expand it again, the content is shown at your current scroll location). > I especially like using dynamic elements on a page to adapt to a > reader's needs. I have some other ideas that I'll try to cobble together. That would be great. Georg From storchaka at gmail.com Sun Mar 25 18:25:10 2012 From: storchaka at gmail.com (Serhiy Storchaka) Date: Sun, 25 Mar 2012 19:25:10 +0300 Subject: [Python-Dev] PEP 393 decode() oddity Message-ID: PEP 393 (Flexible String Representation) is, without doubt, one of the pearls of Python 3.3. In addition to reducing memory consumption, it also often leads to a corresponding increase in speed. In particular, string encoding is now 1.5-3 times faster. But decoding is not so good. Here are the results of measuring the performance of decoding 1000-character strings consisting of characters from different ranges of Unicode, for three versions of Python -- 2.7.3rc2, 3.2.3rc2+ and 3.3.0a1+. Little-endian 32-bit i686 builds, gcc 4.4.
encoding  string                 2.7    3.2    3.3
ascii     " " * 1000             5.4    5.3    1.2
latin1    " " * 1000             1.8    1.7    1.3
latin1    "\u0080" * 1000        1.7    1.6    1.0
utf-8     " " * 1000             6.7    2.4    2.1
utf-8     "\u0080" * 1000       12.2   11.0   13.0
utf-8     "\u0100" * 1000       12.2   11.1   13.6
utf-8     "\u0800" * 1000       14.7   14.4   17.2
utf-8     "\u8000" * 1000       13.9   13.3   17.1
utf-8     "\U00010000" * 1000   17.3   17.5   21.5
utf-16le  " " * 1000             5.5    2.9    6.5
utf-16le  "\u0080" * 1000        5.5    2.9    7.4
utf-16le  "\u0100" * 1000        5.5    2.9    8.9
utf-16le  "\u0800" * 1000        5.5    2.9    8.9
utf-16le  "\u8000" * 1000        5.5    7.5   21.3
utf-16le  "\U00010000" * 1000    9.6   12.9   30.1
utf-16be  " " * 1000             5.5    3.0    9.0
utf-16be  "\u0080" * 1000        5.5    3.1    9.8
utf-16be  "\u0100" * 1000        5.5    3.1   10.4
utf-16be  "\u0800" * 1000        5.5    3.1   10.4
utf-16be  "\u8000" * 1000        5.5    6.6   21.2
utf-16be  "\U00010000" * 1000    9.6   11.2   28.9
utf-32le  " " * 1000            10.2   10.4   15.1
utf-32le  "\u0080" * 1000       10.0   10.4   16.5
utf-32le  "\u0100" * 1000       10.0   10.4   19.8
utf-32le  "\u0800" * 1000       10.0   10.4   19.8
utf-32le  "\u8000" * 1000       10.1   10.4   19.8
utf-32le  "\U00010000" * 1000   11.7   11.3   20.2
utf-32be  " " * 1000            10.0   11.2   15.0
utf-32be  "\u0080" * 1000       10.1   11.2   16.4
utf-32be  "\u0100" * 1000       10.0   11.2   19.7
utf-32be  "\u0800" * 1000       10.1   11.2   19.7
utf-32be  "\u8000" * 1000       10.1   11.2   19.7
utf-32be  "\U00010000" * 1000   11.7   11.2   20.2

The first oddity is that characters from the second half of the Latin-1 table decode faster than characters from the first half. I think characters from the first half of the table should decode just as quickly. The second, sadder oddity is that UTF-16 decoding in 3.3 is much slower than even in 2.7. Compared with 3.2, decoding is 2-3 times slower. This is a considerable regression. UTF-32 decoding has also slowed down by 1.5-2 times. That UTF-8 decoding has also slowed in some cases is not surprising. I believe that on a platform with a 64-bit long there may be other oddities. How serious a problem is this for the Python 3.3 release? I could do the optimization, if someone is not already working on this.
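The benchmark scripts attached to this message were scrubbed from the archive, so the exact code is not preserved. The measurement approach described above, encode a 1000-character string once and then time repeated decoding, can be sketched like this (a hedged reconstruction of the method, not the original bench_decode.py):

```python
import timeit

def decode_usec(encoding, text, number=1000):
    # Encode once up front, then time repeated decoding of the same
    # bytes object; report microseconds per decode() call.
    data = text.encode(encoding)
    total = timeit.timeit(lambda: data.decode(encoding), number=number)
    return total / number * 1e6

if __name__ == "__main__":
    chars = [" ", "\u0080", "\u0100", "\u0800", "\u8000", "\U00010000"]
    for enc in ("ascii", "latin1", "utf-8", "utf-16le", "utf-16be",
                "utf-32le", "utf-32be"):
        for ch in chars:
            try:
                usec = decode_usec(enc, ch * 1000)
            except UnicodeEncodeError:
                continue  # e.g. "\u8000" cannot be encoded as latin1
            print("%-9s %-20s %6.1f" % (enc, ascii(ch) + " * 1000", usec))
```

Absolute numbers from such a loop depend on the build and machine, so only the ratios between columns are meaningful.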
-------------- next part -------------- A non-text attachment was scrubbed... Name: bench_decode.py Type: text/x-python Size: 806 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: bench_decode-2.py Type: text/x-python Size: 810 bytes Desc: not available URL: From g.brandl at gmx.net Sun Mar 25 18:32:39 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Sun, 25 Mar 2012 18:32:39 +0200 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: References: Message-ID: On 25.03.2012 17:54, Terry Reedy wrote: > On 3/25/2012 2:34 AM, Georg Brandl wrote: >> Here's another try, mainly with default browser font size, more contrast > > Untrue. You still changed the high contrast dark blue to the same low > contrast light blue for builtin names, etc. What problem do you think > you are trying to solve by making the doc difficult and even PAINFUL for > me to read? > > - a lot more than 1 "More contrast" was meant in comparison to iteration #1. Hmm, don't you think you'll get used to the new style in a while? The link color is not actually that light in comparison. Of course you can always use a user stylesheet to override our choices. Georg From g.brandl at gmx.net Sun Mar 25 18:37:12 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Sun, 25 Mar 2012 18:37:12 +0200 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: References: <20120325090422.GA11869@sleipnir.bytereef.org> Message-ID: On 25.03.2012 13:09, Stephen J. Turnbull wrote: > On Sun, Mar 25, 2012 at 11:04 AM, Stefan Krah wrote: > >> Do you mean a fixed search box like this one? >> >> http://coq.inria.fr/documentation >> >> Please don't do this, I find scrolling exceptionally distracting in the >> presence of fixed elements. > > Does it bother you when the header is fixed and contains > the search box? I prefer that arrangement, anyway. I think this idea has some merit. 
I'd prefer it to be tried out and implemented in a second step though (maybe by someone else, even? ;) Georg From solipsis at pitrou.net Sun Mar 25 19:01:37 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 25 Mar 2012 19:01:37 +0200 Subject: [Python-Dev] PEP 393 decode() oddity References: Message-ID: <20120325190137.7f0ffce0@pitrou.net> Hi, On Sun, 25 Mar 2012 19:25:10 +0300 Serhiy Storchaka wrote: > > But decoding is not so good. The general problem with decoding is that you don't know up front what width (1, 2 or 4 bytes) is required for the result. The solution is either to compute the width in a first pass (and decode in a second pass), or decode in a single pass and enlarge the result on the fly when needed. Both incur a slowdown compared to a single-size representation. > The first oddity in that the characters from the second half of the > Latin1 table decoded faster than the characters from the first half. I > think that the characters from the first half of the table must be > decoded as quickly. It's probably a measurement error on your part. > The second sad oddity in that UTF-16 decoding in 3.3 is much slower than > even in 2.7. Compared with 3.2 decoding is slower in 2-3 times. This is > a considerable regress. UTF-32 decoding is also slowed down by 1.5-2 times. I don't think UTF-32 is used a lot. As for UTF-16, if you can optimize it then why not. Regards Antoine. From martin at v.loewis.de Sun Mar 25 19:50:03 2012 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Sun, 25 Mar 2012 19:50:03 +0200 Subject: [Python-Dev] PEP 393 decode() oddity In-Reply-To: References: Message-ID: <4F6F5ACB.7090502@v.loewis.de> > How serious a problem this is for the Python 3.3 release? I could do the > optimization, if someone is not working on this already. I think the people who did the original implementation (Torsten, Victor, and myself) are done with optimizations. So: contributions are welcome. 
I'm not aware of any release-critical performance degradation (but I'd start with string formatting if I were you). From g.brandl at gmx.net Sun Mar 25 20:36:47 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Sun, 25 Mar 2012 20:36:47 +0200 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: References: Message-ID: On 25.03.2012 08:34, Georg Brandl wrote: > Here's another try, mainly with default browser font size, more contrast and > collapsible sidebar again: > > http://www.python.org/~gbrandl/build/html2/ > > I've also added a little questionable gimmick to the sidebar (when you collapse > it and expand it again, the content is shown at your current scroll location). Thanks everyone for the overwhelmingly positive feedback. I've committed the new design to 3.2 and 3.3 for now, and it will be live for the 3.3 docs momentarily (3.2 isn't rebuilt at the moment until 3.2.3 final goes out). I'll transplant to 2.7 too, probably after the final release of 2.7.3. Please make further suggestions (preferably with patches) through the bug tracker. cheers, Georg From storchaka at gmail.com Sun Mar 25 20:51:28 2012 From: storchaka at gmail.com (Serhiy Storchaka) Date: Sun, 25 Mar 2012 21:51:28 +0300 Subject: [Python-Dev] PEP 393 decode() oddity In-Reply-To: <20120325190137.7f0ffce0@pitrou.net> References: <20120325190137.7f0ffce0@pitrou.net> Message-ID: 25.03.12 20:01, Antoine Pitrou wrote: > The general problem with decoding is that you don't know up front what > width (1, 2 or 4 bytes) is required for the result. The solution is > either to compute the width in a first pass (and decode in a second > pass), or decode in a single pass and enlarge the result on the fly > when needed. Both incur a slowdown compared to a single-size > representation. We can significantly reduce the number of checks, using the same trick that is used for fast checking of surrogate characters.
While all characters are < U+0100, we know that the result is a 1-byte string (ASCII while all characters are < U+0080). When we meet a character >= U+0100, then while all characters are < U+10000 we know that the result is a 2-byte string. As soon as we meet the first character >= U+10000, we work with a 4-byte string. There will be several fast loops; the transition to the next loop occurs after a failure in the previous one.

> It's probably a measurement error on your part.

Anyone can test.

$ ./python -m timeit -s 'enc = "latin1"; import codecs; d = codecs.getdecoder(enc); x = ("\u0020" * 100000).encode(enc)' 'd(x)'
10000 loops, best of 3: 59.4 usec per loop
$ ./python -m timeit -s 'enc = "latin1"; import codecs; d = codecs.getdecoder(enc); x = ("\u0080" * 100000).encode(enc)' 'd(x)'
10000 loops, best of 3: 28.4 usec per loop

The results are fairly stable (±0.1 µsec) from run to run. It's a funny thing. From anacrolix at gmail.com Sun Mar 25 21:09:09 2012 From: anacrolix at gmail.com (Matt Joiner) Date: Mon, 26 Mar 2012 03:09:09 +0800 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: References: <20120325113657.GA13240@iskra.aviel.ru> Message-ID: Not sure if you addressed this in your answers to other comments... Scroll down the page. Minimize the nav bar on the left. Bring it back out again. Now the text in the nav bar permanently starts at an offset from the top of the page. On Sun, Mar 25, 2012 at 7:44 PM, Matt Joiner wrote: > Is nice yes?! When I small the nav bar, then embiggen it again, the text > centers vertically. It's in the wrong place. The new theme is very minimal, > perhaps a new color should be chosen. We've done green, what about orange, > brown or blue?
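Returning to the decode() thread: the escalating fast-loop scheme Serhiy describes can be modeled in pure Python (the real implementation would be a C loop over the input bytes; this is only an illustrative sketch of the control flow, not CPython code):

```python
def storage_width(code_points):
    # Model of the escalating fast loops: stay in the cheapest loop
    # until a character forces a wider representation, then fall
    # through to the next loop without rescanning from the start.
    it = iter(code_points)
    for ch in it:              # 1-byte loop: everything so far < U+0100
        if ch >= 0x100:
            break
    else:
        return 1               # exhausted the input without leaving the loop
    while ch < 0x10000:        # 2-byte loop: everything so far < U+10000
        nxt = next(it, None)
        if nxt is None:
            return 2
        ch = nxt
    return 4                   # met a character >= U+10000
```

The point of the structure is that the common all-Latin-1 case pays only one comparison per character; the wider loops are entered only when the data actually needs them.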
From steve at pearwood.info Sun Mar 25 21:11:20 2012 From: steve at pearwood.info (Steven D'Aprano) Date: Mon, 26 Mar 2012 06:11:20 +1100 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: References: Message-ID: <4F6F6DD8.9050901@pearwood.info> Georg Brandl wrote: > Thanks everyone for the overwhelmingly positive feedback. I've committed the > new design to 3.2 and 3.3 for now, and it will be live for the 3.3 docs > momentarily (3.2 isn't rebuilt at the moment until 3.2.3 final goes out). > I'll transplant to 2.7 too, probably after the final release of 2.7.3. I think it would be better to leave 2.7 with the old theme, to keep it visually distinct from the nifty new theme used with the nifty new 3.2 and 3.3 versions. -- Steven From p.f.moore at gmail.com Sun Mar 25 21:12:11 2012 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 25 Mar 2012 20:12:11 +0100 Subject: [Python-Dev] PEP 393 decode() oddity In-Reply-To: References: <20120325190137.7f0ffce0@pitrou.net> Message-ID: On 25 March 2012 19:51, Serhiy Storchaka wrote: > Anyone can test. > > $ ./python -m timeit -s 'enc = "latin1"; import codecs; d = > codecs.getdecoder(enc); x = ("\u0020" * 100000).encode(enc)' 'd(x)' > 10000 loops, best of 3: 59.4 usec per loop > $ ./python -m timeit -s 'enc = "latin1"; import codecs; d = > codecs.getdecoder(enc); x = ("\u0080" * 100000).encode(enc)' 'd(x)' > 10000 loops, best of 3: 28.4 usec per loop > > The results are fairly stable (?0.1 ?sec) from run to run. It looks funny > thing. Hmm, yes. I see the same results. Odd. 
PS D:\Data> py -3.3 -m timeit -s "enc = 'latin1'; import codecs; d = codecs.getdecoder(enc); x = ('\u0020' * 100000).encode(enc)" "d(x)"
10000 loops, best of 3: 37.3 usec per loop
PS D:\Data> py -3.3 -m timeit -s "enc = 'latin1'; import codecs; d = codecs.getdecoder(enc); x = ('\u0080' * 100000).encode(enc)" "d(x)"
100000 loops, best of 3: 18 usec per loop
PS D:\Data> py -3.3 -m timeit -s "enc = 'latin1'; import codecs; d = codecs.getdecoder(enc); x = ('\u0020' * 100000).encode(enc)" "d(x)"
10000 loops, best of 3: 37.6 usec per loop
PS D:\Data> py -3.3 -m timeit -s "enc = 'latin1'; import codecs; d = codecs.getdecoder(enc); x = ('\u0080' * 100000).encode(enc)" "d(x)"
100000 loops, best of 3: 18.3 usec per loop
PS D:\Data> py -3.3 -m timeit -s "enc = 'latin1'; import codecs; d = codecs.getdecoder(enc); x = ('\u0020' * 100000).encode(enc)" "d(x)"
10000 loops, best of 3: 37.8 usec per loop
PS D:\Data> py -3.3 -m timeit -s "enc = 'latin1'; import codecs; d = codecs.getdecoder(enc); x = ('\u0080' * 100000).encode(enc)" "d(x)"
100000 loops, best of 3: 18.3 usec per loop

Paul. From g.brandl at gmx.net Sun Mar 25 21:24:40 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Sun, 25 Mar 2012 21:24:40 +0200 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: References: <20120325113657.GA13240@iskra.aviel.ru> Message-ID: On 25.03.2012 21:09, Matt Joiner wrote: > Not sure if you addressed this in your answers to other comments... > > Scroll down the page. Minimize the nav bar on the left. Bring it back > out again. Now the text in the nav bar permanently starts at an offset > from the top of the page. Yes, that was the intention I mentioned in my post. That has been removed though from the final version I checked in. There are certainly much better solutions.
Georg From g.brandl at gmx.net Sun Mar 25 21:25:42 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Sun, 25 Mar 2012 21:25:42 +0200 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: <4F6F6DD8.9050901@pearwood.info> References: <4F6F6DD8.9050901@pearwood.info> Message-ID: On 25.03.2012 21:11, Steven D'Aprano wrote: > Georg Brandl wrote: > >> Thanks everyone for the overwhelmingly positive feedback. I've committed the >> new design to 3.2 and 3.3 for now, and it will be live for the 3.3 docs >> momentarily (3.2 isn't rebuilt at the moment until 3.2.3 final goes out). >> I'll transplant to 2.7 too, probably after the final release of 2.7.3. > > I think it would be better to leave 2.7 with the old theme, to keep it > visually distinct from the nifty new theme used with the nifty new 3.2 and 3.3 > versions. Hmm, -0 here. I'd like more opinions on this from other devs. Georg From janzert at janzert.com Sun Mar 25 21:48:34 2012 From: janzert at janzert.com (Janzert) Date: Sun, 25 Mar 2012 15:48:34 -0400 Subject: [Python-Dev] Rename time.steady(strict=True) to time.monotonic()? In-Reply-To: References: Message-ID: On 3/24/2012 6:37 AM, Victor Stinner wrote: >>> - time.monotonic(): monotonic clock, its speed may or may not be >>> adjusted by NTP but it only goes forward, may raise an OSError >>> - time.steady(): monotonic clock or the realtime clock, depending on >>> what is available on the platform (use monotonic in priority). may be >>> adjusted by NTP or the system administrator, may go backward. >>> >> >> I am surprised that a clock with the name time.steady() has a looser >> definition than one called time.monotonic(). To my mind a steady clock is by >> definition monotonic but a monotonic one may or may not be steady. > > Do you suggest another name? > > Victor I can't think of a word or short phrase that adequately describes that behavior, no. But that may just be because I also don't see any use case for it either. 
To me the more useful function would be one that used the OS monotonic clock when available and, failing that, used the realtime clock but cached the previously returned value and ensured that all returned values still obeyed the monotonic property. But I don't see why that function shouldn't just be time.monotonic(). Janzert From andrew.svetlov at gmail.com Sun Mar 25 21:50:59 2012 From: andrew.svetlov at gmail.com (Andrew Svetlov) Date: Sun, 25 Mar 2012 22:50:59 +0300 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: References: <4F6F6DD8.9050901@pearwood.info> Message-ID: I'd like to see the new scheme only for 3.3, as a sign of the shiny new release. On Sun, Mar 25, 2012 at 10:25 PM, Georg Brandl wrote: > On 25.03.2012 21:11, Steven D'Aprano wrote: >> Georg Brandl wrote: >> >>> Thanks everyone for the overwhelmingly positive feedback. I've committed the >>> new design to 3.2 and 3.3 for now, and it will be live for the 3.3 docs >>> momentarily (3.2 isn't rebuilt at the moment until 3.2.3 final goes out). >>> I'll transplant to 2.7 too, probably after the final release of 2.7.3. >> >> I think it would be better to leave 2.7 with the old theme, to keep it >> visually distinct from the nifty new theme used with the nifty new 3.2 and 3.3 >> versions. > > Hmm, -0 here. I'd like more opinions on this from other devs. > > Georg > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/andrew.svetlov%40gmail.com -- Thanks, Andrew Svetlov From pmoody at google.com Sun Mar 25 21:58:38 2012 From: pmoody at google.com (Peter Moody) Date: Sun, 25 Mar 2012 12:58:38 -0700 Subject: [Python-Dev] PEP czar for PEP 3144?
In-Reply-To: <4F67AA1E.6070907@stoneleaf.us> References: <4F678AEF.4080002@stoneleaf.us> <4F679384.4040602@stoneleaf.us> <4F67AA1E.6070907@stoneleaf.us> Message-ID: On Mon, Mar 19, 2012 at 2:50 PM, Ethan Furman wrote: > [1] I'm assuming that 'iter(some_list)' is a quick operation. This seems to be the case, so I've just gone ahead and renamed collapse_address_list to collapse_addresses and added 'return iter(...)' to the end. The rest of the list-returning methods all return iterators now too. There should only be a few minor outstanding issues to work out. Cheers, peter -- Peter Moody Google 1.650.253.7306 Security Engineer pgp:0xC3410038 From martin at v.loewis.de Sun Mar 25 22:55:13 2012 From: martin at v.loewis.de (martin at v.loewis.de) Date: Sun, 25 Mar 2012 22:55:13 +0200 Subject: [Python-Dev] PEP 393 decode() oddity In-Reply-To: References: <20120325190137.7f0ffce0@pitrou.net> Message-ID: <20120325225513.Horde.v_hfaKGZi1VPb4YxHpTWb7A@webmail.df.eu> > Anyone can test. > > $ ./python -m timeit -s 'enc = "latin1"; import codecs; d = > codecs.getdecoder(enc); x = ("\u0020" * 100000).encode(enc)' 'd(x)' > 10000 loops, best of 3: 59.4 usec per loop > $ ./python -m timeit -s 'enc = "latin1"; import codecs; d = > codecs.getdecoder(enc); x = ("\u0080" * 100000).encode(enc)' 'd(x)' > 10000 loops, best of 3: 28.4 usec per loop > > The results are fairly stable (±0.1 µsec) from run to run. It looks like a > funny thing. This is not surprising. When decoding Latin-1, the codec needs to determine whether the string is pure ASCII or not. If it is not, it must be all Latin-1 (it can't be non-Latin-1). For a pure ASCII string, it needs to scan over the entire string, trying to find a non-ASCII character; if there is none, it has inspected the entire string. In your example, as the first character is already above 127, the search for the maximum character can stop immediately, so it needs to scan the string only once.
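A rough pure-Python model of the scan described above (illustrative only; the real fast path is C code inside CPython's Unicode implementation, not this function):

```python
def latin1_scan(data: bytes):
    """Model of the latin-1 decode fast path.

    Latin-1 decoding must pick a representation: pure-ASCII input
    requires examining every byte, while a high byte near the start
    ends the search for the maximum character almost immediately.
    Returns (kind, number_of_bytes_examined).
    """
    for i, byte in enumerate(data):
        if byte >= 0x80:
            # Stop as soon as a non-ASCII byte is seen; the result
            # must use the full latin-1 range.
            return "latin-1", i + 1
    return "ascii", len(data)

# '\u0020' * N encodes to pure ASCII: the scan touches all N bytes.
# '\u0080' * N stops at the very first byte, hence the faster decode.
print(latin1_scan(b" " * 100000))     # ('ascii', 100000)
print(latin1_scan(b"\x80" * 100000))  # ('latin-1', 1)
```

Under this model, a string like '\u0020' * 999999 + '\u0080' still has to examine every byte before finding the high one, which is why it times like the pure-ASCII case.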
Try '\u0020' * 999999 + '\u0080', which is a non-ASCII string but still takes the same time as the pure ASCII string. Regards, Martin From fijall at gmail.com Sun Mar 25 23:08:49 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sun, 25 Mar 2012 23:08:49 +0200 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: References: <4F6F6DD8.9050901@pearwood.info> Message-ID: On Sun, Mar 25, 2012 at 9:25 PM, Georg Brandl wrote: > On 25.03.2012 21:11, Steven D'Aprano wrote: >> Georg Brandl wrote: >> >>> Thanks everyone for the overwhelmingly positive feedback. I've committed the >>> new design to 3.2 and 3.3 for now, and it will be live for the 3.3 docs >>> momentarily (3.2 isn't rebuilt at the moment until 3.2.3 final goes out). >>> I'll transplant to 2.7 too, probably after the final release of 2.7.3. >> >> I think it would be better to leave 2.7 with the old theme, to keep it >> visually distinct from the nifty new theme used with the nifty new 3.2 and 3.3 >> versions. > > Hmm, -0 here. I'd like more opinions on this from other devs. > > Georg I would definitely like the new theme on the 2.7 docs as well, since 2.7 is still supported. Cheers, fijal From brian at python.org Sun Mar 25 23:17:20 2012 From: brian at python.org (Brian Curtin) Date: Sun, 25 Mar 2012 16:17:20 -0500 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: References: <4F6F6DD8.9050901@pearwood.info> Message-ID: On Sun, Mar 25, 2012 at 14:50, Andrew Svetlov wrote: > I'd like to see the new scheme only for 3.3, as a sign of the shiny new release. Please don't do this. It will result in endless complaints.
From ben+python at benfinney.id.au Sun Mar 25 23:25:28 2012 From: ben+python at benfinney.id.au (Ben Finney) Date: Mon, 26 Mar 2012 08:25:28 +1100 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 References: <4F6F6DD8.9050901@pearwood.info> Message-ID: <877gy8id9z.fsf@benfinney.id.au> Brian Curtin writes: > On Sun, Mar 25, 2012 at 14:50, Andrew Svetlov wrote: > > I'd like to see the new scheme only for 3.3, as a sign of the shiny new release. > > Please don't do this. It will result in endless complaints. Complaints of what nature? Do you think those complaints are justified? -- \ “Nature … is seen to do all things herself and through | `\ herself of own accord, rid of all gods.” --Titus Lucretius | _o__) Carus, c. 40 BCE | Ben Finney From tjreedy at udel.edu Sun Mar 25 23:42:33 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Sun, 25 Mar 2012 17:42:33 -0400 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: References: Message-ID: On 3/25/2012 12:32 PM, Georg Brandl wrote: > On 25.03.2012 17:54, Terry Reedy wrote: >> On 3/25/2012 2:34 AM, Georg Brandl wrote: >>> Here's another try, mainly with default browser font size, more contrast >> >> Untrue. You still changed the high contrast dark blue to the same low >> contrast light blue for builtin names, etc. What problem do you think >> you are trying to solve by making the doc difficult and even PAINFUL for >> me to read? >> >> - a lot more than 1 > > "More contrast" was meant in comparison to iteration #1. It is still subjectively dim enough to me that I could not tell from memory. I ran the following experiment: I put old and new versions of the builtin functions page side-by-side in separate browser windows. I asked my teenage daughter to come into the room, approach slowly, and say when she could read one or both windows. At about 5 feet, she could (just) read the old but not the new.
If other people repeat the experiment and get the same result, it would then be fair to say that the new style is objectively less readable in regard to this one aspect. > Hmm, don't you think you'll get used to the new style in a while? This is a bit like asking a wheelchair user if he would get used to having a ramp ground down to add little one-inch steps every two feet, because leg-abled people found that somehow more aesthetic. Answer: somewhat. Wired magazine has used a similar thin blue font. I got used to that by ignoring any text written with it. > The link color is not actually that light in comparison. Using a magnifying glass, the difference seems to be more one of thickness -- 2 pixel lines versus 1-1.5 pixel lines. I have astigmatism that is only partly correctable, and the residual blurring of single-pixel lines tends to somewhat mix text color with the background color. > Of course you can always use a user stylesheet to override our choices. Can anyone tell me the best way to do that with Firefox? Is it even possible with the Windows help version, which is what I usually use? -- Terry Jan Reedy From victor.stinner at gmail.com Mon Mar 26 00:28:33 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Mon, 26 Mar 2012 00:28:33 +0200 Subject: [Python-Dev] PEP 393 decode() oddity In-Reply-To: References: Message-ID: Cool, Python 3.3 is *much* faster to decode pure ASCII :-)

> encoding  string                2.7   3.2   3.3
>
> ascii     " " * 1000            5.4   5.3   1.2

4.5x faster than Python 2 here.

> utf-8     " " * 1000            6.7   2.4   2.1

3.2x faster. It's cool because in practice, a lot of strings are pure ASCII (as Martin showed in his Django benchmark).

> latin1    " " * 1000            1.8   1.7   1.3
> latin1    "\u0080" * 1000       1.7   1.6   1.0
> ...
> The first oddity is that the characters from the second half of the Latin1
> table decoded faster than the characters from the first half.
The Latin1 decoder of Python 3.3 is *faster* than the decoders of Python 2.7 and 3.2 according to your benchmark, so I don't see any issue here :-) Martin explained why it is slower for pure ASCII. > I think that the characters from the first half of the table > must be decoded as quickly. The Latin1 decoder is already heavily optimized; I don't see how to make it faster. > The second sad oddity is that UTF-16 decoding in 3.3 is much slower than > even in 2.7. Compared with 3.2, decoding is 2-3 times slower. This is a > considerable regression. UTF-32 decoding is also slowed down by 1.5-2 times. Only the ASCII, latin1 and UTF-8 decoders are heavily optimized. We can do better for UTF-16 and UTF-32. I'm just less motivated because UTF-16/32 are less common than ASCII/latin1/UTF-8. > How serious a problem is this for the Python 3.3 release? I could do the > optimization, if someone is not working on this already. I'm interested in any patch optimizing any Python codec. I'm not working on optimizing Python Unicode anymore; various benchmarks showed me that Python 3.3 is as good as or faster than Python 3.2. That's enough for me. When Python 3.3 is slower than Python 3.2, it's because Python 3.3 must compute the maximum character of the result, and I fail to see how to optimize this requirement. I already introduced many fast paths where it was possible, like creating a substring of an ASCII string (the result is ASCII, so there's no need to scan the substring).
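The maxchar pass described above can be modelled in a few lines (a sketch of how PEP 393 picks a representation, not the actual C implementation):

```python
def pep393_width(s: str) -> int:
    """Bytes per character a PEP 393 string object would use for s.

    Building a new string requires one pass over the code points to
    find the maximum -- that pass is the unavoidable cost mentioned
    above.
    """
    maxchar = max(map(ord, s), default=0)
    if maxchar < 0x100:
        return 1  # latin-1 range: compact 1-byte storage
    if maxchar < 0x10000:
        return 2  # UCS-2 range
    return 4      # astral code points need full UCS-4

print(pep393_width("ascii only"))   # 1
print(pep393_width("caf\u00e9"))    # 1 (U+00E9 still fits in one byte)
print(pep393_width("\u0394elta"))   # 2 (Greek capital delta)
print(pep393_width("\U0001F40D"))   # 4 (astral plane)
```

The ASCII-substring fast path works precisely because this pass can be skipped: any slice of an all-ASCII width-1 string is known to be width 1 without rescanning.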
That doesn't mean it is no longer possible to optimize Python Unicode ;-) Victor From victor.stinner at gmail.com Mon Mar 26 01:03:46 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Mon, 26 Mar 2012 01:03:46 +0200 Subject: [Python-Dev] [Python-checkins] cpython: Issue #7652: Integrate the decimal floating point libmpdec library to speed In-Reply-To: <20120323104005.GA17581@sleipnir.bytereef.org> References: <20120323092255.GA17205@sleipnir.bytereef.org> <20120323104005.GA17581@sleipnir.bytereef.org> Message-ID: > The 80x is a ballpark figure for the maximum expected speedup for > standard numerical floating point applications. OK, but it's just surprising when you read the What's New document: 72x and 80x look inconsistent. > For huge numbers _decimal is also faster than int: > > factorial(1000000): > > _decimal, calculation time: 6.844487905502319 > _decimal, tostr(): 0.033592939376831055 > > int, calculation time: 17.96010398864746 > int, tostr(): ... still running ... Hmm, with a resolution able to store the result with all digits? If yes, would it be possible to reuse the multiplication algorithm of _decimal (and maybe of other functions) for int? Or does it depend heavily on _decimal's internal structures? Victor From steve at pearwood.info Mon Mar 26 02:37:56 2012 From: steve at pearwood.info (Steven D'Aprano) Date: Mon, 26 Mar 2012 11:37:56 +1100 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: References: Message-ID: <4F6FBA64.9080101@pearwood.info> Terry Reedy wrote: > On 3/25/2012 12:32 PM, Georg Brandl wrote: >> On 25.03.2012 17:54, Terry Reedy wrote: >>> On 3/25/2012 2:34 AM, Georg Brandl wrote: >>>> Here's another try, mainly with default browser font size, more >>>> contrast >>> >>> Untrue. You still changed the high contrast dark blue to the same low >>> contrast light blue for builtin names, etc.
What problem do you think >>> you are trying to solve by making the doc difficult and even PAINFUL for >>> me to read? >>> >>> - a lot more than 1 >> >> "More contrast" was meant in comparison to iteration #1. > > It is still subjectively dim enough to me that I could not tell from > memory. > > I ran the following experiment: I put old and new versions of the builtin > functions page side-by-side in separate browser windows. I asked my > teenage daughter to come into the room, approach slowly, and say when > she could read one or both windows. At about 5 feet, she could (just) > read the old but not the new. Do you often read things on your computer monitor from 5ft away? While I sympathize with the ideal of making the docs readable, particularly for those of us who don't have 20-20 vision, "must be readable from halfway across the room" is setting the bar too high. What is important is not *absolute* readability, but readability relative to the normal use-case of sitting at a computer under typical reading conditions. To be honest here, I don't even know which elements you are having trouble with. I don't see any elements with low enough contrast to cause problems, at least not for me. Even with my glasses off, I find the built-in names to be no less readable than the vanilla text around them. E.g. on this page: http://www.python.org/~gbrandl/build/html2/library/stdtypes.html I see built-in names such as `int` and `str` written as hyperlinks in medium blue on a white background. When I hover over the link, it becomes a touch lighter blue, but not enough to appreciably hurt contrast and readability. I see literals such as `{}` in black on a pale blue-grey background. The background is faint enough that it is hardly noticeable, not enough to hurt contrast. So I don't know what you are speaking of when you say "the same low contrast light blue for builtin names, etc." -- can you give an example? [...]
> Using a magnifying glass, the difference seems to be more one of > thickness -- 2 pixel lines versus 1-1.5 pixel lines. I have astigmatism > that is only partly correctable and the residual blurring of > single-pixel lines tends to somewhat mix text color with the background > color. For what it's worth, it wouldn't surprise me if the problem is the fallback font. If I'm reading the CSS correctly, the standard font used in the new docs is Lucida Grande, with a fallback of Arial. Unfortunately, Lucida Grande is normally only available on the Apple Mac, and Arial is a notoriously poor choice for on-screen text (particularly in smaller text sizes). http://en.wikipedia.org/wiki/Lucida_Grande suggests fallbacks of Lucida Sans Unicode, Tahoma, and Verdana. Could they please be tried before Arial? E.g. change the font-family from font-family: 'Lucida Grande',Arial,sans-serif; to font-family: 'Lucida Grande','Lucida Sans Unicode','Lucida Sans',Tahoma,Verdana,Arial,sans-serif; or similar. -- Steven From ncoghlan at gmail.com Mon Mar 26 03:02:56 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 26 Mar 2012 11:02:56 +1000 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: References: Message-ID: On Mon, Mar 26, 2012 at 7:42 AM, Terry Reedy wrote: > Can anyone tell me the best way to do that with Firefox? For general web browsing, I'm reasonably impressed by the effectiveness of www.readability.com. It's a sign-up service however, and I've never tried it on technical material like the Python docs. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From brian at python.org Mon Mar 26 03:27:16 2012 From: brian at python.org (Brian Curtin) Date: Sun, 25 Mar 2012 20:27:16 -0500 Subject: [Python-Dev] AUTO: Jon K Peck is out of the office (returning 03/30/2012) In-Reply-To: References: Message-ID: On Sun, Mar 25, 2012 at 05:00, Jon K Peck wrote: > > I am out of the office until 03/30/2012.
> > I will be out of the office through Friday, March 30. I expect to have > some email access but may be delayed in responding. > > > Note: This is an automated response to your message "Python-Dev Digest, > Vol 104, Issue 91" sent on 03/25/2012 1:19:50. > > This is the only notification you will receive while this person is away. Enjoy your vacation. From scott+python-dev at scottdial.com Mon Mar 26 04:23:09 2012 From: scott+python-dev at scottdial.com (Scott Dial) Date: Sun, 25 Mar 2012 22:23:09 -0400 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: <4F6FBA64.9080101@pearwood.info> References: <4F6FBA64.9080101@pearwood.info> Message-ID: <4F6FD30D.3030508@scottdial.com> On 3/25/2012 8:37 PM, Steven D'Aprano wrote: > E.g. change the font-family from > > font-family: 'Lucida Grande',Arial,sans-serif; > > to > > font-family: 'Lucida Grande','Lucida Sans Unicode','Lucida > Sans',Tahoma,Verdana,Arial,sans-serif; > > or similar. > +1 to providing other fallbacks. As Steven says, on my Win7 machine, I do not have 'Lucida Grande', and it wasn't until he mentioned this that I compared the experience of the site with my MacBook (which looks much better!). This machine has both 'Lucida Sans Unicode' and 'Lucida Sans', and it's a toss-up to me which is better -- one is better than the other in certain contexts. Presumably, the character coverage of the Unicode font makes it the superior choice. Personally, I would leave Tahoma out of the list -- the kerning of the font is really aggressive and I find it much harder to read than Verdana.
-- Scott Dial scott at scottdial.com From tjreedy at udel.edu Mon Mar 26 05:23:22 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Sun, 25 Mar 2012 23:23:22 -0400 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: <4F6FBA64.9080101@pearwood.info> References: <4F6FBA64.9080101@pearwood.info> Message-ID: On 3/25/2012 8:37 PM, Steven D'Aprano wrote: > Terry Reedy wrote: >> I ran the following experiment: I put old and new versions of the >> builtin functions page side-by-side in separate browser windows. I >> asked my teenage daughter to come into the room, approach slowly, and >> say when she could read one or both windows. At about 5 feet, she >> could (just) read the old but not the new. The test page I used was the builtin function page in the Library Reference. > Do you often read things on your computer monitor from 5ft away? No, I cannot possibly do that. The point is that there *is* a distance for her as well as me at which the old style is clearly more readable than the new. For my daughter, it is 4-5 feet. For me, it is about 2 feet -- with my prescription computer reading glasses. For you, try it and find out. Obviously, most of the people here are more like my daughter than me. I am visually handicapped, and that particular new style is worse for me. And to what purpose? -- Terry Jan Reedy From eliben at gmail.com Mon Mar 26 05:40:41 2012 From: eliben at gmail.com (Eli Bendersky) Date: Mon, 26 Mar 2012 05:40:41 +0200 Subject: [Python-Dev] PEP 411 - request for pronouncement In-Reply-To: References: Message-ID: On Sat, Mar 24, 2012 at 13:53, Lennart Regebro wrote: > On Fri, Mar 23, 2012 at 10:51, Eli Bendersky wrote: >> The PEP received mostly positive feedback. The only undecided point is >> where to specify that the package is provisional. Currently the PEP >> mandates to specify it in the documentation and in the docstring.
>> Other suggestions were to put it in the code, either as a >> __provisional__ attribute on the module, or collect all such modules >> in a single sys.provisional list. > > I'm not sure what the use case is for checking in code if a module is > provisional or not. It doesn't seem useful, and risks being > unmaintained, especially when the flag is on the module itself. > Some use cases were given by Jim J. Jewett here: http://mail.python.org/pipermail/python-dev/2012-February/116400.html (and +1-ed by a couple of others.) Eli From eliben at gmail.com Mon Mar 26 06:11:42 2012 From: eliben at gmail.com (Eli Bendersky) Date: Mon, 26 Mar 2012 06:11:42 +0200 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: References: <4F6F6DD8.9050901@pearwood.info> Message-ID: On Sun, Mar 25, 2012 at 21:25, Georg Brandl wrote: > On 25.03.2012 21:11, Steven D'Aprano wrote: >> Georg Brandl wrote: >> >>> Thanks everyone for the overwhelmingly positive feedback. I've committed the >>> new design to 3.2 and 3.3 for now, and it will be live for the 3.3 docs >>> momentarily (3.2 isn't rebuilt at the moment until 3.2.3 final goes out). >>> I'll transplant to 2.7 too, probably after the final release of 2.7.3. > Georg, Is there a tracker issue to collect reports about misbehavior of the new theme? Specifically, the sidebar behaves very strangely on the search page. I think it should not be there at all. Eli From tjreedy at udel.edu Mon Mar 26 06:22:30 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Mon, 26 Mar 2012 00:22:30 -0400 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: <4F6FBA64.9080101@pearwood.info> References: <4F6FBA64.9080101@pearwood.info> Message-ID: On 3/25/2012 8:37 PM, Steven D'Aprano wrote: > For what it's worth, it wouldn't surprise me if the problem is the > fallback font. If I'm reading the CSS correctly, the standard font used > in the new docs is Lucida Grande, with a fallback of Arial.
> Unfortunately, Lucida Grande is normally only available on the Apple > Mac, and Arial is a notoriously poor choice for on-screen text > (particularly in smaller text sizes). Testing in LibreOffice, I think Arial may be easier to read, as it has a consistent stroke width, whereas Lucida has thin horizontals. Lucida does look a bit more elegant, though. In any case, Arial seems to be the basic text font I see in Firefox and Windows help, and I have no problem with it. The particular entries I have discussed are class="reference internal" These have light serifs. Testing in LibreOffice, the font seems to be Courier New. It was previously Courier New 'bold'. I put that in quotes because Courier New is a 'light' font, so the 'bold' is normal relative to Arial. In other words, Courier bold matches normal Arial in stroke weight, so that it looks right mixed in with Arial, whereas the Courier light is jarring. In a sentence like "this returns False; otherwise it returns True. bool is also a class, which is a subclass of int", the words False, True (which use light-weight black rather than light-weight blue Courier), bool, and int all stand out (or stand in) because they have a lighter stroke weight as well as a different (serif versus non-serif) font. These are important words and should not be made to recede into the background as if they were unimportant and optionally skipped. To me, this is backwards and poor design. False, False are marked up like this: False bool, int are like so: Does the CSS specify Courier New, or is this an unfortunate fallback that might be improved? Perhaps things look better on Mac/*nix?
-- Terry Jan Reedy From greg.ewing at canterbury.ac.nz Mon Mar 26 07:00:34 2012 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Mon, 26 Mar 2012 18:00:34 +1300 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: <4F6FBA64.9080101@pearwood.info> References: <4F6FBA64.9080101@pearwood.info> Message-ID: <4F6FF7F2.3000806@canterbury.ac.nz> Steven D'Aprano wrote: > While I sympathize with the ideal of making the docs readable, > particularly for those of us who don't have 20-20 vision, "must be > readable from halfway across the room" is setting the bar too high. The point is that reducing contrast never makes anything more readable, and under some conditions it makes things less readable. So the only reason for using less than the maximum available contrast is aesthetics, and whether grey-on-white looks any nicer than black-on-white is very much a matter of opinion. In any case, the aesthetic difference is a very minor one, and you have to ask whether it's really worth compromising on contrast. > If I'm reading the CSS correctly, the standard font used > in the new docs is Lucida Grande, with a fallback of Arial. > Unfortunately, Lucida Grande is normally only available on the Apple > Mac, and Arial is a notoriously poor choice for on-screen text This seems to be another case of the designer over-specifying things. The page should just specify a sans-serif font and let the browser choose the best one available. Or not specify a font at all and leave it up to the user whether he wants serif or sans-serif for the body text -- some people have already said here that they prefer serif.
-- Greg From regebro at gmail.com Mon Mar 26 08:48:33 2012 From: regebro at gmail.com (Lennart Regebro) Date: Mon, 26 Mar 2012 08:48:33 +0200 Subject: [Python-Dev] PEP 411 - request for pronouncement In-Reply-To: References: Message-ID: On Mon, Mar 26, 2012 at 05:40, Eli Bendersky wrote: > On Sat, Mar 24, 2012 at 13:53, Lennart Regebro wrote: >> On Fri, Mar 23, 2012 at 10:51, Eli Bendersky wrote: >>> The PEP received mostly positive feedback. The only undecided point is >>> where to specify that the package is provisional. Currently the PEP >>> mandates to specify it in the documentation and in the docstring. >>> Other suggestions were to put it in the code, either as a >>> __provisional__ attribute on the module, or collect all such modules >>> in a single sys.provisional list. >> >> I'm not sure what the use case is for checking in code if a module is >> provisional or not. It doesn't seem useful, and risks being >> unmaintained, especially when the flag is on the module itself. > > Some use cases were given by Jim J. Jewett here: > http://mail.python.org/pipermail/python-dev/2012-February/116400.html > (and +1-ed by a couple of others.) In that case, having a list of provisional modules seems the most helpful option, along with noting that a module is provisional in the docs shown when you do help(). From anacrolix at gmail.com Mon Mar 26 08:50:48 2012 From: anacrolix at gmail.com (Matt Joiner) Date: Mon, 26 Mar 2012 14:50:48 +0800 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: <4F6FF7F2.3000806@canterbury.ac.nz> References: <4F6FBA64.9080101@pearwood.info> <4F6FF7F2.3000806@canterbury.ac.nz> Message-ID: FWIW, it doesn't hurt to err on the side of what worked. I generally have issues with low contrast, and the current stable design is very good in this respect.
I've just built the docs from tip, and the nav bar issue is fixed; nicely done. I also don't see any reason to backport theme changes. +0 From g.brandl at gmx.net Mon Mar 26 09:11:50 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Mon, 26 Mar 2012 09:11:50 +0200 Subject: [Python-Dev] cpython: Adding unittest.mock documentation In-Reply-To: References: Message-ID: On 26.03.2012 00:13, michael.foord wrote: > http://hg.python.org/cpython/rev/adc1fc2dc872 > changeset: 75938:adc1fc2dc872 > user: Michael Foord > date: Sun Mar 25 23:12:55 2012 +0100 > summary: > Adding unittest.mock documentation > > files: > Doc/library/development.rst | 6 + > Doc/library/unittest.mock-examples.rst | 887 +++++++++ > Doc/library/unittest.mock-getting-started.rst | 419 ++++ > Doc/library/unittest.mock-helpers.rst | 537 +++++ > Doc/library/unittest.mock-magicmethods.rst | 226 ++ > Doc/library/unittest.mock-patch.rst | 538 +++++ > Doc/library/unittest.mock.rst | 900 ++++++++++ > Lib/unittest/mock.py | 8 +- > 8 files changed, 3516 insertions(+), 5 deletions(-) That seems like a bit too much splitting to me. (By the way, the ".. module::" directive should only be in *one* place.) I would organize mock, mock-patch, mock-magicmethods and mock-helpers as one file in Doc/library, and put the other two in Doc/howto, just as for logging. In general, I wouldn't mind splitting off more of the exemplary material from the main library docs, putting it in the howto section.
Georg From scott+python-dev at scottdial.com Mon Mar 26 14:44:40 2012 From: scott+python-dev at scottdial.com (Scott Dial) Date: Mon, 26 Mar 2012 08:44:40 -0400 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: <4F6FF7F2.3000806@canterbury.ac.nz> References: <4F6FBA64.9080101@pearwood.info> <4F6FF7F2.3000806@canterbury.ac.nz> Message-ID: <4F7064B8.5050406@scottdial.com> On 3/26/2012 1:00 AM, Greg Ewing wrote: > This seems to be another case of the designer over-specifying > things. The page should just specify a sans-serif font and let > the browser choose the best one available. Or not specify > a font at all and leave it up to the user whether he wants > serif or sans-serif for the body text -- some people have > already said here that they prefer serif. Why even bother formatting the page? The authorship and editorship have authority to dictate the presentation of the content. A large part of the effectiveness of a document and its ease of consumption is determined by how it appears in whatever medium it's delivered on. While this particular medium invites the readership to participate in design choices, fonts are not all created equal, and practical matters (size, tracking, and kerning) will dictate that some fonts present better than others. Consistent presentation across different systems is also a virtue, since people develop familiarity with the presentation and find information more readily if the presentation is consistent. I have no problem with Georg dictating to me the best font with which to present the documentation. However, I'd appreciate fallback choices that are of a similar appearance along the way to the ultimate fallback of "sans-serif". Practically, the fonts available are unknown, and unless we adopt a liberally licensed OpenType font and use @font-face to embed it, we need to provide fallbacks.
-- Scott Dial scott at scottdial.com From zvezdan at computer.org Mon Mar 26 14:46:11 2012 From: zvezdan at computer.org (Zvezdan Petkovic) Date: Mon, 26 Mar 2012 08:46:11 -0400 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: References: <4F6FBA64.9080101@pearwood.info> Message-ID: <3677ADF1-9CA6-49F7-8331-81441D7EEB58@computer.org> On Mar 26, 2012, at 12:22 AM, Terry Reedy wrote: > Does the css specify Courier New or is this an unfortunate fallback that might be improved? Perhaps things look better on max/*nix? I just checked pydoctheme.css and Courier New is not specified there. It only specifies monospace. That's a default monospace font set in your browser. I see the code rendered in the font I selected in my browser preferences as Fixed-width font: Menlo 14pt. It's not thin at all -- that's why I selected it. :-) It seems you may want to change that setting in your browser. Firefox uses Courier New as a default setting. Zvezdan From anacrolix at gmail.com Mon Mar 26 14:57:21 2012 From: anacrolix at gmail.com (Matt Joiner) Date: Mon, 26 Mar 2012 20:57:21 +0800 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: <3677ADF1-9CA6-49F7-8331-81441D7EEB58@computer.org> References: <4F6FBA64.9080101@pearwood.info> <3677ADF1-9CA6-49F7-8331-81441D7EEB58@computer.org> Message-ID: the text in the nav bar is too small, particularly in the search box. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdmurray at bitdance.com Mon Mar 26 15:19:27 2012 From: rdmurray at bitdance.com (R. 
David Murray) Date: Mon, 26 Mar 2012 09:19:27 -0400 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: <4F7064B8.5050406@scottdial.com> References: <4F6FBA64.9080101@pearwood.info> <4F6FF7F2.3000806@canterbury.ac.nz> <4F7064B8.5050406@scottdial.com> Message-ID: <20120326131928.6D0E32500E9@webabinitio.net> On Mon, 26 Mar 2012 08:44:40 -0400, Scott Dial wrote: > Why even bother formatting the page? The web started out as *content markup*. Functional declarations, not style declarations. I wish it had stayed that way, but it was inevitable that it would not. > The authorship and editorship have authority to dictate the presentation > of the content. A large part of the effectiveness of a document and its > ease of consumption is determined by how it appears in whatever medium > it's delivered on. While this particular medium invites the readership This argument can get as bad as editor religious wars. I like sites that do more content markup and less styling. You like the reverse. There is a good reason to separate css (style) from markup. I just wish browsers gave easier control over the css, and that more designers would stay conscious of how a page *reads* (ie: access to the content) without the css (and javascript). I think Georg's design does pretty well in that last regard. (Except for those darn paragraph characters after the headers, but that flaw was in the old design too. Oh, and it would be even better, IMO, if the top navigation block wasn't there when there's no CSS, but that's more of an issue for easier access from screen readers, since the block is reasonably short). --David From van.lindberg at gmail.com Mon Mar 26 16:06:32 2012 From: van.lindberg at gmail.com (VanL) Date: Mon, 26 Mar 2012 09:06:32 -0500 Subject: [Python-Dev] Python install layout and the PATH on win32 In-Reply-To: References: Message-ID: <4F7077E8.2070005@gmail.com> I heard back from Enthought on this part of the proposal.
They could accommodate this change. 1) The layout for the python root directory for all platforms should be as follows:
stdlib = {base/userbase}/lib/
platstdlib = {base/userbase}/lib/
purelib = {base/userbase}/lib/site-packages
platlib = {base/userbase}/lib/site-packages
include = {base/userbase}/include/python{py_version_short}
scripts = {base/userbase}/bin
data = {base/userbase}
From tjreedy at udel.edu Mon Mar 26 17:21:46 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Mon, 26 Mar 2012 11:21:46 -0400 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: <3677ADF1-9CA6-49F7-8331-81441D7EEB58@computer.org> References: <4F6FBA64.9080101@pearwood.info> <3677ADF1-9CA6-49F7-8331-81441D7EEB58@computer.org> Message-ID: On 3/26/2012 8:46 AM, Zvezdan Petkovic wrote: > > On Mar 26, 2012, at 12:22 AM, Terry Reedy wrote: > >> Does the css specify Courier New or is this an unfortunate fallback >> that might be improved? Perhaps things look better on mac/*nix? > > I just checked pydoctheme.css and Courier New is not specified > there. It only specifies monospace. > > That's a default monospace font set in your browser. I see the code > rendered in the font I selected in my browser preferences as > Fixed-width font: Menlo 14pt. It's not thin at all -- that's why I > selected it. :-) > > It seems you may want to change that setting in your browser. Firefox > uses Courier New as a default setting. I found the Firefox monospace setting under Tools / Options / Content / Default font: / Advanced and switched to Deja Vu mono, that being the first obviously monospace font I saw. (Lucida Console is similar.) It has the same 2-pixel lines as Arial, and the page now looks okay, although when black (as for False, True), the lack of serifs reduces the contrast with Arial. I am guessing that the page now looks somewhat more like it did for Georg when he worked on it. Windows Help uses Internet Explorer settings.
Options / Internet Options / General / Appearance / Fonts However, this only allows choice of base font for pages without a specified text font, so I do not know what will happen when the new format is applied to the .chm files. -- Terry Jan Reedy From pje at telecommunity.com Mon Mar 26 18:35:18 2012 From: pje at telecommunity.com (PJ Eby) Date: Mon, 26 Mar 2012 12:35:18 -0400 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: References: <4F6935E1.2030309@nedbatchelder.com> <4F69E337.7010501@nedbatchelder.com> <4F6A560A.6050207@canterbury.ac.nz> <5260192D-7EF1-4021-AB50-FD2021566CFF@twistedmatrix.com> <4F6D2061.3080804@canterbury.ac.nz> <4F6D5C52.1030802@canterbury.ac.nz> <87y5qpikah.fsf@benfinney.id.au> Message-ID: On Sun, Mar 25, 2012 at 2:56 AM, Stephen J. Turnbull wrote: > But since he's arguing the > other end in the directory layout thread (where he says there are many > special ways to invoke Python so that having different layouts on > different platforms is easy to work around), I can't give much weight > to his preference here. > You're misconstruing my argument there: I said, rather, that the One Obvious Way to deploy a Python application is to dump everything in one directory, as that is the one way that Python has supported for at least 15 years now. Calling this a "special" way of invoking Python is disingenuous at best: it's the documented *default* way of deploying and invoking a Python script with accompanying libraries. In contrast, the directory layout thread is about supporting virtualenvs, which aren't even *in* Python yet -- if anything is to be considered a special case, that would be it.
The comparison to CSS is also lost on me here; creating user-specific CSS is more aptly comparable to telling people to write their own virtualenv implementations from scratch, and resizing the browser window is more akin to telling people to create a virtualenv every time they *run* the application, rather than just once when installing it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrea.crotti.0 at gmail.com Mon Mar 26 18:54:42 2012 From: andrea.crotti.0 at gmail.com (Andrea Crotti) Date: Mon, 26 Mar 2012 17:54:42 +0100 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: References: <4F6935E1.2030309@nedbatchelder.com> <4F69E337.7010501@nedbatchelder.com> <4F6A560A.6050207@canterbury.ac.nz> <5260192D-7EF1-4021-AB50-FD2021566CFF@twistedmatrix.com> <4F6D2061.3080804@canterbury.ac.nz> Message-ID: <4F709F52.7010803@gmail.com> On 03/24/2012 03:30 AM, PJ Eby wrote: > > > Weird - I have the exact *opposite* problem, where I have to resize my > window because somebody *didn't* set their text max-width sanely (to a > reasonable value based on ems instead of pixels), and I have nearly > 1920 pixels of raw text spanning my screen. Bloody impossible to read > that way. > > But I guess this is going to turn into one of those vi vs. emacs holy > war things... > > (Personally, I prefer jEdit, or nano if absolutely forced to edit in a > terminal. Heretical, I know. To the comfy chair with me!) > > Suppose the author set the width to 1000 pixels: you would end up with 920 white pixels on the side. Does that make sense? Using a tiling window manager (for example awesome or xmonad) would solve your problem more definitively, imho, than hoping for good choices from the web designer.
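Going back to PJ Eby's deployment point from earlier in the thread: the "dump everything in one directory" approach can be sketched concretely. This is an illustrative sketch, not code from the thread; the file names are invented. It relies only on the long-standing behavior that the directory containing the script being run is prepended to sys.path.

```python
import os
import subprocess
import sys
import tempfile

# Hypothetical "everything in one directory" deployment: a script plus
# the library it imports live side by side, and Python finds the library
# because the script's own directory is first on sys.path when the
# script is run directly.
appdir = tempfile.mkdtemp()
with open(os.path.join(appdir, "helper.py"), "w") as f:
    f.write("def greet():\n    return 'hi from helper'\n")
with open(os.path.join(appdir, "app.py"), "w") as f:
    f.write("import helper\nprint(helper.greet())\n")

out = subprocess.check_output([sys.executable, os.path.join(appdir, "app.py")])
print(out.decode().strip())  # prints "hi from helper"
```

No PYTHONPATH manipulation or install step is involved, which is exactly the property being argued for.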
From pje at telecommunity.com Mon Mar 26 18:55:42 2012 From: pje at telecommunity.com (PJ Eby) Date: Mon, 26 Mar 2012 12:55:42 -0400 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: <87y5qpikah.fsf@benfinney.id.au> References: <4F6935E1.2030309@nedbatchelder.com> <4F69E337.7010501@nedbatchelder.com> <4F6A560A.6050207@canterbury.ac.nz> <5260192D-7EF1-4021-AB50-FD2021566CFF@twistedmatrix.com> <4F6D2061.3080804@canterbury.ac.nz> <4F6D5C52.1030802@canterbury.ac.nz> <87y5qpikah.fsf@benfinney.id.au> Message-ID: On Sat, Mar 24, 2012 at 8:41 PM, Ben Finney wrote: > PJ Eby writes: > > > On Sat, Mar 24, 2012 at 1:32 AM, Greg Ewing >wrote: > > > > > If you don't want 1920-pixel-wide text, why make your browser window > > > that large? > > > > Not every tab in my browser is text for reading; some are apps that > > need the extra horizontal space. > > So, again, why make your browser window *for reading text* that large? > Because I have one browser window, and it's maximized. And I can do this, because most websites are designed in such a way that they have usable margins for text flows. Even PEPs and Python mailing list archives, for example, have sane text margins -- shall we go back and make *those* dependent on window width instead? Also, looking at the email I got from you, it has sane text margins in it. If you don't believe in text margins, why are you using a client that wraps lines and thereby prevents me from viewing your email with full-screen-width text? ;-) (In fairness, I am using a client that *doesn't* wrap the lines, AFAICT. But if Gmail had such an option I would probably use it if I knew where it was in the vast assortment of settings. Which ties in nicely with my next point, below...) Everyone has different needs for how large the text should be and how > much of it should go across the window. Every one of us is in a minority > when it comes to those needs; that's exactly what a configuration > setting is good for. 
> Designers' rules of thumb for text width are based on empirical observations of focal length, saccades, etc. If you have special needs visually, you're more likely to require the text read to you, than to have narrower text, and I at least am unable to conceive of a visual disability that would be helped by *increasing* the text width. In other words, there is a well-established *majority* need for how many characters should appear in an unwrapped line of text, based on majority physiology. Designers who limit it based on pixel size are Doing It Wrong; the max width should be based on em's rather than pixels. (Font sizes are a separate issue.) Done correctly (as visible, say, on any plaintext PEP), you may resize the window and change the font size to your heart's content without affecting the text width in characters. (Also, as a side note: adding lots of configuration options to an interface design is what adding lots of code is to a software design: a smell that the designer isn't *designing* enough.) -------------- next part -------------- An HTML attachment was scrubbed... URL: From carl at oddbird.net Mon Mar 26 18:58:10 2012 From: carl at oddbird.net (Carl Meyer) Date: Mon, 26 Mar 2012 10:58:10 -0600 Subject: [Python-Dev] Python install layout and the PATH on win32 (Rationale part 1: Regularizing the layout) In-Reply-To: References: Message-ID: <4F70A022.6060507@oddbird.net> On 03/23/2012 09:22 PM, PJ Eby wrote: > On Mar 23, 2012 3:53 PM, "Carl Meyer" > On 03/23/2012 12:35 PM, PJ Eby wrote: >> > AFAICT, virtualenvs are overkill for most development anyway. If you're >> > not using distutils except to install dependencies, then configure >> > distutils to install scripts and libraries to the same directory, and >> > then do all your development in that directory. Presto! You now have a >> > cross-platform "virtualenv". Want the scripts on your path? Add that >> > directory to your path... 
or if on Windows, don't bother, since the >> > current directory is usually on the path. (In fact, if you're only >> > using easy_install to install your dependencies, you don't even need to >> > edit the distutils configuration, just use "-md targetdir".) >> >> Creating and using a virtualenv is, in practice, _easier_ than any of >> those alternatives, > > Really? As I said, I've never seen the need to try, since just > installing stuff to a directory on PYTHONPATH seems quite easy enough > for me. > >> that the "isolation from system site-packages" feature is quite popular >> (the outpouring of gratitude when virtualenv went isolated-by-default a >> few months ago was astonishing), and AFAIK none of your alternative >> proposals support that at all. > > What is this isolation for, exactly? If you don't want site-packages on > your path, why not use python -S? > > (Sure, nobody knows about these things, but surely that's a > documentation problem, not a tooling problem.) > > Don't get me wrong, I don't have any deep objection to virtualenvs, I've > just never seen the *point* (outside of the scenarios I mentioned), No problem. I was just responding to the assertion that people only use virtualenvs because they aren't aware of the alternatives, which I don't believe is true. It's likely many people aren't aware of python -S, or of everything that's possible via distutils.cfg. But even if they were, for the cases where I commonly see virtualenv in use, it solves the same problems more easily and with much less fiddling with config files and environment variables. Case in point: libraries that also install scripts for use in development or build processes. If you're DIY, you have to figure out where to put these, too, and make sure it's on your PATH. And if you want isolation, not only do you have to remember to run python -S every time, you also have to edit every script wrapper to put -S in the shebang line. 
With virtualenv+easy_install/pip, all of these things Just Simply Work, and (mostly) in an intuitive way. That's why people use it. > thus don't see what great advantage will be had by rearranging layouts > to make them shareable across platforms, when "throw stuff in a > directory" seems perfectly serviceable for that use case already. Tools > that *don't* support "just throw it in a directory" as a deployment > option are IMO unpythonic -- practicality beats purity, after all. ;-) No disagreement here. I think virtualenv's sweet spot is as a convenient tool for development environments (used in virtualenvwrapper fashion, where the file structure of the virtualenv itself is hidden away and you never see it at all). I think it's fine to deploy _into_ a virtualenv, if you find that convenient too (though I think there are real advantages to deploying just a big ball of code with no need for installers). But I see little reason to make virtualenvs relocatable or sharable across platforms. I don't think virtualenvs as an on-disk file structure make a good distribution/deployment mechanism at all. IOW, I hijacked this thread (sorry) to respond to a specific denigration of the value of virtualenv that I disagree with. I don't care about making virtualenvs consistent across platforms. Carl -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: OpenPGP digital signature URL: From rdmurray at bitdance.com Mon Mar 26 19:19:00 2012 From: rdmurray at bitdance.com (R.
David Murray) Date: Mon, 26 Mar 2012 13:19:00 -0400 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: References: <4F6935E1.2030309@nedbatchelder.com> <4F69E337.7010501@nedbatchelder.com> <4F6A560A.6050207@canterbury.ac.nz> <5260192D-7EF1-4021-AB50-FD2021566CFF@twistedmatrix.com> <4F6D2061.3080804@canterbury.ac.nz> <4F6D5C52.1030802@canterbury.ac.nz> <87y5qpikah.fsf@benfinney.id.au> Message-ID: <20120326171901.A670E2500E9@webabinitio.net> On Mon, 26 Mar 2012 12:55:42 -0400, PJ Eby wrote: > On Sat, Mar 24, 2012 at 8:41 PM, Ben Finney wrote: > > So, again, why make your browser window *for reading text* that large? > > Because I have one browser window, and it's maximized. And I can do this, > because most websites are designed in such a way that they have usable > margins for text flows. Even PEPs and Python mailing list archives, for > example, have sane text margins -- shall we go back and make *those* > dependent on window width instead? > [...] > > Designers' rules of thumb for text width are based on empirical > observations of focal length, saccades, etc. If you have special needs > visually, you're more likely to require the text read to you, than to have > narrower text, and I at least am unable to conceive of a visual disability > that would be helped by *increasing* the text width. > > In other words, there is a well-established *majority* need for how many > characters should appear in an unwrapped line of text, based on majority > physiology. Designers who limit it based on pixel size are Doing It Wrong; > the max width should be based on em's rather than pixels. (Font sizes are a > separate issue.) I'm with Philip on this one. I hate web sites that have a fixed text width (so that you can't resize narrower and still read it), but I also prefer ones that set the max width to the "readable size" in number of character positions. Like Philip, I have *one* window. 
My window manager (ratpoison) is more like 'screen' for X: you *can* split the window up, but it is *much* more useful to have only one window visible at a time, most of the time. So splitting the window in order to make the text narrow enough to read slows down my workflow. (Which means that on the python docs and the bug tracker I just put up with reading it wide...) I realize that I'm in the minority, though :) --David From v+python at g.nevcal.com Mon Mar 26 19:32:26 2012 From: v+python at g.nevcal.com (Glenn Linderman) Date: Mon, 26 Mar 2012 10:32:26 -0700 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: <20120326171901.A670E2500E9@webabinitio.net> References: <4F6935E1.2030309@nedbatchelder.com> <4F69E337.7010501@nedbatchelder.com> <4F6A560A.6050207@canterbury.ac.nz> <5260192D-7EF1-4021-AB50-FD2021566CFF@twistedmatrix.com> <4F6D2061.3080804@canterbury.ac.nz> <4F6D5C52.1030802@canterbury.ac.nz> <87y5qpikah.fsf@benfinney.id.au> <20120326171901.A670E2500E9@webabinitio.net> Message-ID: <4F70A82A.70307@g.nevcal.com> On 3/26/2012 10:19 AM, R. David Murray wrote: > Like Philip, I have*one* window. My window manager (ratpoison) is more > like 'screen' for X: you*can* split the window up, but it is*much* more > useful to have only one window visible at a time, most of the time. I'm amazed at the number of people that use maximized windows, one application at a time. I have 2 1600x1200 displays, with 10-30 overlapping windows partially visible, and sometimes wish I had 3 displays, so I could see more windows at a time... but then I'd have to turn my head more, so maybe this is optimal. Two displays lets me get my autohidden taskbar in the vertical center, for quick access from either side. I occasionally maximize a window on one screen or the other (one portrait, one landscape mode), mostly for picture viewing, but a few other times as well. 
> I realize that I'm in the minority, though No doubt I am too :) Everyone is a minority, when all idiotsyncrasies [sic] are considered! -------------- next part -------------- An HTML attachment was scrubbed... URL: From ethan at stoneleaf.us Mon Mar 26 19:58:13 2012 From: ethan at stoneleaf.us (Ethan Furman) Date: Mon, 26 Mar 2012 10:58:13 -0700 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: <4F70A82A.70307@g.nevcal.com> References: <4F6935E1.2030309@nedbatchelder.com> <4F69E337.7010501@nedbatchelder.com> <4F6A560A.6050207@canterbury.ac.nz> <5260192D-7EF1-4021-AB50-FD2021566CFF@twistedmatrix.com> <4F6D2061.3080804@canterbury.ac.nz> <4F6D5C52.1030802@canterbury.ac.nz> <87y5qpikah.fsf@benfinney.id.au> <20120326171901.A670E2500E9@webabinitio.net> <4F70A82A.70307@g.nevcal.com> Message-ID: <4F70AE35.9060903@stoneleaf.us> Glenn Linderman wrote: > On 3/26/2012 10:19 AM, R. David Murray wrote: >> Like Philip, I have *one* window. My window manager (ratpoison) is more >> like 'screen' for X: you *can* split the window up, but it is *much* more >> useful to have only one window visible at a time, most of the time. > > I'm amazed at the number of people that use maximized windows, one > application at a time. I have 2 1600x1200 displays, with 10-30 > overlapping windows partially visible, and sometimes wish I had 3 > displays, so I could see more windows at a time... but then I'd have to > turn my head more, so maybe this is optimal. Two displays lets me get > my autohidden taskbar in the vertical center, for quick access from > either side. I also have two monitors, I use goScreen for three virtual desktops (I'm stuck with XP -- which is to say MS), and each application I use is full-screen while I'm using it. I find windows that I'm not using at that moment a distraction. 
~Ethan~ From ethan at stoneleaf.us Mon Mar 26 21:25:20 2012 From: ethan at stoneleaf.us (Ethan Furman) Date: Mon, 26 Mar 2012 12:25:20 -0700 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: References: <4F6F6DD8.9050901@pearwood.info> Message-ID: <4F70C2A0.10405@stoneleaf.us> Georg Brandl wrote: > On 25.03.2012 21:11, Steven D'Aprano wrote: >> Georg Brandl wrote: >> >>> Thanks everyone for the overwhelmingly positive feedback. I've committed the >>> new design to 3.2 and 3.3 for now, and it will be live for the 3.3 docs >>> momentarily (3.2 isn't rebuilt at the moment until 3.2.3 final goes out). >>> I'll transplant to 2.7 too, probably after the final release of 2.7.3. >> I think it would be better to leave 2.7 with the old theme, to keep it >> visually distinct from the nifty new theme used with the nifty new 3.2 and 3.3 >> versions. > > Hmm, -0 here. I'd like more opinions on this from other devs. +1 on keeping the 2.x and 3.x styles separate. ~Ethan~ From ethan at stoneleaf.us Mon Mar 26 21:25:39 2012 From: ethan at stoneleaf.us (Ethan Furman) Date: Mon, 26 Mar 2012 12:25:39 -0700 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: References: Message-ID: <4F70C2B3.4080709@stoneleaf.us> Georg Brandl wrote: > Here's another try, mainly with default browser font size, more contrast and > collapsible sidebar again: > > http://www.python.org/~gbrandl/build/html2/ > > I've also added a little questionable gimmick to the sidebar (when you collapse > it and expand it again, the content is shown at your current scroll location). > > Have fun! > Georg Looks great! Thanks! 
~Ethan~ From v+python at g.nevcal.com Mon Mar 26 21:14:15 2012 From: v+python at g.nevcal.com (Glenn Linderman) Date: Mon, 26 Mar 2012 12:14:15 -0700 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: <4F70AE35.9060903@stoneleaf.us> References: <4F6935E1.2030309@nedbatchelder.com> <4F69E337.7010501@nedbatchelder.com> <4F6A560A.6050207@canterbury.ac.nz> <5260192D-7EF1-4021-AB50-FD2021566CFF@twistedmatrix.com> <4F6D2061.3080804@canterbury.ac.nz> <4F6D5C52.1030802@canterbury.ac.nz> <87y5qpikah.fsf@benfinney.id.au> <20120326171901.A670E2500E9@webabinitio.net> <4F70A82A.70307@g.nevcal.com> <4F70AE35.9060903@stoneleaf.us> Message-ID: <4F70C007.8080606@g.nevcal.com> On 3/26/2012 10:58 AM, Ethan Furman wrote: > Glenn Linderman wrote: >> On 3/26/2012 10:19 AM, R. David Murray wrote: >>> Like Philip, I have *one* window. My window manager (ratpoison) is >>> more >>> like 'screen' for X: you *can* split the window up, but it is *much* >>> more >>> useful to have only one window visible at a time, most of the time. >> >> I'm amazed at the number of people that use maximized windows, one >> application at a time. I have 2 1600x1200 displays, with 10-30 >> overlapping windows partially visible, and sometimes wish I had 3 >> displays, so I could see more windows at a time... but then I'd have >> to turn my head more, so maybe this is optimal. Two displays lets me >> get my autohidden taskbar in the vertical center, for quick access >> from either side. > > I also have two monitors, I use goScreen for three virtual desktops > (I'm stuck with XP -- which is to say MS), and each application I use > is full-screen while I'm using it. I find windows that I'm not using > at that moment a distraction. Interesting. I guess I'm always distracted :) But I often need data or information from multiple applications in order to make progress on a project... which is why each application seldom gets maximized. 
I even use multiple Firefox profiles concurrently, to have multiple browsers open for different purposes, as well as multiple tabs within each of them. Sometimes I even need multiple instances of Emacs, as well as multiple windows for a single instance, but that is usually very temporary, due to an interruption (distraction). So my pet peeve about web sites is those that, although they seem to dynamically adjust to different monitor sizes, seem to miscalculate, and display a horizontal scroll bar, and consistently chop stuff off on the right edge of my non-maximized window. -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Mon Mar 26 21:21:06 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 26 Mar 2012 21:21:06 +0200 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 References: <4F6F6DD8.9050901@pearwood.info> <4F70C2A0.10405@stoneleaf.us> Message-ID: <20120326212106.76ea8f40@pitrou.net> On Mon, 26 Mar 2012 12:25:20 -0700 Ethan Furman wrote: > Georg Brandl wrote: > > On 25.03.2012 21:11, Steven D'Aprano wrote: > >> Georg Brandl wrote: > >> > >>> Thanks everyone for the overwhelmingly positive feedback. I've committed the > >>> new design to 3.2 and 3.3 for now, and it will be live for the 3.3 docs > >>> momentarily (3.2 isn't rebuilt at the moment until 3.2.3 final goes out). > >>> I'll transplant to 2.7 too, probably after the final release of 2.7.3. > >> I think it would be better to leave 2.7 with the old theme, to keep it > >> visually distinct from the nifty new theme used with the nifty new 3.2 and 3.3 > >> versions. > > > > Hmm, -0 here. I'd like more opinions on this from other devs. > > +1 on keeping the 2.x and 3.x styles separate. I don't really understand the point. If we want to distinguish between 2.x and 3.x, perhaps a lighter difference would suffice. Regards Antoine. 
From pje at telecommunity.com Mon Mar 26 21:27:48 2012 From: pje at telecommunity.com (PJ Eby) Date: Mon, 26 Mar 2012 15:27:48 -0400 Subject: [Python-Dev] Python install layout and the PATH on win32 (Rationale part 1: Regularizing the layout) In-Reply-To: <4F70A022.6060507@oddbird.net> References: <4F70A022.6060507@oddbird.net> Message-ID: On Mon, Mar 26, 2012 at 12:58 PM, Carl Meyer wrote: > No disagreement here. I think virtualenv's sweet spot is as a convenient > tool for development environments (used in virtualenvwrapper fashion, > where the file structure of the virtualenv itself is hidden away and you > never see it at all). I think it's fine to deploy _into_ a virtualenv, > if you find that convenient too (though I think there are real > advantages to deploying just a big ball of code with no need for > installers). But I see little reason to make virtualenvs relocatable or > sharable across platforms. I don't think virtualenvs as on on-disk file > structure make a good distribution/deployment mechanism at all. > > IOW, I hijacked this thread (sorry) to respond to a specific denigration > of the value of virtualenv that I disagree with. I don't care about > making virtualenvs consistent across platforms. > Well, if you're the virtualenv maintainer (or at least the PEP author), and you're basically shooting down the principal rationale for reorganizing the Windows directory layout, then it's not really much of a hijack - it's pretty darn central to the thread! -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ethan at stoneleaf.us Mon Mar 26 21:40:25 2012 From: ethan at stoneleaf.us (Ethan Furman) Date: Mon, 26 Mar 2012 12:40:25 -0700 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: <20120326212106.76ea8f40@pitrou.net> References: <4F6F6DD8.9050901@pearwood.info> <4F70C2A0.10405@stoneleaf.us> <20120326212106.76ea8f40@pitrou.net> Message-ID: <4F70C629.9040607@stoneleaf.us> Antoine Pitrou wrote: > On Mon, 26 Mar 2012 12:25:20 -0700 > Ethan Furman wrote: >> Georg Brandl wrote: >>> On 25.03.2012 21:11, Steven D'Aprano wrote: >>>> Georg Brandl wrote: >>>> >>>>> Thanks everyone for the overwhelmingly positive feedback. I've committed the >>>>> new design to 3.2 and 3.3 for now, and it will be live for the 3.3 docs >>>>> momentarily (3.2 isn't rebuilt at the moment until 3.2.3 final goes out). >>>>> I'll transplant to 2.7 too, probably after the final release of 2.7.3. >>>> I think it would be better to leave 2.7 with the old theme, to keep it >>>> visually distinct from the nifty new theme used with the nifty new 3.2 and 3.3 >>>> versions. >>> Hmm, -0 here. I'd like more opinions on this from other devs. >> +1 on keeping the 2.x and 3.x styles separate. > > I don't really understand the point. If we want to distinguish between > 2.x and 3.x, perhaps a lighter difference would suffice. The point being that 2.x is finished, and the bulk of our effort is now on 3.x. By not changing the 2.x docs we are emphasizing that 3.x is the way to go. ~Ethan~ From ethan at stoneleaf.us Mon Mar 26 21:13:10 2012 From: ethan at stoneleaf.us (Ethan Furman) Date: Mon, 26 Mar 2012 12:13:10 -0700 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: <20120325140752.7f783801@pitrou.net> References: <20120325140752.7f783801@pitrou.net> Message-ID: <4F70BFC6.4040507@stoneleaf.us> Antoine Pitrou wrote: > On Sun, 25 Mar 2012 08:34:44 +0200 > Also I think there should be some jquery animation when > collapsing/expanding. 
Please, no. I don't need my technical web pages singing and dancing for me. ;) ~Ethan~ From fuzzyman at voidspace.org.uk Mon Mar 26 22:32:26 2012 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Mon, 26 Mar 2012 21:32:26 +0100 Subject: [Python-Dev] cpython: Adding unittest.mock documentation In-Reply-To: References: Message-ID: <46A927A7-2671-45CD-AF24-F820D38341CE@voidspace.org.uk> On 26 Mar 2012, at 08:11, Georg Brandl wrote: > On 26.03.2012 00:13, michael.foord wrote: >> http://hg.python.org/cpython/rev/adc1fc2dc872 >> changeset: 75938:adc1fc2dc872 >> user: Michael Foord >> date: Sun Mar 25 23:12:55 2012 +0100 >> summary: >> Adding unittest.mock documentation >> >> files: >> Doc/library/development.rst | 6 + >> Doc/library/unittest.mock-examples.rst | 887 +++++++++ >> Doc/library/unittest.mock-getting-started.rst | 419 ++++ >> Doc/library/unittest.mock-helpers.rst | 537 +++++ >> Doc/library/unittest.mock-magicmethods.rst | 226 ++ >> Doc/library/unittest.mock-patch.rst | 538 +++++ >> Doc/library/unittest.mock.rst | 900 ++++++++++ >> Lib/unittest/mock.py | 8 +- >> 8 files changed, 3516 insertions(+), 5 deletions(-) > > That seems a bit much splitting to me. Ok, it's just the style of the current mock documentation. My original intention *was* to create one big-ass api doc more in keeping with the current library doc styles. > > (By the way, the ".. module::" directive should only be in *one* place.) Thanks. Maybe sphinx could complain about duplicates. > > I would organize the mock, mock-patch, mock-magicmethods and mock-helpers as one > file in Doc/library, and put the other two in Doc/howto, just as for logging. > The examples pages aren't written in a howto style - there's a getting started guide (basic examples) and then a bunch of separate more advanced examples illustrating specific features of unittest.mock or particular mocking scenarios. I'll move the api documentation into a single doc and the getting started guide and examples as a second page. 
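For readers who haven't met the module whose documentation is being reorganized here, a minimal invented example of what unittest.mock provides (the fetch call and URL below are made up purely for illustration):

```python
from unittest import mock

# A Mock auto-creates attributes, records how it is used, and lets you
# script return values ahead of time.
m = mock.Mock()
m.fetch.return_value = {"status": "ok"}

result = m.fetch("http://example.com")  # invented call, just for show
print(result)  # -> {'status': 'ok'}

# The mock remembers the call, so the test can assert on it afterwards.
m.fetch.assert_called_once_with("http://example.com")
```

It is this combination of a core API plus patch helpers and magic-method support that motivates the multi-page documentation layout under discussion.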
> In general, I wouldn't mind splitting off more of the exemplary material from > the main library docs, putting it in the howto section. Hmmm... in general I think I'd like to see the documentation for non-trivial modules include some examples (separate from - and after - the api docs). Hiving them off as a howto makes them harder to find. More in-depth howto guides in addition to this are a good thing though. Michael > > Georg > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk > -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html From v+python at g.nevcal.com Mon Mar 26 22:21:41 2012 From: v+python at g.nevcal.com (Glenn Linderman) Date: Mon, 26 Mar 2012 13:21:41 -0700 Subject: [Python-Dev] Python install layout and the PATH on win32 (Rationale part 1: Regularizing the layout) In-Reply-To: References: <4F70A022.6060507@oddbird.net> Message-ID: <4F70CFD5.9070306@g.nevcal.com> On 3/26/2012 12:27 PM, PJ Eby wrote: > On Mon, Mar 26, 2012 at 12:58 PM, Carl Meyer > wrote: > > No disagreement here. I think virtualenv's sweet spot is as a > convenient > tool for development environments (used in virtualenvwrapper fashion, > where the file structure of the virtualenv itself is hidden away > and you > never see it at all). I think it's fine to deploy _into_ a virtualenv, > if you find that convenient too (though I think there are real > advantages to deploying just a big ball of code with no need for > installers). But I see little reason to make virtualenvs > relocatable or > sharable across platforms.
I don't think virtualenvs as an on-disk > file > structure make a good distribution/deployment mechanism at all. > > IOW, I hijacked this thread (sorry) to respond to a specific > denigration > of the value of virtualenv that I disagree with. I don't care about > making virtualenvs consistent across platforms. > > > Well, if you're the virtualenv maintainer (or at least the PEP > author), and you're basically shooting down the principal rationale > for reorganizing the Windows directory layout, then it's not really > much of a hijack - it's pretty darn central to the thread! What I read here is a bit different than what Mr Eby read, it seems. I read Carl as suggesting that keeping deployment copies of virtualenvs is foolish, but thinking it is fine to deploy into a virtualenv file structure (although preferring to deploy a big ball of code, himself). Personally, I see application deployment as a big ball of code as the preferred technique also, but library/module deployment is harder to do that way... it sort of defeats the ability to then bundle the library/module into the big ball of code for the application. But if the goal is to deploy a big ball of code, that would run on top of an installed Python or virtualenv Python, then that is a lot easier if the only modules used are Python modules (no C extensions). Such can be bundled into a zip file, with little support, such that a relative Python novice like myself can figure it out and implement it quickly. C extensions cannot be run from a zip file, so then one needs support code to unzip the C binaries dynamically, and (possibly) delete them when done. Or am I missing something? Hmm. And here's something else that might be missing: integration of the launcher with .py files that are actually ZIP archives... where does it find the #! line? (probably it can't, currently -- I couldn't figure out how to make it do it). Is it possible to add a #!
line at the beginning of a ZIP archive for the launcher to use, and still have Python recognize the result as a ZIP archive? I know self-extracting archives put an executable program in front of a ZIP file, and the result is still recognized by most ZIP archivers, but I tried just putting a #! line followed by a ZIP archive, and Python gave me SyntaxError: unknown decode error. -------------- next part -------------- An HTML attachment was scrubbed... URL: From v+python at g.nevcal.com Mon Mar 26 23:26:34 2012 From: v+python at g.nevcal.com (Glenn Linderman) Date: Mon, 26 Mar 2012 14:26:34 -0700 Subject: [Python-Dev] Python install layout and the PATH on win32 (Rationale part 1: Regularizing the layout) In-Reply-To: <4F70CFD5.9070306@g.nevcal.com> References: <4F70A022.6060507@oddbird.net> <4F70CFD5.9070306@g.nevcal.com> Message-ID: <4F70DF0A.8020901@g.nevcal.com> On 3/26/2012 1:21 PM, Glenn Linderman wrote: > Hmm. And here's something else that might be missing: integration of > the launcher with .py files that are actually ZIP archives... where > does it find the #! line? (probably it can't, currently -- I couldn't > figure out how to make it do it). Is it possible to add a #! line at > the beginning of a ZIP archive for the launcher to use, and still have > Python recognize the result as a ZIP archive? I know self-extracting > archives put an executable program in front of a ZIP file, and the > result is still recognized by most ZIP archivers, but I tried just > putting a #! line followed by a ZIP archive, and Python gave me > SyntaxError: unknown decode error. OK, my first try there, I forgot the stupid Windows /b switch on copy, so apparently the ZIP archive got mangled. When I use copy /b to join #!/usr/bin/python3.2 and a zip file, it now works. Sorry for the noise. -------------- next part -------------- An HTML attachment was scrubbed...
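Glenn's "copy /b" discovery generalizes: a ZIP archive's index lives at the end of the file, so a text line prepended to the front does not stop ZIP tools (including Python's zipfile module) from reading it. A minimal sketch of building such a shebang-prefixed archive from Python itself (the file name and the interpreter path in the shebang are illustrative):

```python
import io
import tempfile
import zipfile

# Build a small ZIP archive with a __main__.py entry point in memory.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("__main__.py", "print('hello from the archive')\n")

# Prepend a shebang line, as "copy /b" does on Windows.  ZIP readers
# locate the central directory from the *end* of the file, so the
# leading line does not invalidate the archive.
tmp = tempfile.NamedTemporaryFile(delete=False, suffix=".pyz")
tmp.write(b"#!/usr/bin/python3.2\n" + buf.getvalue())
tmp.close()

print(zipfile.is_zipfile(tmp.name))  # → True: still a readable ZIP archive
```

This is the same layout that executable ZIP applications use: handing such a file to the interpreter runs the archive's __main__.py, which is what Glenn observes now works.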
URL: From guido at python.org Mon Mar 26 23:50:49 2012 From: guido at python.org (Guido van Rossum) Date: Mon, 26 Mar 2012 14:50:49 -0700 Subject: [Python-Dev] PEP 411 - request for pronouncement In-Reply-To: References: Message-ID: On Fri, Mar 23, 2012 at 2:51 AM, Eli Bendersky wrote: > PEP 411 -- Provisional packages in the Python standard library > > Has been updated with all accumulated feedback from list discussions. > Here it is: http://www.python.org/dev/peps/pep-0411/ (the text is also > pasted in the bottom of this email). > > The PEP received mostly positive feedback. The only undecided point is > where to specify that the package is provisional. Currently the PEP > mandates to specify it in the documentation and in the docstring. > Other suggestions were to put it in the code, either as a > __provisional__ attribute on the module, or collect all such modules > in a single sys.provisional list. > > According to http://blog.python.org/2012/03/2012-language-summit-report.html, > the PEP was discussed in the language summit and overall viewed > positively, although no final decision has been reached. > > ISTM a decision needs to be taken, which is why I request > pronouncement, with a recommendation on the requirement the PEP should > make of provisional modules (process details). I think the PEP is almost ready for approval. Congratulations! A few comments: - I'd leave some wiggle room for the docs owner (Georg) about the exact formulation of the text blurb included for provisional modules and the glossary entry; I don't want the PEP to have the last word here. - I think we are settling on the term "feature release" instead of the somewhat ambiguous "minor release". - As was discussed at the language summit, I'd like to emphasize that the bar for making changes to a provisional package should be considered pretty high. 
That is, while we don't make guarantees about backward compatibility, we still expect that most of the API of most provisional packages will be unchanged at graduation. Withdrawals should also be pretty rare. - Should we limit the duration of the provisional state to 1 or 2 feature releases? - I'm not sure what to do with regex -- it may be better to just include it as "re" and keep the old re module around under another name ("sre" has been proposed half jokingly). PS. Please use the version in the peps repo as the starting point of future edits. -- --Guido van Rossum (python.org/~guido) From storchaka at gmail.com Tue Mar 27 00:04:05 2012 From: storchaka at gmail.com (Serhiy Storchaka) Date: Tue, 27 Mar 2012 01:04:05 +0300 Subject: [Python-Dev] PEP 393 decode() oddity In-Reply-To: References: Message-ID: 26.03.12 01:28, Victor Stinner wrote: > Cool, Python 3.3 is *much* faster to decode pure ASCII :-) It is even faster on large data. 1000 characters is not enough to completely neutralize the constant costs of the function calls. Python 3.3 is really cool. >> encoding string 2.7 3.2 3.3 >> >> ascii " " * 1000 5.4 5.3 1.2 > > 4.5x faster than Python 2 here. And it can be accelerated (issue #14419). >> utf-8 " " * 1000 6.7 2.4 2.1 > > 3.2x faster In theory, the speed should coincide with the latin1 speed. And it coincides in the limit, for large data. For medium-sized data the startup overhead is visible and utf-8 is a bit slower than it could be. > It's cool because in practice, a lot of strings are pure ASCII (as > Martin showed in his Django benchmark). But there is a lot of non-ASCII text too. With mostly-ASCII text, containing at least one non-ASCII character (for example, Martin's full name), the utf-8 decoder copes much worse. And worse than in Python 3.2. The decoder should be slower only by a small amount, related to scanning. I believe there is still room for optimization. > I'm interested by any patch optimizing any Python codecs.
I'm not > working on optimizing Python Unicode anymore, various benchmarks > showed me that Python 3.3 is as good or faster than Python 3.2. That's > enough for me. Then would you accept the patch I proposed in issue 14249? It will not close the whole gap, but it is very simple and should not cause objections. The optimization I am developing now accelerates the decoder even more, but so far it is still ugly spaghetti code. > When Python 3.3 is slower than Python 3.2, it's because Python 3.3 > must compute the maximum character of the result, and I fail to see > how to optimize this requirement. A significant slowdown was caused by the use of PyUnicode_WRITE with a variable kind in the loop. In some cases, it would be useful to expand the loop into a cascade of independent loops which fall back onto each other (as you have already done in utf8_scanner). From storchaka at gmail.com Tue Mar 27 00:09:47 2012 From: storchaka at gmail.com (Serhiy Storchaka) Date: Tue, 27 Mar 2012 01:09:47 +0300 Subject: [Python-Dev] PEP 393 decode() oddity In-Reply-To: References: Message-ID: 27.03.12 01:04, Serhiy Storchaka wrote: > 26.03.12 01:28, Victor Stinner wrote: > loop into a cascade of independent loops which fall back onto each other (as > you have already done in utf8_scanner). Sorry. Not you. Antoine Pitrou. From storchaka at gmail.com Mon Mar 26 23:10:43 2012 From: storchaka at gmail.com (Serhiy Storchaka) Date: Tue, 27 Mar 2012 00:10:43 +0300 Subject: [Python-Dev] PEP 393 decode() oddity In-Reply-To: <20120325225513.Horde.v_hfaKGZi1VPb4YxHpTWb7A@webmail.df.eu> References: <20120325190137.7f0ffce0@pitrou.net> <20120325225513.Horde.v_hfaKGZi1VPb4YxHpTWb7A@webmail.df.eu> Message-ID: 25.03.12 23:55, martin at v.loewis.de wrote: >> The results are fairly stable (±0.1 µsec) from run to run. It looks >> like a funny thing. > > This is not surprising. Thank you. Indeed, it is logical. I looked at the code and do not see how to speed it up.
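For anyone who wants to reproduce the figures being traded in this thread, a rough version of the micro-benchmark is easy to write with timeit; the absolute numbers depend on the interpreter build, so only the relative ordering (codec vs. codec, pure-ASCII vs. mostly-ASCII) is meaningful:

```python
import timeit

pure = b" " * 1000                 # the pure-ASCII case from the table above
mostly = b" " * 999 + b"\xc3\xa9"  # mostly-ASCII: one trailing non-ASCII char

cases = [
    ("ascii, pure", pure, "ascii"),
    ("latin1, pure", pure, "latin1"),
    ("utf-8, pure", pure, "utf-8"),
    ("utf-8, mostly-ASCII", mostly, "utf-8"),
]
for label, data, codec in cases:
    # Bind data/codec as defaults so the lambda carries no free variables.
    t = timeit.timeit(lambda d=data, c=codec: d.decode(c), number=10000)
    print("%-20s %.4f s per 10k decodes" % (label, t))
```

The last case is the one Serhiy highlights: a single non-ASCII character forces the utf-8 decoder off its pure-ASCII fast path.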
From solipsis at pitrou.net Tue Mar 27 00:25:15 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 27 Mar 2012 00:25:15 +0200 Subject: [Python-Dev] cpython: Fix time.steady(strict=True): don't use CLOCK_REALTIME References: Message-ID: <20120327002515.523b8e29@pitrou.net> On Mon, 26 Mar 2012 22:53:37 +0200 victor.stinner wrote: > http://hg.python.org/cpython/rev/566527ace50b > changeset: 75960:566527ace50b > user: Victor Stinner > date: Mon Mar 26 22:53:14 2012 +0200 > summary: > Fix time.steady(strict=True): don't use CLOCK_REALTIME Victor, could we have a PEP on all this? I think everyone has lost track of what you are trying to do with these new methods. cheers Antoine. From zooko at zooko.com Tue Mar 27 00:31:48 2012 From: zooko at zooko.com (Zooko Wilcox-O'Hearn) Date: Mon, 26 Mar 2012 16:31:48 -0600 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: References: <4F6137EF.9000000@gmail.com> Message-ID: On Fri, Mar 23, 2012 at 11:27 AM, Victor Stinner wrote: > > time.steady(strict=False) is what you need to implement timeout. No, that doesn't fit my requirements, which are about event scheduling, profiling, and timeouts. See below for more about my requirements. I didn't say this explicitly enough in my previous post: Some use cases (timeouts, event scheduling, profiling, sensing) require a steady clock. Others (calendaring, communicating times to users, generating times for comparison to remote hosts) require a wall clock. Now here's the kicker: each use case incurs significant risks if it uses the wrong kind of clock. If you're implementing event scheduling or sensing and control, and you accidentally get a wall clock when you thought you had a steady clock, then your program may go seriously wrong -- events may fire in the wrong order, measurements of your sensors may be wildly incorrect. This can lead to serious accidents.
On the other hand, if you're implementing calendaring or display of "real local time of day" to a user, and you are using a steady clock for some reason, then you risk displaying incorrect results to the user. So using one kind of clock and then "falling back" to the other kind is a choice that should be rare, explicit, and discouraged. The provision of such a function in the standard library is an attractive nuisance -- a thing that people naturally think that they want when they haven't thought about it very carefully, but that is actually dangerous. If someone has a use case which fits the "steady or else fall back to wall clock" pattern, I would like to learn about it. Regards, Zooko From stefan at bytereef.org Tue Mar 27 00:47:49 2012 From: stefan at bytereef.org (Stefan Krah) Date: Tue, 27 Mar 2012 00:47:49 +0200 Subject: [Python-Dev] [Python-checkins] cpython: Issue #7652: Integrate the decimal floating point libmpdec library to speed In-Reply-To: References: <20120323092255.GA17205@sleipnir.bytereef.org> <20120323104005.GA17581@sleipnir.bytereef.org> Message-ID: <20120326224749.GA23190@sleipnir.bytereef.org> Victor Stinner wrote: > > The 80x is a ballpark figure for the maximum expected speedup for > > standard numerical floating point applications. > > Ok, but it's just surprising when you read the What's New document. > 72x and 80x look to be inconsistent. Yes, indeed, I'll reword that. > > For huge numbers _decimal is also faster than int: > > > > factorial(1000000): > > > > _decimal, calculation time: 6.844487905502319 > > _decimal, tostr(): 0.033592939376831055 > > > > int, calculation time: 17.96010398864746 > > int, tostr(): ... still running ... > > Hum, with a resolution able to store the result with all digits? Yes, you have to set context.prec (and emax) to the maximum values, then the result is an exact integer. The conversion to string is so fast because there is no complicated base conversion.
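Stefan's answer can be checked on a smaller scale with the public decimal module. A sketch, with the sizes shrunk so it runs quickly (the point being that with prec and Emax raised high enough, multiplying integers in decimal is exact, and str() is cheap because the value is already stored in a decimal radix):

```python
import math
from decimal import Context, Decimal, MAX_EMAX

# With prec well above the size of the result, every multiplication
# below is exact, so the final Decimal is a true integer, not a
# rounded approximation.
ctx = Context(prec=10000, Emax=MAX_EMAX)
f = Decimal(1)
for i in range(2, 1001):
    f = ctx.multiply(f, Decimal(i))

print(f == math.factorial(1000))  # → True: exactly equal to the int result
```

Scaling prec up (factorial(1000000) has under seven million digits) gives the exact-integer computation Stefan timed.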
> If yes, would it be possible to reuse the multiply algorithm of _decimal > (and maybe of other functions) for int? Or does it depend heavily on > _decimal internal structures? Large parts of the Number Theoretic Transform could be reused, but there would still be quite a bit of work. Stefan Krah From victor.stinner at gmail.com Tue Mar 27 01:07:39 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 27 Mar 2012 01:07:39 +0200 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: References: <4F6137EF.9000000@gmail.com> Message-ID: > So using one kind of clock and then "falling back" to the other kind > is a choice that should be rare, explicit, and discouraged. The > provision of such a function in the standard library is an attractive > nuisance -- a thing that people naturally think that they want when > they haven't though about it very carefully, but that is actually > dangerous. > > If someone has a use case which fits the "steady or else fall back to > wall clock" pattern, I would like to learn about it. Python 3.2 doesn't provide a monotonic clock, so most programs use time.time() even if a monotonic clock would be better in some functions. For these programs, you can replace time.time() by time.steady() where you need to compute a time delta (e.g. compute a timeout) to avoid issues with the system clock update. The idea is to improve the program without refusing to start if no monotonic clock is available. Victor From glyph at twistedmatrix.com Tue Mar 27 00:47:05 2012 From: glyph at twistedmatrix.com (Glyph) Date: Mon, 26 Mar 2012 18:47:05 -0400 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: References: <4F6137EF.9000000@gmail.com> Message-ID: On Mar 26, 2012, at 6:31 PM, Zooko Wilcox-O'Hearn wrote: > On Fri, Mar 23, 2012 at 11:27 AM, Victor Stinner > wrote: >> >> time.steady(strict=False) is what you need to implement timeout.
> > No, that doesn't fit my requirements, which are about event > scheduling, profiling, and timeouts. See below for more about my > requirements. > > I didn't say this explicitly enough in my previous post: > > Some use cases (timeouts, event scheduling, profiling, sensing) > require a steady clock. Others (calendaring, communicating times to > users, generating times for comparison to remote hosts) require a wall > clock. > > Now here's the kicker: each use case incurs significant risks if it > uses the wrong kind of clock. > > If you're implementing event scheduling or sensing and control, and > you accidentally get a wall clock when you thought you had a steady > clock, then your program may go seriously wrong -- events may fire in > the wrong order, measurements of your sensors may be wildly incorrect. > This can lead to serious accidents. On the other hand, if you're > implementing calendaring or display of "real local time of day" to a > user, and you are using a steady clock for some reason, then you risk > displaying incorrect results to the user. > > So using one kind of clock and then "falling back" to the other kind > is a choice that should be rare, explicit, and discouraged. The > provision of such a function in the standard library is an attractive > nuisance -- a thing that people naturally think that they want when > they haven't thought about it very carefully, but that is actually > dangerous. > > If someone has a use case which fits the "steady or else fall back to > wall clock" pattern, I would like to learn about it. I feel that this should be emphasized. Zooko knows what he's talking about here. Listen to him :). (Antoine has the right idea. I think it's well past time for a PEP on this feature.)
-glyph From victor.stinner at gmail.com Tue Mar 27 01:32:38 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 27 Mar 2012 01:32:38 +0200 Subject: [Python-Dev] PEP 418: Add monotonic clock Message-ID: Hi, I started to write the PEP 418 to clarify the notions of monotonic and steady clocks. The PEP is a draft and everyone is invited to contribute! http://www.python.org/dev/peps/pep-0418/ http://hg.python.org/peps/file/tip/pep-0418.txt Victor From victor.stinner at gmail.com Tue Mar 27 02:16:52 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 27 Mar 2012 02:16:52 +0200 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: Message-ID: > I started to write the PEP 418 to clarify the notions of monotonic and > steady clocks. > > The PEP is a draft and everyone is invited to contribute! time.steady() doesn't fit the benchmarking use case: it looks like we have to decide between stability and clock resolution. QueryPerformanceCounter() has a good resolution for benchmarking, but it is not monotonic and so GetTickCount64() would be used for time.steady(). GetTickCount64() is monotonic but has only a resolution of 1 millisecond. We might add a third new function which provides the most accurate clock with or without a known starting point. We cannot use QueryPerformanceCounter() to enhance time.time() resolution because it has an unknown starting point. Victor From scott+python-dev at scottdial.com Tue Mar 27 02:23:12 2012 From: scott+python-dev at scottdial.com (Scott Dial) Date: Mon, 26 Mar 2012 20:23:12 -0400 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: Message-ID: <4F710870.9090602@scottdial.com> On 3/26/2012 7:32 PM, Victor Stinner wrote: > I started to write the PEP 418 to clarify the notions of monotonic and > steady clocks. """ time.steady This clock advances at a steady rate relative to real time. It may be adjusted. """ Please do not call this "steady". 
If the clock can be adjusted, then it is not "steady" by any acceptable definition. I cannot fathom the utility of this function other than as a function that provides an automatic fallback from "time.monotonic()". More importantly: this definition of "steady" is in conflict with the C++0x definition of "steady", which is where you sourced this name from! [1] """ time.steady(strict=False) falls back to another clock if the monotonic clock is not available or does not work, but it never fails. """ As I say above, that is so far away from what "steady" implies that this is a misnomer. What you are describing is a best-effort clock, which sounds a lot more like the C++0x "high resolution" clock. """ time.steady(strict=True) raises OSError if monotonic clock fails or NotImplementedError if the system does not provide a monotonic clock """ What is the utility of "strict=True"? If I wanted that mode of operation, then why would I not just try to use "time.monotonic()" directly? At worst, it generates an "AttributeError" (although that is not clear from your PEP). What is the use case for "strict=True" that is not covered by your "time.monotonic()"? If you want to define new clocks, then I wish you would use the same definitions that C++0x is using. That is: system_clock = wall clock time monotonic_clock = always goes forward but can be adjusted steady_clock = always goes forward and cannot be adjusted high_resolution_clock = steady_clock || system_clock Straying from that is only going to create confusion. Besides that, the one use case for "time.steady()" that you give (benchmarking) is better served by a clock that follows the C++0x definition. As well, certain kinds of scheduling/timeouts would be better implemented with the C++0x definition for "steady" rather than the "monotonic" one and vice-versa. Rather, it seems you have a particular use-case in mind and have settled on calling that a "steady" clock despite how it belies its name.
[1] http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2010/n3128.html#time.clock.steady """ Objects of class steady_clock represent clocks for which values of time_point advance at a steady rate relative to real time. That is, the clock may not be adjusted. """ -- Scott Dial scott at scottdial.com From zooko at zooko.com Tue Mar 27 04:26:23 2012 From: zooko at zooko.com (Zooko Wilcox-O'Hearn) Date: Mon, 26 Mar 2012 20:26:23 -0600 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: <4F710870.9090602@scottdial.com> References: <4F710870.9090602@scottdial.com> Message-ID: > system_clock = wall clock time > monotonic_clock = always goes forward but can be adjusted > steady_clock = always goes forward and cannot be adjusted > high_resolution_clock = steady_clock || system_clock Note that the C++ standard deprecated monotonic_clock once they realized that there is absolutely no point in having a clock that jumps forward but not back, and that none of the operating systems implement such a thing -- instead they all implement a clock which doesn't jump in either direction. http://stackoverflow.com/questions/6777278/what-is-the-rationale-for-renaming-monotonic-clock-to-steady-clock-in-chrono In other words, yes! +1! The C++ standards folks just went through the process that we're now going through, and if we do it right we'll end up at the same place they are: http://en.cppreference.com/w/cpp/chrono/system_clock """ system_clock represents the system-wide real time wall clock. It may not be monotonic: on most systems, the system time can be adjusted at any moment. It is the only clock that has the ability to map its time points to C time, and, therefore, to be displayed. steady_clock: monotonic clock that will never be adjusted high_resolution_clock: the clock with the shortest tick period available """ Note that we don't really have the option of providing a clock which is "monotonic but not steady" in the sense of "can jump forward but not back".
It is a misunderstanding (doubtless due to the confusing name "monotonic") to think that such a thing is offered by the underlying platforms. We can choose to *call* it "monotonic", following POSIX instead of calling it "steady", following C++. Regards, Zooko From zooko at zooko.com Tue Mar 27 04:41:56 2012 From: zooko at zooko.com (Zooko Wilcox-O'Hearn) Date: Mon, 26 Mar 2012 20:41:56 -0600 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: References: <4F6137EF.9000000@gmail.com> Message-ID: On Mon, Mar 26, 2012 at 5:07 PM, Victor Stinner wrote: >> >> If someone has a use case which fits the "steady or else fall back to wall clock" pattern, I would like to learn about it. > > Python 3.2 doesn't provide a monotonic clock, so most programs use time.time() even if a monotonic clock would be better in some functions. For these programs, you can replace time.time() by time.steady() where you need to compute a time delta (e.g. compute a timeout) to avoid issues with the system clock update. The idea is to improve the program without refusing to start if no monotonic clock is available. I agree that this is a reasonable use case. I think of it as basically being a kind of backward-compatibility, for situations where an unsteady clock is okay, and a steady clock isn't available. Twisted faces a similar issue: http://twistedmatrix.com/trac/ticket/2424 It might be good for use cases like this to explicitly implement the try-and-fallback, since they might have specific needs about how it is done. For one thing, some such uses may need to emit a warning, or even to require the caller to explicitly override, such as refusing to start if a steady clock isn't available unless the user specifies "--unsteady-clock-ok".
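The explicit, caller-controlled fallback Zooko describes might look something like the following sketch; the function name, the flag, and the warning are hypothetical illustrations, not a real API (Python 3.3 later settled on time.monotonic() via PEP 418):

```python
import time
import warnings

def best_effort_clock(allow_unsteady=False):
    """Return a clock function, preferring a steady (monotonic) clock.

    The caller must opt in before we silently hand back the adjustable
    wall clock -- the decision is explicit, not automatic.
    """
    monotonic = getattr(time, "monotonic", None)  # absent before Python 3.3
    if monotonic is not None:
        return monotonic
    if not allow_unsteady:
        raise RuntimeError("no steady clock available; "
                           "pass allow_unsteady=True to accept time.time()")
    warnings.warn("falling back to the adjustable wall clock time.time()")
    return time.time

clock = best_effort_clock()
start = clock()
elapsed = clock() - start
print(elapsed >= 0.0)  # → True: steady-clock deltas never go negative
```

The point of the sketch is the control flow, not the names: the library refuses rather than silently degrading, and the fallback is a loud, deliberate choice.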
For motivating examples, consider software written using Twisted > 12.0 or Python > 3.2 which is using a clock to drive real world sensing and control -- measuring the position of a machine and using time deltas to calculate the machine's velocity, in order to automatically control the motion of the machine. For some uses, it is okay if the measurement could, in rare cases, be drastically wrong. For other uses, that is not an acceptable risk. One reason I'm sensitive to this issue is that I work in the field of security, and making the behavior dependent on the system clock extends the "reliance set", i.e. the set of things that an attacker could use against you. For example, if your robot depends on the system clock for its sensing and control, and if your system clock obeys NTP, then the set of things that an attacker could use against you includes your NTP servers. If your robot depends instead on a steady clock, then NTP servers are not in the reliance set. Now, if your control platform doesn't have a steady clock, you may choose to go ahead, while making sure that the NTP servers are authenticated, or you may choose to disable NTP on the control platform, etc., but that choice might need to be made explicitly by the operator, rather than automatically by the library. Regards, Zooko From anacrolix at gmail.com Tue Mar 27 04:55:33 2012 From: anacrolix at gmail.com (Matt Joiner) Date: Tue, 27 Mar 2012 10:55:33 +0800 Subject: [Python-Dev] Drop the new time.wallclock() function? In-Reply-To: References: <4F6137EF.9000000@gmail.com> Message-ID: Inside time.steady, there are two different clocks trying to get out. I think this steady business should be removed sooner rather than later. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From anacrolix at gmail.com Tue Mar 27 04:59:40 2012 From: anacrolix at gmail.com (Matt Joiner) Date: Tue, 27 Mar 2012 10:59:40 +0800 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: <4F710870.9090602@scottdial.com> Message-ID: So does anyone care to dig into the libstdc++/boost/windoze implementation to see how they each did steady_clock? -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephen at xemacs.org Tue Mar 27 05:23:32 2012 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Tue, 27 Mar 2012 12:23:32 +0900 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: References: <4F6935E1.2030309@nedbatchelder.com> <4F69E337.7010501@nedbatchelder.com> <4F6A560A.6050207@canterbury.ac.nz> <5260192D-7EF1-4021-AB50-FD2021566CFF@twistedmatrix.com> <4F6D2061.3080804@canterbury.ac.nz> <4F6D5C52.1030802@canterbury.ac.nz> <87y5qpikah.fsf@benfinney.id.au> Message-ID: On Tue, Mar 27, 2012 at 1:35 AM, PJ Eby wrote: > On Sun, Mar 25, 2012 at 2:56 AM, Stephen J. Turnbull > wrote: >> >> But since he's arguing the >> other end in the directory layout thread (where he says there are many >> special ways to invoke Python so that having different layouts on >> different platforms is easy to work around), I can't give much weight >> to his preference here. > > > You're misconstruing my argument there: I said, rather, that the One Obvious > Way to deploy a Python application is to dump everything in one directory, > as that is the one way that Python has supported for at least 15 years now. It's not at all obvious on any of the open source platforms (Cygwin, Debian, Gentoo, and MacPorts) that I use. In all cases, both the several python binaries and the installed libraries end up in standard places in the distro-maintained hierarchies, and it is not hard to confuse the distro-installed Pythons.
Being confident that one knows enough to set up a single directory correctly in the face of some of the unlovely things that packages may do requires more knowledge of how Python's import etc works than I can boast: virtualenv is a godsend. By analogy, yes, I think it makes sense to ask you to learn a bit about CSS and add a single line like "body { width: 65em; }" to your local config. That's one reason why CSS is designed to cascade. Of course, even better yet would be if the browsers wrote the CSS for you (which probably wouldn't be too hard, if I knew any XUL, which I don't). > The comparison to CSS is also lost on me here; creating user-specific CSS is > more aptly comparable telling people to write their own virtualenv > implementations from scratch, and resizing the browser window is more akin > to telling people to create a virtualenv every time they *run* the > application, rather than just once when installing it. Huh, if you say so -- I didn't realize that virtualenv did so little that it could be written in one line. All I know (and care) is that it promises to do all that stuff for me, and without affecting the general public (ie, the distro-provided Python apps). And that's why I think the width of pages containing flowed text should be left up to the user to configure. From eliben at gmail.com Tue Mar 27 05:23:55 2012 From: eliben at gmail.com (Eli Bendersky) Date: Tue, 27 Mar 2012 05:23:55 +0200 Subject: [Python-Dev] PEP 411 - request for pronouncement In-Reply-To: References: Message-ID: > I think the PEP is almost ready for approval. Congratulations! A few comments: > > - I'd leave some wiggle room for the docs owner (Georg) about the > exact formulation of the text blurb included for provisional modules > and the glossary entry; I don't want the PEP to have the last word > here. Sure, Georg is free to modify the pep to amend the formulation if he wants to. 
> > - I think we are settling on the term "feature release" instead of the > somewhat ambiguous "minor release". Fixed > > - As was discussed at the language summit, I'd like to emphasize that > the bar for making changes to a provisional package should be > considered pretty high. That is, while we don't make guarantees about > backward compatibility, we still expect that most of the API of most > provisional packages will be unchanged at graduation. Withdrawals > should also be pretty rare. > Added this emphasis at the end of the "Criteria for graduation" section. > - Should we limit the duration of the provisional state to 1 or 2 > feature releases? Initially the PEP came out with a 1-release limit, but some of the devs pointed out (http://mail.python.org/pipermail/python-dev/2012-February/116406.html) that me should not necessarily restrict ourselves. > > - I'm not sure what to do with regex -- it may be better to just > include in as "re" and keep the old re module around under another > name ("sre" has been proposed half jokingly). > Document updated in the PEPs Hg, rev a1bb0a9af63f. Eli From guido at python.org Tue Mar 27 05:34:51 2012 From: guido at python.org (Guido van Rossum) Date: Mon, 26 Mar 2012 20:34:51 -0700 Subject: [Python-Dev] PEP 411 - request for pronouncement In-Reply-To: References: Message-ID: On Mon, Mar 26, 2012 at 8:23 PM, Eli Bendersky wrote: >> I think the PEP is almost ready for approval. Congratulations! A few comments: >> >> - I'd leave some wiggle room for the docs owner (Georg) about the >> exact formulation of the text blurb included for provisional modules >> and the glossary entry; I don't want the PEP to have the last word >> here. > > Sure, Georg is free to modify the pep to amend the formulation if he wants to. Ok. He can do that at any time. :-) >> - I think we are settling on the term "feature release" instead of the >> somewhat ambiguous "minor release". > > Fixed Great. 
>> - As was discussed at the language summit, I'd like to emphasize that >> the bar for making changes to a provisional package should be >> considered pretty high. That is, while we don't make guarantees about >> backward compatibility, we still expect that most of the API of most >> provisional packages will be unchanged at graduation. Withdrawals >> should also be pretty rare. >> > > Added this emphasis at the end of the "Criteria for graduation" section. Cool. >> - Should we limit the duration of the provisional state to 1 or 2 >> feature releases? > > Initially the PEP came out with a 1-release limit, but some of the > devs pointed out > (http://mail.python.org/pipermail/python-dev/2012-February/116406.html) > that me should not necessarily restrict ourselves. Gotcha. >> - I'm not sure what to do with regex -- it may be better to just >> include in as "re" and keep the old re module around under another >> name ("sre" has been proposed half jokingly). >> > > Document updated in the PEPs Hg, rev a1bb0a9af63f. TBH I'm not sure what I meant to say about regex in http://mail.python.org/pipermail/python-dev/2012-January/115962.html ... But I think if we end up changing the decision about this or any others that doesn't invalidate this PEP, which is informational PEP anyway. I've marked it up as Approved. Thanks, and congrats! -- --Guido van Rossum (python.org/~guido) From scott+python-dev at scottdial.com Tue Mar 27 05:36:10 2012 From: scott+python-dev at scottdial.com (Scott Dial) Date: Mon, 26 Mar 2012 23:36:10 -0400 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: <4F710870.9090602@scottdial.com> Message-ID: <4F7135AA.9000203@scottdial.com> On 3/26/2012 10:59 PM, Matt Joiner wrote: > So does anyone care to dig into the libstd++/boost/windoze > implementation to see how they each did steady_clock? 
The Boost implementation can be summarized as: system_clock: mac = gettimeofday posix = clock_gettime(CLOCK_REALTIME) win = GetSystemTimeAsFileTime steady_clock: mac = mach_absolute_time posix = clock_gettime(CLOCK_MONOTONIC) win = QueryPerformanceCounter high_resolution_clock: * = { steady_clock, if available system_clock, otherwise } Whether or not these implementations meet the specification is an exercise left to the reader.. -- Scott Dial scott at scottdial.com From eliben at gmail.com Tue Mar 27 05:43:51 2012 From: eliben at gmail.com (Eli Bendersky) Date: Tue, 27 Mar 2012 05:43:51 +0200 Subject: [Python-Dev] PEP 411 - request for pronouncement In-Reply-To: References: Message-ID: > > I've marked it up as Approved. Thanks, and congrats! > Thanks! Eli From eliben at gmail.com Tue Mar 27 05:47:37 2012 From: eliben at gmail.com (Eli Bendersky) Date: Tue, 27 Mar 2012 05:47:37 +0200 Subject: [Python-Dev] PEP 411 - request for pronouncement In-Reply-To: References: Message-ID: On Tue, Mar 27, 2012 at 05:34, Guido van Rossum wrote: > On Mon, Mar 26, 2012 at 8:23 PM, Eli Bendersky wrote: >>> I think the PEP is almost ready for approval. Congratulations! A few comments: >>> >>> - I'd leave some wiggle room for the docs owner (Georg) about the >>> exact formulation of the text blurb included for provisional modules >>> and the glossary entry; I don't want the PEP to have the last word >>> here. >> >> Sure, Georg is free to modify the pep to amend the formulation if he wants to. > > Ok. He can do that at any time. :-) > Georg, would you like to change the suggested phrasing in the PEP, or can I go on and add the "provisional package" term to the glossary (in 3.3 only, of course). 
Eli From merwok at netwok.org Tue Mar 27 05:53:44 2012 From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=) Date: Mon, 26 Mar 2012 23:53:44 -0400 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: References: <4F6F6DD8.9050901@pearwood.info> Message-ID: <4F7139C8.5090301@netwok.org> Hi, On 25/03/2012 15:25, Georg Brandl wrote: > On 25.03.2012 21:11, Steven D'Aprano wrote: >> I think it would be better to leave 2.7 with the old theme, >> to keep it visually distinct from the nifty new theme used >> with the nifty new 3.2 and 3.3 versions. > Hmm, -0 here. I'd like more opinions on this from other devs. I'm +0 on Steven's proposal and +1 on whatever you will decide. Cheers From anacrolix at gmail.com Tue Mar 27 05:54:51 2012 From: anacrolix at gmail.com (Matt Joiner) Date: Tue, 27 Mar 2012 11:54:51 +0800 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: <4F7135AA.9000203@scottdial.com> References: <4F710870.9090602@scottdial.com> <4F7135AA.9000203@scottdial.com> Message-ID: Cheers, that clears things up. -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephen at xemacs.org Tue Mar 27 06:45:15 2012 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Tue, 27 Mar 2012 13:45:15 +0900 Subject: [Python-Dev] Playing with a new theme for the docs, iteration 2 In-Reply-To: References: Message-ID: On Mon, Mar 26, 2012 at 6:42 AM, Terry Reedy wrote: >> Of course you can always use a user stylesheet to override our choices. > > Can anyone tell me the best way to do that with FireFox? http://kb.mozillazine.org/UserContent.css explains clearly enough. I can't help you with your particular version since I'm on Mac OS X and Linux, but it works for me there.
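For anyone who wants to try what that page describes, a userContent.css fragment along the lines discussed in this thread might look like the following; the 65em figure and the bare body selector are illustrative only, and max-width (rather than width) avoids forcing naturally narrow pages wider:

```css
/* <firefox-profile>/chrome/userContent.css -- illustrative sketch */
body {
    max-width: 65em;    /* cap flowed text at a readable measure */
    margin-left: auto;  /* keep the column centered */
    margin-right: auto;
}
```

After saving the file into the profile's chrome/ directory, restarting the browser applies it to every page, so site-specific tweaking may still be needed.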
From jyasskin at gmail.com Tue Mar 27 07:51:38 2012 From: jyasskin at gmail.com (Jeffrey Yasskin) Date: Mon, 26 Mar 2012 22:51:38 -0700 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: Message-ID: FWIW, I'm not sure you're the right person to drive time PEPs. You don't seem to have come into it with much knowledge of time, and it's taken several repetitions for you to take corrections into account in both this discussion and the Decimal/datetime representation PEP. On Mon, Mar 26, 2012 at 4:32 PM, Victor Stinner wrote: > Hi, > > I started to write the PEP 418 to clarify the notions of monotonic and > steady clocks. > > The PEP is a draft and everyone is invited to contribute! > > http://www.python.org/dev/peps/pep-0418/ > http://hg.python.org/peps/file/tip/pep-0418.txt > > Victor > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/jyasskin%40gmail.com From ncoghlan at gmail.com Tue Mar 27 08:48:49 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 27 Mar 2012 16:48:49 +1000 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: Message-ID: On Tue, Mar 27, 2012 at 3:51 PM, Jeffrey Yasskin wrote: > FWIW, I'm not sure you're the right person to drive time PEPs. You > don't seem to have come into it with much knowledge of time, and it's > taken several repetitions for you to take corrections into account in > both this discussion and the Decimal/datetime representation PEP. The main things required to be a PEP champion are passion and a willingness to listen to expert feedback and change course in response. If someone lacks the former, they will lose steam and their PEP will eventually be abandoned. 
If they don't listen to expert feedback, then their PEP will ultimately be rejected (sometimes a PEP will be rejected anyway as a poor fit for the language *despite* being responsive to feedback, but that's no slight to the PEP author). Victor has shown himself to be quite capable of handling those aspects of the PEP process, and the topics he has recently applied himself to are ones where it is worthwhile having a good answer in the standard library for Python 3.3. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From p.f.moore at gmail.com Tue Mar 27 09:14:11 2012 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 27 Mar 2012 08:14:11 +0100 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: <4F710870.9090602@scottdial.com> References: <4F710870.9090602@scottdial.com> Message-ID: On 27 March 2012 01:23, Scott Dial wrote: > If you want to define new clocks, then I wish you would use the same > definitions that C++0x is using. That is: > > system_clock = wall clock time > monotonic_clock = always goes forward but can be adjusted > steady_clock = always goes forward and cannot be adjusted > high_resolution_clock = steady_clock || system_clock +1. This seems like an ideal case for following prior art in designing a Python API. Paul From glyph at twistedmatrix.com Tue Mar 27 09:17:32 2012 From: glyph at twistedmatrix.com (Glyph) Date: Tue, 27 Mar 2012 03:17:32 -0400 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: <4F710870.9090602@scottdial.com> Message-ID: <6D240213-9A35-4540-9400-1DE554C39629@twistedmatrix.com> On Mar 26, 2012, at 10:26 PM, Zooko Wilcox-O'Hearn wrote: > Note that the C++ standard deprecated monotonic_clock once they > realized that there is absolutely no point in having a clock that > jumps forward but not back, and that none of the operating systems > implement such a thing -- instead they all implement a clock which > doesn't jump in either direction.
This is why I don't like the C++ terminology, because it seems to me that the C++ standard makes incorrect assertions about platform behavior, and apparently they standardized it without actually checking on platform capabilities. The clock does jump forward when the system suspends. At least some existing implementations of steady_clock in C++ already have this problem, and I think they all might. I don't think they can fully fix it without kernel changes, either. On linux, see discussion of a possible CLOCK_BOOTTIME in the future. The only current way I know of to figure out how long the system has been asleep is to look at the wall clock and compare, and we've already gone over the problems with relying on the wall clock. Plus, libstdc++ gives you no portable way to get informed about system power management events, so you can't fix it even if you know about this problem, natch. Time with respect to power management state changes is something that the PEP should address fully, for each platform. On the other hand, hopefully you aren't controlling your python-based CNC laser welder from a laptop that you are closing the lid on while the beam is in operation. Not that the PEP shouldn't address it, but maybe it should just address it to say "you're on your own" and refer to a few platform-specific resources for correcting this type of discrepancy. (, , ). -glyph -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From glyph at twistedmatrix.com Tue Mar 27 10:05:39 2012 From: glyph at twistedmatrix.com (Glyph) Date: Tue, 27 Mar 2012 04:05:39 -0400 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: <6D240213-9A35-4540-9400-1DE554C39629@twistedmatrix.com> References: <4F710870.9090602@scottdial.com> <6D240213-9A35-4540-9400-1DE554C39629@twistedmatrix.com> Message-ID: On Mar 27, 2012, at 3:17 AM, Glyph wrote: > I don't think they can fully fix it without kernel changes I got really curious about this and went and did some research. With some really platform-specific hackery on every platform, you can mostly figure it out; completely on OS X and Windows, although (as far as I can tell) only partially on Linux and FreeBSD. I'm not sure if it's possible to make use of these facilities without a Twisted-style event-loop though. If anybody's interested, I recorded the results of my research in a comment on the Twisted ticket for this: . -glyph -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezio.melotti at gmail.com Tue Mar 27 10:28:32 2012 From: ezio.melotti at gmail.com (Ezio Melotti) Date: Tue, 27 Mar 2012 02:28:32 -0600 Subject: [Python-Dev] PendingDeprecationWarning In-Reply-To: References: Message-ID: Hi, On Thu, Mar 22, 2012 at 3:13 PM, Terry Reedy wrote: > My impression is that the original reason for PendingDeprecationWarning > versus DeprecationWarning was to be off by default until the last release > before removal. But having DeprecationWarnings on by default was found to be > too obnoxious and it too is off by default. So do we still need > PendingDeprecationWarnings? My impression is that it is mostly not used, as > it is a nuisance to remember to change from one to the other. The > deprecation message can always indicate the planned removal time. I searched > the Developer's Guide for both deprecation and DeprecationWarning and found > nothing. 
See http://mail.python.org/pipermail/python-dev/2011-October/114199.html Best Regards, Ezio Melotti From regebro at gmail.com Tue Mar 27 11:03:07 2012 From: regebro at gmail.com (Lennart Regebro) Date: Tue, 27 Mar 2012 11:03:07 +0200 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: <4F710870.9090602@scottdial.com> <6D240213-9A35-4540-9400-1DE554C39629@twistedmatrix.com> Message-ID: Reading this discussion, my conclusion is that not only us are confused, but everyone is. I think therefore, that the way forward is to only expose underlying API functions, and pretty much have no intelligence at all. At a higher level, we have two different "desires" here. You may want a monotonic clock, or you may not care. You may want high resolution, or you might not care. Which one is more important is something only you know. Therefore, we must have, at the minimum, a function that returns the highest resolution monotonic clock possible, as well as a function that returns the highest resolution system/wall clock possible. We also need ways to figure out what the resolution is of these clocks. In addition to that, you may have the requirement that the monotonic clock also should not be able to jump forward, but if I understand things correctly, most current OS's will not guarantee this. You may also have the requirement that the clock not only does not jump forward, but that it doesn't go faster or slower. Some clock implementations will speed up or slow down the monotonic clock, without jumps, to sync up with the wall clock. It seems only Unix provides a monotonic clock (CLOCK_MONOTONIC_RAW) that does not get adjusted at all. Now between all these requirements, only you know which one is more important? Do you primarily want a raw monotonic clock, and secondarily high resolution, or is the resolution more important than it being monotonic? 
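That ranking really is caller-specific. As a concrete illustration of "expose the underlying clocks and let the caller decide", the clock_gettime()/clock_getres() wrappers that later shipped in Python 3.3's time module can be probed like this (a Unix-only sketch; the helper name is made up for illustration):

```python
import time

def describe_clocks():
    """Report which of the discussed POSIX clocks this platform
    exposes, with a current reading and the advertised resolution."""
    report = {}
    for name in ("CLOCK_REALTIME", "CLOCK_MONOTONIC", "CLOCK_MONOTONIC_RAW"):
        clock_id = getattr(time, name, None)
        if clock_id is None:
            continue  # clock not exposed on this platform
        report[name] = {
            "now": time.clock_gettime(clock_id),
            "resolution": time.clock_getres(clock_id),
        }
    return report
```

With that report in hand, an application can pick CLOCK_MONOTONIC_RAW when it exists and freedom from adjustment matters, or fall back to whichever clock offers the resolution it needs.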
(Because if you need a high resolution, you are usually measuring small timeframes, and the clock is more unlikely to be adjusted, for example). Since there is no obvious "A is better than B that is better than C" we first of all have to expose the underlying APIs somehow, to allow people to make their own decisions. Secondly, since apparently not only python-dev, but many others as well, are a bit confused on this complex issue, I'm not sure we can provide any high-level functions that make a best choice. As such the proposed time.monotonic() to get the monotonic clock on the various systems makes a lot of sense to me. It should get the highest resolution available on the system. Use GetTickCount64() if available on Windows, else GetTickCount(). The function could have a raw=False parameter to select between clock_gettime(CLOCK_MONOTONIC) and clock_gettime(CLOCK_MONOTONIC_RAW) on Unix, and it would get mach_absolute_time() on OS X. If no monotonic clock is available, it should raise an error. The same if you pass in raw=True and there is no monotonic clock that has no adjustments available. In the same vein, time.time() should provide the highest resolution system clock/wall clock available. We also need functions or attributes to get the resolution of these clocks. But a time.steady() that tries to get a "best case" doesn't make sense at this time, as apparently nobody knows what a best case is, or what it should be called, except that it should apparently not be called steady(). Since monotonic() raises an error if there is no monotonic clock available, implementing your own fallback is trivial in any case. //Lennart From linkerlv at gmail.com Tue Mar 27 12:03:12 2012 From: linkerlv at gmail.com (Linker) Date: Tue, 27 Mar 2012 18:03:12 +0800 Subject: [Python-Dev] how to uninstall python after 'make build' Message-ID: RT thx. linker -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ncoghlan at gmail.com Tue Mar 27 15:23:00 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 27 Mar 2012 23:23:00 +1000 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: <4F710870.9090602@scottdial.com> <6D240213-9A35-4540-9400-1DE554C39629@twistedmatrix.com> Message-ID: On Tue, Mar 27, 2012 at 7:03 PM, Lennart Regebro wrote: > But a time.steady() that tries to get a "best case" doesn't make sense > at this time, as apparently nobody knows what a best case is, or what > it should be called, except that it should apparently not be called > steady(). Since monotonic() raises an error if there is no monotonic > clock available, implementing your own fallback is trivial in any > case. +1 from me to Lennart's suggestion of mostly just exposing time.monotonic() without trying to get too clever. Applications that need a truly precise time source should *never* be reading it from the host OS (one fairly good solution can be to read your time directly from an attached GPS device). However, I think Victor's right to point out that the *standard library* needs to have a fallback to maintain backwards compatibility if time.monotonic() isn't available, and it seems silly to implement the same fallback logic in every module where we manipulate timeouts. As I concur with others that time.steady() is a thoroughly misleading name for this concept, I suggest we encapsulate the "time.monotonic if available, time.time otherwise" handling as a "time.try_monotonic()" function. That's simple, clear and explicit: try_monotonic() tries to use the monotonic clock if it can, but falls back to time.time() rather than failing entirely if no monotonic clock is available. This is essential for backwards compatibility when migrating any current use of time.time() over to time.monotonic().
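A sketch of that helper in terms of the eventual 3.3 names (the exception handling here is an assumption about how an unavailable clock would surface: a missing attribute on older builds, or an OS-level failure):

```python
import time

def try_monotonic():
    """Prefer the monotonic clock, but tolerate its absence by
    falling back to the wall clock, as proposed above."""
    try:
        return time.monotonic()
    except (AttributeError, OSError):
        # AttributeError: the platform build has no time.monotonic();
        # OSError: the OS reports no usable monotonic clock.
        return time.time()
```

Timeout code would then call try_monotonic() unconditionally instead of repeating this fallback dance in every module.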
Yes, the monotonic clock is *better* for many use cases (including timeouts), but you'll usually be OK with the non-monotonic clock, too (particularly if that's what you were using anyway in earlier versions). After all, we've survived this long using time.time() for our timeout calculations, and bugs associated with clock adjustments are a rather rare occurrence. Third party libraries that need to support earlier Python versions can then implement their own fallback logic (since they couldn't rely on time.try_monotonic being available either). The 3.3 time module would then be left with three interfaces: time.time() # Highest resolution timer available time.monotonic() # I'm not yet convinced we need the "raw" parameter but don't much mind either way time.try_monotonic() # Monotonic is preferred, but non-monotonic presents a tolerable risk Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ethan at stoneleaf.us Tue Mar 27 18:18:23 2012 From: ethan at stoneleaf.us (Ethan Furman) Date: Tue, 27 Mar 2012 09:18:23 -0700 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: <4F710870.9090602@scottdial.com> <6D240213-9A35-4540-9400-1DE554C39629@twistedmatrix.com> Message-ID: <4F71E84F.8080309@stoneleaf.us> Nick Coghlan wrote: > The 3.3 time module would then be left with three interfaces: > > time.time() # Highest resolution timer available > time.monotonic() # I'm not yet convinced we need the "raw" parameter > but don't much mind either way > time.try_monotonic() # Monotonic is preferred, but non-monotonic > presents a tolerable risk +1 ~Ethan~ From yselivanov.ml at gmail.com Tue Mar 27 19:10:40 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Tue, 27 Mar 2012 13:10:40 -0400 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: <4F710870.9090602@scottdial.com> <6D240213-9A35-4540-9400-1DE554C39629@twistedmatrix.com> Message-ID: On 2012-03-27, at 9:23 AM, Nick Coghlan
wrote: > time.try_monotonic() # Monotonic is preferred, but non-monotonic > presents a tolerable risk This function seems unnecessary. It's easy to implement it when required in your application, hence I don't think it is worth adding to the stdlib. - Yury From victor.stinner at gmail.com Tue Mar 27 19:34:51 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 27 Mar 2012 19:34:51 +0200 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: Message-ID: > I started to write the PEP 418 to clarify the notions of monotonic and > steady clocks. I replaced time.steady() by time.try_monotonic(). I misunderstood "may not" in the C++ doc: I understood it as "it may be adjusted by NTP", whereas it means "it cannot be adjusted". Sorry for the confusion. I added a time.hires() clock because time.monotonic() and time.try_monotonic() are not the best clocks for profiling or benchmarking. For example, on Windows, time.hires() uses QueryPerformanceCounter() whereas time.monotonic() and time.try_monotonic() use GetTickCount[64](). I added the pseudo-code of each function. I hope that it is easier to understand than a long text. Victor From victor.stinner at gmail.com Tue Mar 27 19:45:24 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 27 Mar 2012 19:45:24 +0200 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: <4F710870.9090602@scottdial.com> References: <4F710870.9090602@scottdial.com> Message-ID: > What is the utility of "strict=True"? If I wanted that mode of > operation, then why would I not just try to use "time.monotonic()" > directly? I mentioned the strict=True API in the PEP just to list all propositions, but the PEP only proposes time.monotonic() and time.try_monotonic(), not the flags API. > At worst, it generates an "AttributeError" (although that is not clear from your PEP). I tried to mention when a function is always available or not always available. Is it better in the last version of the PEP?
> system_clock = wall clock time > monotonic_clock = always goes forward but can be adjusted > steady_clock = always goes forward and cannot be adjusted > high_resolution_clock = steady_clock || system_clock I tried to follow these names in the PEP. I don't propose steady_clock because I don't see exactly which clocks would be used to implement it, nor if we need to provide monotonic *and* steady clocks. What do you think? > Straying from that is only going to create confusion. Besides that, the > one use case for "time.steady()" that you give (benchmarking) is better > served by a clock that follows the C++0x definition. I added a time.hires() clock to the PEP for the benchmarking/profiling use case. This function is not always available and so a program has to fall back manually to another clock. I don't think that it is an issue: Python programs already have to choose between time.clock() and time.time() depending on the OS (e.g. timeit module and pybench program). > As well, certain > kinds of scheduling/timeouts would be better implemented with the C++0x > definition for "steady" rather than the "monotonic" one and vice-versa. Sorry, I don't understand. Which kind of scheduling/timeouts? The PEP is still a draft (work-in-progress). Victor From victor.stinner at gmail.com Tue Mar 27 19:50:35 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 27 Mar 2012 19:50:35 +0200 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: <4F7135AA.9000203@scottdial.com> References: <4F710870.9090602@scottdial.com> <4F7135AA.9000203@scottdial.com> Message-ID: > steady_clock: > > mac = mach_absolute_time > posix = clock_gettime(CLOCK_MONOTONIC) > win = QueryPerformanceCounter I read that QueryPerformanceCounter is not so monotonic, and GetTickCount is preferred. Is it true? > high_resolution_clock: > > * = { steady_clock, if available > system_clock, otherwise } On Windows, I propose to use QueryPerformanceCounter() for time.hires() and GetTickCount() for time.monotonic(). See the PEP for other OSes. Victor From ethan at stoneleaf.us Tue Mar 27 19:35:06 2012 From: ethan at stoneleaf.us (Ethan Furman) Date: Tue, 27 Mar 2012 10:35:06 -0700 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: <4F710870.9090602@scottdial.com> <6D240213-9A35-4540-9400-1DE554C39629@twistedmatrix.com> Message-ID: <4F71FA4A.6070005@stoneleaf.us> Yury Selivanov wrote: > On 2012-03-27, at 9:23 AM, Nick Coghlan wrote: > >> time.try_monotonic() # Monotonic is preferred, but non-monotonic >> presents a tolerable risk > > This function seems unnecessary. It's easy to implement it when > required in your application, hence I don't think it is worth > adding to the stdlib. If I understood Nick correctly, time.try_monotonic() is /for/ the stdlib. If others want to make use of it, fine. If others want to make their own fallback mechanism, also fine. ~Ethan~ From victor.stinner at gmail.com Tue Mar 27 19:51:41 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 27 Mar 2012 19:51:41 +0200 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: Message-ID: 2012/3/27 Jeffrey Yasskin : > FWIW, I'm not sure you're the right person to drive time PEPs. I don't want to drive the PEP. Anyone is invited to contribute, as I wrote in my first message. I'm completing/rewriting the PEP with all comments. Victor From victor.stinner at gmail.com Tue Mar 27 19:55:16 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 27 Mar 2012 19:55:16 +0200 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: <6D240213-9A35-4540-9400-1DE554C39629@twistedmatrix.com> References: <4F710870.9090602@scottdial.com> <6D240213-9A35-4540-9400-1DE554C39629@twistedmatrix.com> Message-ID: > The clock does jump forward when the system suspends.
At least some > existing implementations of steady_clock in C++ already have this problem, > and I think they all might. > .... > Time with respect to power management state changes is something that the > PEP should address fully, for each platform. I don't think that Python should work around OS issues, but document them correctly. I started with this sentence for time.monotonic(): "The monotonic clock may stop while the system is suspended." I don't know exactly how clocks behave with system suspend. Tell me if you have more information. > (, > , > ). I will read these links and maybe add them to the PEP. Victor From victor.stinner at gmail.com Tue Mar 27 19:59:12 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 27 Mar 2012 19:59:12 +0200 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: <4F710870.9090602@scottdial.com> <6D240213-9A35-4540-9400-1DE554C39629@twistedmatrix.com> Message-ID: > That's simple, clear and explicit: try_monotonic() tries to use the > monotonic clock if it can, but falls back to time.time() rather than > failing entirely if no monotonic clock is available. I renamed time.steady() to time.try_monotonic() in the PEP. It's a temporary name until we decide what to do with this function. I also changed it to fall back to time.hires() if time.monotonic() is not available or failed. Victor From fuzzyman at voidspace.org.uk Tue Mar 27 20:00:31 2012 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Tue, 27 Mar 2012 19:00:31 +0100 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: <4F710870.9090602@scottdial.com> Message-ID: <4F72003F.6000606@voidspace.org.uk> On 27/03/2012 18:45, Victor Stinner wrote: > [snip...] >> Straying from that is only going to create confusion. Besides that, the >> one use case for "time.steady()" that you give (benchmarking) is better >> served by a clock that follows the C++0x definition.
> I added a time.hires() clock to the PEP for the benchmarking/profiling > use case. This function is not always available and so a program has > to fallback manually to another clock. I don't think that it is an > issue: Python programs already have to choose between time.clock() and > time.time() depending on the OS (e.g. timeit module and pybench > program). It is this always-having-to-manually-fallback-depending-on-os that I was hoping your new functionality would avoid. Is time.try_monotonic() suitable for this usecase? Michael > >> As well, certain >> kinds of scheduling/timeouts would be better implemented with the C++0x >> definition for "steady" rather than the "monotonic" one and vice-versa. > Sorry, I don't understand. Which kind of scheduling/timeouts? > > The PEP is still a draft (work-in-progress). > > Victor > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk > -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html From anacrolix at gmail.com Tue Mar 27 20:18:29 2012 From: anacrolix at gmail.com (Matt Joiner) Date: Wed, 28 Mar 2012 02:18:29 +0800 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: <4F710870.9090602@scottdial.com> <6D240213-9A35-4540-9400-1DE554C39629@twistedmatrix.com> Message-ID: > I renamed time.steady() to time.try_monotonic() in the PEP. It's a > temporary name until we decide what to do with this function. How about get rid of it? Also monotonic should either not exist if it's not available, or always guarantee a (artificially) monotonic value. 
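A minimal sketch of that second option (a value made artificially monotonic by clamping) could look like this; the function name and the module-level cache are hypothetical, not anything proposed in the PEP:

```python
import time

_last = 0.0

def pseudo_monotonic():
    """Wall-clock time that never goes backwards: if the system
    clock is stepped back, keep returning the last value seen."""
    global _last
    now = time.time()
    if now < _last:
        now = _last  # clamp: pretend time stood still during the step
    else:
        _last = now
    return now
```

The trade-off is that during a backward step the clock appears frozen, so intervals measured across the step are silently too short.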
Finding out that something is already known to not work shouldn't require a call and a faked OSError. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pje at telecommunity.com Tue Mar 27 21:45:58 2012 From: pje at telecommunity.com (PJ Eby) Date: Tue, 27 Mar 2012 15:45:58 -0400 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: References: <4F6935E1.2030309@nedbatchelder.com> <4F69E337.7010501@nedbatchelder.com> <4F6A560A.6050207@canterbury.ac.nz> <5260192D-7EF1-4021-AB50-FD2021566CFF@twistedmatrix.com> <4F6D2061.3080804@canterbury.ac.nz> <4F6D5C52.1030802@canterbury.ac.nz> <87y5qpikah.fsf@benfinney.id.au> Message-ID: On Mon, Mar 26, 2012 at 11:23 PM, Stephen J. Turnbull wrote: > On Tue, Mar 27, 2012 at 1:35 AM, PJ Eby wrote: > > On Sun, Mar 25, 2012 at 2:56 AM, Stephen J. Turnbull > > > wrote: > >> > >> But since he's arguing the > >> other end in the directory layout thread (where he says there are many > >> special ways to invoke Python so that having different layouts on > >> different platforms is easy to work around), I can't give much weight > >> to his preference here. > > > > > > You're misconstruing my argument there: I said, rather, that the One > Obvious > > Way to deploy a Python application is to dump everything in one > directory, > > as that is the one way that Python has supported for at least 15 years > now. > > It's not at all obvious on any of the open source platforms (Cygwin, > Debian, > Gentoo, and MacPorts) that I use. In all cases, both several python > binaries > and installed libraries end up in standard places in the distro-maintained > hierarchies, and it is not hard to confuse the distro-installed Pythons. > Really? I've been doing "dump the app in a directory" since 1998 or so on various *nix platforms. And when distutils came along, I set up a user-specific cfg to install in the same directory. 
ISTR a 5-line pydistutils.cfg is sufficient to make everything go into a particular directory, for packages using distutils for installation. Being confident that one knows enough to set up a single directory correctly > in the face of some of the unlovely things that packages may do requires > more knowledge of how Python's import etc works than I can boast: > virtualenv is a godsend. By analogy, yes, I think it makes sense to ask > you > to learn a bit about CSS and add a single line like "body { width: 65em; > }" to > your local config. That's one reason why CSS is designed to cascade. > That line won't work - it'll make the entire page that width, instead of just text paragraphs. (Plus, it should only be the *maximum* width - i.e. max-width.) Unfortunately, there's no way to identify the correct selector to use on all sites to select just the right elements to set max-width on - not all text is in "p", sometimes preformatted text is in a p with styles setting the formatting to be preformatted. (In other words, I actually do know a little about CSS - enough to know your idea won't actually work without tweaking it for different sites. I have enough Greasemonkey scripts as it is, never mind custom CSS.) > The comparison to CSS is also lost on me here; creating user-specific CSS > is > > more aptly comparable telling people to write their own virtualenv > > implementations from scratch, and resizing the browser window is more > akin > > to telling people to create a virtualenv every time they *run* the > > application, rather than just once when installing it. > > Huh, if you say so -- I didn't realize that virtualenv did so little that > it could be written in one line. Around 3-5 lines for dumping everything into a single directory. If you need multiple such directories at any one time, you can alternately pass --install-lib and --install-scripts to "setup.py install" when you install things.
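For concreteness, the sort of per-user distutils config PJ is recalling might look like this (the target directory is hypothetical, not anything from the thread); distutils reads it from the user's home directory and applies it to every "setup.py install":

```ini
[install]
install-lib = /home/me/py-apps
install-scripts = /home/me/py-apps
```

The `[install]` section and the `install-lib`/`install-scripts` options are standard distutils settings; this is only a sketch of the "everything into one directory" setup being described.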
Or you can use easy_install and just specify the -d (--install-dir) option. Or, you could use the PYTHONHOME solution I described here in 2005: http://svn.python.org/view/sandbox/trunk/setuptools/EasyInstall.txt?r1=41220&r2=41221 Ian Bicking turned those instructions into a script, which he then gradually evolved into what we now know as virtualenv. Before that happened, though, I was deluged with complaints from people who were using "dump it in a directory somewhere" on *nix platforms and didn't want to have anything to do with those danged newfangled virtual whatchacallits. ;-) I mention this for context: my generic perception of virtualenv is that it's a fancy version of PYTHONHOME for people who can't install to site-packages and for some reason don't just dump their files in a PYTHONPATH directory like all those complaining people obviously did. ;-) All I know (and care) is > that it promises to do all that stuff for me, and without affecting the > general public (ie, the distro-provided Python apps). > > And that's why I think the width of pages containing flowed text > should be left up to the user to configure. > Your analogy is backwards: virtualenv is a generic, does-it-all-for-you, no need to touch it solution. User CSS and window sizes have to be specified per-site. (Note that it doesn't suffice to use a small window to get optimal wrap width: you have to resize to allow for navigation bars, multiple columns, etc.) I think we should just agree to disagree; there's virtually no way I'm going to be convinced on either of these points. (I do, however, remain open to learning something new about virtualenv, if it actually does something besides make it possible for you to deal with ill-behaved setup scripts and installation tools that won't just let you point them at a single directory and have done with it.) -------------- next part -------------- An HTML attachment was scrubbed...
URL: From victor.stinner at gmail.com Tue Mar 27 23:02:08 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 27 Mar 2012 23:02:08 +0200 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: <4F710870.9090602@scottdial.com> <6D240213-9A35-4540-9400-1DE554C39629@twistedmatrix.com> Message-ID: <4F722AD0.7010007@gmail.com> On 27/03/2012 20:18, Matt Joiner wrote: > Also monotonic should either not exist if it's not available, or always > guarantee a (artificially) monotonic value. What do you mean by "(artificially) monotonic value"? Should Python work around OS bugs by always returning the maximum value of the clock? > Finding out that something > is already known to not work shouldn't require a call and a faked OSError. What do you mean? time.monotonic() is not implemented if the OS doesn't provide any monotonic clock (e.g. if clock_gettime(CLOCK_MONOTONIC) is missing on UNIX). OSError is only raised when the OS returns an error. There is no such "faked OSError". Victor From martin at v.loewis.de Tue Mar 27 23:11:07 2012 From: martin at v.loewis.de (=?ISO-8859-15?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 27 Mar 2012 23:11:07 +0200 Subject: [Python-Dev] Bug tracker outage Message-ID: <4F722CEB.1040102@v.loewis.de> Upfront hosting (Izak Burger) is going to do a Debian upgrade of the bug tracker machine "soon" (likely tomorrow). This may cause some outage, since there is a lot of custom stuff on the machine which may break with newer (Python) versions. I'll notify here when the upgrade is complete. Regards, Martin From victor.stinner at gmail.com Tue Mar 27 23:42:51 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 27 Mar 2012 23:42:51 +0200 Subject: [Python-Dev] [Python-checkins] peps: Approve PEP 411. In-Reply-To: References: Message-ID: 2012/3/27 guido.van.rossum : > http://hg.python.org/peps/rev/b9f43fe69691 > changeset: 4152:b9f43fe69691 > user: Guido van Rossum > date:
Mon Mar 26 20:35:14 2012 -0700 > summary: > Approve PEP 411. > (...) > -Status: Draft > +Status: Approved The pep0 module doesn't accept the "Approved" status. I suppose that you mean "Accepted" and so changed the status. If not, please revert my change and fix the pep0 module. Victor From ncoghlan at gmail.com Wed Mar 28 01:45:32 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 28 Mar 2012 09:45:32 +1000 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: <4F710870.9090602@scottdial.com> <6D240213-9A35-4540-9400-1DE554C39629@twistedmatrix.com> Message-ID: Matt, we need the fallback behaviour in the stdlib so we can gracefully degrade the stdlib's *own* timeout handling back to the 3.2 status quo when there is no monotonic clock available. It is *not* acceptable for the Python 3.3 stdlib to only work on platforms that provide a monotonic clock. Since duplicating that logic in every module that handles timeouts would be silly, it makes sense to provide an obvious way to do it in the time module. -- Sent from my phone, thus the relative brevity :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From victor.stinner at gmail.com Wed Mar 28 02:36:18 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Wed, 28 Mar 2012 02:36:18 +0200 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: <4F72003F.6000606@voidspace.org.uk> References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> Message-ID: <4F725D02.1080706@gmail.com> Scott wrote: << The Boost implementation can be summarized as: system_clock: mac = gettimeofday posix = clock_gettime(CLOCK_REALTIME) win = GetSystemTimeAsFileTime steady_clock: mac = mach_absolute_time posix = clock_gettime(CLOCK_MONOTONIC) win = QueryPerformanceCounter high_resolution_clock: * = { steady_clock, if available system_clock, otherwise } >> I read again the doc of the QElapsedTimer class of the Qt library.
So Qt and Boost agree to say that QueryPerformanceCounter() *is* monotonic. I was confused because of a bug found in 2006 in Windows XP on multicore processors. QueryPerformanceCounter() gave a different value on each core. The bug was fixed in Windows and is known as KB896256 (I already added a link to the bug in the PEP). >> I added a time.hires() clock to the PEP for the benchmarking/profiling >> use case (...) > > It is this always-having-to-manually-fallback-depending-on-os that I was > hoping your new functionality would avoid. Is time.try_monotonic() > suitable for this usecase? If QueryPerformanceCounter() is monotonic, the API can be simplified to: * time.time() = system clock * time.monotonic() = monotonic clock * time.hires() = monotonic clock or fallback to system clock time.hires() definition is exactly what I was trying to implement with "time.steady(strict=True)" / "time.try_monotonic()". -- Scott> monotonic_clock = always goes forward but can be adjusted Scott> steady_clock = always goes forward and cannot be adjusted I don't know if the monotonic clock should be called time.monotonic() or time.steady(). The clock speed can be adjusted by NTP, at least on Linux < 2.6.28. I don't know if other clocks used by my time.monotonic() proposition can be adjusted or not. If I understand correctly, time.steady() cannot be implemented using CLOCK_MONOTONIC on Linux because CLOCK_MONOTONIC can be adjusted? Does it really matter if a monotonic speed is adjusted? 
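The simplified three-clock API sketched above can be illustrated in a few lines; note that time.hires() is a PEP-draft name, not an existing function, and the sketch assumes time.monotonic() is simply absent (an AttributeError) on platforms without an OS monotonic clock:

```python
import time

def hires():
    """Draft semantics of the proposed time.hires(): use the monotonic
    clock when the platform has one, otherwise fall back to the
    (possibly jumping) system clock."""
    try:
        return time.monotonic()
    except AttributeError:  # no OS monotonic clock on this platform
        return time.time()
```

A benchmarking caller would then use hires() unconditionally, instead of choosing between time.clock() and time.time() by hand as the timeit module had to.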
Victor From ethan at stoneleaf.us Wed Mar 28 03:02:48 2012 From: ethan at stoneleaf.us (Ethan Furman) Date: Tue, 27 Mar 2012 18:02:48 -0700 Subject: [Python-Dev] OT: single directory development [was Re: Playing with a new theme for the docs] In-Reply-To: References: <4F6935E1.2030309@nedbatchelder.com> <4F69E337.7010501@nedbatchelder.com> <4F6A560A.6050207@canterbury.ac.nz> <5260192D-7EF1-4021-AB50-FD2021566CFF@twistedmatrix.com> <4F6D2061.3080804@canterbury.ac.nz> <4F6D5C52.1030802@canterbury.ac.nz> <87y5qpikah.fsf@benfinney.id.au> Message-ID: <4F726338.2060001@stoneleaf.us> PJ Eby wrote: > Really? I've been doing "dump the app in a directory" since 1998 or so > on various *nix platforms. And when distutils came along, I set up a > user-specific cfg to install in the same directory. ISTR a 5-line > pydistutils.cfg is sufficient to make everything go into to a particular > directory, for packages using distutils for installation. Perhaps somebody could clue me in on the best way to handle this scenario: I develop in a single directory: c:\source\loom\ loom.py test_loom.py because test_loom could at some point be executed by somebody besides me, while living in site-packages, I have test_loom.py create its needed files, including dynamic test scripts, in a temp directory. While this works fine for site-packages, it doesn't work at all while testing as the test script, being somewhere else, won't be able to load my test copy of loom. I know I have at least two choices: - go with a virtualenv and have my development code be in the virtualenv site-packages - insert the current path into sys.path before calling out to the dynamic scripts, but only if the current path is not site-packages Suggestions? 
~Ethan~ From anacrolix at gmail.com Wed Mar 28 04:41:08 2012 From: anacrolix at gmail.com (Matt Joiner) Date: Wed, 28 Mar 2012 10:41:08 +0800 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: <4F725D02.1080706@gmail.com> References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> Message-ID: On Mar 28, 2012 8:38 AM, "Victor Stinner" wrote: > > Scott wrote: > > << The Boost implementation can be summarized as: > > system_clock: > > mac = gettimeofday > posix = clock_gettime(CLOCK_REALTIME) > win = GetSystemTimeAsFileTime > > steady_clock: > > mac = mach_absolute_time > posix = clock_gettime(CLOCK_MONOTONIC) > win = QueryPerformanceCounter > > high_resolution_clock: > > * = { steady_clock, if available > system_clock, otherwise } >> > > I read again the doc of the QElapsedTimer class of the Qt library. So Qt and Boost agree to say that QueryPerformanceCounter() *is* monotonic. > > I was confused because of a bug found in 2006 in Windows XP on multicore processors. QueryPerformanceCounter() gave a different value on each core. The bug was fixed in Windows and is known as KB896256 (I already added a link to the bug in the PEP). > >>> I added a time.hires() clock to the PEP for the benchmarking/profiling >>> use case (...) >> >> >> It is this always-having-to-manually-fallback-depending-on-os that I was >> hoping your new functionality would avoid. Is time.try_monotonic() >> suitable for this usecase? > > > If QueryPerformanceCounter() is monotonic, the API can be simplified to: > > * time.time() = system clock > * time.monotonic() = monotonic clock > * time.hires() = monotonic clock or fallback to system clock > > time.hires() definition is exactly what I was trying to implement with "time.steady(strict=True)" / "time.try_monotonic()". 
> > -- > > Scott> monotonic_clock = always goes forward but can be adjusted > Scott> steady_clock = always goes forward and cannot be adjusted > > I don't know if the monotonic clock should be called time.monotonic() or time.steady(). The clock speed can be adjusted by NTP, at least on Linux < 2.6.28. Monotonic. It's still monotonic if it is adjusted forward, and that's okay. > > I don't know if other clocks used by my time.monotonic() proposition can be adjusted or not. > > If I understand correctly, time.steady() cannot be implemented using CLOCK_MONOTONIC on Linux because CLOCK_MONOTONIC can be adjusted? > > Does it really matter if a monotonic speed is adjusted? > > > Victor > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephen at xemacs.org Wed Mar 28 06:26:38 2012 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Wed, 28 Mar 2012 13:26:38 +0900 Subject: [Python-Dev] Playing with a new theme for the docs In-Reply-To: References: <4F6935E1.2030309@nedbatchelder.com> <4F69E337.7010501@nedbatchelder.com> <4F6A560A.6050207@canterbury.ac.nz> <5260192D-7EF1-4021-AB50-FD2021566CFF@twistedmatrix.com> <4F6D2061.3080804@canterbury.ac.nz> <4F6D5C52.1030802@canterbury.ac.nz> <87y5qpikah.fsf@benfinney.id.au> Message-ID: On Wed, Mar 28, 2012 at 4:45 AM, PJ Eby wrote: > [ body { width: 65em; } ] won't work - it'll make the entire page > that width, instead of just text paragraphs. True (I realized that might be bad in many cases later -- should have tested first rather than posting something random), but despite your argument, "p { max-width: 40em; }" will be good enough to handle pages where the designer leaves text width up to the user. 
Pages (or parts thereof) where the designer fubars the format for you are not my problem, they're *your* problem. "Be careful what you ask for, because you just might get it." Also, this is UI, in an environment where poor UI is easily worked around with a flick of your mouse. An improvement in 90% of the cases is a 90% improvement -- there won't be any fatal problems in the 10% where designers choose a max-width that's 200% of your personal max-width or whatever. > Your analogy is backwards: virtualenv is a generic, > does-it-all-for-you, no need to touch it solution. If you want to phrase it as an analogy, mine is virtualenv :: do-it-*for*-you :: site-specified max-width but dump-it-in-a-directory :: DIY :: user-specified CSS. The point of the analogy is that you're being inconsistent by using dump-it-in-a-directory yourself but recommending site-specified max-width for the Python docs. It's true I'm being similarly inconsistent in a sense (using virtualenv myself but recommending user CSS for Python docs). However, in this discussion, there are more important things than consistency. I think there are an awful lot of people who need reliable deployment, consistent with their development environments, so it makes sense to have Ian package it up for us as "virtualenv", and for it to be somewhat inflexible in its rules for doing so. OTOH, AFAICS those who use maximized windows for everything are a relatively small minority who will be well-served by a simple workaround, and there are gains to having the flexibility for the rest of us. The big problem from my point of view with the user CSS "solution" to the maximized-window problem is that common browsers don't make this easy to do. Cf. Terry Reedy's post asking how to specify user CSS for Firefox (where it's actually easy enough to do once you know how, but evidently not very discoverable).
However, if you're in a sufficiently small minority (as I believe you are), it makes sense for Georg to (regretfully) ask you to use personal CSS to tell your browser about your preference. > User CSS and window sizes have to specified per-site. They do? But *you* don't, you just maximize your window. So only a few sites will need specification in any case, those whose max-width exceeds tolerable bounds for you. And a personal max-width will affect only unbounded pages unless you use "! important". The point of user CSS is not to get optimality, which is a content-dependent problem for negotiation between user and designer, and sometimes one side or the other takes absolute priority. It's to ensure that users with special needs (very nearsighted users, users who prefer to work always in a maximized browser window) don't get screwed by extreme designs. > (Note that it doesn't suffice to use a small > window to get optimal wrap width: I don't believe in optimal wrap width, and as far as I know, neither do the 1% of designers. I don't even *have* a personal optimal wrap width, although max height is almost always close enough to optimal. But I sometimes maximize the width of my browser window to get even more of the "structure" of text viewable in it, or reduce it to make word-for-word reading more efficient. Again, the problem here is not "suboptimal". AFAICS, it's preventing a few people who have evolved personal workflows adapted to a common design pattern that's not appropriate for documentation (IMO YMMV) from getting *pessimal* results. I believe that you can get what you need with user CSS in the case of no max-width (let's not forget that you and R. David Murray may prefer different values!), while many use-cases would want no max-width. But "p { max-width: none ! important; }" would not work well for us, since it would override all designers who set max-width.
> I think we should just agree to disagree; there's virtually > no way I'm going to be convinced on either of these points. Hey, I'm an economist: de gustibus non est disputandum. Convincing you is not my goal; I want to convince Georg! *Policy* needs to be for the greatest good of the greatest number, and Georg IMO should set max-width, or not, as that makes reading the documentation more effective for the most people. I prefer "not", assuming it doesn't completely trash its usability for you and David (and assuming you're in as small a minority as I believe you to be). From ncoghlan at gmail.com Wed Mar 28 06:45:33 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 28 Mar 2012 14:45:33 +1000 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: <4F725D02.1080706@gmail.com> References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> Message-ID: On Wed, Mar 28, 2012 at 10:36 AM, Victor Stinner wrote: > If QueryPerformanceCounter() is monotonic, the API can be simplified to: > > ?* time.time() = system clock > ?* time.monotonic() = monotonic clock > ?* time.hires() = monotonic clock or fallback to system clock > > time.hires() definition is exactly what I was trying to implement with > "time.steady(strict=True)" / "time.try_monotonic()". Please don't call the fallback version "hires" as it suggests it may be higher resolution than time.time() and that's completely the wrong idea. If we're simplifying the idea to only promising a monotonic clock (i.e. will never go backwards within a given process, but may produce the same value for an indefinite period, and may jump forwards by arbitrarily large amounts), then we're back to being able to enforce monotonicity even if the underlying clock jumps backwards due to system clock adjustments. 
Specifically: time.time() = system clock time._monotonic() = system level monotonic clock (if it exists) time.monotonic() = clock based on either time._monotonic() (if available) or time.time() (if not) that enforces monotonicity of returned values. Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From pje at telecommunity.com Wed Mar 28 07:18:33 2012 From: pje at telecommunity.com (PJ Eby) Date: Wed, 28 Mar 2012 01:18:33 -0400 Subject: [Python-Dev] OT: single directory development [was Re: Playing with a new theme for the docs] In-Reply-To: <4F726338.2060001@stoneleaf.us> References: <4F6935E1.2030309@nedbatchelder.com> <4F69E337.7010501@nedbatchelder.com> <4F6A560A.6050207@canterbury.ac.nz> <5260192D-7EF1-4021-AB50-FD2021566CFF@twistedmatrix.com> <4F6D2061.3080804@canterbury.ac.nz> <4F6D5C52.1030802@canterbury.ac.nz> <87y5qpikah.fsf@benfinney.id.au> <4F726338.2060001@stoneleaf.us> Message-ID: On Tue, Mar 27, 2012 at 9:02 PM, Ethan Furman wrote: > PJ Eby wrote: > >> Really? I've been doing "dump the app in a directory" since 1998 or so >> on various *nix platforms. And when distutils came along, I set up a >> user-specific cfg to install in the same directory. ISTR a 5-line >> pydistutils.cfg is sufficient to make everything go into to a particular >> directory, for packages using distutils for installation. >> > > Perhaps somebody could clue me in on the best way to handle this scenario: > > I develop in a single directory: > > c:\source\loom\ > loom.py > test_loom.py > > because test_loom could at some point be executed by somebody besides me, > while living in site-packages, I have test_loom.py create its needed files, > including dynamic test scripts, in a temp directory. While this works fine > for site-packages, it doesn't work at all while testing as the test script, > being somewhere else, won't be able to load my test copy of loom.
> > I know I have at least two choices: > - go with a virtualenv and have my development code be in the > virtualenv site-packages > - insert the current path into sys.path before calling out to > the dynamic scripts, but only if the current path is not > site-packages > > Suggestions? > At first I didn't understand the question, because I was misled by your directory layout sketch -- AFAICT it's completely irrelevant to the real problem, which is simply making sure that the directory containing loom.__file__ is on sys.path. I'm somewhat hard-pressed to see, "embed the virtualenv tool in my test script" as superior to either "Copy the modules I want to the temp directory alongside the modules" or "setup sys.path when running the scripts" (e.g. by altering the PYTHONPATH in the child environment). (I'm not clear on why you want to skip the path alteration in the site-packages case - is there something else in site-packages you don't want having top import priority? And if not, why not?) All that being said, I can see why under certain circumstances, a virtualenv might be an optimal tool to reach for; it's just not the *first* thing I'd reach for if a sys.path[0] assignment or environment variable setting would suffice to get the needed module(s) on the path. > ~Ethan~ > -------------- next part -------------- An HTML attachment was scrubbed... 
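PJ's "sys.path[0] assignment" suggestion can be sketched in a few lines at the top of the test script; the site-packages check and the helper name are assumptions fitted to Ethan's described layout, not code from either author:

```python
import os
import sys

def prepend_script_dir():
    """Put this script's own directory first on sys.path -- unless the
    script is already running from site-packages -- so that 'import loom'
    finds the development copy sitting next to test_loom.py."""
    here = os.path.dirname(os.path.abspath(__file__))
    if not here.rstrip(os.sep).endswith("site-packages"):
        sys.path.insert(0, here)
    return here
```

The dynamic scripts written to the temp directory can be given the same search path by spawning them with PYTHONPATH set to the returned directory, e.g. env = dict(os.environ, PYTHONPATH=prepend_script_dir()).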
URL: From scott+python-dev at scottdial.com Wed Mar 28 08:40:24 2012 From: scott+python-dev at scottdial.com (Scott Dial) Date: Wed, 28 Mar 2012 02:40:24 -0400 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: <4F725D02.1080706@gmail.com> References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> Message-ID: <4F72B258.10306@scottdial.com> On 3/27/2012 8:36 PM, Victor Stinner wrote: > Scott wrote: > Scott> monotonic_clock = always goes forward but can be adjusted > Scott> steady_clock = always goes forward and cannot be adjusted > > I don't know if the monotonic clock should be called time.monotonic() or > time.steady(). The clock speed can be adjusted by NTP, at least on Linux > < 2.6.28. > > I don't know if other clocks used by my time.monotonic() proposition can > be adjusted or not. > > If I understand correctly, time.steady() cannot be implemented using > CLOCK_MONOTONIC on Linux because CLOCK_MONOTONIC can be adjusted? > > Does it really matter if a monotonic speed is adjusted? You are right that CLOCK_MONOTONIC can be adjusted, so the Boost implementation is wrong. I'm not sure that CLOCK_MONOTONIC_RAW is right either due to suspend -- there doesn't appear to be a POSIX or Linux clock that is defined that meets the "steady" definition. I am not familiar enough with Windows or Mac to know for certain whether the Boost implementation has the correct behaviors either. With that in mind, it's certainly better that we just provide time.monotonic() for now. If platform support becomes available, then we can expose that as it becomes available in the future. In other words, at this time, I don't think "time.steady()" can be implemented faithfully for any platform so let's just not have it at all. In that case, I don't think time.try_monotonic() is really needed because we can emulate "time.monotonic()" in software if the platform is deficient.
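The software emulation Scott mentions could be as small as a wrapper that never lets the returned value decrease -- a sketch of the idea only, not anything specified by the PEP:

```python
import time

_last = 0.0

def monotonic_emulated():
    """time.time(), clamped so consecutive calls never go backwards.

    If the system clock is set back, this clock simply stands still
    until real time catches up again."""
    global _last
    now = time.time()
    if now > _last:
        _last = now
    return _last
```

Note the trade-off: the clamped clock is monotonic, but during a backward adjustment it stops measuring elapsed time at all.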
I can't imagine a scenario where you would ask for a monotonic clock and would rather have an error than have Python fill in the gap with an emulation. -- Scott Dial scott at scottdial.com From g.brandl at gmx.net Wed Mar 28 08:41:13 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Wed, 28 Mar 2012 08:41:13 +0200 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> Message-ID: On 28.03.2012 06:45, Nick Coghlan wrote: > On Wed, Mar 28, 2012 at 10:36 AM, Victor Stinner > wrote: >> If QueryPerformanceCounter() is monotonic, the API can be simplified to: >> >> * time.time() = system clock >> * time.monotonic() = monotonic clock >> * time.hires() = monotonic clock or fallback to system clock >> >> time.hires() definition is exactly what I was trying to implement with >> "time.steady(strict=True)" / "time.try_monotonic()". > > Please don't call the fallback version "hires" as it suggests it may > be higher resolution than time.time() and that's completely the wrong > idea. It's also a completely ugly name, since it's quite hard to figure out what it is supposed to stand for in the first place. Georg From victor.stinner at gmail.com Wed Mar 28 10:40:22 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Wed, 28 Mar 2012 10:40:22 +0200 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> Message-ID: >> ?* time.time() = system clock >> ?* time.monotonic() = monotonic clock >> ?* time.hires() = monotonic clock or fallback to system clock >> >> time.hires() definition is exactly what I was trying to implement with >> "time.steady(strict=True)" / "time.try_monotonic()". > > Please don't call the fallback version "hires" as it suggests it may > be higher resolution than time.time() and that's completely the wrong > idea. 
Why would it be a wrong idea? On Windows, time.monotonic() frequency is at least 1 MHz (can be GHz if it uses your CPU TSC) whereas time.time() is only updated each millisecond in the best case (each 15 ms by default if I remember correctly). On UNIX, CLOCK_MONOTONIC has the same theoretical resolution as CLOCK_REALTIME (1 nanosecond thanks to the timespec structure) and I expect the same accuracy. On Mac, I don't know if mach_absolute_time() is more accurate than, or only as accurate as, time.time(). time.hires() uses time.monotonic() if available, so if time.monotonic() has a higher resolution than time.time(), time.hires() can also be called a high-resolution clock. In practice, time.monotonic() is available on all modern platforms. > If we're simplifying the idea to only promising a monotonic > clock (i.e. will never go backwards within a given process, but may > produce the same value for an indefinite period, and may jump forwards > by arbitrarily large amounts), I don't know of any monotonic clock jumping "forwards by arbitrarily large amounts". Linux can change CLOCK_MONOTONIC speed, but NTP doesn't "jump". > then we're back to being able to enforce monotonicity even > if the underlying clock jumps backwards due to system clock > adjustments. Do you know a monotonic clock that goes backward? If yes, Python might work around the clock bug directly in time.monotonic(). But I would prefer to *not* work around OS bugs.
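The resolution figures quoted here can be probed empirically; this helper (an illustration assumed for this thread, not stdlib API) reports the smallest observed step of a clock, which is only an upper bound on its true resolution:

```python
import time

def smallest_step(clock, samples=100000):
    """Smallest positive difference seen between consecutive clock() calls."""
    best = None
    prev = clock()
    for _ in range(samples):
        cur = clock()
        if cur > prev:  # ignore repeats (and any backward steps)
            step = cur - prev
            if best is None or step < best:
                best = step
            prev = cur
    return best
```

On a typical Linux machine smallest_step(time.monotonic) comes out far below the 1-15 ms update granularity described above for the Windows system clock, but the exact numbers are platform-dependent.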
Victor From victor.stinner at gmail.com Wed Mar 28 10:48:04 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Wed, 28 Mar 2012 10:48:04 +0200 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: <4F72B258.10306@scottdial.com> References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> <4F72B258.10306@scottdial.com> Message-ID: >> Scott> monotonic_clock = always goes forward but can be adjusted >> Scott> steady_clock = always goes forward and cannot be adjusted >> >> I don't know if the monotonic clock should be called time.monotonic() or >> time.steady(). The clock speed can be adjusted by NTP, at least on Linux >> < 2.6.28. (...) > > You are right that CLOCK_MONOTONIC can be adjusted, so the Boost > implementation is wrong. I'm not sure that CLOCK_MONOTONIC_RAW is right > either due to suspend -- there doesn't appear to be a POSIX or Linux > clock that is defined that meets the "steady" definition. The term "adjusted" should be clarified. A clock can be adjusted by setting its counter (e.g. setting the system date and time) or by temporarily changing its frequency (to go faster or slower). Linux only adjusts CLOCK_MONOTONIC frequency but the clock is monotonic because it always goes forward. The monotonic property can be described as: t1=time.monotonic() t2=time.monotonic() assert t2 >= t1 > In that case, I don't think time.try_monotonic() is really needed > because we can emulate "time.monotonic()" in software if the platform is > deficient. time.hires() is needed when the OS doesn't provide any monotonic clock and because time.monotonic() must not use the system clock (which can jump backward). As I wrote, I don't think that Python should work around OS bugs. If the OS monotonic clock is not monotonic, the OS should be fixed. > I can't imagine a scenario where you would ask for a > monotonic clock and would rather have an error than have Python fill in > the gap with an emulation.
Sorry, I don't understand what you mean with "fill in the gap with an emulation". You would like to implement a monotonic clock based on the system clock? Victor From scott+python-dev at scottdial.com Wed Mar 28 11:37:34 2012 From: scott+python-dev at scottdial.com (Scott Dial) Date: Wed, 28 Mar 2012 05:37:34 -0400 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> <4F72B258.10306@scottdial.com> Message-ID: <4F72DBDE.6040003@scottdial.com> On 3/28/2012 4:48 AM, Victor Stinner wrote: >>> Scott> monotonic_clock = always goes forward but can be adjusted >>> Scott> steady_clock = always goes forward and cannot be adjusted >>> >>> I don't know if the monotonic clock should be called time.monotonic() or >>> time.steady(). The clock speed can be adjusted by NTP, at least on Linux >>> < 2.6.28. (...) >> >> You are right that CLOCK_MONOTONIC can be adjusted, so the Boost >> implementation is wrong. I'm not sure that CLOCK_MONOTONIC_RAW is right >> either due to suspend -- there doesn't appear to be a POSIX or Linux >> clock that is defined that meets the "steady" definition. > > The term "adjusted" should be clarified. A clock can be adjusted by > setting its counter (e.g. setting the system date and time) or by > changing temporary its frequency (to go faster or slower). Linux only > adjusts CLOCK_MONOTONIC frequency but the clock is monotonic because > it always goes forward. The monotonic property can be described as: > > t1=time.monotonic() > t2=time.monotonic() > assert t2 >= t1 I agree. The point I was making is that implication of "steady" is that (t2-t1) is the same (given that t2 and t1 occur in time at the same relative moments), which is a guarantee that I don't see any platform providing currently. Any clock that can be "adjusted" in any manner is not going to meet the "steady" criterion. 
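The distinction Scott draws can be phrased as two checks; only the first (monotonicity) is something current platforms let Python promise, while the second (a steady tick rate) is exactly what clock adjustment breaks:

```python
import time

# Monotonic property: values never decrease between consecutive reads.
samples = [time.monotonic() for _ in range(1000)]
assert all(later >= earlier
           for earlier, later in zip(samples, samples[1:]))

# "Steady" would additionally require that equal wall-clock intervals
# always map to equal differences in the returned values -- something
# no assertion here can establish, since NTP may be slewing the rate.
```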
>> In that case, I don't think time.try_monotonic() is really needed >> because we can emulate "time.monotonic()" in software if the platform is >> deficient. > > As I wrote, I don't think that Python should workaround OS bugs. If > the OS monotonic clock is not monotonic, the OS should be fixed. I sympathize with this, but if the idea is that the Python stdlib should use time.monotonic() for scheduling, then it needs to always be available. Otherwise, we are not going to use it ourselves, and what sort of example is that to set? >> I can't imagine a scenario where you would ask for a >> monotonic clock and would rather have an error than have Python fill in >> the gap with an emulation. > > Sorry, I don't understand what you mean with "fill in the gap with an > emulation". You would like to implement a monotonic clock based on the > system clock? If "time.monotonic()" is only sometimes available, then I don't see the added clock being anything more than an amusement. (In this case, I'd rather just use clock_gettime() and friends directly, because I have to be platform aware anyways.) What developers want is a timer that is useful for scheduling things to happen after predictable interval in the future, so we should give them that to the best of our ability. -- Scott Dial scott at scottdial.com From victor.stinner at gmail.com Wed Mar 28 12:45:18 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Wed, 28 Mar 2012 12:45:18 +0200 Subject: [Python-Dev] Bug in generator if the generator in created in a C thread Message-ID: Hi, bugs.python.org is down so I'm reporting the bug here :-) We have a crash in our product when tracing is enabled by sys.settrace() and threading.settrace(). If a Python generator is created in a C thread, calling the generator later in another thread may crash if Python tracing is enabled. 
- the C thread calls PyGILState_Ensure(), which creates a temporary Python thread state
- a generator is created; the generator has a reference to a Python frame which keeps a reference to the temporary Python thread state
- the C thread calls PyGILState_Release(), which destroys the temporary Python thread state
- when the generator is called later in another thread, call_trace() reads the Python thread state from the generator frame, but that thread state has already been destroyed => it crashes on a pointer dereference if the memory was reallocated (by malloc()) and the data were erased

To reproduce the crash, unpack the attached generator_frame_bug.tar.gz, compile the C module using "python setup.py build" and then run "PYTHONPATH=$(ls -d build/lib*/) python test.py" (or just "python test.py" if you installed the _test module). You may need to use Valgrind to see the error, or call memset(tstate, 0xFF, sizeof(*tstate)) before free(tstate) in tstate_delete_common().

Calling the generator should update its reference to the Python thread state in its frame. The generator may also clear frame->f_tstate (to detect bugs earlier), as it does for frame->f_back (to avoid a reference cycle). The attached patch implements this fix for Python 3.3.

Victor -------------- next part -------------- A non-text attachment was scrubbed... Name: generator_frame_bug.tar.gz Type: application/x-gzip Size: 1747 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: generator.patch Type: text/x-patch Size: 789 bytes Desc: not available URL: From victor.stinner at gmail.com Wed Mar 28 12:56:18 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Wed, 28 Mar 2012 12:56:18 +0200 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: <4F72DBDE.6040003@scottdial.com> References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> <4F72B258.10306@scottdial.com> <4F72DBDE.6040003@scottdial.com> Message-ID: >>> In that case, I don't think time.try_monotonic() is really needed >>> because we can emulate "time.monotonic()" in software if the platform is >>> deficient. >> >> As I wrote, I don't think that Python should workaround OS bugs. If >> the OS monotonic clock is not monotonic, the OS should be fixed. > > I sympathize with this, but if the idea is that the Python stdlib should > use time.monotonic() for scheduling, then it needs to always be > available. Otherwise, we are not going to use it ourselves, and what > sort of example is that to set? There is time.hires() if you need a monotonic clock with a fallback to the system clock. Victor From steve at pearwood.info Wed Mar 28 14:05:59 2012 From: steve at pearwood.info (Steven D'Aprano) Date: Wed, 28 Mar 2012 23:05:59 +1100 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> Message-ID: <4F72FEA7.8090903@pearwood.info> Georg Brandl wrote: > On 28.03.2012 06:45, Nick Coghlan wrote: >> On Wed, Mar 28, 2012 at 10:36 AM, Victor Stinner >> wrote: >>> If QueryPerformanceCounter() is monotonic, the API can be simplified to: >>> >>> * time.time() = system clock >>> * time.monotonic() = monotonic clock >>> * time.hires() = monotonic clock or fallback to system clock >>> >>> time.hires() definition is exactly what I was trying to implement with >>> "time.steady(strict=True)" / "time.try_monotonic()". 
>> Please don't call the fallback version "hires" as it suggests it may
>> be higher resolution than time.time() and that's completely the wrong
>> idea.
>
> It's also a completely ugly name, since it's quite hard to figure out
> what it is supposed to stand for in the first place.

Precisely. I always read "hires" as the verb hires (as in "he hires a car to go on holiday") rather than HIgh RESolution. -1 on hires, it's a horrible name. And misleading as well, because on Linux, it isn't any more high res than time.time().

+1 on Nick's suggestion of try_monotonic. It is clear and obvious and doesn't mislead.

I don't have an opinion as to what the implementation of try_monotonic should be. Whether it should fall back to time.time, time.clock, or something else, I don't know. But it is a clear and obvious solution for the use-case of "I prefer the monotonic clock, if it is available, otherwise I'll take my chances with a best-effort clock."

-- Steven From rdmurray at bitdance.com Wed Mar 28 14:56:16 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Wed, 28 Mar 2012 08:56:16 -0400 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: <4F72FEA7.8090903@pearwood.info> References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> <4F72FEA7.8090903@pearwood.info> Message-ID: <20120328125617.A99DD2500E9@webabinitio.net>

On Wed, 28 Mar 2012 23:05:59 +1100, Steven D'Aprano wrote:
> +1 on Nick's suggestion of try_monotonic. It is clear and obvious and doesn't
> mislead.

How about "monotonicest".

(No, this is not really a serious suggestion.)

However, time.steadiest might actually work.
--David From larry at hastings.org Wed Mar 28 15:01:23 2012 From: larry at hastings.org (Larry Hastings) Date: Wed, 28 Mar 2012 14:01:23 +0100 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: <20120328125617.A99DD2500E9@webabinitio.net> References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> <4F72FEA7.8090903@pearwood.info> <20120328125617.A99DD2500E9@webabinitio.net> Message-ID: <4F730BA3.5080901@hastings.org> On 03/28/2012 01:56 PM, R. David Murray wrote: > On Wed, 28 Mar 2012 23:05:59 +1100, Steven D'Aprano wrote: >> +1 on Nick's suggestion of try_monotonic. It is clear and obvious and doesn't >> mislead. > How about "monotonicest". > > (No, this is not really a serious suggestion.) "monotonish". Thus honoring the Principle Of Least Monotonishment, //arry/ From anacrolix at gmail.com Wed Mar 28 15:48:58 2012 From: anacrolix at gmail.com (Matt Joiner) Date: Wed, 28 Mar 2012 21:48:58 +0800 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: <4F730BA3.5080901@hastings.org> References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> <4F72FEA7.8090903@pearwood.info> <20120328125617.A99DD2500E9@webabinitio.net> <4F730BA3.5080901@hastings.org> Message-ID: time.monotonic(): The uneventful and colorless function. On Mar 28, 2012 9:30 PM, "Larry Hastings" wrote: > On 03/28/2012 01:56 PM, R. David Murray wrote: > >> On Wed, 28 Mar 2012 23:05:59 +1100, Steven D'Aprano >> wrote: >> >>> +1 on Nick's suggestion of try_monotonic. It is clear and obvious and >>> doesn't >>> mislead. >>> >> How about "monotonicest". >> >> (No, this is not really a serious suggestion.) >> > > "monotonish". 
> > Thus honoring the Principle Of Least Monotonishment, > > > //arry/ > ______________________________**_________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/**mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/**mailman/options/python-dev/** > anacrolix%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at jonathanfrench.net Wed Mar 28 16:03:11 2012 From: me at jonathanfrench.net (Jonathan French) Date: Wed, 28 Mar 2012 15:03:11 +0100 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> <4F72FEA7.8090903@pearwood.info> <20120328125617.A99DD2500E9@webabinitio.net> <4F730BA3.5080901@hastings.org> Message-ID: No, that would be time.monotonous(). This is time.monotonic(), the function that can only play a single note at a time. Uh, I mean time.monophonic(). Hmm, this is harder than it looks. On 28 March 2012 14:48, Matt Joiner wrote: > time.monotonic(): The uneventful and colorless function. > On Mar 28, 2012 9:30 PM, "Larry Hastings" wrote: > >> On 03/28/2012 01:56 PM, R. David Murray wrote: >> >>> On Wed, 28 Mar 2012 23:05:59 +1100, Steven D'Aprano >>> wrote: >>> >>>> +1 on Nick's suggestion of try_monotonic. It is clear and obvious and >>>> doesn't >>>> mislead. >>>> >>> How about "monotonicest". >>> >>> (No, this is not really a serious suggestion.) >>> >> >> "monotonish". 
>> >> Thus honoring the Principle Of Least Monotonishment,
>> >>
>> >> //arry/
>> _______________________________________________
>> Python-Dev mailing list
>> Python-Dev at python.org
>> http://mail.python.org/mailman/listinfo/python-dev
>> Unsubscribe: http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com
>>
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/me%40jonathanfrench.net
>
> -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Wed Mar 28 16:14:16 2012 From: guido at python.org (Guido van Rossum) Date: Wed, 28 Mar 2012 07:14:16 -0700 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> <4F72B258.10306@scottdial.com> <4F72DBDE.6040003@scottdial.com> Message-ID:

Victor, I have completely lost track of the details of this discussion. Could you (with help from others who contributed) try to compile a table showing, for each platform (Windows/Mac/Linux/BSD) which clocks (or variations) we are considering, and for each of those:

- a link for the reference documentation
- what their typical accuracy is (barring jumps)
- what they do when the "civil" time is made to jump (forward or back) by the user
- how they are affected by small tweaks to the civil time by NTP
- what they do if the system is suspended and resumed
- whether they can be shared between processes running on the same machine
- whether they may fail or be unsupported under some circumstances

I have a feeling that if I saw such a table it would be much easier to decide. I assume much of this has already been said at one point in this thread, but it's impossible to have an overview at the moment.
If someone has more questions they'd like to see answered please add to the list.

-- --Guido van Rossum (python.org/~guido) From ncoghlan at gmail.com Wed Mar 28 16:17:32 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 29 Mar 2012 00:17:32 +1000 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> <4F72B258.10306@scottdial.com> <4F72DBDE.6040003@scottdial.com> Message-ID:

On Wed, Mar 28, 2012 at 8:56 PM, Victor Stinner wrote:
>>>> In that case, I don't think time.try_monotonic() is really needed
>>>> because we can emulate "time.monotonic()" in software if the platform is
>>>> deficient.
>>>
>>> As I wrote, I don't think that Python should workaround OS bugs. If
>>> the OS monotonic clock is not monotonic, the OS should be fixed.
>>
>> I sympathize with this, but if the idea is that the Python stdlib should
>> use time.monotonic() for scheduling, then it needs to always be
>> available. Otherwise, we are not going to use it ourselves, and what
>> sort of example is that to set?
>
> There is time.hires() if you need a monotonic clock with a fallback to
> the system clock.

Completely unintuitive and unnecessary. With the GIL taking care of synchronisation issues, we can easily coerce time.time() into being a monotonic clock by the simple expedient of saving the last returned value:

  def _make_monotonic():
      try:
          # Use underlying system monotonic clock if we can
          return _monotonic
      except NameError:
          _tick = time()
          def monotonic():
              nonlocal _tick
              _new_tick = time()
              if _new_tick > _tick:
                  _tick = _new_tick
              return _tick
          return monotonic

  monotonic = _make_monotonic()

Monotonicity of the result is thus ensured, even when using time.time() as a fallback.

If using the system monotonic clock to get greater precision is acceptable for an application, then forcing monotonicity shouldn't be a problem either.

Regards, Nick.

-- Nick Coghlan | ncoghlan at gmail.com |
Brisbane, Australia From yselivanov.ml at gmail.com Wed Mar 28 16:27:26 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Wed, 28 Mar 2012 10:27:26 -0400 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> <4F72B258.10306@scottdial.com> <4F72DBDE.6040003@scottdial.com> Message-ID: <467A106B-5887-4820-94A1-BF19D46F2F0F@gmail.com> On 2012-03-28, at 10:17 AM, Nick Coghlan wrote: > def _make_monotic: > try: > # Use underlying system monotonic clock if we can > return _monotonic > except NameError: > _tick = time() > def monotic(): > _new_tick = time() > if _new_tick > _tick: > _tick = _new_tick > return _tick > > monotonic = _make_monotonic() > > Monotonicity of the result is thus ensured, even when using > time.time() as a fallback. What if system time jumps 1 year back? We'll have the same monotonic time returned for this whole year? I don't think we should even try to emulate any of OS-level functionality. - Yury From guido at python.org Wed Mar 28 16:29:14 2012 From: guido at python.org (Guido van Rossum) Date: Wed, 28 Mar 2012 07:29:14 -0700 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> <4F72B258.10306@scottdial.com> <4F72DBDE.6040003@scottdial.com> Message-ID: On Wed, Mar 28, 2012 at 7:17 AM, Nick Coghlan wrote: > On Wed, Mar 28, 2012 at 8:56 PM, Victor Stinner > wrote: >>>>> In that case, I don't think time.try_monotonic() is really needed >>>>> because we can emulate "time.monotonic()" in software if the platform is >>>>> deficient. >>>> >>>> As I wrote, I don't think that Python should workaround OS bugs. If >>>> the OS monotonic clock is not monotonic, the OS should be fixed. 
>>> I sympathize with this, but if the idea is that the Python stdlib should
>>> use time.monotonic() for scheduling, then it needs to always be
>>> available. Otherwise, we are not going to use it ourselves, and what
>>> sort of example is that to set?
>>
>> There is time.hires() if you need a monotonic clock with a fallback to
>> the system clock.
>
> Completely unintuitive and unnecessary. With the GIL taking care of
> synchronisation issues, we can easily coerce time.time() into being a
> monotonic clock by the simple expedient of saving the last returned
> value:
>
>   def _make_monotic:
>       try:
>           # Use underlying system monotonic clock if we can
>           return _monotonic
>       except NameError:
>           _tick = time()
>           def monotic():
>               _new_tick = time()
>               if _new_tick > _tick:
>                   _tick = _new_tick
>               return _tick
>
>   monotonic = _make_monotonic()
>
> Monotonicity of the result is thus ensured, even when using
> time.time() as a fallback.
>
> If using the system monotonic clock to get greater precision is
> acceptable for an application, then forcing monotonicity shouldn't be
> a problem either.

That's a pretty obvious trick. But why don't the kernels do this if monotonicity is so important? I'm sure there are also downsides, e.g. if the clock is accidentally set forward by an hour and then back again, you wouldn't have a useful clock for an hour. And the cache is not shared between processes so different processes wouldn't see the same clock value (I presume that most of these clocks have state in the kernel that isn't bound to any particular process -- AFAIK only clock() does that, and only on Unixy systems).
-- --Guido van Rossum (python.org/~guido) From ncoghlan at gmail.com Wed Mar 28 16:36:31 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 29 Mar 2012 00:36:31 +1000 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> Message-ID: On Wed, Mar 28, 2012 at 6:40 PM, Victor Stinner wrote: >> If we're simplifying the idea to only promising a monotonic >> clock (i.e. will never go backwards within a given process, but may >> produce the same value for an indefinite period, and may jump forwards >> by arbitrarily large amounts), > > I don't know any monotonic clock jumping "forwards by arbitrarily > large amounts". Linux can change CLOCK_MONOTONIC speed, but NTP > doesn't "jump". If I understood Glyph's explanation correctly, then if your application is running in a VM and the VM is getting its clock data from the underlying hypervisor, then suspending and resuming the VM may result in forward jumping of the monotonic clocks in the guest OS. I believe suspending and hibernating may cause similar problems for even a non-virtualised OS that is getting its time data from a real-time clock chip that keeps running even when the main CPU goes to sleep. (If I *misunderstood* Glyph's explanation, then he may have only been talking about the latter case) Monotonicity is fairly easy to guarantee - you just remember the last value you returned and ensure you never return a lower value than that for the lifetime of the process. The only complication is thread synchronisation, and the GIL (or a dedicated lock for Jython/IronPython) can deal with that. Steadiness, on the other hand, requires a real world time reference and is thus really the domain of specialised hardware like atomic clocks and GPS units rather than software that can be suspended and resumed later without changing its internal state. 
There's a reason comms station operators pay substantial chunks of money for time & frequency reference devices [1].

This is why I now think we only need one new clock function: time.monotonic(). It will be the system monotonic clock if one is available, otherwise it will be our own equivalent wrapper around time.time() that just caches the last value returned to ensure the result never goes backwards.

With time.monotonic() guaranteed to always be available, there's no need for a separate function that falls back to an unconditioned time.time() result.

Regards, Nick.

[1] For example: http://www.symmetricom.com/products/gps-solutions/gps-time-frequency-receivers/XLi/

-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From guido at python.org Wed Mar 28 16:42:13 2012 From: guido at python.org (Guido van Rossum) Date: Wed, 28 Mar 2012 07:42:13 -0700 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> Message-ID:

On Wed, Mar 28, 2012 at 7:36 AM, Nick Coghlan wrote:
> On Wed, Mar 28, 2012 at 6:40 PM, Victor Stinner
> wrote:
>>> If we're simplifying the idea to only promising a monotonic
>>> clock (i.e. will never go backwards within a given process, but may
>>> produce the same value for an indefinite period, and may jump forwards
>>> by arbitrarily large amounts),
>>
>> I don't know any monotonic clock jumping "forwards by arbitrarily
>> large amounts". Linux can change CLOCK_MONOTONIC speed, but NTP
>> doesn't "jump".
>
> If I understood Glyph's explanation correctly, then if your
> application is running in a VM and the VM is getting its clock data
> from the underlying hypervisor, then suspending and resuming the VM
> may result in forward jumping of the monotonic clocks in the guest OS.
> I believe suspending and hibernating may cause similar problems for > even a non-virtualised OS that is getting its time data from a > real-time clock chip that keeps running even when the main CPU goes to > sleep. (If I *misunderstood* Glyph's explanation, then he may have > only been talking about the latter case) > > Monotonicity is fairly easy to guarantee - you just remember the last > value you returned and ensure you never return a lower value than that > for the lifetime of the process. The only complication is thread > synchronisation, and the GIL (or a dedicated lock for > Jython/IronPython) can deal with that. Steadiness, on the other hand, > requires a real world time reference and is thus really the domain of > specialised hardware like atomic clocks and GPS units rather than > software that can be suspended and resumed later without changing its > internal state. There's a reason comms station operators pay > substantial chunks of money for time & frequency reference devices > [1]. > > This is why I now think we only need one new clock function: > time.monotonic(). It will be the system monotonic clock if one is > available, otherwise it will be our own equivalent wrapper around > time.time() that just caches the last value returned to ensure the > result never goes backwards. As I said, I think the caching idea is bad. We may have to settle for semantics that are less than perfect -- presumably if you are doing benchmarking you just have to throw away a bad result that happened to be affected by a clock anomaly, and if you are using timeouts, retries are already part of life. > With time.monotonic() guaranteed to always be available, there's no > need for a separate function that falls back to an unconditioned > time.time() result. I would love to have only one new clock function in 3.3. > Regards, > Nick. 
> [1] For example:
> http://www.symmetricom.com/products/gps-solutions/gps-time-frequency-receivers/XLi/

-- --Guido van Rossum (python.org/~guido) From guido at python.org Wed Mar 28 16:43:27 2012 From: guido at python.org (Guido van Rossum) Date: Wed, 28 Mar 2012 07:43:27 -0700 Subject: [Python-Dev] Bug in generator if the generator in created in a C thread In-Reply-To: References: Message-ID:

Interesting bug. :-( It seems bugs.python.org is back up, so can you file it there too?

On Wed, Mar 28, 2012 at 3:45 AM, Victor Stinner wrote:
> Hi,
>
> bugs.python.org is down so I'm reporting the bug here :-)
>
> We have a crash in our product when tracing is enabled by
> sys.settrace() and threading.settrace(). If a Python generator is
> created in a C thread, calling the generator later in another thread
> may crash if Python tracing is enabled.
>
> - the C thread calls PyGILState_Ensure() which creates a temporary
> Python thread state
> - a generator is created, the generator has a reference to a Python
> frame which keeps a reference to the temporary Python thread state
> - the C thread calls PyGILState_Releases() which destroys the
> temporary Python thread state
> - when the generator is called later in another thread, call_trace()
> reads the Python thread state from the generator frame, which is the
> destroyed frame => it does crash on a pointer dereference if the
> memory was reallocated (by malloc()) and the data were erased
>
> To reproduce the crash, unpack the attached
> generator_frame_bug.tar.gz, compile the C module using "python
> setup.py build" and then run "PYTHONPATH=$(ls -d build/lib*/) python
> test.py" (or just "python test.py if you installed the _test module).
> You may need to use Valgrind to see the error, or call memset(tstate,
> 0xFF, sizeof(*tstate)) before free(tstate) in tstate_delete_common().
>
> Calling the generator should update its reference to the Python state
> thread in its frame.
> The generator may also clears frame->f_tstate (to
> detect bugs earlier), as it does for frame->f_back (to avoid a
> reference cycle). Attached patch implements this fix for Python 3.3.
>
> Victor
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org
>

-- --Guido van Rossum (python.org/~guido) From ncoghlan at gmail.com Wed Mar 28 16:45:04 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 29 Mar 2012 00:45:04 +1000 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: <467A106B-5887-4820-94A1-BF19D46F2F0F@gmail.com> References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> <4F72B258.10306@scottdial.com> <4F72DBDE.6040003@scottdial.com> <467A106B-5887-4820-94A1-BF19D46F2F0F@gmail.com> Message-ID:

On Thu, Mar 29, 2012 at 12:27 AM, Yury Selivanov wrote:
> What if system time jumps 1 year back? We'll have the same
> monotonic time returned for this whole year?
>
> I don't think we should even try to emulate any of OS-level
> functionality.

You have to keep in mind the alternative here: falling back to an *unconditioned* time.time() value (which is the status quo, and necessary to preserve backwards compatibility). That will break just as badly in that scenario and is precisely the reason that the OS level monotonic functionality is desirable in the first place.

I'd be quite happy with a solution that made the OS level monotonic clock part of the public API, with the caveat that it may not be available.
Then the necessary trio of functions would be:

time.time(): existing system clock, always available
time.os_monotonic(): OS level monotonic clock, not always available
time.monotonic(): always available, same as os_monotonic if it exists, otherwise uses a time() based emulation that may not be consistent across processes and may "mark time" for extended periods if the underlying OS clock is forced to jump back a long way.

I think that naming scheme is more elegant than using monotonic() for the OS level monotonicity and try_monotonic() for the fallback version, but I'd be OK with the latter approach, too.

Regards, Nick.

-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From yselivanov.ml at gmail.com Wed Mar 28 16:56:34 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Wed, 28 Mar 2012 10:56:34 -0400 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> Message-ID:

On 2012-03-28, at 10:36 AM, Nick Coghlan wrote:
> Monotonicity is fairly easy to guarantee - you just remember the last
> value you returned and ensure you never return a lower value than that
> for the lifetime of the process.

As I said in my previous mail - I don't think we should ever do that. Time may jump back and forth, and with your approach that would make monotonic() completely unusable. If time jumps back by N minutes, or years, that leads to completely broken timeout expectations for the following N minutes or years, respectively (and that's just the timeouts case; I'm sure there are much more critical time-related use cases.)

If monotonic() uses such a hack, you add nothing usable to the stdlib. Every serious framework or library will have to re-implement it using only OS-level functions, and *FAIL* if the OS doesn't support monotonic time. Fail, because such a framework can't guarantee that it will work correctly.
So I think the time module should have only one new function: monotonic(), and this function should only be available if the OS provides the underlying functionality. No need for steady(), try_monotonic() and other hacks. Each module can decide if its dependency on monotonic is critical or not, and if it is not, you can always have:

try:
    from time import monotonic as _time
except ImportError:
    from time import time as _time

That's how lots of code is written these days, like using 'epoll' if available and falling back to 'select' if not. We don't try to abstract away the differences between them in the standard library either. So I see no point in adding some loose abstractions to the stdlib now.

- Yury From yselivanov.ml at gmail.com Wed Mar 28 17:02:30 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Wed, 28 Mar 2012 11:02:30 -0400 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> <4F72B258.10306@scottdial.com> <4F72DBDE.6040003@scottdial.com> <467A106B-5887-4820-94A1-BF19D46F2F0F@gmail.com> Message-ID: <53304706-8864-4CFE-B4E5-33F7918FE46D@gmail.com>

On 2012-03-28, at 10:45 AM, Nick Coghlan wrote:
> On Thu, Mar 29, 2012 at 12:27 AM, Yury Selivanov
> wrote:
>> What if system time jumps 1 year back? We'll have the same
>> monotonic time returned for this whole year?
>>
>> I don't think we should even try to emulate any of OS-level
>> functionality.
>
> You have to keep in mind the alternative here: falling back to an
> *unconditioned* time.time() value (which is the status quo, and
> necessary to preserve backwards compatibility). That will break just
> as badly in that scenario and is precisely the reason that the OS
> level monotonic functionality is desirable in the first place.
Well, my argument is that you either have code that depends on monotonic time and can't work without it, or code that can work with any time (and only precision matters). Maybe I'm wrong.

> I'd be quite happy with a solution that made the OS level monotonic
> clock part of the public API, with the caveat that it may not be
> available. Then the necessary trio of functions would be:
>
> time.time(): existing system clock, always available
> time.os_monotonic(): OS level monotonic clock, not always available
> time.monotonic(): always available, same as os_monotonic if it exists,
> otherwise uses a time() based emulation that may not be consistent
> across processes and may "mark time" for extended periods if the
> underlying OS clock is forced to jump back a long way.

I still don't like this 'emulation' idea. Smells bad for the standard lib. Big -1 on this approach.

- Yury From ncoghlan at gmail.com Wed Mar 28 17:08:50 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 29 Mar 2012 01:08:50 +1000 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> Message-ID:

On Thu, Mar 29, 2012 at 12:42 AM, Guido van Rossum wrote:
> As I said, I think the caching idea is bad. We may have to settle for
> semantics that are less than perfect -- presumably if you are doing
> benchmarking you just have to throw away a bad result that happened to
> be affected by a clock anomaly, and if you are using timeouts, retries
> are already part of life.

I agree caching doesn't solve the problems that are solved by an OS level monotonic clock, but falling back to an unmodified time.time() result instead doesn't solve those problems either. Falling back to time.time() just gives you the status quo: time may jump forwards or backwards by an arbitrary amount between any two calls.
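The failure-mode difference being debated here can be shown with a simulated clock. This is an illustrative sketch, not Victor's or Nick's actual patch; make_cached_monotonic() is a hypothetical helper name:

```python
# Sketch of the last-value caching trick under discussion: wrap an
# arbitrary time source so its readings never decrease. When the
# source jumps backwards, the wrapper "marks time" (stands still)
# instead of going backwards.
def make_cached_monotonic(source):
    last = float("-inf")
    def monotonic():
        nonlocal last
        now = source()
        if now > last:
            last = now
        return last
    return monotonic

# Simulated system clock that is set back 100 seconds after t=12.
samples = iter([10.0, 11.0, 12.0, -88.0, -87.5])
clock = make_cached_monotonic(lambda: next(samples))
print([clock() for _ in range(5)])  # -> [10.0, 11.0, 12.0, 12.0, 12.0]
```

An interval measured across the jump comes out as zero rather than negative, which is exactly the benchmarking behaviour discussed below. A real implementation would need the GIL (or a lock) to make the update of the cached value thread-safe.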
Cached monotonicity just changes the anomalous modes to be time jumping forwards, or time standing still for an extended period of time. The only thing the caching provides is that it becomes a reasonable fallback for a function called time.monotonic() - it *is* a monotonic clock that meets the formal contract of the function, it's just nowhere near as good or effective as one the OS can provide.

Forward jumping anomalies aren't as harmful, are very hard to detect in the first place and behave the same regardless of the presence of caching, so the interesting case to look at is the difference in failure modes when the system clock jumps backwards.

For benchmarking, a caching clock will produce a zero result instead of a negative result. Zeros aren't quite as obviously broken as negative numbers when benchmarking, but they're still sufficiently suspicious that most benchmarking activities will flag them as anomalous. If the jump back was sufficiently small that the subsequent call still produces a higher value than the original call, then behaviour reverts to being identical.

For timeouts, setting the clock back means your operation will take longer to time out than you expected. This problem will occur regardless of whether you were using cached monotonicity (such that time stands still) or the system clock (such that time actually goes backwards). In either case, your deadline will never be reached until the backwards jump has been cancelled out by the subsequent passage of time.

I want the standard library to be able to replace its time.time() calls with time.monotonic(). The only way we can do that without breaking cross-platform compatibility is if time.monotonic() is guaranteed to exist, even when the platform only provides time.time(). A dumb caching fallback implementation based on time.time() is the easiest way to achieve that without making a complete mockery of the "monotonic()" name.
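The "dumb caching fallback" described above is simple enough to sketch in a few lines. This is only an illustration of the idea, not the actual implementation under discussion:

```python
import time

_last = 0.0

def monotonic():
    """Return time.time(), clamped so consecutive calls never decrease.

    Sketch of the caching fallback: a real implementation would prefer
    an OS-level monotonic clock and only use this as a last resort.
    """
    global _last
    now = time.time()
    if now > _last:
        _last = now
    return _last
```

If the system clock jumps backwards, this returns the last cached value until real time catches up again, i.e. it "marks time" rather than going backwards.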
There is then a *different* use case, which is 3.3+ only code which wants to fail noisily when there's no OS level monotonic support - the application developer really does want to fail *immediately* if there's no OS level monotonic clock available, instead of crossing your fingers and hoping you don't hit a clock adjustment glitch (crossing your fingers has, I'll point out, been the *only* option for all previous versions of Python, so it clearly can't be *that* scary a prospect).

So, rather than making time.monotonic() something that the *standard library can't use*, I'd prefer to address that second use case by exposing the OS level monotonic clock as time.os_monotonic() only when it's available. That way, the natural transition for old time.time() based code is to time.monotonic() (with no cross-platform support implications), but time.os_monotonic() also becomes available for the stricter use cases.

Regards, Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From ncoghlan at gmail.com Wed Mar 28 17:35:46 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 29 Mar 2012 01:35:46 +1000
Subject: [Python-Dev] PEP 418: Add monotonic clock
In-Reply-To: <53304706-8864-4CFE-B4E5-33F7918FE46D@gmail.com>
References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> <4F72B258.10306@scottdial.com> <4F72DBDE.6040003@scottdial.com> <467A106B-5887-4820-94A1-BF19D46F2F0F@gmail.com> <53304706-8864-4CFE-B4E5-33F7918FE46D@gmail.com>
Message-ID:

On Thu, Mar 29, 2012 at 1:02 AM, Yury Selivanov wrote:
> On 2012-03-28, at 10:45 AM, Nick Coghlan wrote:
>> On Thu, Mar 29, 2012 at 12:27 AM, Yury Selivanov
>> wrote:
>>> What if system time jumps 1 year back? We'll have the same
>>> monotonic time returned for this whole year?
>>>
>>> I don't think we should even try to emulate any of OS-level
>>> functionality.
>>
>> You have to keep in mind the alternative here: falling back to an
>> *unconditioned* time.time() value (which is the status quo, and
>> necessary to preserve backwards compatibility). That will break just
>> as badly in that scenario and is precisely the reason that the OS
>> level monotonic functionality is desirable in the first place.
>
> Well, my argument is that you either have some code that depends
> on monotonic time and can't work without it, or you have code that
> can work with any time (and only precision matters). Maybe I'm wrong.

You're wrong. The primary use case for the new time.monotonic() function is to replace *existing* uses of time.time() in the standard library (mostly related to timeouts) that are currently vulnerable to clock adjustment related bugs. This real, concrete use case has been lost in some of the abstract theoretical discussions that have been going on in this thread.

We can't lose sight of the fact that using a system clock that is vulnerable to clock adjustment bugs to handle timeouts and benchmarking in Python has worked just fine for 20+ years. Using a monotonic clock instead is *better*, but it's far from essential, since clock adjustments that are big enough and poorly timed enough to cause real problems are fortunately a very rare occurrence.

So, the primary use case is that we want to replace many of the time.time() calls in the standard library with time.monotonic() calls. To avoid backwards compatibility problems in the cross-platform support, that means time.monotonic() *must be available on every platform that currently provides time.time()*. This is why Victor's original proposal was that time.monotonic() simply fall back to time.time() if there was no OS level monotonic clock available. The intended use cases are using time.time() *right now* and have been doing so for years, so it is clearly an acceptable fallback for those cases.
People (rightly, in my opinion) objected to the idea of time.monotonic() failing to guarantee monotonicity, thus the proposal to enforce at least a basic level of monotonicity through caching of the last returned value.

I agree completely that this dumb caching solution doesn't solve any of the original problems with time.time() that make a time.monotonic() function desirable, but it isn't meant to. It's only meant to provide graceful degradation to something that is *no worse than the current behaviour when using time.time() in Python 3.2* while still respecting the property of monotonicity for the new API. Yes, it's an ugly hack, but it is a necessary fallback to avoid accidental regressions in our cross-platform support.

For the major platforms (i.e. *nix, Mac OS X, Windows), there *will* be an OS level monotonic clock available, thus using time.monotonic() will have the desired effect of protecting from clocks being adjusted backwards. For other platforms, the behaviour (and vulnerabilities) will be essentially unchanged from the Python 3.2 approach (i.e. using time.time() with no monotonicity guarantees at all).

However, some 3.3+ applications may want to be stricter about their behaviour and either bail out completely or fall back to an unfiltered time.time() call if an OS-level monotonic clock is not available. For those, it makes sense to expose time.os_monotonic() directly (and only if it is available), thus allowing those developers to make up their own minds instead of accepting the cross-platform fallback in time.monotonic().

Yes, you can get the exact same effect with the "monotonic()" and "try_monotonic()" naming scheme, but why force the standard library (and anyone else wanting to upgrade from time.time() without harming cross-platform support) to use such an ugly name when the "os_monotonic" and "monotonic" naming scheme provides a much neater alternative?

Regards, Nick.

--
Nick Coghlan | ncoghlan at gmail.com |
Brisbane, Australia

From victor.stinner at gmail.com Wed Mar 28 17:37:09 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Wed, 28 Mar 2012 17:37:09 +0200
Subject: [Python-Dev] Bug in generator if the generator is created in a C thread
In-Reply-To:
References:
Message-ID:

2012/3/28 Guido van Rossum :
> Interesting bug. :-(
>
> It seems bugs.python.org is back up, so can you file it there too?

It took us weeks to track down the bug. Here is the issue: http://bugs.python.org/issue14432

Victor

From guido at python.org Wed Mar 28 17:47:31 2012
From: guido at python.org (Guido van Rossum)
Date: Wed, 28 Mar 2012 08:47:31 -0700
Subject: [Python-Dev] PEP 418: Add monotonic clock
In-Reply-To:
References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com>
Message-ID:

On Wed, Mar 28, 2012 at 8:08 AM, Nick Coghlan wrote:
> On Thu, Mar 29, 2012 at 12:42 AM, Guido van Rossum wrote:
>> As I said, I think the caching idea is bad. We may have to settle for
>> semantics that are less than perfect -- presumably if you are doing
>> benchmarking you just have to throw away a bad result that happened to
>> be affected by a clock anomaly, and if you are using timeouts, retries
>> are already part of life.
>
> I agree caching doesn't solve the problems that are solved by an OS
> level monotonic clock, but falling back to an unmodified time.time()
> result instead doesn't solve those problems either. Falling back to
> time.time() just gives you the status quo: time may jump forwards or
> backwards by an arbitrary amount between any two calls. Cached
> monotonicity just changes the anomalous modes to be time jumping
> forwards, or time standing still for an extended period of time.
> The
> only thing the caching provides is that it becomes a reasonable
> fallback for a function called time.monotonic() - it *is* a monotonic
> clock that meets the formal contract of the function, it's just
> nowhere near as good or effective as one the OS can provide.

TBH, I don't like this focus on monotonicity as the most important feature.

> Forward jumping anomalies aren't as harmful, are very hard to detect
> in the first place and behave the same regardless of the presence of
> caching, so the interesting case to look at is the difference in
> failure modes when the system clock jumps backwards.

Agreed.

> For benchmarking, a caching clock will produce a zero result instead
> of a negative result. Zeros aren't quite as obviously broken as
> negative numbers when benchmarking, but they're still sufficiently
> suspicious that most benchmarking activities will flag them as
> anomalous. If the jump back was sufficiently small that the subsequent
> call still produces a higher value than the original call, then
> behaviour reverts to being identical.

So for benchmarking we don't care about jumps, really, and the caching version is slightly less useful.

> For timeouts, setting the clock back means your operation will take
> longer to time out than you expected. This problem will occur
> regardless of whether you were using cached monotonicity (such that
> time stands still) or the system clock (such that time actually goes
> backwards). In either case, your deadline will never be reached until
> the backwards jump has been cancelled out by the subsequent passage of
> time.

Where in the stdlib do we actually calculate timeouts instead of using the timeouts built into the OS (e.g. select())? I think it would be nice if we could somehow use the *same* clock as the OS uses to implement timeouts.

> I want the standard library to be able to replace its time.time()
> calls with time.monotonic().

Where in the stdlib? (I'm aware of threading.py. Any other places?)
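The kind of stdlib code being discussed computes its own deadline from the clock and re-checks it on each loop iteration. A simplified sketch (not the actual threading.py code) of the pattern that is vulnerable to clock adjustments:

```python
import time

def wait_for(predicate, timeout, interval=0.001):
    """Poll predicate() until it returns True or timeout seconds pass.

    The deadline is computed from time.time(), so if the system clock
    is set backwards between calls, the loop waits longer than asked;
    this is the vulnerability a monotonic clock would remove.
    """
    deadline = time.time() + timeout
    while not predicate():
        remaining = deadline - time.time()
        if remaining <= 0:
            return False
        time.sleep(min(interval, remaining))
    return True
```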
> The only way we can do that without
> breaking cross-platform compatibility is if time.monotonic() is
> guaranteed to exist, even when the platform only provides time.time().
> A dumb caching fallback implementation based on time.time() is the
> easiest way to achieve that without making a complete mockery of the
> "monotonic()" name.

Yeah, so maybe it's a bad name. :-)

> There is then a *different* use case, which is 3.3+ only code which
> wants to fail noisily when there's no OS level monotonic support - the
> application developer really does want to fail *immediately* if
> there's no OS level monotonic clock available, instead of crossing
> your fingers and hoping you don't hit a clock adjustment glitch
> (crossing your fingers has, I'll point out, been the *only* option for
> all previous versions of Python, so it clearly can't be *that* scary a
> prospect).
>
> So, rather than making time.monotonic() something that the *standard
> library can't use*, I'd prefer to address that second use case by
> exposing the OS level monotonic clock as time.os_monotonic() only when
> it's available. That way, the natural transition for old time.time()
> based code is to time.monotonic() (with no cross-platform support
> implications), but time.os_monotonic() also becomes available for the
> stricter use cases.

I'd be happier if the fallback function didn't try to guarantee things the underlying clock can't guarantee. I.e. I like the idea of having a function that uses some accurate OS clock if one exists but falls back to time.time() if not; I don't like the idea of that new function trying to interpret the value of time.time() in any way. Applications that need the OS clock's guarantees can call it directly.

We could also offer something where you can introspect the properties of the clock (or clocks) so that an app can choose the best clock depending on its needs.
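Such clock introspection could look something like the following hypothetical sketch; the names ClockInfo, CLOCKS and best_clock are invented here for illustration and are not an existing API:

```python
import time
from collections import namedtuple

# Hypothetical description of a clock's properties.
ClockInfo = namedtuple(
    "ClockInfo", "name is_monotonic is_adjustable resolution func")

# A registry an application could consult; only the always-available
# system clock is registered in this sketch.
CLOCKS = [
    ClockInfo(name="time", is_monotonic=False, is_adjustable=True,
              resolution=1e-6, func=time.time),
]

def best_clock(need_monotonic=False):
    """Return info for the first registered clock meeting the needs."""
    for info in CLOCKS:
        if need_monotonic and not info.is_monotonic:
            continue
        return info
    raise RuntimeError("no clock satisfies the requested properties")
```

An application could then ask for a monotonic clock and fail loudly (or pick its own fallback) when none is registered, rather than silently accepting a weaker clock.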
To summarize my problem with the caching idea: take a simple timeout loop such as found in several places in threading.py.

    def wait_for(delta, eps):
        # Wait for delta seconds, sleeping eps seconds at a time
        deadline = now() + delta
        while now() < deadline:
            sleep(eps)

If the now() clock jumps backward after the initial call, we end up waiting too long -- until either the clock jumps forward again or until we've made up the difference. If the now() clock jumps forward after the initial call, we end up waiting less time, which is probably not such a big problem (though it might be).

But now consider a caching clock, and consider that the system clock made a jump backwards *before* this function is called. The cache prevents us from seeing it, so the initial call to now() returns the highest clock value seen so far. And until the system clock has caught up with that, now() will return the same value over and over -- so WE STILL WAIT TOO LONG.

My conclusion: you can't win this game by forcing the clock to return a monotonic value. A better approach might be to compute how many sleep(eps) calls we're expected to make, and to limit the loop to that -- although sleep() doesn't make any guarantees either about sleeping too short or too long. Basically, if you do sleep(1) and find that your clock didn't move (enough), you can't tell the difference between a short sleep and a clock that jumped back. And if your clock moved too much, you still don't know if the problem was with sleep() or with your clock.

--
--Guido van Rossum (python.org/~guido)

From anacrolix at gmail.com Wed Mar 28 18:11:49 2012
From: anacrolix at gmail.com (Matt Joiner)
Date: Thu, 29 Mar 2012 00:11:49 +0800
Subject: [Python-Dev] PEP 418: Add monotonic clock
In-Reply-To:
References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com>
Message-ID:

time.timeout_clock?
Everyone knows what that will be for and we won't have to make silly theoretical claims about its properties and expected uses.

If no one else looks before I next get to a PC I'll dig up the clock/timing source used for select and friends, and find any corresponding syscall that retrieves it for Linux.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ncoghlan at gmail.com Wed Mar 28 18:14:11 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 29 Mar 2012 02:14:11 +1000
Subject: [Python-Dev] PEP 418: Add monotonic clock
In-Reply-To:
References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com>
Message-ID:

On Thu, Mar 29, 2012 at 1:47 AM, Guido van Rossum wrote:
> Where in the stdlib? (I'm aware of threading.py. Any other places?)

Victor had at least one other example. multiprocessing, maybe? I believe the test suite may still have a few instances as well.

> But now consider a caching clock, and consider that the system clock
> made a jump backwards *before* this function is called. The cache
> prevents us from seeing it, so the initial call to now() returns the
> highest clock value seen so far. And until the system clock has caught
> up with that, now() will return the same value over and over -- so WE
> STILL WAIT TOO LONG.

Ouch. OK, I'm convinced the caching fallback is worse than just falling back to time.time() directly, which means the naming problem needs to be handled another way.

> My conclusion: you can't win this game by forcing the clock to return
> a monotonic value. A better approach might be to compute how many
> sleep(eps) calls we're expected to make, and to limit the loop to that
> -- although sleep() doesn't make any guarantees either about sleeping
> too short or too long. Basically, if you do sleep(1) and find that
> your clock didn't move (enough), you can't tell the difference between
> a short sleep and a clock that jumped back.
> And if your clock moved too
> much, you still don't know if the problem was with sleep() or with
> your clock.

With your point about the problem with the naive caching mechanism acknowledged, I think we can safely assign time.monotonic() as the name of the OS provided monotonic clock.

That means choosing a name for the version that falls back to time() if monotonic() isn't available, so it can be safely substituted for time.time() without having to worry about platform compatibility implications. I don't like Victor's current "hires" (because it doesn't hint at the fallback behaviour, may actually be the same res as time.time(), and reads like an unrelated English word). My own suggestion of "try_monotonic()" would get the job done, but is hardly going to win any API beauty contests.

Cheers, Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From steve at pearwood.info Wed Mar 28 18:18:56 2012
From: steve at pearwood.info (Steven D'Aprano)
Date: Thu, 29 Mar 2012 03:18:56 +1100
Subject: [Python-Dev] PEP 418: Add monotonic clock
In-Reply-To:
References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com>
Message-ID: <4F7339F0.6060501@pearwood.info>

Matt Joiner wrote:
> time.timeout_clock?
>
> Everyone knows what that will be for and we won't have to make silly
> theoretical claims about its properties and expected uses.

I don't.

--
Steven

From stephen at xemacs.org Wed Mar 28 18:39:40 2012
From: stephen at xemacs.org (Stephen J.
Turnbull)
Date: Thu, 29 Mar 2012 01:39:40 +0900
Subject: [Python-Dev] PEP 418: Add monotonic clock
In-Reply-To:
References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com>
Message-ID:

On Thu, Mar 29, 2012 at 1:14 AM, Nick Coghlan wrote:
> That means choosing a name for the version that falls back to time()
> if monotonic() isn't available so it can be safely substituted for
> time.time() without having to worry about platform compatibility
> implications.

What's wrong with "time.time()" again? As documented in http://docs.python.org/py3k/library/time.html it makes no guarantees, and specifically there is *no* guarantee that it will ever behave *badly*. Of course, we'll have to guarantee that, if a badly-behaved clock is available, users can get access to it, so call that time._time().

From steve at pearwood.info Wed Mar 28 19:00:13 2012
From: steve at pearwood.info (Steven D'Aprano)
Date: Thu, 29 Mar 2012 04:00:13 +1100
Subject: [Python-Dev] PEP 418: Add monotonic clock
In-Reply-To:
References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> <4F72B258.10306@scottdial.com> <4F72DBDE.6040003@scottdial.com>
Message-ID: <4F73439D.4070208@pearwood.info>

Nick Coghlan wrote:
> Completely unintuitive and unnecessary.
> With the GIL taking care of
> synchronisation issues, we can easily coerce time.time() into being a
> monotonic clock by the simple expedient of saving the last returned
> value:

[snip]

Here's a version that doesn't suffer from the flaw of returning a long stream of constant values when the system clock jumps backwards a significant amount:

    class MockTime:
        def __init__(self):
            self.ticks = [1, 2, 3, 4, 2, 3, 4, 5, 7, 3, 4, 6, 7, 8, 8, 9]
            self.i = -1
        def __call__(self):
            self.i += 1
            return self.ticks[self.i]

    time = MockTime()

    _prev = _prev_raw = 0
    def monotonic():
        global _prev, _prev_raw
        raw = time()
        delta = max(0, raw - _prev_raw)
        _prev_raw = raw
        _prev += delta
        return _prev

And in use:

    >>> [monotonic() for i in range(16)]
    [1, 2, 3, 4, 4, 5, 6, 7, 9, 9, 10, 12, 13, 14, 14, 15]

    Time: [1, 2, 3, 4, 2, 3, 4, 5, 7, 3, 4, 6, 7, 8, 8, 9]
    Nick: [1, 2, 3, 4, 4, 4, 4, 5, 7, 7, 7, 7, 7, 8, 8, 9]
    Mine: [1, 2, 3, 4, 4, 5, 6, 7, 9, 9, 10, 12, 13, 14, 14, 15]

Mine will get ahead of the system clock each time it jumps back, but it's a lot closer to the ideal of a *strictly* monotonically increasing clock. Assuming that the system clock will never jump backwards twice in a row, the double-caching version will never have more than two constant values in a row.

--
Steven

From regebro at gmail.com Wed Mar 28 19:01:48 2012
From: regebro at gmail.com (Lennart Regebro)
Date: Wed, 28 Mar 2012 19:01:48 +0200
Subject: [Python-Dev] PEP 418: Add monotonic clock
In-Reply-To:
References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> <4F72B258.10306@scottdial.com> <4F72DBDE.6040003@scottdial.com>
Message-ID:

On Wed, Mar 28, 2012 at 12:56, Victor Stinner wrote:
> There is time.hires() if you need a monotonic clock with a fallback to
> the system clock.

Does this primarily give a high resolution clock, or primarily a monotonic clock? That's not clear from either the name, or the PEP.
//Lennart

From neologix at free.fr Wed Mar 28 19:31:21 2012
From: neologix at free.fr (=?ISO-8859-1?Q?Charles=2DFran=E7ois_Natali?=)
Date: Wed, 28 Mar 2012 19:31:21 +0200
Subject: [Python-Dev] PEP 418: Add monotonic clock
In-Reply-To:
References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com>
Message-ID:

> What's wrong with "time.time()" again? As documented in
> http://docs.python.org/py3k/library/time.html it makes no guarantees,
> and specifically there is *no* guarantee that it will ever behave
> *badly*. Of course, we'll have to guarantee that, if a
> badly-behaved clock is available, users can get access to it, so call
> that time._time().

I'm not sure I understand your suggestion correctly, but replacing time.time() by time.monotonic() with fallback won't work, because time.monotonic() isn't wall-clock time: it can very well use an arbitrary reference point (most likely system start-up time).

As for the hires() function, since there's no guarantee whatsoever that it does provide a better resolution than time.time(), this would be really misleading IMHO.

From martin at v.loewis.de Wed Mar 28 19:47:09 2012
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Wed, 28 Mar 2012 19:47:09 +0200
Subject: [Python-Dev] Bug tracker outage
In-Reply-To: <4F722CEB.1040102@v.loewis.de>
References: <4F722CEB.1040102@v.loewis.de>
Message-ID: <4F734E9D.4050607@v.loewis.de>

Am 27.03.2012 23:11, schrieb "Martin v. Löwis":
> Upfront hosting (Izak Burger) is going to do a Debian upgrade of the bug
> tracker machine "soon" (likely tomorrow). This may cause some outage,
> since there is a lot of custom stuff on the machine which may
> break with newer (Python) versions. I'll notify here when the upgrade
> is complete.

The upgrade is complete. It was in fact the Postgres upgrade (to 8.4) which caused the longest blackout time.
Regards, Martin

From yselivanov.ml at gmail.com Wed Mar 28 19:55:40 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Wed, 28 Mar 2012 13:55:40 -0400
Subject: [Python-Dev] PEP 418: Add monotonic clock
In-Reply-To:
References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> <4F72B258.10306@scottdial.com> <4F72DBDE.6040003@scottdial.com> <467A106B-5887-4820-94A1-BF19D46F2F0F@gmail.com> <53304706-8864-4CFE-B4E5-33F7918FE46D@gmail.com>
Message-ID: <6905A43A-F68B-42F0-BE4B-E27CAB94A185@gmail.com>

On 2012-03-28, at 11:35 AM, Nick Coghlan wrote:
> So, the primary use case is that we want to replace many of the
> time.time() calls in the standard library with time.monotonic() calls.
> To avoid backwards compatibility problems in the cross-platform
> support, that means time.monotonic() *must be available on every
> platform that currently provides time.time()*.

OK. I got your point. And also I've just realized what I dislike about the way you want to implement the fallback. The main problem is that I treat the situation when time jumps backward as an exception, because, again, if you have timeouts you may get timeouts that are never triggered.

So let's make the "try_monotonic()" function (or whatever name will be chosen) this way (your original code edited):

    def _make_monotic():
        try:
            # Use underlying system monotonic clock if we can
            return _monotonic
        except NameError:
            _tick = time()
            def monotic():
                nonlocal _time
                _new_tick = time()
                if _new_tick <= _tick:
                    raise RuntimeError('time was adjusted backward')
                _tick = _new_tick
                return _new_tick
            return monotonic

    try_monotonic = _make_monotonic()

At least this approach tries to follow some of the Python zen.

- Yury

From jaraco at jaraco.com Wed Mar 28 20:22:50 2012
From: jaraco at jaraco.com (Jason R.
Coombs)
Date: Wed, 28 Mar 2012 18:22:50 +0000
Subject: [Python-Dev] Virtualenv not portable from Python 2.7.2 to 2.7.3 (os.urandom missing)
Message-ID: <7E79234E600438479EC119BD241B48D601E7CFB2@BY2PRD0610MB389.namprd06.prod.outlook.com>

I see this was reported as a debian bug. http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=665776

We encountered it as well. To reproduce, using virtualenv 1.7+ on Python 2.7.2 on Ubuntu, create a virtualenv. Moving that virtualenv to a host with Python 2.7.3RC2 yields:

    jaraco at vdm-dev:~$ /usr/bin/python2.7 -V
    Python 2.7.3rc2
    jaraco at vdm-dev:~$ env/bin/python -V
    Python 2.7.2
    jaraco at vdm-dev:~$ env/bin/python -c "import os; os.urandom()"
    Traceback (most recent call last):
      File "", line 1, in
    AttributeError: 'module' object has no attribute 'urandom'

This bug causes Django to not start properly (under some circumstances).

I reviewed the changes between v2.7.2 and 2.7 (tip) and it seems there was substantial refactoring of the os and posix modules for urandom. I still don't fully understand why the urandom method is missing (because the env includes the python 2.7.2 executable and stdlib).

I suspect this change is going to cause some significant backward compatibility issues. Is there a recommended workaround? Should I file a bug?

Regards, Jason

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 6662 bytes
Desc: not available
URL:

From carl at oddbird.net Wed Mar 28 20:48:21 2012
From: carl at oddbird.net (Carl Meyer)
Date: Wed, 28 Mar 2012 12:48:21 -0600
Subject: [Python-Dev] Virtualenv not portable from Python 2.7.2 to 2.7.3 (os.urandom missing)
In-Reply-To: <7E79234E600438479EC119BD241B48D601E7CFB2@BY2PRD0610MB389.namprd06.prod.outlook.com>
References: <7E79234E600438479EC119BD241B48D601E7CFB2@BY2PRD0610MB389.namprd06.prod.outlook.com>
Message-ID: <4F735CF5.3070700@oddbird.net>

Hi Jason,

On 03/28/2012 12:22 PM, Jason R. Coombs wrote:
> To reproduce, using virtualenv 1.7+ on Python 2.7.2 on Ubuntu, create a
> virtualenv. Move that virtualenv to a host with Python 2.7.3RC2 yields:
>
> jaraco at vdm-dev:~$ /usr/bin/python2.7 -V
>
> Python 2.7.3rc2
>
> jaraco at vdm-dev:~$ env/bin/python -V
>
> Python 2.7.2
>
> jaraco at vdm-dev:~$ env/bin/python -c "import os; os.urandom()"
>
> Traceback (most recent call last):
>
> File "", line 1, in
>
> AttributeError: 'module' object has no attribute 'urandom'
>
> This bug causes Django to not start properly (under some circumstances).
>
> I reviewed the changes between v2.7.2 and 2.7 (tip) and it seems there
> was substantial refactoring of the os and posix modules for urandom.
>
> I still don't fully understand why the urandom method is missing
> (because the env includes the python 2.7.2 executable and stdlib).

In Python 2.6.8/2.7.3, urandom is built into the executable. A virtualenv doesn't contain the whole stdlib, only the bits necessary to bootstrap site.py. So the problem arises from trying to use the 2.7.3 stdlib with a 2.7.2 interpreter.

> I suspect this change is going to cause some significant backward
> compatibility issues. Is there a recommended workaround? Should I file a
> bug?

The workaround is easy: just re-run virtualenv on that path with the new interpreter.
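A quick way to check whether a particular interpreter/stdlib pairing is affected is to probe for the attribute directly. This is a trivial sketch (run it with the virtualenv's own python binary):

```python
import os

def urandom_available():
    """True if this interpreter's os module exposes urandom().

    In the mismatch described above (a pre-2.7.3 python binary paired
    with a 2.7.3 stdlib), os.urandom is missing and this returns False.
    """
    return hasattr(os, "urandom")

if __name__ == "__main__":
    print("os.urandom available:", urandom_available())
```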
I was made aware of this issue a few weeks ago, and added a warning to the virtualenv "news" page: http://www.virtualenv.org/en/latest/news.html

I'm not sure where else to publicize it.

Carl

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 198 bytes
Desc: OpenPGP digital signature
URL:

From yselivanov.ml at gmail.com Wed Mar 28 21:02:27 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Wed, 28 Mar 2012 15:02:27 -0400
Subject: [Python-Dev] PEP 418: Add monotonic clock
In-Reply-To: <6905A43A-F68B-42F0-BE4B-E27CAB94A185@gmail.com>
References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> <4F72B258.10306@scottdial.com> <4F72DBDE.6040003@scottdial.com> <467A106B-5887-4820-94A1-BF19D46F2F0F@gmail.com> <53304706-8864-4CFE-B4E5-33F7918FE46D@gmail.com> <6905A43A-F68B-42F0-BE4B-E27CAB94A185@gmail.com>
Message-ID: <75752BDB-2F7C-40CB-9725-A17D53814839@gmail.com>

On 2012-03-28, at 1:55 PM, Yury Selivanov wrote:
> nonlocal _time

nonlocal _tick, obviously.

P.S. And we can make it raise an error only after some N calls to time() return a lesser time than is stored in the _tick variable.

- Yury

From andrew.svetlov at gmail.com Wed Mar 28 22:20:55 2012
From: andrew.svetlov at gmail.com (Andrew Svetlov)
Date: Wed, 28 Mar 2012 23:20:55 +0300
Subject: [Python-Dev] datetime module and pytz with dateutil
Message-ID:

I noticed that pytz and dateutil are not mentioned in the python docs for the datetime module. It's clear why these libs are not part of the Python standard library, but that's not clear from the docs alone.

From my perspective, at least pytz (as it is py3k compatible) should be mentioned as the library which contains timezone info, is supported carefully, and is recommended for use with the standard datetime module.

--
Thanks, Andrew Svetlov

From jaraco at jaraco.com Wed Mar 28 22:56:30 2012
From: jaraco at jaraco.com (Jason R.
Coombs)
Date: Wed, 28 Mar 2012 20:56:30 +0000
Subject: [Python-Dev] Virtualenv not portable from Python 2.7.2 to 2.7.3 (os.urandom missing)
In-Reply-To: <4F735CF5.3070700@oddbird.net>
References: <7E79234E600438479EC119BD241B48D601E7CFB2@BY2PRD0610MB389.namprd06.prod.outlook.com> <4F735CF5.3070700@oddbird.net>
Message-ID: <7E79234E600438479EC119BD241B48D601E7D159@BY2PRD0610MB389.namprd06.prod.outlook.com>

> -----Original Message-----
> From: python-dev-bounces+jaraco=jaraco.com at python.org [mailto:python-
> dev-bounces+jaraco=jaraco.com at python.org] On Behalf Of Carl Meyer
> Sent: Wednesday, 28 March, 2012 14:48
>
> The workaround is easy: just re-run virtualenv on that path with the new
> interpreter.
>

Thanks for the quick response Carl. I appreciate all the work that's been done.

I'm not sure the workaround is as simple as you say. Virtualenv doesn't replace the 'python' exe if it already exists (because it may already exist for a different minor version of Python (3.2, 2.6)). So the procedure is probably something like this:

For each version of Python the virtualenv wraps (ls env/bin/python?.?):

1) Run env/bin/python -V. If the result starts with "Python ", remove env/bin/python.
2) Determine if that Python version uses distribute or setuptools.
3) Run virtualenv --python=python env (with --distribute if appropriate)

I haven't yet tested this procedure, but I believe it's closer to what will need to be done. There are probably other factors. Unfortunately, reliably repairing the virtualenv is very difficult, so we will probably opt for re-deploying all of our virtualenvs.

Will the release notes include something about this change, since it will likely have broad backward incompatibility for all existing virtualenvs? I wouldn't expect someone in operations to read the virtualenv news to find out what things a Python upgrade will break. Indeed, this update will probably be pushed out as part of standard, unattended system updates.
I realize that the relationship between stdlib.os and posixmodule isn't a guaranteed interface, and the fact that it breaks with virtualenv is a weakness of virtualenv. Nevertheless, virtualenv has become the de facto technique for Python environments. Putting my sysops cap on, I might perceive this change as being unannounced (w.r.t. Python) and having significant impact on operations. I would think this impact deserves at least a note in the release notes. Regards, Jason -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 6662 bytes Desc: not available URL: From victor.stinner at gmail.com Wed Mar 28 23:05:18 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Wed, 28 Mar 2012 23:05:18 +0200 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> Message-ID: > I would love to have only one new clock function in 3.3. I already added time.clock_gettime() to 3.3 :-) Victor From guido at python.org Wed Mar 28 23:06:05 2012 From: guido at python.org (Guido van Rossum) Date: Wed, 28 Mar 2012 14:06:05 -0700 Subject: [Python-Dev] datetime module and pytz with dateutil In-Reply-To: References: Message-ID: +1 if pytz is py3k capable. -1 for dateutil. On Wednesday, March 28, 2012, Andrew Svetlov wrote: > I noticed that pytz and dateutil are not mentioned in the Python docs > for the datetime module. > It's clear why these libs are not part of the standard library -- but > that's no reason to leave them out of the docs.
> From my perspective at least pytz (as py3k compatible) should to be > mentioned as the library which contains timezone info, supported > carefully and recommended to use with datetime standard module, > > -- > Thanks, > Andrew Svetlov > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott+python-dev at scottdial.com Wed Mar 28 23:36:06 2012 From: scott+python-dev at scottdial.com (Scott Dial) Date: Wed, 28 Mar 2012 17:36:06 -0400 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> <4F72B258.10306@scottdial.com> <4F72DBDE.6040003@scottdial.com> Message-ID: <4F738446.9070200@scottdial.com> On 3/28/2012 10:29 AM, Guido van Rossum wrote: > On Wed, Mar 28, 2012 at 7:17 AM, Nick Coghlan wrote: >> Completely unintuitive and unnecessary. With the GIL taking care of >> synchronisation issues, we can easily coerce time.time() into being a >> monotonic clock by the simple expedient of saving the last returned >> value: > > That's a pretty obvious trick. But why don't the kernels do this if > monotonicity is so important? I'm sure there are also downsides, e.g. > if the clock is accidentally set forward by an hour and then back > again, you wouldn't have a useful clock for an hour. And the cache is > not shared between processes so different processes wouldn't see the > same clock value (I presume that most of these clocks have state in > the kernel that isn't bound to any particular process -- AFAIK only > clock() does that, and only on Unixy systems). > What makes you think that isn't already true? 
I don't know of any platform that CPython compiles for that *won't* have one of the aforementioned functions available to provide a *real* monotonic clock. Surely, any platform that doesn't provide one didn't recognize the need for it, or it would just provide a monotonic clock. That is to say, if you are a POSIX-compliant system, then there is no reason to break gettimeofday() and friends when you can just implement CLOCK_MONOTONIC proper (even if it's just a trick like Nick's). I think the PEP should enumerate the platforms that CPython supports that will not benefit from a real monotonic clock. I think the number of platforms will be such a minority that the emulation makes sense. Practicality beats purity, and all. -- Scott Dial scott at scottdial.com From andrew.svetlov at gmail.com Wed Mar 28 23:39:09 2012 From: andrew.svetlov at gmail.com (Andrew Svetlov) Date: Thu, 29 Mar 2012 00:39:09 +0300 Subject: [Python-Dev] datetime module and pytz with dateutil In-Reply-To: References: Message-ID: I'm personally +1 for pytz only -- dateutil is big enough and... Well, can we just point to pytz in our docs for the datetime module? On Thu, Mar 29, 2012 at 12:06 AM, Guido van Rossum wrote: > +1 if pytz is py3k capable. -1 for dateutil. > > > On Wednesday, March 28, 2012, Andrew Svetlov wrote: >> >> I noticed that pytz and dateutil are not mentioned in the Python docs >> for the datetime module. >> It's clear why these libs are not part of the standard library -- but >> that's no reason to leave them out of the docs.
>> From my perspective at least pytz (as py3k compatible) should to be >> mentioned as the library which contains timezone info, supported >> carefully and recommended to use with datetime standard module, >> >> -- >> Thanks, >> Andrew Svetlov >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> http://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> http://mail.python.org/mailman/options/python-dev/guido%40python.org > > > > -- > --Guido van Rossum (python.org/~guido) -- Thanks, Andrew Svetlov From victor.stinner at gmail.com Wed Mar 28 23:40:17 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Wed, 28 Mar 2012 23:40:17 +0200 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> <4F72B258.10306@scottdial.com> <4F72DBDE.6040003@scottdial.com> Message-ID: > Does this primarily give a high resolution clock, or primarily a > monotonic clock? That's not clear from either the name, or the PEP. I expect a better resolution from time.monotonic() than time.time(). I don't have exact numbers right now, but I began to document each OS clock in the PEP. Victor From victor.stinner at gmail.com Wed Mar 28 23:45:24 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Wed, 28 Mar 2012 23:45:24 +0200 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: <4F738446.9070200@scottdial.com> References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> <4F72B258.10306@scottdial.com> <4F72DBDE.6040003@scottdial.com> <4F738446.9070200@scottdial.com> Message-ID: > I think the PEP should enumerate what platforms that CPython supports > that will not benefit from a real monotonic clock. I think the number of > platforms will be such a minority that the emulation makes sense. > Practicality beats purity, and all. 
The PEP lists OS monotonic clocks by platform. Windows, Mac OS X, Solaris, and "UNIX" (CLOCK_MONOTONIC & friends) provide monotonic clocks. I don't know any platform without monotonic clock. Victor From rdmurray at bitdance.com Wed Mar 28 23:46:10 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Wed, 28 Mar 2012 17:46:10 -0400 Subject: [Python-Dev] Virtualenv not portable from Python 2.7.2 to 2.7.3 (os.urandom missing) In-Reply-To: <7E79234E600438479EC119BD241B48D601E7D159@BY2PRD0610MB389.namprd06.prod.outlook.com> References: <7E79234E600438479EC119BD241B48D601E7CFB2@BY2PRD0610MB389.namprd06.prod.outlook.com> <4F735CF5.3070700@oddbird.net> <7E79234E600438479EC119BD241B48D601E7D159@BY2PRD0610MB389.namprd06.prod.outlook.com> Message-ID: <20120328214610.E164D2500E9@webabinitio.net> On Wed, 28 Mar 2012 20:56:30 -0000, "Jason R. Coombs" wrote: > Will the release notes include something about this change, since it will > likely have broad backward incompatibility for all existing virtualenvs? I > wouldn't expect someone in operations to read the virtualenv news to find > out what things a Python upgrade will break. Indeed, this update will > probably be pushed out as part of standard, unattended system updates. I think it is reasonable to put something in the release notes. This change is much larger than changes we normally make in maintenance release, because it fixes a security bug. But because it is larger than normal, adding release notes like this about known breakage is, I think, a good idea. Perhaps you and Carl could collaborate on a page explaining the issue in detail, and on a brief note to include in the release notes that points to your more extensive discussion? But keep in mind I'm not the release manager; we'll need their buy in on this. 
--David From guido at python.org Wed Mar 28 23:50:41 2012 From: guido at python.org (Guido van Rossum) Date: Wed, 28 Mar 2012 14:50:41 -0700 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: <4F738446.9070200@scottdial.com> References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> <4F72B258.10306@scottdial.com> <4F72DBDE.6040003@scottdial.com> <4F738446.9070200@scottdial.com> Message-ID: On Wed, Mar 28, 2012 at 2:36 PM, Scott Dial wrote: > On 3/28/2012 10:29 AM, Guido van Rossum wrote: >> On Wed, Mar 28, 2012 at 7:17 AM, Nick Coghlan wrote: >>> Completely unintuitive and unnecessary. With the GIL taking care of >>> synchronisation issues, we can easily coerce time.time() into being a >>> monotonic clock by the simple expedient of saving the last returned >>> value: >> >> That's a pretty obvious trick. But why don't the kernels do this if >> monotonicity is so important? I'm sure there are also downsides, e.g. >> if the clock is accidentally set forward by an hour and then back >> again, you wouldn't have a useful clock for an hour. And the cache is >> not shared between processes so different processes wouldn't see the >> same clock value (I presume that most of these clocks have state in >> the kernel that isn't bound to any particular process -- AFAIK only >> clock() does that, and only on Unixy systems). >> > > What makes you think that isn't already true? What does "that" refer to in this sentence? > I don't know what > platforms that CPython compiles for that *won't* have one of the > aforementioned functions available that provide a *real* monotonic > clock. Surely, any platform that doesn't didn't recognize the need for > it, or they would just provide a monotonic clock. That is to say, if you > are a POSIX compliant system, then there is no reason to break > gettimeofday() and friends when you can just implement CLOCK_MONOTONIC > proper (even if it's just a trick like Nick's). 
> > I think the PEP should enumerate what platforms that CPython supports > that will not benefit from a real monotonic clock. I think the number of > platforms will be such a minority that the emulation makes sense. > Practicality beats purity, and all. > > -- > Scott Dial > scott at scottdial.com -- --Guido van Rossum (python.org/~guido) From rdmurray at bitdance.com Wed Mar 28 23:55:50 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Wed, 28 Mar 2012 17:55:50 -0400 Subject: [Python-Dev] bug tracker offline again for re-indexing Message-ID: <20120328215551.99B452500E9@webabinitio.net> Since Martin hasn't sent a note about this here I will: I noticed that text search wasn't working right on the bug tracker, and Martin has taken it offline again to re-index. --David From martin at v.loewis.de Thu Mar 29 00:06:22 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 29 Mar 2012 00:06:22 +0200 Subject: [Python-Dev] bug tracker offline again for re-indexing In-Reply-To: <20120328215551.99B452500E9@webabinitio.net> References: <20120328215551.99B452500E9@webabinitio.net> Message-ID: <4F738B5E.4010100@v.loewis.de> Am 28.03.2012 23:55, schrieb R. David Murray: > Since Martin hasn't sent a note about this here I will: > > I noticed that text search wasn't working right on the bug tracker, and Martin > has taken it offline again to re-index. which will, unfortunately, take a few more hours to complete. Regards, Martin From victor.stinner at gmail.com Thu Mar 29 01:27:44 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 29 Mar 2012 01:27:44 +0200 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> Message-ID: > Where in the stdlib do we actually calculate timeouts instead of using > the timeouts built into the OS (e.g. select())? At least in threading and queue modules. 
The common use case is to retry a function with a timeout if the syscall was interrupted by a signal (EINTR error). The socket module and _threading.Lock.acquire() implement such a retry loop using the system clock. They should use a monotonic clock instead. > I think it would be nice if we could somehow use the *same* clock as > the OS uses to implement timeouts. On Linux, nanosleep() uses CLOCK_MONOTONIC whereas POSIX suggests CLOCK_REALTIME. Some functions allow choosing the clock, like pthread locks or clock_nanosleep(). > I'd be happier if the fallback function didn't try to guarantee things > the underlying clock can't guarantee. I.e. I like the idea of having a > function that uses some accurate OS clock if one exists but falls back > to time.time() if not; I don't like the idea of that new function > trying to interpret the value of time.time() in any way. We may work around some known OS bugs, like: http://support.microsoft.com/?id=274323 The link contains an example of how to work around the bug. The idea of the workaround is to use two different monotonic clocks to detect leaps, with one trusted clock (GetTickCount) and one untrusted clock having a higher resolution (QueryPerformanceCounter). I don't think that the same algorithm is applicable on other OSes, because other OSes usually only provide one monotonic clock, sometimes through different APIs.
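Such a retry loop, recomputing the remaining timeout against a monotonic clock, could look roughly like the sketch below. time.monotonic() is the function this PEP proposes; call_with_timeout and its func argument are hypothetical stand-ins for the syscall wrappers in socket and _threading, not existing APIs.

```python
import errno
import time

def call_with_timeout(func, timeout):
    """Retry func() after EINTR, recomputing the remaining timeout
    against a monotonic clock so that a system clock jump can neither
    stretch nor truncate the total wait."""
    deadline = time.monotonic() + timeout
    while True:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            raise TimeoutError("timed out")
        try:
            return func(remaining)
        except OSError as exc:
            if exc.errno != errno.EINTR:
                raise
            # Interrupted by a signal: loop and retry with whatever
            # time is left on the monotonic deadline.
```

With the system clock, a backward jump between the deadline computation and the retry would silently extend the wait; a monotonic clock removes that failure mode.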
Victor From victor.stinner at gmail.com Thu Mar 29 02:14:00 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 29 Mar 2012 02:14:00 +0200 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> <4F72B258.10306@scottdial.com> <4F72DBDE.6040003@scottdial.com> Message-ID: > Could you (with help from others who contributed) try to compile a table > showing, for each platform (Windows/Mac/Linux/BSD) which clocks (or > variations) we are considering, and for each of those: > > - a link for the reference documentation > - what their typical accuracy is (barring jumps) > - what they do when the "civil" time is made to jump (forward or back) > by the user > - how they are affected by small tweaks to the civil time by NTP > - what they do if the system is suspended and resumed > - whether they can be shared between processes running on the same machine > - whether they may fail or be unsupported under some circumstances > > I have a feeling that if I saw such a table it would be much easier to decide. > > I assume much of this has already been said at one point in this > thread, but it's impossible to have an overview at the moment. I don't know where I can get all this information, but I'm completing the PEP each time I find new information. It's difficult to find out the accuracy of a clock and how it handles system suspend. I'm interested if anyone has such information.
Victor From martin at v.loewis.de Thu Mar 29 05:07:48 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 29 Mar 2012 05:07:48 +0200 Subject: [Python-Dev] bug tracker offline again for re-indexing In-Reply-To: <4F738B5E.4010100@v.loewis.de> References: <20120328215551.99B452500E9@webabinitio.net> <4F738B5E.4010100@v.loewis.de> Message-ID: <4F73D204.2000906@v.loewis.de> >> I noticed that text search wasn't working right on the bug tracker, and Martin >> has taken it offline again to re-index. > > which will, unfortunately, take a few more hours to complete. It seems to work now, so I turned it on again. Text search now uses Xapian, and recreating the Xapian index of all msg objects took a while. Regards, Martin From tshepang at gmail.com Thu Mar 29 09:33:42 2012 From: tshepang at gmail.com (Tshepang Lekhonkhobe) Date: Thu, 29 Mar 2012 09:33:42 +0200 Subject: [Python-Dev] Bug tracker outage In-Reply-To: <4F734E9D.4050607@v.loewis.de> References: <4F722CEB.1040102@v.loewis.de> <4F734E9D.4050607@v.loewis.de> Message-ID: On Wed, Mar 28, 2012 at 19:47, "Martin v. Löwis" wrote: > Am 27.03.2012 23:11, schrieb "Martin v. Löwis": >> Upfront hosting (Izak Burger) is going to do a Debian upgrade of the bug >> tracker machine "soon" (likely tomorrow). This may cause some outage, >> since there is a lot of custom stuff on the machine which may >> break with newer (Python) versions. I'll notify here when the upgrade >> is complete. > > The upgrade is complete. It was in fact the Postgres upgrade (to 8.4) > which caused the longest blackout time. Curiously, did the update have anything to do with Debian 5's EOL?
From jbschne at umich.edu Wed Mar 28 09:06:40 2012 From: jbschne at umich.edu (Jordan Schneider) Date: Wed, 28 Mar 2012 03:06:40 -0400 Subject: [Python-Dev] possible distutils.sysconfig.get_config_var bug Message-ID: <49B4A6A0-2225-49C6-82B7-B284D626A415@umich.edu> Hi python-dev, Sorry if this is the wrong place to discuss this potential bug - feel free to point me in the right direction if so. I'm running OS X 10.7.3, and have two python2.7s installed, one system default in /usr/bin and the other by homebrew symlinked in /usr/local/bin. While running a configure script, distutils.sysconfig.get_config_var is not returning a full path for my homebrew Python framework, like so: Python 2.7.2 (default, Mar 28 2012, 02:31:16) [GCC 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.1.00)] on darwin >>> distutils.sysconfig.get_config_var('LINKFORSHARED') '-u _PyMac_Error Python.framework/Versions/2.7/Python' (the full path should be /usr/local/Cellar/python/2.7.2/Frameworks/Python.framework/Versions/2.7/Python) Whereas from the system python in /usr/bin: Python 2.7.1 (r271:86832, Jul 31 2011, 19:30:53) [GCC 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2335.15.00)] on darwin >>> distutils.sysconfig.get_config_var('LINKFORSHARED') '-u _PyMac_Error /System/Library/Frameworks/Python.framework/Versions/2.7/Python' This seems like a bug, no? Thanks for the help, Jordan From jaraco at jaraco.com Thu Mar 29 13:41:46 2012 From: jaraco at jaraco.com (Jason R.
Coombs) Date: Thu, 29 Mar 2012 11:41:46 +0000 Subject: [Python-Dev] Virtualenv not portable from Python 2.7.2 to 2.7.3 (os.urandom missing) In-Reply-To: <20120328214610.E164D2500E9@webabinitio.net> References: <7E79234E600438479EC119BD241B48D601E7CFB2@BY2PRD0610MB389.namprd06.prod.outlook.com> <4F735CF5.3070700@oddbird.net> <7E79234E600438479EC119BD241B48D601E7D159@BY2PRD0610MB389.namprd06.prod.outlook.com> <20120328214610.E164D2500E9@webabinitio.net> Message-ID: <7E79234E600438479EC119BD241B48D601E7D9E1@BY2PRD0610MB389.namprd06.prod.outlook.com> Carl, I've drafted some notes: http://piratepad.net/PAZ3CEq9CZ Please feel free to edit them. If you want to chat, I can often be reached on freenode as 'jaraco' or XMPP at my e-mail address if you want to sprint on this in real-time. Does the issue only exist for Python 2.6 and 2.7? I'm not familiar with the release process. What's the next step? > -----Original Message----- > From: R. David Murray [mailto:rdmurray at bitdance.com] > Sent: Wednesday, 28 March, 2012 17:46 > > I think it is reasonable to put something in the release notes. This change is > much larger than changes we normally make in maintenance release, > because it fixes a security bug. But because it is larger than normal, adding > release notes like this about known breakage is, I think, a good idea. > > Perhaps you and Carl could collaborate on a page explaining the issue in > detail, and on a brief note to include in the release notes that points to your > more extensive discussion? -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 6662 bytes Desc: not available URL: From regebro at gmail.com Thu Mar 29 14:11:13 2012 From: regebro at gmail.com (Lennart Regebro) Date: Thu, 29 Mar 2012 14:11:13 +0200 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> <4F72B258.10306@scottdial.com> <4F72DBDE.6040003@scottdial.com> Message-ID: On Wed, Mar 28, 2012 at 23:40, Victor Stinner wrote: >> Does this primarily give a high resolution clock, or primarily a >> monotonic clock? That's not clear from either the name, or the PEP. > > I expect a better resolution from time.monotonic() than time.time(). Sure. And for me that means that time.hires() would give a high resolution version of time.time(). Ie, not monotonic, but wall clock. The question then is why time.time() doesn't give that resolution from the start. It seems to me we need three functions: One to get the wall clock, one to get a monotonic clock, and one that falls back if no monotonic clock is available. Both time.time() and time.monotonic() should give the highest resolution possible. As such, time.hires() seems pointless. //Lennart From rdmurray at bitdance.com Thu Mar 29 15:31:39 2012 From: rdmurray at bitdance.com (R. 
David Murray) Date: Thu, 29 Mar 2012 09:31:39 -0400 Subject: [Python-Dev] Virtualenv not portable from Python 2.7.2 to 2.7.3 (os.urandom missing) In-Reply-To: <7E79234E600438479EC119BD241B48D601E7D9E1@BY2PRD0610MB389.namprd06.prod.outlook.com> References: <7E79234E600438479EC119BD241B48D601E7CFB2@BY2PRD0610MB389.namprd06.prod.outlook.com> <4F735CF5.3070700@oddbird.net> <7E79234E600438479EC119BD241B48D601E7D159@BY2PRD0610MB389.namprd06.prod.outlook.com> <20120328214610.E164D2500E9@webabinitio.net> <7E79234E600438479EC119BD241B48D601E7D9E1@BY2PRD0610MB389.namprd06.prod.outlook.com> Message-ID: <20120329133139.9D7452500E9@webabinitio.net> On Thu, 29 Mar 2012 11:41:46 -0000, "Jason R. Coombs" wrote: > Does the issue only exist for Python 2.6 and 2.7? It might exist for 3.1 and 3.2 as well. > I'm not familiar with the release process. What's the next step? I would suggest opening an issue on the tracker and marking it as a release-blocker. That way the release managers will see it and can decide what if anything they want to do about it. --David From brett at python.org Thu Mar 29 16:31:20 2012 From: brett at python.org (Brett Cannon) Date: Thu, 29 Mar 2012 10:31:20 -0400 Subject: [Python-Dev] possible distutils.sysconfig.get_config_var bug In-Reply-To: <49B4A6A0-2225-49C6-82B7-B284D626A415@umich.edu> References: <49B4A6A0-2225-49C6-82B7-B284D626A415@umich.edu> Message-ID: If you could, Jordan, please file a bug at bugs.python.org so the discussion can happen there and be tracked better. On Wed, Mar 28, 2012 at 03:06, Jordan Schneider wrote: > Hi python-dev, > > Sorry if this is the wrong place to discuss this potential bug - feel free > to point me in the right direction if so. > > I'm running OS X 10.7.3, and have two python2.7s installed, one system > default in /usr/bin and the other by homebrew symlinked in /usr/local/bin. 
> > While running a configure script, distutils.sysconfig.get_config_var is > not returing a full path for my homebrew Python framework, like so: > > Python 2.7.2 (default, Mar 28 2012, 02:31:16) > [GCC 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.1.00)] on > darwin > >>> distutils.sysconfig.get_config_var('LINKFORSHARED') > '-u _PyMac_Error Python.framework/Versions/2.7/Python' > > (the full path should be > /usr/local/Cellar/python/2.7.2/Frameworks/Python.framework/Versions/2.7/Python) > > > Whereas from the system python in /usr/bin: > > Python 2.7.1 (r271:86832, Jul 31 2011, 19:30:53) > [GCC 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2335.15.00)] on > darwin > >>> distutils.sysconfig.get_config_var('LINKFORSHARED') > '-u _PyMac_Error > /System/Library/Frameworks/Python.framework/Versions/2.7/Python' > > This seems like a bug, no? > > Thanks for the help, > Jordan > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosslagerwall at gmail.com Thu Mar 29 18:21:34 2012 From: rosslagerwall at gmail.com (Ross Lagerwall) Date: Thu, 29 Mar 2012 18:21:34 +0200 Subject: [Python-Dev] bug tracker offline again for re-indexing In-Reply-To: <4F73D204.2000906@v.loewis.de> References: <20120328215551.99B452500E9@webabinitio.net> <4F738B5E.4010100@v.loewis.de> <4F73D204.2000906@v.loewis.de> Message-ID: <4F748C0E.4080107@gmail.com> On 03/29/2012 05:07 AM, "Martin v. L?wis" wrote: >>> I noticed that text search wasn't working right on the bug tracker, and Martin >>> has taken it offline again to re-index. >> >> which will, unfortunately, take a few more hours to complete. > > It seems to work now, so I turned it on again. 
Text search now uses > Xapian, and recreating the Xapian index of all msg objects took a while. > Is the search working properly: Search all for "virtualenv" at the top right and issue 14357 doesn't appear in the list of results... Also, search open issues for "subprocess" and there is only 1 result. I wish :-) -- Ross Lagerwall -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 294 bytes Desc: OpenPGP digital signature URL: From rdmurray at bitdance.com Thu Mar 29 18:39:18 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Thu, 29 Mar 2012 12:39:18 -0400 Subject: [Python-Dev] bug tracker offline again for re-indexing In-Reply-To: <4F748C0E.4080107@gmail.com> References: <20120328215551.99B452500E9@webabinitio.net> <4F738B5E.4010100@v.loewis.de> <4F73D204.2000906@v.loewis.de> <4F748C0E.4080107@gmail.com> Message-ID: <20120329163918.B7EA12500E9@webabinitio.net> On Thu, 29 Mar 2012 18:21:34 +0200, Ross Lagerwall wrote: > On 03/29/2012 05:07 AM, "Martin v. Löwis" wrote: > >>> I noticed that text search wasn't working right on the bug tracker, and Martin > >>> has taken it offline again to re-index. > >> > >> which will, unfortunately, take a few more hours to complete. > > > > It seems to work now, so I turned it on again. Text search now uses > > Xapian, and recreating the Xapian index of all msg objects took a while. > > > > Is the search working properly: Search all for "virtualenv" at the top > right and issue 14357 doesn't appear in the list of results... > > Also, search open issues for "subprocess" and there is only 1 result. I > wish :-) I get three for 'all' issues, which is certainly wrong. All of them have subprocess in the title. I suspect the search is only searching the title field, which is wrong.
--David From carl at oddbird.net Thu Mar 29 18:54:17 2012 From: carl at oddbird.net (Carl Meyer) Date: Thu, 29 Mar 2012 10:54:17 -0600 Subject: [Python-Dev] Virtualenv not portable from Python 2.7.2 to 2.7.3 (os.urandom missing) In-Reply-To: <7E79234E600438479EC119BD241B48D601E7D9E1@BY2PRD0610MB389.namprd06.prod.outlook.com> References: <7E79234E600438479EC119BD241B48D601E7CFB2@BY2PRD0610MB389.namprd06.prod.outlook.com> <4F735CF5.3070700@oddbird.net> <7E79234E600438479EC119BD241B48D601E7D159@BY2PRD0610MB389.namprd06.prod.outlook.com> <20120328214610.E164D2500E9@webabinitio.net> <7E79234E600438479EC119BD241B48D601E7D9E1@BY2PRD0610MB389.namprd06.prod.outlook.com> Message-ID: <4F7493B9.5070709@oddbird.net> Thanks Jason for raising this. I just assumed that this was virtualenv's fault (which it is) and didn't consider raising it here, but a note in the release notes for the affected Python versions will certainly reach many more of the likely-to-be-affected users. FTR, I confirmed that the issue also affects the upcoming point releases for 3.1 and 3.2, as well as 2.6 and 2.7. Jason filed issue 14444 to track the addition to the release notes for those versions. Carl -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: OpenPGP digital signature URL: From dmalcolm at redhat.com Thu Mar 29 19:39:27 2012 From: dmalcolm at redhat.com (David Malcolm) Date: Thu, 29 Mar 2012 13:39:27 -0400 Subject: [Python-Dev] Virtualenv not portable from Python 2.7.2 to 2.7.3 (os.urandom missing) In-Reply-To: <7E79234E600438479EC119BD241B48D601E7CFB2@BY2PRD0610MB389.namprd06.prod.outlook.com> References: <7E79234E600438479EC119BD241B48D601E7CFB2@BY2PRD0610MB389.namprd06.prod.outlook.com> Message-ID: <1333042767.31165.47.camel@surprise> On Wed, 2012-03-28 at 18:22 +0000, Jason R. Coombs wrote: > I see this was reported as a debian bug. 
> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=665776 > > To reproduce, using virtualenv 1.7+ on Python 2.7.2 on Ubuntu, create > a virtualenv. Moving that virtualenv to a host with Python 2.7.3RC2 > yields: > jaraco at vdm-dev:~$ /usr/bin/python2.7 -V > Python 2.7.3rc2 > jaraco at vdm-dev:~$ env/bin/python -V > Python 2.7.2 > jaraco at vdm-dev:~$ env/bin/python -c "import os; os.urandom()" > Traceback (most recent call last): > File "", line 1, in > AttributeError: 'module' object has no attribute 'urandom' It looks like this is a symptom of the move of urandom from os.py to posixmodule et al. At first glance, it looks like this specific hunk should be reverted: http://hg.python.org/cpython/rev/a0f43f4481e0#l7.1 so that if you're running with the new stdlib but an old python binary, the combination can still have a usable os.urandom. Should this be tracked in bugs.python.org? Hope this is helpful, Dave From rosslagerwall at gmail.com Thu Mar 29 20:13:51 2012 From: rosslagerwall at gmail.com (Ross Lagerwall) Date: Thu, 29 Mar 2012 20:13:51 +0200 Subject: [Python-Dev] bug tracker offline again for re-indexing In-Reply-To: <20120329163918.B7EA12500E9@webabinitio.net> References: <20120328215551.99B452500E9@webabinitio.net> <4F738B5E.4010100@v.loewis.de> <4F73D204.2000906@v.loewis.de> <4F748C0E.4080107@gmail.com> <20120329163918.B7EA12500E9@webabinitio.net> Message-ID: <4F74A65F.1020806@gmail.com> On 03/29/2012 06:39 PM, R. David Murray wrote: > I get three for 'all' issues, which is certainly wrong. All of them > have subprocess in the title. > > I suspect the search is only searching the title field, which is wrong. No, #14357 contains 'virtualenv' in the title but it doesn't come up in the 'all' search. -- Ross Lagerwall -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 294 bytes Desc: OpenPGP digital signature URL: From carl at oddbird.net Thu Mar 29 20:26:27 2012 From: carl at oddbird.net (Carl Meyer) Date: Thu, 29 Mar 2012 12:26:27 -0600 Subject: [Python-Dev] Virtualenv not portable from Python 2.7.2 to 2.7.3 (os.urandom missing) In-Reply-To: <1333042767.31165.47.camel@surprise> References: <7E79234E600438479EC119BD241B48D601E7CFB2@BY2PRD0610MB389.namprd06.prod.outlook.com> <1333042767.31165.47.camel@surprise> Message-ID: <4F74A953.2090909@oddbird.net> On 03/29/2012 11:39 AM, David Malcolm wrote: >> jaraco at vdm-dev:~$ env/bin/python -c "import os; os.urandom()" >> Traceback (most recent call last): >> File "", line 1, in >> AttributeError: 'module' object has no attribute 'urandom' > > It looks like this a symptom of the move of urandom to os.py to > posximodule et al. > > At first glance, it looks like this specific hunk should be reverted: > http://hg.python.org/cpython/rev/a0f43f4481e0#l7.1 > so that if you're running with the new stdlib but an old python binary > the combination can still have a usable os.urandom Indeed, I've just tested and verified that this does fix the problem. > Should this be tracked in bugs.python.org? I've added this option as a comment on bug 14444. The title of that bug is worded such that it could be reasonably resolved either with the backwards-compatibility fix or the release notes addition, the release managers can decide what seems appropriate to them. Carl -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: OpenPGP digital signature URL: From martin at v.loewis.de Thu Mar 29 21:14:32 2012 From: martin at v.loewis.de (=?ISO-8859-15?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 29 Mar 2012 21:14:32 +0200 Subject: [Python-Dev] bug tracker offline again for re-indexing In-Reply-To: <4F748C0E.4080107@gmail.com> References: <20120328215551.99B452500E9@webabinitio.net> <4F738B5E.4010100@v.loewis.de> <4F73D204.2000906@v.loewis.de> <4F748C0E.4080107@gmail.com> Message-ID: <4F74B498.6010805@v.loewis.de> Am 29.03.2012 18:21, schrieb Ross Lagerwall: > On 03/29/2012 05:07 AM, "Martin v. L?wis" wrote: >>>> I noticed that text search wasn't working right on the bug tracker, and Martin >>>> has taken it offline again to re-index. >>> >>> which will, unfortunately, take a few more hours to complete. >> >> It seems to work now, so I turned it on again. Text search now uses >> Xapian, and recreating the Xapian index of all msg objects took a while. >> > > Is the search working properly: Search all for "virtualenv" at the top > right and issue 14357 doesn't appear in the list of results... > > Also, search open issues for "subprocess" and there is only 1 result. I > wish :-) Please submit an issue to the meta tracker. It may take weeks until I can get to it. Regards, Martin From tlesher at gmail.com Thu Mar 29 21:35:27 2012 From: tlesher at gmail.com (Tim Lesher) Date: Thu, 29 Mar 2012 15:35:27 -0400 Subject: [Python-Dev] Bug in generator if the generator in created in a C thread In-Reply-To: References: Message-ID: On Wed, Mar 28, 2012 at 06:45, Victor Stinner wrote: > We have a crash in our product when tracing is enabled by > sys.settrace() and threading.settrace(). If a Python generator is > created in a C thread, calling the generator later in another thread > may crash if Python tracing is enabled. 
> > - the C thread calls PyGILState_Ensure() which creates a temporary > Python thread state > - a generator is created; the generator has a reference to a Python > frame which keeps a reference to the temporary Python thread state > - the C thread calls PyGILState_Release() which destroys the > temporary Python thread state > - when the generator is called later in another thread, call_trace() > reads the Python thread state from the generator frame, which references the > destroyed thread state => it crashes on a pointer dereference if the > memory was reallocated (by malloc()) and the data were erased > This is timely. We've seen several similar bugs in our 3.1.2-based embedded product--in fact, I'm tracking down what might be another this week. The common theme is that C extension code that depends on PyGILState_Ensure() for GIL management runs the risk of causing latent crashes in any situation that involves preserving a frame beyond the lifetime of the thread that created it. In our case, the culprits have usually been the frames attached to exceptions thrown across the extension->Python->embedding app boundaries. From a theoretical standpoint, I can't quite decide what the real error is: 1) the fact that PyGILState_Release() destroys a temporary thread state that may still be referenced by some objects, or 2) the fact that some code is trying to keep frame objects after the creating thread state no longer exists. This week I've been leaning toward 2), but then I realized that keeping frames post-thread-death is not that uncommon (for example, debuggers and other diagnostic techniques like http://nedbatchelder.com/blog/200711/rethrowing_exceptions_in_python.html). 
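For reference, the lifecycle Victor describes can be sketched in pure Python. The sketch below is harmless as written, because a threading.Thread owns a full thread state for its entire life; the actual crash requires the generator's frame to capture a *temporary* thread state created by PyGILState_Ensure() in a C thread, which pure Python code cannot reproduce directly:

```python
import threading

def make_generator(out):
    # Created in one thread: the generator's frame records a reference
    # to the creating thread's Python thread state.
    def gen():
        yield 1
        yield 2
    out.append(gen())

results = []
worker = threading.Thread(target=make_generator, args=(results,))
worker.start()
worker.join()   # the creating thread is gone before we resume

# Consumed in another thread (here, the main thread). When tracing is
# enabled via sys.settrace(), call_trace() consults the thread state
# recorded in the frame; that is the pointer that is stale in the
# C-thread case described above.
print(list(results[0]))   # prints [1, 2]
```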
Locally we added some (unfortunate) code to our 3.1.2 port to wrap PyGILState_Ensure(), which I thought had sidestepped the issue for us: void takeGIL() { PyGILState_Ensure(); // This has the side effect of keeping such thread states alive until // the interpreter is finalized; however, all thread state objects get // unconditionally deleted during Py_Finalize, so they won't leak. PyThreadState* pThreadState = PyGILState_GetThisThreadState(); if (pThreadState->gilstate_counter == 1) { ++pThreadState->gilstate_counter; } } But clearly that can't be a correct answer (and it may not even be a functioning one, given that I'm seeing a similar issue again). -- Tim Lesher From rdmurray at bitdance.com Thu Mar 29 21:58:25 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Thu, 29 Mar 2012 15:58:25 -0400 Subject: [Python-Dev] Issue 14417: consequences of new dict runtime error Message-ID: <20120329195825.843352500E9@webabinitio.net> Some of us have expressed uneasiness about the consequences of dict raising an error on lookup if the dict has been modified, the fix Victor made to solve one of the crashers. I don't know if I speak for the others, but (assuming that I understand the change correctly) my concern is that there is probably a significant amount of threading code out there that assumes that dict *lookup* is a thread-safe operation. Much of that code will, if moved to Python 3.3, now be subject to random runtime errors for which it will not be prepared. Further, code which appears safe can suddenly become unsafe if a refactoring of the code causes an object to be stored in the dictionary that has a Python equality method. Would it be possible to modify the fix so that the lookup is retried a non-trivial but finite number of times, so that normal code will work and only pathological code will break? I know that I really don't want to think about having to audit the (significantly threaded) application I'm currently working on to make sure it is "3.3 safe". 
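The kind of key at issue here, an object with a Python-level equality method, runs user code inside the dict lookup itself, which is exactly the window in which a concurrent modification can now surface. A minimal sketch (the Key class is hypothetical, purely for illustration):

```python
class Key:
    """A key whose hash and equality are Python code, not C."""
    def __init__(self, name):
        self.name = name

    def __hash__(self):
        # Runs during d[k]: the lookup is no longer a single atomic
        # C-level operation once user code like this is involved.
        return hash(self.name)

    def __eq__(self, other):
        return isinstance(other, Key) and self.name == other.name

d = {Key("spam"): 1}
print(d[Key("spam")])   # prints 1; __hash__ and __eq__ both ran mid-lookup
```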
Dict lookup operations are *common*, and we've never had to think about whether or not they were thread-safe before (unless there were inter-thread synchronization issues involved, of course). Nor am I sure the locking dict type suggested by Jim on the issue would help, since a number of the dicts we are using are produced by library code. So we'd have to wait for those libraries to be ported to 3.3.... --David From rdmurray at bitdance.com Thu Mar 29 22:06:30 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Thu, 29 Mar 2012 16:06:30 -0400 Subject: [Python-Dev] bug tracker offline again for re-indexing In-Reply-To: <4F74B498.6010805@v.loewis.de> References: <20120328215551.99B452500E9@webabinitio.net> <4F738B5E.4010100@v.loewis.de> <4F73D204.2000906@v.loewis.de> <4F748C0E.4080107@gmail.com> <4F74B498.6010805@v.loewis.de> Message-ID: <20120329200631.6D7872500E9@webabinitio.net> On Thu, 29 Mar 2012 21:14:32 +0200, =?ISO-8859-15?Q?=22Martin_v=2E_L=F6wis=22?= wrote: > Am 29.03.2012 18:21, schrieb Ross Lagerwall: > > On 03/29/2012 05:07 AM, "Martin v. L??wis" wrote: > >>>> I noticed that text search wasn't working right on the bug tracker, and Martin > >>>> has taken it offline again to re-index. > >>> > >>> which will, unfortunately, take a few more hours to complete. > >> > >> It seems to work now, so I turned it on again. Text search now uses > >> Xapian, and recreating the Xapian index of all msg objects took a while. > >> > > > > Is the search working properly: Search all for "virtualenv" at the top > > right and issue 14357 doesn't appear in the list of results... > > > > Also, search open issues for "subprocess" and there is only 1 result. I > > wish :-) > > Please submit an issue to the meta tracker. It may take weeks until I > can get to it. I've reopened issue 443, with an example of a failing search giving relevant issue numbers. 
--David From guido at python.org Thu Mar 29 22:09:17 2012 From: guido at python.org (Guido van Rossum) Date: Thu, 29 Mar 2012 13:09:17 -0700 Subject: [Python-Dev] Issue 14417: consequences of new dict runtime error In-Reply-To: <20120329195825.843352500E9@webabinitio.net> References: <20120329195825.843352500E9@webabinitio.net> Message-ID: On Thu, Mar 29, 2012 at 12:58 PM, R. David Murray wrote: > Some of us have expressed uneasiness about the consequences of dict > raising an error on lookup if the dict has been modified, the fix Victor > made to solve one of the crashers. > > I don't know if I speak for the others, but (assuming that I understand > the change correctly) my concern is that there is probably a significant > amount of threading code out there that assumes that dict *lookup* is > a thread-safe operation. Much of that code will, if moved to Python > 3.3, now be subject to random runtime errors for which it will not > be prepared. Further, code which appears safe can suddenly become > unsafe if a refactoring of the code causes an object to be stored in > the dictionary that has a Python equality method. My original assessment was that this only affects dicts whose keys have a user-implemented __hash__ or __eq__ implementation, and that the number of apps that use this *and* assume the threadsafe property would be pretty small. This is just intuition, I don't have hard facts. But I do want to stress that not all dict lookups automatically become thread-unsafe, only those that need to run user code as part of the key lookup. > Would it be possible to modify the fix so that the lookup is retried a > non-trivial but finite number of times, so that normal code will work > and only pathological code will break? FWIW a similar approach was rejected as a fix for the hash DoS attack. > I know that I really don't want to think about having to audit the > (significantly threaded) application I'm currently working on to make sure > it is "3.3 safe". 
Dict lookup operations are *common*, and we've never > had to think about whether or not they were thread-safe before (unless > there were inter-thread synchronization issues involved, of course). > Nor am I sure the locking dict type suggested by Jim on the issue would > help, since a number of the dicts we are using are produced by library > code. So we'd have to wait for those libraries to be ported to 3.3.... Agreed that this is somewhat scary. -- --Guido van Rossum (python.org/~guido) From rdmurray at bitdance.com Thu Mar 29 22:31:03 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Thu, 29 Mar 2012 16:31:03 -0400 Subject: [Python-Dev] Issue 14417: consequences of new dict runtime error In-Reply-To: References: <20120329195825.843352500E9@webabinitio.net> Message-ID: <20120329203103.95A4B2500E9@webabinitio.net> On Thu, 29 Mar 2012 13:09:17 -0700, Guido van Rossum wrote: > On Thu, Mar 29, 2012 at 12:58 PM, R. David Murray wrote: > > Some of us have expressed uneasiness about the consequences of dict > > raising an error on lookup if the dict has been modified, the fix Victor > > made to solve one of the crashers. > > > > I don't know if I speak for the others, but (assuming that I understand > > the change correctly) my concern is that there is probably a significant > > amount of threading code out there that assumes that dict *lookup* is > > a thread-safe operation. Much of that code will, if moved to Python > > 3.3, now be subject to random runtime errors for which it will not > > be prepared. Further, code which appears safe can suddenly become > > unsafe if a refactoring of the code causes an object to be stored in > > the dictionary that has a Python equality method. > > My original assessment was that this only affects dicts whose keys > have a user-implemented __hash__ or __eq__ implementation, and that > the number of apps that use this *and* assume the threadsafe property > would be pretty small. 
This is just intuition, I don't have hard > facts. But I do want to stress that not all dict lookups automatically > become thread-unsafe, only those that need to run user code as part of > the key lookup. You are probably correct, but the thing is that one still has to do the code audit to be sure...and then make sure that no one later introduces such an object type as a dict key. Are there any other places in Python where substituting a duck-typed Python class or a Python subclass can cause a runtime error in previously working code? > > Would it be possible to modify the fix so that the lookup is retried a > > non-trivial but finite number of times, so that normal code will work > > and only pathological code will break? > > FWIW a similar approach was rejected as a fix for the hash DoS attack. Yes, but in this case the non-counting version breaks just as randomly, but more often. So arguing that counting here is analogous to counting in the DoS attack issue is an argument for removing the fix entirely :) The counting version could use a large enough count (since the count used to be infinite!) that only code that would be having pathological performance anyway would raise the runtime error, rather than any code (that uses python __eq__ on keys) randomly raising a runtime error, which is what we have now. --David From rdmurray at bitdance.com Thu Mar 29 22:48:14 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Thu, 29 Mar 2012 16:48:14 -0400 Subject: [Python-Dev] Issue 14417: consequences of new dict runtime error In-Reply-To: <20120329203103.95A4B2500E9@webabinitio.net> References: <20120329195825.843352500E9@webabinitio.net> <20120329203103.95A4B2500E9@webabinitio.net> Message-ID: <20120329204815.D7AC32500E9@webabinitio.net> On Thu, 29 Mar 2012 16:31:03 -0400, "R. 
David Murray" wrote: > On Thu, 29 Mar 2012 13:09:17 -0700, Guido van Rossum wrote: > > My original assessment was that this only affects dicts whose keys > > have a user-implemented __hash__ or __eq__ implementation, and that > > the number of apps that use this *and* assume the threadsafe property > > would be pretty small. This is just intuition, I don't have hard > > facts. But I do want to stress that not all dict lookups automatically > > become thread-unsafe, only those that need to run user code as part of > > the key lookup. > > You are probably correct, but the thing is that one still has to do the > code audit to be sure...and then make sure that no one later introduces > such an object type as a dict key. I just did a quick grep on our project. We are only defining __eq__ and __hash__ a couple places, but both are objects that could easily get used as dict keys (there is a good chance that's *why* those methods are defined) accessed by more than one thread. I haven't done the audit to find out :) The libraries we depend on have many more definitions of __eq__ and __hash__, and we'd have to check them too. (Including SQLAlchemy, and I wouldn't want that job.) So our intuition that this is not common may be wrong. --David From stefan_ml at behnel.de Thu Mar 29 23:00:20 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Thu, 29 Mar 2012 23:00:20 +0200 Subject: [Python-Dev] Issue 14417: consequences of new dict runtime error In-Reply-To: <20120329203103.95A4B2500E9@webabinitio.net> References: <20120329195825.843352500E9@webabinitio.net> <20120329203103.95A4B2500E9@webabinitio.net> Message-ID: R. David Murray, 29.03.2012 22:31: > On Thu, 29 Mar 2012 13:09:17 -0700, Guido van Rossum wrote: >> On Thu, Mar 29, 2012 at 12:58 PM, R. David Murray wrote: >>> Some of us have expressed uneasiness about the consequences of dict >>> raising an error on lookup if the dict has been modified, the fix Victor >>> made to solve one of the crashers. 
>>> >>> I don't know if I speak for the others, but (assuming that I understand >>> the change correctly) my concern is that there is probably a significant >>> amount of threading code out there that assumes that dict *lookup* is >>> a thread-safe operation. Much of that code will, if moved to Python >>> 3.3, now be subject to random runtime errors for which it will not >>> be prepared. Further, code which appears safe can suddenly become >>> unsafe if a refactoring of the code causes an object to be stored in >>> the dictionary that has a Python equality method. >> >> My original assessment was that this only affects dicts whose keys >> have a user-implemented __hash__ or __eq__ implementation, and that >> the number of apps that use this *and* assume the threadsafe property >> would be pretty small. This is just intuition, I don't have hard >> facts. But I do want to stress that not all dict lookups automatically >> become thread-unsafe, only those that need to run user code as part of >> the key lookup. > > You are probably correct, but the thing is that one still has to do the > code audit to be sure...and then make sure that no one later introduces > such an object type as a dict key. The thing is: the assumption that arbitrary dict lookups are GIL-atomic has *always* been false. Only those that do not involve Python code execution for the hash key calculation or the object comparison are. That includes the built-in strings and numbers (and tuples of them), which are by far the most common dict keys. Looking up arbitrary user provided objects is definitely not guaranteed to be atomic. 
Stefan From animelovin at gmail.com Thu Mar 29 23:03:17 2012 From: animelovin at gmail.com (Etienne Robillard) Date: Thu, 29 Mar 2012 17:03:17 -0400 Subject: [Python-Dev] Issue 14417: consequences of new dict runtime error In-Reply-To: <20120329204815.D7AC32500E9@webabinitio.net> References: <20120329195825.843352500E9@webabinitio.net> <20120329203103.95A4B2500E9@webabinitio.net> <20120329204815.D7AC32500E9@webabinitio.net> Message-ID: <4F74CE15.3040204@gmail.com> On 03/29/2012 04:48 PM, R. David Murray wrote: > On Thu, 29 Mar 2012 16:31:03 -0400, "R. David Murray" wrote: >> On Thu, 29 Mar 2012 13:09:17 -0700, Guido van Rossum wrote: >>> My original assessment was that this only affects dicts whose keys >>> have a user-implemented __hash__ or __eq__ implementation, and that >>> the number of apps that use this *and* assume the threadsafe property >>> would be pretty small. This is just intuition, I don't have hard >>> facts. But I do want to stress that not all dict lookups automatically >>> become thread-unsafe, only those that need to run user code as part of >>> the key lookup. >> >> You are probably correct, but the thing is that one still has to do the >> code audit to be sure...and then make sure that no one later introduces >> such an object type as a dict key. > > I just did a quick grep on our project. We are only defining __eq__ > and __hash__ a couple places, but both are objects that could easily get > used as dict keys (there is a good chance that's *why* those methods are > defined) accessed by more than one thread. I haven't done the audit to > find out :) > > The libraries we depend on have many more definitions of __eq__ and > __hash__, and we'd have to check them too. (Including SQLAlchemy, > and I wouldn't want that job.) > > So our intuition that this is not common may be wrong. 
> > --David > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/animelovin%40gmail.com > Hm, as far as I understand this seems like an issue for GNU Pth, not a Python job; it should transparently handle thread-safety issues based on the host/machine capabilities. Therefore I hope the fix in Python doesn't cause thread-unsafe apps to raise spurious RuntimeErrors when a dict gets modified on an SMP-aware platform... :-) From victor.stinner at gmail.com Thu Mar 29 23:26:42 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 29 Mar 2012 23:26:42 +0200 Subject: [Python-Dev] Bug in generator if the generator is created in a C thread In-Reply-To: References: Message-ID: <4F74D392.2040808@gmail.com> On 29/03/2012 21:35, Tim Lesher wrote: > From a theoretical standpoint, I can't quite decide what the real error is: > > 1) the fact that PyGILState_Release() destroys a temporary thread > state that may still be referenced by some objects, or > 2) the fact that some code is trying to keep frame objects after the > creating thread state no longer exists. > > This week I've been leaning toward 2), but then I realized that > keeping frames post-thread-death is not that uncommon (for example, > debuggers and other diagnostic techniques like > http://nedbatchelder.com/blog/200711/rethrowing_exceptions_in_python.html). The problem is not the frame, but the Python thread state referenced by the frame. It's a private attribute. My patch just updates this reference before running the generator (and it clears the reference when the generator execution is stopped or finished). 
> Locally we added some (unfortunate) code to our 3.1.2 port to wrap > PyGILState_Ensure(), which I thought had sidestepped the issue for us: > > void takeGIL() > { > PyGILState_Ensure(); > // This has the side effect of keeping such thread states alive until > // the interpreter is finalized; however, all thread state objects get > // unconditionally deleted during Py_Finalize, so they won't leak. > PyThreadState* pThreadState = PyGILState_GetThisThreadState(); > if (pThreadState->gilstate_counter == 1) > { > ++pThreadState->gilstate_counter; > } > } > > But clearly that can't be a correct answer (and it may not even be a > functioning one, given that I'm seeing a similar issue again). You may leak memory if your threads have a short lifetime and you create many threads. For example if one thread is only used to process one request and then is destroyed. Victor From rdmurray at bitdance.com Fri Mar 30 00:07:54 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Thu, 29 Mar 2012 18:07:54 -0400 Subject: [Python-Dev] Issue 14417: consequences of new dict runtime error In-Reply-To: References: <20120329195825.843352500E9@webabinitio.net> <20120329203103.95A4B2500E9@webabinitio.net> Message-ID: <20120329220755.377052500E9@webabinitio.net> On Thu, 29 Mar 2012 23:00:20 +0200, Stefan Behnel wrote: > R. David Murray, 29.03.2012 22:31: > > On Thu, 29 Mar 2012 13:09:17 -0700, Guido van Rossum wrote: > >> On Thu, Mar 29, 2012 at 12:58 PM, R. David Murray wrote: > >>> Some of us have expressed uneasiness about the consequences of dict > >>> raising an error on lookup if the dict has been modified, the fix Victor > >>> made to solve one of the crashers. > >>> > >>> I don't know if I speak for the others, but (assuming that I understand > >>> the change correctly) my concern is that there is probably a significant > >>> amount of threading code out there that assumes that dict *lookup* is > >>> a thread-safe operation. 
Much of that code will, if moved to Python > >>> 3.3, now be subject to random runtime errors for which it will not > >>> be prepared. Further, code which appears safe can suddenly become > >>> unsafe if a refactoring of the code causes an object to be stored in > >>> the dictionary that has a Python equality method. > >> > >> My original assessment was that this only affects dicts whose keys > >> have a user-implemented __hash__ or __eq__ implementation, and that > >> the number of apps that use this *and* assume the threadsafe property > >> would be pretty small. This is just intuition, I don't have hard > >> facts. But I do want to stress that not all dict lookups automatically > >> become thread-unsafe, only those that need to run user code as part of > >> the key lookup. > > > > You are probably correct, but the thing is that one still has to do the > > code audit to be sure...and then make sure that no one later introduces > > such an object type as a dict key. > > The thing is: the assumption that arbitrary dict lookups are GIL-atomic has > *always* been false. Only those that do not involve Python code execution > for the hash key calculation or the object comparison are. That includes > the built-in strings and numbers (and tuples of them), which are by far the > most common dict keys. Looking up arbitrary user provided objects is > definitely not guaranteed to be atomic. Well, I'm afraid I was using the term 'thread safety' rather too loosely there. What I mean is that if you do a dict lookup, the lookup either returns a value or a KeyError, and that if you get back an object that object has internally consistent state. The problem this fix introduces is that the lookup may fail with a RuntimeError rather than a KeyError, which it has never done before. I think that is what Guido means by code that uses objects with python eq/hash *and* assumes threadsafe lookup. 
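That "value or KeyError" contract is baked into the pervasive lookup-with-fallback idiom, which has had no reason to guard against anything else. A sketch of the pattern at risk (the helper name is illustrative):

```python
def lookup_or_default(d, key, default=None):
    try:
        return d[key]       # under the 3.3 change, this may raise
                            # RuntimeError if another thread mutates the
                            # dict while a Python-level __hash__/__eq__
                            # runs during the lookup
    except KeyError:
        return default      # a RuntimeError would propagate, uncaught

table = {"a": 1}
print(lookup_or_default(table, "a"))        # prints 1
print(lookup_or_default(table, "missing"))  # prints None
```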
If mutation of the objects or dict during the lookup is a concern, then the code would use locks and wouldn't have the problem. But there are certainly situations where it doesn't matter if the dictionary mutates during the lookup, as long as you get either an object or a KeyError, and thus no locks are (currently) needed. Maybe I'm being paranoid about breakage here, but as with most backward compatibility concerns, there are probably more bits of code that will be affected than our intuition indicates. --David From brian at python.org Fri Mar 30 00:39:27 2012 From: brian at python.org (Brian Curtin) Date: Thu, 29 Mar 2012 17:39:27 -0500 Subject: [Python-Dev] Integrating the PEP 397 launcher Message-ID: After talking with Martin and several others during the language summit and elsewhere around PyCon, PEP 397 should be accepted. I don't remember who, but some suggested it should just be a regular old feature instead of going through the PEP process. So...does this even need to continue the PEP process? Vinay - is the code you have on bitbucket ready to roll into CPython, thus into the installer? From benjamin at python.org Fri Mar 30 00:45:10 2012 From: benjamin at python.org (Benjamin Peterson) Date: Thu, 29 Mar 2012 18:45:10 -0400 Subject: [Python-Dev] Integrating the PEP 397 launcher In-Reply-To: References: Message-ID: 2012/3/29 Brian Curtin : > After talking with Martin and several others during the language > summit and elsewhere around PyCon, PEP 397 should be accepted. I don't > remember who, but some suggested it should just be a regular old > feature instead of going through the PEP process. So...does this even > need to continue the PEP process? If you have a PEP and it's accepted, what would be the difference? 
-- Regards, Benjamin From brian at python.org Fri Mar 30 00:50:01 2012 From: brian at python.org (Brian Curtin) Date: Thu, 29 Mar 2012 17:50:01 -0500 Subject: [Python-Dev] Integrating the PEP 397 launcher In-Reply-To: References: Message-ID: On Thu, Mar 29, 2012 at 17:45, Benjamin Peterson wrote: > 2012/3/29 Brian Curtin : >> After talking with Martin and several others during the language >> summit and elsewhere around PyCon, PEP 397 should be accepted. I don't >> remember who, but some suggested it should just be a regular old >> feature instead of going through the PEP process. So...does this even >> need to continue the PEP process? > > If you have a PEP and it's accepted, what would be the difference? In the end? Nothing. It was a comment about this whole process not even needing to exist. From v+python at g.nevcal.com Fri Mar 30 01:08:06 2012 From: v+python at g.nevcal.com (Glenn Linderman) Date: Thu, 29 Mar 2012 16:08:06 -0700 Subject: [Python-Dev] Integrating the PEP 397 launcher In-Reply-To: References: Message-ID: <4F74EB56.2040009@g.nevcal.com> On 3/29/2012 3:50 PM, Brian Curtin wrote: > On Thu, Mar 29, 2012 at 17:45, Benjamin Peterson wrote: >> > 2012/3/29 Brian Curtin: >>> >> After talking with Martin and several others during the language >>> >> summit and elsewhere around PyCon, PEP 397 should be accepted. I don't >>> >> remember who, but some suggested it should just be a regular old >>> >> feature instead of going through the PEP process. So...does this even >>> >> need to continue the PEP process? >> > >> > If you have a PEP and it's accepted, what would be the difference? > In the end? Nothing. It was a comment about this whole process not > even needing to exist. It was pretty controversial when it started... but it is such obviously beneficial functionality... and works well... All it needs is official acceptance now, and integration into the release, no? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From brian at python.org Fri Mar 30 01:15:13 2012 From: brian at python.org (Brian Curtin) Date: Thu, 29 Mar 2012 18:15:13 -0500 Subject: [Python-Dev] Integrating the PEP 397 launcher In-Reply-To: <4F74EB56.2040009@g.nevcal.com> References: <4F74EB56.2040009@g.nevcal.com> Message-ID: On Thu, Mar 29, 2012 at 18:08, Glenn Linderman wrote: > All it needs is official acceptance now, and integration into the release, > no? If it wasn't clear, this is what I said in the first post. From benjamin at python.org Fri Mar 30 02:02:39 2012 From: benjamin at python.org (Benjamin Peterson) Date: Thu, 29 Mar 2012 20:02:39 -0400 Subject: [Python-Dev] Integrating the PEP 397 launcher In-Reply-To: References: Message-ID: 2012/3/29 Brian Curtin : > On Thu, Mar 29, 2012 at 17:45, Benjamin Peterson wrote: >> 2012/3/29 Brian Curtin : >>> After talking with Martin and several others during the language >>> summit and elsewhere around PyCon, PEP 397 should be accepted. I don't >>> remember who, but some suggested it should just be a regular old >>> feature instead of going through the PEP process. So...does this even >>> need to continue the PEP process? >> >> If you have a PEP and it's accepted, what would be the difference? > > In the end? Nothing. It was a comment about this whole process not > even needing to exist. The PEP process in general or specifically for this PEP? -- Regards, Benjamin From vinay_sajip at yahoo.co.uk Fri Mar 30 02:22:04 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Fri, 30 Mar 2012 00:22:04 +0000 (UTC) Subject: [Python-Dev] Integrating the PEP 397 launcher References: Message-ID: Brian Curtin python.org> writes: > Vinay - is the code you have on bitbucket ready to roll into CPython, > thus into the installer? I believe the main C launcher code is ready to roll into CPython. 
However, the standalone installer I provide uses WiX rather than msilib, and includes additional executables for functionality like "associate .py with one of the installed Pythons" when the launcher is uninstalled, and for printing messages in certain contexts when installing. I believe there needs to be a little more thought given to how to bring the launcher into the main installer to see if we can either dispense with, or make suitable changes to, these ancillary functions. I would appreciate some feedback from Martin about this - as far as I know he has not made any comments about launcher integration into the main installer. The current launcher functionality (py[w].exe) is as outlined in the PEP + feedback from users (e.g. your recent suggestion to use LOCALAPPDATA rather than APPDATA). The test harness may also need some thinking about - as the launcher executable is separate from Python, I'm not sure if it's appropriate just to create a "test_launcher.py" in Lib/test. To do a full test of the launcher you need multiple 2.x and 3.x versions installed, and I'm not sure if this could be done on existing Windows buildbots, for example. Of course it could be done with mocked executables and synthetically-added registry entries, but that isn't currently in place. Regards, Vinay Sajip From stephen at xemacs.org Fri Mar 30 03:31:00 2012 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Fri, 30 Mar 2012 10:31:00 +0900 Subject: [Python-Dev] Integrating the PEP 397 launcher In-Reply-To: References: Message-ID: On Fri, Mar 30, 2012 at 7:50 AM, Brian Curtin wrote: > On Thu, Mar 29, 2012 at 17:45, Benjamin Peterson wrote: >> 2012/3/29 Brian Curtin : >>> After talking with Martin and several others during the language >>> summit and elsewhere around PyCon, PEP 397 should be accepted. I don't >>> remember who, but some suggested it should just be a regular old >>> feature instead of going through the PEP process. 
So...does this even >>> need to continue the PEP process? >> >> If you have a PEP and it's accepted, what would be the difference? > > In the end? Nothing. It was a comment about this whole process not > even needing to exist. Hindsight, as usual, is 20-20. If the process got enough discussion that somebody said, "look, this really needs a PEP," it's worth the effort to record it. From ncoghlan at gmail.com Fri Mar 30 04:04:03 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 30 Mar 2012 12:04:03 +1000 Subject: [Python-Dev] Virtualenv not portable from Python 2.7.2 to 2.7.3 (os.urandom missing) In-Reply-To: <4F74A953.2090909@oddbird.net> References: <7E79234E600438479EC119BD241B48D601E7CFB2@BY2PRD0610MB389.namprd06.prod.outlook.com> <1333042767.31165.47.camel@surprise> <4F74A953.2090909@oddbird.net> Message-ID: On Fri, Mar 30, 2012 at 4:26 AM, Carl Meyer wrote: > I've added this option as a comment on bug 14444. The title of that bug > is worded such that it could be reasonably resolved either with the > backwards-compatibility fix or the release notes addition, the release > managers can decide what seems appropriate to them. +1 for restoring the fallback code in os.py. A security release definitely shouldn't be breaking that kind of thing. Regards, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From merwok at netwok.org Fri Mar 30 04:07:49 2012 From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=) Date: Thu, 29 Mar 2012 22:07:49 -0400 Subject: [Python-Dev] Virtualenv not portable from Python 2.7.2 to 2.7.3 (os.urandom missing) In-Reply-To: References: <7E79234E600438479EC119BD241B48D601E7CFB2@BY2PRD0610MB389.namprd06.prod.outlook.com> <1333042767.31165.47.camel@surprise> <4F74A953.2090909@oddbird.net> Message-ID: <4F751575.1030603@netwok.org> Le 29/03/2012 22:04, Nick Coghlan a ?crit : > On Fri, Mar 30, 2012 at 4:26 AM, Carl Meyer wrote: >> I've added this option as a comment on bug 14444. 
The title of that bug >> is worded such that it could be reasonably resolved either with the >> backwards-compatibility fix or the release notes addition, the release >> managers can decide what seems appropriate to them. > +1 for restoring the fallback code in os.py. A security release > definitely shouldn't be breaking that kind of thing. The RMs have already agreed not to restore the fallback, and the maintainer of virtualenv agreed. See report for more details. Cheers From ncoghlan at gmail.com Fri Mar 30 05:05:00 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 30 Mar 2012 13:05:00 +1000 Subject: [Python-Dev] Virtualenv not portable from Python 2.7.2 to 2.7.3 (os.urandom missing) In-Reply-To: <4F751575.1030603@netwok.org> References: <7E79234E600438479EC119BD241B48D601E7CFB2@BY2PRD0610MB389.namprd06.prod.outlook.com> <1333042767.31165.47.camel@surprise> <4F74A953.2090909@oddbird.net> <4F751575.1030603@netwok.org> Message-ID: On Fri, Mar 30, 2012 at 12:07 PM, Éric Araujo wrote: > On 29/03/2012 22:04, Nick Coghlan wrote: >> On Fri, Mar 30, 2012 at 4:26 AM, Carl Meyer wrote: >>> I've added this option as a comment on bug 14444. The title of that bug >>> is worded such that it could be reasonably resolved either with the >>> backwards-compatibility fix or the release notes addition, the release >>> managers can decide what seems appropriate to them. >> +1 for restoring the fallback code in os.py. A security release >> definitely shouldn't be breaking that kind of thing. > > The RMs have already agreed not to restore the fallback, and the > maintainer of virtualenv agreed. See report for more details.
The details are pretty short (and make sense), so I'll repeat the most salient points here for the benefit of others: - if the binary in a virtualenv isn't updated, then projects running in a virtualenv won't receive the security fix when the updated version of Python is installed - restoring the fallback in os.py would make this failure mode *silent*, so a user could upgrade Python, set PYTHONHASHSEED, but fail to realise they also need to update the binary in the virtualenv in order to benefit from the hash protection - with the current behaviour, failing to upgrade the binary results in a noisy ImportError or AttributeError related to os.urandom, which the release notes and virtualenv help channels can instruct people on handling. This approach explicitly covers the additional steps needed to fully deploy the security fix when using virtualenv. That rationale makes sense to me, too. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Fri Mar 30 05:12:18 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 30 Mar 2012 13:12:18 +1000 Subject: [Python-Dev] [Python-checkins] cpython (3.2): Issue #14409: IDLE doesn't not execute commands from shell with default In-Reply-To: References: Message-ID: On Fri, Mar 30, 2012 at 2:01 AM, andrew.svetlov wrote: > +- Issue #14409: IDLE doesn't not execute commands from shell, > + error with default keybinding for Return. (Patch by Roger Serwy) The double negative here makes this impossible to understand. Could we please get an updated NEWS entry that explains what actually changed in IDLE to fix this? Perhaps something like "IDLE now always sets the default keybind for Return correctly, ensuring commands can be executed in the IDLE shell window"? (assuming that's what happened). This is important, folks: NEWS entries need to be comprehensible for people that *haven't* read the associated tracker issue.
This means that issue titles (which generally describe a problem someone was having) are often inappropriate as NEWS items. NEWS items should be short descriptions that clearly describe *what changed*, perhaps with some additional information to explain a bit about why the change was made. Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From martin at v.loewis.de Fri Mar 30 05:48:42 2012 From: martin at v.loewis.de (martin at v.loewis.de) Date: Fri, 30 Mar 2012 05:48:42 +0200 Subject: [Python-Dev] Integrating the PEP 397 launcher In-Reply-To: References: Message-ID: <20120330054842.Horde.b154O8L8999PdS0aUGOUQNA@webmail.df.eu> Quoting Brian Curtin: > After talking with Martin and several others during the language > summit and elsewhere around PyCon, PEP 397 should be accepted. I don't > remember who, but some suggested it should just be a regular old > feature instead of going through the PEP process. So...does this even > need to continue the PEP process? I don't think PEP 397 can be accepted as it stands; it has too many open issues. However, I am in favor of accepting the proposed feature. Now that we do have the PEP, I think that should be done properly. I thought you offered to rewrite it. Formally, I could accept the PEP being withdrawn, and the feature integrated anyway, but I still consider that bad style. So: I can offer to rewrite the PEP to give a full specification of the feature. It might be that I still need some help to provide end-user prose in the PEP if people would want to see that as well.
Regards, Martin From brian at python.org Fri Mar 30 06:01:23 2012 From: brian at python.org (Brian Curtin) Date: Thu, 29 Mar 2012 23:01:23 -0500 Subject: [Python-Dev] Integrating the PEP 397 launcher In-Reply-To: <20120330054842.Horde.b154O8L8999PdS0aUGOUQNA@webmail.df.eu> References: <20120330054842.Horde.b154O8L8999PdS0aUGOUQNA@webmail.df.eu> Message-ID: On Thu, Mar 29, 2012 at 22:48, wrote: > Now that we do have the PEP, I think that should be done properly. > I thought you offered to rewrite it. There are definitely areas that I would like to work on, especially pulling implementation details out and replacing with, as you say, end-user prose. For example, some part of a doc tells you to call some function with a specific parameter to figure out where py.ini should be found - it needs to be replaced with an example directory. > So: I can offer to rewrite the PEP to give a full specification > of the feature. It might be that I still need some help to > provide end-user prose in the PEP if people would want to see that as > well. I would be much better at proposing the end-user stuff than the specification. From ncoghlan at gmail.com Fri Mar 30 07:33:41 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 30 Mar 2012 15:33:41 +1000 Subject: [Python-Dev] [Python-checkins] cpython (3.2): Issue #14409: IDLE doesn't not execute commands from shell with default In-Reply-To: <20120330061222.Horde.L8aPH7uWis5PdTKmYa9VVjA@webmail.df.eu> References: <20120330061222.Horde.L8aPH7uWis5PdTKmYa9VVjA@webmail.df.eu> Message-ID: On Fri, Mar 30, 2012 at 2:12 PM, wrote: > > Quoting Nick Coghlan: > > >> On Fri, Mar 30, 2012 at 2:01 AM, andrew.svetlov >> wrote: >>> >>> +- Issue #14409: IDLE doesn't not execute commands from shell, >>> + error with default keybinding for Return. (Patch by Roger Serwy) >> >> >> The double negative here makes this impossible to understand. Could we >> please get an updated NEWS entry that explains what actually changed >> in IDLE to fix this?
> > Please consider that Andrew is not a native speaker of English. So > it's unfair to ask him to rewrite the NEWS entry. That can only > be done by a native speaker. The NEWS entries need to be treated as being at least close to on par with the rest of the docs - it's OK if someone can't come up with the words themselves, but if that's the case, then it's preferable to ask for help with the wording explicitly. Is my suggested rephrasing correct? I don't know, as I'm not familiar with either the original problem or what was done to fix it. Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From martin at v.loewis.de Fri Mar 30 09:55:17 2012 From: martin at v.loewis.de (martin at v.loewis.de) Date: Fri, 30 Mar 2012 09:55:17 +0200 Subject: [Python-Dev] [Python-checkins] cpython (3.2): Issue #14409: IDLE doesn't not execute commands from shell with default In-Reply-To: References: <20120330061222.Horde.L8aPH7uWis5PdTKmYa9VVjA@webmail.df.eu> Message-ID: <20120330095517.Horde.iuDqHtjz9kRPdWblbapDMTA@webmail.df.eu> > Is my suggested rephrasing correct? I don't know, as I'm not familiar > with either the original problem or what was done to fix it. I don't know, either, and Andrew may not be able to answer the question as he may not see the fine difference between what he wrote and what you wrote (your phrasing is grammatically fairly advanced). Regards, Martin From solipsis at pitrou.net Fri Mar 30 09:54:57 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 30 Mar 2012 09:54:57 +0200 Subject: [Python-Dev] peps: PEP 418: Fill the "Adjusted by NTP" column of the summary table References: Message-ID: <20120330095457.5ef47a9c@pitrou.net> On Fri, 30 Mar 2012 04:21:22 +0200 victor.stinner wrote: > > diff --git a/pep-0418.txt b/pep-0418.txt > --- a/pep-0418.txt > +++ b/pep-0418.txt > @@ -190,13 +190,13 @@ > Name Resolution Adjusted by NTP?
Action on suspend > ========================= =============== ================ ================= Is it supposed to be resolution or accuracy? For example for QueryPerformanceCounter() and timeGetTime(), you are giving an accuracy, but for CLOCK_* you are giving a resolution. Regards Antoine. From vinay_sajip at yahoo.co.uk Fri Mar 30 11:01:53 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Fri, 30 Mar 2012 09:01:53 +0000 (UTC) Subject: [Python-Dev] Integrating the PEP 397 launcher References: <20120330054842.Horde.b154O8L8999PdS0aUGOUQNA@webmail.df.eu> Message-ID: v.loewis.de> writes: > Now that we do have the PEP, I think that should be done properly. > I thought you offered to rewrite it. Formally, I could accept the > PEP being withdrawn, and the feature integrated anyway, but I still > consider that bad style. > > So: I can offer to rewrite the PEP to give a full specification > of the feature. It might be that I still need some help to > provide end-user prose in the PEP if people would want to see that as > well. +1 to both your and Brian's input. I will aim to keep the implementation in sync with any changes required as a result of resolving open issues. Regards, Vinay Sajip From storchaka at gmail.com Fri Mar 30 12:38:13 2012 From: storchaka at gmail.com (Serhiy Storchaka) Date: Fri, 30 Mar 2012 13:38:13 +0300 Subject: [Python-Dev] datetime module and pytz with dateutil In-Reply-To: References: Message-ID: 28.03.12 23:20, Andrew Svetlov wrote: > I figured out what pytz and dateutil are not mentioned in python docs > for datetime module. > It's clean why these libs is not a part of Python Libraries - but > that's not clean for Docs. I don't understand why Python may not include the pytz. The Olson tz database is not part of pytz. Python can depend on a system tz database, as it depends on libssl or libbz2, which also can be updated (for security reasons) independently.
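The gap Serhiy is pointing at can be made concrete: without an Olson-style database, the stdlib datetime module of that era could only express fixed UTC offsets, so nothing maps a given date to the correct offset for a named zone. A minimal stdlib-only sketch of that limitation (the zone names and dates below are illustrative, not taken from the thread):

```python
from datetime import datetime, timedelta, timezone

# Without a tz database, the stdlib can only express *fixed* offsets.
# Paris winter time (CET, UTC+1) and summer time (CEST, UTC+2) need two
# separate tzinfo objects; nothing maps a date to the right one.
cet = timezone(timedelta(hours=1), "CET")
cest = timezone(timedelta(hours=2), "CEST")

winter = datetime(2012, 1, 15, 12, 0, tzinfo=cet)
summer = datetime(2012, 7, 15, 12, 0, tzinfo=cest)

# A single "Europe/Paris" zone that picks the correct offset per date
# automatically is exactly what the Olson database (and pytz) provides.
print(winter.utcoffset())  # 1:00:00
print(summer.utcoffset())  # 2:00:00
```

Picking the right fixed-offset object by hand is the DST bookkeeping that a system tz database would take over, which is why the question of bundling versus depending on the OS keeps coming up.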
From victor.stinner at gmail.com Fri Mar 30 13:55:21 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Fri, 30 Mar 2012 13:55:21 +0200 Subject: [Python-Dev] peps: PEP 418: Fill the "Adjusted by NTP" column of the summary table In-Reply-To: <20120330095457.5ef47a9c@pitrou.net> References: <20120330095457.5ef47a9c@pitrou.net> Message-ID: >> diff --git a/pep-0418.txt b/pep-0418.txt >> --- a/pep-0418.txt >> +++ b/pep-0418.txt >> @@ -190,13 +190,13 @@ >> Name Resolution Adjusted by NTP? Action on suspend >> ========================= =============== ================ ================= > > Is it supposed to be resolution or accuracy? Oh, right. I changed the table to add an accuracy column. Victor From tlesher at gmail.com Fri Mar 30 14:18:06 2012 From: tlesher at gmail.com (Tim Lesher) Date: Fri, 30 Mar 2012 08:18:06 -0400 Subject: [Python-Dev] Bug in generator if the generator is created in a C thread In-Reply-To: <4F74D392.2040808@gmail.com> References: <4F74D392.2040808@gmail.com> Message-ID: On Thu, Mar 29, 2012 at 17:26, Victor Stinner wrote: > The problem is not the frame, but the Python thread state referenced by the > frame. It's a private attribute. My patch just updates this reference before > running the generator (and it clears the reference when the generator > execution is stopped or finished). Right--my thought was that the situation we saw might be similarly triggered because we were storing an exception with traceback (and associated frames) generated by thread A, and then re-throwing it from thread B some time after thread A has exited. The frame attached to the exception's traceback would still, then, be referencing a nonexistent thread state, correct? My concern was that this might be one instance of a broader problem for folks who embed Python and see the attractive PyGILState_Ensure() API.
It already prevents us from using subinterpreters (which for us might have been a better solution than repeated initialize/finalize, with its known issues). We recently made a change to dispose of the traceback before storing the exception, and that appears to eliminate the corruption we were seeing, so that's making me more suspicious. > You may leak memory if your threads have a short lifetime and you create > many threads. For example if one thread is only used to process one request > and then is destroyed. Absolutely--this particular hack was only used for a thread created outside Python that had to call into the VM. Their behavior is well-defined in our case--two particular OS threads, with lifetimes longer than those of the interpreters we create and finalize. -- Tim Lesher From animelovin at gmail.com Fri Mar 30 14:20:32 2012 From: animelovin at gmail.com (Etienne Robillard) Date: Fri, 30 Mar 2012 08:20:32 -0400 Subject: [Python-Dev] Issue 14417: consequences of new dict runtime error In-Reply-To: <20120329220755.377052500E9@webabinitio.net> References: <20120329195825.843352500E9@webabinitio.net> <20120329203103.95A4B2500E9@webabinitio.net> <20120329220755.377052500E9@webabinitio.net> Message-ID: <4F75A510.7080401@gmail.com> On 03/29/2012 06:07 PM, R. David Murray wrote: > On Thu, 29 Mar 2012 23:00:20 +0200, Stefan Behnel wrote: >> R. David Murray, 29.03.2012 22:31: >>> On Thu, 29 Mar 2012 13:09:17 -0700, Guido van Rossum wrote: >>>> On Thu, Mar 29, 2012 at 12:58 PM, R. David Murray wrote: >>>>> Some of us have expressed uneasiness about the consequences of dict >>>>> raising an error on lookup if the dict has been modified, the fix Victor >>>>> made to solve one of the crashers. >>>>> >>>>> I don't know if I speak for the others, but (assuming that I understand >>>>> the change correctly) my concern is that there is probably a significant >>>>> amount of threading code out there that assumes that dict *lookup* is >>>>> a thread-safe operation. 
Much of that code will, if moved to Python >>>>> 3.3, now be subject to random runtime errors for which it will not >>>>> be prepared. Further, code which appears safe can suddenly become >>>>> unsafe if a refactoring of the code causes an object to be stored in >>>>> the dictionary that has a Python equality method. >>>> >>>> My original assessment was that this only affects dicts whose keys >>>> have a user-implemented __hash__ or __eq__ implementation, and that >>>> the number of apps that use this *and* assume the threadsafe property >>>> would be pretty small. This is just intuition, I don't have hard >>>> facts. But I do want to stress that not all dict lookups automatically >>>> become thread-unsafe, only those that need to run user code as part of >>>> the key lookup. >>> >>> You are probably correct, but the thing is that one still has to do the >>> code audit to be sure...and then make sure that no one later introduces >>> such an object type as a dict key. >> >> The thing is: the assumption that arbitrary dict lookups are GIL-atomic has >> *always* been false. Only those that do not involve Python code execution >> for the hash key calculation or the object comparison are. That includes >> the built-in strings and numbers (and tuples of them), which are by far the >> most common dict keys. Looking up arbitrary user provided objects is >> definitely not guaranteed to be atomic. > > Well, I'm afraid I was using the term 'thread safety' rather too loosely > there. What I mean is that if you do a dict lookup, the lookup either > returns a value or a KeyError, and that if you get back an object that > object has internally consistent state. The problem this fix introduces > is that the lookup may fail with a RuntimeError rather than a KeyError, > which it has never done before. > > I think that is what Guido means by code that uses objects with python > eq/hash *and* assumes threadsafe lookup. 
If mutation of the objects > or dict during the lookup is a concern, then the code would use locks > and wouldn't have the problem. But there are certainly situations > where it doesn't matter if the dictionary mutates during the lookup, > as long as you get either an object or a KeyError, and thus no locks are > (currently) needed. > > Maybe I'm being paranoid about breakage here, but as with most backward > compatibility concerns, there are probably more bits of code that will > be affected than our intuition indicates. > > --David > _______________________________________________ What is this supposed to mean exactly? To "mutate" is a bit of an odd concept for a programming language, I suppose. Also, I suppose I must be missing something which makes you feel like this is an OT post, when the problem seems most likely to be exclusive to Python 3.3 - another reason, I guess, to not upgrade all that massively yet using 2to3. :-) cheers, Etienne From rdmurray at bitdance.com Fri Mar 30 14:47:46 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Fri, 30 Mar 2012 08:47:46 -0400 Subject: [Python-Dev] datetime module and pytz with dateutil In-Reply-To: References: Message-ID: <20120330124746.A50E82500F8@webabinitio.net> On Fri, 30 Mar 2012 13:38:13 +0300, Serhiy Storchaka wrote: > 28.03.12 23:20, Andrew Svetlov wrote: > > I figured out what pytz and dateutil are not mentioned in python docs > > for datetime module. > > It's clean why these libs is not a part of Python Libraries - but > > that's not clean for Docs. > > I don't understand why Python may not include the pytz. The Olson tz > database is not part of pytz. Python can depend on a system tz database, > as it depends on libssl or libbz2, which also can be updated (for > security reasons) independently. There is an extensive discussion of this somewhere in the archives of this list.
If I remember correctly, it boils down to the fact that pytz does bundle the database, and that Windows either does not have or does not regularly update its own Olson database. Rather than ship something out-of-date, we choose to put the onus on the user to ensure that the appropriate code+db exists on their system. Hopefully someone will correct me if I'm wrong, and/or find a pointer to the relevant thread. --David From andrew.svetlov at gmail.com Fri Mar 30 15:05:23 2012 From: andrew.svetlov at gmail.com (Andrew Svetlov) Date: Fri, 30 Mar 2012 16:05:23 +0300 Subject: [Python-Dev] datetime module and pytz with dateutil In-Reply-To: <20120330124746.A50E82500F8@webabinitio.net> References: <20120330124746.A50E82500F8@webabinitio.net> Message-ID: I filed the http://bugs.python.org/issue14448 BTW. On Fri, Mar 30, 2012 at 3:47 PM, R. David Murray wrote: > On Fri, 30 Mar 2012 13:38:13 +0300, Serhiy Storchaka wrote: >> 28.03.12 23:20, Andrew Svetlov wrote: >> > I figured out what pytz and dateutil are not mentioned in python docs >> > for datetime module. >> > It's clean why these libs is not a part of Python Libraries - but >> > that's not clean for Docs. >> >> I don't understand why Python may not include the pytz. The Olson tz >> database is not part of pytz. Python can depend on a system tz database, >> as it depends on libssl or libbz2, which also can be updated (for >> security reasons) independently. > > There is an extensive discussion of this somewhere in the archives of > this list. If I remember correctly, it boils down to the fact that pytz > does bundle the database, and that Windows either does not have or does > not regularly update its own Olson database. Rather than ship something > out-of-date, we choose to put the onus on the user to ensure that the > appropriate code+db exists on their system. > > Hopefully someone will correct me if I'm wrong, and/or find a pointer > to the relevant thread.
> > --David > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/andrew.svetlov%40gmail.com > -- Thanks, Andrew Svetlov From ncoghlan at gmail.com Fri Mar 30 15:56:04 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 30 Mar 2012 23:56:04 +1000 Subject: [Python-Dev] datetime module and pytz with dateutil In-Reply-To: <20120330124746.A50E82500F8@webabinitio.net> References: <20120330124746.A50E82500F8@webabinitio.net> Message-ID: On Fri, Mar 30, 2012 at 10:47 PM, R. David Murray wrote: > On Fri, 30 Mar 2012 13:38:13 +0300, Serhiy Storchaka wrote: >> 28.03.12 23:20, Andrew Svetlov wrote: >> > I figured out what pytz and dateutil are not mentioned in python docs >> > for datetime module. >> > It's clean why these libs is not a part of Python Libraries - but >> > that's not clean for Docs. >> >> I don't understand why Python may not include the pytz. The Olson tz >> database is not part of pytz. Python can depend on a system tz database, >> as it depends on libssl or libbz2, which also can be updated (for >> security reasons) independently. > > There is an extensive discussion of this somewhere in the archives of > this list. If I remember correctly, it boils down to the fact that pytz > does bundle the database, and that Windows either does not have or does > not regularly update its own Olson database. Rather than ship something > out-of-date, we choose to put the onus on the user to ensure that the > appropriate code+db exists on their system. > > Hopefully someone will correct me if I'm wrong, and/or find a pointer > to the relevant thread. That's my recollection as well.
Because we don't want to take on the task of providing timely updates in response to timezone database changes, any named timezone support added to the stdlib would need to be based on a system-provided timezone database, rather than the bundled database model used by pytz. This is straightforward on *nix based systems that provide the zoneinfo structure in the filesystem, but more complicated on Windows (which has its own custom scheme). Before the idea of adding full timezone support to the standard library could be seriously considered someone would have to, at the very least, use the mapping data from the Unicode Consortium's CLDR Supplementary data to map the standard Olson database timezone names to the correct values to look up through the Windows timezone APIs (http://unicode.org/repos/cldr-tmp/trunk/diff/supplemental/zone_tzid.html) Adding mappings for *new* timezones would still be controlled by our release cycle (although I think it would be reasonable to permit such additions in maintenance releases), but updates in response to things like daylight savings dates changing would then be the responsibility of the OS vendors. However, "pip install pytz" is easy enough that there isn't a lot of motivation for anyone to do the work to switch from a bundled copy of the timezone database to a bundled TZID -> Windows API lookup mapping. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From guido at python.org Fri Mar 30 16:47:38 2012 From: guido at python.org (Guido van Rossum) Date: Fri, 30 Mar 2012 07:47:38 -0700 Subject: [Python-Dev] Issue 14417: consequences of new dict runtime error In-Reply-To: <4F75A510.7080401@gmail.com> References: <20120329195825.843352500E9@webabinitio.net> <20120329203103.95A4B2500E9@webabinitio.net> <20120329220755.377052500E9@webabinitio.net> <4F75A510.7080401@gmail.com> Message-ID: Etienne, I have not understood either of your messages in this thread. They just did not make sense to me.
Do you actually understand the issue at hand? --Guido On Friday, March 30, 2012, Etienne Robillard wrote: > On 03/29/2012 06:07 PM, R. David Murray wrote: > >> On Thu, 29 Mar 2012 23:00:20 +0200, Stefan Behnel >> wrote: >> >>> R. David Murray, 29.03.2012 22:31: >>> >>>> On Thu, 29 Mar 2012 13:09:17 -0700, Guido van Rossum wrote: >>>> >>>>> On Thu, Mar 29, 2012 at 12:58 PM, R. David Murray wrote: >>>>> >>>>>> Some of us have expressed uneasiness about the consequences of dict >>>>>> raising an error on lookup if the dict has been modified, the fix >>>>>> Victor >>>>>> made to solve one of the crashers. >>>>>> >>>>>> I don't know if I speak for the others, but (assuming that I >>>>>> understand >>>>>> the change correctly) my concern is that there is probably a >>>>>> significant >>>>>> amount of threading code out there that assumes that dict *lookup* is >>>>>> a thread-safe operation. Much of that code will, if moved to Python >>>>>> 3.3, now be subject to random runtime errors for which it will not >>>>>> be prepared. Further, code which appears safe can suddenly become >>>>>> unsafe if a refactoring of the code causes an object to be stored in >>>>>> the dictionary that has a Python equality method. >>>>>> >>>>> >>>>> My original assessment was that this only affects dicts whose keys >>>>> have a user-implemented __hash__ or __eq__ implementation, and that >>>>> the number of apps that use this *and* assume the threadsafe property >>>>> would be pretty small. This is just intuition, I don't have hard >>>>> facts. But I do want to stress that not all dict lookups automatically >>>>> become thread-unsafe, only those that need to run user code as part of >>>>> the key lookup. >>>>> >>>> >>>> You are probably correct, but the thing is that one still has to do the >>>> code audit to be sure...and then make sure that no one later introduces >>>> such an object type as a dict key. 
>>>> >>> >>> The thing is: the assumption that arbitrary dict lookups are GIL-atomic >>> has >>> *always* been false. Only those that do not involve Python code execution >>> for the hash key calculation or the object comparison are. That includes >>> the built-in strings and numbers (and tuples of them), which are by far >>> the >>> most common dict keys. Looking up arbitrary user provided objects is >>> definitely not guaranteed to be atomic. >>> >> >> Well, I'm afraid I was using the term 'thread safety' rather too loosely >> there. What I mean is that if you do a dict lookup, the lookup either >> returns a value or a KeyError, and that if you get back an object that >> object has internally consistent state. The problem this fix introduces >> is that the lookup may fail with a RuntimeError rather than a KeyError, >> which it has never done before. >> >> I think that is what Guido means by code that uses objects with python >> eq/hash *and* assumes threadsafe lookup. If mutation of the objects >> or dict during the lookup is a concern, then the code would use locks >> and wouldn't have the problem. But there are certainly situations >> where it doesn't matter if the dictionary mutates during the lookup, >> as long as you get either an object or a KeyError, and thus no locks are >> (currently) needed. >> >> Maybe I'm being paranoid about breakage here, but as with most backward >> compatibility concerns, there are probably more bits of code that will >> be affected than our intuition indicates. >> >> --David >> ______________________________**_________________ >> > > what this suppose to mean exactly? To "mutate" is a bit odd concept for a > programming language I suppose. Also I suppose I must be missing something > which makes you feel like this is an OT post when the problem seem most > likely to be exclusively in python 3.3, another reason I guess to not > upgrade yet all that massively using 2to3. 
:-) > > cheers, > Etienne > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From animelovin at gmail.com Fri Mar 30 17:27:05 2012 From: animelovin at gmail.com (Etienne Robillard) Date: Fri, 30 Mar 2012 11:27:05 -0400 Subject: [Python-Dev] Issue 14417: consequences of new dict runtime error In-Reply-To: References: <20120329195825.843352500E9@webabinitio.net> <20120329203103.95A4B2500E9@webabinitio.net> <20120329220755.377052500E9@webabinitio.net> <4F75A510.7080401@gmail.com> Message-ID: <4F75D0C9.4080704@gmail.com> Hi Guido, I'm sorry for being unclear! I am actually just trying to learn what the consequences of these 'unattended' mutations in dictionary key lookups could be. :-) However, it seems now that I might have touched a nerve without realizing it. I would therefore appreciate more light on this "issue" if you would like to enlighten us all. :D Regards, Etienne On 03/30/2012 10:47 AM, Guido van Rossum wrote: > Etienne, I have not understood either of your messages in this thread. > They just did not make sense to me. Do you actually understand the issue > at hand? > > --Guido > > On Friday, March 30, 2012, Etienne Robillard wrote: > > On 03/29/2012 06:07 PM, R. David Murray wrote: > > On Thu, 29 Mar 2012 23:00:20 +0200, Stefan > Behnel wrote: > > R. David Murray, 29.03.2012 22:31: > > On Thu, 29 Mar 2012 13:09:17 -0700, Guido van Rossum wrote: > > On Thu, Mar 29, 2012 at 12:58 PM, R. David Murray wrote: > > Some of us have expressed uneasiness about the > consequences of dict > raising an error on lookup if the dict has been > modified, the fix Victor > made to solve one of the crashers.
> > I don't know if I speak for the others, but > (assuming that I understand > the change correctly) my concern is that there > is probably a significant > amount of threading code out there that assumes > that dict *lookup* is > a thread-safe operation. Much of that code > will, if moved to Python > 3.3, now be subject to random runtime errors for > which it will not > be prepared. Further, code which appears safe > can suddenly become > unsafe if a refactoring of the code causes an > object to be stored in > the dictionary that has a Python equality method. > > > My original assessment was that this only affects > dicts whose keys > have a user-implemented __hash__ or __eq__ > implementation, and that > the number of apps that use this *and* assume the > threadsafe property > would be pretty small. This is just intuition, I > don't have hard > facts. But I do want to stress that not all dict > lookups automatically > become thread-unsafe, only those that need to run > user code as part of > the key lookup. > > > You are probably correct, but the thing is that one > still has to do the > code audit to be sure...and then make sure that no one > later introduces > such an object type as a dict key. > > > The thing is: the assumption that arbitrary dict lookups are > GIL-atomic has > *always* been false. Only those that do not involve Python > code execution > for the hash key calculation or the object comparison are. > That includes > the built-in strings and numbers (and tuples of them), which > are by far the > most common dict keys. Looking up arbitrary user provided > objects is > definitely not guaranteed to be atomic. > > > Well, I'm afraid I was using the term 'thread safety' rather too > loosely > there. What I mean is that if you do a dict lookup, the lookup > either > returns a value or a KeyError, and that if you get back an > object that > object has internally consistent state. 
The problem this fix > introduces > is that the lookup may fail with a RuntimeError rather than a > KeyError, > which it has never done before. > > I think that is what Guido means by code that uses objects with > python > eq/hash *and* assumes threadsafe lookup. If mutation of the objects > or dict during the lookup is a concern, then the code would use > locks > and wouldn't have the problem. But there are certainly situations > where it doesn't matter if the dictionary mutates during the lookup, > as long as you get either an object or a KeyError, and thus no > locks are > (currently) needed. > > Maybe I'm being paranoid about breakage here, but as with most > backward > compatibility concerns, there are probably more bits of code > that will > be affected than our intuition indicates. > > --David > ______________________________ _________________ > > > what this suppose to mean exactly? To "mutate" is a bit odd concept > for a programming language I suppose. Also I suppose I must be > missing something which makes you feel like this is an OT post when > the problem seem most likely to be exclusively in python 3.3, > another reason I guess to not upgrade yet all that massively using > 2to3. 
:-) > > cheers, > Etienne > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > > Unsubscribe: http://mail.python.org/mailman/options/python-dev/ > guido%40python.org > > > > > -- > --Guido van Rossum (python.org/~guido) From ncoghlan at gmail.com Fri Mar 30 17:41:53 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 31 Mar 2012 01:41:53 +1000 Subject: [Python-Dev] Issue 14417: consequences of new dict runtime error In-Reply-To: <4F75D0C9.4080704@gmail.com> References: <20120329195825.843352500E9@webabinitio.net> <20120329203103.95A4B2500E9@webabinitio.net> <20120329220755.377052500E9@webabinitio.net> <4F75A510.7080401@gmail.com> <4F75D0C9.4080704@gmail.com> Message-ID: On Sat, Mar 31, 2012 at 1:27 AM, Etienne Robillard wrote: > Hi Guido, > > I'm sorry for being unclear! I just try actually to learn what thoses > consequences for theses 'unattended' mutations in dictionary key lookups > could be, :-) > > however, it seems now that I might have touch a nerve without realizing it. > I would therefore appreciate more light on this "issue" if you like > to enlighten us all. :D Etienne, For those that need to understand the issue in order to further consider the consequences of the change, RDM has already explained the problem quite clearly. If you'd like a more in-depth explanation, please ask the question again over on core-mentorship at python.org. It's not an appropriate topic for the main development list. Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | 
Brisbane, Australia From animelovin at gmail.com Fri Mar 30 17:45:05 2012 From: animelovin at gmail.com (Etienne Robillard) Date: Fri, 30 Mar 2012 11:45:05 -0400 Subject: [Python-Dev] Issue 14417: consequences of new dict runtime error In-Reply-To: References: <20120329195825.843352500E9@webabinitio.net> <20120329203103.95A4B2500E9@webabinitio.net> <20120329220755.377052500E9@webabinitio.net> <4F75A510.7080401@gmail.com> Message-ID: <4F75D501.2050400@gmail.com> "Multiple threads can agree by convention not to mutate a shared dict, there's no great need for enforcement. Multiple processes can't share dicts." its not sure I get completely the meaning of "mutate"... And if possible, I would like also the rational for the 2nd phrase while we're at it as it seem a little unclear too. Sorry also if this is OT... :) Regards, Etienne http://www.python.org/dev/peps/pep-0416/ On 03/30/2012 10:47 AM, Guido van Rossum wrote: > Etienne, I have not understood either of your messages in this thread. > They just did not make sense to me. Do you actually understand the issue > at hand? > > --Guido > > On Friday, March 30, 2012, Etienne Robillard wrote: > > On 03/29/2012 06:07 PM, R. David Murray wrote: > > On Thu, 29 Mar 2012 23:00:20 +0200, Stefan > Behnel wrote: > > R. David Murray, 29.03.2012 22:31: > > On Thu, 29 Mar 2012 13:09:17 -0700, Guido van Rossum wrote: > > On Thu, Mar 29, 2012 at 12:58 PM, R. David Murray wrote: > > Some of us have expressed uneasiness about the > consequences of dict > raising an error on lookup if the dict has been > modified, the fix Victor > made to solve one of the crashers. > > I don't know if I speak for the others, but > (assuming that I understand > the change correctly) my concern is that there > is probably a significant > amount of threading code out there that assumes > that dict *lookup* is > a thread-safe operation. 
Much of that code > will, if moved to Python > 3.3, now be subject to random runtime errors for > which it will not > be prepared. Further, code which appears safe > can suddenly become > unsafe if a refactoring of the code causes an > object to be stored in > the dictionary that has a Python equality method. > > > My original assessment was that this only affects > dicts whose keys > have a user-implemented __hash__ or __eq__ > implementation, and that > the number of apps that use this *and* assume the > threadsafe property > would be pretty small. This is just intuition, I > don't have hard > facts. But I do want to stress that not all dict > lookups automatically > become thread-unsafe, only those that need to run > user code as part of > the key lookup. > > > You are probably correct, but the thing is that one > still has to do the > code audit to be sure...and then make sure that no one > later introduces > such an object type as a dict key. > > > The thing is: the assumption that arbitrary dict lookups are > GIL-atomic has > *always* been false. Only those that do not involve Python > code execution > for the hash key calculation or the object comparison are. > That includes > the built-in strings and numbers (and tuples of them), which > are by far the > most common dict keys. Looking up arbitrary user provided > objects is > definitely not guaranteed to be atomic. > > > Well, I'm afraid I was using the term 'thread safety' rather too > loosely > there. What I mean is that if you do a dict lookup, the lookup > either > returns a value or a KeyError, and that if you get back an > object that > object has internally consistent state. The problem this fix > introduces > is that the lookup may fail with a RuntimeError rather than a > KeyError, > which it has never done before. > > I think that is what Guido means by code that uses objects with > python > eq/hash *and* assumes threadsafe lookup. 
If mutation of the objects > or dict during the lookup is a concern, then the code would use > locks > and wouldn't have the problem. But there are certainly situations > where it doesn't matter if the dictionary mutates during the lookup, > as long as you get either an object or a KeyError, and thus no > locks are > (currently) needed. > > Maybe I'm being paranoid about breakage here, but as with most > backward > compatibility concerns, there are probably more bits of code > that will > be affected than our intuition indicates. > > --David > ______________________________ _________________ > > > what this suppose to mean exactly? To "mutate" is a bit odd concept > for a programming language I suppose. Also I suppose I must be > missing something which makes you feel like this is an OT post when > the problem seem most likely to be exclusively in python 3.3, > another reason I guess to not upgrade yet all that massively using > 2to3. :-) > > cheers, > Etienne > ______________________________ _________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/ mailman/listinfo/python-dev > > Unsubscribe: http://mail.python.org/ mailman/options/python-dev/ > guido%40python.org > > > > > -- > --Guido van Rossum (python.org/~guido ) From stefan_ml at behnel.de Fri Mar 30 17:54:31 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Fri, 30 Mar 2012 17:54:31 +0200 Subject: [Python-Dev] Issue 14417: consequences of new dict runtime error In-Reply-To: <4F75D501.2050400@gmail.com> References: <20120329195825.843352500E9@webabinitio.net> <20120329203103.95A4B2500E9@webabinitio.net> <20120329220755.377052500E9@webabinitio.net> <4F75A510.7080401@gmail.com> <4F75D501.2050400@gmail.com> Message-ID: Etienne Robillard, 30.03.2012 17:45: > Sorry also if this is OT... :) Yes, it is. Please do as Nick told you. 
Stefan From animelovin at gmail.com Fri Mar 30 18:08:23 2012 From: animelovin at gmail.com (Etienne Robillard) Date: Fri, 30 Mar 2012 12:08:23 -0400 Subject: [Python-Dev] Issue 14417: consequences of new dict runtime error In-Reply-To: References: <20120329195825.843352500E9@webabinitio.net> <20120329203103.95A4B2500E9@webabinitio.net> <20120329220755.377052500E9@webabinitio.net> <4F75A510.7080401@gmail.com> <4F75D501.2050400@gmail.com> Message-ID: <4F75DA77.6040305@gmail.com> you wish...are you also truth allergic or irritated by the consequences of free speech ? Please stop giving me orders. You don't even know me and this is at all not necessary and good netiquette if you want to bring a point to ponder. Sorry for others who thinks this is not OT as I its probably related to pep-416 refusal. Cheers! Etienne On 03/30/2012 11:54 AM, Stefan Behnel wrote: > Etienne Robillard, 30.03.2012 17:45: >> Sorry also if this is OT... :) > > Yes, it is. Please do as Nick told you. > > Stefan From status at bugs.python.org Fri Mar 30 18:07:11 2012 From: status at bugs.python.org (Python tracker) Date: Fri, 30 Mar 2012 18:07:11 +0200 (CEST) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20120330160711.97FC61C9A0@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2012-03-23 - 2012-03-30) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. 
Issues counts and deltas: open 3359 (+13) closed 22871 (+42) total 26230 (+55) Open issues with patches: 1438 Issues opened (35) ================== #14396: Popen wait() doesn't handle spurious wakeups http://bugs.python.org/issue14396 opened by amscanne #14397: Use GetTickCount/GetTickCount64 instead of QueryPerformanceCou http://bugs.python.org/issue14397 opened by haypo #14398: bz2.BZ2DEcompressor.decompress fail on large files http://bugs.python.org/issue14398 opened by Laurent.Gautier #14399: zipfile and creat/update comment http://bugs.python.org/issue14399 opened by acassaigne #14405: Some "Other Resources" in the sidebar are hopelessly out of da http://bugs.python.org/issue14405 opened by nedbat #14406: Race condition in concurrent.futures http://bugs.python.org/issue14406 opened by anacrolix #14407: concurrent.futures tests don't adher to test_cases protocol http://bugs.python.org/issue14407 opened by anacrolix #14408: Support ./python -m unittest in the stdlib tests http://bugs.python.org/issue14408 opened by anacrolix #14412: Sqlite Integer Fields http://bugs.python.org/issue14412 opened by goatsofmendez #14414: xmlrpclib leaves connection in broken state if server returns http://bugs.python.org/issue14414 opened by fancycode #14417: dict RuntimeError workaround http://bugs.python.org/issue14417 opened by Jim.Jewett #14418: Document differences in SocketServer between Python 2.6 and 2. 
http://bugs.python.org/issue14418 opened by gjb1002 #14419: Faster ascii decoding http://bugs.python.org/issue14419 opened by storchaka #14420: winreg SetValueEx DWord type incompatible with value argument http://bugs.python.org/issue14420 opened by RoSanford #14422: Pack PyASCIIObject fields to reduce memory consumption of pure http://bugs.python.org/issue14422 opened by haypo #14423: Getting the starting date of iso week from a week number and a http://bugs.python.org/issue14423 opened by Esben.Agerb??k.Black #14424: document PyType_GenericAlloc http://bugs.python.org/issue14424 opened by eli.bendersky #14425: Improve handling of 'timeout' parameter default in urllib.urlo http://bugs.python.org/issue14425 opened by r.david.murray #14426: date format problem in Cookie/http.cookies http://bugs.python.org/issue14426 opened by M??te.Invert #14427: urllib.request.Request get_header and header_items not documen http://bugs.python.org/issue14427 opened by r.david.murray #14428: Implementation of the PEP 418 http://bugs.python.org/issue14428 opened by haypo #14432: Bug in generator if the generator in created in a C thread http://bugs.python.org/issue14432 opened by haypo #14433: Python 3 interpreter crash on windows when stdin closed in Pyt http://bugs.python.org/issue14433 opened by alexis.d #14434: Tutorial link in "help()" in Python3 points to Python2 tutoria http://bugs.python.org/issue14434 opened by Dubslow #14437: _io build fails on cygwin http://bugs.python.org/issue14437 opened by luch #14439: Easier error diagnosis when bootstrapping the runpy module in http://bugs.python.org/issue14439 opened by haypo #14440: Close background process if IDLE closes abnormally. 
http://bugs.python.org/issue14440 opened by asvetlov #14443: Distutils test failure http://bugs.python.org/issue14443 opened by rosslagerwall #14444: Virtualenv not portable from Python 2.7.2 to 2.7.3 (os.urandom http://bugs.python.org/issue14444 opened by jason.coombs #14445: Providing more fine-grained control over assert statements http://bugs.python.org/issue14445 opened by max #14446: Remove deprecated tkinter functions http://bugs.python.org/issue14446 opened by asvetlov #14448: Mention pytz in datetime's docs http://bugs.python.org/issue14448 opened by asvetlov #14449: argparse optional arguments should follow getopt_long(3) http://bugs.python.org/issue14449 opened by mamikonyan #14450: Log rotate cant execute in Windows. (logging module) http://bugs.python.org/issue14450 opened by shinta.nakayama #14452: SysLogHandler sends invalid messages when using unicode http://bugs.python.org/issue14452 opened by zmk Most recent 15 issues with no replies (15) ========================================== #14452: SysLogHandler sends invalid messages when using unicode http://bugs.python.org/issue14452 #14446: Remove deprecated tkinter functions http://bugs.python.org/issue14446 #14440: Close background process if IDLE closes abnormally. http://bugs.python.org/issue14440 #14427: urllib.request.Request get_header and header_items not documen http://bugs.python.org/issue14427 #14424: document PyType_GenericAlloc http://bugs.python.org/issue14424 #14418: Document differences in SocketServer between Python 2.6 and 2. http://bugs.python.org/issue14418 #14414: xmlrpclib leaves connection in broken state if server returns http://bugs.python.org/issue14414 #14407: concurrent.futures tests don't adher to test_cases protocol http://bugs.python.org/issue14407 #14379: Several traceback docs improvements http://bugs.python.org/issue14379 #14345: Document socket.SOL_SOCKET http://bugs.python.org/issue14345 #14341: sporadic (?) 
test_urllib2 failures http://bugs.python.org/issue14341 #14339: Optimizing bin, oct and hex http://bugs.python.org/issue14339 #14336: Difference between pickle implementations for function objects http://bugs.python.org/issue14336 #14329: proxy_bypass_macosx_sysconf does not handle singel ip addresse http://bugs.python.org/issue14329 #14326: IDLE - allow shell to support different locales http://bugs.python.org/issue14326 Most recent 15 issues waiting for review (15) ============================================= #14448: Mention pytz in datetime's docs http://bugs.python.org/issue14448 #14439: Easier error diagnosis when bootstrapping the runpy module in http://bugs.python.org/issue14439 #14433: Python 3 interpreter crash on windows when stdin closed in Pyt http://bugs.python.org/issue14433 #14432: Bug in generator if the generator in created in a C thread http://bugs.python.org/issue14432 #14428: Implementation of the PEP 418 http://bugs.python.org/issue14428 #14425: Improve handling of 'timeout' parameter default in urllib.urlo http://bugs.python.org/issue14425 #14423: Getting the starting date of iso week from a week number and a http://bugs.python.org/issue14423 #14422: Pack PyASCIIObject fields to reduce memory consumption of pure http://bugs.python.org/issue14422 #14419: Faster ascii decoding http://bugs.python.org/issue14419 #14408: Support ./python -m unittest in the stdlib tests http://bugs.python.org/issue14408 #14407: concurrent.futures tests don't adher to test_cases protocol http://bugs.python.org/issue14407 #14406: Race condition in concurrent.futures http://bugs.python.org/issue14406 #14399: zipfile and creat/update comment http://bugs.python.org/issue14399 #14396: Popen wait() doesn't handle spurious wakeups http://bugs.python.org/issue14396 #14392: type=bool doesn't raise error in argparse.Action http://bugs.python.org/issue14392 Top 10 most discussed issues (10) ================================= #14386: Expose dictproxy as a public type 
http://bugs.python.org/issue14386 28 msgs #14408: Support ./python -m unittest in the stdlib tests http://bugs.python.org/issue14408 22 msgs #14288: Make iterators pickleable http://bugs.python.org/issue14288 15 msgs #14419: Faster ascii decoding http://bugs.python.org/issue14419 13 msgs #14444: Virtualenv not portable from Python 2.7.2 to 2.7.3 (os.urandom http://bugs.python.org/issue14444 13 msgs #3367: Uninitialized value read in parsetok.c http://bugs.python.org/issue3367 12 msgs #14399: zipfile and creat/update comment http://bugs.python.org/issue14399 12 msgs #14433: Python 3 interpreter crash on windows when stdin closed in Pyt http://bugs.python.org/issue14433 11 msgs #14398: bz2.BZ2DEcompressor.decompress fail on large files http://bugs.python.org/issue14398 9 msgs #14412: Sqlite Integer Fields http://bugs.python.org/issue14412 7 msgs Issues closed (41) ================== #786827: IDLE starts with no menus (Cygwin) http://bugs.python.org/issue786827 closed by asvetlov #4528: test_httpservers consistently fails - OSError: [Errno 13] Perm http://bugs.python.org/issue4528 closed by r.david.murray #5707: IDLE will not load http://bugs.python.org/issue5707 closed by asvetlov #6488: ElementTree documentation refers to "path" with no explanation http://bugs.python.org/issue6488 closed by eli.bendersky #7635: 19.6 xml.dom.pulldom doc: stub? 
http://bugs.python.org/issue7635 closed by r.david.murray #11826: Leak in atexitmodule http://bugs.python.org/issue11826 closed by skrah #12649: email.Header ignores maxlinelen when wrapping encoded words http://bugs.python.org/issue12649 closed by r.david.murray #12932: dircmp does not allow non-shallow comparisons http://bugs.python.org/issue12932 closed by eric.araujo #13438: "Delete patch set" review action doesn't work http://bugs.python.org/issue13438 closed by eric.araujo #13608: remove born-deprecated PyUnicode_AsUnicodeAndSize http://bugs.python.org/issue13608 closed by loewis #13902: Sporadic test_threading failure on FreeBSD 6.4 buildbot http://bugs.python.org/issue13902 closed by neologix #14065: Element should support cyclic GC http://bugs.python.org/issue14065 closed by eli.bendersky #14154: reimplement the bigmem test memory watchdog as a subprocess http://bugs.python.org/issue14154 closed by neologix #14162: PEP 416: Add a builtin frozendict type http://bugs.python.org/issue14162 closed by haypo #14349: The documentation of 'dis' doesn't describe MAKE_FUNCTION corr http://bugs.python.org/issue14349 closed by eli.bendersky #14357: Distutils2 does not work with virtualenv http://bugs.python.org/issue14357 closed by eric.araujo #14361: No link to issue tracker on Python home page http://bugs.python.org/issue14361 closed by r.david.murray #14368: floattime() should not raise an exception http://bugs.python.org/issue14368 closed by haypo #14381: Intern certain integral floats for memory savings and performa http://bugs.python.org/issue14381 closed by krisvale #14383: Generalize the use of _Py_IDENTIFIER in ceval.c and typeobject http://bugs.python.org/issue14383 closed by haypo #14391: misc TYPO in argparse.Action docstring http://bugs.python.org/issue14391 closed by terry.reedy #14395: sftp: downloading files with % in name fails due to logging http://bugs.python.org/issue14395 closed by r.david.murray #14400: Typo in cporting.rst 
http://bugs.python.org/issue14400 closed by loewis #14401: Typos in curses.rst http://bugs.python.org/issue14401 closed by python-dev #14402: Notice PayPaI : Your account was accesed by a third party. http://bugs.python.org/issue14402 closed by gvanrossum #14403: unittest module: provide inverse of "assertRaises" http://bugs.python.org/issue14403 closed by michael.foord #14404: multiprocessing with maxtasksperchild: bug in control logic? http://bugs.python.org/issue14404 closed by neologix #14409: IDLE: not executing commands from shell, error with default ke http://bugs.python.org/issue14409 closed by serwy #14410: argparse typo http://bugs.python.org/issue14410 closed by sandro.tosi #14411: outdatedness on rlcompleter docstring http://bugs.python.org/issue14411 closed by python-dev #14413: whatsnew deprecation tweak http://bugs.python.org/issue14413 closed by r.david.murray #14415: ValueError: I/O operation on closed file http://bugs.python.org/issue14415 closed by r.david.murray #14416: syslog missing constants http://bugs.python.org/issue14416 closed by r.david.murray #14421: Avoid ResourceWarnings in ccbench http://bugs.python.org/issue14421 closed by python-dev #14435: Remove special block allocation from floatobject.c http://bugs.python.org/issue14435 closed by krisvale #14436: SocketHandler sends obejcts while they cannot be unpickled on http://bugs.python.org/issue14436 closed by python-dev #14438: _cursesmodule build fails on cygwin http://bugs.python.org/issue14438 closed by luch #14441: Add new date/time directives to strftime() http://bugs.python.org/issue14441 closed by r.david.murray #14442: test_smtplib.py lacks "import errno" http://bugs.python.org/issue14442 closed by r.david.murray #14447: marshal.load() reads entire remaining file instead of just nex http://bugs.python.org/issue14447 closed by r.david.murray #14451: sum, min, max only works with iterable http://bugs.python.org/issue14451 closed by r.david.murray From stefan_ml at behnel.de Fri 
Mar 30 18:18:46 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Fri, 30 Mar 2012 18:18:46 +0200 Subject: [Python-Dev] Issue 14417: consequences of new dict runtime error In-Reply-To: <4F75DA77.6040305@gmail.com> References: <20120329195825.843352500E9@webabinitio.net> <20120329203103.95A4B2500E9@webabinitio.net> <20120329220755.377052500E9@webabinitio.net> <4F75A510.7080401@gmail.com> <4F75D501.2050400@gmail.com> <4F75DA77.6040305@gmail.com> Message-ID: Etienne Robillard, 30.03.2012 18:08: > are you also truth allergic or irritated by the consequences of > free speech ? Please note that "free speech" is a concept that is different from asking beginner's computer science questions on the core developer mailing list of a software development project. This is not the right forum to do so, and you should therefore move your "free speech" to one that is more appropriate. Nick has pointed you to one such forum and you would be well advised to use it - that's all I was trying to say. I hope it's clearer now. Stefan From animelovin at gmail.com Fri Mar 30 20:00:50 2012 From: animelovin at gmail.com (Etienne Robillard) Date: Fri, 30 Mar 2012 14:00:50 -0400 Subject: [Python-Dev] Issue 14417: consequences of new dict runtime error In-Reply-To: References: <20120329195825.843352500E9@webabinitio.net> <20120329203103.95A4B2500E9@webabinitio.net> <20120329220755.377052500E9@webabinitio.net> <4F75A510.7080401@gmail.com> <4F75D501.2050400@gmail.com> <4F75DA77.6040305@gmail.com> Message-ID: <4F75F4D2.4050706@gmail.com> your reasoning is pathetic at best. i pass... Thanks for the tip :-) Cheers, Etienne On 03/30/2012 12:18 PM, Stefan Behnel wrote: > Etienne Robillard, 30.03.2012 18:08: >> are you also truth allergic or irritated by the consequences of >> free speech ? > > Please note that "free speech" is a concept that is different from asking > beginner's computer science questions on the core developer mailing list of > a software development project. 
This is not the right forum to do so, and > you should therefore move your "free speech" to one that is more > appropriate. Nick has pointed you to one such forum and you would be well > advised to use it - that's all I was trying to say. I hope it's clearer now. > > Stefan > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/animelovin%40gmail.com > From andrew.svetlov at gmail.com Fri Mar 30 20:31:01 2012 From: andrew.svetlov at gmail.com (Andrew Svetlov) Date: Fri, 30 Mar 2012 21:31:01 +0300 Subject: [Python-Dev] [Python-checkins] cpython (3.2): Issue #14409: IDLE doesn't not execute commands from shell with default In-Reply-To: References: Message-ID: Thank you for mentoring. I will fix NEWS if you help me with better text. The bug fixed is that commit is: IDLE has 3 configs: user, system default and hardcoded in python code. Last one had bad binding for key. Usually this config is never used: user or system ones overrides former. But if IDLE cannot open config files by some reason it switches to hardcoded configs and user got broken IDLE. Can anybody guess me short and descriptive message describing what fix well? On Fri, Mar 30, 2012 at 6:12 AM, Nick Coghlan wrote: > On Fri, Mar 30, 2012 at 2:01 AM, andrew.svetlov > wrote: >> +- Issue #14409: IDLE doesn't not execute commands from shell, >> + ?error with default keybinding for Return. (Patch by Roger Serwy) > > The double negative here makes this impossible to understand. Could we > please get an updated NEWS entry that explains what actually changed > in IDLE to fix this? > > Perhaps something like "IDLE now always sets the default keybind for > Return correctly, ensuring commands can be executed in the IDLE shell > window"? (assuming that's what happened). 
> > This is important, folks: NEWS entries need to be comprehensible for > people that *haven't* read the associated tracker issue. This means > that issue titles (which generally describe a problem someone was > having) are often inappropriate as NEWS items. NEWS items should be > short descriptions that clearly describe *what changed*, perhaps with > some additional information to explain a bit about why the change was > made. > > Regards, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Python-checkins mailing list > Python-checkins at python.org > http://mail.python.org/mailman/listinfo/python-checkins -- Thanks, Andrew Svetlov From ethan at stoneleaf.us Fri Mar 30 20:23:18 2012 From: ethan at stoneleaf.us (Ethan Furman) Date: Fri, 30 Mar 2012 11:23:18 -0700 Subject: [Python-Dev] OT: refusal to listen and learn [was Re: Issue 14417: consequences of new dict runtime error] In-Reply-To: <4F75F4D2.4050706@gmail.com> References: <20120329195825.843352500E9@webabinitio.net> <20120329203103.95A4B2500E9@webabinitio.net> <20120329220755.377052500E9@webabinitio.net> <4F75A510.7080401@gmail.com> <4F75D501.2050400@gmail.com> <4F75DA77.6040305@gmail.com> <4F75F4D2.4050706@gmail.com> Message-ID: <4F75FA16.6040704@stoneleaf.us> Etienne Robillard wrote: > your reasoning is pathetic at best. i pass... Thanks for the tip :-) The Python Developer list is for the discussion of developing Python, not for teaching basic programming. You are being rude, and a smiley does not make you less rude. I am adding you to my kill-file so I no longer see messages from you because you refuse to follow the advice you have been given. 
~Ethan~ From animelovin at gmail.com Fri Mar 30 20:56:20 2012 From: animelovin at gmail.com (Etienne Robillard) Date: Fri, 30 Mar 2012 14:56:20 -0400 Subject: [Python-Dev] Issue 14417: consequences of new dict runtime error In-Reply-To: <4F75FA16.6040704@stoneleaf.us> References: <20120329195825.843352500E9@webabinitio.net> <20120329203103.95A4B2500E9@webabinitio.net> <20120329220755.377052500E9@webabinitio.net> <4F75A510.7080401@gmail.com> <4F75D501.2050400@gmail.com> <4F75DA77.6040305@gmail.com> <4F75F4D2.4050706@gmail.com> <4F75FA16.6040704@stoneleaf.us> Message-ID: <4F7601D4.1080706@gmail.com> On 03/30/2012 02:23 PM, Ethan Furman wrote: > Etienne Robillard wrote: >> your reasoning is pathetic at best. i pass... Thanks for the tip :-) > > The Python Developer list is for the discussion of developing Python, > not for teaching basic programming. > > You are being rude, and a smiley does not make you less rude. > > I am adding you to my kill-file so I no longer see messages from you > because you refuse to follow the advice you have been given. > > ~Ethan~ > Add me to whatever file you want, but I believe the consequences for the new dict runtime errors just won't be resolved this way, and neither by systematically blocking alternative opinions as OT will help, because thats typically oppression, not free speech. :-) Cheers, :-) Etienne From regebro at gmail.com Fri Mar 30 21:01:07 2012 From: regebro at gmail.com (Lennart Regebro) Date: Fri, 30 Mar 2012 21:01:07 +0200 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> <4F72B258.10306@scottdial.com> <4F72DBDE.6040003@scottdial.com> Message-ID: The overview of the different monotonic clocks was interesting, because only one of them is adjusted by NTP, and that's the unix CLOCK_MONOTONIC. 
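Roughly, the fallback behaviour being discussed looks like this (a sketch only — try_monotonic is the name being proposed in this thread, not an existing stdlib function, and the PEP's actual clock selection logic is more involved):

```python
import time

def try_monotonic():
    """Sketch of the proposed try_monotonic(): prefer a monotonic OS
    clock, fall back to the wall clock when none is available."""
    try:
        # Unix CLOCK_MONOTONIC: may be slewed by NTP, but never jumps
        # backwards.
        return time.clock_gettime(time.CLOCK_MONOTONIC)
    except (AttributeError, OSError):
        # Wall clock: can jump backwards if the system clock is reset.
        return time.time()

t1, t2 = try_monotonic(), try_monotonic()
assert t2 >= t1
```

On platforms without clock_gettime this degrades to the wall clock, which is the sort of behaviour difference a caller of such a function has to be prepared for.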
Hence we don't need a raw=False flag, which I previously suggested, we only need to not use CLOCK_MONOTONIC (which the PEP psuedo-code indeed also does not do, so that's all good). That means I think the PEP is fine now, if we rename highres(). time.time() already gets the highest resolution clock it can. Hence a highres() is confusing as the name implies that it returns a higher resolution clock than time.time(). And the name does not in any way indicate that the returned clock might be monotonic. try_monotonic() seems the obvious choice, since that's what it actually does. //Lennart From animelovin at gmail.com Fri Mar 30 21:13:36 2012 From: animelovin at gmail.com (Etienne Robillard) Date: Fri, 30 Mar 2012 15:13:36 -0400 Subject: [Python-Dev] Issue 14417: consequences of new dict runtime error In-Reply-To: References: <20120329195825.843352500E9@webabinitio.net> <20120329203103.95A4B2500E9@webabinitio.net> <20120329220755.377052500E9@webabinitio.net> <4F75A510.7080401@gmail.com> <4F75D501.2050400@gmail.com> <4F75DA77.6040305@gmail.com> <4F75F4D2.4050706@gmail.com> <4F75FA16.6040704@stoneleaf.us> <4F7601D4.1080706@gmail.com> Message-ID: <4F7605E0.6070503@gmail.com> On 03/30/2012 03:02 PM, Guido van Rossum wrote: > Hey Etienne, I am honestly trying to understand your contribution but > you seem to have started a discussion about free speech. Trust me that > we don't mind your contributions, we're just trying to understand what > you're saying, and the free speech discussion isn't helping with that. I agree. > So if you have a comment on the dict mutation problem, please say so. OK. > If you need help understanding the problem, python-dev is not the > place to ask questions; you could ask on the bug, or on the > core-mentorship list as Nick suggested. But please stop bringing up > free speech, that's not an issue. Guido, thanks for the wisdom and clarity of your reasoning. I really appreciate a positive attitude towards questioning not so obvious problems. 
So far I was only attempting to verify whether this is related to PEP-416 or not. If this is indeed related PEP 416, then I must obviously attest that I must still understand why a immutable dict would prevent this bug or not... Either ways, I fail to see where this is OT or should be discussed on a more obscur forum than python-dev. :-) Kind regards, Etienne From rdmurray at bitdance.com Fri Mar 30 21:25:44 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Fri, 30 Mar 2012 15:25:44 -0400 Subject: [Python-Dev] Issue 14417: consequences of new dict runtime error In-Reply-To: <4F7605E0.6070503@gmail.com> References: <20120329195825.843352500E9@webabinitio.net> <20120329203103.95A4B2500E9@webabinitio.net> <20120329220755.377052500E9@webabinitio.net> <4F75A510.7080401@gmail.com> <4F75D501.2050400@gmail.com> <4F75DA77.6040305@gmail.com> <4F75F4D2.4050706@gmail.com> <4F75FA16.6040704@stoneleaf.us> <4F7601D4.1080706@gmail.com> <4F7605E0.6070503@gmail.com> Message-ID: <20120330192545.1CC4A2500E9@webabinitio.net> On Fri, 30 Mar 2012 15:13:36 -0400, Etienne Robillard wrote: > So far I was only attempting to verify whether this is related to > PEP-416 or not. If this is indeed related PEP 416, then I must obviously > attest that I must still understand why a immutable dict would prevent > this bug or not... OK, that seems to be the source of your confusion, then. This has nothing to do with PEP-416. We are talking about issue Issue 14417 (like it says in the subject), which in turn is a reaction to the fix for issue 14205. 
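To make the mechanics concrete: a dict lookup only becomes non-atomic when it has to call back into Python, i.e. when a key type defines __hash__ or __eq__ in Python. The sketch below (the LoudKey class is hypothetical, purely illustrative) shows that user code really does run in the middle of a lookup; it does not reproduce the new RuntimeError itself, only the window in which a concurrent mutation can happen:

```python
# Hypothetical key class, for illustration only: its deliberately
# colliding __hash__ forces the dict lookup to fall back on the
# Python-level __eq__, i.e. to run arbitrary Python code mid-lookup.
CALLS = 0

class LoudKey:
    def __init__(self, name):
        self.name = name

    def __hash__(self):
        return 42  # every key collides, so __eq__ must be consulted

    def __eq__(self, other):
        global CALLS
        CALLS += 1  # Python code executing inside the lookup; it could
                    # mutate the dict, or another thread could run here
        return isinstance(other, LoudKey) and self.name == other.name

k1, k2 = LoudKey("a"), LoudKey("b")
d = {k1: 1, k2: 2}
before = CALLS
assert d[LoudKey("b")] == 2  # the lookup succeeds...
assert CALLS > before        # ...but ran Python-level __eq__ to do it
```

Anything __eq__ does in that window — mutating d, or simply yielding to another thread — happens while the lookup is still in flight, which is exactly the situation the fix for issue 14205 now reports.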
--David From guido at python.org Fri Mar 30 21:27:15 2012 From: guido at python.org (Guido van Rossum) Date: Fri, 30 Mar 2012 12:27:15 -0700 Subject: [Python-Dev] Issue 14417: consequences of new dict runtime error In-Reply-To: <4F7605E0.6070503@gmail.com> References: <20120329195825.843352500E9@webabinitio.net> <20120329203103.95A4B2500E9@webabinitio.net> <20120329220755.377052500E9@webabinitio.net> <4F75A510.7080401@gmail.com> <4F75D501.2050400@gmail.com> <4F75DA77.6040305@gmail.com> <4F75F4D2.4050706@gmail.com> <4F75FA16.6040704@stoneleaf.us> <4F7601D4.1080706@gmail.com> <4F7605E0.6070503@gmail.com> Message-ID: On Fri, Mar 30, 2012 at 12:13 PM, Etienne Robillard wrote: > On 03/30/2012 03:02 PM, Guido van Rossum wrote: >> >> Hey Etienne, I am honestly trying to understand your contribution but >> you seem to have started a discussion about free speech. Trust me that >> we don't mind your contributions, we're just trying to understand what >> you're saying, and the free speech discussion isn't helping with that. > > > I agree. > > >> So if you have a comment on the dict mutation problem, please say so. > > > OK. > > >> If you need help understanding the problem, python-dev is not the >> place to ask questions; you could ask on the bug, or on the >> core-mentorship list as Nick suggested. But please stop bringing up >> free speech, that's not an issue. > > > Guido, thanks for the wisdom and clarity of your reasoning. I really > appreciate a positive attitude towards questioning not so obvious problems. > > So far I was only attempting to verify whether this is related to PEP-416 or > not. If this is indeed related PEP 416, then I must obviously attest that I > must still understand why a immutable dict would prevent this bug or not... It's not related to PEP 416 (which was rejected). Please refer to http://bugs.python.org/issue14417 for the issue being discussed. 
> Either way, I fail to see where this is OT or should be discussed on a more obscure forum than python-dev. :-)

We need to keep that list clear for important discussions. It is the only channel that the core Python developers have. If it has too much noise people will stop reading it and it stops functioning. Hence, we try to keep questions from newbies to a minimum -- there are other places where we welcome such questions though.

So, once more, if you don't understand the issue and cannot figure it out by reading up, please ask somewhere else (or just accept that you don't have anything to contribute to this particular issue). This includes explaining basic terms like "mutate". On the other hand, if you *do* understand the problem, by all means let us know what you think of the question at hand (whether the change referred to in the issue is going to break people's code or not). We don't need more speculation though; that's how we got here in the first place (my speculation that it's not going to be an issue vs. RDM's speculation that it's going to cause widespread havoc :-).

I hope you understand.

--
--Guido van Rossum (python.org/~guido)

From benjamin at python.org Fri Mar 30 21:30:59 2012
From: benjamin at python.org (Benjamin Peterson)
Date: Fri, 30 Mar 2012 15:30:59 -0400
Subject: [Python-Dev] [Python-checkins] cpython: Issue #14065: Added cyclic GC support to ET.Element
In-Reply-To: References: Message-ID:

2012/3/30 eli.bendersky :
> http://hg.python.org/cpython/rev/0ca32013d77e
> changeset:   75997:0ca32013d77e
> parent:      75995:cf2e74e0b7d4
> user:        Eli Bendersky
> date:        Fri Mar 30 16:38:33 2012 +0300
> summary:
>   Issue #14065: Added cyclic GC support to ET.Element
>
> files:
>   Lib/test/test_xml_etree.py |  27 ++++++++++-
>   Modules/_elementtree.c     |  63 +++++++++++++++++++------
>   2 files changed, 74 insertions(+), 16 deletions(-)
>
> diff --git a/Lib/test/test_xml_etree.py b/Lib/test/test_xml_etree.py
> --- a/Lib/test/test_xml_etree.py
> +++ b/Lib/test/test_xml_etree.py
> @@ -14,9 +14,10 @@
>  # Don't re-import "xml.etree.ElementTree" module in the docstring,
>  # except if the test is specific to the Python implementation.
>
> -import sys
> +import gc
>  import html
>  import io
> +import sys
>  import unittest
>
>  from test import support
> @@ -1846,6 +1847,30 @@
>          self.assertRaises(TypeError, e.extend, [ET.Element('bar'), 'foo'])
>          self.assertRaises(TypeError, e.insert, 0, 'foo')
>
> +    def test_cyclic_gc(self):
> +        class ShowGC:
> +            def __init__(self, flaglist):
> +                self.flaglist = flaglist
> +            def __del__(self):
> +                self.flaglist.append(1)

I think a nicer way to check for cyclic collection is to take a weakref to an object, call the GC, then check to make sure the weakref is broken.

> +
> +        # Test the shortest cycle: lst->element->lst
> +        fl = []
> +        lst = [ShowGC(fl)]
> +        lst.append(ET.Element('joe', attr=lst))
> +        del lst
> +        gc.collect()

support.gc_collect() is preferable.

> +        self.assertEqual(fl, [1])
> +
> +        # A longer cycle: lst->e->e2->lst
> +        fl = []
> +        e = ET.Element('joe')
> +        lst = [ShowGC(fl), e]
> +        e2 = ET.SubElement(e, 'foo', attr=lst)
> +        del lst, e, e2
> +        gc.collect()
> +        self.assertEqual(fl, [1])

--
Regards, Benjamin

From guido at python.org Fri Mar 30 21:40:25 2012
From: guido at python.org (Guido van Rossum)
Date: Fri, 30 Mar 2012 12:40:25 -0700
Subject: [Python-Dev] PEP 418: Add monotonic clock
In-Reply-To: References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> <4F72B258.10306@scottdial.com> <4F72DBDE.6040003@scottdial.com>
Message-ID:

On Fri, Mar 30, 2012 at 12:01 PM, Lennart Regebro wrote:
> The overview of the different monotonic clocks was interesting, because only one of them is adjusted by NTP, and that's the unix CLOCK_MONOTONIC. Hence we don't need a raw=False flag, which I previously suggested, we only need to not use CLOCK_MONOTONIC (which the PEP pseudo-code indeed also does not do, so that's all good).

Right on.

> That means I think the PEP is fine now, if we rename highres(). time.time() already gets the highest resolution clock it can.

No, time.time() is the clock that can be mapped to and from "civil time". (Adjustments by NTP and the user notwithstanding.) The other clocks have a variable epoch and do not necessarily tick with a constant rate (e.g. they may not tick at all while the system is suspended).

> Hence a highres() is confusing as the name implies that it returns a higher resolution clock than time.time(). And the name does not in any way indicate that the returned clock might be monotonic. try_monotonic() seems the obvious choice, since that's what it actually does.

I am still unhappy with the two names, but I'm glad that we're this close. We need two new names; one for an OS-provided clock that is "monotonic" or "steady" or whatever you want to call it, but which may not exist on all systems (some platforms don't have it, some hosts may not have it even though the platform generally does have it). The other name is for a clock that's one or the other; it should be the OS-provided clock if it exists, otherwise time.time().
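[Guido's two-name scheme can be sketched in a few lines. The names below are placeholders invented for illustration only; they are not from the thread or the PEP.]

```python
import time

def os_steady():
    # Hypothetical probe for an OS-provided steady/monotonic clock.
    # Returns None when the platform offers none.
    if hasattr(time, "monotonic"):  # stand-in for the real OS probe
        return time.monotonic()
    return None

def steady_or_wallclock():
    # The "always works" name: the OS steady clock when available,
    # otherwise fall back to the civil-time clock.
    t = os_steady()
    return time.time() if t is None else t
```

The fallback variant is the one most code would call; the bare os_steady() exists for callers that must know whether they got a real monotonic source.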
Most code should probably use this one, so perhaps its name should be the shorter one.

C++ calls these steady_clock and high_resolution_clock, respectively. But it also calls the civil time clock system_clock, so perhaps we shouldn't feel too bound by it (except that we *shouldn't* call something steady if it isn't). I still think the name "monotonic" gives the wrong impression; I would be happy calling it steady. But for the other, I'm still at a loss, and that name is the most important one. We can't call it steady because it isn't always. highres or hires sounds awkward; try_monotonic or try_steady are even more awkward. I looked in an online thesaurus and here's a list of what it gave:

Big Ben, alarm, chroniker, chronograph, chronometer, digital watch, hourglass, metronome, pendulum, stopwatch, sundial, tattler, tick-tock, ticker, timekeeper, timemarker, timepiece, timer, turnip, watch

I wonder if something with tick would work? (Even though it returns a float. :-)

If all else fails, I'd go with turnip.

--
--Guido van Rossum (python.org/~guido)

From victor.stinner at gmail.com Fri Mar 30 22:21:28 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Fri, 30 Mar 2012 22:21:28 +0200
Subject: [Python-Dev] Use QueryPerformanceCounter() for time.monotonic() and/or time.highres()?
Message-ID:

Hi,

Windows provides two main monotonic clocks: QueryPerformanceCounter() and GetTickCount(). QueryPerformanceCounter() has a good accuracy (0.3 ns - 5 ns) but has various issues and is known not to have a steady rate. GetTickCount() has a worse accuracy (1 ms - 15 ms) but is more stable and behaves better on system suspend/resume.

The glib library prefers GetTickCount() over QueryPerformanceCounter() for its g_get_monotonic_time() because "The QPC timer has too many issues to be used as is."
http://mail.gnome.org/archives/commits-list/2011-November/msg04589.html

The Qt library tries QueryPerformanceCounter() but may fall back to GetTickCount() if it is not available.
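[The accuracy figures Victor quotes can be checked empirically. The helper below is our own sketch, not part of the PEP or of glib/Qt: it estimates a clock's effective resolution by spinning until the reading changes and recording the smallest observed step.]

```python
import time

def measure_resolution(clock, samples=10):
    # Rough empirical estimate of a clock's effective resolution:
    # spin until the reading changes, record the smallest step seen.
    deltas = []
    for _ in range(samples):
        t0 = clock()
        t1 = clock()
        while t1 == t0:  # busy-wait for the next distinct reading
            t1 = clock()
        deltas.append(t1 - t0)
    return min(deltas)

print(measure_resolution(time.time))  # around 1e-06 or less on a typical Unix box
```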
python-monotonic-time only uses GetTickCount() or GetTickCount64().

It is important to decide which clock is used for the Python time.monotonic() because it may change the design of the PEP 418. If we use GetTickCount() for time.monotonic(), we should use QueryPerformanceCounter() for time.highres(). But in this case, it means that time.highres() is not a simple "try monotonic or fall back to system time", but may use a different clock with a higher resolution. So we might add a third function for the "try monotonic or fall back to system time" requirement.

Python implements time.clock() using QueryPerformanceCounter() on Windows.

Victor

From rdmurray at bitdance.com Fri Mar 30 22:44:55 2012
From: rdmurray at bitdance.com (R. David Murray)
Date: Fri, 30 Mar 2012 16:44:55 -0400
Subject: [Python-Dev] PEP 418: Add monotonic clock
In-Reply-To: References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> <4F72B258.10306@scottdial.com> <4F72DBDE.6040003@scottdial.com>
Message-ID: <20120330204455.A82AB2500E9@webabinitio.net>

On Fri, 30 Mar 2012 12:40:25 -0700, Guido van Rossum wrote:
> But for the other, I'm still at a loss, and that name is the most important one. We can't call it steady because it isn't always. highres or hires sounds awkward; try_monotonic or try_steady are even more awkward. I looked in an online thesaurus and here's a list of what it gave:
>
> Big Ben, alarm, chroniker, chronograph, chronometer, digital watch, hourglass, metronome, pendulum, stopwatch, sundial, tattler, tick-tock, ticker, timekeeper, timemarker, timepiece, timer, turnip, watch
>
> I wonder if something with tick would work? (Even though it returns a float. :-)
>
> If all else fails, I'd go with turnip.

We could call it "alice"[*]: sometimes it goes fast, sometimes it goes slow, sometimes it even goes backward, but it does try to tell you when you are late.
--David [*] 'whiterabbit' would be more descriptive, but that's longer than turnip. From yselivanov.ml at gmail.com Fri Mar 30 22:48:21 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Fri, 30 Mar 2012 16:48:21 -0400 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> <4F72B258.10306@scottdial.com> <4F72DBDE.6040003@scottdial.com> Message-ID: <4F88077B-1C56-4E0E-955A-DDAE2EFA6C4A@gmail.com> On 2012-03-30, at 3:40 PM, Guido van Rossum wrote: > I still think the name "monotonic" give the wrong impression; I would > be happy calling it steady. Simple google search comparison shows that people ask about 'monotonic' clock in python, not 'steady'. How about following Nick's (if I recall correctly) proposal of calling the OS function - '_monotonic', and a python wrapper - 'monotonic'? And one more question: what do you think about introducing a special check, that will ensure that our python implementation of 'monotonic', in case of fallback to 'time.time()', raises an exception if time suddenly goes backward? - Yury From guido at python.org Fri Mar 30 22:52:50 2012 From: guido at python.org (Guido van Rossum) Date: Fri, 30 Mar 2012 13:52:50 -0700 Subject: [Python-Dev] Use QueryPerformanceCounter() for time.monotonic() and/or time.highres()? In-Reply-To: References: Message-ID: On Fri, Mar 30, 2012 at 1:21 PM, Victor Stinner wrote: > Windows provides two main monotonic clocks: QueryPerformanceCounter() > and GetTickCount(). QueryPerformanceCounter() has a good accuracy (0.3 > ns - 5 ns) but has various issues and is know to not have a steady > rate. GetTickCount() has a worse accuracy (1 ms - 15 ms) but is more > stable and behave better on system suspend/resume. > > The glib library prefers GetTickCount() over QueryPerformanceCounter() > for its g_get_monotonic_time() because "The QPC timer has too many > issues to be used as is." 
> Ihttp://mail.gnome.org/archives/commits-list/2011-November/msg04589.html > > The Qt library tries QueryPerformanceCounter() but may fallback to > GetTickCount() if it is not available. > > python-monotonic-time only uses GetTickCount() or GetTickCount64(). > > It is important to decide which clock is used for the Python > time.monotonic() because it may change the design of the PEP 418. If > we use GetTickCount() for time.monotonic(), we should use > QueryPerformanceCounter() for time.highres(). But in this case, it > means that time.highres() is not a simple "try monotonic or falls back > to system time", but may use a different clock with an higher > resolution. So we might add a third function for the "try monotonic or > falls back to system time" requirement. > > Python implements time.clock() using QueryPerformanceCounter() on Windows. Oh dear. I really want to say that 15 ms is good enough. Some possible exceptions I can think of: - Profiling. But this really wants to measure CPU time anyways, and it already uses a variety of hacks and heuristics to pick the best timer, so I don't really care. - Certain algorithms for thread (or even process?) communication might benefit from being able to set very short timeouts. But it seems you can just specify a timeout when waiting for a lock and it will be taken care of. So probably a non-use-case. So I think I still prefer GetTickCount*() over QueryPerformanceCounter(). And this is another reason why I don't like highres as a name -- as I said, I really prefer to have just two new functions, one that's a decent OS-provided timer that isn't affected by wall clock adjustments, if there is one, and the other strictly an alias for that one with a fallback to time.time() if there isn't one. Having two different OS-provided timers with different pros and cons will just make it too hard for users to decide, and hence we'll see a higher rate of using the wrong timer. 
(In fact, this even argues against having both the timer with fallback and the timer without fallback. So maybe we should just have a single timer function, *with* fallback, and a separate mechanism to inquire its properties.) -- --Guido van Rossum (python.org/~guido) From guido at python.org Fri Mar 30 23:35:29 2012 From: guido at python.org (Guido van Rossum) Date: Fri, 30 Mar 2012 14:35:29 -0700 Subject: [Python-Dev] Use QueryPerformanceCounter() for time.monotonic() and/or time.highres()? In-Reply-To: References: Message-ID: On Fri, Mar 30, 2012 at 1:52 PM, Guido van Rossum wrote: > On Fri, Mar 30, 2012 at 1:21 PM, Victor Stinner > wrote: >> Windows provides two main monotonic clocks: QueryPerformanceCounter() >> and GetTickCount(). QueryPerformanceCounter() has a good accuracy (0.3 >> ns - 5 ns) but has various issues and is know to not have a steady >> rate. GetTickCount() has a worse accuracy (1 ms - 15 ms) but is more >> stable and behave better on system suspend/resume. >> >> The glib library prefers GetTickCount() over QueryPerformanceCounter() >> for its g_get_monotonic_time() because "The QPC timer has too many >> issues to be used as is." >> Ihttp://mail.gnome.org/archives/commits-list/2011-November/msg04589.html >> >> The Qt library tries QueryPerformanceCounter() but may fallback to >> GetTickCount() if it is not available. >> >> python-monotonic-time only uses GetTickCount() or GetTickCount64(). >> >> It is important to decide which clock is used for the Python >> time.monotonic() because it may change the design of the PEP 418. If >> we use GetTickCount() for time.monotonic(), we should use >> QueryPerformanceCounter() for time.highres(). But in this case, it >> means that time.highres() is not a simple "try monotonic or falls back >> to system time", but may use a different clock with an higher >> resolution. So we might add a third function for the "try monotonic or >> falls back to system time" requirement. 
>> Python implements time.clock() using QueryPerformanceCounter() on Windows.
>
> Oh dear. I really want to say that 15 ms is good enough. Some possible exceptions I can think of:
>
> - Profiling. But this really wants to measure CPU time anyways, and it already uses a variety of hacks and heuristics to pick the best timer, so I don't really care.
>
> - Certain algorithms for thread (or even process?) communication might benefit from being able to set very short timeouts. But it seems you can just specify a timeout when waiting for a lock and it will be taken care of. So probably a non-use-case.
>
> So I think I still prefer GetTickCount*() over QueryPerformanceCounter().
>
> And this is another reason why I don't like highres as a name -- as I said, I really prefer to have just two new functions, one that's a decent OS-provided timer that isn't affected by wall clock adjustments, if there is one, and the other strictly an alias for that one with a fallback to time.time() if there isn't one. Having two different OS-provided timers with different pros and cons will just make it too hard for users to decide, and hence we'll see a higher rate of using the wrong timer.
>
> (In fact, this even argues against having both the timer with fallback and the timer without fallback. So maybe we should just have a single timer function, *with* fallback, and a separate mechanism to inquire its properties.)

And disagreeing with myself, I just found myself using a tool for RPC analytics that displays times in msec, and the times are often between 1 and 20 msec. Here a 15 msec timer would be horrible, and I see no alternative but to use QPC()... (Well, if the tool ran on Windows, which it doesn't. :-)

Can we tell what the accuracy of GetTickCount*() is when we first request the time, and decide then?

Can you go into more detail about QPC()'s issues? What is unsteady about its rate?
--
--Guido van Rossum (python.org/~guido)

From guido at python.org Fri Mar 30 23:40:12 2012
From: guido at python.org (Guido van Rossum)
Date: Fri, 30 Mar 2012 14:40:12 -0700
Subject: [Python-Dev] Use QueryPerformanceCounter() for time.monotonic() and/or time.highres()?
In-Reply-To: References: Message-ID:

Possibly we really do need two timers, one for measuring short intervals and one for measuring long intervals? Perhaps we can use this to decide on the names?

Anyway, the more I think about it, the more I believe these functions should have very loose guarantees, and instead just cater to common use cases -- availability of a timer with minimal fuss is usually more important than the guarantees. So forget the idea about one version that falls back to time.time() and another that doesn't -- just always fall back to time.time(), which is (almost) always better than failing.

Then we can design a separate inquiry API (doesn't have to be complex as long as it's extensible -- a dict or object with a few predefined keys or attributes sounds good enough) for apps that want to know more about how the timer they're using is actually implemented.

--
--Guido van Rossum (python.org/~guido)

From cs at zip.com.au Fri Mar 30 23:43:19 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Sat, 31 Mar 2012 08:43:19 +1100
Subject: [Python-Dev] Use QueryPerformanceCounter() for time.monotonic() and/or time.highres()?
In-Reply-To: References: Message-ID: <20120330214319.GA3106@cskk.homeip.net>

On 30Mar2012 13:52, Guido van Rossum wrote:
| (In fact, this even argues against having both the timer with fallback and the timer without fallback. So maybe we should just have a single timer function, *with* fallback, and a separate mechanism to inquire its properties.)

Well, you should be able to know what kind of clock you're getting. But what to do if you don't get a satisfactory one?
There's then no way to ask for an alternative; you don't get to control the fallback method, for example.

I've come to the opinion that the chosen approach is wrong, and suggest a better one. There seem to be a few competing features for clocks that people want:

  - monotonic - never going backward at all
  - high resolution
  - no steps

and only a few, fortunately.

I think you're doing this wrong at the API level. People currently are expecting to call (for example) time.monotonic() (or whatever) and get back "now", a hopefully high resolution float. Given the subtlety sought for various purposes, people should be calling:

  T = time.getclock(flags)

and then later calling:

  T.now()

to get their float.

That way people can:
  - request a set of the three characteristics above
  - inspect what they get back (it should have all the requested flags, but unrequested flags may be set or not depending on the underlying facility)

Obviously this should return None or raise an exception if the flag set can't be met.

Then users can request a less featured clock on failure, depending on what aspects of the clock are most desirable to their use case. Or of course fail if fallback is not satisfactory.

Of course you can provide some convenient-with-fallback function that will let people do this in one hit without the need for "T", but it should not be the base facility offered. The base should let people request their feature set and inspect what is supplied.

Cheers,
--
Cameron Simpson DoD#743 http://www.cskk.ezoshosting.com/cs/
Time is nature's way of keeping everything from happening at once.

From guido at python.org Sat Mar 31 00:21:32 2012
From: guido at python.org (Guido van Rossum)
Date: Fri, 30 Mar 2012 15:21:32 -0700
Subject: [Python-Dev] Use QueryPerformanceCounter() for time.monotonic() and/or time.highres()?
In-Reply-To: <20120330214319.GA3106@cskk.homeip.net> References: <20120330214319.GA3106@cskk.homeip.net> Message-ID: On Fri, Mar 30, 2012 at 2:43 PM, Cameron Simpson wrote: > On 30Mar2012 13:52, Guido van Rossum wrote: > | (In fact, this even argues against having both the timer with fallback > | and the timer without fallback. So maybe we should just have a single > | timer function, *with* fallback, and a separate mechanism to inquire > | its properties.) > > Well, you should be able to know what kind of clock you're getting. But > what do to if you don't get a satisfactory one? There's then no way for > ask for an alternative; you don't get to control the fallback method, > for example. > > I've come to the opinion that the chosen approach is wrong, and suggest > a better one. > > There seem to be a few competing features for clocks that people want: > > ?- monotonic - never going backward at all > ?- high resolution > ?- no steps > > and only a few, fortunately. > > I think you're doing this wrong at the API level. People currently are > expecting to call (for example) time.monotonic() (or whatever) and get > back "now", a hopefully high resolution float. > > Given the subtlety sought for various purposes, people should be > calling: > > ?T = time.getclock(flags) > > and then later calling: > > ?T.now() > > to get their float. > > That way people can: > ?- request a set of the three characteristics above > ?- inspect what they get back (it should have all the requested flags, > ? ?but unrequested flags may be set or not depending on the underlying > ? ?facility) > ? ?Obviously this should return None or raise an exception if the flag > ? ?set can't be met. > > Then users can request a less featured clock on failure, depending on > what aspects of the clock are most desirable to their use case. Or of > course fail if fallback is not satisfactory. 
> > Of course you can provide some convenient-with-fallback function that > will let people do this in one hit without the need for "T", but it > should not be the base facility offered. The base should let people > request their feature set and inspect what is supplied. I like this out-of-the-box thinking. But I'm still wondering if there really are enough flags for this to be worth it. If there are, great, the API is pretty. But if there are only 2 or 3 flag combinations that make actual sense, let's forget it. Another out-of-the-box idea, going back to simplicity: have just one new function, time.hrtimer(), which is implemented using the highest-resolution timer available on the platform, but with no strong guarantees. It *may* jump, move back, drift, change its rate, or roll over occasionally. We try to use the implementation that's got the fewest problems, but we don't try to hide its deficiencies, and nothing suitable exists, it may be equivalent to time.time(). If the times you measure are too weird, measure again. For scheduling things a day or more in the future, you should use time.time() instead. One issue that hasn't had enough attention: *scope* of a timer. If two processes running on the same machine ask for the time, do the values they see use the same epoch, or is the epoch dependent on the process? Some code I saw in timemodule.c for working around Windows clocks rolling over seem to imply that two processes may not always see the same timer value. Is there a use case where this matters? 
-- --Guido van Rossum (python.org/~guido) From animelovin at gmail.com Sat Mar 31 00:28:25 2012 From: animelovin at gmail.com (Etienne Robillard) Date: Fri, 30 Mar 2012 18:28:25 -0400 Subject: [Python-Dev] Issue 14417: consequences of new dict runtime error In-Reply-To: References: <20120329195825.843352500E9@webabinitio.net> <20120329203103.95A4B2500E9@webabinitio.net> <20120329220755.377052500E9@webabinitio.net> <4F75A510.7080401@gmail.com> <4F75D501.2050400@gmail.com> <4F75DA77.6040305@gmail.com> <4F75F4D2.4050706@gmail.com> <4F75FA16.6040704@stoneleaf.us> <4F7601D4.1080706@gmail.com> <4F7605E0.6070503@gmail.com> Message-ID: <4F763389.7090307@gmail.com> On 03/30/2012 03:27 PM, Guido van Rossum wrote: > On Fri, Mar 30, 2012 at 12:13 PM, Etienne Robillard > wrote: >> On 03/30/2012 03:02 PM, Guido van Rossum wrote: >>> >>> Hey Etienne, I am honestly trying to understand your contribution but >>> you seem to have started a discussion about free speech. Trust me that >>> we don't mind your contributions, we're just trying to understand what >>> you're saying, and the free speech discussion isn't helping with that. >> >> >> I agree. >> >> >>> So if you have a comment on the dict mutation problem, please say so. >> >> >> OK. >> >> >>> If you need help understanding the problem, python-dev is not the >>> place to ask questions; you could ask on the bug, or on the >>> core-mentorship list as Nick suggested. But please stop bringing up >>> free speech, that's not an issue. >> >> >> Guido, thanks for the wisdom and clarity of your reasoning. I really >> appreciate a positive attitude towards questioning not so obvious problems. >> >> So far I was only attempting to verify whether this is related to PEP-416 or >> not. If this is indeed related PEP 416, then I must obviously attest that I >> must still understand why a immutable dict would prevent this bug or not... > > It's not related to PEP 416 (which was rejected). 
Please refer to > http://bugs.python.org/issue14417 for the issue being discussed. > >> Either ways, I fail to see where this is OT or should be discussed on a more >> obscur forum than python-dev. :-) > > We need to keep that list clear for important discussions. It is the > only channel that the core Python developers have. If it has too much > noise people will stop reading it and it stops functioning. Hence, we > try to keep questions from newbies to a minimum -- there are other > places where we welcome such questions though. > > So, once more, if you don't understand the issue and cannot figure it > out by reading up, please ask somewhere else (or just accept that you > don't have anything to contribute to this particular issue). This > includes explaining basic terms like "mutate". On the other hand, if > you *do* understand the problem, by all means let us know what you > think of the question at hand (whether the change referred to in the > issue is going to break people's code or not). We don't need more > speculation though; that's how we got here in the first place (my > speculation that it's not going to be an issue vs. RDM's speculation > that it's going to cause widespread havoc :-). > > I hope you understand. No, not really. Anyways, I guess I'll have to further dig down why is PEP-416 is really important to Python and why it was likewise rejected, supposing I confused the pep 416 and issue 14417 along the way.. :-) CHeers, Etienne From victor.stinner at gmail.com Sat Mar 31 00:47:45 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Sat, 31 Mar 2012 00:47:45 +0200 Subject: [Python-Dev] Use QueryPerformanceCounter() for time.monotonic() and/or time.highres()? In-Reply-To: References: Message-ID: > Can you go into more detail about QPC()'s issues? Yes, see the PEP: http://www.python.org/dev/peps/pep-0418/#windows-queryperformancecounter > What is unsteady about its rate? Hum, I read that you can expect a drift between QPC and GetTickCount. 
I don't have exact numbers. There are also issues with power-saving. When the CPU frequency changes, the frequency of the TSC is also impacted on some processors (others provide a TSC at a fixed frequency).

2012/3/30 Guido van Rossum :
> Possibly we really do need two timers, one for measuring short intervals and one for measuring long intervals? Perhaps we can use this to decide on the names?

"short" and "long" interval is not how these clocks should be defined. I prefer use cases:

 * time.highres() should be used for profiling and benchmarking
 * time.monotonic() should be used for a scheduler and timeouts

time.monotonic()'s rate should be as steady as possible; its stability is more important than its accuracy. time.highres() should provide the clock with the best accuracy. Both clocks may have an undefined starting point.

> Anyway, the more I think about it, the more I believe these functions should have very loose guarantees, and instead just cater to common use cases -- availability of a timer with minimal fuss is usually more important than the guarantees. So forget the idea about one version that falls back to time.time() and another that doesn't -- just always fall back to time.time(), which is (almost) always better than failing.

I disagree. Python must provide a truly monotonic clock, even if it is not truly monotonic by default.

If we want to provide a convenient "monotonic or fall back to system time" clock, there are different options: add a dedicated function (e.g. time.try_monotonic()) or add a parameter to enable or disable the fallback. Between these two options, I prefer the parameter because it avoids the creation of a new function.

Even if I now agree that a truly monotonic clock is required, I also agree with Guido: the most common case is to use a fallback. I suggest this API: time.monotonic(fallback=True).
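[A minimal sketch of what Victor's proposed signature could look like. This is our illustration only; the stdlib that eventually shipped exposes time.monotonic() with no such parameter.]

```python
import time

def monotonic(fallback=True):
    # Sketch of the proposed time.monotonic(fallback=True) API:
    # try a real OS monotonic clock, and only fall back to the
    # (adjustable) system clock when the caller allows it.
    try:
        return time.clock_gettime(time.CLOCK_MONOTONIC)
    except (AttributeError, OSError):
        if not fallback:
            raise  # caller insisted on a truly monotonic source
        return time.time()
```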
Another option is to not provide a "monotonic or fall back to system time" clock and to write an explicit try/except where you need a fallback. I don't like this solution because time.monotonic() might fail at runtime with OSError, and so catching AttributeError is not enough...

> Then we can design a separate inquiry API (doesn't have to be complex as long as it's extensible -- a dict or object with a few predefined keys or attributes sounds good enough) for apps that want to know more about how the timer they're using is actually implemented.

Sometimes, it does matter which exact OS clock is used. The QElapsedTimer class of the Qt libraries has an attribute (an enum) describing which clock is used. We can give the name of the function and the resolution. In some specific cases, we are able to read the accuracy (ex: GetTickCount). The accuracy may change at runtime (at least for GetTickCount). Should I include such a new function in the PEP, or can it be discussed (later) in a separate thread?

--

I don't want to only provide time.monotonic() which would fall back if no monotonic clock is available, because we would be unable to tell if the clock is monotonic or not. See the implementation in the PEP: the "try monotonic or fallback" function may use a different clock at each call, and so may be monotonic at startup and then become non-monotonic.

An alternative is time.monotonic() returning (time, is_monotonic), but this is an ugly API. You cannot simply write: a=time.monotonic(); benchmark(); b=time.monotonic(); print(b-a) because it raises a TypeError on b-a.

--

I will update the PEP to describe my new proposition.
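[Victor's objection to the tuple-returning variant is easy to demonstrate; the timestamp values below are made up for illustration.]

```python
# Hypothetical (timestamp, is_monotonic) return values, as in the
# rejected API shape: plain subtraction no longer works.
a = (60060.0, True)
b = (60061.5, True)
try:
    elapsed = b - a            # tuples don't support "-"
except TypeError:
    elapsed = b[0] - a[0]      # every caller would have to unpack
print(elapsed)  # 1.5
```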
Victor From victor.stinner at gmail.com Sat Mar 31 01:24:28 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Sat, 31 Mar 2012 01:24:28 +0200 Subject: [Python-Dev] Issue 14417: consequences of new dict runtime error In-Reply-To: <4F763389.7090307@gmail.com> References: <20120329195825.843352500E9@webabinitio.net> <20120329203103.95A4B2500E9@webabinitio.net> <20120329220755.377052500E9@webabinitio.net> <4F75A510.7080401@gmail.com> <4F75D501.2050400@gmail.com> <4F75DA77.6040305@gmail.com> <4F75F4D2.4050706@gmail.com> <4F75FA16.6040704@stoneleaf.us> <4F7601D4.1080706@gmail.com> <4F7605E0.6070503@gmail.com> <4F763389.7090307@gmail.com> Message-ID: > No, not really. Anyways, I guess I'll have to further dig down why > PEP-416 is really important to Python and why it was likewise rejected, > supposing I confused PEP 416 and issue 14417 along the way.. :-) The frozendict builtin type was rejected, but we are going to add types.MappingProxyType: see issue #14386. types.MappingProxyType(mydict.copy()) is very close to the frozendict builtin type. Victor From cs at zip.com.au Sat Mar 31 01:44:00 2012 From: cs at zip.com.au (Cameron Simpson) Date: Sat, 31 Mar 2012 10:44:00 +1100 Subject: [Python-Dev] Use QueryPerformanceCounter() for time.monotonic() and/or time.highres()? In-Reply-To: References: Message-ID: <20120330234359.GA11489@cskk.homeip.net> On 30Mar2012 15:21, Guido van Rossum wrote: | On Fri, Mar 30, 2012 at 2:43 PM, Cameron Simpson wrote: | > Given the subtlety sought for various purposes, people should be | > calling: | > T = time.getclock(flags) | > and then later calling: | > T.now() | > to get their float. | > | > That way people can: | > - request a set of the three characteristics above | > - inspect what they get back (it should have all the requested flags, | > but unrequested flags may be set or not depending on the underlying | > facility) | > Obviously this should return None or raise an exception if the flag | > set can't be met. | > | > Then users can request a less featured clock on failure, depending on | > what aspects of the clock are most desirable to their use case. Or of | > course fail if fallback is not satisfactory. [...] | I like this out-of-the-box thinking. But I'm still wondering if there | really are enough flags for this to be worth it. If there are, great, | the API is pretty. But if there are only 2 or 3 flag combinations that | make actual sense, let's forget it. There are at least three characteristics listed above. There are several use cases described in the long discussions. I don't think we should be saying "we will only support these 3 thought-of use cases and the others are too hard/weird/perverse/rare". We should be saying: we can characterise clocks in these very few ways. Ask for a clock of some kind. We will provide one (arbitrarily, though perhaps preferring higher precision) that satisfies the request if available. I can easily envisage calling code as simple as this: T = getclock(T_MONOTONIC|T_HIGHRES) or getclock(T_MONOTONIC) to do my own fallback for my own use case if we went with returning None if there is no satisfactory clock. If I need monotonic (for example) my code is in NO WAY helped by being handed something that sometimes is not monotonic; to make my code reliable I would need to act as though I wasn't getting a monotonic clock in the first place. No win there at all.
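[Editor's note] For concreteness, the flag-and-table scheme sketched in this mail could look like the following. The flag names, the Clock wrapper, and the clock table are illustrative only; the sketch assumes merely that time.monotonic is monotonic and steady while time.time guarantees nothing.

```python
import time

# Illustrative flag values for the characteristics discussed.
T_MONOTONIC = 1 << 0
T_HIGHRES = 1 << 1
T_STEADY = 1 << 2

class Clock:
    def __init__(self, now, flags):
        self.now = now      # zero-argument callable returning a float
        self.flags = flags  # characteristics this clock offers

# Table of clock wrappers, ordered by preference.
_CLOCKS = [
    Clock(time.monotonic, T_MONOTONIC | T_STEADY),
    Clock(time.time, 0),
]

def getclock(flags):
    # Return the first clock offering every requested flag, else None.
    for clock in _CLOCKS:
        if clock.flags & flags == flags:
            return clock
    return None

# Caller-side fallback exactly as in Cameron's example above.
T = getclock(T_MONOTONIC | T_HIGHRES) or getclock(T_MONOTONIC)
```

All policy then lives at the call site: the caller states which characteristics are negotiable by writing its own chain of getclock() calls.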
This has the following advantages: - it removes _all_ policy from the PEP, at least at the core - it makes things totally clear and concise at the calling end; the clock characteristics are right there in front of the reader - policy is in the hands of the user where it belongs, and is easy to specify into the bargain - it avoids the horrendous ideas mooted of asking for something called monotonic and getting something only mostly monotonic, and the equivalent flavours for the other flags - implementation of the PEP is in principle as easy as a table of clock wrappers with the flags each supports. Loop over the table until one matches all the requested flags, or return None. - for higher quality of implementation you embed policy in the convenience function, not the getclock() function To take your suggestion of: Another out-of-the-box idea, going back to simplicity: have just one new function, time.hrtimer(), which is implemented using the highest-resolution timer available on the platform, but with no strong guarantees. Example: T = time.hrtimer() which would provide the "best" hires timer. You'd give it an optional flag argument for additional flags so that you could request the most-hires timer matching the flags. This is like: T = time.getclock(T_HIRES) except that it makes clear your preferred attribute. So internally one would have: - clock wrappers for the platform specific facilities, with the constructors being singletons or not as suits the facility - a list of the wrappers with their flag sets specifying what clock attributes they offer - for the .hrtimer([flags]) or .monotonic([flags]) calls they each have their own list of the clock wrappers, ordered on most-hires or best-monotonic, as the implementor sees fit That gets you: - arbitrary use case support - convenience functions for common policies | It *may* jump, move back, drift, change its rate, or roll
We try to use the implementation that's got the | fewest problems, but we don't try to hide its deficiencies, and if | nothing suitable exists, it may be equivalent to time.time(). If the | times you measure are too weird, measure again. For scheduling things | a day or more in the future, you should use time.time() instead. Getting ugly. This is why I think we should not be offering only what we think the user may want, especially when we offer something loose and rubbery like this. Instead, get the user to ask. The platform timers can all be well characterised with respect to the flags suggested above; you only need to tabulate what timers offer what behaviours. By all means offer convenience functions providing common choices, but don't make those the _only_ choices. Let the user ask for anything; if the platform can't support it that's a pity, but leave the user the choice of asking for less or making whatever other decision suits them. | One issue that hasn't had enough attention: *scope* of a timer. If two | processes running on the same machine ask for the time, do the values | they see use the same epoch, or is the epoch dependent on the process? | Some code I saw in timemodule.c for working around Windows clocks | rolling over seems to imply that two processes may not always see the | same timer value. Is there a use case where this matters? Make the epoch available in the clock wrapper as a property. At least then there's a mechanism for reconciling things. Don't try to mandate something that possibly can't be mandated. Cheers, -- Cameron Simpson DoD#743 http://www.cskk.ezoshosting.com/cs/ My computer always does exactly what I tell it to do but sometimes I have trouble finding out what it was that I told it to do. - Dick Wexelblat From guido at python.org Sat Mar 31 01:46:13 2012 From: guido at python.org (Guido van Rossum) Date: Fri, 30 Mar 2012 16:46:13 -0700 Subject: [Python-Dev] Use QueryPerformanceCounter() for time.monotonic() and/or time.highres()?
In-Reply-To: References: Message-ID: Given the amount of disagreement I sense, I think we'll need to wait for more people to chime in. On Fri, Mar 30, 2012 at 3:47 PM, Victor Stinner wrote: >> Can you go into more detail about QPC()'s issues? > > Yes, see the PEP: > http://www.python.org/dev/peps/pep-0418/#windows-queryperformancecounter > >> What is unsteady about its rate? > > Hum, I read that you can expect a drift between QPC and GetTickCount. > I don't have exact numbers. There are also issues with power-saving. > When the CPU frequency changes, the frequency of the TSC is also > impacted on some processors (others provide a TSC at a fixed > frequency). And how does that impact the various use cases? (There still isn't enough about use cases in the PEP. Thank you very much though for adding all those details about the various platform clocks.) > 2012/3/30 Guido van Rossum : >> Possibly we really do need two timers, one for measuring short >> intervals and one for measuring long intervals? Perhaps we can use >> this to decide on the names? > > "short" and "long" interval is not how these clocks should be defined. Why not? > I prefer use cases: > > * time.highres() should be used for profiling and benchmarking > * time.monotonic() should be used for a scheduler and timeout So if I want to schedule something a week in the future, what should I use? monotonic() or time()? And can you explain the reason for your answer? What about an hour or a day? Or a month or a year? > time.monotonic() rate should be as steady as possible, its stability > is more important than its accuracy. time.highres() should provide the > clock with the best accuracy. Both clocks may have an undefined > starting point. So what kind of drift is acceptable for each?
>> Anyway, the more I think about it, the more I believe these functions >> should have very loose guarantees, and instead just cater to common >> use cases -- availability of a timer with minimal fuss is usually more >> important than the guarantees. So forget the idea about one version >> that falls back to time.time() and another that doesn't -- just always >> fall back to time.time(), which is (almost) always better than >> failing. > > I disagree. Python must provide a truly monotonic clock, even if it is > not truly monotonic by default. Why? > If we want to provide a convenient "monotonic or fallback to system > time", there are different options: add a dedicated function (e.g. > time.try_monotonic()) or add a parameter to enable or disable the > fallback. Between these two options, I prefer the parameter because it > avoids the creation of a new function. Even if I now agree that a > truly monotonic clock is required, I also agree with Guido: the most > common case is to use a fallback. > > I suggest this API: time.monotonic(fallback=True). But this goes against the idea of "A keyword argument that gets passed as a constant in the caller is usually poor API." And it would encourage the creation of trivial lambdas just to call the timer with this flag, since a universal API is a timer function that takes no arguments. > Another option is to not provide a "monotonic or fallback to system > time" clock and write an explicit try/except where you need a > fallback. I don't like this solution because time.monotonic() might > fail at runtime with OSError, and so catching AttributeError is not > enough... Check out Cameron Simpson's proposal earlier in this thread. If we really are going to have a variety of functions and fallback options, his approach may be the best.
>> Then we can design a separate inquiry API (doesn't have to be complex >> as long as it's extensible -- a dict or object with a few predefined >> keys or attributes sounds good enough) for apps that want to know more >> about how the timer they're using is actually implemented. > > Sometimes, it does matter which exact OS clock is used. When is that? > The QElapsedTime class of the Qt libraries has an attribute (an enum) > describing which clock is used. We can give the name of the function > and the resolution. In some specific cases, we are able to read the > accuracy (ex: GetTickCount). The accuracy may change at runtime (at > least for GetTickCount). That sounds like a fine API. > Should I include such a new function in the PEP, or can it be discussed > (later) in a separate thread? FWIW, I prefer a single thread devoted to all aspects of this PEP. > -- > > I don't want to only provide time.monotonic() which would fall back if > no monotonic clock is available, because we would be unable to tell if > the clock is monotonic or not. There could be a separate inquiry API. > See the implementation in the PEP: the > "try monotonic or fallback" function may use a different clock at each > call, and so may be monotonic at startup and then become > non-monotonic. No, it should never switch implementations once it has chosen. That's just asking for trouble. If it can become non-monotonic it just isn't monotonic. Again, Cameron Simpson's idea might help here. > An alternative is time.monotonic() returning (time, is_monotonic) but > this is an ugly API. You cannot simply write: > a=time.monotonic(); benchmark(); b=time.monotonic(); print(b-a) > because it raises a TypeError on b-a. Agreed, that's not useful. It should be a constant property (constant within one process). > -- > > I will update the PEP to describe my new proposition. Thanks. Please pay specific attention to my why/where/when questions above.
-- --Guido van Rossum (python.org/~guido) From guido at python.org Sat Mar 31 01:49:39 2012 From: guido at python.org (Guido van Rossum) Date: Fri, 30 Mar 2012 16:49:39 -0700 Subject: [Python-Dev] Use QueryPerformanceCounter() for time.monotonic() and/or time.highres()? In-Reply-To: <20120330234359.GA11489@cskk.homeip.net> References: <20120330234359.GA11489@cskk.homeip.net> Message-ID: On Fri, Mar 30, 2012 at 4:44 PM, Cameron Simpson wrote: [Lots of good stuff] Maybe you and Victor should try to merge your proposals off-line and then get back with a new proposal here. > Make the epoch available in the clock wrapper as a property. At least > then there's a mechanism for reconciling things. Don't try to mandate > something that possibly can't be mandated. Ah, but if a clock drifts, the epoch may too -- and we may never know it. I like knowing all sorts of things about a clock, but I'm not sure that for clocks other than time.time() I'd ever want to know the epoch -- ISTM that the only thing I could do with that information would be shooting myself in the foot. If you really want the epoch, compute it yourself by bracketing a timer() call in two time() calls, or vice versa (not sure which is better :-). -- --Guido van Rossum (python.org/~guido) From steve at pearwood.info Sat Mar 31 02:26:35 2012 From: steve at pearwood.info (Steven D'Aprano) Date: Sat, 31 Mar 2012 11:26:35 +1100 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> <4F72B258.10306@scottdial.com> <4F72DBDE.6040003@scottdial.com> Message-ID: <4F764F3B.2020306@pearwood.info> Guido van Rossum wrote: > But for the other, I'm still at a loss, and that name is the most > important one. We can't call it steady because it isn't always. > highres or hires sounds awkward; try_monotonic or try_steady are even > more awkward. 
> I looked in an online thesaurus and here's a list of > what it gave: "hires" is a real English word, the present tense verb for engaging the service or labour of someone or something in return for payment, as in "he hires a gardener to mow the lawn". Can we please eliminate it from consideration? It is driving me slowly crazy every time I see it used as an abbreviation for high resolution. > Big Ben, alarm, chroniker, chronograph, chronometer, digital watch, > hourglass, metronome, pendulum, stopwatch, sundial, tattler, > tick-tock, ticker, timekeeper, timemarker, timepiece, timer, turnip, > watch > > I wonder if something with tick would work? (Even though it returns a float. :-) > > If all else fails, I'd go with turnip. I can't tell if you are being serious or not. For the record, "turnip" in this sense is archaic slang for a thick pocket watch. -- Steven From victor.stinner at gmail.com Sat Mar 31 02:51:56 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Sat, 31 Mar 2012 02:51:56 +0200 Subject: [Python-Dev] Use QueryPerformanceCounter() for time.monotonic() and/or time.highres()? In-Reply-To: References: Message-ID: 2012/3/31 Guido van Rossum : > Given the amount of disagreement I sense, I think we'll need to wait > for more people to chime in. I hope that the PEP now gives enough data to help choose the best API. >> Hum, I read that you can expect a drift between QPC and GetTickCount. >> I don't have exact numbers. There are also issues with power-saving. >> When the CPU frequency changes, the frequency of the TSC is also >> impacted on some processors (others provide a TSC at a fixed >> frequency). > > And how does that impact the various use cases? (There still isn't > enough about use cases in the PEP. Thank you very much though for > adding all those details about the various platform clocks.)
I didn't find a good source, but it looks like QueryPerformanceCounter behaves badly on system suspend/resume, whereas GetTickCount is reliable and has a well defined behaviour: "The elapsed time retrieved by GetTickCount or GetTickCount64 includes time the system spends in sleep or hibernation." So QPC is not really monotonic nor steady. It may be even worse than the system clock (especially in virtual machines). Impact: A timeout may be 42 seconds shorter than the requested duration if it uses QPC. For a scheduler, a task would be scheduled at the right moment. >> I prefer use cases: >> >> * time.highres() should be used for profiling and benchmarking >> * time.monotonic() should be used for a scheduler and timeout > > So if I want to schedule something a week in the future, what should I > use? monotonic() or time()? And can you explain the reason for your > answer? What about an hour or a day? Or a month or a year? You should use time.monotonic() because time.monotonic() has a steady rate and so is reliable for long durations (at least more reliable than time.highres()). In practice, time.monotonic() and time.highres() are only different on Windows. If I understood correctly, the Windows kernel uses something like GetTickCount which has an accuracy of 1 ms in the best case. So Python only needs a clock with a similar accuracy for timeout and schedulers. time.highres() (QPC) rate is only steady during a short duration: it is not an issue for a benchmark because you usually rerun the same test at least 3 times and keep the minimum. It looks like QPC bugs are only unexpected forward jumps (no backward jump), so taking the minimum would work around these issues. The hibernation issue should not affect benchmarking/profiling. >> time.monotonic() rate should be as steady as possible, its stability >> is more important than its accuracy. time.highres() should provide the >> clock with the best accuracy. Both clocks may have an undefined >> starting point.
> > So what kind of drift is acceptable for each? I don't know. >> I disagree. Python must provide a truly monotonic clock, even if it is >> not truly monotonic by default. > > Why? See Zooko Wilcox-O'Hearn's email: http://mail.python.org/pipermail/python-dev/2012-March/118147.html >> I suggest this API: time.monotonic(fallback=True). > > But this goes against the idea of "A keyword argument that gets passed > as a constant in the caller is usually poor API." Right :-) After changing (again) the PEP, I realized that it reintroduces the time.monotonic(fallback=False) raising NotImplementedError case and I really don't like this exception here. > And it would encourage the creation of trivial lambdas just to call > the timer with this flag, since a universal API is a timer function > that takes no arguments. My bet is that fallback=True is an uncommon use case. >> Sometimes, it does matter which exact OS clock is used. > > When is that? If you need to know more properties of the clock. For example, Python may be unable to give the accuracy of the clock, but you may get it differently if you know which clock function is used. >> Should I include such a new function in the PEP, or can it be discussed >> (later) in a separate thread? > > FWIW, I prefer a single thread devoted to all aspects of this PEP.
Let's propose an API: time.clock_info() -> {'time': {'function': 'gettimeofday', 'resolution': 1e-6, 'monotonic': False}, 'monotonic': {'function': 'clock_gettime(CLOCK_MONOTONIC)', 'resolution': 1e-9, 'monotonic': True}, 'highres': {'function': 'clock_gettime(CLOCK_MONOTONIC)', 'resolution': 1e-9, 'monotonic': True}} or time.clock_info('time') -> {'function': 'gettimeofday', 'resolution': 1e-6, 'monotonic': False} time.clock_info('monotonic') -> {'function': 'clock_gettime(CLOCK_MONOTONIC)', 'resolution': 1e-9, 'monotonic': True} time.clock_info('highres') -> {'function': 'clock_gettime(CLOCK_MONOTONIC)', 'resolution': 1e-9, 'monotonic': True} or time.clock_info(time.time) -> {'function': 'gettimeofday', 'resolution': 1e-6, 'monotonic': False} time.clock_info(time.monotonic) -> {'function': 'clock_gettime(CLOCK_MONOTONIC)', 'resolution': 1e-9, 'monotonic': True} time.clock_info(time.highres) -> {'function': 'clock_gettime(CLOCK_MONOTONIC)', 'resolution': 1e-9, 'monotonic': True} The "clock_" prefix in the "time.clock_xxx" names matches the POSIX clock_xxx functions; another prefix may be used instead. >> I don't want to only provide time.monotonic() which would fall back if >> no monotonic clock is available, because we would be unable to tell if >> the clock is monotonic or not. > There could be a separate inquiry API. Well, the PEP mentions something like that in the "One function, no flag" section. >> See the implementation in the PEP: the >> "try monotonic or fallback" function may use a different clock at each >> call, and so may be monotonic at startup and then become >> non-monotonic. > No, it should never switch implementations once it has chosen. That's > just asking for trouble. If it can become non-monotonic it just isn't > monotonic. Again, Cameron Simpson's idea might help here. Correct.
In practice, one call to time.monotonic() is enough to know if next calls will fail or not, and so you don't have to poll regularly to check if the clock becomes non-monotonic. So always falling back, and providing something to check whether time.monotonic() is monotonic or not, would be an acceptable solution. I agree that it is less ugly than the time.monotonic(fallback=True) API. Victor From glyph at twistedmatrix.com Sat Mar 31 03:09:10 2012 From: glyph at twistedmatrix.com (Glyph) Date: Fri, 30 Mar 2012 21:09:10 -0400 Subject: [Python-Dev] Use QueryPerformanceCounter() for time.monotonic() and/or time.highres()? In-Reply-To: References: Message-ID: <71C2DBD4-70D8-4232-880D-1EDB0775FFE8@twistedmatrix.com> On Mar 30, 2012, at 8:51 PM, Victor Stinner wrote: > time.highres() (QPC) rate is only steady during a short duration QPC is not even necessarily steady for a short duration, due to BIOS bugs, unless the code running your timer is bound to a single CPU core. The linked article mentions SetThreadAffinityMask for this reason, despite the fact that it is usually steady for longer than that. -glyph From victor.stinner at gmail.com Sat Mar 31 03:24:53 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Sat, 31 Mar 2012 03:24:53 +0200 Subject: [Python-Dev] Use QueryPerformanceCounter() for time.monotonic() and/or time.highres()? In-Reply-To: <20120330214319.GA3106@cskk.homeip.net> References: <20120330214319.GA3106@cskk.homeip.net> Message-ID: > There seem to be a few competing features for clocks that people want: > > - monotonic - never going backward at all - high resolution These features look to be exclusive on Windows. On other platforms, it looks like monotonic clocks are always the most accurate clocks. So I don't think that we need to be able to combine these two "flags". > - no steps You mean "not adjusted by NTP"? Except for CLOCK_MONOTONIC on Linux, no monotonic clock is adjusted by NTP.
On Linux, there is CLOCK_MONOTONIC_RAW, but it is only available on recent Linux kernels (2.6.28 and newer). Do you think that it is important to be able to refuse a monotonic clock adjusted by NTP? What would be the use case of such a truly steady clock? -- PEP 418 tries to expose various monotonic clocks in Python with a simple API. If we fail to agree on how to expose these clocks, e.g. because there are major differences between these clocks, another solution is to only expose low-level functions. This is already what we do with the os module, and there is the shutil module (and others) for a higher level API. I already added new "low-level" functions to the time module: time.clock_gettime() and time.clock_getres(). > Of course you can provide some convenient-with-fallback function that > will let people do this in one hit without the need for "T", but it > should not be the base facility offered. The base should let people > request their feature set and inspect what is supplied. The purpose of such a function is to fix programs written with Python < 3.3 and using time.time() whereas a monotonic clock would be the right choice (e.g. to implement a timeout). Another concern is to write portable code. Python should help developers to write portable code, and time.monotonic(fallback=False) is not always available (and may fail). Victor From guido at python.org Sat Mar 31 03:29:17 2012 From: guido at python.org (Guido van Rossum) Date: Fri, 30 Mar 2012 18:29:17 -0700 Subject: [Python-Dev] Use QueryPerformanceCounter() for time.monotonic() and/or time.highres()? In-Reply-To: References: Message-ID: On Fri, Mar 30, 2012 at 5:51 PM, Victor Stinner wrote: > 2012/3/31 Guido van Rossum : >> Given the amount of disagreement I sense, I think we'll need to wait >> for more people to chime in. > > I hope that the PEP now gives enough data to help choose the best API. Lots of data, but not enough motivation, not enough about the use cases.
>>> Hum, I read that you can expect a drift between QPC and GetTickCount. >>> I don't have exact numbers. There are also issues with power-saving. >>> When the CPU frequency changes, the frequency of the TSC is also >>> impacted on some processors (others provide a TSC at a fixed >>> frequency). >> >> And how does that impact the various use cases? (There still isn't >> enough about use cases in the PEP. Thank you very much though for >> adding all those details about the various platform clocks.) > > I didn't find a good source, but it looks like QueryPerformanceCounter > behaves badly on system suspend/resume, whereas GetTickCount is > reliable and has a well defined behaviour: "The elapsed time retrieved > by GetTickCount or GetTickCount64 includes time the system spends in > sleep or hibernation." So QPC is not really monotonic nor steady. It > may be even worse than the system clock (especially in virtual > machines). I think you are in serious danger of overspecifying the problem. No clock can fulfill all requirements. (That's why there are so many to choose from of course!) > Impact: A timeout may be 42 seconds shorter than the requested > duration if it uses QPC. For a scheduler, a task would be scheduled at > the right moment. I don't understand this paragraph. And why is it always exactly a loss of 42 seconds? >>> I prefer use cases: >>> >>> * time.highres() should be used for profiling and benchmarking >>> * time.monotonic() should be used for a scheduler and timeout >> >> So if I want to schedule something a week in the future, what should I >> use? monotonic() or time()? And can you explain the reason for your >> answer? What about an hour or a day? Or a month or a year? > > You should use time.monotonic() because time.monotonic() has a steady > rate and so is reliable for long durations (at least more reliable than > time.highres()). Ah, but have you *thought* about the use case? *Why* would I want to schedule something that far in the future?
Maybe it's a recurring birthday? Surely time() is better for that. (Etc. Think!) My question is trying to tease out the "meaning" of the various clocks that might exist. > In practice, time.monotonic() and time.highres() are only different on > Windows. If I understood correctly, the Windows kernel uses something > like GetTickCount which has an accuracy of 1 ms in the best case. So > Python only needs a clock with a similar accuracy for timeout and > schedulers. So is it worth having two functions that are only different on Windows? ISTM that the average non-Windows user will have a 50% chance of picking the wrong timer from a portability perspective. > time.highres() (QPC) rate is only steady during a short duration: it > is not an issue for a benchmark because you usually rerun the same > test at least 3 times and keep the minimum. It looks like QPC bugs are > only unexpected forward jumps (no backward jump), so taking the > minimum would work around these issues. The hibernation issue should > not affect benchmarking/profiling. Now you're talking: indeed the application should work around the limitations (if there are any) using end-to-end means, since only the application developer knows what matters to them. >>> time.monotonic() rate should be as steady as possible, its stability >>> is more important than its accuracy. time.highres() should provide the >>> clock with the best accuracy. Both clocks may have an undefined >>> starting point. >> >> So what kind of drift is acceptable for each? > > I don't know. Ok, let's just assume that we can't control drift so we'll have to let the app deal with it. >>> I disagree. Python must provide a truly monotonic clock, even if it is >>> not truly monotonic by default. >> >> Why? > > See Zooko Wilcox-O'Hearn's email: > http://mail.python.org/pipermail/python-dev/2012-March/118147.html He explains the basic difference between the two types of clocks, and that's important.
He doesn't say anything about a strict requirement for monotonicity or steadiness. This is why I still balk at "monotonic" for the name -- I don't think that monotonicity is the most important property. But I don't know how to put the desired requirement in words; "steady" seems to come closer for sure. >>> I suggest this API: time.monotonic(fallback=True). >> >> But this goes against the idea of "A keyword argument that gets passed >> as a constant in the caller is usually poor API." > > Right :-) After changing (again) the PEP, I realized that it > reintroduces the time.monotonic(fallback=False) raising > NotImplementedError case and I really don't like this exception here. Me neither. >> And it would encourage the creation of trivial lambdas just to call >> the timer with this flag, since a universal API is a timer function >> that takes no arguments. > > My bet is that fallback=True is an uncommon use case. Don't bet. Look at Cameron Simpson's proposal. (How many times do I have to say that?) >>> Sometimes, it does matter which exact OS clock is used. >> >> When is that? > > If you need to know more properties of the clock. For example, Python > may be unable to give the accuracy of the clock, but you may get it > differently if you know which clock function is used. Writing code that depends on that sounds like asking for trouble. However I like to have this "clock metadata" available so that it can be printed along with benchmark results. A user reading those results may make good use of the information "this experiment used QueryPerformanceCounter so small times are pretty accurate but if there's an outlier you just have to start over". >>> Should I include such a new function in the PEP, or can it be discussed >>> (later) in a separate thread? >> >> FWIW, I prefer a single thread devoted to all aspects of this PEP.
> > Let's propose an API: > > time.clock_info() -> > {'time': {'function': 'gettimeofday', 'resolution': 1e-6, 'monotonic': False}, > 'monotonic': {'function': 'clock_gettime(CLOCK_MONOTONIC)', > 'resolution': 1e-9, 'monotonic': True}, > 'highres': {'function': 'clock_gettime(CLOCK_MONOTONIC)', > 'resolution': 1e-9, 'monotonic': True}} > > or > > time.clock_info('time') -> {'function': 'gettimeofday', 'resolution': > 1e-6, 'monotonic': False} > time.clock_info('monotonic') -> {'function': > 'clock_gettime(CLOCK_MONOTONIC)', 'resolution': 1e-9, 'monotonic': > True} > time.clock_info('highres') -> {'function': > 'clock_gettime(CLOCK_MONOTONIC)', 'resolution': 1e-9, 'monotonic': > True} > > or > > time.clock_info(time.time) -> {'function': 'gettimeofday', > 'resolution': 1e-6, 'monotonic': False} > time.clock_info(time.monotonic) -> {'function': > 'clock_gettime(CLOCK_MONOTONIC)', 'resolution': 1e-9, 'monotonic': > True} > time.clock_info(time.highres) -> {'function': > 'clock_gettime(CLOCK_MONOTONIC)', 'resolution': 1e-9, 'monotonic': > True} I like the version that takes the name of the clock as an argument best. If you have the function, the __name__ attribute gives the name. Going the other way seems more complicated. > "clock_" prefix in "time.clock_xxx" name is used by POSIX clock_xxx > functions, another prefix may be used instead. I'll leave this to the bikeshedders for now. :-) >>> I don't want to only provide time.monotonic() which would fall back if >>> no monotonic clock is available, because we would be unable to tell if >>> the clock is monotonic or not. >> >> There could be a separate inquiry API. > > Well, the PEP mentions something like that in the "One function, no > flag" section. But you don't seem to like it. (In general the alternatives section could do with reasons for rejection or at least pros and cons for each alternative.)
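[Editor's note] The name-keyed inquiry variant favored in this exchange could be sketched as follows. The metadata table contents are hypothetical; a real implementation would fill them in per platform at startup.

```python
# Hypothetical per-clock metadata table; real values would depend on
# which OS function backs each clock on the current platform.
_CLOCK_INFO = {
    'time': {'function': 'gettimeofday', 'resolution': 1e-6,
             'monotonic': False},
    'monotonic': {'function': 'clock_gettime(CLOCK_MONOTONIC)',
                  'resolution': 1e-9, 'monotonic': True},
}

def clock_info(name):
    # Inquiry API keyed by clock name; raises KeyError for unknown
    # clocks. A copy is returned so callers cannot mutate the table.
    return dict(_CLOCK_INFO[name])

info = clock_info('monotonic')
```

Given a clock function instead of a name, a caller can still do clock_info(f.__name__), which is why the name-keyed form subsumes the function-keyed one.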
>>> See the implementation in the PEP: the >>> "try monotonic or fallback" function may use a different clock at each >>> call, and so may be monotonic at startup and then become >>> non-monotonic. >> >> No, it should never switch implementations once it has chosen. That's >> just asking for trouble. If it can become non-monotonic it just isn't >> monotonic. Again, Cameron Simpson's idea might help here. > > Correct. In practice, one call to time.monotonic() is enough to know > whether the next calls will fail or not, and so you don't have to poll regularly > to check if the clock becomes non-monotonic. So always falling back and > providing something to check whether time.monotonic() is monotonic or not > would be an acceptable solution. I agree that it is less ugly than the > time.monotonic(fallback=True) API. Ok. I guess if you call clock_info() before having called the timer function, it can call it internally to find out, if dynamic inquiries are required. -- --Guido van Rossum (python.org/~guido) From guido at python.org Sat Mar 31 03:32:37 2012 From: guido at python.org (Guido van Rossum) Date: Fri, 30 Mar 2012 18:32:37 -0700 Subject: [Python-Dev] Use QueryPerformanceCounter() for time.monotonic() and/or time.highres()? In-Reply-To: References: <20120330214319.GA3106@cskk.homeip.net> Message-ID: On Fri, Mar 30, 2012 at 6:24 PM, Victor Stinner wrote: >> There seem to be a few competing features for clocks that people want: >> >> - monotonic - never going backward at all >> - high resolution > > These features look to be mutually exclusive on Windows. On other platforms, it > looks like monotonic clocks are always the most accurate clocks. So I > don't think that we need to be able to combine these two "flags". > >> - no steps > > You mean "not adjusted by NTP"? Except CLOCK_MONOTONIC on Linux, no > monotonic clock is adjusted by NTP. On Linux, there is > CLOCK_MONOTONIC_RAW, but it is only available on recent Linux kernels > (2.6.28).
> > Do you think that it is important to be able to refuse a monotonic > clock adjusted by NTP? What would be the use case of such a truly steady > clock? That depends on what NTP can do to the clock. If NTP makes the clock tick *slightly* faster or slower in order to gradually adjust the wall clock, that's fine. If NTP can make it jump wildly forward or even backward, it's no better than time.time(), and we know why (for some purposes) we don't want that. > -- > > PEP 418 tries to expose various monotonic clocks in Python with a > simple API. If we fail to agree how to expose these clocks, e.g. > because there are major differences between these clocks, another > solution is to only expose low-level functions. This is already what we > do with the os module, and there is the shutil module (and others) for > a higher level API. I already added new "low-level" functions to the > time module: time.clock_gettime() and time.clock_getres(). > >> Of course you can provide some convenient-with-fallback function that >> will let people do this in one hit without the need for "T", but it >> should not be the base facility offered. The base should let people >> request their feature set and inspect what is supplied. > > The purpose of such a function is to fix programs written with Python < > 3.3 and using time.time() where a monotonic clock would be the right choice > (e.g. to implement a timeout). > > Another concern is writing portable code. Python should help > developers to write portable code, and time.monotonic(fallback=False) > is not always available (and may fail). -- --Guido van Rossum (python.org/~guido) From glyph at twistedmatrix.com Sat Mar 31 03:58:40 2012 From: glyph at twistedmatrix.com (Glyph) Date: Fri, 30 Mar 2012 21:58:40 -0400 Subject: [Python-Dev] Use QueryPerformanceCounter() for time.monotonic() and/or time.highres()?
In-Reply-To: References: <20120330214319.GA3106@cskk.homeip.net> Message-ID: On Mar 30, 2012, at 9:32 PM, Guido van Rossum wrote: >>> - no steps >> >> You mean "not adjusted by NTP"? Except CLOCK_MONOTONIC on Linux, no >> monotonic clock is adjusted by NTP. On Linux, there is >> CLOCK_MONOTONIC_RAW, but it is only available on recent Linux kernel >> (2.6.28). >> >> Do you think that it is important to be able to refuse a monotonic >> clock adjusted by NTP? What would be the use case of such truly steady >> clock? > > That depends on what NTP can do to the clock. If NTP makes the clock > tick *slightly* faster or slower in order to gradually adjust the wall > clock, that's fine. If NTP can make it jump wildly forward or even > backward, it's no better than time.time(), and we know why (for some > purposes) we don't want that. "no steps" means something very specific. It does not mean "not adjusted by NTP". In NTP, changing the clock frequency to be slightly faster or slower is called "slewing" (which is done with adjtime()). Jumping by a large amount in a single discrete step is called "stepping" (which is done with settimeofday()). This is sort-of explained by . I think I'm agreeing with Guido here when I say that, personally, my understanding is that slewing is generally desirable (i.e. we should use CLOCK_MONOTONIC, not CLOCK_MONOTONIC_RAW) if one wishes to measure "real" time (and not a time-like object like CPU cycles). This is because the clock on the other end of the NTP connection from you is probably better at keeping time: hopefully that thirty five thousand dollars of Cesium timekeeping goodness is doing something better than your PC's $3 quartz crystal, after all. So, slew tends to correct for minor defects in your local timekeeping mechanism, and will compensate for its tendency to go too fast or too slow. 
By contrast, stepping only happens if your local clock is just set incorrectly and the re-sync delta has more to do with administrative error or failed batteries than differences in timekeeping accuracy. -glyph -------------- next part -------------- An HTML attachment was scrubbed... URL: From victor.stinner at gmail.com Sat Mar 31 04:12:34 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Sat, 31 Mar 2012 04:12:34 +0200 Subject: [Python-Dev] Use QueryPerformanceCounter() for time.monotonic() and/or time.highres()? In-Reply-To: References: Message-ID: >> Impact: A timeout may be 42 seconds shorter than the requested >> duration if it uses QPC. For a scheduler, a task would be scheduled at >> the right moment. > > I don't understand this paragraph. And why is it always exactly a loss > of 42 seconds? Sorry, it's late here, I'm too tired. A jump of 42 seconds is one of QPC's known bugs (it was a bug in VirtualBox and it is now fixed). > So is it worth having two functions that are only different on > Windows? ISTM that the average non-Windows user will have a 50% chance > of picking the wrong timer from a portability perspective. Can we solve this by correctly documenting time.highres()'s issues (it's not really monotonic) and when you need to use time.monotonic() or time.highres()? Well, there is already time.clock() that uses QPC on Windows, but time.highres() and time.clock() give different results for an idle process on UNIX. pybench, timeit and other programs have their own heuristics for getting the most accurate clock depending on the OS and on which clocks are available. time.highres() is a portable function for these programs. >> See Zooko Wilcox-O'Hearn's email: >> http://mail.python.org/pipermail/python-dev/2012-March/118147.html > > He explains the basic difference between the two types of clocks, and > that's important. He doesn't say anything about a strict requirement > for monotonicity or steadiness.
This is why I still balk at > "monotonic" for the name -- I don't think that monotonicity is the > most important property. But I don't know how to put the desired > requirement in words; "steady" seems to come closer for sure. If we provide a way to check if the monotonic clock is monotonic (or not), I agree to drop the flag from time.monotonic(fallback=True) and always fall back. I was never a fan of the "truly monotonic clock". time.clock_info('monotonic')['is_monotonic'] is a good candidate to store this information. "time.clock_info('monotonic')['is_monotonic']" looks like a bug: why would I check if a *monotonic* clock is *monotonic*? time.clock_info('steady')['monotonic'] looks more natural, but I don't remember why we should not use the term "steady". >> Well, the PEP mentions something like that in the "One function, no >> flag" section. > > But you don't seem to like it. (In general the alternatives section > could do with reasons for rejection or at least pros and cons for each > alternative.) I was concerned by the fact that time.monotonic() may become not-monotonic between two calls, but I changed my mind. I agree that it should not occur in practice. So always falling back, plus a way to know if the monotonic clock is monotonic, now sounds like the best API to me. Victor From victor.stinner at gmail.com Sat Mar 31 04:17:49 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Sat, 31 Mar 2012 04:17:49 +0200 Subject: [Python-Dev] Use QueryPerformanceCounter() for time.monotonic() and/or time.highres()? In-Reply-To: References: <20120330214319.GA3106@cskk.homeip.net> Message-ID: > (...) > By contrast, stepping only happens if your local clock is just set > incorrectly and the re-sync delta has more to do with administrative error > or failed batteries than differences in timekeeping accuracy. Are you talking about CLOCK_REALTIME or CLOCK_MONOTONIC?
Victor From glyph at twistedmatrix.com Sat Mar 31 04:25:49 2012 From: glyph at twistedmatrix.com (Glyph) Date: Fri, 30 Mar 2012 22:25:49 -0400 Subject: [Python-Dev] Use QueryPerformanceCounter() for time.monotonic() and/or time.highres()? In-Reply-To: References: <20120330214319.GA3106@cskk.homeip.net> Message-ID: <13B4F7F8-5764-4A98-8185-DC8A11A562BC@twistedmatrix.com> On Mar 30, 2012, at 10:17 PM, Victor Stinner wrote: >> (...) >> By contrast, stepping only happens if your local clock is just set >> incorrectly and the re-sync delta has more to do with administrative error >> or failed batteries than differences in timekeeping accuracy. > > Are you talking about CLOCK_REALTIME or CLOCK_MONOTONIC? My understanding is: CLOCK_REALTIME is both stepped and slewed. CLOCK_MONOTONIC is slewed, but not stepped. CLOCK_MONOTONIC_RAW is neither slewed nor stepped. -glyph From glyph at twistedmatrix.com Sat Mar 31 04:28:26 2012 From: glyph at twistedmatrix.com (Glyph) Date: Fri, 30 Mar 2012 22:28:26 -0400 Subject: [Python-Dev] Use QueryPerformanceCounter() for time.monotonic() and/or time.highres()? In-Reply-To: <13B4F7F8-5764-4A98-8185-DC8A11A562BC@twistedmatrix.com> References: <20120330214319.GA3106@cskk.homeip.net> <13B4F7F8-5764-4A98-8185-DC8A11A562BC@twistedmatrix.com> Message-ID: <0A62ACCF-1BEF-42BE-B958-F86A903BC8A1@twistedmatrix.com> On Mar 30, 2012, at 10:25 PM, Glyph wrote: > > On Mar 30, 2012, at 10:17 PM, Victor Stinner wrote: > >>> (...) >>> By contrast, stepping only happens if your local clock is just set >>> incorrectly and the re-sync delta has more to do with administrative error >>> or failed batteries than differences in timekeeping accuracy. >> >> Are you talking about CLOCK_REALTIME or CLOCK_MONOTONIC? > > My understanding is: > > CLOCK_REALTIME is both stepped and slewed. > > CLOCK_MONOTONIC is slewed, but not stepped. > > CLOCK_MONOTONIC_RAW is neither slewed nor stepped. Sorry, I realize I should cite my source. 
This mailing list post talks about all three together: Although the documentation one can find by searching around the web is really bad. It looks like many of these time features were introduced, to Linux at least, with no documentation. -glyph From tjreedy at udel.edu Sat Mar 31 05:31:21 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Fri, 30 Mar 2012 23:31:21 -0400 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: <4F764F3B.2020306@pearwood.info> References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> <4F72B258.10306@scottdial.com> <4F72DBDE.6040003@scottdial.com> <4F764F3B.2020306@pearwood.info> Message-ID: On 3/30/2012 8:26 PM, Steven D'Aprano wrote: > "hires" is a real English word, the present tense verb for engaging the > service or labour of someone or something in return for payment, as in > "he hires a gardener to mow the lawn". Can we please eliminate it from > consideration I agree. Heavy cognitive dissonance. 'Hires' is also a very famous brand of root beer. Hi-res *really* needs the hyphen (or underscore equivalent). -- Terry Jan Reedy From eliben at gmail.com Sat Mar 31 06:33:27 2012 From: eliben at gmail.com (Eli Bendersky) Date: Sat, 31 Mar 2012 06:33:27 +0200 Subject: [Python-Dev] [Python-checkins] cpython: Issue #14065: Added cyclic GC support to ET.Element In-Reply-To: References: Message-ID: On Fri, Mar 30, 2012 at 21:30, Benjamin Peterson wrote: > > + def test_cyclic_gc(self): > > + class ShowGC: > > + def __init__(self, flaglist): > > + self.flaglist = flaglist > > + def __del__(self): > > + self.flaglist.append(1) > > > I think a nicer way to check for cyclic collection is to take a > weakref to an object, call the GC, then check to make sure the weakref > is broken. 
> > > + > > + # Test the shortest cycle: lst->element->lst > > + fl = [] > > + lst = [ShowGC(fl)] > > + lst.append(ET.Element('joe', attr=lst)) > > + del lst > > + gc.collect() > > support.gc_collect() is preferable > > > + self.assertEqual(fl, [1]) > > + > > + # A longer cycle: lst->e->e2->lst > > + fl = [] > > + e = ET.Element('joe') > > + lst = [ShowGC(fl), e] > > + e2 = ET.SubElement(e, 'foo', attr=lst) > > + del lst, e, e2 > > + gc.collect() > > + self.assertEqual(fl, [1]) > > Thanks for the insights, Benjamin. I'll explore these alternatives and will submit a fix. Eli -------------- next part -------------- An HTML attachment was scrubbed... URL: From regebro at gmail.com Sat Mar 31 08:27:32 2012 From: regebro at gmail.com (Lennart Regebro) Date: Sat, 31 Mar 2012 08:27:32 +0200 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: <4F764F3B.2020306@pearwood.info> References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> <4F72B258.10306@scottdial.com> <4F72DBDE.6040003@scottdial.com> <4F764F3B.2020306@pearwood.info> Message-ID: On Sat, Mar 31, 2012 at 02:26, Steven D'Aprano wrote: > Guido van Rossum wrote: >> If all else fails, I'd go with turnip. > > I can't tell if you are being serious or not. > > For the record, "turnip" in this sense is archaic slang for a thick pocket > watch. If I understand this correctly, the most common use for this function is to time things. It will give you the best source available for timers, but it doesn't guarantee that it is steady or monotonic or high resolution or anything. It is also not the time, as it's not reliable as a wall-clock. So, how about time.timer()?
//Lennart From regebro at gmail.com Sat Mar 31 08:32:50 2012 From: regebro at gmail.com (Lennart Regebro) Date: Sat, 31 Mar 2012 08:32:50 +0200 Subject: [Python-Dev] datetime module and pytz with dateutil In-Reply-To: References: Message-ID: On Fri, Mar 30, 2012 at 12:38, Serhiy Storchaka wrote: > I don't understand why Python may not include the pytz. The Olson tz > database is not part of pytz. Yes it is. > Python can depend on a system tz database That works on Unix, but not on Windows, where there is no Olson database. //Lennart From nadeem.vawda at gmail.com Sat Mar 31 11:50:59 2012 From: nadeem.vawda at gmail.com (Nadeem Vawda) Date: Sat, 31 Mar 2012 11:50:59 +0200 Subject: [Python-Dev] PEP 418: Add monotonic clock In-Reply-To: References: <4F710870.9090602@scottdial.com> <4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com> <4F72B258.10306@scottdial.com> <4F72DBDE.6040003@scottdial.com> <4F764F3B.2020306@pearwood.info> Message-ID: On Sat, Mar 31, 2012 at 8:27 AM, Lennart Regebro wrote: > So, how about time.timer()? That seems like a bad idea; it would be too easy to confuse with (or misspell as) time.time(). Out of the big synonym list Guido posted, I rather like time.stopwatch() - it makes it more explicit that the purpose of the function is to measure intervals, rather than identifying absolute points in time. Cheers, Nadeem From fuzzyman at voidspace.org.uk Sat Mar 31 12:28:55 2012 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Sat, 31 Mar 2012 11:28:55 +0100 Subject: [Python-Dev] datetime module and pytz with dateutil In-Reply-To: References: Message-ID: On 31 Mar 2012, at 07:32, Lennart Regebro wrote: > On Fri, Mar 30, 2012 at 12:38, Serhiy Storchaka wrote: >> I don't understand why Python may not include the pytz. The Olson tz >> database is not part of pytz. > > Yes it is. > >> Python can depend on a system tz database > > That works on Unix, but not on Windows, where there is no Olson database.
*However*, doesn't Windows have its own system database? The problem is that in order to not include the Olson database, pytz (which would be a very useful addition to the standard library) would need to be modified to use the system database on Windows. This is my (potentially flawed) understanding, anyway. Michael > > //Lennart > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk > -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html From andrew.svetlov at gmail.com Sat Mar 31 13:21:14 2012 From: andrew.svetlov at gmail.com (Andrew Svetlov) Date: Sat, 31 Mar 2012 14:21:14 +0300 Subject: [Python-Dev] [Python-checkins] cpython (3.2): Issue #14409: IDLE doesn't not execute commands from shell with default In-Reply-To: References: Message-ID: Updated NEWS as Terry Reedy recommended. Thank you, Terry. On Sat, Mar 31, 2012 at 12:59 AM, Terry Reedy wrote: > On 3/30/2012 2:31 PM, Andrew Svetlov wrote: >> >> Thank you for mentoring. >> >> I will fix NEWS if you help me with better text. > > > I believe a succinct message would be > > Issue 14409: IDLE now properly executes commands in the Shell window when it > cannot read the normal config files on startup and has to use the built-in > default key bindings. There was previously a bug in one of the defaults. > > >> The bug fixed in that commit is: >> IDLE has 3 configs: user, system default and hardcoded in python code. >> The last one had a bad binding for the Return key. >> Usually this config is never used: the user or system ones override the former. >> But if IDLE cannot open the config files for some reason it switches to >> the hardcoded configs and the user gets a broken IDLE.
>> >> Can anybody suggest a short and descriptive message describing the fix >> well? >> >> On Fri, Mar 30, 2012 at 6:12 AM, Nick Coghlan wrote: >>> >>> On Fri, Mar 30, 2012 at 2:01 AM, andrew.svetlov >>> wrote: >>>> >>>> +- Issue #14409: IDLE doesn't not execute commands from shell, >>>> + error with default keybinding for Return. (Patch by Roger Serwy) >>> >>> >>> The double negative here makes this impossible to understand. Could we >>> please get an updated NEWS entry that explains what actually changed >>> in IDLE to fix this? >>> >>> Perhaps something like "IDLE now always sets the default keybind for >>> Return correctly, ensuring commands can be executed in the IDLE shell >>> window"? (assuming that's what happened). >>> >>> This is important, folks: NEWS entries need to be comprehensible for >>> people that *haven't* read the associated tracker issue. This means >>> that issue titles (which generally describe a problem someone was >>> having) are often inappropriate as NEWS items. NEWS items should be >>> short descriptions that clearly describe *what changed*, perhaps with >>> some additional information to explain a bit about why the change was >>> made.
> > -- > Terry Jan Reedy > > > _______________________________________________ > Python-checkins mailing list > Python-checkins at python.org > http://mail.python.org/mailman/listinfo/python-checkins -- Thanks, Andrew Svetlov From animelovin at gmail.com Sat Mar 31 13:43:28 2012 From: animelovin at gmail.com (Etienne Robillard) Date: Sat, 31 Mar 2012 07:43:28 -0400 Subject: [Python-Dev] Issue 14417: consequences of new dict runtime error In-Reply-To: References: <20120329195825.843352500E9@webabinitio.net> <20120329203103.95A4B2500E9@webabinitio.net> <20120329220755.377052500E9@webabinitio.net> <4F75A510.7080401@gmail.com> <4F75D501.2050400@gmail.com> <4F75DA77.6040305@gmail.com> <4F75F4D2.4050706@gmail.com> <4F75FA16.6040704@stoneleaf.us> <4F7601D4.1080706@gmail.com> <4F7605E0.6070503@gmail.com> <4F763389.7090307@gmail.com> Message-ID: <4F76EDE0.1020002@gmail.com> > The frozendict builtin type was rejected, but we are going to add > types.MappingProxyType: see issue #14386. > types.MappingProxyType(mydict.copy()) is very close to the frozendict > builtin type. > > Victor Thanks, Victor. :) Will this mean the new dict subclass for CPython will not expose dictproxy, in favor of a new types.MappingProxyType type to emulate immutable-like types? What would the consequences then be for code still expecting a non-mutable dict() type? Therefore I guess this ticket provides more than just speculative points for reconsidering such aliased types in cpython. I also found this article quite useful: http://www.cs.toronto.edu/~tijmen/programming/immutableDictionaries.html Yet I might miss how this "new dict" type could potentially induce a RuntimeError unless in python 3.3 a new dict proxy alias is introduced to perform invariant operations in non thread-safe code.
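For readers on Python 3.3 or later (where issue #14386 landed), the behaviour Victor describes can be checked directly; note that a proxy is a read-only *view* of the underlying dict, which is why he proxies a copy to approximate a frozendict:

```python
import types

d = {'host': 'localhost', 'port': 5432}
proxy = types.MappingProxyType(d)

assert proxy['port'] == 5432
try:
    proxy['port'] = 5433        # proxies reject all mutation
except TypeError:
    pass

# The proxy tracks the underlying dict...
d['port'] = 5433
assert proxy['port'] == 5433

# ...so proxying a *copy* is what gives frozendict-like behaviour.
frozen = types.MappingProxyType(dict(d))
d['port'] = 9999
assert frozen['port'] == 5433
```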
Regards, Etienne From regebro at gmail.com Sat Mar 31 14:14:07 2012 From: regebro at gmail.com (Lennart Regebro) Date: Sat, 31 Mar 2012 14:14:07 +0200 Subject: [Python-Dev] datetime module and pytz with dateutil In-Reply-To: References: Message-ID: On Sat, Mar 31, 2012 at 12:28, Michael Foord wrote: > *However*, doesn't Windows have its own system database? Yeah, but it sucks. > The problem is that in order to not include the olsen database, pytz > would need to be modified to use the system database on Windows. Quite a lot too, I'd guess since the databases are completely different in pretty much every way, most importantly in the way that they are using different (and insane) names for the timezones. //Lennart From hannu at krosing.net Sat Mar 31 14:22:01 2012 From: hannu at krosing.net (Hannu Krosing) Date: Sat, 31 Mar 2012 14:22:01 +0200 Subject: [Python-Dev] Using Cython for developing a module to be used from postgreSQL/pl/python Message-ID: <1333196521.28732.23.camel@hvost> Hi Has anyone used Cython for developing a module to be used from postgreSQL/pl/python. Something that calls back to PostgreSQL internals. ------- Hannu Krosing PostgreSQL Unlimited Scalability and Performance Consultant 2ndQuadrant Nordic PG Admin Book: http://www.2ndQuadrant.com/books/ From stefan_ml at behnel.de Sat Mar 31 14:41:56 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sat, 31 Mar 2012 14:41:56 +0200 Subject: [Python-Dev] Using Cython for developing a module to be used from postgreSQL/pl/python In-Reply-To: <1333196521.28732.23.camel@hvost> References: <1333196521.28732.23.camel@hvost> Message-ID: Hannu Krosing, 31.03.2012 14:22: > Has anyone used Cython for developing a module to be used from > postgreSQL/pl/python. > > Something that calls back to PostgreSQL internals. Note that this is the CPython core developers mailing list, for which your question is off-topic. Please ask on the cython-users mailing list (see http://cython.org). 
Stefan From hannu at krosing.net Sat Mar 31 14:56:14 2012 From: hannu at krosing.net (Hannu Krosing) Date: Sat, 31 Mar 2012 14:56:14 +0200 Subject: [Python-Dev] Using Cython for developing a module to be used from postgreSQL/pl/python In-Reply-To: References: <1333196521.28732.23.camel@hvost> Message-ID: <1333198574.28732.25.camel@hvost> On Sat, 2012-03-31 at 14:41 +0200, Stefan Behnel wrote: > Hannu Krosing, 31.03.2012 14:22: > > Has anyone used Cython for developing a module to be used from > > postgreSQL/pl/python. > > > > Something that calls back to PostgreSQL internals. > > Note that this is the CPython core developers mailing list, for which your > question is off-topic. Please ask on the cython-users mailing list (see > http://cython.org). Thanks for the pointer! Can you recommend a similar place for asking the same question about ctypes? That is, using ctypes for calling back to the "outer" C application which embeds the python interpreter. > Stefan > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/hannu%40krosing.net From kristjan at ccpgames.com Sat Mar 31 13:46:26 2012 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Sat, 31 Mar 2012 11:46:26 +0000 Subject: [Python-Dev] Issue13210 : Support Visual Studio 2010 Message-ID: Hi, Does anyone object if I submit my patches sxs.patch and errnomodule.patch? These allow python to work correctly when built with vs2010. There is also the PCBuild10.patch, but that can wait. I'm sure a number of people are regularly building python using vs2010 using their own modified solutions, but having the sources in a constant state of patching is a nuisance so I think we ought to at least make the source code work with it, regardless of having a proper solution for it.
K -------------- next part -------------- An HTML attachment was scrubbed... URL: From kristjan at ccpgames.com Sat Mar 31 15:32:39 2012 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Sat, 31 Mar 2012 13:32:39 +0000 Subject: [Python-Dev] Issue #14310: Socket duplication for windows Message-ID: Hi there. Antoine Pitrou requested that this topic (http://bugs.python.org/issue14310) be discussed by python-dev before moving further. I'm adding a Windows-only API to "share" sockets between processes to _socket in the canonical way. Multiprocessing already has code for this (using unsupported methods) but a subsequent change could then simplify that. thanks, Kristján -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben+python at benfinney.id.au Sat Mar 31 15:44:13 2012 From: ben+python at benfinney.id.au (Ben Finney) Date: Sun, 01 Apr 2012 00:44:13 +1100 Subject: [Python-Dev] Using Cython for developing a module to be used from postgreSQL/pl/python References: <1333196521.28732.23.camel@hvost> Message-ID: <87vclkdgwi.fsf@benfinney.id.au> Hannu Krosing writes: > Has anyone used Cython for developing a module to be used from > postgreSQL/pl/python. > > Something that calls back to PostgreSQL internals. Are you aware that PostgreSQL has long provided Python as a language for writing stored procedures in the database? Does that meet your needs? -- \ "You've got to think about big things while you're doing small | `\ things, so that all the small things go in the right | _o__) direction." --Alvin Toffler |
?Alvin Toffler | Ben Finney From hannu at krosing.net Sat Mar 31 16:04:16 2012 From: hannu at krosing.net (Hannu Krosing) Date: Sat, 31 Mar 2012 16:04:16 +0200 Subject: [Python-Dev] Using Cython for developing a module to be used from postgreSQL/pl/python In-Reply-To: <87vclkdgwi.fsf@benfinney.id.au> References: <1333196521.28732.23.camel@hvost> <87vclkdgwi.fsf@benfinney.id.au> Message-ID: <1333202656.28732.35.camel@hvost> On Sun, 2012-04-01 at 00:44 +1100, Ben Finney wrote: > Hannu Krosing writes: > > > Has anyone used Cython for developing a module to be used from > > postgreSQL/pl/python. > > > > Something that calls back to PostgreSQL internals. > > Are you aware that PostgreSQL has long provided Python as a language for > writing stored procedures in the database? > > > > Does that meet your needs? Sure, I have even contributed code to it ;) But what i want is some way to call _back_ into PostgreSQL to use its internal functions from a module imported from pl/python. I tried ctypes module and got a segfault when i used the following create or replace function callback_to_postgres(rn_text text) returns text language plpythonu as $$ from ctypes import * import struct pg = cdll.LoadLibrary('/usr/lib/postgresql/9.1/bin/postmaster') pg.pq_flush() return rn_text $$; select send_raw_notice('do you see me?'); I determined that the call to pg.pq_flush() was the one crashing the backend, so I assume that cdll.LoadLibrary did not get the already running backend, but loaded another copy of it as library. Now I'm looking for a simplest way to do some C-wrapping. I have used Cyuthon for wrapping simple library calls and I really don'r think there would be hard problems doing this inside the postgreSQL extension build framework. 
I was just hoping that somebody had already taken care of all the nitty-gritty details of setting this up. ------- Hannu Krosing PostgreSQL Unlimited Scalability and Performance Consultant 2ndQuadrant Nordic PG Admin Book: http://www.2ndQuadrant.com/books/ From brian at python.org Sat Mar 31 16:53:42 2012 From: brian at python.org (Brian Curtin) Date: Sat, 31 Mar 2012 09:53:42 -0500 Subject: [Python-Dev] Issue13210 : Support Visual Studio 2010 In-Reply-To: References: Message-ID: 2012/3/31 Kristján Valur Jónsson : > Hi, > > Does anyone object if I submit my patches sxs.patch and errnomodule.patch? > > These allow python to work correctly when built with vs2010. > > > > There is also the PCBuild10.patch, but that can wait. I'm sure a number of > people are regularly building python using vs2010 using their own modified > solutions, but having the sources in a constant state of patching is a > nuisance so I think we ought to at least make the source code work with it, > regardless of having a proper solution for it. Go ahead. As listed on the issue, there is a repo where building from VS2010 "works" but the port is not complete as not all tests are passing. I will complete it. From rdmurray at bitdance.com Sat Mar 31 17:25:49 2012 From: rdmurray at bitdance.com (R.
David Murray) Date: Sat, 31 Mar 2012 11:25:49 -0400 Subject: [Python-Dev] Issue 14417: consequences of new dict runtime error In-Reply-To: <4F76EDE0.1020002@gmail.com> References: <20120329195825.843352500E9@webabinitio.net> <20120329203103.95A4B2500E9@webabinitio.net> <20120329220755.377052500E9@webabinitio.net> <4F75A510.7080401@gmail.com> <4F75D501.2050400@gmail.com> <4F75DA77.6040305@gmail.com> <4F75F4D2.4050706@gmail.com> <4F75FA16.6040704@stoneleaf.us> <4F7601D4.1080706@gmail.com> <4F7605E0.6070503@gmail.com> <4F763389.7090307@gmail.com> <4F76EDE0.1020002@gmail.com> Message-ID: <20120331152550.697232500E9@webabinitio.net> On Sat, 31 Mar 2012 07:43:28 -0400, Etienne Robillard wrote: > Yet I might miss how this "new dict" type could potentially induce a > RuntimeError unless in python 3.3 a new dict proxy alias is introduced > to perform invariant operations in non thread-safe code. Etienne, again: issue 14417 has *nothing* to do with immutable dicts. Please carefully read over issue 14205 in order to understand what we are talking about so that you can contribute to the discussion. --David From guido at python.org Sat Mar 31 18:09:21 2012 From: guido at python.org (Guido van Rossum) Date: Sat, 31 Mar 2012 09:09:21 -0700 Subject: [Python-Dev] Issue 14417: consequences of new dict runtime error In-Reply-To: <20120329204815.D7AC32500E9@webabinitio.net> References: <20120329195825.843352500E9@webabinitio.net> <20120329203103.95A4B2500E9@webabinitio.net> <20120329204815.D7AC32500E9@webabinitio.net> Message-ID: On Thu, Mar 29, 2012 at 1:48 PM, R. David Murray wrote: > On Thu, 29 Mar 2012 16:31:03 -0400, "R. David Murray" wrote: >> On Thu, 29 Mar 2012 13:09:17 -0700, Guido van Rossum wrote: >> > My original assessment was that this only affects dicts whose keys >> > have a user-implemented __hash__ or __eq__ implementation, and that >> > the number of apps that use this *and* assume the threadsafe property >> > would be pretty small. This is just intuition, I don't have hard >> > facts. But I do want to stress that not all dict lookups automatically >> > become thread-unsafe, only those that need to run user code as part of >> > the key lookup. >> >> You are probably correct, but the thing is that one still has to do the >> code audit to be sure...and then make sure that no one later introduces >> such an object type as a dict key. > > I just did a quick grep on our project. We are only defining __eq__ > and __hash__ in a couple of places, but both are objects that could easily get > used as dict keys (there is a good chance that's *why* those methods are > defined) accessed by more than one thread. I haven't done the audit to > find out :)
This is just intuition, I don't have hard >> > facts. But I do want to stress that not all dict lookups automatically >> > become thread-unsafe, only those that need to run user code as part of >> > the key lookup. >> >> You are probably correct, but the thing is that one still has to do the >> code audit to be sure...and then make sure that no one later introduces >> such an object type as a dict key. > > I just did a quick grep on our project. We are only defining __eq__ > and __hash__ a couple places, but both are objects that could easily get > used as dict keys (there is a good chance that's *why* those methods are > defined) accessed by more than one thread. I haven't done the audit to > find out :) Of course, that doesn't mean they're likely to be used as keys in a dict that is read and written concurrently by multiple threads. > The libraries we depend on have many more definitions of __eq__ and > __hash__, and we'd have to check them too. (Including SQLAlchemy, > and I wouldn't want that job.) > > So our intuition that this is not common may be wrong. But how often does one share a dictionary between threads with the understanding that multiple threads can read and write it? Here's a different puzzle. Has anyone written a demo yet that provokes this RuntimeError, without cheating? (Cheating would be to mutate the dict from *inside* the __eq__ or __hash__ method.) If you're serious about revisiting this, I'd like to see at least one example of a program that is broken by the change. Otherwise I think the status quo in the 3.3 repo should prevail -- I don't want to be stymied by superstition.
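Guido's distinction is worth making concrete: a dict lookup only re-enters the interpreter when the key's __hash__ or __eq__ is written in Python, and that is the only window in which another thread's mutation can matter. A minimal sketch (the `Key` class here is invented purely for illustration; the behavior shown is standard CPython dict semantics):

```python
class Key:
    """A key whose __hash__/__eq__ are user code that runs mid-lookup."""
    calls = []  # shared log of the user code the dict machinery invokes

    def __init__(self, value):
        self.value = value

    def __hash__(self):
        Key.calls.append(("hash", self.value))
        return hash(self.value)

    def __eq__(self, other):
        Key.calls.append(("eq", self.value))
        return isinstance(other, Key) and self.value == other.value

d = {Key("spam"): 1}
Key.calls.clear()            # ignore the calls made while inserting
result = d[Key("spam")]      # look up with a *different* but equal key
assert result == 1
assert ("hash", "spam") in Key.calls           # the probe key's hash ran
assert any(c[0] == "eq" for c in Key.calls)    # an __eq__ comparison ran
```

A lookup with a plain str or int key never runs user code, which is why Guido expects only a small set of programs to be affected.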
-- --Guido van Rossum (python.org/~guido) From ncoghlan at gmail.com Sat Mar 31 18:26:12 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 1 Apr 2012 02:26:12 +1000 Subject: [Python-Dev] [Python-checkins] cpython (3.2): Issue #14409: IDLE doesn't not execute commands from shell with default In-Reply-To: References: Message-ID: On Sat, Mar 31, 2012 at 9:21 PM, Andrew Svetlov wrote: > Updated NEWS as Terry Reedy recommended. > Thank you, Terry. Thanks to you both :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Sat Mar 31 19:03:13 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 1 Apr 2012 03:03:13 +1000 Subject: [Python-Dev] Issue 14417: consequences of new dict runtime error In-Reply-To: References: <20120329195825.843352500E9@webabinitio.net> <20120329203103.95A4B2500E9@webabinitio.net> <20120329204815.D7AC32500E9@webabinitio.net> Message-ID: On Sun, Apr 1, 2012 at 2:09 AM, Guido van Rossum wrote: > Here's a different puzzle. Has anyone written a demo yet that provokes > this RuntimeError, without cheating? (Cheating would be to mutate the > dict from *inside* the __eq__ or __hash__ method.) If you're serious > about revisiting this, I'd like to see at least one example of a > program that is broken by the change. Otherwise I think the status quo > in the 3.3 repo should prevail -- I don't want to be stymied by > superstition. I attached an attempt to *deliberately* break the new behaviour to the tracker issue. It isn't actually breaking for me, so I'd like other folks to look at it to see if I missed something in my implementation, or if it's just genuinely that hard to induce the necessary bad timing of a preemptive thread switch. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com |
Brisbane, Australia From martin at v.loewis.de Sat Mar 31 19:42:22 2012 From: martin at v.loewis.de (martin at v.loewis.de) Date: Sat, 31 Mar 2012 19:42:22 +0200 Subject: [Python-Dev] Issue13210 : Support Visual Studio 2010 In-Reply-To: References: Message-ID: <20120331194222.Horde.-YjNAaGZi1VPd0H_deEnonA@webmail.df.eu> > Does anyone object if I submit my patches sxs.patch and errnomodule.patch? > > These allow python to work correctly when built with vs2010. Please see my review: "allow to work correctly" is not a good explanation of what it does, and why it does that. As it is highly counter-intuitive, it needs at least justification. I'm trying to investigate whether it is actually correct (and that it works for CCP is not sufficient proof that it is correct). Also, please clarify what branches you would apply this to. 3.3 is certainly fine; for any other branches, I'd like to point out that this is not a bug fix. > There is also the PCBuild10.patch, but that can wait. Indeed, it shouldn't be added at all. Instead, the PCbuild tree should become a VS2010 (or VS2012) tree ultimately. Regards, Martin From martin at v.loewis.de Sat Mar 31 19:45:11 2012 From: martin at v.loewis.de (martin at v.loewis.de) Date: Sat, 31 Mar 2012 19:45:11 +0200 Subject: [Python-Dev] Using Cython for developing a module to be used from postgreSQL/pl/python In-Reply-To: <1333198574.28732.25.camel@hvost> References: <1333196521.28732.23.camel@hvost> <1333198574.28732.25.camel@hvost> Message-ID: <20120331194511.Horde.GqAhbqGZi1VPd0KndtBXrlA@webmail.df.eu> > Can you recommend a similar place for asking the same question about > ctypes ? > > That is using ctypes for calling back to "outer" c application which > embeds the python interpreter. ISTM that Postgres lists should be the best place for this kind of question. Alternatively, try python-list or db-sig. Regards, Martin From rdmurray at bitdance.com Sat Mar 31 19:45:33 2012 From: rdmurray at bitdance.com (R.
David Murray) Date: Sat, 31 Mar 2012 13:45:33 -0400 Subject: [Python-Dev] Issue 14417: consequences of new dict runtime error In-Reply-To: References: <20120329195825.843352500E9@webabinitio.net> <20120329203103.95A4B2500E9@webabinitio.net> <20120329204815.D7AC32500E9@webabinitio.net> Message-ID: <20120331174533.E0E612500E9@webabinitio.net> On Sun, 01 Apr 2012 03:03:13 +1000, Nick Coghlan wrote: > On Sun, Apr 1, 2012 at 2:09 AM, Guido van Rossum wrote: > > Here's a different puzzle. Has anyone written a demo yet that provokes > > this RuntimeError, without cheating? (Cheating would be to mutate the > > dict from *inside* the __eq__ or __hash__ method.) If you're serious > > about revisiting this, I'd like to see at least one example of a > > program that is broken by the change. Otherwise I think the status quo > > in the 3.3 repo should prevail -- I don't want to be stymied by > > superstition. > > I attached an attempt to *deliberately* break the new behaviour to the > tracker issue. It isn't actually breaking for me, so I'd like other > folks to look at it to see if I missed something in my implementation, > or if it's just genuinely that hard to induce the necessary bad timing > of a preemptive thread switch. Thanks, Nick. It looks reasonable to me, but I've only given it a quick look so far (I'll try to think about it more deeply later today). If it is indeed hard to provoke, then I'm fine with leaving the RuntimeError as a signal that the application needs to add some locking. My concern was that we'd have working production code that would start breaking. If it takes a *lot* of threads or a *lot* of mutation to trigger it, then it is going to be a lot less likely to happen anyway, since such programs are going to be much more careful about locking anyway.
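Nick's actual script lives on the tracker issue (the attachment is not reproduced in this archive). As a rough sketch of the shape such a stress test takes — a shared dict, keys whose pure-Python __hash__/__eq__ force collisions, several writer threads, and periodic resizes — the following might do; `SlowKey` and `hammer` are invented names, and no assumption is made about whether the RuntimeError actually fires on any given interpreter:

```python
import threading

class SlowKey:
    """Key whose comparison is Python code, widening the race window."""
    def __init__(self, n):
        self.n = n
    def __hash__(self):
        return self.n % 8          # deliberately force hash collisions
    def __eq__(self, other):
        # a pure-Python comparison is a point where the interpreter
        # may preemptively switch to another thread
        return isinstance(other, SlowKey) and self.n == other.n

def hammer(shared, results, idx, rounds=2000):
    errors = 0
    for i in range(rounds):
        try:
            shared[SlowKey(i)] = i
            shared.get(SlowKey(i - 1))
        except RuntimeError:       # the 3.3 "dict mutated during lookup" signal
            errors += 1
        if i % 64 == 0:
            shared.clear()         # shrinking/resizing is what the race needs
    results[idx] = errors

shared, results = {}, [None] * 4
threads = [threading.Thread(target=hammer, args=(shared, results, i))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("RuntimeErrors seen per thread:", results)
```

On most runs the error counts stay at zero, which is consistent with Nick's report that the bad timing is genuinely hard to induce.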
--David From tjreedy at udel.edu Sat Mar 31 21:20:12 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Sat, 31 Mar 2012 15:20:12 -0400 Subject: [Python-Dev] datetime module and pytz with dateutil In-Reply-To: References: Message-ID: On 3/31/2012 6:28 AM, Michael Foord wrote: > > On 31 Mar 2012, at 07:32, Lennart Regebro wrote: > >> On Fri, Mar 30, 2012 at 12:38, Serhiy >> Storchaka wrote: >>> I don't understand why Python may not include the pytz. The Olson >>> tz database is not part of pytz. >> >> Yes it is. >> >>> Python can depend on a system tz database >> >> That works on Unix, but not on Windows, where there is no Olson >> database. > > *However*, doesn't Windows have its own system database? The problem > is that in order to not include the Olson database, pytz (which would > be a very useful addition to the standard library) would need to be > modified to use the system database on Windows. This is my > (potentially flawed) understanding, anyway. The Windows installer, by default, installs tcl/tk while Python on other systems uses the system install. Why can't we do the same for the Olson database? As for updates: The correct behavior for timezone functions is to use the most recent definitions of those functions. The date is an input parameter for those functions. Projection of correct behavior for today's date into the future is provisional and subject to correction. This is especially true of anything involving Daylight Stupid Time. (As you can tell, I would have it go away.) Testing the specific output of such functions with future dates is broken. So I think we should define correct behavior of pytz as use with the latest Olson database. Use with an older version would then be a 'bug' subject to being fixed. On Windows, we could update as needed with every bugfix release. (And give instructions for user update.) On other systems, users can do whatever is appropriate. Or perhaps we could add an 'update tz database' function to the module.
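Python eventually grew essentially the arrangement discussed here: the zoneinfo module (PEP 615, added in Python 3.9, long after this thread) reads the system Olson database on Unix and falls back to a separately updatable tzdata package on Windows. It also makes Terry's point concrete — the date is an input to the timezone rules, so the same zone yields different offsets at different dates:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # PEP 615, Python 3.9+; needs a tz database

tz = ZoneInfo("America/New_York")

# The offset is a function of the date: the same zone answers
# differently in winter (standard time) and summer (daylight time).
winter = datetime(2012, 1, 15, 12, 0, tzinfo=tz)
summer = datetime(2012, 7, 15, 12, 0, tzinfo=tz)

assert winter.utcoffset() == timedelta(hours=-5)  # EST
assert summer.utcoffset() == timedelta(hours=-4)  # EDT
```

Because these are historical dates, the answers are stable across database updates; it is only projections into the future that are provisional, as Terry notes.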
Similarly, our unicode implementation is defined as using the unicode database as of a few weeks before each feature release. Updates for bugfix releases are not done because changes to that database are a few additions each time rather than edits. -- As a side note: I think the same trick of defining correct behavior dynamically rather than statically could be applied to other modules, such as mimetypes and internet protocol modules. This seems to be part of the intent of the idea of having stdlib feature releases every 6 months or so. -- Terry Jan Reedy From andrew.svetlov at gmail.com Sat Mar 31 21:36:16 2012 From: andrew.svetlov at gmail.com (Andrew Svetlov) Date: Sat, 31 Mar 2012 22:36:16 +0300 Subject: [Python-Dev] datetime module and pytz with dateutil In-Reply-To: References: Message-ID: On Sat, Mar 31, 2012 at 10:20 PM, Terry Reedy wrote: > So I think we should define correct behavior of pytz as use with the latest > Olson database. Use with an older version would then be a 'bug' subject to > being fixed. On Windows, we could update as needed with every bugfix > release. (And give instructions for user update.) On other systems, users > can do whatever is appropriate. Or perhaps we could add an 'update tz > database' function to the module. > Please, don't add an updating function to the standard lib. It's a nightmare for package maintainers. In general I'm +0 for adding tz database to stdlib. -- Thanks, Andrew Svetlov From guido at python.org Sat Mar 31 23:26:25 2012 From: guido at python.org (Guido van Rossum) Date: Sat, 31 Mar 2012 14:26:25 -0700 Subject: [Python-Dev] Issue 14417: consequences of new dict runtime error In-Reply-To: <20120331174533.E0E612500E9@webabinitio.net> References: <20120329195825.843352500E9@webabinitio.net> <20120329203103.95A4B2500E9@webabinitio.net> <20120329204815.D7AC32500E9@webabinitio.net> <20120331174533.E0E612500E9@webabinitio.net> Message-ID: Try reducing sys.setcheckinterval().
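Guido's suggestion: sys.setcheckinterval() counted bytecode instructions between potential thread switches; it was deprecated in 3.2 in favor of the time-based sys.setswitchinterval() and later removed. Shrinking the interval makes preemptive switches far more frequent, which widens the race window a stress test is trying to hit. A sketch (the stress-test body itself is elided):

```python
import sys

default = sys.getswitchinterval()   # typically 0.005 seconds
sys.setswitchinterval(1e-6)         # request a thread switch roughly every microsecond
try:
    pass  # run the dict stress test here, with switching cranked up
finally:
    sys.setswitchinterval(default)  # restore so the rest of the process is unaffected
```

Restoring the default in a finally block matters: a microsecond switch interval slows everything down, so it should only be in force while the test runs.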
--Guido van Rossum (sent from Android phone) On Mar 31, 2012 10:45 AM, "R. David Murray" wrote: > On Sun, 01 Apr 2012 03:03:13 +1000, Nick Coghlan > wrote: > > On Sun, Apr 1, 2012 at 2:09 AM, Guido van Rossum > wrote: > > > Here's a different puzzle. Has anyone written a demo yet that provokes > > > this RuntimeError, without cheating? (Cheating would be to mutate the > > > dict from *inside* the __eq__ or __hash__ method.) If you're serious > > > about revisiting this, I'd like to see at least one example of a > > > program that is broken by the change. Otherwise I think the status quo > > > in the 3.3 repo should prevail -- I don't want to be stymied by > > > superstition. > > > > I attached an attempt to *deliberately* break the new behaviour to the > > tracker issue. It isn't actually breaking for me, so I'd like other > > folks to look at it to see if I missed something in my implementation, > > or if it's just genuinely that hard to induce the necessary bad timing > > of a preemptive thread switch. > > Thanks, Nick. It looks reasonable to me, but I've only given it a quick > look so far (I'll try to think about it more deeply later today). > > If it is indeed hard to provoke, then I'm fine with leaving the > RuntimeError as a signal that the application needs to add some locking. > My concern was that we'd have working production code that would start > breaking. If it takes a *lot* of threads or a *lot* of mutation to > trigger it, then it is going to be a lot less likely to happen anyway, > since such programs are going to be much more careful about locking > anyway. > > --David > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mcepl at redhat.com Fri Mar 30 17:00:14 2012 From: mcepl at redhat.com (=?UTF-8?B?TWF0xJtqIENlcGw=?=) Date: Fri, 30 Mar 2012 17:00:14 +0200 Subject: [Python-Dev] .{git,bzr}ignore in cpython HG repo Message-ID: <4F75CA7E.7030204@redhat.com> Why does the HG cpython repo contain .{bzr,git}ignore at all?
IMHO, all .*ignore files should be strictly repository dependent and they should not be mixed together. It is even worse that (understandably) .{bzr,git}ignore are apparently poorly maintained, so in order to get an equivalent of .hgignore in .gitignore, one has to apply the attached patch. Best, Matěj -- http://www.ceplovi.cz/matej/, Jabber: mceplceplovi.cz GPG Finger: 89EF 4BC6 288A BF43 1BAB 25C3 E09F EF25 D964 84AC We understand our competition isn't with Caldera or SuSE--our competition is with Microsoft. -- Bob Young of Red Hat http://www.linuxjournal.com/article/3553 -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: _gitignore.patch URL: